Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

May 4, 2014

Thanks for Ungluing

Filed under: Books,eBooks,Publishing — Patrick Durusau @ 6:59 pm

Thanks-for-Ungluing launches!

From the post:

Great books deserve to be read by all of us, and we ought to be supporting the people who create these books. “Thanks for Ungluing” gives readers, authors, libraries and publishers a new way to build, sustain, and nourish the books we love.

“Thanks for Ungluing” books are Creative Commons licensed and free to download. You don’t need to register or anything. But when you download, the creators can ask for your support. You can pay what you want. You can just scroll down and download the book. But when that book has become your friend, your advisor, your confidante, you’ll probably want to show your support and tell all your friends.

We have some amazing creators participating in this launch.

An attempt to address the problem of open access to published materials while at the same time compensating authors for their efforts.

There is some recent material, along with old standbys like The Communist Manifesto by Karl Marx and Friedrich Engels. That is good, but having more recent works such as A Theology of Liberation by Gustavo Gutiérrez would be better.

If you are thinking about writing a book on CS topics, please think about “Thanks for Ungluing” as an option.

I first saw this in a tweet by Tim O’Reilly.

“Credibility” As “Google Killer”?

Filed under: Facebook,Relevance,Search Algorithms,Search Engines,Searching — Patrick Durusau @ 6:26 pm

Nancy Baym tweets: “Nice article on flaws of ‘it’s not our fault, it’s the algorithm’ logic from Facebook with quotes from @TarletonG” pointing to: Facebook draws fire on ‘related articles’ push.

From the post:

A surprise awaited Facebook users who recently clicked on a link to read a story about Michelle Obama’s encounter with a 10-year-old girl whose father was jobless.

Facebook responded to the click by offering what it called “related articles.” These included one that alleged a Secret Service officer had found the president and his wife having “S*X in Oval Office,” and another that said “Barack has lost all control of Michelle” and was considering divorce.

A Facebook spokeswoman did not try to defend the content, much of which was clearly false, but instead said there was a simple explanation for why such stories are pushed on readers. In a word: algorithms.

The stories, in other words, apparently are selected by Facebook based on mathematical calculations that rely on word association and the popularity of an article. No effort is made to vet or verify the content.

Facebook’s explanation, however, is drawing sharp criticism from experts who said the company should immediately suspend its practice of pushing so-called related articles to unsuspecting users unless it can come up with a system to ensure that they are credible. (emphasis added)

Just imagine the hue and cry had that last line read:

Imaginary Quote Google’s explanation of search results, however, is drawing sharp criticism from experts who said the company should immediately suspend its practice of pushing so-called related articles to unsuspecting users unless it can come up with a system to ensure that they are credible. End Imaginary Quote

Is demanding “credibility” of search results the long-sought “Google Killer”?

“Credibility” is closely related to the “search” problem but I think it should be treated separately from search.

In part because the “credibility” question can require multiple searches: searches on the author of the search result content, searches for reviews and comments on that content, searches of other sources of data bearing on it, and then a collation of all that additional material into a credibility judgement. The procedure isn’t always that elaborate, but the main point is that even beginning to answer a credibility question requires additional searching and evaluation of content.

Not to mention that why the information is being sought has a bearing on credibility. If I want to find examples of nutty things said about President Obama to cite, then finding the cases mentioned above is not only relevant (the search question) but also “credible” in the sense that Facebook did not make them up. They are published nutty statements about the current President.

What if a user wanted to search for “coffee and bagels”? The top hit on one popular search engine today is Coffee Meets Bagel: Free Online Dating Sites, along with numerous other links related to that first link. Was this relevant to the search? No, but search results aren’t always predictable. They are relevant to someone’s search using “coffee and bagels.”

It is the responsibility of every reader to decide for themselves what is relevant, credible, useful, etc. in terms of content, whether it is hard copy or digital.

Any other solution takes us to Plato’s Republic, which was great to read about but would not be a place I would want to live.

What I Said Is Not What You Heard

Filed under: Communication,Language — Patrick Durusau @ 1:22 pm

Another example of where semantic impedance can impair communication, not to mention public policy decisions:

[Image: a table contrasting terms as scientists use them with what the public hears, e.g. “bias” heard as “distortion, political motive,” with “offset from an observation” as the suggested replacement]

Those are just a few terms that are used in public statements from scientists.

I can hardly imagine the disconnect between lawyers and the public. Or economists and the public.

To say nothing of computer science in general and the public.

I’m not sold on the suggested fix for “bias” being heard as “distortion, political motive”: replacing it with “offset from an observation.”

Literally true, but I’m not sure omitting the reason for the offset is all that helpful.

Something more along the lines of: “test A misses the true value B by C, so we (subtract/add) C to A to get a more correct value.”

A lot more words but clearer.
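As a toy illustration of that phrasing, with the numbers invented for the example:

```python
# Toy example: a hypothetical instrument that reads 0.3 units high.
measured = 21.8   # what test A reports
offset = 0.3      # test A misses the true value B by this amount
corrected = measured - offset  # subtract the offset for a more correct value

print(f"measured {measured}, offset {offset}, corrected {corrected}")
# -> measured 21.8, offset 0.3, corrected 21.5
```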

The image is from: Communicating the science of climate change by Richard C. J. Somerville and Susan Joy Hassol. A very good article on the perils of trying to communicate with the general public about climate change.

But it isn’t just the general public that has difficulty understanding scientists. Scientists have difficulty understanding other scientists, particularly if the scientists in question are from different domains or even different fields within a domain.

All of which has to make you wonder: If human beings, including scientists, fail to understand each other on a regular basis, who is watching for misunderstandings between computers?

I first saw this in a tweet by Austin Frakt.

PS: Pointers to research on words that fail to communicate greatly appreciated.

Lock and Load Hadoop

Filed under: Hadoop,MapReduce — Patrick Durusau @ 10:35 am

How to Load Data for Hadoop into the Hortonworks Sandbox

Summary:

This tutorial describes how to load data into the Hortonworks sandbox.

The Hortonworks sandbox is a fully contained Hortonworks Data Platform (HDP) environment. The sandbox includes the core Hadoop components (HDFS and MapReduce), as well as all the tools needed for data ingestion and processing. You can access and analyze sandbox data with many Business Intelligence (BI) applications.

In this tutorial, we will load and review data for a fictitious web retail store in what has become an established use case for Hadoop: deriving insights from large data sources such as web logs. By combining web logs with more traditional customer data, we can better understand our customers, and also understand how to optimize future promotions and advertising.

“Big data” applications are fun to read about but aren’t really interesting until your data has been loaded.

If you don’t have the Hortonworks Sandbox you need to get it: Hortonworks Sandbox.
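If you would rather script the load than click through the sandbox UI, here is a minimal sketch. The file name and HDFS path are my own inventions, and I am assuming the sandbox’s hadoop CLI is on your PATH; the tutorial itself walks through the file-upload interface.

```python
import subprocess

# Hypothetical local file and HDFS target -- adjust to match your sandbox.
local_file = "RetailWebLogs.tsv"
hdfs_dir = "/user/hue/weblogs"

# Create the target directory, then copy the file into HDFS.
subprocess.run(["hadoop", "fs", "-mkdir", "-p", hdfs_dir], check=True)
subprocess.run(["hadoop", "fs", "-put", local_file, hdfs_dir], check=True)

# Sanity check: list the directory to confirm the upload.
subprocess.run(["hadoop", "fs", "-ls", hdfs_dir], check=True)
```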

May 3, 2014

Clojure Cookbook – Revisited (Review)

Filed under: Clojure — Patrick Durusau @ 7:32 pm

Konrad Garus has published a review of the Clojure Cookbook that reads in part:

In my opinion the book is very uneven. It’s very detailed about the primitives and basic collections, but at the same time it doesn’t do justice to state management (atoms, refs, agents) or concurrency. Yet it has two chapters on building a red-black tree. It is very detailed about Datomic, but barely scratches the surface of much more common tools like core.async, core.logic or core.match. It does not include anything about graphics or ClojureScript.

Comments? Suggestions?

RegExr

Filed under: Regex,Regexes — Patrick Durusau @ 7:18 pm

RegExr

From the webpage:

RegExr is an online tool to learn, build, & test Regular Expressions (RegEx / RegExp).

  • Results update in real-time as you type.
  • Roll over a match or expression for details.
  • Save & share expressions with others.
  • Explore the Library for help & examples.
  • Undo & Redo with Ctrl-Z / Y.
  • Search for & rate Community patterns.

For fast text processing, very little can touch regexes and Unix command line utilities.
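Once a pattern behaves in RegExr, it will work much the same in any regex engine. A quick sketch in Python, with the log line and pattern invented for the example:

```python
import re

# Toy Apache-style log line (invented for the example).
line = '127.0.0.1 - - [04/May/2014:06:59:01] "GET /index.html HTTP/1.1" 200 2326'

# Named groups make the match self-documenting.
pattern = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+)[^"]*" (?P<status>\d{3})')

m = pattern.search(line)
if m:
    print(m.group("method"), m.group("path"), m.group("status"))
# -> GET /index.html 200
```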

I first saw this at Nathan Yau’s Learn regular expressions with RegExr.

OCLC releases WorldCat Works as linked data

Filed under: Linked Data,OCLC,WorldCat — Patrick Durusau @ 7:08 pm

OCLC releases WorldCat Works as linked data

From the press release:

OCLC has made 197 million bibliographic work descriptions—WorldCat Works—available as linked data, a format native to the Web that will improve discovery of library collections through a variety of popular sites and Web services.

Release of this data marks another step toward providing interconnected linked data views of WorldCat. By making this linked data available, library collections can be exposed to the wider Web community, integrating these collections and making them more easily discoverable through websites and services that library users visit daily, such as Google, Wikipedia and social networks.

“Bibliographic data stored in traditional record formats has reached its limits of efficiency and utility,” said Richard Wallis, OCLC Technology Evangelist. “New technologies, influenced by the Web, now enable us to move toward managing WorldCat data as entities—such as ‘Works,’ ‘People,’ ‘Places’ and more—as part of the global Web of data.”

OCLC has created authoritative work descriptions for bibliographic resources found in WorldCat, bringing together multiple manifestations of a work into one logical authoritative entity. The release of “WorldCat Works” is the first step in providing linked data views of rich WorldCat entities. Other WorldCat descriptive entities will be created and released over time.
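Linked data like this is typically retrieved by content negotiation on the entity URI. A minimal sketch; the work URI below is hypothetical and the requested media type is just a common linked-data serialization, so check OCLC’s documentation for the actual URI patterns and formats:

```python
from urllib.request import Request, urlopen

# Hypothetical WorldCat Works URI -- substitute a real work identifier.
uri = "http://worldcat.org/entity/work/id/12345"

# Ask for an RDF serialization rather than the HTML landing page.
req = Request(uri, headers={"Accept": "application/ld+json"})
with urlopen(req) as resp:
    print(resp.read(500).decode("utf-8", errors="replace"))
```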

If you are looking for a smallish set of entity identifiers, this is a good start on bibliographic materials.

I say smallish because as of 2009, there were 672 million assigned phone numbers in the United States (Numbering Resource Utilization in the United States).

Each of those phone numbers has the potential to identify some subject, the assigned number itself if nothing else, although other uses suggest themselves.

Facts vs. Expert Opinion

Filed under: Measurement,Medical Informatics — Patrick Durusau @ 4:31 pm

In a recent story about randomized medical trials:

“I should leave the final word to Archie Cochrane. In his trial of coronary care units, run in the teeth of vehement opposition, early results suggested that home care was at the time safer than hospital care. Mischievously, Cochrane swapped the results round, giving the cardiologists the (false) message that their hospitals were best all along.

“They were vociferous in their abuse,” he later wrote, and demanded that the “unethical” trial stop immediately. He then revealed the truth and challenged the cardiologists to close down their own hospital units without delay. “There was dead silence.”

Followed by Harford’s closing line: “The world often surprises even the experts. When considering an intervention that might profoundly affect people’s lives, if there is one thing more unethical than running a randomised trial, it’s not running the trial.”

One of the persistent dangers of randomized trials is that the results can contradict what is “known” to be true by experts.

Another reason for user rather than c-suite “testing” of product interfaces, assuming the c-suite types are willing to hear “bad” news.

And a good illustration that claims of “ethics” can be hiding less pure concerns.

I first saw this in A brilliant anecdote on how scientists react to science against their interests by Chris Blattman, which led me to: Weekly Links May 2: Mobile phones, working with messy data, funding, working with children, and more… and thence to the original post: The random risks of randomised trials by Tim Harford.

Judgmental Maps

Filed under: Humor,Mapping,Maps — Patrick Durusau @ 3:59 pm

Judgmental Maps

Imagine a map of a city with the neighborhood names removed but the interstate highways and a few other geographic features remaining.

Now further imagine that you have annotated that map with new names to represent the neighborhoods and activities in that city.

I tried to pick my favorite but in these sensitive times, someone would be offended by any choice I made.

You can create and submit maps in addition to viewing ones already posted.

I first saw this at Judgmental Maps on Chart Porn.

OpenPolicy [Patent on Paragraphs?]

Filed under: Government,Patents,Searching — Patrick Durusau @ 3:29 pm

OpenPolicy: Knowledge Makes Document Searches Smarter

From the webpage:

The government has a wealth of policy knowledge derived from specialists in myriad fields. What it lacked, until now, was a flexible method for searching the content of thousands of policies using the knowledge of those experts. LMI has developed a tool—OpenPolicy™—to provide agencies with the ability to capture the knowledge of their experts and use it to intuitively search their massive storehouse of policy at hyper speeds.

Traditional search engines produce document-level results. There’s no simple way to search document contents and pinpoint appropriate paragraphs. OpenPolicy solves this problem. The search tool, running on a semantic-web database platform, LMI SME-developed ontologies, and web-based computing power, can currently host tens of thousands of pages of electronic documents. Using domain-specific vocabularies (ontologies), the tool also suggests possible search terms and phrases to help users refine their search and obtain better results.

For agencies wanting to use OpenPolicy, LMI initially builds a powerful computing environment to host the knowledgebase. It then loads all of an agency’s documents—policies, regulations, meeting notes, trouble tickets, essentially any text-based file—into the database. The system can scale to store billions of paragraphs.

No detail on the technology behind OpenPolicy but the mention of paragraphs is enough to make me wary of possible patents on paragraphs.
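To be clear about why: paragraph-level retrieval is a straightforward and long-standing idea, namely indexing paragraphs rather than whole documents as the retrieval unit. A toy sketch, with the sample data invented for the example:

```python
from collections import defaultdict

docs = {
    "policy-42": "Agencies shall retain records.\n\n"
                 "Records older than ten years may be archived.",
}

# Inverted index keyed by term, pointing at (document, paragraph number).
index = defaultdict(set)
for doc_id, text in docs.items():
    for para_no, para in enumerate(text.split("\n\n")):
        for term in para.lower().split():
            index[term.strip(".,")].add((doc_id, para_no))

# A query now pinpoints paragraphs, not whole documents.
print(sorted(index["archived"]))   # -> [('policy-42', 1)]
```

Real systems add ranking, ontologies, and scale, but the paragraph as retrieval unit is the whole trick.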

I am hopeful that even the USPTO would balk at patenting paragraphs in general or as the results of a search but I would not bet money on it.

If you know of any such patents, please post them in comments below.

I first saw this at: LMI Named a Winner in Destination Innovation Competition by Angela Guess.

Guerilla Usability Test: Yelp

Filed under: Interface Research/Design,Usability — Patrick Durusau @ 3:07 pm

Guerilla Usability Test: Yelp by Gary Yu.

Proof that usability testing doesn’t have to be overly complex or long.

Useful insights from five (5) users and a low-tech approach to analysis.

Like you, I have known organizations to spend more than a year on web design/re-design issues and demur on doing even this degree of user testing.

Lack of time, resources, and expertise (they got that one right, but not on the user testing issue) were the most common excuses.

Skipping user testing does greatly reduce the chance that executives will hear any disagreement with their design choices, but I don’t consider that to be a goal in interface design.

Human Sense Making

Filed under: Bioinformatics,Interface Research/Design,Sense,Sensemaking,Workflow — Patrick Durusau @ 12:38 pm

Scientists’ sense making when hypothesizing about disease mechanisms from expression data and their needs for visualization support by Barbara Mirel and Carsten Görg.

Abstract:

A common class of biomedical analysis is to explore expression data from high throughput experiments for the purpose of uncovering functional relationships that can lead to a hypothesis about mechanisms of a disease. We call this analysis expression driven, -omics hypothesizing. In it, scientists use interactive data visualizations and read deeply in the research literature. Little is known, however, about the actual flow of reasoning and behaviors (sense making) that scientists enact in this analysis, end-to-end. Understanding this flow is important because if bioinformatics tools are to be truly useful they must support it. Sense making models of visual analytics in other domains have been developed and used to inform the design of useful and usable tools. We believe they would be helpful in bioinformatics. To characterize the sense making involved in expression-driven, -omics hypothesizing, we conducted an in-depth observational study of one scientist as she engaged in this analysis over six months. From findings, we abstracted a preliminary sense making model. Here we describe its stages and suggest guidelines for developing visualization tools that we derived from this case. A single case cannot be generalized. But we offer our findings, sense making model and case-based tool guidelines as a first step toward increasing interest and further research in the bioinformatics field on scientists’ analytical workflows and their implications for tool design.

From the introduction:

In other domains, improvements in data visualization designs have relied on models of analysts’ actual sense making for a complex analysis [2]. A sense making model captures analysts’ cumulative, looped (not linear) “process [es] of searching for a representation and encoding data in that representation to answer task-specific questions” relevant to an open-ended problem [3]: 269. As an end-to-end flow of application-level tasks, a sense making model may portray and categorize analytical intentions, associated tasks, corresponding moves and strategies, informational inputs and outputs, and progression and iteration over time. The importance of sense making models is twofold: (1) If an analytical problem is poorly understood developers are likely to design for the wrong questions, and tool utility suffers; and (2) if developers do not have a holistic understanding of the entire analytical process, developed tools may be useful for one specific part of the process but will not integrate effectively in the overall workflow [4,5].

As the authors admit, one case isn’t enough to be generalized but their methodology, with its focus on the work flow of a scientist, is a refreshing break from imagined and/or “ideal” work flows for scientists.

Until now semantic software has followed someone’s projection of an “ideal” work flow.

The next generation of semantic software should follow the actual work flows of people working with their data.

I first saw this in a tweet by Neil Saunders.

May 2, 2014

Next Middle East War – Syria

Filed under: History,Topic Maps — Patrick Durusau @ 8:17 pm

Just in case you are collecting topics and occurrences for the next war in the Middle East, I wanted to pass on some remarks made by James Comey about Syria earlier today.

I won’t be dignifying his comments by quoting them but you can see them in full at: FBI Director: Radicalization Of Westerners In Syria Is Of Great Concern.

Comey’s main concern is over approximately one hundred (100) Americans who may be in Syria to take part in the local civil war. Apparently the FBI doesn’t know which side they are on.

I mention this because the repetition of “concern” eventually infects even reasonable people, without their being aware that unsupported allegations were simply repeated until they sounded truthful.

Much like the weapons of mass destruction in Iraq.

Before the background material grows too much more, you might want to start tracking stories to their points of origin and then to who repeats the story, etc. There will be no shortage of such material in the coming months.

I first saw this in a tweet by Ken Dilanian.

Experimental CS – Networks

Filed under: Computer Science,Networks — Patrick Durusau @ 8:00 pm

Design and analysis of experiments in networks: Reducing bias from interference by Dean Eckles, Brian Karrer, and Johan Ugander.

Abstract:

Estimating the effects of interventions in networks is complicated when the units are interacting, such that the outcomes for one unit may depend on the treatment assignment and behavior of many or all other units (i.e., there is interference). When most or all units are in a single connected component, it is impossible to directly experimentally compare outcomes under two or more global treatment assignments since the network can only be observed under a single assignment. Familiar formalism, experimental designs, and analysis methods assume the absence of these interactions, and result in biased estimators of causal effects of interest. While some assumptions can lead to unbiased estimators, these assumptions are generally unrealistic, and we focus this work on realistic assumptions. Thus, in this work, we evaluate methods for designing and analyzing randomized experiments that aim to reduce this bias and thereby reduce overall error. In design, we consider the ability to perform random assignment to treatments that is correlated in the network, such as through graph cluster randomization. In analysis, we consider incorporating information about the treatment assignment of network neighbors. We prove sufficient conditions for bias reduction through both design and analysis in the presence of potentially global interference. Through simulations of the entire process of experimentation in networks, we measure the performance of these methods under varied network structure and varied social behaviors, finding substantial bias and error reductions. These improvements are largest for networks with more clustering and data generating processes with both stronger direct effects of the treatment and stronger interactions between units.

Deep sledding, but that is to be expected as CS matures and abandons simplistic models, such as non-interaction between units in a network.
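One of the design ideas the authors evaluate, graph cluster randomization, is at least easy to sketch: assign treatment to whole clusters of the network rather than to units independently, so tightly connected units share a treatment. A toy version, with the graph and clusters invented for the example:

```python
import random

# Toy social graph as adjacency lists.
graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"],
    "d": ["e"], "e": ["d", "f"], "f": ["e"],
}

# Pretend these clusters came from a prior graph clustering step.
clusters = [["a", "b", "c"], ["d", "e", "f"]]

# Randomize at the cluster level so tightly connected units share
# a treatment, reducing interference across treatment boundaries.
random.seed(42)
assignment = {}
for cluster in clusters:
    arm = random.choice(["treatment", "control"])
    for node in cluster:
        assignment[node] = arm

print(assignment)
```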

While I was reading the abstract, it occurred to me that merges that precipitate other merges could be said to cause interaction between topics.

Since the authors found error reduction in networks with as few as 1,000 vertices, you should not wait until you are building very large topic maps to take this paper into account.

IE Patched!

Filed under: Cybersecurity,Microsoft,Security — Patrick Durusau @ 7:49 pm

Microsoft patches major Internet Explorer security flaw, even for Windows XP by Kif Leswing.

From the post:

Microsoft has patched a major Internet Explorer browser security flaw, the company announced in a blog post Thursday. Notably, the patch will be pushed out to Windows XP machines, which Microsoft had said it would stop supporting on April 8.

Here is the sequence of events as I understand them:

  1. Microsoft announces the bug. No fixes for XP.
  2. Department of Homeland Security says: “Don’t Use Internet Explorer!”
  3. Microsoft announces a patch for the bug, including XP.

All in less than a week.

Should we be reporting security bugs to DHS and not US-CERT?

Seems like DHS has found a way to get bugs fixed.

Yes?

Find Papers We Love

Filed under: Computation,Computer Science,CS Lectures — Patrick Durusau @ 7:29 pm

Find Papers We Love by Zachary Tong.

A search interface to the Github repository maintained by @Papers_we_love.

It’s not “big data” but this search interface is going to make my life better.

You?

PS: Papers We Love

From their homepage:

What was the last paper within the realm of computing you read and loved? What did it inspire you to build or tinker with? Come share the ideas in an awesome academic/research paper with fellow engineers, programmers, and paper-readers. Lead a session and show off code that you wrote that implements these ideas or just give us the lowdown about the paper (because of HARD MATH!). Otherwise, just come, listen, and discuss.

We’re curating a repository for papers and places-to-find papers. You can contribute by adding PR’s for papers, code, and/or links to other repositories.

We’re posting videos of all our presentations, from all our chapters.

This is a productive use of the Internet.

Apache Solr 4.8 Documentation

Filed under: Search Engines,Solr — Patrick Durusau @ 7:22 pm

Apache Solr 4.8 Reference Guide (pdf)

Apache Solr 4.8.0 Documentation

From the documentation page:

Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, rich document (e.g., Word, PDF) handling, and geospatial search. Solr is highly scalable, providing distributed search and index replication, and it powers the search and navigation features of many of the world’s largest internet sites.

Solr is written in Java and runs as a standalone full-text search server within a servlet container such as Jetty. Solr uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it easy to use from virtually any programming language. Solr’s powerful external configuration allows it to be tailored to almost any type of application without Java coding, and it has an extensive plugin architecture when more advanced customization is required.

This is the official documentation for Apache Solr 4.8.0.
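The REST-like API mentioned above is easy to poke at while you read. A minimal sketch, assuming a local Solr 4.8 started with the bundled example data; the core name collection1 and the name field come from that example setup, so adjust for your own index:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Default example core from the Solr 4.x distribution.
base = "http://localhost:8983/solr/collection1/select"
params = {"q": "name:ipod", "wt": "json", "rows": 5}

with urlopen(base + "?" + urlencode(params)) as resp:
    results = json.load(resp)

# Print the hit count, then the id and name of each matching document.
print(results["response"]["numFound"])
for doc in results["response"]["docs"]:
    print(doc.get("id"), doc.get("name"))
```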

I haven’t had good experiences with either the “official” Solr documentation or commercial publications on the same.

Not that any of it in particular was wrong so much as it was incomplete. Not that any of it was short. 😉

Perhaps it was more of an organizational problem than anything else.

I will be using the documentation on a regular basis for a while so I will start contributing suggestions as issues arise.

Curious to know if your experience with the Solr documentation has been the same? Different?

Big Data Report

Filed under: BigData — Patrick Durusau @ 2:24 pm

The Big Data and Privacy Working Group has issued its first report: Findings of the Big Data and Privacy Working Group Review.

John Podesta writes:

Over the past several days, severe storms have battered Arkansas, Oklahoma, Mississippi and other states. Dozens of people have been killed and entire neighborhoods turned to rubble and debris as tornadoes have touched down across the region. Natural disasters like these present a host of challenges for first responders. How many people are affected, injured, or dead? Where can they find food, shelter, and medical attention? What critical infrastructure might have been damaged?

Drawing on open government data sources, including Census demographics and NOAA weather data, along with their own demographic databases, Esri, a geospatial technology company, has created a real-time map showing where the twisters have been spotted and how the storm systems are moving. They have also used these data to show how many people live in the affected area, and summarize potential impacts from the storms. It’s a powerful tool for emergency services and communities. And it’s driven by big data technology.

In January, President Obama asked me to lead a wide-ranging review of “big data” and privacy—to explore how these technologies are changing our economy, our government, and our society, and to consider their implications for our personal privacy. Together with Secretary of Commerce Penny Pritzker, Secretary of Energy Ernest Moniz, the President’s Science Advisor John Holdren, the President’s Economic Advisor Jeff Zients, and other senior officials, our review sought to understand what is genuinely new and different about big data and to consider how best to encourage the potential of these technologies while minimizing risks to privacy and core American values.

The full text of the Big Data Report.

The executive summary seems to shift between “big data” and “not big data.” Maps of where twisters hit recently are hardly the province of “big data.” Every local news program produces similar maps. Even summarizing potential damage isn’t a “big” data issue. Both are data issues but that isn’t the same thing as “big data.”

If we are not careful, “big data” will very soon equal “data” and a useful distinction will have been lost.

Read the report over the weekend and post comments if you see other issues that merit mention.

Thanks!

May 1, 2014

Digitising Tithe Maps

Filed under: Mapping,Maps — Patrick Durusau @ 7:40 pm

Digitising Tithe Maps by Einion Gruffudd.

From the post:

Cynefin is a Welsh word for the area you are familiar with. It is also the name of a project for digitising the Tithe Maps of Wales. This is a considerable challenge, there are over a thousand of them and they are large, some are two by three metres or more. The maps are highly popular with the public, and with television programmes, and it is easy to see why. Not only are the maps very detailed, but they also have attached schedules or apportionment documents which include the names of the people who were paying tithes around the 1840s, and where they were farmers, as most were, more often than not the names of their fields are included.

The Cynefin project will produce digitised images of the maps and the apportionment documents, and much more as well. As there is such a wealth of information in the apportionment documents, they will be transcribed, but for this we are reliant on help from the community. We also plan to link the apportionment entries directly to the field numbers which are on the maps, this work will also involve volunteers. There will soon be a crowd sourcing platform giving the public an opportunity to contribute directly to the project.

Think of these maps as the equivalents of a modern-day tax assessor’s maps for property taxes, sans the need to tithe to support the local church.

Such rich maps offer many opportunities to create links (read associations) between these records and other sources of information from the same time period.

Enjoy!

The Pocket Guide to Bullshit Prevention

Filed under: Communication,Marketing — Patrick Durusau @ 7:10 pm

The Pocket Guide to Bullshit Prevention by Michelle Nijhuis.

From the post:

[Image: the Pocket Guide to Bullshit Prevention card]

I am often wrong. I misunderstand; I misremember; I believe things I shouldn’t. I’m overly optimistic about the future quality of Downton Abbey, and inexact in my recall of rock-star shenanigans. But I am not often—knock wood—wrong in print, and that’s because, as a journalist, I’ve had advanced training in Bullshit Prevention Protocol (BPP).

Lately, as I’ve watched smarter and better-dressed friends believe all manner of Internet nonsense, I’ve come to appreciate my familiarity with BPP. Especially because we’re all publishers now. (Sharing a piece of news with 900 Facebook friends is not talking. It’s publishing.) And publishing bullshit is extremely destructive: It makes it harder for the rest of us to distinguish between bogus news and something real, awful, and urgent.

While BPP is not failsafe, generations of crotchety, underpaid, truth-loving journalists have found that it dramatically reduces one’s chances of publishing bullshit.

So I believe that everyone should practice BPP before publishing. No prior experience is required: Though it’s possible to spend a lifetime debating the finer points of BPP (and the sorely-needed news literacy movement wants high-school and college students to spend at least a semester doing so) its general principles, listed in a handy, portable, and free—free!—form above, are simple.

Here’s how they work in practice.

This rocks!

Michelle’s post is a must read to get the maximum benefit from the Pocket Guide to Bullshit Prevention (PGBP).

If I could equip every librarian with one and only one resource for evaluating technologies, it would be this pocket guide.

The 49 words of the PGBP will serve you better than any technology review, guide, article, or testimonial.

It takes effort on your part but the choice is effort on your part or your being taken advantage of by others.

Your call.

I first saw this in a tweet by Neil Saunders.
