Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

March 7, 2014

Building fast Bayesian computing machines…

Filed under: Artificial Intelligence,Bayesian Data Analysis,Precision — Patrick Durusau @ 11:41 am

Building fast Bayesian computing machines out of intentionally stochastic, digital parts by Vikash Mansinghka and Eric Jonas.

Abstract:

The brain interprets ambiguous sensory information faster and more reliably than modern computers, using neurons that are slower and less reliable than logic gates. But Bayesian inference, which underpins many computational models of perception and cognition, appears computationally challenging even given modern transistor speeds and energy budgets. The computational principles and structures needed to narrow this gap are unknown. Here we show how to build fast Bayesian computing machines using intentionally stochastic, digital parts, narrowing this efficiency gap by multiple orders of magnitude. We find that by connecting stochastic digital components according to simple mathematical rules, one can build massively parallel, low precision circuits that solve Bayesian inference problems and are compatible with the Poisson firing statistics of cortical neurons. We evaluate circuits for depth and motion perception, perceptual learning and causal reasoning, each performing inference over 10,000+ latent variables in real time – a 1,000x speed advantage over commodity microprocessors. These results suggest a new role for randomness in the engineering and reverse-engineering of intelligent computation.
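The core trick, replacing deterministic logic with components that draw samples from conditional distributions, is easy to sketch in software. Here is a toy Gibbs-style sampler for a two-cause "wet grass" model; the model and its probabilities are my own invention for illustration and say nothing about the authors' actual circuits.

```python
import random

# Toy "explaining away" model: the grass is observed to be wet.
# Was it rain, the sprinkler, or both?  All probabilities are invented.
P_RAIN, P_SPRINKLER = 0.2, 0.3

def p_wet(rain, sprinkler):
    # Likelihood of wet grass given its two possible causes.
    if rain and sprinkler:
        return 0.99
    if rain or sprinkler:
        return 0.90
    return 0.05

def resample(prior, other_fixed, as_rain):
    # One "stochastic gate": draw a variable from its conditional distribution,
    # conditioned on the other cause and on the observed wet grass.
    p1 = prior * (p_wet(1, other_fixed) if as_rain else p_wet(other_fixed, 1))
    p0 = (1 - prior) * (p_wet(0, other_fixed) if as_rain else p_wet(other_fixed, 0))
    return 1 if random.random() < p1 / (p1 + p0) else 0

rain, sprinkler = 0, 0
counts = [0, 0]  # posterior tallies for rain and sprinkler
for _ in range(20000):
    rain = resample(P_RAIN, sprinkler, as_rain=True)
    sprinkler = resample(P_SPRINKLER, rain, as_rain=False)
    counts[0] += rain
    counts[1] += sprinkler

print("P(rain | wet grass)      ~", counts[0] / 20000)
print("P(sprinkler | wet grass) ~", counts[1] / 20000)
```

Each call to resample plays the role of one "intentionally stochastic part": it flips a biased coin whose bias is computed from the current state of the rest of the circuit.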

It is ironic that the greater precision and repeatability of our digital computers may be design choices that are holding back advances in Bayesian computing machines.

I have written before about the RDF ecosystem being overly complex and overly precise for everyday users.

We should strive to capture semantics as understood by scientists, researchers, students, and others: less precise than professional semantics, but precise enough to be usable.

I first saw this in a tweet by Stefano Bertolo.

January 3, 2014

Wikibase DataModel released!

Filed under: Data Models,Identification,Precision,Subject Identity,Wikidata,Wikipedia — Patrick Durusau @ 5:04 pm

Wikibase DataModel released! by Jeroen De Dauw.

From the post:

I’m happy to announce the 0.6 release of Wikibase DataModel. This is the first real release of this component.

DataModel?

Wikibase is the software behind Wikidata.org. At its core, this software is about describing entities. Entities are collections of claims, which can have qualifiers, references and values of various different types. How this all fits together is described in the DataModel document written by Markus and Denny at the start of the project. The Wikibase DataModel component contains (PHP) domain objects representing entities and their various parts, as well as associated domain logic.
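To make the shape of that model concrete, here is a rough sketch in Python. The real component is PHP, and the class and field names below are my own illustration, not the Wikibase API:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Snak:
    # A property/value pair, e.g. "binomial name" -> "Orcinus orca (Linnaeus, 1758)".
    property_id: str
    value: Union[str, int, float]

@dataclass
class Claim:
    main_snak: Snak
    qualifiers: List[Snak] = field(default_factory=list)
    references: List[List[Snak]] = field(default_factory=list)

@dataclass
class Item:
    # An Entity that a wiki page is "about".
    item_id: str          # stable ID string; format here is hypothetical
    labels: dict          # language code -> label
    claims: List[Claim] = field(default_factory=list)

orca = Item(
    item_id="wd0000000001",
    labels={"en": "Orca"},
    claims=[Claim(Snak("binomial name", "Orcinus orca (Linnaeus, 1758)"))],
)
print(orca.claims[0].main_snak.value)
```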

I wanted to draw your attention to this discussion of “items:”

Items are Entities that are typically represented by a Wikipage (at least in some Wikipedia languages). They can be viewed as “the thing that a Wikipage is about,” which could be an individual thing (the person Albert Einstein), a general class of things (the class of all Physicists), and any other concept that is the subject of some Wikipedia page (including things like History of Berlin).

The IRI of an Item will typically be closely related to the URL of its page on Wikidata. It is expected that Items store a shorter ID string (for example, as a title string in MediaWiki) that is used in both cases. ID strings might have a standardized technical format such as “wd1234567890” and will usually not be seen by users. The ID of an Item should be stable and not change after it has been created.

The exact meaning of an Item cannot be captured in Wikidata (or any technical system), but is discussed and decided on by the community of editors, just as it is done with the subject of Wikipedia articles now. It is possible that an Item has multiple “aspects” to its meaning. For example, the page Orca describes a species of whales. It can be viewed as a class of all Orca whales, and an individual whale such as Keiko would be an element of this class. On the other hand, the species Orca is also a concept about which we can make individual statements. For example, one could say that the binomial name (a Property) of the Orca species has the Value “Orcinus orca (Linnaeus, 1758).”

However, it is intended that the information stored in Wikidata is generally about the topic of the Item. For example, the Item for History of Berlin should store data about this history (if there is any such data), not about Berlin (the city). It is not intended that data about one subject is distributed across multiple Wikidata Items: each Item fully represents one thing. This also helps for data integration across languages: many languages have no separate article about Berlin’s history, but most have an article about Berlin.

What do you make of the claim:

The exact meaning of an Item cannot be captured in Wikidata (or any technical system), but is discussed and decided on by the community of editors, just as it is done with the subject of Wikipedia articles now. It is possible that an Item has multiple “aspects” to its meaning. For example, the page Orca describes a species of whales. It can be viewed as a class of all Orca whales, and an individual whale such as Keiko would be an element of this class. On the other hand, the species Orca is also a concept about which we can make individual statements. For example, one could say that the binomial name (a Property) of the Orca species has the Value “Orcinus orca (Linnaeus, 1758).”

I may write an information system that fails to distinguish between a species of whales, a class of whales and a particular whale, but that is a design choice, not a foregone conclusion.
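To make the point concrete, here is a minimal sketch of a design that keeps the readings apart, with purely hypothetical identifiers:

```python
# Minimal sketch: give the class reading, the individual whale, and the
# species-as-subject their own typed identifiers.  Identifiers are hypothetical.
subjects = {
    "ex:OrcinusOrcaClass": {"kind": "class",
                            "label": "Orca (class of all Orca whales)"},
    "ex:Keiko":            {"kind": "instance",
                            "label": "Keiko (an individual whale)",
                            "instance_of": "ex:OrcinusOrcaClass"},
    "ex:OrcaAsSpecies":    {"kind": "individual concept",
                            "label": "Orca (the species as a subject of statements)",
                            "binomial_name": "Orcinus orca (Linnaeus, 1758)"},
}

# Statements about the species-as-subject do not leak onto the class or its members.
for sid, s in subjects.items():
    print(sid, "->", s["kind"])
```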

In the case of Wikipedia, which relies upon individuals repeating the task of extracting relevant information from loosely gathered data, that approach works quite well.

But there isn’t one degree of precision of identification that works for all cases.

My suspicion is that for more demanding search applications, such as drug interactions, less precise identifications could lead to unfortunate, even fatal, results.

Yes?

December 27, 2013

Imprecise machines mess with history

Filed under: Precision,Search Engines — Patrick Durusau @ 4:33 pm

Imprecise machines mess with history by Kaiser Fung.

From the post:

The mass media continues to gloss over the imprecision of machines/algorithms.

Here is another example I came across the other day. In conversation, the name Martin Van Buren popped up. I was curious about this eighth President of the United States.

What caught my eye in the following Google search result (right panel) is his height:

See Kaiser’s post for an amusing error about U.S. Presidents, one that has no doubt been echoed in U.S. classrooms.

Kaiser asks how fact-checking machines might be made possible.

I’m not sure we need fact-checking machines as much as we need several canonical sources of information on the WWW.

At one time there were several world almanacs in print (there may still be), and for most routine information those were authoritative sources.

I don’t know that search engines need fact checkers so much as they need to be less promiscuous, at least in terms of the content they repeat as fact.

There is a difference between “facts” indexed from the New York Times and “facts” from some local historical society.

The source of data was important before the WWW and it continues to be important today.

July 20, 2013

11 Billion Clues in 800 Million Documents:…

Filed under: Data,Freebase,Precision,Recall — Patrick Durusau @ 2:16 pm

11 Billion Clues in 800 Million Documents: A Web Research Corpus Annotated with Freebase Concepts by Dave Orr, Amar Subramanya, Evgeniy Gabrilovich, and Michael Ringgaard.

From the post:

When you type in a search query — perhaps Plato — are you interested in the string of letters you typed? Or the concept or entity represented by that string? But knowing that the string represents something real and meaningful only gets you so far in computational linguistics or information retrieval — you have to know what the string actually refers to. The Knowledge Graph and Freebase are databases of things, not strings, and references to them let you operate in the realm of concepts and entities rather than strings and n-grams.

We’ve previously released data to help with disambiguation and recently awarded $1.2M in research grants to work on related problems. Today we’re taking another step: releasing data consisting of nearly 800 million documents automatically annotated with over 11 billion references to Freebase entities.

These Freebase Annotations of the ClueWeb Corpora (FACC) consist of ClueWeb09 FACC and ClueWeb12 FACC. 11 billion phrases that refer to concepts and entities in Freebase were automatically labeled with their unique identifiers (Freebase MID’s). …

(…)

Based on review of a sample of documents, we believe the precision is about 80-85%, and recall, which is inherently difficult to measure in situations like this, is in the range of 70-85%….

(…)

Evaluate precision and recall by asking:

Your GPS gives you relevant directions, on average, eight (8) times out of ten (10), and it finds relevant locations, on average, seven (7) times out of ten (10). (See the Wikipedia article on precision and recall.)

Is that a good GPS?
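If you want the arithmetic behind the analogy, precision and recall are just ratios over true positives, false positives and false negatives. A quick sketch with toy counts (not numbers from the FACC release):

```python
def precision(tp, fp):
    # Of the results returned, what fraction were relevant?
    return tp / (tp + fp)

def recall(tp, fn):
    # Of the relevant items that exist, what fraction were returned?
    return tp / (tp + fn)

# Toy counts, roughly matching "8 out of 10" precision and "7 out of 10" recall.
tp, fp, fn = 70, 17, 30
print(f"precision = {precision(tp, fp):.2f}")   # ~0.80
print(f"recall    = {recall(tp, fn):.2f}")      # 0.70
```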

A useful data set, but still a continuation of the approach of guessing what authors meant when they authored their documents.

What if, by some as-yet-unknown technique, precision went to nine (9) out of ten (10) and recall went to nine (9) out of ten (10) as well?

The GPS question becomes:

Your GPS gives you relevant directions, on average, nine (9) times out of ten (10), and it finds relevant locations, on average, nine (9) times out of ten (10).

Is that a good GPS?

Not that any automated technique has shown that level of performance.

Rather than focusing on data post-authoring, why not enable authors to declare their semantics?

Author-declared semantics would reduce the cost and uncertainty of post-authoring semantic solutions.
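What author-declared semantics could look like in the simplest case: the author attaches an identifier to a term at writing time, so nothing downstream has to guess. The markup convention and identifiers below are hypothetical:

```python
import re

# Hypothetical authoring convention: the author writes [term](id:IDENTIFIER),
# so readers and machines resolve the same subject without disambiguation.
text = ("In the [Plato](id:philosopher/plato) dialogues, "
        "[forms](id:concept/platonic-form) are central.")

for term, ident in re.findall(r"\[([^\]]+)\]\(id:([^)]+)\)", text):
    print(f"{term!r} -> declared subject identifier {ident!r}")
```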

I first saw this in a tweet by Nicolas Torzec.

December 23, 2011

How accurate can manual review be?

Filed under: Authoring Topic Maps,Precision,Recall,Retrieval — Patrick Durusau @ 4:31 pm

How accurate can manual review be?

From the post:

One of the chief pleasures for me of this year’s SIGIR in Beijing was attending the SIGIR 2011 Information Retrieval for E-Discovery Workshop (SIRE 2011). The smaller and more selective the workshop, it often seems, the more focused and interesting the discussion.

My own contribution was “Re-examining the Effectiveness of Manual Review”. The paper was inspired by an article from Maura Grossman and Gord Cormack, whose message is neatly summed up in its title: “Technology-assisted review in e-discovery can be more effective and more efficient than exhaustive manual review”.

Fascinating work!

Does this give you pause about automated topic map authoring? Why/why not?

November 15, 2011

Recall vs. Precision

Filed under: Precision,Recall — Patrick Durusau @ 7:57 pm

Recall vs. Precision by Gene Golovchinsky.

From the post:

Stephen Robertson’s talk at the CIKM 2011 Industry event caused me to think about recall and precision again. Over the last decade precision-oriented searches have become synonymous with web searches, while recall has been relegated to narrow verticals. But is precision@5 or NDCG@1 really the right way to measure the effectiveness of interactive search? If you’re doing a known-item search, looking up a common factoid, etc., then perhaps it is. But for most searches, even ones that might be classified as precision-oriented ones, the searcher might wind up with several attempts to get at the answer. Dan Russell’s A Google a Day lists exactly those kinds of challenges: find a fact that’s hard to find.

So how should we think about evaluating the kinds of searches that take more than one query, ones we might term session-based searches?

Read the post and the comments more than once!

Then think about how you would answer the questions raised, in or out of a topic map context.

Much food for thought here.

November 14, 2011

Stephen Robertson on Why Recall Matters

Filed under: Information Retrieval,Precision,Recall — Patrick Durusau @ 7:14 pm

Stephen Robertson on Why Recall Matters, by Daniel Tunkelang (November 14th, 2011).

Daniel has the slides and an extensive summary of the presentation. Just to give you a taste of what awaits at Daniel’s post:

Stephen started by reminding us of ancient times (i.e., before the web), when at least some IR researchers thought in terms of set retrieval rather than ranked retrieval. He reminded us of the precision and recall “devices” that he’d described in his Salton Award Lecture — an idea he attributed to the late Cranfield pioneer Cyril Cleverdon. He noted that, while set retrieval uses distinct precision and recall devices, ranking conflates both into the decision of where to truncate a ranked result list. He also pointed out an interesting asymmetry in the conventional notion of precision-recall tradeoff: while returning more results can only increase recall, there is no certainty that the additional results will decrease precision. Rather, this decrease is a hypothesis that we associate with systems designed to implement the probability ranking principle, returning results in decreasing order of probability of relevance.
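That asymmetry is easy to see numerically. A toy sketch of precision and recall as a ranked result list is truncated deeper and deeper (the relevance judgments are invented):

```python
# Toy ranked result list: True marks a relevant document.
ranked = [True, True, False, True, False, False, True, False, False, False]
total_relevant = 5   # pretend one relevant document was never retrieved at all

for k in (1, 3, 5, 10):
    hits = sum(ranked[:k])
    print(f"cutoff {k:2d}: precision@{k} = {hits / k:.2f}, "
          f"recall@{k} = {hits / total_relevant:.2f}")
```

Recall can only rise as the cutoff deepens; whether precision falls depends on where the relevant documents actually sit in the ranking.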

Interested? There’s more where that came from; see the link to Daniel’s post above.

September 28, 2011

Is Precision the Enemy of Serendipity?

Filed under: Precision,Serendipity — Patrick Durusau @ 7:33 pm

I was reading claims of increased precision by software X the other day. I have probably mentioned this before (and it wasn’t original then, nor is it now), but precision seems to me to be the enemy of serendipity.

For example, when I was an undergraduate, the library would display all the recent issues of journals on long angled shelves. So it was possible to walk along looking at the new issues in a variety of areas with ease. As a political science major I could have gone directly to journals on political science. But I would have missed the Review of Metaphysics and/or the Journal of the History of Ideas, both of which are rich sources of ideas relevant to topic maps (and information systems more generally).

But precision about the information available, say a departmental page that links only to electronic versions of journals relevant to the “discipline,” reduces the opportunity to recognize relevant literature outside the confines of that discipline.

True, I still browse a lot; otherwise I would not notice titles like: k-means Approach to the Karhunen-Loève Transform (aka PCA – Principal Component Analysis). I knew that k-means was a form of clustering that could help with gathering members of collective topics together, but quite honestly I did not recognize the Karhunen-Loève Transform. I know it as PCA (Principal Component Analysis), which is why I inserted that name in my blog title, to help others recognize the technique.

Of course, the problem is that sometimes I really do want precision: perhaps I am rushed to finish a job or need to find a reference for a standard. In those cases I don’t have time to wade through a lot of search results, and I appreciate whatever (little) precision I can wring out of a search engine.

Whether I want more precision or more serendipity varies from day to day for me. How about you?

May 6, 2011

Evaluating Recommender Systems…

Filed under: F-Score,Precision,Recall — Patrick Durusau @ 12:21 pm

Evaluating Recommender Systems – Explaining F-Score, Recall and Precision using Real Data Set from Apontador

Marcel Caraciolo says:

In this post I will introduce three metrics widely used for evaluating the utility of recommendations produced by a recommender system: Precision, Recall and F-1 Score. The F-1 Score is slightly different from the other ones, since it is a measure of a test’s accuracy and considers both the precision and the recall of the test to compute the final score.
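As a quick illustration of how the three metrics relate, here is a sketch over toy data (not the Apontador data set the post works with):

```python
def precision_recall_f1(recommended, relevant):
    # Compare a top-N recommendation list against the items the user actually liked.
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: 5 recommendations, 4 items the user actually found relevant.
p, r, f = precision_recall_f1(["a", "b", "c", "d", "e"], ["b", "d", "x", "y"])
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")   # 0.40, 0.50, 0.44
```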

Recommender systems are quite common and you are likely to encounter them while deploying topic maps. (Or you may wish to build one as part of a topic map system.)

May 17, 2010

Precision versus Recall

Filed under: Precision,Recall — Patrick Durusau @ 7:36 pm

High precision means resources are missed.

High recall means sifting garbage.

Q: Based on what assumption?

A: No assumption, observed behavior of texts and search engines.

Q: Based on what texts?

A: All texts, yes, all texts.

Q: Texts where the same subjects have different words/phrases and the same words/phrases mean different subjects?

A: Yes, those texts!

Q: If the subjects were identified in those texts, we could have high precision and high recall?

A: Yes, but not possible, too many texts!

Q: If the authors of new texts were to identify….

A: Sorry, no time, have to search now. Good-bye!
