Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

January 24, 2018

‘Learning to Rank’ (No Unique Feature Name Fail – Update)

Filed under: Artificial Intelligence,ElasticSearch,Ranking,Searching — Patrick Durusau @ 8:02 pm

Elasticsearch ‘Learning to Rank’ Released, Bringing Open Source AI to Search Teams

From the post:

Search experts at OpenSource Connections, the Wikimedia Foundation, and Snagajob deliver open source cognitive search capabilities to the Elasticsearch community. The open source Learning to Rank plugin allows organizations to control search relevance ranking with machine learning. The plugin is currently delivering search results at Wikipedia and Snagajob, providing significant search quality improvements over legacy solutions.

Learning to Rank lets organizations:

  • Directly optimize sales, conversions and user satisfaction in search
  • Personalize search for users
  • Drive deeper insights from a knowledge base
  • Customize ranking to handle complex nuance
  • Avoid the sticker shock & lock-in of a proprietary "cognitive search" product

“Our mission is to empower search teams. This plugin gives teams deep control of ranking, allowing machine learning models to be directly deployed to the search engine for relevance ranking” said Doug Turnbull, author of Relevant Search and CTO, OpenSource Connections.

I need to work through all the documentation and examples but:

Feature Names are Unique

Because some model training libraries refer to features by name, Elasticsearch LTR enforces unique names for each feature. In the example above, we could not add a new user_rating feature without creating an error.

is a warning of what you (and I) are likely to find.

Really? Someone involved in the design thought globally unique feature names were a good idea? Or, at a minimum, didn’t realize they are a very bad idea?

Scope anyone? Either in the programming or topic map sense?

Despite the unique feature name fail, I’m sure ‘Learning to Rank’ will be useful. But not as useful as it could have been.

Doug Turnbull (https://twitter.com/softwaredoug) advises that features are scoped by feature stores, so the correct prose would read: “…LTR enforces unique names for each feature within a feature store.”

No fail, just bad writing.
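Doug’s correction is easy to see in miniature. Here is a toy Python sketch of per-store scoping; it is purely illustrative, and the class and method names are invented rather than taken from the plugin’s API:

```python
class FeatureStore:
    """Toy model: feature names are unique within a store, not globally."""

    def __init__(self, name):
        self.name = name
        self.features = {}

    def add_feature(self, feature_name, template):
        # Uniqueness is enforced per store, which is all a training
        # library needs to look features up by name.
        if feature_name in self.features:
            raise ValueError(f"feature {feature_name!r} already exists "
                             f"in store {self.name!r}")
        self.features[feature_name] = template


movies = FeatureStore("movies")
jobs = FeatureStore("jobs")

# The same feature name is fine in two different stores...
movies.add_feature("user_rating", {"field": "rating"})
jobs.add_feature("user_rating", {"field": "stars"})

# ...but a duplicate within one store raises an error, which is the
# constraint the documentation was describing so clumsily.
# movies.add_feature("user_rating", {"field": "other"})  # ValueError
```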

August 13, 2013

Are EigenVectors Dangerous?

Filed under: Graphs,Mathematics,Networks,PageRank,Ranking — Patrick Durusau @ 7:44 pm

neo4j: Extracting a subgraph as an adjacency matrix and calculating eigenvector centrality with JBLAS by Mark Needham.

Mark continues his exploration of Eigenvector centrality by adding the Eigenvector centrality values back to the graph from which they were derived.

Putting the Eigenvector centrality results back into Neo4j makes them easier to query.

What troubles me is that Eigenvector centrality values are based only upon the recorded information we have for the graph.

There is no allowance for missing relationships or any validation of the Eigenvector centrality values found.
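To make the concern concrete, here is a minimal Python sketch (the graph and numbers are invented for illustration) showing how the scores shift when a single unrecorded relationship is absent from the adjacency matrix:

```python
import numpy as np

def centrality(adj):
    # Leading eigenvector of the adjacency matrix, rescaled to sum to 1
    # so the before/after values are easy to compare.
    vals, vecs = np.linalg.eig(adj)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Small undirected graph: edges 0-1, 0-2, 1-2, 2-3.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

print(centrality(adj).round(3))

# Suppose the 0-1 relationship exists but was never recorded:
adj_missing = adj.copy()
adj_missing[0, 1] = adj_missing[1, 0] = 0
print(centrality(adj_missing).round(3))  # every node's score moves
```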

Recall that Paul Revere was a “terrorist” in his day, that the NSA uses algorithms to declare nodes “important,” and that detainees lack access to the courts, and Eigenvector centrality values start to look dangerous.

How would you validate Eigenvector centrality values? Not mathematically, but against known values or facts outside of your graph.

How Important is Your Node in the Social Graph?

Filed under: Graphs,Mathematics,Networks,PageRank,Ranking — Patrick Durusau @ 6:08 pm

Java/JBLAS: Calculating eigenvector centrality of an adjacency matrix by Mark Needham.

OK, Mark’s title is more accurate but mine is more likely to get you to look beyond the headline. 😉

From the post:

I recently came across a very interesting post by Kieran Healy where he runs through a bunch of graph algorithms to see whether he can detect the most influential people behind the American Revolution based on their membership of various organisations.

The first algorithm he looked at was betweenness centrality which I’ve looked at previously and is used to determine the load and importance of a node in a graph.

This algorithm would assign a high score to nodes which have a lot of nodes connected to them even if those nodes aren’t necessarily influential nodes in the graph.

If we want to take the influence of the other nodes into account then we can use an algorithm called eigenvector centrality.

You may remember Kieran Healy’s post from Using Metadata to Find Paul Revere [In a Perfect World], where I pointed out that Kieran was using clean data. No omissions, no variant spellings, no confusion of any sort.

I suspect any sort of analysis would succeed with the proviso that it only gets clean data. Unlikely in an unclean data world.

But setting that aside, Mark does a great job of assembling references on eigenvectors, along with code for processing them. Follow all the resources in Mark’s post and you will have a much deeper understanding of this area.

Be sure to take note of the comparison between PageRank and Eigenvector centrality. The results are computational artifacts of the choices you make, choices that become visible when you examine the end results.
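To see the contrast for yourself, here is a minimal Python sketch (not Mark’s JBLAS code; the graph is invented) that runs both measures over the same adjacency matrix:

```python
import numpy as np

def eigenvector_centrality(adj, iters=100):
    # Power iteration toward the leading eigenvector.
    v = np.ones(adj.shape[0])
    for _ in range(iters):
        v = adj @ v
        v /= np.linalg.norm(v)
    return v / v.sum()

def pagerank(adj, damping=0.85, iters=100):
    # PageRank differs by normalizing each node's influence over its
    # out-links and mixing in a uniform "teleport" term.
    n = adj.shape[0]
    out_degree = adj.sum(axis=0)
    out_degree[out_degree == 0] = 1.0  # guard against sink nodes
    transition = adj / out_degree      # column-stochastic matrix
    v = np.full(n, 1.0 / n)
    for _ in range(iters):
        v = (1 - damping) / n + damping * (transition @ v)
    return v / v.sum()

# The same small undirected graph for both measures.
adj = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

print("eigenvector:", eigenvector_centrality(adj).round(3))
print("pagerank:   ", pagerank(adj).round(3))
```

The two orderings often agree at the top, but the numbers differ because each measure encodes different assumptions about how influence flows.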

PS: The Wikipedia link for Centrality cites Opsahl, Tore; Agneessens, Filip; Skvoretz, John (2010). “Node centrality in weighted networks: Generalizing degree and shortest paths“. Social Networks 32 (3): 245. doi:10.1016/j.socnet.2010.03.006 as a good summary. The link for the title leads to a preprint which is freely available.

March 10, 2013

Solr: Custom Ranking with Function Queries

Filed under: Lucene,Ranking,Solr — Patrick Durusau @ 8:42 pm

Solr: Custom Ranking with Function Queries by Sujit Pal.

From the post:

Solr has had support for Function Queries since version 3.1, but before sometime last week, I did not have a use for it. Which is probably why when I would read about Function Queries, they would seem like a nice idea, but not interesting enough to pursue further.

Most people get introduced to Function Queries through the bf parameter in the DisMax Query Parser or through the geodist function in Spatial Search. So far, I haven’t had the opportunity to personally use either feature in a real application. My introduction to Function Queries was through a problem posed to me by one of my coworkers.

The problem was as follows. We want to be able to customize our search results based on what a (logged-in) user tells us about himself or herself via their profile. This could be gender, age, ethnicity and a variety of other things. On the content side, we can annotate the document with various features corresponding to these profile features. For example, we can assign a score to a document that indicates its appeal/information value to males versus females that would correspond to the profile’s gender.

So the idea is that if we know that the profile is male, we should boost the documents that have a high male appeal score and deboost the ones that have a high female appeal score, and vice versa if the profile is female. This idea can be easily extended for multi-category features such as ethnicity as well. In this post, I will describe a possible implementation that uses Function Queries to rerank search results using male/female appeal document scores.
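As a rough sketch of the general technique (not Sujit’s implementation; the field names and URL are stand-ins invented here), the client can choose the appeal field that matches the profile and pass it to Solr as an additive boost function through the edismax bf parameter:

```python
import requests

def search(query, gender, solr_url="http://localhost:8983/solr/content/select"):
    # Pick the per-document appeal score that matches the profile.
    boost_field = "male_appeal_score" if gender == "male" else "female_appeal_score"
    params = {
        "q": query,
        "defType": "edismax",
        "bf": f"field({boost_field})",  # additive boost from the stored score
        "fl": "id,title,score",
        "wt": "json",
    }
    return requests.get(solr_url, params=params).json()
```

Note that bf adds to the relevance score; if you want a multiplicative effect, the edismax boost parameter takes a function query as well.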

Does your topic map deliver information based on user characteristics?

Have you re-invented ranking, or are you using an off-the-shelf solution?

January 22, 2013

On ranking relevant entities in heterogeneous networks…

Filed under: Entities,Heterogeneous Data,Ranking — Patrick Durusau @ 2:40 pm

On ranking relevant entities in heterogeneous networks using a language-based model by Laure Soulier, Lamjed Ben Jabeur, Lynda Tamine, Wahiba Bahsoun. (Soulier, L., Jabeur, L. B., Tamine, L. and Bahsoun, W. (2013), On ranking relevant entities in heterogeneous networks using a language-based model. J. Am. Soc. Inf. Sci.. doi: 10.1002/asi.22762)

Abstract:

A new challenge, accessing multiple relevant entities, arises from the availability of linked heterogeneous data. In this article, we address more specifically the problem of accessing relevant entities, such as publications and authors within a bibliographic network, given an information need. We propose a novel algorithm, called BibRank, that estimates a joint relevance of documents and authors within a bibliographic network. This model ranks each type of entity using a score propagation algorithm with respect to the query topic and the structure of the underlying bi-type information entity network. Evidence sources, namely content-based and network-based scores, are both used to estimate the topical similarity between connected entities. For this purpose, authorship relationships are analyzed through a language model-based score on the one hand and on the other hand, non topically related entities of the same type are detected through marginal citations. The article reports the results of experiments using the Bibrank algorithm for an information retrieval task. The CiteSeerX bibliographic data set forms the basis for the topical query automatic generation and evaluation. We show that a statistically significant improvement over closely related ranking models is achieved.
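As a drastically simplified illustration of the score-propagation idea (this is not the paper’s BibRank model, and the numbers are invented), document and author scores can reinforce each other through authorship links:

```python
import numpy as np

# Authorship links: A[d, a] = 1 if author a wrote document d.
A = np.array([
    [1, 0],
    [1, 1],
    [0, 1],
], dtype=float)

content_scores = np.array([0.9, 0.4, 0.1])  # content-based relevance to a query
doc_scores = content_scores.copy()

for _ in range(20):
    # Authors inherit relevance from their papers...
    author_scores = A.T @ doc_scores
    author_scores /= author_scores.sum()
    # ...and documents mix their content score with their authors' standing.
    doc_scores = 0.5 * content_scores + 0.5 * (A @ author_scores)
    doc_scores /= doc_scores.sum()

print(doc_scores.round(3), author_scores.round(3))
```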

Note the “estimat[ion] of topic similarity between connected entities.”

Very good work, but rather than a declaration of similarity (as with topic maps), we have an estimate of similarity.

Before you protest about the volume of literature/data, recall that some author wrote the documents in question and selected the terms and references found therein.

Rather than guessing what may be similar to what the author wrote, why not devise a method to allow the author to say?

And build upon similarity/sameness declarations across heterogeneous networks of data.

October 6, 2012

ReFr: A New Open-Source Framework for Building Reranking Models

Filed under: Natural Language Processing,Ranking — Patrick Durusau @ 1:09 pm

ReFr: A New Open-Source Framework for Building Reranking Models by Dan Bikel and Keith Hall.

From the post:

We are pleased to announce the release of an open source, general-purpose framework designed for reranking problems, ReFr (Reranker Framework), now available at: http://code.google.com/p/refr/.

Many types of systems capable of processing speech and human language text produce multiple hypothesized outputs for a given input, each with a score. In the case of machine translation systems, these hypotheses correspond to possible translations from some sentence in a source language to a target language. In the case of speech recognition, the hypotheses are possible word sequences of what was said derived from the input audio. The goal of such systems is usually to produce a single output for a given input, and so they almost always just pick the highest-scoring hypothesis.

A reranker is a system that uses a trained model to rerank these scored hypotheses, possibly inducing a different ranked order. The goal is that by employing a second model after the fact, one can make use of additional information not available to the original model, and produce better overall results. This approach has been shown to be useful for a wide variety of speech and natural language processing problems, and was the subject of one of the groups at the 2011 summer workshop at Johns Hopkins’ Center for Language and Speech Processing. At that workshop, led by Professor Brian Roark of Oregon Health & Science University, we began building a general-purpose framework for training and using reranking models. The result of all this work is ReFr.
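ReFr itself is a C++ framework, but the shape of the reranking step is easy to sketch in Python (the hypotheses, features, and weights below are invented for illustration):

```python
def rerank(hypotheses, weights):
    """Rescore an n-best list with a second model's features.

    hypotheses: list of (text, base_score, feature_dict) tuples.
    weights: feature weights learned during training (fixed here).
    """
    def second_pass_score(hyp):
        text, base_score, feats = hyp
        return base_score + sum(weights.get(f, 0.0) * v for f, v in feats.items())
    return sorted(hypotheses, key=second_pass_score, reverse=True)

nbest = [
    ("wreck a nice beach", 1.5, {"lm_score": -0.8, "length": 4}),
    ("recognize speech", 1.2, {"lm_score": 0.3, "length": 2}),
]
weights = {"lm_score": 1.0, "length": 0.05}

# The second model's information flips the original order.
print(rerank(nbest, weights)[0][0])  # recognize speech
```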

An interesting software package, and you will pick up some coding experience as well.

June 12, 2012

Predicting link directions via a recursive subgraph-based ranking

Filed under: Graphs,Prediction,Ranking,Subgraphs — Patrick Durusau @ 1:13 pm

Predicting link directions via a recursive subgraph-based ranking by Fangjian Guo, Zimo Yang, and Tao Zhou.

Abstract:

Link directions are essential to the functionality of networks and their prediction is helpful towards a better knowledge of directed networks from incomplete real-world data. We study the problem of predicting the directions of some links by using the existence and directions of the rest of links. We propose a solution by first ranking nodes in a specific order and then predicting each link as stemming from a lower-ranked node towards a higher-ranked one. The proposed ranking method works recursively by utilizing local indicators on multiple scales, each corresponding to a subgraph extracted from the original network. Experiments on real networks show that the directions of a substantial fraction of links can be correctly recovered by our method, which outperforms either purely local or global methods.
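The prediction rule itself is simple enough to illustrate in a few lines of Python (the scores below are invented; the paper’s contribution is the recursive, subgraph-based way of producing the ranking):

```python
# Stand-in ranking scores; higher means more "important."
rank = {"a": 0.1, "b": 0.5, "c": 0.9}

def predict_direction(u, v):
    # Predict each undirected link as pointing from the lower-ranked
    # node toward the higher-ranked one.
    return (u, v) if rank[u] < rank[v] else (v, u)

for edge in [("a", "b"), ("c", "a"), ("b", "c")]:
    print(edge, "->", predict_direction(*edge))
```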

This paper focuses mostly on predicting the direction of links, relying on other research for the question of link existence.

I mention it because predicting links and their directions will be important for planning graph database deployments in particular.

It will be a little late to discover, under full load, that other modeling choices should have been made. (It is usually under “full load” conditions that retrospectives on modeling choices come up.)

September 14, 2011

Flexible ranking in Lucene 4

Filed under: Lucene,Ranking — Patrick Durusau @ 7:01 pm

Flexible ranking in Lucene 4

Robert Muir writes:

Over the summer I served as a Google Summer of Code mentor for David Nemeskey, PhD student at Eötvös Loránd University. David proposed to improve Lucene’s scoring architecture and implement some state-of-the-art ranking models with the new framework.

These improvements are now committed to Lucene’s trunk: you can use these models in tandem with all of Lucene’s features (boosts, slops, explanations, etc) and queries (term, phrase, spans, etc). A JIRA issue has been created to make it easy to use these models from Solr’s schema.xml.

Relevance ranking is the heart of the search engine, and I hope the additional models and flexibility will improve the user experience for Lucene: whether you’ve been frustrated with tuning TF/IDF weights and find an alternative model works better for your case, found it difficult to integrate custom logic that your application needs, or just want to experiment.
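BM25 is among the best-known alternatives to TF/IDF in that batch of similarities. A minimal Python transcription of the textbook formula (simplified; Lucene’s implementation differs in its details) shows one behavior you gain, term-frequency saturation:

```python
import math

def bm25_term_score(tf, df, n_docs, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """Textbook BM25 contribution of a single term to a document's score."""
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    length_norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1) / (tf + length_norm)

# The tenth occurrence of a term adds far less than the first, unlike
# a raw term-frequency weight.
for tf in (1, 2, 5, 10):
    print(tf, round(bm25_term_score(tf, df=100, n_docs=10000,
                                    doc_len=120, avg_doc_len=150), 3))
```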

The wiki page for this project has a pointer to the search engine covered in A “Terrier” For Your Tool Box?

I count a baker’s dozen or so new features described in this post.

December 27, 2010

A weighted similarity measure for non-conjoint rankings – Post

Filed under: Probabilistic Models,Rank Correlation,Ranking — Patrick Durusau @ 3:56 pm

A weighted similarity measure for non-conjoint rankings updates the recently published A Similarity Measure for Indefinite Rankings and provides an implementation.

Abstract from the article:

Ranked lists are encountered in research and daily life, and it is often of interest to compare these lists, even when they are incomplete or have only some members in common. An example is document rankings returned for the same query by different search engines. A measure of the similarity between incomplete rankings should handle non-conjointness, weight high ranks more heavily than low, and be monotonic with increasing depth of evaluation; but no measure satisfying all these criteria currently exists. In this article, we propose a new measure having these qualities, namely rank-biased overlap (RBO). The RBO measure is based on a simple probabilistic user model. It provides monotonicity by calculating, at a given depth of evaluation, a base score that is non-decreasing with additional evaluation, and a maximum score that is non-increasing. An extrapolated score can be calculated between these bounds if a point estimate is required. RBO has a parameter which determines the strength of the weighting to top ranks. We extend RBO to handle tied ranks and rankings of different lengths. Finally, we give examples of the use of the measure in comparing the results produced by public search engines, and in assessing retrieval systems in the laboratory.
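Extrapolated RBO is compact enough to sketch in Python. This version is simplified relative to the article: equal-depth lists of distinct items, with no tie handling:

```python
def rbo_ext(s, t, p=0.9):
    """Extrapolated rank-biased overlap of two equal-length rankings."""
    k = len(s)
    seen_s, seen_t = set(), set()
    overlap = 0
    weighted_sum = 0.0
    for d in range(1, k + 1):
        seen_s.add(s[d - 1])
        seen_t.add(t[d - 1])
        overlap = len(seen_s & seen_t)          # agreement at depth d
        weighted_sum += (overlap / d) * p ** d  # top ranks weigh most
    # Extrapolate by assuming the final agreement holds beyond depth k.
    return (overlap / k) * p ** k + (1 - p) / p * weighted_sum

print(rbo_ext(list("abcde"), list("abced")))  # close to 1: near agreement
print(rbo_ext(list("abcde"), list("vwxyz")))  # 0.0: nothing in common
```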

There are also some comments about journal publishing delays that should serve as fair warning to other authors.
