Elasticsearch ‘Learning to Rank’ Released, Bringing Open Source AI to Search Teams
From the post:
Search experts at OpenSource Connections, the Wikimedia Foundation, and Snagajob deliver open source cognitive search capabilities to the Elasticsearch community. The open source Learning to Rank plugin allows organizations to control search relevance ranking with machine learning. The plugin is currently delivering search results at Wikipedia and Snagajob, providing significant search quality improvements over legacy solutions.
Learning to Rank lets organizations:
- Directly optimize sales, conversions and user satisfaction in search
- Personalize search for users
- Drive deeper insights from a knowledge base
- Customize ranking for complex nuance
- Avoid the sticker shock & lock-in of a proprietary "cognitive search" product
“Our mission is to empower search teams. This plugin gives teams deep control of ranking, allowing machine learning models to be directly deployed to the search engine for relevance ranking,” said Doug Turnbull, author of Relevant Search and CTO of OpenSource Connections.
…
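The “directly deployed to the search engine” part is the heart of the plugin: you upload a trained model to the cluster and then invoke it as a query. Here is a rough sketch of that round trip against the plugin’s REST API, assuming a local node with the plugin installed and a feature set named movie_features already registered (see the example further down); the model name, weight, and index name are mine, not from the post:

```python
import json
import requests

ES = "http://localhost:9200"  # assumed: local node with the LTR plugin installed

# Upload a trained model against an existing feature set. A linear model is
# the simplest case: one weight per named feature. The feature set name
# "movie_features" and the weight below are illustrative.
model = {
    "model": {
        "name": "my_linear_model",
        "model": {
            "type": "model/linear",
            "definition": json.dumps({"user_rating": 1.0}),
        },
    }
}
requests.post(f"{ES}/_ltr/_featureset/movie_features/_createmodel", json=model)

# Rank search results with the deployed model via the plugin's `sltr` query;
# "movies" is an illustrative index name.
query = {"query": {"sltr": {"params": {}, "model": "my_linear_model"}}}
print(requests.post(f"{ES}/movies/_search", json=query).json())
```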
I need to work through all the documentation and examples, but:
Because some model training libraries refer to features by name, Elasticsearch LTR enforces unique names for each feature. In the example above, we could not add a new user_rating feature without creating an error.
is a warning of what you (and I) are likely to find.
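To see what the docs mean, here is a sketch of that failure mode in Python, assuming a local node at localhost:9200 with the plugin installed; the feature set names and the vote_average field are illustrative:

```python
import requests

ES = "http://localhost:9200"  # assumed: local node with the LTR plugin installed

requests.put(f"{ES}/_ltr")  # initialize the default feature store

# Each feature is a named, templated Elasticsearch query; training tools
# such as RankLib later refer to feature values by these names.
def user_rating_feature():
    return {
        "name": "user_rating",
        "params": [],
        "template_language": "mustache",
        "template": {
            "function_score": {
                "field_value_factor": {"field": "vote_average"}
            }
        },
    }

# First registration succeeds.
requests.post(
    f"{ES}/_ltr/_featureset/movie_features",
    json={"featureset": {"features": [user_rating_feature()]}},
)

# Per the quoted docs (and Doug Turnbull's correction below), feature names
# are unique within a feature store, so re-registering "user_rating" in the
# same store should be rejected with an error.
resp = requests.post(
    f"{ES}/_ltr/_featureset/more_movie_features",
    json={"featureset": {"features": [user_rating_feature()]}},
)
print(resp.status_code, resp.text)
```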
Really? Someone involved in the design thought globally unique feature names were a good idea? Or, at a minimum, didn’t realize they are a very bad idea?
Scope, anyone? In either the programming or the topic map sense?
Despite the unique feature name fail, I’m sure ‘Learning to Rank’ will be useful. But not as useful as it could have been.
Doug Turnbull (https://twitter.com/softwaredoug) advises that features are scoped by feature stores, so the correct prose would read: “…LTR enforces unique names for each feature within a feature store.”
No fail, just bad writing.
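For completeness, here is what that store scoping looks like, again as a hedged sketch against the plugin’s REST API (the store names echo the two production users mentioned above; everything else is illustrative):

```python
import requests

ES = "http://localhost:9200"  # assumed: local node with the LTR plugin installed

feature = {
    "name": "user_rating",  # same name in both stores, no conflict
    "params": [],
    "template_language": "mustache",
    "template": {
        "function_score": {"field_value_factor": {"field": "vote_average"}}
    },
}

for store in ("wikipedia", "snagajob"):  # illustrative named feature stores
    requests.put(f"{ES}/_ltr/{store}")  # create the named feature store
    requests.post(
        f"{ES}/_ltr/{store}/_featureset/baseline",
        json={"featureset": {"features": [feature]}},
    )

# Each store is its own namespace: "user_rating" above names two different
# features, one per store, just as the corrected prose describes.
```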