Auralist: introducing serendipity into music recommendation
Abstract:
Recommendation systems exist to help users discover content in a large body of items. An ideal recommendation system should mimic the actions of a trusted friend or expert, producing a personalised collection of recommendations that balance between the desired goals of accuracy, diversity, novelty and serendipity. We introduce the Auralist recommendation framework, a system that – in contrast to previous work – attempts to balance and improve all four factors simultaneously. Using a collection of novel algorithms inspired by principles of “serendipitous discovery”, we demonstrate a method of successfully injecting serendipity, novelty and diversity into recommendations whilst limiting the impact on accuracy. We evaluate Auralist quantitatively over a broad set of metrics and, with a user study on music recommendation, show that Auralist’s emphasis on serendipity indeed improves user satisfaction.
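For readers wondering how four competing goals can be balanced at all: the paper’s hybridisation is ranking-based, blending an accuracy-oriented ranker with serendipity-oriented ones. A minimal sketch of weighted rank aggregation in that spirit (the ranker names and weights below are my own illustration, not Auralist’s actual components):

```python
# Minimal sketch: blend several rankers by a weighted sum of rank positions.
# The rankers and weights are hypothetical stand-ins; Auralist's actual
# components would slot in where the lambdas are.

def hybrid_rank(items, rankers, weights):
    """Combine several rankings of `items` into a single ranking.

    rankers: functions mapping the item list to a ranking of those
             items (best first).
    weights: one non-negative weight per ranker; a larger weight gives
             that ranker's ordering more influence.
    """
    combined = {item: 0.0 for item in items}
    for rank_fn, weight in zip(rankers, weights):
        for position, item in enumerate(rank_fn(items)):
            combined[item] += weight * position  # lower total = ranked higher
    return sorted(items, key=combined.get)


if __name__ == "__main__":
    # Example: 80% accuracy, 20% surprise (both rankers are toy stand-ins).
    by_accuracy = lambda items: sorted(items)
    by_surprise = lambda items: sorted(items, reverse=True)
    print(hybrid_rank(["a", "b", "c"], [by_accuracy, by_surprise], [0.8, 0.2]))
```

The weights are then the knob for trading accuracy against surprise.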
A deeply interesting article for anyone concerned with recommendation systems and their improvement.
It is research that should go forward, but I have some concerns about the article:
1) I am not convinced by the definition of “serendipity”:
Serendipity represents the “unusualness” or “surprise” of recommendations. Unlike novelty, serendipity encompasses the semantic content of items, and can be imagined as the distance between recommended items and their expected contents. A recommendation of John Lennon to listeners of The Beatles may well be accurate and novel, but hardly constitutes an original or surprising recommendation. A serendipitous system will challenge users to expand their tastes and hopefully provide more interesting recommendations, qualities that can help improve recommendation satisfaction [23].
Or perhaps I am “hearing” it in the context of discovery, such as searching for Smokestack Lightning and finding not the Yardbirds but Howlin’ Wolf as the performer. Serendipity in that sense carries no sense of “challenge.”
2) A survey of 21 participants, mostly students, is better than experimenters asking each other for feedback, but only just. A social sciences department should be able to advise on test protocols and procedures.
3) There was no demonstration that “user satisfaction,” the quantity actually measured, is the same thing as “serendipity.” I am not entirely sure that “serendipity” can even be discussed other than by example, let alone measured.
Take my Howlin’ Wolf example. How close or far is the “serendipity” there compared with an instance of “serendipity” as offered by Auralist? Unless and until we can establish a metric, at least a loose one, it is hard to say which one has more “serendipity.”
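For what it is worth, the paper does offer a loose metric in this direction: an “unserendipity” score, the average similarity between the items a user already knows and the items recommended, with lower values meaning more surprising. A minimal sketch, assuming items are represented as sparse feature vectors (the representation is my assumption; the paper builds its own from listening histories):

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts: feature -> weight)."""
    dot = sum(u[f] * v[f] for f in set(u) & set(v))
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def unserendipity(history, recommended):
    """Mean pairwise similarity between history items and recommended items.

    High scores flag "safe" picks (John Lennon for a Beatles listener);
    low scores flag surprising ones (Howlin' Wolf for a Yardbirds listener).
    """
    pairs = [(h, r) for h in history for r in recommended]
    return sum(cosine(h, r) for h, r in pairs) / len(pairs) if pairs else 0.0
```

Even a rough score along these lines would let us put my Howlin’ Wolf example and an Auralist recommendation on the same scale, which is exactly what the comparison needs.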