Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

September 14, 2012

RecSys 2012: Beyond Five Stars

Filed under: Conferences,Recommendation — Patrick Durusau @ 2:48 pm

RecSys 2012: Beyond Five Stars by Daniel Tunkelang.

From the post:

I spent the past week in Dublin attending the 6th ACM International Conference on Recommender Systems (RecSys 2012). This young conference has become the premier global forum for discussing the state of the art in recommender systems, and I’m thrilled to have had the opportunity to participate.

Daniel’s review of RecSys 2012 with lots of links and pointers!

It will take you some time to work through all the hyperlinks, so it is a good thing the weekend is upon us!

Enjoy!

September 11, 2012

Context-Aware Recommender Systems 2012 [Identity and Context?]

Filed under: Context,Context-aware,Identity,Recommendation — Patrick Durusau @ 4:33 am

Context-Aware Recommender Systems 2012 (In conjunction with the 6th ACM Conference on Recommender Systems (RecSys 2012))

I usually think of recommender systems as attempts to deliver content based on clues about my interests or context. If I dial 911, the location of the nearest pizza vendor probably isn’t high on my list of interests, etc.

As I looked over these proceedings, it occurred to me that subject identity, for merging purposes, isn’t limited to the context of the subject in question.

That is, some merging tests could depend upon my context as a user.

Take my 911 call for instance. For many purposes, a police substation, fire station, 24-hour medical clinic and a hospital are different subjects.

In a medical emergency situation, for which a 911 call might be a clue, all of those could be treated as a single subject – places for immediate medical attention.
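
A hypothetical sketch of what such context-dependent merging might look like (the rule structure and names here are mine, not from any topic map standard or implementation):

```python
# Hypothetical sketch: subject identity that depends on the user's context.
# Outside an emergency these subjects stay distinct; inside one, a
# context-specific rule merges them into a single subject.

MERGE_RULES = {
    "medical-emergency": [
        ({"police substation", "fire station", "24-hour clinic", "hospital"},
         "place for immediate medical attention"),
    ],
}

def resolve(subject: str, context: str) -> str:
    """Return the subject as seen from the given user context."""
    for group, merged_name in MERGE_RULES.get(context, []):
        if subject in group:
            return merged_name
    return subject  # no context rule applies: subjects remain distinct

print(resolve("hospital", "medical-emergency"))  # place for immediate medical attention
print(resolve("hospital", "restaurant-search"))  # hospital
```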

What other subjects do you think might merge (or not) depending upon your context?

Table of Contents

  1. Optimal Feature Selection for Context-Aware Recommendation Using Differential Relaxation
    Yong Zheng, Robin Burke, Bamshad Mobasher.
  2. Relevant Context in a Movie Recommender System: Users’ Opinion vs. Statistical Detection
    Ante Odic, Marko Tkalcic, Jurij Franc Tasic, Andrej Kosir.
  3. Improving Novelty in Streaming Recommendation Using a Context Model
    Doina Alexandra Dumitrescu, Simone Santini.
  4. Towards a Context-Aware Photo Recommender System
    Fabricio Lemos, Rafael Carmo, Windson Viana, Rossana Andrade.
  5. Context and Intention-Awareness in POIs Recommender Systems
    Hernani Costa, Barbara Furtado, Durval Pires, Luis Macedo, F. Amilcar Cardoso.
  6. Evaluation and User Acceptance Issues of a Bayesian-Classifier-Based TV Recommendation System
    Benedikt Engelbert, Karsten Morisse, Kai-Christoph Hamborg.
  7. From Online Browsing to Offline Purchases: Analyzing Contextual Information in the Retail Business
    Simon Chan, Licia Capra.

July 9, 2012

Recommendations and how to measure the ROI with some metrics?

Filed under: Recommendation — Patrick Durusau @ 7:58 am

Recommendations and how to measure the ROI with some metrics?

From the post:

We have talked a lot about recommender systems, especially discussing the techniques and algorithms used to build and evaluate those systems. But let’s discuss now how a social network or an on-line store can measure, in quantitative terms, the return on investment (ROI) of a given recommendation.

The metrics used in recommender systems

We talk a lot about F1-measure, Accuracy, Precision, Recall and AUC, buzzwords widely known to machine learning researchers and data mining specialists. But do you know what CTR, LOC, CER or TPR are? Let’s explain more about these metrics and how they can evaluate the quantitative benefits of a given recommendation.

Would you feel more comfortable if I said identification instead of recommendation?

Consider it done.

After all, a “recommendation” is some actor making a statement about an identified subject. Run of the mill stuff for a topic map.

The ROI question is whether there is some benefit to that statement + identification.

Assuming you are using a topic map or similar means to track the source of a recommendation, you could begin to attach ROI to particular sources of recommendations.
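
As a toy illustration of attaching quantitative measures to recommendation sources, here is a minimal sketch (the event log and field names are invented) that computes click-through rate and revenue per impression for each source:

```python
from collections import defaultdict

# Invented event log: (recommendation_source, was_clicked, revenue)
events = [
    ("collaborative-filter", True, 12.0),
    ("collaborative-filter", False, 0.0),
    ("editorial", True, 4.0),
    ("editorial", False, 0.0),
    ("editorial", False, 0.0),
]

stats = defaultdict(lambda: {"shown": 0, "clicked": 0, "revenue": 0.0})
for source, clicked, revenue in events:
    s = stats[source]
    s["shown"] += 1
    s["clicked"] += clicked
    s["revenue"] += revenue

for source, s in stats.items():
    ctr = s["clicked"] / s["shown"]  # click-through rate
    rpi = s["revenue"] / s["shown"]  # revenue per impression
    print(f"{source}: CTR={ctr:.2f}, revenue/impression={rpi:.2f}")
```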

June 13, 2012

SeRSy 2012

Filed under: Conferences,Recommendation,Semantic Web — Patrick Durusau @ 2:21 pm

SeRSy 2012: International Workshop on Semantic Technologies meet Recommender Systems & Big Data

Important Dates:

Submission of papers: July 31, 2012
Notification of acceptance: August 21, 2012
Camera-ready versions: September 10, 2012

[In connection with the 11th International Semantic Web Conference, Boston, USA, November 11-15, 2012.]

The scope statement:

People generally need more and more advanced tools that go beyond those implementing the canonical search paradigm for seeking relevant information. A new search paradigm is emerging, where the user perspective is completely reversed: from finding to being found. Recommender Systems may help to support this new perspective, because they have the effect of pushing relevant objects, selected from a large space of possible options, to potentially interested users. To achieve this result, recommendation techniques generally rely on data referring to three kinds of objects: users, items and their relations.

Recent developments of the Semantic Web community offer novel strategies to represent data about users, items and their relations that might improve the current state of the art of recommender systems, in order to move towards a new generation of recommender systems which fully understand the items they deal with.

More and more semantic data are published following the Linked Data principles, which make it possible to set up links between objects in different data sources and connect information in a single global data space: the Web of Data. Today, the Web of Data includes different types of knowledge represented in a homogeneous form: sedimentary knowledge (encyclopedic, cultural, linguistic, common-sense) and real-time knowledge (news, data streams, …). This data might be useful to interlink diverse information about users, items, and their relations and implement reasoning mechanisms that can support and improve the recommendation process.

The challenge is to investigate whether and how this large amount of wide-coverage and linked semantic knowledge can be automatically introduced into systems that perform tasks requiring human-level intelligence. Examples of such tasks include understanding a health problem in order to make a medical decision, or simply deciding which laptop to buy. Recommender systems support users exactly in those complex tasks.

The primary goal of the workshop is to showcase cutting edge research on the intersection of Semantic Technologies and Recommender Systems, by taking the best of the two worlds. This combination may provide the Semantic Web community with important real-world scenarios where its potential can be effectively exploited into systems performing complex tasks.

It should be interesting to see whether the semantic technologies, the recommender systems, or both get the “rough” or inexact edges.

June 11, 2012

Neo4j in the Trenches [webinar – Thursday June 14 10:00 PDT / 19:00 CEST]

Filed under: Graphs,Neo4j,Recommendation — Patrick Durusau @ 4:23 pm

Neo4j in the Trenches

Thursday June 14 10:00 PDT / 19:00 CEST

From the webpage:

OpenCredo discusses Opigram: a social recommendation engine

In this webinar, Nicki Watt of OpenCredo presents the lessons learned (and being learned) on an active Neo4j project: Opigram. Opigram is a socially oriented recommendation engine which is already live, with some 150k users and growing. The webinar will cover Neo4j usage, challenges encountered, and solutions to these challenges.

I was curious enough to run down the homepage for OpenCredo.

Now there is an interesting homepage!

The blog post titles promise some interesting reading.

I will report back as I find items of interest.

April 26, 2012

Simple tools for building a recommendation engine

Filed under: Dataset,R,Recommendation — Patrick Durusau @ 6:31 pm

Simple tools for building a recommendation engine by Joseph Rickert.

From the post:

Revolution’s resident economist, Saar Golde, is very fond of saying that “90% of what you might want from a recommendation engine can be achieved with simple techniques”. To illustrate this point (without doing a lot of work), we downloaded the million row movie dataset from www.grouplens.org with the idea of just taking the first obvious exploratory step: finding the good movies. Three zipped up .dat files comprise this data set. The first file, ratings.dat, contains 1,000,209 records of UserID, MovieID, Rating, and Timestamp for 6,040 users rating 3,952 movies. Ratings are whole numbers on a 1 to 5 scale. The second file, users.dat, contains the UserID, Gender, Age, Occupation and Zip-code for each user. The third file, movies.dat, contains the MovieID, Title and Genre associated with each movie.
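
For the curious, that same first exploratory step can be sketched in Python with pandas (the post itself uses R; this assumes the “::”-separated MovieLens 1M files described above):

```python
import pandas as pd

# ratings.dat: UserID::MovieID::Rating::Timestamp (MovieLens 1M layout)
ratings = pd.read_csv("ratings.dat", sep="::", engine="python",
                      names=["UserID", "MovieID", "Rating", "Timestamp"])
movies = pd.read_csv("movies.dat", sep="::", engine="python",
                     names=["MovieID", "Title", "Genre"], encoding="latin-1")

# "Finding the good movies": mean rating, ignoring rarely rated titles.
stats = ratings.groupby("MovieID")["Rating"].agg(["mean", "count"])
good = stats[stats["count"] >= 100].nlargest(10, "mean")
print(good.join(movies.set_index("MovieID")["Title"]))
```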

I am curious: if a topic map engine performed 90% of the possible merges in a topic map, would that be enough?

Would your answer differ if the topic map had less than 10,000 topics and associations versus a topic map with 100 million topics and associations?

Would your answer differ based on a timeline of the data? Say the older the data, the less reliable the merging: recent medical data (up to ten years old), < 1% error rate; ten to twenty years old, <= 10% error rate; more than twenty years old, best efforts. Which of course raises the question of how you would test for conformance to such requirements.

April 25, 2012

A long and winding road (….introducing serendipity into music recommendation)

Filed under: Music,Recommendation,Serendipity — Patrick Durusau @ 6:26 pm

Auralist: introducing serendipity into music recommendation

Abstract:

Recommendation systems exist to help users discover content in a large body of items. An ideal recommendation system should mimic the actions of a trusted friend or expert, producing a personalised collection of recommendations that balance between the desired goals of accuracy, diversity, novelty and serendipity. We introduce the Auralist recommendation framework, a system that – in contrast to previous work – attempts to balance and improve all four factors simultaneously. Using a collection of novel algorithms inspired by principles of “serendipitous discovery”, we demonstrate a method of successfully injecting serendipity, novelty and diversity into recommendations whilst limiting the impact on accuracy. We evaluate Auralist quantitatively over a broad set of metrics and, with a user study on music recommendation, show that Auralist’s emphasis on serendipity indeed improves user satisfaction.

A deeply interesting article for anyone interested in recommendation systems and the improvement thereof.

It is research that should go forward but among my concerns about the article:

1) I am not convinced of the definition of “serendipity:”

Serendipity represents the “unusualness” or “surprise” of recommendations. Unlike novelty, serendipity encompasses the semantic content of items, and can be imagined as the distance between recommended items and their expected contents. A recommendation of John Lennon to listeners of The Beatles may well be accurate and novel, but hardly constitutes an original or surprising recommendation. A serendipitous system will challenge users to expand their tastes and hopefully provide more interesting recommendations, qualities that can help improve recommendation satisfaction [23]

Or perhaps I am “hearing” it in the context of discovery, such as searching for “Smokestack Lightning” and finding not the Yardbirds but Howlin’ Wolf as the performer. Serendipity in that sense carries no sense of “challenge.”

2) A survey of 21 participants, mostly students, is better than experimenters asking each other for feedback but only just. The social sciences department should be able to advise on test protocols and procedures.

3) There was no showing that “user satisfaction,” the item to be measured, is the same thing as “serendipity.” I am not entirely sure that other than by example, “serendipity” can even be discussed, let alone measured.

Take my Howlin’ Wolf example. How close or far away is the “serendipity” there versus an instance of “serendipity” as offered by Auralist? Unless and until we can establish a metric, at least a loose one, it is hard to say which one has more “serendipity.”
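
To make that complaint concrete, here is one loose, entirely illustrative way such a metric could be defined (nothing like this appears in the Auralist paper): treat “unexpectedness” as the distance between a recommended item and the centroid of the user’s listening history.

```python
import math

def centroid(vectors):
    """Mean of the feature vectors of items the user already knows."""
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]

def unexpectedness(item, history):
    """Euclidean distance from the user's taste centroid: one crude
    stand-in for 'serendipity'. Two systems could then be compared by
    the unexpectedness of recommendations users actually liked."""
    return math.dist(item, centroid(history))

history = [[0.9, 0.1], [0.8, 0.2]]            # e.g., Beatles-like features
print(unexpectedness([0.85, 0.15], history))  # John Lennon: low surprise
print(unexpectedness([0.1, 0.9], history))    # Howlin' Wolf: high surprise
```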

LAILAPS

LAILAPS

From the website:

LAILAPS combines a keyword-driven search engine for integrative access to life science databases, machine learning for content-driven relevance ranking, recommender systems for suggesting related data records and query refinements, and a user feedback tracking system for self-learning relevance training.

Features:

  • ultra fast keyword based search
  • non-static relevance ranking
  • user specific relevance profiles
  • suggestion of related entries
  • suggestion of related query terms
  • self learning by user tracking
  • deployable at standard desktop PC
  • 100% JAVA
  • installer for in-house deployment

I like the idea of a recommender system that “suggests” related data records and query refinements. It could be wrong.

I am as guilty as anyone of thinking in terms of “correct” recommendations that always lead to relevant data.

That is applying “crisp” set thinking to what is obviously a “rough” set situation. We as readers have to sort out the items in the “rough” set and construct for ourselves, a temporary and fleeting “crisp” set for some particular purpose.
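
A small sketch of that “rough set” framing, with invented scores and arbitrary thresholds: the system hands back a lower approximation (certainly relevant) and a boundary region (possibly relevant), and the reader carves out their own crisp set.

```python
# Illustrative only: rough-set style presentation of search results.
# Items scored by an (assumed) relevance ranker in [0, 1].
results = {"rec-1": 0.95, "rec-2": 0.81, "rec-3": 0.55, "rec-4": 0.20}

CERTAIN, POSSIBLE = 0.8, 0.4  # thresholds are arbitrary choices

lower = {r for r, s in results.items() if s >= CERTAIN}                # certainly relevant
boundary = {r for r, s in results.items() if POSSIBLE <= s < CERTAIN}  # maybe relevant

print("lower approximation:", lower)  # the safe, crisp core
print("boundary region:", boundary)   # where the reader must decide
```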

If you are using LAILAPS, I would appreciate a note about your experiences and impressions.

April 13, 2012

Neo4J Tales from the Trenches: A Recommendation Engine Case Study

Filed under: Neo4j,Recommendation — Patrick Durusau @ 4:43 pm

Neo4J Tales from the Trenches: A Recommendation Engine Case Study

25 April 2012, at 18:30 (“Oh to be in London,” he wished. Not for the last time.)

From the post:

In this talk for the Neo4j User Group, Nicki Watt and Michal Bachman present the lessons learned (and being learned) on an active Neo4J project – Opigram.

Opigram is a socially orientated recommendation engine which is already live, with some 150k users and growing. Nicki and Michal will outline their usage of Neo4j, and some of the challenges they have encountered, as well as the approaches and implications taken to address them.

Sounds like a good introduction to Neo4j in the context of an actual project.

February 25, 2012

FrostyMug – Beer Rating/Recommendation Service

Filed under: Contest,Heroku,Neo4j,Recommendation — Patrick Durusau @ 7:39 pm

Similarity-based Recommendation Engines by Josh Adell.

From the post:

I am currently participating in the Neo4j-Heroku Challenge. My entry is a — as yet, unfinished — beer rating and recommendation service called FrostyMug. All the major functionality is complete, except for the actual recommendations, which I am currently working on. I wanted to share some of my thoughts and methods for building the recommendation engine.

I hear “similarity” as a measure of subject identity: beers recommended to X; movies enjoyed by Y users, even though those are group subjects.

Or perhaps better, as a possible means of subject identification. A person could list all the movies they have enjoyed and that list could be the same as a recommendation list. Same subject, just a different method of identification. (Unless the means of subject identification has an impact on the subject you think is being identified.)
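
A minimal sketch of the similarity idea Josh describes (cosine similarity over sparse rating vectors; the data is assumed and this is not his code):

```python
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse {item: rating} vectors."""
    shared = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

me = {"stout-a": 5, "ipa-b": 2, "lager-c": 4}
you = {"stout-a": 4, "lager-c": 5, "porter-d": 5}
print(cosine(me, you))  # high similarity suggests recommending porter-d to me
```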

January 10, 2012

Proceedings…Information Heterogeneity and Fusion in Recommender Systems

Filed under: Conferences,Heterogeneous Data,Recommendation — Patrick Durusau @ 8:05 pm

Proceedings of the 2nd International Workshop on Information Heterogeneity and Fusion in Recommender Systems

I am still working on the proceedings of the main conference but thought these might be of interest:

  • Information market based recommender systems fusion
    Efthimios Bothos, Konstantinos Christidis, Dimitris Apostolou, Gregoris Mentzas
    Pages: 1-8
    doi>10.1145/2039320.2039321
  • A kernel-based approach to exploiting interaction-networks in heterogeneous information sources for improved recommender systems
    Oluwasanmi Koyejo, Joydeep Ghosh
    Pages: 9-16
    doi>10.1145/2039320.2039322
  • Learning multiple models for exploiting predictive heterogeneity in recommender systems
    Clinton Jones, Joydeep Ghosh, Aayush Sharma
    Pages: 17-24
    doi>10.1145/2039320.2039323
  • A generic semantic-based framework for cross-domain recommendation
    Ignacio Fernández-Tobías, Iván Cantador, Marius Kaminskas, Francesco Ricci
    Pages: 25-32
    doi>10.1145/2039320.2039324
  • Hybrid algorithms for recommending new items
    Paolo Cremonesi, Roberto Turrin, Fabio Airoldi
    Pages: 33-40
    doi>10.1145/2039320.2039325
  • Expert recommendation based on social drivers, social network analysis, and semantic data representation
    Maryam Fazel-Zarandi, Hugh J. Devlin, Yun Huang, Noshir Contractor
    Pages: 41-48
    doi>10.1145/2039320.2039326
  • Experience Discovery: hybrid recommendation of student activities using social network data
    Robin Burke, Yong Zheng, Scott Riley
    Pages: 49-52
    doi>10.1145/2039320.2039327
  • Personalizing tags: a folksonomy-like approach for recommending movies
    Alan Said, Benjamin Kille, Ernesto W. De Luca, Sahin Albayrak
    Pages: 53-56
    doi>10.1145/2039320.2039328
  • Personalized pricing recommender system: multi-stage epsilon-greedy approach
    Toshihiro Kamishima, Shotaro Akaho
    Pages: 57-64
    doi>10.1145/2039320.2039329
  • Matrix co-factorization for recommendation with rich side information and implicit feedback
    Yi Fang, Luo Si
    Pages: 65-69
    doi>10.1145/2039320.2039330

December 13, 2011

ACM RecSys 2011 Workshop on Novelty and Diversity in Recommender Systems

Filed under: Diversity,Novelty,Recommendation — Patrick Durusau @ 9:55 pm

DiveRS 2011 – ACM RecSys 2011 Workshop on Novelty and Diversity in Recommender Systems

From the conference page:

Most research and development efforts in the Recommender Systems field have been focused on accuracy in predicting and matching user interests. However, there is a growing realization that there is more than accuracy to the practical effectiveness and added value of recommendation. In particular, novelty and diversity have been identified as key dimensions of recommendation utility in real scenarios, and a fundamental research direction to keep making progress in the field.

Novelty is indeed essential to recommendation: in many, if not most, scenarios, the whole point of recommendation is inherently linked to a notion of discovery, as recommendation makes most sense when it exposes the user to a relevant experience that she would not have found, or thought of, by herself; obvious, however accurate, recommendations are generally of little use.

A varied recommendation also provides, in itself, a richer user experience. Given the inherent uncertainty in user interest prediction (since it is based on implicit, incomplete evidence of interests, which are moreover subject to change), avoiding too narrow an array of choices is generally a good approach to enhance the chances that the user is pleased by at least some recommended item. Sales diversity may enhance businesses as well, leveraging revenues from market niches.

It is easy to increase novelty and diversity by giving up on accuracy; the challenge is to enhance these aspects while still achieving a fair match of the user’s interests. The goal is thus generally to enhance the balance in this trade-off, rather than just a diversity or novelty increase.

DiveRS 2011 aims to gather researchers and practitioners interested in the role of novelty and diversity in recommender systems. The workshop seeks to advance towards a better understanding of what novelty and diversity are, and how they can improve the effectiveness of recommendation methods and the utility of their outputs. We aim to identify open problems, relevant research directions, and opportunities for innovation in the recommendation business. The workshop seeks to stir further interest in these topics in the community, and to stimulate research and progress in this area.

The abstract from “Fusion-based Recommender System for Improving Serendipity” by Kenta Oku and Fumio Hattori reads:

Recent work has focused on new measures that are beyond the accuracy of recommender systems. Serendipity, which is one of these measures, is defined as a measure that indicates how the recommender system can find unexpected and useful items for users. In this paper, we propose a Fusion-based Recommender System that aims to improve the serendipity of recommender systems. The system is based on the novel notion of finding new items that have the mixed features of two user-input items, produced by mixing the two items together. The system consists of item-fusion methods and scoring methods. The item-fusion methods generate a recommendation list based on the mixed features of two user-input items. Scoring methods are used to rank the recommendation list. This paper describes these methods and gives experimental results.
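
The core notion, stripped to a sketch (my own simplification, not Oku and Hattori’s actual item-fusion or scoring methods): mix the feature vectors of two seed items and rank the catalogue by closeness to the blend.

```python
import math

def fuse(a, b):
    """Midpoint of two items' feature vectors: the 'mixed' target."""
    return [(x + y) / 2 for x, y in zip(a, b)]

def recommend(seed1, seed2, catalogue, k=3):
    target = fuse(seed1, seed2)
    # Score every candidate by distance to the fused features.
    ranked = sorted(catalogue.items(), key=lambda kv: math.dist(kv[1], target))
    return [name for name, _ in ranked[:k]]

catalogue = {"item-x": [0.2, 0.8], "item-y": [0.5, 0.5], "item-z": [0.9, 0.1]}
print(recommend([0.1, 0.9], [0.9, 0.1], catalogue))  # item-y sits nearest the blend
```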

Interested yet? 😉

October 6, 2011

VII PythonBrasil

Filed under: Python,Recommendation — Patrick Durusau @ 5:30 pm

VII PythonBrasil

Marcel Caraciolo covers his slides from keynotes at VII PythonBrasil; the most interesting for topic mappers would be Crab – A Python Framework for Building Recommender Systems.

Recommender systems by necessity have to identify the interests of a user (2 subjects, interests and user), match those to other interests (another subject) and then produce a recommendation (yet another subject), plus relationship subjects if you are interested. Recommender systems are already identifying all those subjects and gathering instances of them together.

What would you do to make their constantly changing interim results available to other systems? One possibility is sketched below.
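
One hedged answer (the structure and identifiers here are invented for illustration): serialize the recommender’s interim subjects in some neutral form that other systems, a topic map engine included, could merge on.

```python
import json

# Invented structure: each identified subject gets an explicit identifier,
# so another system can merge on it rather than re-deriving it from the
# recommender's internals.
snapshot = {
    "user": {"id": "user-42"},
    "interests": [{"id": "genre/jazz", "weight": 0.8}],
    "recommendations": [
        {"item": "album-17", "because": ["genre/jazz"], "score": 0.91},
    ],
}
print(json.dumps(snapshot, indent=2))
```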

October 4, 2011

VinWiki Part 1: Building an intelligent Web app using Seam, Hibernate, RichFaces, Lucene and Mahout

Filed under: Lucene,Mahout,Recommendation — Patrick Durusau @ 7:57 pm

VinWiki Part 1: Building an intelligent Web app using Seam, Hibernate, RichFaces, Lucene and Mahout

From the webpage:

This is the first post in a four part series about a wine rating and recommendation Web application, named VinWiki, built using open source technology. The purpose of this series is to document key design and implementation decisions, which may be of interest to anyone wanting to build an intelligent Web application using Java technologies. The end result will not be a 100% functioning Web application, but will have enough functionality to prove the concepts.

I thought about Lars Marius and his expertise at beer evaluation when I saw this series. Not that Lars would need it, but it looks like the sort of thing you could build to recommend things you know something about, and like. Whatever that may be. 😉

October 3, 2011

Algorithms of the Intelligent Web Review

Algorithms of the Intelligent Web Review by Pearlene McKinley

From the post:

I have always had an interest in AI, machine learning, and data mining but I found the introductory books too mathematical and focused mostly on solving academic problems rather than real-world industrial problems. So, I was curious to see what this book was about.

I read the book front-to-back (twice!) before writing this report. I started reading the electronic version a couple of months ago and read the paper print again over the weekend. This is the best practical book in machine learning that you can buy today — period. All the examples are written in Java and all algorithms are explained in plain English. The writing style is superb! The book was written by one author (Marmanis) while the other (Babenko) contributed the source code, so there are no gaps in the narrative; it is engaging, pleasant, and fluent. The author leads the reader from the very introductory concepts to some fairly advanced topics. Some of the topics are covered in the book and some are left as an exercise at the end of each chapter (there is a “To Do” section, which was a wonderful idea!). I did not like some of the figures (they were probably made by the authors, not an artist) but this was only a minor aesthetic inconvenience.

The book covers four cornerstones of machine learning and intelligence, i.e. intelligent search, recommendations, clustering, and classification. It also covers a subject that today you can find only in the academic literature, i.e. combination techniques. Combination techniques are very powerful and although the author presents the techniques in the context of classifiers, it is clear that the same can be done for recommendations — as the BellKor team did for the Netflix Prize.
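
As a toy example of the combination idea the reviewer mentions (a plain weighted blend of two recommenders’ scores; the Netflix-prize ensembles were far more elaborate):

```python
def blend(scores_a, scores_b, w=0.6):
    """Weighted combination of two recommenders' scores per item."""
    items = set(scores_a) | set(scores_b)
    return {i: w * scores_a.get(i, 0.0) + (1 - w) * scores_b.get(i, 0.0)
            for i in items}

cf = {"item-1": 0.9, "item-2": 0.4}       # collaborative filter scores
content = {"item-2": 0.8, "item-3": 0.7}  # content-based scores
print(blend(cf, content))
```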

I wonder if this will be useful in the Stanford AI course that starts next week with more than 130,000 students: Introduction to Artificial Intelligence – Stanford Class.

I am going to order a copy, if for no other reason than to evaluate the reviewer’s claim of explanations “in plain English.” I have seen some fairly clever explanations of AI algorithms and would like to see how these stack up.

September 24, 2011

Recommendation Engine

Filed under: Recommendation — Patrick Durusau @ 6:57 pm

Recommendation Engine by Ricky Ho.

From the post:

In a classical model of recommendation system, there are “users” and “items”. Users have associated metadata (or content) such as age, gender, race and other demographic information. Items also have their metadata, such as text description, price, weight … etc. On top of that, there are interactions (or transactions) between users and items, such as userA downloads/purchases movieB, userX gives a rating of 5 to productY … etc.

Ricky does a good job of stepping through the different approaches to making recommendations. That is important for topic map interfaces that recommend additional topics to their users.
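
The classical model Ricky describes, as a bare-bones sketch (the field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    age: int          # demographic metadata, per the classical model
    gender: str

@dataclass
class Item:
    item_id: str
    description: str  # content metadata
    price: float

@dataclass
class Interaction:
    user_id: str
    item_id: str
    kind: str           # "purchase", "download", "rating", ...
    value: float = 0.0  # e.g., the rating given

# userX gives a rating of 5 to productY:
events = [Interaction("userX", "productY", "rating", 5.0)]
print(events[0])
```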

September 22, 2011

A Graph-Based Movie Recommender Engine

Filed under: Graphs,Gremlin,Neo4j,Recommendation — Patrick Durusau @ 6:32 pm

A Graph-Based Movie Recommender Engine by Marko A. Rodriguez.

From the post:

A recommender engine helps a user find novel and interesting items within a pool of resources. There are numerous types of recommendation algorithms and a graph can serve as a general-purpose substrate for evaluating such algorithms. This post will demonstrate how to build a graph-based movie recommender engine using the publicly available MovieLens dataset, the graph database Neo4j, and the graph traversal language Gremlin. Feel free to follow along in the Gremlin console as the post will go step-by-step from data acquisition, to parsing, and ultimately, to traversing.
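
The basic collaborative-filtering traversal the post builds in Gremlin can be sketched in plain Python over adjacency maps (my own toy data, not the MovieLens import):

```python
from collections import Counter

# ratings[user] = set of movies that user rated highly (toy data)
ratings = {
    "alice": {"toy story", "aliens"},
    "bob": {"toy story", "aliens", "brazil"},
    "carol": {"aliens", "brazil"},
}

def recommend(user):
    """Movies liked by users who liked the same movies, minus those seen."""
    seen = ratings[user]
    scores = Counter()
    for other, theirs in ratings.items():
        if other != user and seen & theirs:  # co-rated at least one movie
            scores.update(theirs - seen)     # count the rest as candidates
    return scores.most_common()

print(recommend("alice"))  # [('brazil', 2)]
```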

As important as graph engines, algorithms and research are at present, and as important as they will become, I think the Neo4j community itself is worthy of direct study. There are stellar contributors to the technology and the community, but is that what makes it such an up and coming community? Or perhaps how they contributed? Answering would take a raft (is that the term for a group of sociologists?) of sociologists, though perhaps there are existing studies of online communities that might have some clues. I mention that because there are other groups I would like to see duplicate the success of the Neo4j community.

Marko takes you from data import to a useful (albeit limited) application in less than 2500 words (measured to the end of the conclusion, excluding further reading).

And leaves you with suggestions for further exploring.

That is a blog post that promotes a paradigm. (And for anyone who takes offense at that observation, it applies to my efforts as well. There are other ways to promote a paradigm but you have to admit, this is a fairly compelling one.)

Put Marko’s post on your read with evening coffee list.

September 19, 2011

Recommender Systems

Filed under: Recommendation,Similarity — Patrick Durusau @ 7:55 pm

Recommender Systems

This website provides support for “Recommender Systems: An Introduction” and “Recommender Systems Handbook.”

Recommender systems are an important area of research for topic maps because recommendation of necessity involves recognition (or attempted recognition) of subjects similar to an example subject. That recommendation may be captured in relationship to a particular set of user characteristics or it can be used as the basis for identifying a subject.

The site offers pointers to very strong teaching materials (as of 19 September 2011):

Slides

Tutorials

Courses

If you want to contribute teaching materials, please contact dietmar.jannach (at) udo.edu.

September 11, 2011

New Challenges in Distributed Information Filtering and Retrieval

New Challenges in Distributed Information Filtering and Retrieval

Proceedings of the 5th International Workshop on New Challenges in Distributed Information Filtering and Retrieval
Palermo, Italy, September 17, 2011.

Edited by:

Cristian Lai – CRS4, Loc. Piscina Manna, Building 1 – 09010 Pula (CA), Italy

Giovanni Semeraro – Dept. of Computer Science, University of Bari, Aldo Moro, Via E. Orabona, 4, 70125 Bari, Italy

Eloisa Vargiu – Dept. of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy

Table of Contents:

  1. Experimenting Text Summarization on Multimodal Aggregation
    Giuliano Armano, Alessandro Giuliani, Alberto Messina, Maurizio Montagnuolo, Eloisa Vargiu
  2. From Tags to Emotions: Ontology-driven Sentimental Analysis in the Social Semantic Web
    Matteo Baldoni, Cristina Baroglio, Viviana Patti, Paolo Rena
  3. A Multi-Agent Decision Support System for Dynamic Supply Chain Organization
    Luca Greco, Liliana Lo Presti, Agnese Augello, Giuseppe Lo Re, Marco La Cascia, Salvatore Gaglio
  4. A Formalism for Temporal Annotation and Reasoning of Complex Events in Natural Language
    Francesco Mele, Antonio Sorgente
  5. Interaction Mining: the new Frontier of Call Center Analytics
    Vincenzo Pallotta, Rodolfo Delmonte, Lammert Vrieling, David Walker
  6. Context-Aware Recommender Systems: A Comparison Of Three Approaches
    Umberto Panniello, Michele Gorgoglione
  7. A Multi-Agent System for Information Semantic Sharing
    Agostino Poggi, Michele Tomaiuolo
  8. Temporal characterization of the requests to Wikipedia
    Antonio J. Reinoso, Jesus M. Gonzalez-Barahona, Rocio Muñoz-Mansilla, Israel Herraiz
  9. From Logical Forms to SPARQL Query with GETARUN
    Rocco Tripodi, Rodolfo Delmonte
  10. ImageHunter: a Novel Tool for Relevance Feedback in Content Based Image Retrieval
    Roberto Tronci, Gabriele Murgia, Maurizio Pili, Luca Piras, Giorgio Giacinto

September 2, 2011

Discovering the Impact of Knowledge in Recommender Systems: A Comparative Study

Filed under: Recommendation,Semantics — Patrick Durusau @ 7:59 pm

Discovering the Impact of Knowledge in Recommender Systems: A Comparative Study by Bahram Amini, Roliana Ibrahim, and Mohd Shahizan Othman.

Abstract:

Recommender systems engage user profiles and appropriate filtering techniques to assist users in finding more relevant information over the large volume of information. User profiles play an important role in the success of the recommendation process since they model and represent the actual user needs. However, a comprehensive literature review of recommender systems has demonstrated no concrete study on the role and impact of knowledge in user profiling and filtering approaches. In this paper, we review the most prominent recommender systems in the literature and examine the impact of knowledge extracted from different sources. We then arrive at the finding that semantic information from the user context has a substantial impact on the performance of knowledge-based recommender systems. Finally, some new clues for improving knowledge-based profiles are proposed.

Interesting work but I am uncertain about the need to “extract” semantic information from users. At least directly. As in linguistics, it may be enough to see where the user falls statistically and use that as a guide to the semantics. As in linguistics, it will miss the edge cases but those are likely to be missed anyway.

August 14, 2011

Recommendation Engine Powered By Hadoop

Filed under: Hadoop,Recommendation — Patrick Durusau @ 7:09 pm

Recommendation Engine Powered By Hadoop by Pranab Ghosh.

Nice set of slides on the use of Hadoop to power a recommendation engine. (The subject recognition is implicit, which makes it difficult to fashion explicit merging across different sources.)
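
One common pattern for Hadoop-powered recommenders (not necessarily the approach in these slides) is item co-occurrence counting, which parallelizes naturally into map and reduce steps. In miniature:

```python
from collections import Counter
from itertools import combinations

# Each record: the set of items one user interacted with (toy data).
baskets = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}]

# "Map": emit each item pair per basket. "Reduce": sum the counts.
# On Hadoop these steps run on separate machines; here, in one process.
pairs = Counter()
for basket in baskets:
    for x, y in combinations(sorted(basket), 2):
        pairs[(x, y)] += 1

# Items that co-occur often with what a user already has become candidates.
print(pairs.most_common())  # [(('a', 'c'), 2), (('b', 'c'), 2), (('a', 'b'), 1)]
```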

At least on Slideshare the additional resource links aren’t working on the slides. So, for your reading pleasure:

Pranab’s blog, Mawazo, has a number of interesting posts on NoSQL and related technologies.

Including Pranab’s two-part blog post on Hadoop and recommendation engines:

Recommendation Engine Powered by Hadoop (Part 1)

Recommendation Engine Powered by Hadoop (Part 2)

and Mining of Massive Datasets by Anand Rajaraman and Jeff Ullman.

July 4, 2011

OrganiK Knowledge Management System

Filed under: Filters,Indexing,Knowledge Management,Recommendation,Text Analytics — Patrick Durusau @ 6:03 pm

OrganiK Knowledge Management System (wiki)

OrganiK Knowledge Management System (homepage)

I encountered the OrganiK project while searching for something else (naturally). 😉

From the homepage:

Objectives of the Project

The aim of the OrganiK project is to research and develop an innovative knowledge management system that enables the semantic fusion of enterprise social software applications. The system accumulates information that can be exchanged among one or several collaborating companies. This enables an effective management of organisational knowledge and can be adapted to functional requirements of smaller and knowledge-intensive companies.


Main distinguishing features

The set of OrganiK KM Client Interfaces comprises a Wiki, a Blog, a Social Bookmarking Component and a Search Component that together constitute a Collaborative Workspace for SME knowledge workers. Each of the components consists of a Web-based client interface and a corresponding server engine.

The components that comprise the Business Logic Layer of the OrganiK KM Server are:

  • the Recommender System,
  • the Semantic Text Analyser,
  • the Collaborative Filtering Engine
  • the Full-text Indexer


Interesting project but the latest news item dates from 2008. Not encouraging.

I checked the source code and the most recent update was August, 2010. Much more encouraging.

I have written to ask for more recent news.

May 27, 2011

The Science and Magic of User and Expert Feedback for Improving Recommendations

Filed under: Collaboration,Filters,Librarian/Expert Searchers,Recommendation — Patrick Durusau @ 12:37 pm

The Science and Magic of User and Expert Feedback for Improving Recommendations by Dr. Xavier Amatriain (Telefonica).

Abstract:

Recommender systems are playing a key role in the next web revolution as a practical alternative to traditional search for information access and filtering. Most of these systems use Collaborative Filtering techniques in which predictions are solely based on the feedback of the user and similar peers. Although this approach is considered relatively effective, it has reached some practical limitations such as the so-called Magic Barrier. Many of these limitations stem from the fact that explicit user feedback in the form of ratings is considered the ground truth. However, this feedback has a non-negligible amount of noise and inconsistencies. Furthermore, in most practical applications, we lack enough explicit feedback and would be better off using implicit feedback or usage data.

In the first part of my talk, I will present our studies in analyzing natural noise in explicit feedback and finding ways to overcome it to improve recommendation accuracy. I will also present our study of user implicit feedback and an approach to relate both kinds of information. In the second part, I will introduce a radically different approach to recommendation that is based on the use of the opinions of experts instead of regular peers. I will show how this approach addresses many of the shortcomings of traditional Collaborative Filtering, generates recommendations that are better perceived by the users, and allows for new applications such as fully-privacy preserving recommendations.
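
The expert-based idea in the second part reduces to something like this crude sketch (invented data, not the talk’s actual method): score items by the opinions of a small expert pool instead of similar regular users.

```python
# Crude sketch: score items by the average opinion of a few experts,
# instead of by the ratings of similar peers.
expert_ratings = {
    "critic-1": {"film-a": 4.5, "film-b": 2.0},
    "critic-2": {"film-a": 4.0, "film-c": 5.0},
}

def expert_score(item):
    """Mean expert rating for an item, or None if no expert rated it."""
    opinions = [r[item] for r in expert_ratings.values() if item in r]
    return sum(opinions) / len(opinions) if opinions else None

for film in ["film-a", "film-b", "film-c"]:
    print(film, expert_score(film))
```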

Chris Anderson: “We are leaving the age of information and entering the age of recommendation.”

I suspect Chris Anderson must not be an active library user. Long before recommender systems, librarians were making recommendations to researchers, patrons and children doing homework. I would say we are returning to the age of librarians, assisted by recommender systems.

Librarians use the reference interview so that, based on feedback from patrons, they can make the appropriate recommendations.

If you substitute librarian for “expert” in this presentation, it becomes apparent the world of information is coming back around to libraries and librarians.

Librarians should be making the case, both in the literature and to researchers like Dr. Amatriain, that librarians can play a vital role in recommender systems.

This is a very enjoyable as well as useful presentation.

For further information see:

http://xavier.amatriain.net

http://technocalifornia.blogspot.net
