Social Annotations in Web Search by Aditi Muralidharan,
Zoltan Gyongyi, and Ed H. Chi. (CHI 2012, May 5–10, 2012, Austin, Texas, USA)
Abstract:
We ask how to best present social annotations on search results, and attempt to find an answer through mixed-method eye-tracking and interview experiments. Current practice is anchored on the assumption that faces and names draw attention; the same presentation format is used independently of the social connection strength and the search query topic. The key findings of our experiments indicate room for improvement. First, only certain social contacts are useful sources of information, depending on the search topic. Second, faces lose their well-documented power to draw attention when rendered small as part of a social search result annotation. Third, and perhaps most surprisingly, social annotations go largely unnoticed by users in general due to selective, structured visual parsing behaviors specific to search result pages. We conclude by recommending improvements to the design and content of social annotations to make them more noticeable and useful.
The entire paper is worth your attention, but the first paragraph of the conclusion gives much food for thought:
For content, three things are clear: not all friends are equal, not all topics benefit from the inclusion of social annotation, and users prefer different types of information from different people. For presentation, it seems that learned result-reading habits may cause blindness to social annotations. The obvious implication is that we need to adapt the content and presentation of social annotations to the specialized environment of web search.
The complexity and subtlety of semantics on the human side keeps bumping into the search-and-annotate-with-a-hammer approach on the computer side.
Or as the authors say: “…users prefer different types of information from different people.”
Search engineers/designers who push their own preferences and intuitions out as the design for the larger user universe are always going to fall short.
Because every user has their own preferences and intuitions about searching and parsing search results. What is so surprising about that?
I have had discussions with programmers who would say: “But it will be better for users to do X (as opposed to Y) in the interface.”
Know what? Users are the only measure of the fitness of an interface or success of a search result.
A “pull” model (user preferences) based search engine will gut all existing (“push” model, engineer/programmer preference) search engines.
PS: You won’t discover the range of user preferences with study groups of 11 participants. Ask one of the national survey companies and have them select several thousand participants. Then refine which preferences get used the most. It won’t happen overnight, but every percentage gain will be one the existing search engines won’t regain.
PPS: Speaking of interfaces, I would pay for a web browser that put webpages back under my control (the early WWW model).
Enabling me to defeat those awful “page is loading” ads from major IT vendors who should know better, as well as strip other crap out. It is a data stream that is being parsed; I should be able to clean it up before viewing. That could be a real “hit” and make page load times faster.
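To make the idea concrete, here is a minimal sketch (my illustration only, not any existing browser feature or anything from the paper) of cleaning the HTML stream before it is rendered. The selectors in BLOCKED_SELECTORS are hypothetical placeholders for whatever a user decides to block:

```python
# Sketch: strip user-chosen elements from a page before viewing it.
# The selector blocklist below is purely illustrative.
from bs4 import BeautifulSoup

BLOCKED_SELECTORS = [
    ".interstitial",        # hypothetical "page is loading" overlay
    ".ad-container",        # hypothetical ad wrapper
    "iframe[src*='ads']",   # third-party ad frames
]

def clean_page(html: str) -> str:
    """Return the page with blocklisted elements removed."""
    soup = BeautifulSoup(html, "html.parser")
    for selector in BLOCKED_SELECTORS:
        for element in soup.select(selector):
            element.decompose()  # drop the node and everything inside it
    return str(soup)

if __name__ == "__main__":
    sample = "<html><body><div class='ad-container'>ad</div><p>article text</p></body></html>"
    print(clean_page(sample))  # the ad div is gone, the article text remains
```

The same filtering could sit in a local proxy or a browser extension; the point is simply that the stream is mine to parse before it reaches my eyes.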
I first saw this article in a list of links from Greg Linden.