Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

June 15, 2012

Hands-on with Google Docs’s new research tool [UI Idea?]

Filed under: Authoring Topic Maps,Interface Research/Design — Patrick Durusau @ 1:54 pm

Hands-on with Google Docs’s new research tool by Joel Mathis, Macworld.com.

From the post:

Google Docs has unveiled a new research tool meant to help writers streamline their browser-based research, making it easier for them to find and cite the information they need while composing text.

The feature, announced Tuesday, appears as an in-page vertical pane on the right side of your Google Doc. (You can see an example of the pane at left.) It can be accessed either through the page’s Tools menu, or with a Command-Option-R keyboard shortcut on your Mac.

The tool offers three types of searches: A basic “everything” search, another just for images, and a third featuring quotes about—or by—the subject of your search.

In “everything” mode, a search for GOP presidential candidate Mitt Romney brought up a column of images and information. At the top of the column, a scrollable set of thumbnail pictures of the man, followed by some basic dossier information—birthday, hometown, and religion—followed by a quote from Romney, taken from an ABC News story that had appeared within the last hour.

The top Web links for a topic are displayed underneath that roster of information. You’re given three options with the links: First, you can “preview” the linked page within the Google Docs page—though you’ll have to open a new tab if you want to conduct a more thorough perusal of the pertinent info. The second option is to create a link to that page directly from the text you’re writing. The third is to create a footnote in the text that cites the link.

Interfaces are forced to make assumptions about the “average” user and their needs. This one sounds like it is hitting near, or even squarely on, needs that are fairly common.

Makes me wonder if topic map authoring interfaces should place more emphasis on incorporation of content and authoring, with correspondingly less emphasis on the topic mappishness of the result.

Perhaps cleaning up a map is something that should be a separate task anyway.

Authors write and editors edit.

Is there some reason to combine those two tasks?

(I first saw this at Research Made Easy With Google Docs by Stephen Arnold.)

June 13, 2012

Social Annotations in Web Search

Social Annotations in Web Search by Aditi Muralidharan,
Zoltan Gyongyi, and Ed H. Chi. (CHI 2012, May 5–10, 2012, Austin, Texas, USA)

Abstract:

We ask how to best present social annotations on search results, and attempt to find an answer through mixed-method eye-tracking and interview experiments. Current practice is anchored on the assumption that faces and names draw attention; the same presentation format is used independently of the social connection strength and the search query topic. The key findings of our experiments indicate room for improvement. First, only certain social contacts are useful sources of information, depending on the search topic. Second, faces lose their well-documented power to draw attention when rendered small as part of a social search result annotation. Third, and perhaps most surprisingly, social annotations go largely unnoticed by users in general due to selective, structured visual parsing behaviors specific to search result pages. We conclude by recommending improvements to the design and content of social annotations to make them more noticeable and useful.

The entire paper is worth your attention but the first paragraph of the conclusion gives much food for thought:

For content, three things are clear: not all friends are equal, not all topics benefit from the inclusion of social annotation, and users prefer different types of information from different people. For presentation, it seems that learned result-reading habits may cause blindness to social annotations. The obvious implication is that we need to adapt the content and presentation of social annotations to the specialized environment of web search.

The complexity and subtlety of semantics on the human side keeps bumping into the search/annotate-with-a-hammer approach on the computer side.

Or as the authors say: “…users prefer different types of information from different people.”

Search engineers/designers who push their own preferences/intuitions out as designs for the larger user universe are always going to fall short.

Because all users have their own preferences and intuitions about searching and parsing search results. What is so surprising about that?

I have had discussions with programmers who would say: “But it will be better for users to do X (as opposed to Y) in the interface.”

Know what? Users are the only measure of the fitness of an interface or success of a search result.

A “pull” model (user preferences) based search engine will gut all existing (“push” model, engineer/programmer preference) search engines.
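A minimal sketch of what a preference-driven (“pull”) re-ranking step might look like, assuming the engine already returns scored results. The sources, types, and weights below are hypothetical, and a real engine would learn per-user weights rather than hard-code them:

```python
# Minimal sketch of preference-driven ("pull") re-ranking.
# All data below is hypothetical; a real engine would learn the
# per-user weights rather than hard-code them.

def rerank(results, prefs):
    """Re-order engine results by blending the engine score with how well
    each result's source and type match the user's stated preferences."""
    def blended(r):
        boost = prefs.get(r["source"], 0.0) + prefs.get(r["type"], 0.0)
        return r["score"] * (1.0 + boost)
    return sorted(results, key=blended, reverse=True)

results = [
    {"title": "Vendor press release", "source": "vendor",  "type": "news",  "score": 0.91},
    {"title": "Peer-reviewed study",  "source": "journal", "type": "paper", "score": 0.88},
]
# This user prefers primary literature over press coverage.
prefs = {"journal": 0.5, "paper": 0.3, "vendor": -0.4}

for r in rerank(results, prefs):
    print(r["title"])
```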


PS: You won’t discover the range of user preferences with study groups of 11 participants. Ask one of the national survey companies and have them select several thousand participants. Then refine which preferences get used the most. It won’t happen overnight but every percentage gain will be one the existing search engines won’t regain.

PPS: Speaking of interfaces, I would pay for a web browser that put webpages back under my control (the early WWW model).

Enabling me to defeat those awful “page is loading” ads from major IT vendors who should know better. As well as strip other crap out. It is a data stream that is being parsed. I should be able to clean it up before viewing. That could be a real “hit” and make page load times faster.

I first saw this article in a list of links from Greg Linden.

June 11, 2012

Social Design – Systems of Engagement

Filed under: Design,Interface Research/Design — Patrick Durusau @ 4:28 pm

I almost missed this series except this caught my eye skimming posts:

“We were really intrigued when we heard Cognizant’s Malcolm Frank and industry guru Geoffrey Moore discuss the enterprise/consumer IT divide. They liken it to the Sunday night vs. Monday morning experience.

It goes something like this … on a typical Sunday night, we use our iPhones/iPads and interact with our friends via Facebook/Twitter etc. It is a delightful experience. Malcolm and Geoff call these environments “Systems of Engagement”. Then Monday morning arrives and we show up in the office and are subjected to applications like the Timesheet I described in the previous post. Adding to the misery is the landline phone, a gadget clearly from the previous millennium (and alien to most millennials who came of age with mobile phones).

We then asked ourselves this additional question – did any of us attend a training program, seminar or e-learning program to use the iPhone, iPad, Facebook, Twitter, etc? The answer, obviously, is NO. Why then, we concluded, do users need training to use corporate IT applications!

(http://dealarchitect.typepad.com/deal_architect/2012/06/the-pursuit-of-employee-delight-part-2.html)

Let me make that question personal: Do your users require training to use your apps?

Or do any of these sound familiar?

1. Confusing navigation. There were just too many steps to reach the target screen. Developed by different groups, each application had its own multi-level menu structure. Lack of a common taxonomy further complicated usability.

2. Each screen had too many fields which frustrated users. Users had to go through several hours of training to use the key applications.

3. Some applications were slow, especially when accessed from locations far away from our data centers.

4. Each application had its own URL and required a separate login. Sometimes, one application had many URLs. Bookmarking could never keep up with this. Most importantly, new associates could never easily figure out which application was available at which URL.

5. All applications were generating huge volumes of email alerts to keep the workflow going. This resulted in tremendous e-mail overload.

(http://dealarchitect.typepad.com/deal_architect/2012/06/the-real-deal-sukumar-rajagopal-on-a-cios-pursuit-of-employee-delight.html)

Vinnie Mirchandani covers systems of engagement in five posts that I won’t attempt to summarize.

The Real Deal: Sukumar Rajagopal on a CIO’s Pursuit of Employee Delight

The Pursuit of Employee Delight – Part 2

The Pursuit of Employee Delight – Part 3

The Pursuit of Employee Delight – Part 4

The Pursuit of Employee Delight – Part 5

There is much to learn here.

May 30, 2012

Human-Computer Interaction Lab – Tech Papers

Filed under: Human-Computer Interaction Lab (HCIL),Interface Research/Design — Patrick Durusau @ 2:37 pm

Human-Computer Interaction Lab – Tech Papers

Twenty-five (25) years’ worth of research papers, presentations, software and other materials from the Human-Computer Interaction Lab at the University of Maryland.

I discovered the tech report site in the musings of Kim Rees, Thoughts on the HCIL symposium.

From the overview of the HCIL:

The Human-Computer Interaction lab has a long, rich history of transforming the experience people have with new technologies. From understanding user needs, to developing and evaluating those technologies, the lab’s faculty, staff, and students have been leading the way in HCI research and teaching.

We believe it is critical to understand how the needs and dreams of people can be reflected in our future technologies. To this end, the HCIL develops advanced user interfaces and design methodology. Our primary activities include collaborative research, publication and the sponsorship of open houses, workshops and symposiums.

I mentioned the tech reports, don’t neglect the video reports, presentations and projects while you are browsing this site.

If I were making a limited set of sites to search for human-computer interface issues, this would be one of them. Yes?

May 17, 2012

Designing Search (part 4): Displaying results

Filed under: Interface Research/Design,Search Behavior,Search Interface,Searching — Patrick Durusau @ 3:41 pm

Designing Search (part 4): Displaying results

Tony Russell-Rose writes:

In an earlier post we reviewed the various ways in which an information need may be articulated, focusing on its expression via some form of query. In this post we consider ways in which the response can be articulated, focusing on its expression as a set of search results. Together, these two elements lie at the heart of the search experience, defining and shaping much of the information seeking dialogue. We begin therefore by examining the most universal of elements within that response: the search result.

As usual, Tony does a great job of illustrating your choices and trade-offs in presentation of search results. Highly recommended.

I am curious: since Tony refers to it as an “information seeking dialogue,” has anyone mapped reference interview approaches to search interfaces? I suspect that question just reflects my ignorance of the literature on that subject, so I would appreciate any pointers you can throw my way.

I would update Tony’s bibliography:

Marti Hearst (2009) Search User Interfaces. Cambridge University Press

Online as full text: http://searchuserinterfaces.com/

May 12, 2012

Initial HTTP Speed+Mobility Open Source Prototype Now Available for Download

Filed under: HTTP Speed+Mobility,Interface Research/Design — Patrick Durusau @ 4:35 pm

Initial HTTP Speed+Mobility Open Source Prototype Now Available for Download

From the post:

Microsoft Open Technologies, Inc. has just published an initial open source prototype implementation of HTTP Speed+Mobility. The prototype is available for download on html5labs.com, where you will also find pointers to the source code.

The IETF HTTPbis workgroup met in Paris at the end of March to discuss how to approach HTTP 2.0 in order to meet the needs of an ever larger and more diverse web. It would be hard to downplay the importance of this work: it will impact how billions of devices communicate over the internet for years to come, from low-powered sensors, to mobile phones, to tablets, to PCs, to network switches, to the largest datacenters on the planet.

Prior to that IETF meeting, Jean Paoli and Sandeep Singhal announced in their post to the Microsoft Interoperability blog that Microsoft has contributed the HTTP Speed+Mobility proposal as input to that conversation.

The prototype implements the websocket-based session layer described in the proposal, as well as parts of the multiplexing logic incorporated from Google’s SPDY proposal. The code does not support header compression yet, but it will in upcoming refreshes.

The open source software comprises a client implemented in C# and a server implemented in Node.js running on Windows Azure. The client is a command line tool that establishes a connection to the server and can download a set of web pages that include html files, scripts, and images. We have made available on the server some static versions of popular web pages like http://www.microsoft.com and http://www.ietf.org, as well as a handful of simpler test pages.
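The prototype itself is C# and Node.js. As a rough illustration of the websocket-based session idea only, here is what a toy client could look like in Python; this is not the actual Speed+Mobility wire format, and the URL and request framing are invented:

```python
# Illustrative only: a generic websocket "session" that requests a resource
# and reads frames until the server closes the connection. This is NOT the
# HTTP Speed+Mobility protocol; the URL and message layout are invented.
import asyncio
import websockets  # third-party: pip install websockets

async def fetch(url, resource):
    async with websockets.connect(url) as ws:
        await ws.send(f"GET {resource}")        # hypothetical request frame
        frames = []
        try:
            while True:
                frames.append(await ws.recv())  # collect response frames
        except websockets.ConnectionClosed:
            pass
        return frames

if __name__ == "__main__":
    frames = asyncio.run(fetch("ws://localhost:8080/session", "/index.html"))
    print(f"received {len(frames)} frames")
```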

I have avoided having a cell phone, much less a smartphone, all these years.

Now it looks like I am going to have to get one to evaluate/test semantic applications, including topic maps.

Thanks Jean and Sandeep! 😉

May 3, 2012

Giving People The Finger

Filed under: Interface Research/Design,Navigation — Patrick Durusau @ 6:22 pm

“Giving people the finger” is how I would headline:

In a paper published in the peer-reviewed journal Perception, researchers at the universities of Exeter and Lincoln showed that biological cues like an outstretched index finger or a pair of eyes looking to one side affect people’s attention even when they are irrelevant to the task at hand. Abstract directional symbols like pointed arrows or the written words “left” and “right” do not have the same effect. Pointing a Finger Works Much Better Than Using Pointed Arrows

I don’t have access to the article but the post reports:

“Interestingly, it was only the cues which were biological — the eye gaze and finger pointing cues — which had this effect,” said Prof. Hodgson, Professor of Cognitive Neuroscience in the School of Psychology at the University of Lincoln. “Road sign arrows and words “left” and “right” had no influence at all. What’s more, the eyes and fingers seemed to affect the participants’ reaction times even when the images were flashed on the screen for only a tenth of a second.”

The authors suggest that the reason that these biological signals may be particularly good at directing attention is because they are used by humans and some other species as forms of non-verbal communication: Where someone is looking or pointing indicates to others not only what they are paying attention to, but also what they might be feeling or what they might be planning on doing next.

I think the commonly quoted figure for the origins of language/symbol manipulation is about 100,000 years ago. Use of biological cues, pointing, eye movement, is far older. That’s off the top of my head so feel free to throw in citations (for or against).

There would be a learning curve in applying this to UIs for collaboration. The abstract in question reads:

Pointing with the eyes or the finger occurs frequently in social interaction to indicate direction of attention and one’s intentions. Research with a voluntary saccade task (where saccade direction is instructed by the colour of a fixation point) suggested that gaze cues automatically activate the oculomotor system, but non-biological cues, like arrows, do not. However, other work has failed to support the claim that gaze cues are special. In the current research we introduced biological and non-biological cues into the anti-saccade task, using a range of stimulus onset asynchronies (SOAs). The anti-saccade task recruits both top-down and bottom-up attentional mechanisms, as occurs in naturalistic saccadic behaviour. In experiment 1 gaze, but not arrows, facilitated saccadic reaction times (SRTs) in the opposite direction to the cues over all SOAs, whereas in experiment 2 directional word cues had no effect on saccades. In experiment 3 finger pointing cues caused reduced SRTs in the opposite direction to the cues at short SOAs. These findings suggest that biological cues automatically recruit the oculomotor system whereas non-biological cues do not. Furthermore, the anti-saccade task set appears to facilitate saccadic responses in the opposite direction to the cues. Giving subjects the eye and showing them the finger: Socio-biological cues and saccade generation in the anti-saccade task

April 12, 2012

The Guide on the Side

Filed under: Education,Interface Research/Design — Patrick Durusau @ 7:05 pm

The Guide on the Side by Meredith Farkas.

From the post:

Many librarians have embraced the use of active learning in their teaching. Moving away from lectures and toward activities that get students using the skills they’re learning can lead to more meaningful learning experiences. It’s one thing to tell someone how to do something, but to have them actually do it themselves, with expert guidance, makes it much more likely that they’ll be able to do it later on their own.

Replicating that same “guide on the side” model online, however, has proven difficult. Librarians, like most instructors, have largely gone back to a lecture model of delivering instruction. Certainly it’s a great deal more difficult to develop active learning exercises, or even interactivity, in online instruction, but many of the tools and techniques that have been embraced by librarians for developing online tutorials and other learning objects do not allow students to practice what they’re learning while they’re learning. While some software for creating screencasts—video tutorials that film activity on one’s desktop—include the ability to create quizzes or interactive components, users can’t easily work with a library resource and watch a screencast at the same time.

In 2000, the reference desk staff at the University of Arizona was looking for an effective way to build web-based tutorials to embed in a class that had resulted in a lot of traffic at the reference desk. Not convinced of the efficacy of traditional tutorials to instruct students on using databases, the librarians “began using a more step-by-step approach where students were guided to perform specific searches and locate specific articles,” Instructional Services Librarian Leslie Sult told me. The students were then assessed on their ability to conduct searches in the specific resources assigned. Later, Sult, Mike Hagedon, and Justin Spargur of the library’s scholarly publishing and data management team, turned this early active learning tutorial model into Guide on the Side software.

Guide on the Side is an interface that allows librarians at all levels of technological skill to easily develop a tutorial that resides in an online box beside a live web page students can use. Students can read the instructions provided by the librarian while actively using a database, without needing to switch between screens. This allows students to use a database while still receiving expert guidance, much like they could in the classroom.

Meredith goes on to provide links to examples of such “Guide on the Side” resources and promises code to appear on GitHub early this summer.

This looks like a wonderful way to teach topic maps.

Comments/suggestions?

March 29, 2012

Mobile App Developer Competition (HaptiMap)

Filed under: Interface Research/Design,Mapping,Maps — Patrick Durusau @ 6:39 pm

Mobile App Developer Competition (HaptiMap)

From the website:

Win 4000 Euro, a smartphone or a tablet!

This competition is open for mobile apps, which demonstrate designs that can be used by a wide range of users and in a wide range of situations (also on the move). The designs can make use of visual (on-screen) elements, but they should also make significant use of the non-visual interaction channels. The competition is open both for newly developed apps and for existing apps that are updated using the HaptiMap toolkit. To enter the competition, the app implementation must make use of the HaptiMap toolkit. Your app can rely on existing toolkit modules, but it is also possible to extend or add appropriate modules (in line with the purpose of HaptiMap) into the toolkit.

Important dates:

The competition closes 15th of June 17.00 CET 2012. The winners will be announced at the HAID’12 workshop (http://www.haid.ws) 23-24 August 2012, Lund, Sweden.

In case you aren’t familiar with HaptiMap:

What is HaptiMap?

HaptiMap is an EU project which aims at making maps and location based services more accessible by using several senses like vision, hearing, and, particularly, touch. Enabling haptic access to mainstream map and LBS data allows more people to use them in a number of different environmental or individual circumstances. For example, when navigating in low-visibility (e.g., bright sunlight) and/or high noise environments, preferring to concentrate on riding your bike, sightseeing and/or listening to sounds, or when your visual and/or auditory senses are impaired (e.g., due to age).

If you think about it, what is being proposed is standard mapping but not using the standard (visual) channel.

March 28, 2012

Designing User Experiences for Imperfect Data

Filed under: Data Quality,Interface Research/Design,Search Interface,Searching — Patrick Durusau @ 4:21 pm

Designing User Experiences for Imperfect Data by Matthew Hurst.

Matthew writes:

Any system that uses some sort of inference to generate user value is at the mercy of the quality of the input data and the accuracy of the inference mechanism. As neither of these can be guaranteed to be perfect, users of the system will inevitably come across incorrect results.

In web search we see this all the time with irrelevant pages being surfaced. In the context of track // microsoft, I see this in the form of either articles that are incorrectly added to the wrong cluster, or articles that are incorrectly assigned to no cluster, becoming orphans.

It is important, therefore, to take these imperfections into account when building the interface. This is not necessarily a matter of pretending that they don’t exist, or tricking the user. Rather it is a problem of eliciting an appropriate reaction to error. The average user is not conversant in error margins and the like, and thus tends to over-weight errors leading to the perception of poorer quality in the good stuff.

I am not really sure how Matthew finds imperfect data, but I guess I will just have to take his word for it. 😉

Seriously, I think he is spot on in observing that expecting users to hunt-n-peck through search results is wearing a bit thin. That is going to be particularly so when better search systems make the hidden cost of hunt-n-peck visible.
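One way to elicit an “appropriate reaction to error” is to expose uncertainty rather than hide it. A minimal sketch of that idea follows; the articles, clusters, and confidence scores are invented, and the thresholds are just placeholders:

```python
# Sketch: present cluster assignments differently depending on how confident
# the classifier is. Articles, clusters, and scores here are invented.

ASSERT_AT = 0.85    # confident enough to state the assignment outright
SUGGEST_AT = 0.60   # show it, but flag it as a guess the reader can fix

def present(article, cluster, confidence):
    if confidence >= ASSERT_AT:
        return f"{article}: filed under '{cluster}'"
    if confidence >= SUGGEST_AT:
        return f"{article}: possibly '{cluster}'? (confirm or correct)"
    return f"{article}: not yet clustered (help us place it)"

assignments = [
    ("Windows Phone sales figures", "Mobile",  0.93),
    ("Azure outage postmortem",     "Cloud",   0.71),
    ("Misc. earnings commentary",   "Finance", 0.42),
]
for article, cluster, conf in assignments:
    print(present(article, cluster, conf))
```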

Do take the time to visit his track // microsoft site.

Now imagine your own subject specific and dynamic website. Or even search engine. Could be that search engines for “everything” are the modern day dinosaurs. Big, clumsy, fairly crude.

March 20, 2012

Designing Search (part 3): Keeping on track

Filed under: Interface Research/Design,Search Behavior,Search Interface,Searching — Patrick Durusau @ 3:52 pm

Designing Search (part 3): Keeping on track by Tony Russell-Rose

From the post:

In the previous post we looked at techniques to help us create and articulate more effective queries. From auto-complete for lookup tasks to auto-suggest for exploratory search, these simple techniques can often make the difference between success and failure.

But occasionally things do go wrong. Sometimes our information journey is more complex than we’d anticipated, and we find ourselves straying off the ideal course. Worse still, in our determination to pursue our original goal, we may overlook other, more productive directions, leaving us endlessly finessing a flawed strategy. Sometimes we are in too deep to turn around and start again.

(graphic omitted)

Conversely, there are times when we may consciously decide to take a detour and explore the path less trodden. As we saw earlier, what we find along the way can change what we seek. Sometimes we find the most valuable discoveries in the most unlikely places.

However, there’s a fine line between these two outcomes: one person’s journey of serendipitous discovery can be another’s descent into confusion and disorientation. And there’s the challenge: how can we support the former, while unobtrusively repairing the latter? In this post, we’ll look at four techniques that help us keep to the right path on our information journey.

Whether you are writing a search interface or simply want to know more about what factors to consider in evaluating a search interface, this series by Tony Russell-Rose is well worth your time.

If you are writing a topic map, you already have as a goal the collection of information for some purpose. It would be sad if the information you collect isn’t findable due to poor interface design.

March 5, 2012

Bad vs Good Search Experience

Filed under: Interface Research/Design,Lucene,Search Interface,Searching — Patrick Durusau @ 7:53 pm

Bad vs Good Search Experience by Emir Dizdarevic.

From the post:

The Problem

This article will show how a bad search solution can be improved. We will demonstrate how to build an enterprise search solution relatively easily using Apache Lucene/SOLR.

We took a local ad site as an example of a bad search experience.

We crawled the ad site with Apache Nutch, using a couple of home grown plugins to fetch only the data we want and not the whole site. Stay tuned for a separate article on this topic.

‘BAD’ search is based on real search results from the ad site, i.e., how the website search currently works. ‘GOOD’ search is based on the same data but indexed with Apache Lucene/Solr (inverted index).

BAD Search: We assume that it’s based on exact match criteria or something similar to a ‘%like%’ database statement. To simulate this behavior we used a content field that is tokenized by whitespace and lowercased, and used phrase queries every time. This is the closest we could get to the existing ad site search solution, but even this bad, it was performing better.

An excellent post in part because of the detailed example but also to show that improving search results is an iterative process.
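To make the contrast concrete, here is a tiny self-contained sketch of the two approaches the post compares: a ‘%like%’-style substring match versus a whitespace-tokenized, lowercased inverted index of the kind Lucene/Solr builds. The documents are invented, not the ad-site data:

```python
# Toy comparison of "%like%"-style matching vs an inverted index.
# The documents are invented; a real deployment would use Lucene/Solr.
from collections import defaultdict

docs = {
    1: "Selling a red mountain bike, barely used",
    2: "Red BMW for sale, low mileage",
    3: "Mountain cabin for rent near the lake",
}

def like_search(query):
    """BAD: exact substring match, order- and case-sensitive."""
    return [d for d, text in docs.items() if query in text]

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():          # whitespace tokenize + lowercase
        index[token.strip(",.")].add(doc_id)

def indexed_search(query):
    """Better: every query term must appear, in any order or case."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return sorted(set.intersection(*postings)) if postings else []

print(like_search("red mountain bike"))      # [1] only because the phrasing matches exactly
print(like_search("Mountain Bike Red"))      # [] - misses the relevant ad
print(indexed_search("Mountain Bike Red"))   # [1] - order and case no longer matter
```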

Enjoy!

March 1, 2012

10 Tips for Data Visualization

Filed under: Interface Research/Design,Visualization — Patrick Durusau @ 9:02 pm

10 Tips for Data Visualization by David Steier, William D. Eggers, Joe Leinbach, Anesa Diaz-Uda.

From the post:

After disaster strikes or government initiatives fail, in hindsight, we see all too often that warning signs were overlooked by decision-makers. Or sophisticated technology was installed, but nobody took the time to learn to use it. It’s often labeled “user error,” or “problem between keyboard and chair.” In analytics, the problem is especially acute — the most sophisticated analytics models in the world are futile unless decision-makers understand and act on the results appropriately.

This problem often arises because designers haven’t truly considered how those using the fancy dashboards, maps or policy visualizations will interact with the analytics. They may become enamored of the model’s power and try to fit every piece of data into it. However, in offering more options and parameters to control the model’s operation, and filling up every pixel of screen real estate, designers can fail to recognize that most government decision-makers are inundated with inputs, pressed for time and can only focus on essentials. As Yale professor and information design guru Edward Tufte wrote, “Clutter and confusion are not attributes of information; they are failures of design.”

In many cases, users can’t answer basic questions like “What should I pay attention to?” and “Now that I’ve seen this, what should I do?” If the answers aren’t readily apparent, the interface and analytics aren’t solving a problem — rather, they might be creating a bigger one. As one federal executive said recently, “No tweet stops bleeding. Unless something has actually changed, it’s just information. What pieces of data are actually going to help us make a better decision?”

Agencies should consider a more user-centric and outcome-centric approach to analytics design to visualize policy problems and guide executives toward better, faster, more informed decisions.

The good news is that government leaders are getting serious about making sense of their data, and constant advances in graphic, mobile and Web technology make it possible to translate “big data” into meaningful, impactful visual interfaces. Using visualization tools to present advanced analytics can help policymakers more easily understand a topic, create an instant connection to unseen layers of data, and provide a sense of scale in dealing with large, complex data sets.

Read the post to pick up the ten tips. And re-read them about every three months or so.

I am curious about the point under letting users lead that reads:

If enough users believe an interface is unsatisfactory, the designer is well advised to accept their judgment.

That runs counter to my belief that an interface exists solely to assist the user in some task. What other purpose would an interface have? Or should I say what other legitimate purpose would an interface have? 😉 (Outside of schools where interfaces are designed to educate or challenge the user. If I am on deadline with a project, the last thing I want is an interface that attempts to educate or challenge me.)

February 29, 2012

Designing Search (part 2): As-you-type suggestions

Filed under: AutoComplete,AutoSuggestion,Interface Research/Design,Searching — Patrick Durusau @ 7:21 pm

Designing Search (part 2): As-you-type suggestions by Tony Russell-Rose.

From the post:

Have you ever tried the “I’m Feeling Lucky” button on Google? The idea is, of course, that Google will take you directly to the result you want, rather than return a list of results. It’s a simple idea, and when it works, it seems like magic.

(graphic omitted)

But most of the time we are not so lucky. Instead, we submit a query and review the results; only to find that they’re not quite what we were looking for. Occasionally, we review a further page or two of results, but in most cases it’s quicker just to enter a new query and try again. In fact, this pattern of behaviour is so common that techniques have been developed specifically to help us along this part of our information journey. In particular, three versions of as-you-type suggestions—auto-complete, auto-suggest, and instant results—subtly guide us in creating and reformulating queries.

Tony guides the reader through auto-complete, auto-suggest, and instant results in his usual delightful manner. He illustrates the principles under discussion with well known examples from the WWW.
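As a companion to the post, here is a bare-bones sketch of the auto-complete piece: prefix matching against a sorted vocabulary, ranked by popularity. The vocabulary and counts are invented; real systems layer fuzziness, personalization, and instant results on top:

```python
# Minimal auto-complete sketch: prefix match against a sorted vocabulary,
# ranked by (hypothetical) query popularity.
import bisect

popularity = {
    "topic maps": 420, "topic modeling": 310, "topology": 950,
    "topical authority": 120, "tornado warning": 800,
}
vocab = sorted(popularity)

def suggest(prefix, limit=3):
    prefix = prefix.lower()
    start = bisect.bisect_left(vocab, prefix)
    matches = []
    for term in vocab[start:]:
        if not term.startswith(prefix):
            break                        # sorted order: no more matches possible
        matches.append(term)
    return sorted(matches, key=popularity.get, reverse=True)[:limit]

print(suggest("top"))   # ['topology', 'topic maps', 'topic modeling']
print(suggest("topi"))  # narrows further as the user keeps typing
```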

A collection of his posts should certainly be supplemental (if not primary) reading for any course on information interfaces.

February 23, 2012

The Scale of the Universe 2

Filed under: Interface Research/Design,Visualization — Patrick Durusau @ 4:51 pm

The Scale of the Universe 2

This is a very effective scale of the universe visualization!

You can click on objects to learn more.

What scale do you want to visualize? As an interface for your topic map?

February 20, 2012

Attention-enhancing information retrieval

Filed under: Information Retrieval,Interface Research/Design,Users — Patrick Durusau @ 8:36 pm

Attention-enhancing information retrieval

William Webber writes:

Last week I was at SWIRL, the occasional talkshop on the future of information retrieval. To me the most important of the presentations was Dianne Kelly’s “Rage against the Machine Learning”, in which she observed the way information retrieval currently works has changed the way people think. In particular, she proposed that the combination of short query with snippet response has reworked peoples’ plastic brains to focus on working memory, and forgo the processing of information required for it to lay its tracks down in our long term memory. In short, it makes us transactionally adept, but stops us from learning.

This is as important as Bret Victor’s presentation.

I particularly liked the line:

Various fanciful scenarios were given, but the ultimate end-point of such a research direction is that you walk into the shopping mall, and then your mobile phone leads you round telling you what to buy.

Reminds me of a line I remember imperfectly: judging from advertising, we are all “…insecure, sex-starved neurotics with 15-second attention spans.”

I always thought that was being generous on the attention span but opinions differ on that point. 😉

How do you envision your users? Serious question but not one you have to answer here. Ask yourself.

February 14, 2012

Responsive UX Design

Filed under: Interface Research/Design — Patrick Durusau @ 5:04 pm

Responsive UX Design by Darrin Henein.

From the post:

In recent years, the deluge of new connected devices that have entered the market has created an increas­ingly complex chal­lenge for designers and devel­opers alike. Until rela­tively recently, the role of a UX or UI designer was compar­a­tively straight­forward. Digital expe­ri­ences lived on their own, and were built and tailored for the specific mediums by which they were to be consumed. Moreover, the number of devices through which a user could access your brand was more limited. Digital expe­ri­ences were confined to (typi­cally) either a desktop or laptop computer, or in some cases a rela­tively basic browser embedded in a mobile phone. Furthermore, these devices were fairly homogenous within their classes, sporting similar screen reso­lu­tions and hardware capabilities.

Urges the use of CSS and HTML5 to plan for display of content on a variety of platforms, rather than designing for one UX and taking one’s chances on other devices.

Not simply a matter of getting larger or smaller but a UX/UI design issue.

Here responsiveness is taken to be a function of the device viewing the content, but what if it were extended to the content itself?

That is, for manuals of various kinds, content could be augmented with additional safeguards or warnings: if a particular repair sequence is chosen, additional cautionary content is loaded or checks are added to the process.

February 10, 2012

Dragsters, Drag Cars & Drag Racing Cars

I still remember the cover of Hot Rod magazine that announced (from memory) “The 6’s are here!” Don “The Snake” Prudhomme had broken the 200 mph barrier in a drag race. Other memories follow on from that one but I mention it to explain my interest in a recent Subject Authority Cooperative Program decision not to add cross-references to Dragsters (the term I would have used) from the more recent terms drag cars and drag racing cars.

The expected search (in this order) due to this decision is:

Cars (Automobiles) -> redirect to Automobiles -> Automobiles -> narrower term -> Automobiles, racing -> narrower term -> Dragsters

Adam L. Schiff, proposer of drag cars & drag racing cars, says below, “This just is not likely to happen.”

Question: Is there a relationship between users “work[ing] their way up and down hierarchies” and relationship display methods? Who chooses which items will be the starting point to lead to other items? How do you integrate a keyword search into such a system?

Question: And what of the full phrase/sentence AI systems where keywords work less well? How does that work with relationship display systems?

Question: I wonder if the relationship display methods are closer to the up and down hierarchies, but with less guidance?

Adam’s Dragster proposal post in full:

Dragsters

Automobiles has a UF Cars (Automobiles). Since the UF already exists on the basic heading, it is not necessary to add it to Dragsters. The proposal was not approved.

Our proposal was to add two additional cross-references to Dragsters: Drag cars, and Drag racing cars. While I understand, in principle, the reasoning behind the rejection of these additional references, I do not see how it serves users. A user coming to a catalog to search for the subject “Drag cars” will now get nothing, no redirection to the established heading. I don’t see how the presence of a reference from Cars (Automobiles) to Automobiles helps any user who starts a search with “Drag cars”. Only if they begin their search with Cars would they get led to Automobiles, and then only if they pursue narrower terms under that heading would they find Automobiles, Racing, which they would then have to follow further down to Dragsters. This just is not likely to happen. Instead they will probably start with a keyword search on “Drag cars” and find nothing, or if lucky, find one or two resources and think they have it all. And if they are astute enough to look at the subject headings on one of the records and see “Dragsters”, perhaps they will then redo their search.

Since the proposed cross-refs do not begin with the word Cars, I do not at all see how a decision like this is in the service of users of our catalogs. I think that LCSH rules for references were developed when it was expected that users would consult the big red books and work their way up and down hierarchies. While some online systems do provide for such navigation, it is doubtful that many users take this approach. Keyword searching is predominant in our catalogs and on the Web. Providing as many cross-refs to established headings as we can would be desirable. If the worry is that the printed red books will grow to too many volumes if we add more variant forms that weren’t made in the card environment, then perhaps there needs to be a way to include some references in authority records but mark them as not suitable for printing in printed products.

PS: According to ODLIS: Online Dictionary for Library and Information Science by Joan M. Reitz, UF, has the following definition:

used for (UF)

A phrase indicating a term (or terms) synonymous with an authorized subject heading or descriptor, not used in cataloging or indexing to avoid scatter. In a subject headings list or thesaurus of controlled vocabulary, synonyms are given immediately following the official heading. In the alphabetical list of indexing terms, they are included as lead-in vocabulary followed by a see or USE cross-reference directing the user to the correct heading. See also: syndetic structure.

I did not attempt to reproduce the extremely rich cross-linking in this entry but commend the entire resource to your attention, particularly if you are a library science student.
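Adam’s point is easy to see in code. Below is a toy sketch of an authority file with UF (lead-in) references and broader/narrower links; the entries are illustrative, not actual LCSH records. With the proposed references in place, a keyword lookup on “Drag cars” resolves to the authorized heading; without them it dead-ends:

```python
# Toy authority file: authorized headings with UF ("used for") lead-in terms
# and broader/narrower links. Entries are illustrative, not actual LCSH.

authority = {
    "Automobiles":         {"UF": ["Cars (Automobiles)"], "BT": [], "NT": ["Automobiles, Racing"]},
    "Automobiles, Racing": {"UF": [], "BT": ["Automobiles"], "NT": ["Dragsters"]},
    "Dragsters":           {"UF": [], "BT": ["Automobiles, Racing"], "NT": []},  # as approved
}

# The rejected proposal would have added two more lead-in references:
# authority["Dragsters"]["UF"] += ["Drag cars", "Drag racing cars"]

def resolve(query):
    """Map a user's keyword to an authorized heading via UF references."""
    q = query.lower()
    for heading, entry in authority.items():
        if q == heading.lower() or q in (uf.lower() for uf in entry["UF"]):
            return heading
    return None

print(resolve("Cars (Automobiles)"))  # -> 'Automobiles'
print(resolve("Drag cars"))           # -> None: the searcher gets nothing back
```

Uncomment the proposed references and the second lookup returns “Dragsters,” which is the whole argument for adding them.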

January 28, 2012

YapMap: Breck’s Fun New Project to Improve Search

Filed under: Interface Research/Design,Search Interface,Searching — Patrick Durusau @ 7:32 pm

YapMap: Breck’s Fun New Project to Improve Search

From the post:

What I like about the user interface is that threads can be browsed easily–I have spent hours on remote controlled airplane forums reading every post because it is quite difficult to find relevant information within a thread. The color coding and summary views are quite helpful in eliminating irrelevant posts.

My first job is to get query spell checking rolling. Next is search optimized for the challenges of thread based postings. The fact that relevance of a post to a query is a function of a thread is very interesting. I will hopefully get to do some discourse analysis as well.

I will continue to run Alias-i/LingPipe. The YapMap involvement is just too fun a project to pass up given that I get to build a fancy search and discovery tool.

What do you think about the thread browsing capabilities?

I am sympathetic to the “reading every post” problem but I am not sure threading helps, at least not completely.

It doesn’t help with posters like me who make “off-thread” comments, which may be exactly the ones you are looking for.

Comments about the interface?

December 7, 2011

Distributed User Interfaces: Collaboration and Usability

Filed under: Conferences,Interface Research/Design,Users — Patrick Durusau @ 8:10 pm

2nd Workshop on Distributed User Interfaces: Collaboration and Usability (CHI 2012 Workshop)

Important Dates:

  • Submission Deadline: January 13th, 2012
  • Author Notification: February 10th, 2012
  • Camera-Ready Deadline: April 1st, 2012
  • Workshop Date: May 5th or 6th, 2012 (to be confirmed)

From the website:

Distributed User Interfaces (DUIs) have recently become a new field of research and development in Human-Computer Interaction (HCI). The DUIs have brought about drastic changes affecting the way interactive systems are conceived. DUIs have gone beyond the fact that user interfaces are controlled by a single end user on the same computing platform in the same environment.

Traditional interaction is focused on the use of mobile devices such as smartphones, tablets, laptops, and so on, tearing apart other environmental interaction resources such as large screens and multi-tactile displays, or tables. Under a collaborative scenario, users sharing common goals may take advantage of DUIs to carry out their tasks because they provide a shared environment where they are allowed to manipulate information in the same space at the same time. Under this hypothesis, collaborative DUIs scenarios open new challenges to usability evaluation techniques and methods.

Thus, the goal of this workshop is to promote the discussion about the emerging topic of DUIs, answering a set of key questions: how collaboration can be improved by using DUIs?, in which situations a DUI is suitable to ease the collaboration among users?, how usability standards can be employed to evaluate the usability of systems based on DUIs?

Topics of Interest:

  • Conceptual models for DUIs
  • DUIs on ubiquitous environments
  • Distributed User Interface design
  • Public display interaction and DUIs
  • DUIs and coupled displays
  • DUIs and ambient intelligence
  • Human factors in DUIs design
  • Collaboration and DUIs
  • Usability evaluation in DUIs
  • DUIs on learning environment

If you aren’t already dealing with distributed topic map interfaces and collaboration issues, you will be.

December 4, 2011

Digital Methods

Filed under: Graphs,Interface Research/Design,Web Applications,WWW — Patrick Durusau @ 8:16 pm

Digital Methods

From the website:

Welcome to the Digital Methods course, which is a focused section of the more expansive Digital Methods wiki. The Digital Methods course consists of seven units with digital research protocols, specially developed tools, tutorials as well as sample projects. In particular this course is dedicated to how else links, Websites, engines and other digital objects and spaces may be studied, if methods were to follow the medium, as opposed to importing standard methods from the social sciences more generally, including surveys, interviews and observation. Here digital methods are central. Short literature reviews are followed by distinctive digital methods approaches, step-by-step guides and exemplary projects.

Jack Park forwarded this link. A site that merits careful exploration. You will find things that you did not expect. Much like using the WWW. 😉

Curious what parts of it you find to be the most useful/interesting?

The section on digital tools is my current favorite. I suspect that may change as I continue to explore the site.

Enjoy!

December 3, 2011

A quick study of Scholar-ly Citation

Filed under: HCIR,Interface Research/Design,Searching — Patrick Durusau @ 8:22 pm

A quick study of Scholar-ly Citation by Gene Golovchinsky.

From the post:

Google recently unveiled Citations, its extension to Google Scholar that helps people to organize the papers and patents they wrote and to keep track of citations to them. You can edit metadata that wasn’t parsed correctly, merge or split references, connect to co-authors’ citation pages, etc. Cool stuff. When it comes to using this tool for information seeking, however, we’re back to that ol’ Google command line. Sigh.

Gene covers use of the Citations interface in some detail and then offers suggestions and pointers to resources that could help Google create a better interface.

Can’t say whether Google will take Gene’s advice or not. If you are smart, when you are designing an interface for similar material, you will.

Or as Gene concludes:

In short, Google seems to have taken the lessons from general web search, and applied them to Google Scholar, with predictable results. Instead, they should look at Google Scholar as an opportunity to learn about HCIR, about exploratory search with long-running, evolving information needs, and to apply those lessons to the web search interface.

November 29, 2011

Wakanda

Filed under: Interface Research/Design,Javascript — Patrick Durusau @ 9:12 pm

Wakanda

From the documentation page:

Wakanda is an open-source platform that allows you to develop business web applications. It provides a unified stack running on JavaScript from end-to-end:

  • Cross-platform and cloud-ready on the back end
  • Fully functional, go-anywhere desktop, mobile and tablet apps on the front end

You gain the ability to create browser-based data applications that are as fast, stable, and capable as native client/server solutions are on the desktop.

and:

Notice that no code is generated behind your back: no SQL statements, no binaries, …
What you write is what you get.

I’m not sure that’s a good thing but the name rang a bell and I found an earlier post, Berlin Buzzwords 2011, that just has a slide deck on it.

It’s in Developer Preview 2 (is that pre-pre-alpha or some other designation?) now.

Comments? Anyone looking at this for interface type issues?

I’m the first to admit that most interfaces disappoint but that isn’t because of the underlying technology. Most interfaces disappoint because they are designed as artifacts of the underlying technology.

A well-designed green screen would find faster acceptance than any number of current interfaces. (Note I said a well-designed green screen.)

November 28, 2011

3D Ecosystem Globe Grows on #cop17 Tweets

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 7:09 pm

CNN Ecosphere: 3D Ecosystem Globe Grows on #cop17 Tweets from information aesthetics

From the post:

The goal of CNN’s Ecosphere [cnn-ecosphere.com] by Minivegas and Stinkdigital is a real-time Twitter visualization that aims to reveal how the online discussion is evolving around the topic of climate change. More specifically, the visualization aggregates all Twitter messages on the topic of #cop17 (in case you wonder, this is an abbreviation for “The 17th Conference of the Parties (COP17) to the United Nations Framework Convention on Climate Change (UNFCCC)”.

The online visualization consists of an interactive 3D globe, described as a “lush digital ecosystem” that closely resembles the look and behavior of real plants and trees in nature. In practice, the virtual plants in the 3D Ecosphere grow from those tweets that are tagged with #COP17. Each tweet about climate change feeds into a plant representing that specific topic or discussion, causing it to grow a little more.

I don’t know that the DoD (US) would be interested in that level of transparency, but one can imagine such an info-graphic tracking inventory by service, item, etc., and showing the last point at which the inventory was accounted for. It would give the GAO a place to start looking for some of it.
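Strip away the 3D rendering and the growth mechanic is an aggregation step: count tagged tweets per discussion topic and let the count drive each plant’s size. A rough sketch with invented tweets and topics (nothing here uses the actual CNN/Twitter pipeline):

```python
# Sketch: turn a stream of #cop17 tweets into per-topic "growth" counts that
# a renderer could map to plant sizes. Tweets and topics are invented.
from collections import Counter

tweets = [
    "#cop17 negotiators debate the green climate fund",
    "Sea level rise projections revised upward #cop17",
    "Green climate fund talks stall again #cop17",
]
TOPICS = ["green climate fund", "sea level", "deforestation"]

growth = Counter()
for tweet in tweets:
    text = tweet.lower()
    for topic in TOPICS:
        if topic in text:
            growth[topic] += 1   # each matching tweet grows that plant a little

for topic, size in growth.most_common():
    print(f"{topic}: size {size}")
```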

What is a Dashboard?

Filed under: Dashboard,Interface Research/Design — Patrick Durusau @ 7:08 pm

What is a Dashboard? – Defining dashboards, visual analysis tools and other data presentation media by Alexander ‘Sandy’ Chiang.

From the post:

To reiterate, there are typically four types of presentation media: dashboards, visual analysis tools, scorecards, and reports. These are all visual representations of data that help people identify correlations, trends, outliers (anomalies), patterns, and business conditions. However, they all have their own unique attributes.

What do you think? Are there four for business purposes or do other domains offer more choices? If so, how would you distinguish them from those defined here?

Just curious. I can imagine flogging one of these to a business client who was choosing based on experience with these four choices. Hard to choose what you have not seen. But beyond that, say in government circles, do these hold true?

November 24, 2011

FactLab

Filed under: Data,Data Source,Interface Research/Design — Patrick Durusau @ 3:55 pm

FactLab

From the webpage:

Factlab collects official stats from around the world, bringing together the World Bank, UN, the EU and the US Census Bureau. How does it work for you – and what can you do with the data?

From the guardian in the UK.

Very impressive and interactive site.

I don’t agree with their philosophical assumptions about “facts,” but nonetheless, a number of potential clients do. So long as they are paying the freight, facts they are. 😉

What topics science lovers link to the most

Filed under: Interface Research/Design,Visualization — Patrick Durusau @ 3:41 pm

What topics science lovers link to the most

From FlowingData, a visualization by Hilary Mason, chief scientist at bitly, of links to 600 science pages and the pages people visited next.

Ask interesting questions and sometimes you get interesting “answers” or at least observations.

When I see this sort of graphic, it just screams “interface,” even if not suitable for everyone.

November 23, 2011

Google Plugin for Eclipse (GPE) is Now Open Source

Filed under: Cloud Computing,Eclipse,Interface Research/Design,Java — Patrick Durusau @ 7:41 pm

Google Plugin for Eclipse (GPE) is Now Open Source by Eric Clayberg.

From the post:

Today is quite a milestone for the Google Plugin for Eclipse (GPE). Our team is very happy to announce that all of GPE (including GWT Designer) is open source under the Eclipse Public License (EPL) v1.0. GPE is a set of software development tools that enables Java developers to quickly design, build, optimize, and deploy cloud-based applications using the Google Web Toolkit (GWT), Speed Tracer, App Engine, and other Google Cloud services.

….

As of today, all of the code is available directly from the new GPE project and GWT Designer project on Google Code. Note that GWT Designer itself is based upon the WindowBuilder open source project at Eclipse.org (contributed by Google last year). We will be adopting the same guidelines for contributing code used by the GWT project.

Important for the reasons given but also one possible model for topic map services. What if your topic map services were hosted in the cloud and developers could write against against it? That is they would not have to concern themselves with the niceties of topic maps but simply request the information of interest to them, using tools you have provided to make that easier for them.

Take for example the Statement of Disbursements that I covered recently. If that were hosted as a topic map in the cloud, a developer, say working for a restaurant promoter, might want to query the topic map for who frequents eateries in a particular area. They are not concerned with the merging that has to take place between various budgets and the alignment of those merges with individuals, etc. They are looking for a list of places, with the House members who frequent each one listed alphabetically after it.
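As a sketch of what that developer experience might look like, here is a hypothetical client call; the endpoint, parameters, and response shape are all invented for illustration:

```python
# Hypothetical client call against a cloud-hosted topic map service.
# The URL, parameters, and JSON shape are invented; the point is that the
# developer asks a question and never sees how the budget data was merged.
import requests

BASE = "https://example.org/topicmaps/house-disbursements"   # hypothetical

def eateries_by_member(zip_code):
    resp = requests.get(
        f"{BASE}/query",
        params={"association": "dined-at", "area": zip_code, "group_by": "member"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. {"Member A": ["Cafe X", "Diner Y"], ...}

if __name__ == "__main__":
    for member, places in sorted(eateries_by_member("20515").items()):
        print(member, "->", ", ".join(places))
```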

November 20, 2011

These Aren’t the Sites You’re Looking For: Building Modern Web Apps

Filed under: HTML,Interface Research/Design — Patrick Durusau @ 4:09 pm

These Aren’t the Sites You’re Looking For: Building Modern Web Apps

Interesting promo for HTML5, which is a developing way to deliver interaction with a topic map.

The presentation does not focus on use of user feedback, the absence of which can leave you with a “really cool” interface that no one outside the development team really likes. To no small degree, it is good interface design with users that tells the tale, not how the interface is seen to work on the “other” side of the screen.

BTW, the slides go out of their way to promote the Chrome browser. Browser usage statistics, you do the math. Marketing is a matter of numbers, not religion.

If you are experimenting with HTML5 as a means to interact with a topic map engine, would appreciate a note when you are ready to go public.

