Archive for the ‘HCIR’ Category

What Is the Relationship Between HCI Research and UX Practice?

Sunday, December 14th, 2014

What Is the Relationship Between HCI Research and UX Practice? by Stuart Reeves

From the post:

Human-computer interaction (HCI) is a rapidly expanding academic research domain. Academic institutions conduct most HCI research—in the US, UK, Europe, Australasia, and Japan, with growth in Southeast Asia and China. HCI research often occurs in Computer Science departments, but retains its historically strong relationship to Psychology and Human Factors. Plus, there are several large, prominent corporations that both conduct HCI research themselves and engage with the academic research community—for example, Microsoft Research, PARC, and Google.

If you aren’t concerned with the relationship between HCI research and UX practice, you should be.

I was in a meeting discussing the addition of RDFa to ODF when a W3C expert commented that the difficulty users have with RDFa syntax was a “user problem.”

Not to pick on RDFa alone: I think many of us in the topic map camp felt that users weren’t putting enough effort into learning topic maps. (I will only confess that for myself. Others can speak for themselves.)

Anytime an advocate and/or developer takes the view that syntax, interfaces or interaction with a program is a “user problem,” they are pointing the stick the wrong way.

They should be pointing at the developers, designers, and advocates who have not made interaction with their software intuitive for the targeted audience.

If your program is a LaTeX macro targeted at physicists who eat LaTeX for breakfast, lunch and dinner, that’s one audience.

If your program is an editing application targeted at users crippled by the typical office suite menus, then you had best make different choices.

That is assuming that use of your application is your measure of success.

Otherwise you can strive to be the second longest-running non-profitable software project in history (Xanadu, started in 1960, holds first place).

Rather than being right, or saving the world, or any of the other …ologies, I would prefer to have software that users find useful and do in fact use.

Use is a precondition to any software or paradigm changing the world.


PS: Don’t get me wrong, Xanadu is a great project but its adoption of web browsers as a means of delivery is a mistake. True, they are everywhere, but they are also subject to the crippled design of web security, which prevents transclusion. That ties you to a server, where the NSA can more conveniently scoop up your content.

Better would be a document browser that uses web protocols but ignores web security rules, thus enabling client-side transclusion. Fork one of the open source browsers and be done with it. Only use digitally signed PDFs or documents from particular sources. Once utility is demonstrated in a PDF-only universe, the demand will grow for extending it to other sources as well.

True, some EU/US trade delegates and others will get caught in phishing schemes but I consider that grounds for dismissal and forfeiture of all retirement benefits. (Yes, I retain a certain degree of users be damned but not about UI/UX experiences. 😉 )

My method of avoiding phishing schemes is to never follow links in emails. If there is an offer I want to look at, I log into the site directly from my browser, not via the email link. Even for valid messages, which they rarely are.

I first saw this in a tweet by Raffaele Boiano.

CHI2013 [Warning: Cognitive Overload Ahead]

Tuesday, May 14th, 2013

I have commented on several papers from CHI2013 that Enrico Bertini posted to his blog.

I wasn’t aware of the difficulty Enrico must have had coming up with his short list!

Take a look at the day-by-day schedule for CHI2013.

You will gravitate to some papers more than others. But I haven’t seen any slots that don’t have interesting material.

It may be an oversight on my part, but I did not see any obvious links to the presentations/papers.

Definitely a resource to return to over and over again.

HCIR 2012 papers published!

Thursday, November 8th, 2012

HCIR 2012 papers published! by Gene Golovchinsky.

Gene calls attention to four papers from the HCIR Symposium:

Great looking set of papers!

HCIR 2012 Challenge: People Search

Monday, July 9th, 2012

HCIR 2012 Challenge: People Search by Daniel Tunkelang.

From the post:

As we get ready for the Sixth Symposium on Human-Computer Interaction and Information Retrieval this October in Cambridge, MA, people around the world are working on their entries for the third HCIR Challenge.

Daniel reviews the results from HCIR 2010 (exploring news archives) and HCIR 2011 (information availability) and introduces the current challenge, people search.

The people search tasks include “hiring,” “assembling a conference program,” and “finding people to deliver patent research or expert testimony.”

I wonder if any of the disharmony relationship sites are going to sponsor teams?

HCIR 2012 Symposium

Saturday, May 5th, 2012

HCIR 2012 Symposium

Important Dates:

  • Submission deadline (position and research papers): Sunday, July 29
  • HCIR Challenge:
    • Request access to corpus: Friday, June 1
    • Freeze system and submit brief description: Friday, August 31
    • Submit videos or screenshots demonstrating systems on example tasks: Friday, September 14
    • Live demonstrations at symposium: October 4-5
  • Notification date for position and research papers:
    Thursday, September 6
  • Final versions of accepted papers due: Sunday, September 16
  • Presentations and poster session at symposium: Thursday-Friday, October 4-5

Gene Golovchinsky writes:

We are happy to announce that the 2012 Human-Computer Information Retrieval Symposium (HCIR 2012) will be held in Cambridge, Massachusetts October 4 – 5, 2012. The HCIR series of workshops has provided a venue for discussion of ongoing research on a range of topics related to interactive information retrieval, including interaction techniques, evaluation, models and algorithms for information retrieval, visual design, user modeling, etc. The focus of these meetings has been to bring together people from industry and academia for short presentations and in-depth discussion. Attendance has grown steadily since the first meeting, and as a result this year we have decided to modify the structure of the meeting to accommodate the increasing demand for participation.

To this end, this year’s event has been expanded to two days to allow more time for presentations and for discussion. In addition to the position papers and challenge reports from previous years, we are introducing a new submission category, the archival paper. Archival papers will be peer-reviewed to a rigorous standard comparable to first-tier conference submissions, and the accepted papers will be published on and indexed in the ACM Digital Library.

It’s Massachusetts in October (think Fall colors) and it sounds like a great conference.

DUALIST: Utility for Active Learning with Instances and Semantic Terms

Wednesday, April 18th, 2012

DUALIST: Utility for Active Learning with Instances and Semantic Terms

From the webpage:

DUALIST is an interactive machine learning system for quickly building classifiers for text processing tasks. It does so by asking “questions” of a human “teacher” in the form of both data instances (e.g., text documents) and features (e.g., words or phrases). It uses active learning and semi-supervised learning to build text-based classifiers at interactive speed.

(video demo omitted)

The goals of this project are threefold:

  1. A practical tool to facilitate annotation/learning in text analysis projects.
  2. A framework to facilitate research in interactive and multi-modal active learning. This includes enabling actual user experiments with the GUI (as opposed to simulated experiments, which are pervasive in the literature but sometimes inconclusive for use in practice) and exploring HCI issues, as well as supporting new dual supervision algorithms which are fast enough to be interactive, accurate enough to be useful, and might make more appropriate modeling assumptions than multinomial naive Bayes (the current underlying model).
  3. A starting point for more sophisticated interactive learning scenarios that combine multiple “beyond supervised learning” strategies. See the proceedings of the recent ICML 2011 workshop on this topic.

This could be quite useful for authoring a topic map across a corpus of materials, with interactive recognition of occurrences of subjects, etc.
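To make the “asking questions of a human teacher” idea concrete, here is a minimal sketch of the kind of pool-based active-learning loop DUALIST is built around: a multinomial naive Bayes classifier (DUALIST’s stated underlying model) is trained on a few labeled documents, and the system then picks the unlabeled document it is least confident about as the next question for the teacher. The corpus, labels, and least-confidence sampling strategy here are illustrative assumptions, not DUALIST’s actual code.

```python
# Sketch: pool-based active learning with multinomial naive Bayes.
# Toy corpus and labels are invented for illustration.
import math
from collections import Counter

def train_nb(docs, labels):
    """Fit multinomial naive Bayes with add-one smoothing."""
    classes = set(labels)
    vocab = {w for d in docs for w in d.split()}
    priors, word_counts, totals = {}, {}, {}
    for c in classes:
        cdocs = [d for d, y in zip(docs, labels) if y == c]
        priors[c] = len(cdocs) / len(docs)
        counts = Counter(w for d in cdocs for w in d.split())
        word_counts[c] = counts
        totals[c] = sum(counts.values())
    return priors, word_counts, totals, vocab

def posterior(model, doc):
    """Class distribution for one document, normalized from log space."""
    priors, word_counts, totals, vocab = model
    logp = {}
    for c in priors:
        lp = math.log(priors[c])
        for w in doc.split():
            lp += math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
        logp[c] = lp
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    return {c: math.exp(v - m) / z for c, v in logp.items()}

def most_uncertain(model, pool):
    """Least-confidence sampling: query the doc whose top class is weakest."""
    return min(pool, key=lambda d: max(posterior(model, d).values()))

labeled = ["great paper strong results", "weak evaluation poor writing"]
labels = ["accept", "reject"]
pool = ["strong results clear writing", "poor results", "paper about nothing"]

model = train_nb(labeled, labels)
query = most_uncertain(model, pool)  # the doc the human "teacher" labels next
```

After the teacher labels the queried document, it moves from the pool into the labeled set and the model is retrained; DUALIST’s twist is that the teacher can also label features (words or phrases) directly, not just instances.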

Sponsored in part by the folks at DARPA. Unlike Al Gore, they did build the Internet.

Excellent Papers for 2011 (Google)

Friday, March 23rd, 2012

Excellent Papers for 2011 (Google)

Corinna Cortes and Alfred Spector of Google Research have collected great papers published by Googlers in 2011.

To be sure there are the obligatory papers on searching and natural language processing but there are also papers on audio processing, human-computer interfaces, multimedia, systems and other topics.

Many of these will be the subjects of separate posts in the future. For now, peruse at your leisure and sing out when you see one of special interest.

Designing Search (part 1): Entering the query

Thursday, January 19th, 2012

Designing Search (part 1): Entering the query by Tony Russell-Rose.

From the post:

In an earlier post we reviewed models of information seeking, from an early focus on documents and queries through to a more nuanced understanding of search as an information journey driven by dynamic information needs. While each model emphasizes different aspects of the search process, what they share is the principle that search begins with an information need which is articulated in some form of query. What follows below is the first in a mini-series of articles exploring the process of query formulation, starting with the most ubiquitous of design elements: the search box.

If you are designing or using search interfaces, you will benefit from reading this post.

Suggestion: Don’t jump to the summary and best practices. Tony’s analysis is just as informative as the conclusions he reaches.


Citeology: Visualizing the Relationships between Research Publications

Sunday, December 25th, 2011

From the post:

Justin Matejka at Autodesk Research has recently released the sophisticated visualization “Citeology: Visualizing Paper Genealogy”. The visualization shows the 3,502 unique academic research papers that were published at CHI and UIST, two of the most renowned human-computer interaction (HCI) conferences, between the years 1982 and 2010.

All the articles are listed by year and sorted with the most cited papers in the middle, while the 11,699 citations that connect the articles to one another are represented by curved lines. Selecting a single paper colors the papers from the past that it referenced in blue, in addition to the future articles which referenced it, in brownish-red. The resulting graphs can be explored as a low-res interactive screen or as a high-res, static PDF graph.

Interesting visualization but what does it mean for one paper to cite another?

I was spoiled by the granularity of legal decision indexing, at least for United States decisions, that broke cases down by issues. So that you could separate out a case being cited for a jurisdictional issue versus the same case being cited on a damages issue. I realize it took a large number of very clever editors (now I assume assisted by computers) to create such an index but it made use of the vast archives of legal decisions possible.

I suppose my question is: Why does one paper cite another? To agree with some fact finding or to disagree? If either, which fact(s)? To extend, support or correct some technique? If so, which one? For example, so that I could trace papers that extend the Patricia trie as opposed to those that cite it in passing. It would certainly make research in any number of areas much easier and possibly more effective.
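The selection behavior Citeology describes, coloring a paper’s ancestors blue and its descendants brownish-red, reduces to two traversals of a citation graph, one along citation edges and one along their reverses. A toy sketch of that step, with invented paper names and edges purely for illustration:

```python
# Sketch: given a citation graph, collect the earlier papers a selected
# paper draws on (Citeology's blue) and the later papers that build on it
# (Citeology's brownish-red). Papers and edges are invented examples.
from collections import deque

def reachable(graph, start):
    """All papers reachable from `start` by following edges in `graph`."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# cites[p] = papers that p references; cited_by is the reverse direction
cites = {"C": ["A", "B"], "D": ["C"], "E": ["C"]}
cited_by = {}
for paper, refs in cites.items():
    for ref in refs:
        cited_by.setdefault(ref, []).append(paper)

ancestry = reachable(cites, "C")        # earlier work: colored blue
descendants = reachable(cited_by, "C")  # later work: colored brownish-red
```

The kind of issue-level indexing described above would amount to labeling each edge with *why* the citation was made (extends, corrects, cites in passing), so a traversal could be filtered to a single citation type.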

A quick study of Scholar-ly Citation

Saturday, December 3rd, 2011

A quick study of Scholar-ly Citation by Gene Golovchinsky.

From the post:

Google recently unveiled Citations, its extension to Google Scholar that helps people to organize the papers and patents they wrote and to keep track of citations to them. You can edit metadata that wasn’t parsed correctly, merge or split references, connect to co-authors’ citation pages, etc. Cool stuff. When it comes to using this tool for information seeking, however, we’re back to that ol’ Google command line. Sigh.

Gene covers use of the Citations interface in some detail and then offers suggestions and pointers to resources that could help Google create a better interface.

Can’t say whether Google will take Gene’s advice or not. If you are smart, when you are designing an interface for similar material, you will.

Or as Gene concludes:

In short, Google seems to have taken the lessons from general web search, and applied them to Google Scholar, with predictable results. Instead, they should look at Google Scholar as an opportunity to learn about HCIR, about exploratory search with long-running, evolving information needs, and to apply those lessons to the web search interface.

HCIR 2011 keynote

Saturday, November 12th, 2011

HCIR 2011 keynote by Gene Golovchinsky

From the post:

HCIR 2011 took place almost three weeks ago, but I am just getting caught up after a week at CIKM 2011 and an actual almost-no-internet-access vacation. I wanted to start off my reflections on HCIR with a summary of Gary Marchionini‘s keynote, titled “HCIR: Now the Tricky Part.” Gary coined the term “HCIR” and has been a persuasive advocate of the concepts represented by the term. The talk used three case studies of HCIR projects as a lens to focus the audience’s attention on one of the main challenges of HCIR: how to evaluate the systems we build.

The projects reviewed are themselves worthy of separate treatments, at length.

Gene’s summary makes one wish for video of the keynote. Perhaps I have overlooked it? If so, please post the link.