Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

December 6, 2012

History of the Book [Course resources]

Filed under: Books,History,Interface Research/Design — Patrick Durusau @ 11:45 am

History of the Book by Kate Martinson.

From the webpage:

This website consists of information relating to Art 43 – The History of the Book. Participants should consider this site as a learning tool for the class. It will contain updated documents, images for reference, necessary links, class announcements and other information necessary for participation in the course. It will be constantly modified throughout the semester. Questions or problems should be directed to Kate Martinson, or in the event of technical difficulties, to the Help Desk.

A large number of links to images and other materials on writing and book making around the world. From cuneiform tablets to electronic texts.

I encountered it while looking for material on book indexing.

Useful for studying the transmission of and access to information. Which may influence how you design your topic maps.

Grossly oversimplified but consider the labor involved in writing/accessing information on a cuneiform tablet, on a scroll, in a movable type codex or in electronic form.

At each stage the labor becomes less and the amount of recorded information (not the same as useful information) goes up.

Rather than presenting more information to a user, would it be better for topic maps to present less? And/or to make it difficult to add more information?

What if Facebook offered a filter to exclude coffee posts, pictures not taken by the sender, and the like? Would that make it a more useful information stream?
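That sort of exclusion filter is easy to sketch. Here is a minimal, hypothetical version over an information stream; the item shape and the predicates are my invention, not anything Facebook actually offers:

```javascript
// Hypothetical feed items; the shape is invented for illustration.
const feed = [
  { author: "alice", type: "photo", takenBySender: true,  topic: "vacation" },
  { author: "bob",   type: "photo", takenBySender: false, topic: "coffee" },
  { author: "carol", type: "text",  takenBySender: true,  topic: "coffee" },
  { author: "dave",  type: "text",  takenBySender: true,  topic: "news" }
];

// Exclusion rules: each predicate marks an item for removal.
const exclusions = [
  item => item.topic === "coffee",                       // no coffee posts
  item => item.type === "photo" && !item.takenBySender   // no reshared photos
];

// Keep only items that trip none of the exclusion rules.
function filterFeed(items, rules) {
  return items.filter(item => !rules.some(rule => rule(item)));
}

console.log(filterFeed(feed, exclusions).map(i => i.author)); // [ 'alice', 'dave' ]
```

The point of making the rules user-supplied predicates is that "less information" becomes a choice the reader makes, not the publisher.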

How We Read….[Does Your Topic Map Contribute to Information Overload?]

Filed under: Indexing,Information Overload,Interface Research/Design,Usability,Users — Patrick Durusau @ 11:43 am

How we read, not what we read, may be contributing to our information overload by Justin Ellis.

From the post:

Every day, a new app or service arrives with the promise of helping people cut down on the flood of information they receive. It’s the natural result of living in a time when an ever-increasing number of news providers push a constant stream of headlines at us every day.

But what if it’s the ways we choose to read the news — not the glut of news providers — that make us feel overwhelmed? An interesting new study out of the University of Texas looks at the factors that contribute to the concept of information overload, and found that, for some people, the platform on which news is being consumed can make all the difference in whether you feel overwhelmed.

The study, “News and the Overloaded Consumer: Factors Influencing Information Overload Among News Consumers” was conducted by Avery Holton and Iris Chyi. They surveyed more than 750 adults on their digital consumption habits and perceptions of information overload. On the central question of whether they feel overloaded with the amount of news available, 27 percent said “not at all”; everyone else reported some degree of overload.

The results imply that the more constrained the delivery platform, the less overwhelmed users feel. Reading news on a cell phone sits at one extreme; the links and videos of Facebook sit at the other.

Which makes me curious about information interfaces in general and topic map interfaces in particular.

Does the traditional topic map interface (think Omnigator) contribute to a feeling of information overload?

If so, how would you alter that display to offer the user less information by default but allow its expansion upon request?

Compare a book index, which offers sparse information on a subject that can be expanded by following a pointer to its fuller treatment.

I don’t think replicating a print index with hyperlinks in place of traditional references is the best solution but it might be a starting place for consideration.
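As a starting place for that consideration, "less by default, more on request" can be sketched as a collapsible index entry: show only the term and a locator count until the user asks for the full set of pointers. The data model here is invented for illustration:

```javascript
// A book-index-style entry: sparse by default, expandable on request.
const index = {
  "topic maps": { locators: ["p. 12", "p. 47", "p. 103"], expanded: false },
  "indexing":   { locators: ["p. 5", "p. 88"],            expanded: false }
};

// Default view: just the term and a hint of how much more there is to see.
function summary(term, entry) {
  return entry.expanded
    ? `${term}: ${entry.locators.join(", ")}`
    : `${term} (${entry.locators.length} references) [+]`;
}

// Expansion on request: flip the flag and re-render.
function expand(term) {
  index[term].expanded = true;
  return summary(term, index[term]);
}

console.log(summary("topic maps", index["topic maps"])); // topic maps (3 references) [+]
console.log(expand("topic maps")); // topic maps: p. 12, p. 47, p. 103
```

The same pattern generalizes to topics, roles and associations: render the count, not the content, until asked.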

December 4, 2012

node-webkit

Filed under: CSS3,HTML5,Interface Research/Design,Javascript — Patrick Durusau @ 10:14 am

node-webkit

From the webpage:

node-webkit is an app runtime based on Chromium and node.js. You can write native apps in HTML and Javascript with node-webkit. It also lets you call Node.js modules directly from the DOM and enables a new way of writing native applications with all Web technologies.
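A minimal node-webkit app is just a web page (`index.html`) plus a manifest (`package.json`) telling the runtime which page to open. A sketch, with all values illustrative:

```json
{
  "name": "hello-nw",
  "main": "index.html",
  "window": { "title": "Hello node-webkit", "width": 800, "height": 600 }
}
```

In `index.html`, a page script can then mix DOM and Node.js freely — e.g. `document.write(require('os').platform())` — which is the blend of web and native the description above is pointing at.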

Will HTML5 and Javascript, via apps like node-webkit, free users from developer-based interfaces?

Developer-based interfaces are intended to be useful for others, or at least I suspect so, but quite often they fall short of the mark.

Apps like node-webkit should encourage rapid prototyping and less reluctance to trash yesterday’s interface code. (I said “should,” whether it will or not remains to be seen.)

Rather than a “full featured” topic map editor, how would you divide the task of authoring a topic map into pieces?

I first saw this in Christophe Lalanne’s A bag of tweets / November 2012.

December 2, 2012

Listen to Your Stakeholders : Sowing seeds for future research

Filed under: Design,Interface Research/Design,Usability,Use Cases,Users — Patrick Durusau @ 5:06 pm

Listen to Your Stakeholders : Sowing seeds for future research by Tomer Sharon.

From the post:

If I needed to summarize this article in one sentence, I’d say: “Shut up, listen, and then start talking.”

User experience practitioners who are also excellent interviewers know that listening is a key aspect of a successful interview. By keeping your mouth shut you reduce the risk of verbal foibles and are in a better position to absorb information. When you are concentrating on absorbing information, you can begin to identify research opportunities and effectively sow seeds for future research.

When you discuss future UX research with your stakeholders you want to collect pure, unbiased data and turn it into useful information that will help you pitch and get buy-in for future research activities. As in end-user interviews, in stakeholder interviews a word, a gesture, or even a blink or a certain body posture can bias an interviewee and add flaws to the data you collect. Let’s discuss several aspects of listening to your stakeholders when you talk with them about UX research. You will quickly see how these are similar to techniques you apply when interviewing users.

Stakeholders are our clients, whether internal or external to our organization. These are people who need to believe in what we do so they will act on research results and fund future research. We all have a stake in product development. They have a stake in UX research.

Tomer’s advice doesn’t require hardware or software. It does require wetware and some social interaction skills.

If you are successful with the repeated phrase technique, ping me. (“These aren’t the droids you are looking for.”) I have a phrase for them that starts with a routing number. 😉

November 27, 2012

Top 25 Web Design Blogs of 2012

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 1:38 pm

25 Of The Most Useful Web Design Blogs Of This Year by Bogdan Sandu.

I kept the listing order but added the names of the sites/blogs, reasoning that those are easier to recognize than bare URLs.

Smashing Magazine http://www.smashingmagazine.com/

speckyboy Design Magazine http://speckyboy.com/

The David Walsh Blog http://davidwalsh.name/

Tutorialzine http://tutorialzine.com/

Design Your Way http://www.designyourway.net/blog/

CSS – Tricks http://css-tricks.com/

HONGKIAT.COM http://www.hongkiat.com/blog/

onextrapixel http://www.onextrapixel.com/

Noupe http://www.noupe.com/

Web Designer Wall http://webdesignerwall.com/

Webdesigner Depot http://www.webdesignerdepot.com/

WPtuts+ http://wp.tutsplus.com/

Vandelay Design http://vandelaydesign.com/blog/

1stWebDesigner http://www.1stwebdesigner.com/

Design Shack http://designshack.net/

Six Revisions http://sixrevisions.com/

Web Designer Ledger (WDL) http://webdesignledger.com/

Nettuts+ http://net.tutsplus.com/

SpyreStudios http://spyrestudios.com/

Webdesigntuts+ http://webdesign.tutsplus.com/

Digital Telepathy Blog http://www.dtelepathy.com/blog/

Abduzeedo http://abduzeedo.com/

Codrops http://tympanus.net/codrops/

Awwwards http://www.awwwards.com/blog/

Designmodo http://designmodo.com/

November 26, 2012

UILLD 2013 — User interaction built on library linked data

Filed under: Interface Research/Design,Library,Linked Data,Usability,Users — Patrick Durusau @ 4:48 pm

UILLD 2013: Workshop on User interaction built on library linked data (UILLD) Pre-conference to the 79th World Library and Information Conference, Jurong Regional Library, Singapore.

Important Dates:

Paper submission deadline: February 28, 2013
Acceptance notification: May 15, 2013
Camera-ready versions of accepted papers: June 30, 2013
Workshop date: August 16, 2013

From the webpage:

The quantity of Linked Data published by libraries is increasing dramatically: Following the lead of the National Library of Sweden (2008), several libraries and library networks have begun to publish authority files and bibliographic information as linked (open) data. However, applications that consume this data are not yet widespread. Particularly, there is a lack of methods for integration of Linked Data from multiple sources and its presentation in appropriate end user interfaces. Existing services tend to build on one or two well integrated datasets – often from the same data supplier – and do not actively use the links provided to other datasets within or outside of the library or cultural heritage sector to provide a better user experience.

CALL FOR PAPERS

The main objective of this workshop/pre-conference is to provide a platform for discussion of deployed services, concepts, and approaches for consuming Linked Data from libraries and other cultural heritage institutions. Special attention will be given to papers presenting working end user interfaces using Linked Data from both cultural heritage institutions (including libraries) and other datasets.

For further information about the workshop, please contact the workshops chairs at uilld2013@gmail.com

In connection with this workshop, see also: IFLA World Library and Information Congress 79th IFLA General Conference and Assembly.

I first saw this in a tweet by Ivan Herman.

November 25, 2012

Code Maven and programming for teens [TMs for pre-teens/teens?]

Filed under: Games,Interface Research/Design,Programming,Teaching — Patrick Durusau @ 1:01 pm

Code Maven and programming for teens by Greg Linden.

From the post:

I recently launched Code Maven from Crunchzilla. It helps teens learn a little about what they can do if they learn more about programming.

A lot of teens are curious about programming these days, but don’t end up doing any. And, it’s true, if you are a teen who wants to learn programming, you either have to use tutorials, books, and classes made for adults (which have a heavy focus on syntax and are slow to let you do anything) or high level tools that let you build games but teach a specialized programming language you can’t use anywhere else. Maybe something else might be useful to help more teens get started and get interested.

Code Maven lets teens learn a little about how to program, starting with basic concepts such as loops then rapidly getting into fractals, animation, physics, and games. In every lesson, all the code is there — in some cases, a complete physics engine with gravity, frame rate, friction, and other code you can modify — and it is all live Javascript, so the impact of any change is immediate. It’s a fun way to explore what programming can do.

Code Maven is a curious blend of a game and a tutorial. Like a tutorial, it’s step-by-step, and there’s not-too-big, not-too-small challenges at each step. Like a game, it’s fun, addictive, and experimentation can yield exciting (and often very cool) results. I hope you and your friends like it. Please try Code Maven, tell your friends about it, and, if you have suggestions or feedback, please e-mail me at maven@crunchzilla.com

Greg is also responsible for Code Monster, appropriate for introducing programming to kids 9-14. Code Maven targets teens 13-18, plus adults.

Curious if you know of other projects of this type?

Suspect it is effective in part because of the immediate feedback. Not to mention effective authoring/creation of the interface!

Something you should share with others.

Reminds me of the reason OS vendors almost give away academic software. If a student knows “your” system and not another, which one has the easier learning curve when they leave school?

What does that suggest to you about promoting a semantic technology like topic maps?

Infinite Jukebox plays your favorite songs forever

Filed under: Interface Research/Design,Music,Navigation,Similarity — Patrick Durusau @ 11:51 am

Infinite Jukebox plays your favorite songs forever by Nathan Yau.

From the post:

You know those songs that you love so much that you cry because they’re over? Well, cry no more with the Infinite Jukebox by Paul Lamere. Inspired by Infinite Gangnam Style, the Infinite Jukebox lets you upload a song, and it’ll figure out how to cut the beats and piece them back together for a version of that song that goes forever.

Requires advanced web audio support, so you need to fire up a recent version of Chrome or Safari. (I am on Ubuntu, so I can’t tell you about IE. In a VM, perhaps?)

I tried it with Metallica’s Unforgiven.

Very impressive, although that assessment will vary based on your taste in music.

Would make an interesting interface for exploring textual features.

Imagine calculating features and navigating automatically, with some pseudo-randomness mixed in, so that you encounter data or text you would not otherwise have seen.
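A toy sketch of that idea: compute pairwise similarity between segments, link the close ones, then walk the collection, occasionally jumping across a similarity link instead of proceeding linearly. The one-number "feature" and the threshold are arbitrary choices for illustration, not how the Infinite Jukebox itself works:

```javascript
// Toy segments with a single numeric "feature" each.
const segments = [
  { id: 0, feature: 0.10 }, { id: 1, feature: 0.12 },
  { id: 2, feature: 0.90 }, { id: 3, feature: 0.11 },
  { id: 4, feature: 0.88 }
];

// Link segments whose features fall within a threshold of each other.
function similarityLinks(items, threshold) {
  const links = new Map(items.map(s => [s.id, []]));
  for (const a of items)
    for (const b of items)
      if (a.id !== b.id && Math.abs(a.feature - b.feature) < threshold)
        links.get(a.id).push(b.id);
  return links;
}

// Walk: usually move to the next segment, sometimes jump to a similar one.
function infiniteWalk(items, links, steps, jumpProb, rand = Math.random) {
  let pos = 0;
  const path = [pos];
  for (let i = 0; i < steps; i++) {
    const neighbors = links.get(pos);
    if (neighbors.length > 0 && rand() < jumpProb) {
      pos = neighbors[Math.floor(rand() * neighbors.length)];
    } else {
      pos = (pos + 1) % items.length;  // linear order, wrapping around
    }
    path.push(pos);
  }
  return path;
}

const links = similarityLinks(segments, 0.05);
console.log(infiniteWalk(segments, links, 10, 0.3));
```

With text, the "feature" would be something like a term vector per passage; the walk then becomes serendipitous reading.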

Many would argue we navigate with intention and rational purpose, but to be honest, that’s comfort analysis. It’s an explanation we use to compliment ourselves. (See Thinking, Fast and Slow.) Research suggests decision making is complex and almost entirely non-rational.

Complexification: Is ElasticSearch Making a Case for a Google Search Solution?

Filed under: ElasticSearch,Interface Research/Design,Search Interface,Searching — Patrick Durusau @ 10:15 am

Complexification: Is ElasticSearch Making a Case for a Google Search Solution? by Stephen Arnold.

From the post:

I don’t have any dealings with Google, the GOOG, or Googzilla (a word I coined in the years before the installation of the predator skeleton on the wizard zone campus). In the briefings I once endured about the GSA (Google speak for the Google Search Appliance), I recall three business principles imparted to me; to wit:

  1. Search is far too complicated. The Google business proposition was and is that the GSA and other Googley things are easy to install, maintain, use, and love.
  2. Information technology people in organizations can often be like a stuck brake on a sports car. The institutionalized approach to enterprise software drags down the performance of the organization information technology is supposed to serve.
  3. The enterprise search vendors are behind the curve.

Now the assertions from the 2004 salad days of Google are only partially correct today. As everyone with a colleague under 25 years of age knows, Google is the go to solution for information. A number of large companies have embraced Google’s all-knowing, paternalistic approach to digital information. However, others—many others, in fact—have not.

I won’t repeat Stephen’s barbs at ElasticSearch but his point applies to search interfaces and approaches in general.

Is your search application driving business towards simpler solutions? (If the simpler solution isn’t yours, isn’t that the wrong direction?)

Designing for Consumer Search Behaviour [Descriptive vs. Prescriptive]

Filed under: Interface Research/Design,Search Behavior,Usability,Users — Patrick Durusau @ 9:24 am

Designing for Consumer Search Behaviour by Tony Russell-Rose.

From the post:

A short while ago I posted the slides to my talk at HCIR 2012 on Designing for Consumer Search Behaviour. Finally, as promised, here is the associated paper, which is co-authored with Stephann Makri (and is available as a pdf in the proceedings). This paper takes the ideas and concepts introduced in A Model of Consumer Search Behaviour and explores their practical design implications. As always, comments and feedback welcome :)

ABSTRACT

In order to design better search experiences, we need to understand the complexities of human information-seeking behaviour. In this paper, we propose a model of information behavior based on the needs of users of consumer-oriented websites and search applications. The model consists of a set of search modes users employ to satisfy their information search and discovery goals. We present design suggestions for how each of these modes can be supported in existing interactive systems, focusing in particular on those that have been supported in interesting or novel ways.

Tony uses nine (9) categories to classify consumer search behavior:

1. Locate….

2. Verify….

3. Monitor….

4. Compare….

5. Comprehend….

6. Explore….

7. Analyze….

8. Evaluate….

9. Synthesize….

The details will help you be a better search interface designer so see Tony’s post for the details on each category.

My point is that his nine categories are based on observation of, and research on, consumer behaviour. A descriptive approach to consumer search behaviour, not a prescriptive one.

In some ideal world, perhaps consumers would understand why X is a better approach to Y, but attracting users is done in present world, not an ideal one.

Think of it this way:

Every time an interface requires training of or explanation to a consumer, you have lost a percentage of the potential audience share. Some you may recover but a certain percentage is lost forever.

Ready to go through your latest interface, pencil and paper in hand to add up the training/explanation points?

November 18, 2012

Level Up: Study Reveals Keys to Gamer Loyalty [Tips For TM Interfaces]

Filed under: Interface Research/Design,Marketing,Usability,Users — Patrick Durusau @ 11:24 am

Level Up: Study Reveals Keys to Gamer Loyalty

For topic maps that aspire to be common meeting places, there are a number of lessons in this study. The study is forthcoming but quoting from the news coverage:

One strategy found that giving players more control and ownership of their character increased loyalty. The second strategy showed that gamers who played cooperatively and worked with other gamers in “guilds” built loyalty and social identity.

“To build a player’s feeling of ownership towards its character, game makers should provide equal opportunities for any character to win a battle,” says Sanders. “They should also build more selective or elaborate chat rooms and guild features to help players socialize.”

In an MMORPG, players share experiences, earn rewards and interact with others in an online world that is ever-present. It’s known as a “persistent-state-world” because even when a gamer is not playing, millions of others around the globe are.

Some MMORPGs operate on a subscription model where gamers pay a monthly fee to access the game world, while others use the free-to-play model where access to the game is free but may feature advertising, additional content through a paid subscription or optional purchases of in-game items or currency.

The average MMORPG gamer spends 22 hours per week playing.

Research on loyalty has found that increasing customer retention by as little as 5 percent can increase profits by 25 to 95 percent, Sanders points out.

So, how would you like to have people paying to use your topic map site 22 hours per week?

There are challenges in adapting these strategies to a topic map context but that would be your value-add.

I first saw this at ScienceDaily.

The study will be published in the International Journal of Electronic Commerce.

That link is: http://www.ijec-web.org/. For the benefit of ScienceDaily and the University at Buffalo.

Either they were unable to find that link or are unfamiliar with the practice of placing hyperlinks in HTML texts to aid readers in locating additional resources.

November 16, 2012

Phrase Detectives

Filed under: Annotation,Games,Interface Research/Design,Linguistics — Patrick Durusau @ 5:21 am

Phrase Detectives

This annotation game was also mentioned in Bob Carpenter’s Another Linguistic Corpus Collection Game, but it merits separate mention.

From the description:

Welcome to Phrase Detectives

Lovers of literature, grammar and language, this is the place where you can work together to improve future generations of technology. By indicating relationships between words and phrases you will help to create a resource that is rich in linguistic information.

It is easy to see how this could be adapted to identification of subjects, roles and associations in texts.

And in a particular context, the interest would be in capturing usage in that context, not the wider world.

Definitely has potential as a topic map authoring interface.

Another Linguistic Corpus Collection Game

Filed under: Annotation,Games,Interface Research/Design — Patrick Durusau @ 5:11 am

Another Linguistic Corpus Collection Game by Bob Carpenter

From the post:

Johan Bos and his crew at the University of Groningen have a new suite of games aimed at linguistic data collection. You can find them at:

http://www.wordrobe.org/

Wordrobe is currently hosting four games. Twins is aimed at part-of-speech tagging, Senses is for word sense annotation, Pointers for coref data, and Names for proper name classification.

One of the neat things about Wordrobe is that they try to elicit some notion of confidence by allowing users to “bet” on their answers.
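That betting mechanic maps naturally onto confidence-weighted aggregation: rather than counting one vote per player, weight each answer by the stake placed behind it. A minimal sketch; the scoring scheme here is my invention, not Wordrobe's:

```javascript
// Each annotation: a player's chosen label plus a bet (confidence weight).
const annotations = [
  { player: "p1", label: "noun", bet: 3 },
  { player: "p2", label: "verb", bet: 1 },
  { player: "p3", label: "noun", bet: 2 },
  { player: "p4", label: "verb", bet: 1 }
];

// Sum the bets per label and return the best-supported one.
function aggregate(votes) {
  const totals = {};
  for (const { label, bet } of votes)
    totals[label] = (totals[label] || 0) + bet;
  const best = Object.entries(totals).sort((a, b) => b[1] - a[1])[0];
  return { label: best[0], weight: best[1], totals };
}

console.log(aggregate(annotations)); // noun wins, 5 to 2
```

In an enterprise topic map setting, the same aggregation would let confident identifications of a subject outweigh hesitant ones.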

Used here with a linguistic data collection but there is no reason why such games would not work in other contexts.

For instance, in an enterprise environment seeking to collect information for construction of a topic map. An alternative to awkward interviews where you try to elicit intuitive knowledge from users.

Create a game using their documents and give meaningful awards, extra vacation time for instance.

November 15, 2012

Five User Experience Lessons from Tom Hanks

Filed under: Interface Research/Design,Usability,Users — Patrick Durusau @ 6:39 pm

Five User Experience Lessons from Tom Hanks by Steve Tengler.

From the post:

Some of you might work for companies that have not figured it out. They might still be pondering, “Why should we care about user experience?” Maybe they don’t care at all. Maybe they’ve lucked into a strange vortex where customers are accepting of unpleasant interactions and misguided designs.

If you’re that lucky, stop reading this article and go buy a lottery ticket. If, on the other hand, you work at any company with a product, website, or application within which a customer might fail or succeed, you should pause to understand how the strategic failings of some (e.g. Research In Motion, Yahoo, or Sony) caused them to be leapfrogged by the vision of others (e.g. Apple, Google).

But delineating the underpinnings of user experience clearly for everyone is not an easy task. Algorithms, axioms, and antonyms abound. My frequent reference-point is pop culture; something to which folks can relate. I’ve already touched on UX lessons from Tom Cruise and Johnny Depp, but a thirsty person crawling through the desert of knowledge needs more than two swigs, so today’s user experience lessons are five taken from the canon of Tom Hanks.

Another touchdown by Steve Tengler!

I have seen at least some of the movies (the older ones) that he mentions but his creativity in relating them to UI design is amazing.

I will have to comment and suggest he post lessons based on Kim Kardashian. 😉

Dueling and Design…

Filed under: Design,Interface Research/Design,Usability,Users — Patrick Durusau @ 5:52 pm

Dueling and Design : How fencing and UX are quite alike by Ben Self.

From the post:

The other day I was leaving the office and mentally switching gears from the design work I had been doing all day to the fencing class I was about to teach that night. During my commute, I thought to myself, “It’s time to stop thinking like the end user and start thinking like a fencer.”

Suddenly realizing the similarities between my job and my hobby, I found myself pondering the connections between fencing and UX Design further over the next few weeks. I discovered more parallels than I had expected, although the first thought I had was that the goals are almost completely opposite.

When I am fencing, I want to frustrate my opponent and keep him from accomplishing his goals. When I am designing an interface, I want to encourage the user and help them accomplish their goals. It occurred to me, however, that while the final results are polar opposites, many of the methods used for assessing how best to achieve those opposite ends are actually very similar.

All these years I thought interfaces were designed to prevent me from accomplishing my goals. An even closer parallel to fencing. 😉

Ben does an excellent job of drawing parallels but I am particularly fond of his suggestion that you know your opponent/users. It’s hard work, which is probably why you don’t see it very often in practice.

What other activity do you have that illustrates principles for an interface, communication with others, or other semantic type activities?

November 8, 2012

HCIR 2012 papers published!

Filed under: HCIR,Interface Research/Design,Search Interface,Searching — Patrick Durusau @ 3:45 pm

HCIR 2012 papers published! by Gene Golovchinsky.

Gene calls attention to four papers from the HCIR Symposium.

Great-looking set of papers!

November 3, 2012

Reducing/Reinforcing Confirmation Bias in TM Interfaces

Filed under: Confidence Bias,Interface Research/Design,Users — Patrick Durusau @ 9:52 am

Recent research has demonstrated that a difficult-to-read font can reduce the influence of the “confirmation bias.”

Wikipedia on confirmation bias:

Confirmation bias (also called confirmatory bias or myside bias) is a tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. For example, in reading about gun control, people usually prefer sources that affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations).

A series of experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. In certain situations, this tendency can bias people’s conclusions. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another explanation is that people show confirmation bias because they are weighing up the costs of being wrong, rather than investigating in a neutral, scientific way.

Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. Poor decisions due to these biases have been found in military, political, and organizational contexts.

[one footnote reference removed]

The topic maps consumed by users can either help avoid or reinforce (depends on your agenda) the impact of the confirmation bias.

The popular account of the research:

Liberals and conservatives who are polarized on certain politically charged subjects become more moderate when reading political arguments in a difficult-to-read font, researchers report in a new study. Likewise, people with induced bias for or against a defendant in a mock trial are less likely to act on that bias if they have to struggle to read the evidence against him.

The study is the first to use difficult-to-read materials to disrupt what researchers call the “confirmation bias,” the tendency to selectively see only arguments that support what you already believe, psychology professor Jesse Preston said.

The new research, reported in the Journal of Experimental Social Psychology, is one of two studies to show that subtle manipulations that affect how people take in information can reduce political polarization. The other study, which explores attitudes toward a Muslim community center near the World Trade Center site, is described in a paper in the journal Social Psychological and Personality Science.

By asking participants to read an overtly political argument about capital punishment in a challenging font, the researchers sought to disrupt participants’ usual attitudes to the subject, said graduate student Ivan Hernandez, who led the capital punishment/mock trial study with University of Illinois psychology professor Jesse Preston.

The intervention worked. Liberals and conservatives who read the argument in an easy-to-read font were much more polarized on the subject than those who had to slog through the difficult version. [Difficult-To-Read Font Reduces Political Polarity, Study Finds]

Or if you are interested in the full monty:

“Disfluency disrupts the confirmation bias.” by Ivan Hernandez and Jesse Lee Preston. Journal of Experimental Social Psychology Volume 49, Issue 1, January 2013, Pages 178–182.

Abstract:

One difficulty in persuasion is overcoming the confirmation bias, where people selectively seek evidence that is consistent with their prior beliefs and expectations. This biased search for information allows people to analyze new information in an efficient, but shallow way. The present research discusses how experienced difficultly in processing (disfluency) can reduce the confirmation bias by promoting careful, analytic processing. In two studies, participants with prior attitudes on an issue became less extreme after reading an argument on the issues in a disfluent format. The change occurred for both naturally occurring attitudes (i.e. political ideology) and experimentally assigned attitudes (i.e. positivity toward a court defendant). Importantly, disfluency did not reduce confirmation biases when participants were under cognitive load, suggesting that cognitive resources are necessary to overcome these biases. Overall, these results suggest that changing the style of an argument’s presentation can lead to attitude change by promoting more comprehensive consideration of opposing views.

I like the term “disfluency,” although “a disfluency on both your houses” doesn’t have the ring of “a plague on both your houses,” does it?*

Must be the confirmation bias.

* Romeo And Juliet Act 3, scene 1, 90–92

November 2, 2012

Designing to Build Trust : The factors that matter

Filed under: Design,Interface Research/Design — Patrick Durusau @ 7:13 pm

Designing to Build Trust : The factors that matter by Ilana Westerman.

From the post:

More than ever in the digital domain, companies rely on design to communicate with their customers. Because the experience of visiting a company website is by nature remote—lacking any direct interaction with any tangible assets offered—the company’s digital presence often defines a user’s impressions of the company as a whole. In this context, how customers experience not only the website but also the way the site handles their personal information becomes key to shaping their overall impression of the brand.

In this article, we will dive into the nature of trusted online experiences, why they are important, design attributes that we know people trust, and how design creates trust and distrust. We’ll illustrate the issues around designing for trust with a sample prototype of a healthcare exchange design and user reactions to it.

Research to consider when building an interface for your topic map based application.

Google’s Hybrid Approach to Research [Lessons For Topic Map Research?]

Filed under: Interface Research/Design,Research Methods — Patrick Durusau @ 4:10 pm

Google’s Hybrid Approach to Research by Alfred Spector, Peter Norvig, and Slav Petrov.

From the start of the article:

In this Viewpoint, we describe how we organize computer science research at Google. We focus on how we integrate research and development and discuss the benefits and risks of our approach. The challenge in organizing R&D is great because CS is an increasingly broad and diverse field. It combines aspects of mathematical reasoning, engineering methodology, and the empirical approaches of the scientific method. The empirical components are clearly on the upswing, in part because the computer systems we construct have become so large that analytic techniques cannot properly describe their properties, because the systems now dynamically adjust to the difficult-to-predict needs of a diverse user community, and because the systems can learn from vast datasets and large numbers of interactive sessions that provide continuous feedback.

We have also noted that CS is an expanding sphere, where the core of the field (theory, operating systems, and so forth) continues to grow in depth, while the field keeps expanding into neighboring application areas. Research results come not only from universities, but also from companies, both large and small. The way research results are disseminated is also evolving and the peer-reviewed paper is under threat as the dominant dissemination method. Open source releases, standards specifications, data releases, and novel commercial systems that set new standards upon which others then build are increasingly important.

This seems particularly useful:

Thus, we have structured the Google environment as one where new ideas can be rapidly verified by small teams through large-scale experiments on real data, rather than just debated. The small-team approach benefits from the services model, which enables a few engineers to create new systems and put them in front of users.

Particularly in terms of research and development for topic maps.

I confess to a fondness for the “…just debated” side, but point out that developers aren’t users, whether for interface requirements or software capabilities.

Selling what you have debated or written isn’t the same thing as selling what customers want. You can verify that lesson with the Semantic Web folks.

Semantic impedance is going to grow along with “big data.”

Topic maps need to be poised to deliver a higher ROI in resolving semantic impedance than ad hoc solutions. And to deliver that ROI in the context of “big data” tools.

Research dead ahead.

October 25, 2012

Ditch Traditional Wireframes

Filed under: Design,Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 4:18 pm

Ditch Traditional Wireframes by Sergio Nouvel.

From the post:

Wireframes have played an increasingly leading role in the modern Web development process. They provide a simple way of validating user interface and layout and are cheaper and faster to produce than a final visual comp. However, most of the methods and techniques used to create them are far from being efficient, contradicting the principles and values that made wireframing useful in first place.

While this article is not about getting rid of the wireframing process itself, now is a good time for questioning and improving some of the materials and deliverables that have become de facto standards in the UX field. To make this point clear, let’s do a quick review of the types of wireframes commonly used.

Especially appropriate since I mentioned the Health Design Challenge [$50K in Prizes – Deadline 30th Nov 2012] earlier today. You are likely to be using one or more of these techniques for your entry.

Hopefully Sergio’s comments will make your usage more productive and effective!

October 23, 2012

The Ultimate User Experience

Filed under: Image Recognition,Interface Research/Design,Marketing,Usability,Users — Patrick Durusau @ 4:55 am

The Ultimate User Experience by Tim R. Todish.

From the post:

Today, more people have mobile phones than have electricity or safe drinking water. In India, there are more cell phones than toilets! We all have access to incredible technology, and as designers and developers, we have the opportunity to use this pervasive technology in powerful ways that can change people’s lives.

In fact, a single individual can now create an application that can literally change the lives of people across the globe. With that in mind, I’m going to highlight some examples of designers and developers using their craft to help improve the lives of people around the world in the hope that you will be encouraged to find ways to do the same with your own skills and talents.

I may have to get a cell phone to get a better understanding of its potential when combined with topic maps.

For example, the “hot” night spots are well known in New York City. What if a distributed information network imaged guests as they arrived/left and maintained a real time map of images + locations (no names)?

That would make a nice subscription service, perhaps with faceted searching by physical characteristics.

October 22, 2012

Boy Scout Expulsions – Oil Drop Semantics

Data on decades of Boy Scout expulsions released by Nathan Yau.

Nathan points to an interactive map, a searchable list and downloadable data from the Los Angeles Times, drawn from Boy Scouts of America records on people expelled from the Boy Scouts for suspicions of sexual abuse.

The LA Times has done a great job with this data set (and the story) but it also illustrates a limitation in current data practices.

All of these cases occurred in jurisdictions with laws against sexual abuse of children.

If a local sheriff or district attorney reads about this database, how do they tie it into their databases?

Not as simple as saying “topic map,” if that’s what you were anticipating.

Among the issues that would need addressing:

  • Confidentiality – Law enforcement and courts have their own rules about sharing data.
  • Incompatible System Semantics – The typical problem that is encountered in business enterprises, writ large. Every jurisdiction is likely to have its own rules, semantics and files.
  • Incompatible Data Semantics – Assuming systems talk to each other, the content and its semantics will vary from one jurisdiction to another.
  • Subjects Evading Identification – The subjects (sorry!) in question are trying to avoid identification.

You could get funding for a conference of police administrators to discuss how to organize additional meetings to discuss potential avenues for data sharing and get the DHS to fund a large screen digital TV (not for the meeting, just to have one). Consultants could wax and whine about possible solutions if someday you decided on one.

I have a different suggestion: Grab your records guru and meet up with an overlapping or neighboring jurisdiction’s data guru and one of their guys. For lunch.

Bring note pads and sample records. Talk about how you share information between officers (that is, between you and your counterpart). Let the data gurus talk about how they can share data.

Practical questions: how do you share data, and what does your data mean now? Make no global decisions, award no medals for attending, etc.

Do that once or twice a month for six months. Write down what worked, what didn’t work (just as important). Each of you picks an additional partner. Share what you have learned.

The documenting and practice at information sharing will be the foundation for more formal information sharing systems. Systems based on documented sharing practices, not how administrators imagine sharing works.

Think of it as “oil drop semantics.”

Start small and increase only as more drops are added.

The goal isn’t a uniform semantic across law enforcement but understanding what is being said. That understanding can be mapped into a topic map or other information sharing strategy. But understanding comes first, mapping second.

October 21, 2012

Successful Dashboard Design

Filed under: Dashboard,Interface Research/Design — Patrick Durusau @ 3:59 am

Following formulaic rules will not make you a good author. Studying the work of good authors may, no guarantees, give you the skills to be a good author. The same is true of interface/dashboard design.

Examples of good dashboard design and why certain elements are thought to be exemplary can be found in: 2012 Perceptual Edge Dashboard Design Competition: We Have a Winner! by Stephen Few.

Unlike a traditional topic map node/arc display, these designs allow quick comparison of information between subjects.

Even if a topic map underlies the presentation, the nature of the data and expectations of your users will (should) be driving the presentation.

Looking forward to the appearance of the second edition of Information Dashboard Design (by Stephen Few), which will incorporate examples from this contest.

October 19, 2012

Seeing beyond reading: a survey on visual text analytics

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 4:11 pm

Seeing beyond reading: a survey on visual text analytics by Aretha B. Alencar, Maria Cristina F. de Oliveira, Fernando V. Paulovich. (Alencar, A. B., de Oliveira, M. C. F. and Paulovich, F. V. (2012), Seeing beyond reading: a survey on visual text analytics. WIREs Data Mining Knowl Discov, 2: 476–492. doi: 10.1002/widm.1071)

Abstract:

We review recent visualization techniques aimed at supporting tasks that require the analysis of text documents, from approaches targeted at visually summarizing the relevant content of a single document to those aimed at assisting exploratory investigation of whole collections of documents. Techniques are organized considering their target input material—either single texts or collections of texts—and their focus, which may be at displaying content, emphasizing relevant relationships, highlighting the temporal evolution of a document or collection, or helping users to handle results from a query posed to a search engine. We describe the approaches adopted by distinct techniques and briefly review the strategies they employ to obtain meaningful text models, discuss how they extract the information required to produce representative visualizations, the tasks they intend to support and the interaction issues involved, and strengths and limitations. Finally, we show a summary of techniques, highlighting their goals and distinguishing characteristics. We also briefly discuss some open problems and research directions in the fields of visual text mining and text analytics.

Papers like this one make me wish for a high resolution color printer. 😉

With three tables of representations, twenty-nine (29) entries and sixty (60) footnotes, it isn’t really possible to provide a useful summary beyond quoting the author’s conclusion:

This survey has provided an overview of the lively field of visual text analytics. The variety of tasks and situations addressed introduces a demand for many domain-specific and/or task-oriented solutions. Nonetheless, despite the impressive number of contributions and wide variety of approaches identified in the literature, the field is still in its infancy. Deployment of existing and novel techniques to a wider audience of users performing real-life tasks remains a challenge that requires tackling multiple issues.

One issue is to foster tighter integration with traditional text mining tasks and algorithms. Various contributions are found in the literature reporting usage of visual interfaces or visualizations to support interpretation of the output of traditional text mining algorithms. Still, visualization has the potential to give users a much more active role in text mining tasks and related activities, and concrete examples of such usage are still scarce. Many rich possibilities remain open to further exploration. Better visual text analytics will also likely require more sophisticated text models, possibly integrating results and tools from research on natural language processing. Finally, providing usable tools also requires addressing several issues related to scalability, i.e., the capability of effectively handling very large text documents and textual collections.

However, what I can do is track down the cited literature and point back to this article as the origin for my searching.

It merits wider readership than its publisher’s access policies are likely to permit.
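Most of the techniques the survey covers rest on some numeric model of text. As a toy illustration (not any specific system from the survey), here is a sketch of the kind of model many of these visualizations start from: TF-IDF weighting plus cosine similarity, which supplies the document-to-document distances a layout algorithm would then place on screen. The sample documents are invented for the example.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Tokenize, count term frequencies, weight by inverse document frequency.
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency: how many docs contain each term
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vec = {t: (c / len(toks)) * math.log(n / df[t]) for t, c in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(a, b):
    # Cosine similarity between two sparse term-weight dictionaries.
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "topic maps merge subjects across vocabularies",
    "topic maps identify subjects",
    "dashboards display metrics at a glance",
]
vecs = tfidf_vectors(docs)
print(cosine(vecs[0], vecs[1]))  # the two topic-map documents
print(cosine(vecs[0], vecs[2]))  # unrelated documents
```

A visualization layer would feed the resulting similarity matrix into a projection (multidimensional scaling, for instance) to get 2D positions for a collection overview.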

October 18, 2012

Do Presidential Debates Approach Semantic Zero?

ReConstitution recreates debates through transcripts and language processing by Nathan Yau.

From Nathan’s post:

Part data visualization, part experimental typography, ReConstitution 2012 is a live web app linked to the US Presidential Debates. During and after the three debates, language used by the candidates generates a live graphical map of the events. Algorithms track the psychological states of Romney and Obama and compare them to past candidates. The app allows the user to get beyond the punditry and discover the hidden meaning in the words chosen by the candidates.

The visualization does not answer the thorny experimental question: Do presidential debates approach semantic zero?

Well, maybe the technique will improve by the next presidential election.

In the meantime, it was an impressive display of real time processing and analysis of text.

Imagine such an interface that was streaming text for you to choose subjects, associations between subjects, and the like.

Not trying to perfectly code any particular stretch of text but interacting with the flow of the text.

There are goals other than approaching semantic zero.
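For the curious, here is a minimal sketch of the kind of live, lexicon-based tallying an app like ReConstitution might build on. The word lists and speaker labels below are invented for illustration; real systems use much larger, validated lexicons and more sophisticated psychological categories.

```python
from collections import defaultdict

# Hypothetical mini-lexicon; a real system would use validated categories
# with thousands of terms.
LEXICON = {
    "positive": {"growth", "hope", "together", "win"},
    "negative": {"fail", "crisis", "wrong", "lose"},
}

def stream_scores(transcript):
    # transcript: iterable of (speaker, utterance) pairs arriving live.
    # Yields the running per-speaker tally after each utterance, so a
    # visualization can update in real time.
    tally = defaultdict(lambda: defaultdict(int))
    for speaker, utterance in transcript:
        words = utterance.lower().split()
        for category, terms in LEXICON.items():
            tally[speaker][category] += sum(w in terms for w in words)
        yield speaker, dict(tally[speaker])

debate = [
    ("A", "We can win this together"),
    ("B", "Your plan is wrong and will fail"),
    ("A", "There is hope for growth"),
]
for speaker, scores in stream_scores(debate):
    print(speaker, scores)
```

Swap the print for a callback that lets a user tag subjects and associations as the text flows by, and you have the interactive, imperfect-but-live coding described above.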

Designing for Consumer Search Behaviour (slideshow)

Filed under: Interface Research/Design,Search Behavior,Users — Patrick Durusau @ 10:40 am

Designing for Consumer Search Behaviour (slideshow) by Tony Russell-Rose.

From the post:

Here are the slides from the talk I gave recently at HCIR 2012 on Designing for Consumer Search Behaviour. This presentation is the counterpart to the previous one: while A Model of Consumer Search Behaviour introduced the model and described the analytic work that led to it, this talk looks at the practical design implications. In particular, it addresses the observation that although the information retrieval community is blessed with an abundance of analytic models, only a tiny fraction of these make any impression at all on mainstream UX design practice.

Why is this? In part, this may be simply a reflection of imperfect channels of communication between the respective communities. However, I suspect it may also be a by-product of the way researchers are incentivized: with career progression based almost exclusively on citations in peer-reviewed academic journals, it is hard to see what motivation may be left to encourage adoption by other communities such as design practitioners. Yet from a wider perspective, it is precisely this cross-fertilisation that can make the difference between an idea gathering the dust of citations within a closed community and actually having an impact on the mainstream search experiences that we as consumers all encounter.

I have encountered the “cross-community” question before. A major academic organization where I was employed and a non-profit in the field shared members for more than a century.

They had no projects in common in all that time. Each knew about the other, but kept waiting for the “other” one to call first. Eventually they did have a project or two together, but members of communities tend to stay in those communities.

It is a question of a member’s “comfort” zone. How will members of the other community react? Will they be accepting? Judgemental? Once you know, it is hard to go back to ignorance. Best just to stay at home and imagine what it would be like “over there.” Less risky.

You might find members of other communities have the same hopes, fears and dreams that you do. Then what? Hard to diss others when it means dissing yourself.

A cross-over UX design practitioner/researcher poster day, with lots of finger food and tables for ad hoc conversations/demos, would be a nice way to break the ice between the two communities.

Cross-Community? Try Japan, 1980’s, For Success, Today!

Filed under: Interface Research/Design,User Targeting,Users — Patrick Durusau @ 10:37 am

Leveraging the Kano Model for Optimal Results by Jan Moorman.

Jan’s post outlines what you need to know to understand and use a UX model known as the “Kano Model.”

In short, the Kano Model is a way to evaluate how customers (the folks who buy products, not your engineers) feel about product features.

You are ahead of me if you guessed that positive reactions to product features are the goal.

Jan and company returned to the original research. An important point because applying research mechanically will get you mechanical results.

From the post:

You are looking at a list of 18 proposed features for your product. Flat out, 18 are too many to include in the initial release given your deadlines, and you want to identify the optimal subset of these features.

You suspect an executive’s teenager suggested a few. Others you recognize from competitor products. Your gut instinct tells you that none of the 18 features are game changers and you’re getting pushback on investing in upfront generative research.

It’s a problem. What do you do?

You might try what many agile teams and UX professionals are doing: applying a method that first emerged in Japan during the 1980’s called the ‘Kano Model’, used to measure customer emotional reaction to individual features. At projekt202, we’ve had great success in doing just that. Our success emerged from revisiting Kano’s original research and through trial and error. What we discovered is that it really matters how you design and perform a Kano study. It matters how you analyze and visualize the results.

We have also seen how the Kano Model is a powerful tool for communicating the ROI of upfront generative research, and how results from Kano studies inform product roadmap decisions. Overall, Kano studies are a very useful tool to have in our research toolkit.

Definitely an approach to incorporate in UX evaluation.
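For concreteness: a Kano study pairs a “functional” question (how do you feel if the feature is present?) with a “dysfunctional” one (how do you feel if it is absent?) and classifies each answer pair through an evaluation table. Here is a minimal sketch of the standard table; the answer labels and category names are my shorthand, not taken from Jan’s post, and a real study would handle ties and questionable responses more carefully.

```python
from collections import Counter

# Answer scale for both the functional and dysfunctional questions.
SCALE = ("like", "expect", "neutral", "tolerate", "dislike")

def kano_category(functional, dysfunctional):
    # Classify one respondent's answer pair via the Kano evaluation table.
    if functional == "like" and dysfunctional == "dislike":
        return "performance"   # more is better, less is worse
    if functional == "like":
        return "questionable" if dysfunctional == "like" else "attractive"
    if functional == "dislike":
        return "questionable" if dysfunctional == "dislike" else "reverse"
    # functional is expect/neutral/tolerate:
    if dysfunctional == "like":
        return "reverse"       # respondent prefers the feature absent
    if dysfunctional == "dislike":
        return "must-be"       # absence causes dissatisfaction
    return "indifferent"

def classify_feature(responses):
    # responses: list of (functional_answer, dysfunctional_answer) pairs
    # from many respondents; take the most frequent category.
    counts = Counter(kano_category(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]

survey = [("like", "dislike"), ("like", "neutral"), ("like", "dislike")]
print(classify_feature(survey))
```

With 18 candidate features, running this per feature gives a first cut at which are must-bes, which delight, and which can be dropped.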

Open-Sankoré

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 10:37 am

Open-Sankoré

From the website:

Open-Sankoré is a multiplatform, open-source program that is compatible with every type of interactive hardware. It is also translated into many different languages. Its range of tools is adapted to all users: from beginners to experts.

I first saw this at H Open, which noted:

Open Sankoré is open source whiteboard software that offers a drop-in replacement for proprietary alternatives and adds a couple of interesting features such as support for the W3C Widgets specification. This means that users can, for example, embed interactive content which has been developed for Apache Wookie directly on the whiteboard.

Looks useful for instruction and perhaps WG3 meetings.

Any experience with it?

Anyone using one of the Wacom Bamboo tablets with it?

OneZoom Tree of Life Explorer

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 4:46 am

OneZoom Tree of Life Explorer

From the website:

OneZoom is committed to heightening awareness about the diversity of life on earth, its evolutionary history and the threats of extinction. This website allows you to explore the tree of life in a completely new way: it’s like a map, everything is on one page, all you have to do is zoom in and out. OneZoom also provides free, open source, data visualisation tools for science and education, currently focusing on the tree of life.

This is wicked cool! Be sure to watch the video introduction. You will be able to navigate without it but there are hidden bells and whistles.

Should provoke all sorts of ideas about visualizing and exploring data.

See also: OneZoom: A Fractal Explorer for the Tree of Life (Rosindell J, Harmon LJ (2012) OneZoom: A Fractal Explorer for the Tree of Life. PLoS Biol 10(10): e1001406. doi:10.1371/journal.pbio.1001406) for more details on the project.

I first saw this at Science Daily: Tree of Life Branches out Online.

October 15, 2012

What you hear could depend on what your hands are doing [Interface Subtleties]

Filed under: Interface Research/Design — Patrick Durusau @ 4:17 am

What you hear could depend on what your hands are doing

Probably not ready for the front of the interface queue but something you should keep in mind.

There are subtleties of information processing that are difficult to dig out but that you can ignore only at the peril of an interface that doesn’t quite “work,” but no one can say why.

I will have to find the reference but I remember some work years ago where poor word spacing algorithms made text measurably more difficult to read, without the reader being aware of the difference.

What if you had information you would prefer readers not pursue beyond a certain point? Could altering the typography make the cognitive load so high that they would “remember” reading a section but not recall that they quit before understanding it in detail?

How would you detect such a strategy if you encountered it?

From the post:

New research links motor skills and perception, specifically as it relates to a second finding—a new understanding of what the left and right brain hemispheres “hear.” Georgetown University Medical Center researchers say these findings may eventually point to strategies to help stroke patients recover their language abilities, and to improve speech recognition in children with dyslexia.

The study, presented at Neuroscience 2012, the annual meeting of the Society for Neuroscience, is the first to match human behavior with left brain/right brain auditory processing tasks. Before this research, neuroimaging tests had hinted at differences in such processing.

“Language is processed mainly in the left hemisphere, and some have suggested that this is because the left hemisphere specializes in analyzing very rapidly changing sounds,” says the study’s senior investigator, Peter E. Turkeltaub, M.D., Ph.D., a neurologist in the Center for Brain Plasticity and Recovery. This newly created center is a joint program of Georgetown University and MedStar National Rehabilitation Network.

Turkeltaub and his team hid rapidly and slowly changing sounds in background noise and asked 24 volunteers to simply indicate whether they heard the sounds by pressing a button.

“We asked the subjects to respond to sounds hidden in background noise,” Turkeltaub explained. “Each subject was told to use their right hand to respond during the first 20 sounds, then their left hand for the next 20 sounds, then right, then left, and so on.” He says when a subject was using their right hand, they heard the rapidly changing sounds more often than when they used their left hand, and vice versa for the slowly changing sounds.
