Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

October 13, 2012

Five User Experience Lessons from Johnny Depp

Filed under: Authoring Topic Maps,Interface Research/Design,Usability,Users — Patrick Durusau @ 7:01 pm

Five User Experience Lessons from Johnny Depp by Steve Tengler.

Print this post out and pencil in your guesses for the Johnny Depp movies that illustrate these lessons:

Lesson #1: It’s Not About the Ship You Rode In On

Lesson #2: Good UXers Plan Ahead to Assimilate External Content

Lesson #3: Flexibility on Size Helps Win the Battle

Lesson #4: Design for What Your Customer Wants … Not for What You Want

Lesson #5: Tremendous Flexibility Can Lead to User Satisfaction

Then pass a clean copy to the next cubicle and see how they do.

Funny how Lesson #4 keeps coming up.

I had an Old Testament professor who said laws against idol worship were evidence that people were engaged in idol worship. People rarely prohibit what isn’t a problem.

I wonder if #4 keeps coming up because designers keep designing for themselves?

What do you think?

If that is true, then it must be true that authors write for themselves. (Ouch!)

So how do authors discover (or do they) how to write for others?

We know the ones that succeed in the commercial trade by their sales. But that is after the fact and not explanatory.

An important question if you are authoring curated content with a topic map for sale.

October 12, 2012

Mirror, Mirror, on the Screen

Filed under: Interface Research/Design,Usability — Patrick Durusau @ 3:03 pm

Mirror, Mirror, on the Screen by David Moskovic.

From the post:

According to Don Norman (author of Emotional Design: Why We Love (or Hate) Everyday Things) there are three levels of cognitive processing. The visceral level is the most immediate and is the one marketing departments look to when trying to elicit trigger responses and be persuasive. Behavioral processing is the middle level, and is the concern of traditional usability or human factors practitioners designing for ergonomics and ease of use. The third level is reflective processing.

Reflective processing is when our desires for uniqueness and cultural or aesthetic sophistication influence our preferences. Simply put, it is about seeing ourselves positively reflected in the products we use. What that means to individuals and their own self-images is highly subjective (see the picture at upper-left), however—and again according to Norman—designing for reflection is the most powerful way to build long-term product/user relationships.

Unfortunately, reflective processing is often dismissed by interaction designers as a style question they shouldn’t concern themselves with. To be fair, applying superficial style has too often been used in ways that cause major usability issues—a fairly common occurrence with brand websites for consumer packaged goods. One that comes to mind (although perhaps not the most egregious) is Coors.com, with its wood paneling background image where the navigation gets lost. It is superficial style with no reflective trade-off because not only is its usability quite poor, it is also completely product-centric rather than customer-centric. On the flip side, and what seems to be a recurring problem, is that many very usable digital products and services fail to generate the levels of adoption, engagement, and retention their creators were after because they lack that certain je ne sais quoi that connects with users at a deeper level.

The point of this article is to make the case for reflective processing design in a way that does not detract from usability’s chief concerns. When reflection-based design goes deeper than superficial stylization tricks and taps into our reflected sense of self, products become much more rewarding and life-enhancing, and have a higher potential for a more engaged and longer-lasting customer relationship.

Equally important, and deserving of attention from a UX and user-centered design perspective, is the fact that products that successfully address the reflective level are almost unanimously perceived as more intuitive and easier to use. Norman famously makes that case by pointing out how the original iPod click-wheel navigation was perhaps not the most usable solution but was perceived as the easiest because of Apple’s amazing instinct for reflection-based design.

Questions:

1. Does your application connect with your customers at a deeper level?

Or

2. Does your application connect with your developers at a deeper level?

If #2 is yes, hope your developers buy enough copies to keep the company afloat.

Otherwise, work to make the answer to #1 yes.

See David’s post for suggestions.

Ten Reasons Users Won’t Use Your Topic Map

Filed under: Interface Research/Design,Marketing,Usability — Patrick Durusau @ 1:28 pm

Ian Nicholson’s analysis of why business intelligence applications aren’t used equally applies to topic maps and topic map applications.

From: Ten Reasons Your Users Won’t Use Your Business Intelligence Solution.

  • Project Stalled or Went Over Deadline/Budget
  • The Numbers Cannot Be Trusted
  • Reports Take Too Long To Run
  • Requirements Have Changed Since The Project Began
  • The World Has Moved On After Delivery
  • Inadequate Training
  • Delivery Did Not Meet User Expectations
  • Your BI Solution is Not Available to Everyone
  • Reports Too Static – No Self-Serve Reporting
  • Users Simply Won’t Give Up Excel or Whatever It Is They Use

Ian also offers possible solutions to these issues.

Questions:

Do any of the issues sound familiar?

Do the solutions sound viable in a topic maps context?

October 5, 2012

Journal of Experimental Psychology: Applied

Filed under: Interface Research/Design,Language,Psychology — Patrick Durusau @ 2:16 pm

Journal of Experimental Psychology: Applied

From the website:

The mission of the Journal of Experimental Psychology: Applied® is to publish original empirical investigations in experimental psychology that bridge practically oriented problems and psychological theory.

The journal also publishes research aimed at developing and testing of models of cognitive processing or behavior in applied situations, including laboratory and field settings. Occasionally, review articles are considered for publication if they contribute significantly to important topics within applied experimental psychology.

Areas of interest include applications of perception, attention, memory, decision making, reasoning, information processing, problem solving, learning, and skill acquisition. Settings may be industrial (such as human–computer interface design), academic (such as intelligent computer-aided instruction), forensic (such as eyewitness memory), or consumer oriented (such as product instructions).

I browsed several recent issues of the Journal of Experimental Psychology: Applied while researching the Todd Rogers post. Fascinating stuff and some of it will find its way into interfaces or other more “practical” aspects of computer science.

Something to temper the focus on computer facing work.

No computer has ever originated a purchase order or contract. Might not hurt to know something about the entities that do.

October 4, 2012

Google Maps: A Prelude to Broader Predictive Search

Filed under: Interface Research/Design,Mapping,Maps — Patrick Durusau @ 2:01 pm

Google Maps: A Prelude to Broader Predictive Search by Stephen E. Arnold.

From the post:

Short honk. Google’s MoreThanaMap subsite signals an escalation in the map wars. You will want to review the information at www.morethanamap.com. The subsite presents the new look of Google’s more important features and services. The demonstrations are front and center. The focus is on visualization of mashed up data; that is, compound displays. The real time emphasis is clear as well. The links point to developers and another “challenge.” It is clear that Google wants to make it desirable for programmers and other technically savvy individuals to take advantage of Google’s mapping capabilities. After a few clicks, Google has done a good job of making clear that findability and information access shift a map from a location service to a new interface.

You really need to see the demos to appreciate what can be done with the Google Map API.

Although, I remember the flight from Atlanta to Gatwick (London) as being longer than it seems in the demo. 😉

September 23, 2012

Five User Experience Lessons from Tom Cruise

Filed under: Interface Research/Design,Usability,Users — Patrick Durusau @ 3:24 pm

Five User Experience Lessons from Tom Cruise by Steve Tengler.

From the post:

As previously said best by Steve Jobs, “The broader one’s understanding of the human experience, the better designs we will have.” And the better the design, the more your company will thrive.

But how can we clarify some basics of User Experience for the masses? The easiest and obvious point of reference is pop culture; something to which we all can relate. My first inclination was to make this article “Five User Experience Lessons from Star Wars” since, at my core, I am a geek. But that’s like wearing a “KICK ME” sign at recess, so I thought better of it. Instead, I looked to a source of some surprisingly fantastic examples: movie characters played by Tom Cruise. I know, I’m playing up to my female readers, but hey, they represent 51% of the population … so I’m simply demonstrating that understanding your customer persona is part of designing a good user experience!

Tengler’s Five Lessons:

Lesson #1: Social Media Ratings of User Experiences Can Be Powerful

Lesson #2: Arrange Your User Interface around the Urgent Tasks

Lesson #3: Design Your System with a Multimodal Interface

Lesson #4: You Must Design For Human Error Upfront For Usability

Lesson #5: Style Captures the Attention

Whether you are a female reader or not, you will find the movie examples quite useful.

What actor/actress and movies would you choose for these principles?

Walk your users through the lessons and ask them to illustrate the lessons with movies they have seen.

A good way to break the ice for designing a user interface.

September 21, 2012

The First Three Seconds: How Users Are Lost

Filed under: Interface Research/Design,Usability,Users,Visualization — Patrick Durusau @ 2:02 pm

The First Three Seconds: How Users Are Lost by Zac Gery.

From the post:

In the time it takes to read this sentence, someone has viewed this post and moved on. They probably didn’t even read this sentence. Why did they leave? What were they looking for? Users searching on the internet have a short attention span. It is commonly referred to as the “3 Second Rule.” Although not specifically three seconds, the rule accentuates the limited time a website has to make a first impression. The goal of any website is to clarify, then build interest. Interest drives return visits and recommendations. As a user’s visit extends so does the chance for a return visit.

On the web, first impressions start with speed. From the moment users request a web page, they begin to evaluate. Displaying a modern website is a coordinated effort of content, CSS files, JavaScript files, images, and more. Too many requests or large files can increase a website’s load time. Tools such as Firebug, YSlow, Webkit’s Inspector, and Fiddler offer an excellent overview of load times. Browser caching can help with additional requests, but most websites are not afforded a second look. Investigate the number of files required for a web page. Sprites are a great way to reduce multiple image files and overall size. Compression tools can also help to reduce wasted space in JavaScript and CSS files.
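For a crude first pass before opening the tools Zac mentions, you can time a fetch yourself. A minimal Python sketch (the function name and the data: URL demo are my own; real analysis needs the per-resource waterfall those tools provide):

```python
import time
import urllib.request

def time_request(url):
    """Fetch a URL and report elapsed milliseconds and payload size:
    a rough first look at load time before reaching for Firebug,
    YSlow, or Fiddler."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms, len(body)

# In practice you would point this at a live page, e.g.
# time_request("https://example.com/"); a data: URL keeps the
# demo self-contained.
elapsed, size = time_request("data:text/plain,hello")
print(f"{elapsed:.1f} ms for {size} bytes")
```

This measures only the single request, not images, scripts, or render time, which is exactly why the waterfall views matter.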

A little bit longer than Love or Hate in 50 Milliseconds, but it still raises the bar over the thirty (30) second elevator speech.

Are you measuring user reactions to your interfaces in milliseconds?

Or do you ask your manager for their reaction?

Care to guess which test is most often used by successful products?

I first saw this at DZone.

September 20, 2012

“Communicating the User Experience” (Book Review)

Filed under: Documentation,Interface Research/Design,Usability — Patrick Durusau @ 7:47 pm

“Communicating the User Experience” – reviewed by Jane Pyle.

From the post:

I’ll admit it. I haven’t spent a lot of time in my career creating beautiful wireframes. For the past four years I’ve been designing mobile apps for internal use in a large corporation and the first casualty in every project has been design documentation. I’ve been able to successfully communicate my designs using sketches, dry erase boards, and/or rapid prototyping, but the downside of this approach became quite clear when our small team disbanded. As a new team was formed, the frequently asked question of “so where is the documentation for this project” was met with my sheepish gaze.

So I was very curious to read Communicating the User Experience and perhaps learn some practical methods for creating UX documentation on a shoestring time budget. What’s the verdict? Have I seen the documentation light and decided to turn over a new leaf? Read on.

As Jane discovers, there are no shortcuts to documentation, UX or otherwise.

A guide to tools for creating a particular style of documentation can be helpful to beginners, as Jane notes, but not beyond that.

Creating documentation is not a tool-driven activity. It is a more creative activity than the creation of software or an interface.

Software works with deterministic machines and can be tested as such. Documentation has to work with non-deterministic users.

The only test for documentation is whether it is understood by those non-deterministic users.

Rather than facing the harder task of documentation, many prefer to grunt and wave their sharpies in the air.

It may be amusing, but it’s not documentation.

Misinformation: Why It Sticks and How to Fix It

Filed under: Interface Research/Design,Usability — Patrick Durusau @ 4:43 pm

Misinformation: Why It Sticks and How to Fix It

From the post:

Childhood vaccines do not cause autism. Barack Obama was born in the United States. Global warming is confirmed by science. And yet, many people believe claims to the contrary.

Why does that kind of misinformation stick? A new report published in Psychological Science in the Public Interest, a journal of the Association for Psychological Science, explores this phenomenon. Psychological scientist Stephan Lewandowsky of the University of Western Australia and colleagues highlight the cognitive factors that make certain pieces of misinformation so “sticky” and identify some techniques that may be effective in debunking or counteracting erroneous beliefs.

The main reason that misinformation is sticky, according to the researchers, is that rejecting information actually requires cognitive effort. Weighing the plausibility and the source of a message is cognitively more difficult than simply accepting that the message is true — it requires additional motivational and cognitive resources. If the topic isn’t very important to you or you have other things on your mind, misinformation is more likely to take hold.

And when we do take the time to thoughtfully evaluate incoming information, there are only a few features that we are likely to pay attention to: Does the information fit with other things I believe in? Does it make a coherent story with what I already know? Does it come from a credible source? Do others believe it?

Misinformation is especially sticky when it conforms to our preexisting political, religious, or social point of view. Because of this, ideology and personal worldviews can be especially difficult obstacles to overcome.

Useful information for designing interfaces in general and topic maps in particular.

I leave it for others to decide which worldviews support “information,” as opposed to “misinformation.”

But whatever your personal view of some “facts,” the same techniques should serve equally well.

PS: The effort it takes to reject information is a theme explored in Thinking, Fast and Slow.

September 19, 2012

Five Lessons Learned Doing User Research in Asia

Filed under: Interface Research/Design,Usability,Users — Patrick Durusau @ 7:20 pm

Five Lessons Learned Doing User Research in Asia by Carissa Carter.

From the post:

If you have visited any country in Asia recently, you have probably seen it. Turn your head in any direction; stand up; go shopping; or check an app on your phone and you will notice products from Western companies lurking about. Some of these products are nearly identical to their counterparts overseas, and others are brand new, launched specifically for the local market.

As more and more companies are taking their products abroad, the need for user research in these new markets is increasing in importance. I spent a year spanning 2010 and 2011 living in Hong Kong and leading user research campaigns—primarily in China, Japan, and India. Through a healthy balance of trial and error (and more error), I learned a lot about leading these studies in cultures incredibly different than my own. Meta understanding with a bit of methodology mixed in, I offer you my top five lessons learned while conducting and applying user research in Asia.

Successful user interface designs change across cultures.

Is that a clue as to what happens with subject identifications?

September 16, 2012

Sketching User Experiences: The Workbook

Filed under: Design,Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 4:42 pm

Sketching User Experiences: The Workbook By: Saul Greenberg; Sheelagh Carpendale; Nicolai Marquardt; Bill Buxton.

Description:

In Sketching User Experiences: The Workbook, you will learn, through step-by-step instructions and exercises, various sketching methods that will let you express your design ideas about user experiences across time. Collectively, these methods will be your sketching repertoire: a toolkit where you can choose the method most appropriate for developing your ideas, which will help you cultivate a culture of experience-based design and critique in your workplace.

  • Features standalone modules detailing methods and exercises for practitioners who want to learn and develop their sketching skills
  • Extremely practical, with illustrated examples detailing all steps on how to do a method
  • Excellent for individual learning, for classrooms, and for a team that wants to develop a culture of design practice
  • Perfect complement to Buxton’s Sketching User Experiences or any UX text

My first time to encounter this book.

Comments/suggestions?

Similar materials?

Interfaces are as much about mapping as anything we do inside topic maps.

Which implies the ability to map from “your” interface to one I find more congenial, doesn’t it?

September 15, 2012

Blame Google? Different Strategy: Let’s Blame Users! (Not!)

Let me quote from A Simple Guide To Understanding The Searcher Experience by Shari Thurow to start this post:

Web searchers have a responsibility to communicate what they want to find. As a website usability professional, I have the opportunity to observe Web searchers in their natural environments. What I find quite interesting is the “Blame Google” mentality.

I remember a question posed to me during World IA Day this past year. An attendee said that Google constantly gets search results wrong. He used a celebrity’s name as an example.

“I wanted to go to this person’s official website,” he said, “but I never got it in the first page of search results. According to you, it was an informational query. I wanted information about this celebrity.”

I paused. “Well,” I said, “why are you blaming Google when it is clear that you did not communicate what you really wanted?”

“What do you mean?” he said, surprised.

“You just said that you wanted information about this celebrity,” I explained. “You can get that information from a variety of websites. But you also said that you wanted to go to X’s official website. Your intent was clearly navigational. Why didn’t you type in [celebrity name] official website? Then you might have seen your desired website at the top of search results.”

The stunned silence at my response was almost deafening. I broke that silence.

“Don’t blame Google or Yahoo or Bing for your insufficient query formulation,” I said to the audience. “Look in the mirror. Maybe the reason for the poor searcher experience is the person in the mirror…not the search engine.”

People need to learn how to search. Search experts need to teach people how to search. Enough said.

What a novel concept! If the search engine/software doesn’t work, must be the user’s fault!

I can save you a trip down the hall to the marketing department. They are going to tell you that is an insane sales strategy. Satisfying to the geeks in your life but otherwise untenable, from a business perspective.

Remember the stats on using Library of Congress subject headings I posted under Subject Headings and the Semantic Web?

Overall percentages of correct meanings for subject headings in the original order of subdivisions were as follows: children, 32%, adults, 40%, reference 53%, and technical services librarians, 56%.

That is with decades of teaching people to search both manual and automated systems using Library of Congress classification.

Test Question: I have a product to sell. 60% of all my buyers can’t find it with a search engine. Do I:

  • Teach all users everywhere better search techniques?
  • Develop better search engines/interfaces to compensate for potential buyers’ poor searching?

I suspect the “stunned silence” was an audience with greater marketing skills than the speaker.

September 14, 2012

Should We Focus on User Experience?

Should We Focus on User Experience? by Koen Claes.

From the post:

In the next seven minutes or so, this article hopes to convince you that our current notion of UX design mistakenly focuses on experience, and that we should go one step further and focus on the memory of an experience instead.

Studies of behavioral economics have changed my entire perspective on UX design, causing me to question basic tenets. This has led to ponderings like: “Is it possible that trying to create ‘great experiences’ is pointless?” Nobel Prize-winning research seems to hint that it is.

Via concrete examples, additional research sources, and some initial how-to tips, I aim to illustrate why and how we should recalibrate our UX design processes.

You will also like the narrative (with additional resources) from Koen’s presentation at IA Summit 2011, On Why We Should NOT Focus on UX.

The more I learn about the myriad aspects of communication, the more I am amazed that we communicate at all. 😉

September 1, 2012

Web Performance Power Tool: HTTP Archive (HAR)

Filed under: Interface Research/Design,Performance,Web Server — Patrick Durusau @ 2:52 pm

Web Performance Power Tool: HTTP Archive (HAR) by Ilya Grigorik.

From the post:

When it comes to analyzing page performance, the network waterfall tab of your favorite HTTP monitoring tool (e.g. Chrome Dev Tools, Firebug, Fiddler, etc) is arguably the single most useful power tool at our disposal. Now, wouldn’t it be nice if we could export the waterfall for better bug reports, performance monitoring, or later in-depth analysis?

Well, good news, that is precisely what the HTTP Archive (HAR) data format was created for. Even better, chances are, your favorite monitoring tool already knows how to speak in HAR, which opens up a lot of possibilities – let’s explore.

If you are tuning or developing a web interface, there is much here you will find helpful.

The gathering of information for later analysis, by other tools, was what interested me the most.
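As a sketch of that idea: a HAR export is just JSON (a log.entries array in the HAR 1.2 format), so later analysis by other tools can be a few lines of Python. The summarize_har function and the toy two-entry archive below are my own invention, not from Ilya’s post:

```python
def summarize_har(har):
    """Summarize a parsed HAR dict: request count, total response
    bytes, and the slowest request. Field names follow HAR 1.2
    (log.entries, entry.time in ms, response.content.size)."""
    entries = har["log"]["entries"]
    total_bytes = sum(e["response"]["content"].get("size", 0) for e in entries)
    slowest = sorted(entries, key=lambda e: e["time"], reverse=True)
    return {
        "requests": len(entries),
        "total_bytes": total_bytes,
        "slowest_url": slowest[0]["request"]["url"] if slowest else None,
    }

# A toy archive standing in for a real export (which you would
# load with json.load from the exported .har file):
sample = {
    "log": {"entries": [
        {"request": {"url": "https://example.com/"},
         "time": 120.0,
         "response": {"content": {"size": 5120}}},
        {"request": {"url": "https://example.com/app.js"},
         "time": 480.0,
         "response": {"content": {"size": 20480}}},
    ]}
}
print(summarize_har(sample))
```

The point is that once the waterfall is captured as HAR, your bug reports and monitoring scripts can all consume the same file.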

August 30, 2012

HTML5 Boilerplate

Filed under: HTML5,Interface Research/Design — Patrick Durusau @ 2:17 pm

HTML5 Boilerplate

From the website:

HTML5 Boilerplate helps you build fast, robust, and adaptable web apps or sites. Kick-start your project with the combined knowledge and effort of 100’s of developers, all in one little package.

If this helps you roll out test web pages quickly, good.

If you prefer another package, please post a pointer.

The Top 10 Challenges in Extreme-Scale Visual Analytics [Human Bottlenecks and Parking Meters]

Filed under: Analytics,Interface Research/Design,Visualization — Patrick Durusau @ 2:04 pm

The Top 10 Challenges in Extreme-Scale Visual Analytics by Pak Chung Wong, Han-Wei Shen, Christopher R. Johnson, Chaomei Chen, and Robert B. Ross. (Link to PDF. IEEE Computer Graphics and Applications, July-Aug. 2012, pp. 63–67)

The top 10 challenges are:

  1. In Situ Interactive Analysis
  2. User-Driven Data Reduction
  3. Scalability and Multilevel Hierarchy
  4. Representing Evidence and Uncertainty
  5. Heterogeneous-Data Fusion
  6. Data Summarization and Triage for Interactive Query
  7. Analytics of Temporally Evolved Features
  8. The Human Bottleneck
  9. Design and Engineering Development
  10. The Renaissance of Conventional Wisdom

I was amused by #8: The Human Bottleneck, which reads:

Experts predict that all major high-performance computing (HPC) components—power, memory, storage, bandwidth, concurrence, and so on—will improve performance by a factor of 3 to 4,444 by 2018 [2]. Human cognitive capability will certainly remain constant. One challenge is to find alternative ways to compensate for human cognitive weaknesses.

It isn’t clear to me how speed at counting 0’s and 1’s is an indicator of “human cognitive weakness.”

Parking meters stand in the weather day and night. I don’t take that as a commentary on human endurance.

Do you?

A Model of Consumer Search Behaviour (slideshow) [Meta-analysis Anyone?]

Filed under: Interface Research/Design,Search Behavior,Searching — Patrick Durusau @ 1:12 pm

A Model of Consumer Search Behaviour (slideshow) by Tony Russell-Rose.

From the post:

Here are the slides from the talk I gave at EuroHCIR last week on A Model of Consumer Search Behaviour. This talk extends and validates the taxonomy of information search strategies (aka ‘search modes’) presented at last year’s event, but applies it in this instance to the domain of site search, i.e. consumer-oriented websites and search applications. We found that site search users presented significantly different information needs to those of enterprise search, implying some key differences in the information behaviours required to satisfy those needs.

Every so often I see “meta-analysis” used in medical research that combines the data from several clinical trials.

Are you aware of anyone who has performed a meta-analysis upon search behavior research?

Same question but with regard to computer interfaces more generally?

Recline.js

Filed under: Interface Research/Design,Javascript,Web Applications — Patrick Durusau @ 9:41 am

Recline.js

From the documentation:

The Recline Library consists of 3 parts: Models, Backends and Views

Models

Models help you structure your work with data by providing some standard objects such as Dataset and Record – a Dataset being a collection of Records. More »

Backends

Backends connect your Models to data sources (and stores) – for example Google Docs spreadsheets, local CSV files, the DataHub, ElasticSearch etc. More »

Views

Views are user interface components for displaying, editing or interacting with the data. For example, maps, graphs, data grids or a query editor. More »

Recline.js makes trial-and-error with interfaces easy, while you search for the one that users “like” in < 50 milliseconds.

What if you had to hard code every interface change? How quickly would the rule become: users must adapt to the interface?

Not a bad rule, if you want to drive customers to other sites/vendors. (Think about that for a minute, then take Recline.js for a spin.)

August 29, 2012

Love or Hate in 50 Milliseconds

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 4:03 pm

Users love simple and familiar designs – Why websites need to make a great first impression by Javier Bargas-Avila, Senior User Experience Researcher at YouTube UX Research

I knew it didn’t take long to love/hate a website, but…:

I’m sure you’ve experienced this at some point: You click on a link to a website, and after a quick glance you already know you’re not interested, so you click ‘back’ and head elsewhere. How did you make that snap judgment? Did you really read and process enough information to know that this website wasn’t what you were looking for? Or was it something more immediate?

We form first impressions of the people and things we encounter in our daily lives in an extraordinarily short timeframe. We know the first impression a website’s design creates is crucial in capturing users’ interest. In less than 50 milliseconds, users build an initial “gut feeling” that helps them decide whether they’ll stay or leave. This first impression depends on many factors: structure, colors, spacing, symmetry, amount of text, fonts, and more.

As a comparison, the post cites the blink of an eye taking from 100 to 400 milliseconds.

Raises the bar on the 30-second “elevator speech,” doesn’t it?

Pass this on to web page designers, topic map (and other semantic technology) designers, and software interface designers in general.

How would you test a webpage given this time constraint? (Serious question.)
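One serious answer, borrowed from how exposure studies of this kind are typically run: flash a screenshot of each candidate design for a fixed duration (50 ms via a GUI toolkit timer), collect a quick keep-or-leave rating, and compare against longer exposures. A sketch of the scheduling logic in Python (the function name, file names, and durations are my assumptions, not from the post):

```python
import random

def build_trials(screenshots, durations_ms=(50, 500), reps=2, seed=None):
    """Randomized schedule for a flash test: every screenshot is shown
    at every exposure duration, repeated and shuffled, so 50 ms 'gut
    feeling' ratings can be compared against longer looks."""
    rng = random.Random(seed)
    trials = [(shot, d) for shot in screenshots for d in durations_ms] * reps
    rng.shuffle(trials)
    return trials

# Each (image, duration) pair would be displayed with a GUI timer,
# followed by a "stay or leave?" rating prompt.
schedule = build_trials(["home.png", "pricing.png"], seed=1)
print(schedule)
```

Randomizing order and repeating each pairing guards against order effects, which matter when the judgment being measured is itself only 50 ms long.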

August 26, 2012

Designing a Better Sales Pipeline Dashboard

Filed under: Interface Research/Design,Marketing — Patrick Durusau @ 10:50 am

Designing a Better Sales Pipeline Dashboard by Zach Gemignani

From the post:

What would your perfect sales pipeline dashboard look like?

The tools that so effectively capture sales information (Salesforce, PipelineDeals, Highrise) tend to do a pretty lousy job of providing visibility into that very same data. The reporting or analytics is often just a table with lots of filtering features. That doesn’t begin to answer important questions like:

  • What is the value of the pipeline?
  • Where is it performing efficiently? Where is it failing?
  • How are things likely to change in the next month?

I’ve been annoyed by this deficiency in sales dashboards for a while. Ken and I put together some thoughts about what a better sales pipeline information interface would look like and how it would function. Here’s what we came up with:

A sales dashboard that at least two people like better than most offerings.

What would you add to this dashboard that topic maps would be able to supply?

Yes, I am divorcing the notion of “interface” from “topic map.”

Interface being how a user accomplishes a task or accesses information.

Completely orthogonal to the underlying technology.

Exposing the underlying technology demonstrates how clever we are.

Is not succeeding in the marketplace clever?*


*Ask yourself how many MS Office users can even stumble through a “big block” diagram of how MS Word works.

Compare that number to the number of MS Word users. Express as:

“MS Word users/MS Word users who understand the technology.”

That’s my target ratio for:

“topic map users/topic map users who understand the technology.”

August 12, 2012

X3DOM

Filed under: Graphics,Interface Research/Design,X3DOM — Patrick Durusau @ 12:06 pm

From the about page:

X3DOM (pronounced X-Freedom) is an experimental open source framework and runtime to support the ongoing discussion in the Web3D and W3C communities how an integration of HTML5 and declarative 3D content could look like. It tries to fulfill the current HTML5 specification for declarative 3D content and allows including X3D elements as part of any HTML5 DOM tree.

I had to get past two empty PR releases and finally search the Web for a useful URL.

Even the about page has a great demo. It also has links to more information.

Not stable yet, but it merits your attention for authoring topic map interfaces.

Pointers to your interfaces, topic map or otherwise, using X3DOM, greatly appreciated!

August 2, 2012

Community Based Annotation (mapping?)

Filed under: Annotation,Bioinformatics,Biomedical,Interface Research/Design,Ontology — Patrick Durusau @ 1:51 pm

Enabling authors to annotate their articles is examined in: Assessment of community-submitted ontology annotations from a novel database-journal partnership by Tanya Z. Berardini, Donghui Li, Robert Muller, Raymond Chetty, Larry Ploetz, Shanker Singh, April Wensel and Eva Huala.

Abstract:

As the scientific literature grows, leading to an increasing volume of published experimental data, so does the need to access and analyze this data using computational tools. The most commonly used method to convert published experimental data on gene function into controlled vocabulary annotations relies on a professional curator, employed by a model organism database or a more general resource such as UniProt, to read published articles and compose annotation statements based on the articles’ contents. A more cost-effective and scalable approach capable of capturing gene function data across the whole range of biological research organisms in computable form is urgently needed.

We have analyzed a set of ontology annotations generated through collaborations between the Arabidopsis Information Resource and several plant science journals. Analysis of the submissions entered using the online submission tool shows that most community annotations were well supported and the ontology terms chosen were at an appropriate level of specificity. Of the 503 individual annotations that were submitted, 97% were approved and community submissions captured 72% of all possible annotations. This new method for capturing experimental results in a computable form provides a cost-effective way to greatly increase the available body of annotations without sacrificing annotation quality.

It is encouraging that this annotation effort started with the persons most likely to know the correct answers, authors of the papers in question.

The low initial participation rate (16%), rising to 53% only after an email reminder, was less encouraging.

I suspect that unless and until prior annotation practice (by researchers) becomes a line item on funding requests (how many annotations were accepted by publishers of your prior research?), annotation will remain a low-priority item.

Perhaps I should suggest that as a study area for the NIH?

Publishers, researchers who build annotation software, annotated data sources and their maintainers, are all likely to be interested.

Would you be interested as well?

August 1, 2012

Useful junk?:…

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 10:12 am

Useful junk?: the effects of visual embellishment on comprehension and memorability of charts by Scott Bateman, Regan L. Mandryk, Carl Gutwin, Aaron Genest, David McDine, and Christopher Brooks.

Abstract:

Guidelines for designing information charts (such as bar charts) often state that the presentation should reduce or remove ‘chart junk’ – visual embellishments that are not essential to understanding the data. In contrast, some popular chart designers wrap the presented data in detailed and elaborate imagery, raising the questions of whether this imagery is really as detrimental to understanding as has been proposed, and whether the visual embellishment may have other benefits. To investigate these issues, we conducted an experiment that compared embellished charts with plain ones, and measured both interpretation accuracy and long-term recall. We found that people’s accuracy in describing the embellished charts was no worse than for plain charts, and that their recall after a two-to-three-week gap was significantly better. Although we are cautious about recommending that all charts be produced in this style, our results question some of the premises of the minimalist approach to chart design.

No, I didn’t just happen across this work while reading the morning paper. 😉

I started at Nathan Yau’s Nigel Holmes on explanation graphics and how he got started and followed a link to a Column Five Media interview with Holmes, Nigel Holmes on 50 Years of Designing Infographics, because of a remark from Holmes on Edward Tufte that Nathan quotes:

Recent academic studies have proved many of his theses wrong.

which finally brings us to the article I link to above.

It may be the case that Edward Tufte does better with charts designed with the minimalist approach, but this article shows that other people may do better with other chart design principles.

But that’s the trick isn’t it?

We start from what makes sense to us and then generalize that to be the principle that makes the most sense for everyone.

I fear that is also the case with the design of topic map (and other) interfaces. We start with what works for us and generalize that to “that should work for everyone.”

Hard to hear evidence to the contrary. “If you just try it you will see that it works better than X way.”

I fear the solution is to test interfaces with actual user populations. Perhaps even injecting “randomness” into the design so we can test things we would never think of. Or even give users (shudder) the capacity to draw in controls or arrangements of controls.

You may not like the resulting interface but do you want to market to an audience of < 5 or educate and market to a larger audience? (Ask one of your investors if you are unsure.)
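The "injecting randomness" idea can be sketched as a tiny test harness. Everything here — variant names, the assignment scheme, the outcome tally — is an illustrative assumption, not a recommendation of any particular testing framework:

```python
import random

# Hypothetical interface variants, including machine-generated layouts
# we would never have designed by hand.
VARIANTS = ["minimalist", "embellished", "random_layout_a", "random_layout_b"]

def assign_variant(user_id: str) -> str:
    """Deterministically assign each user to a variant so repeat visits are stable."""
    rng = random.Random(user_id)  # seed on the user id, not global state
    return rng.choice(VARIANTS)

def record_outcome(results: dict, variant: str, task_completed: bool) -> None:
    """Tally (times shown, tasks completed) per variant."""
    shown, done = results.get(variant, (0, 0))
    results[variant] = (shown + 1, done + int(task_completed))

results = {}
for uid, completed in [("u1", True), ("u2", False), ("u3", True)]:
    record_outcome(results, assign_variant(uid), completed)
```

The point of seeding on the user id is that the "randomness" is in which variant a user gets, not in what any one user sees from visit to visit — the variant users actually succeed with is the one that wins, whether or not a designer likes it.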

July 29, 2012

OSCON 2012

OSCON 2012

From the OSCON site: over 4,000 photographs were taken at the MS booth.

I wonder how many of them include Doug?

Drop by the OSCON website after you count photos of Doug. Your efforts at topic mapping will improve from the experience.

What you get from counting photos of Doug is unknown. 😉

July 25, 2012

Beyond The Pie Chart

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 6:31 pm

Beyond The Pie Chart : Creating new visualization tools to reveal treasures in the data by Hunter Whitney.

From the post:

The New Treasure Maps

If a picture’s worth a thousand words, what’s the value of an image representing a terabyte of data?

Much of the vast sea of data flowing around the world every day is left unexplored because the existing tools and charts can’t help us effectively navigate it. Data visualization, interactive infographics, and related visual representation techniques can play a key part in helping people find their way through the wide expanses of data now opening up. There’s a long history of depicting complex information in graphical forms, but the gusher of data now flowing from corporations, governments and scientific research requires more powerful and sophisticated visualization tools to manage it.

Just as a compass needle can give us direction in physical space, a chart line can direct our way through data. As effective as these simple lines may be, they can only take us so far. For many purposes, advanced data visualization methods may never replace Excel, but in our data-saturated world, they might well be the best tools for the job. UX designers can play a key role in creating these new tools and charts. In these treasure maps of data, perhaps UX marks the spot.

Start of what promises to be an interesting series of posts on visualization.

July 23, 2012

Everything Still Looks Like A Graph (but graphs look like maps)

Filed under: Graphs,Interface Research/Design,Visualization — Patrick Durusau @ 9:28 am

Everything Still Looks Like A Graph (but graphs look like maps) by Dan Brickley.

From the post:

Last October I posted a writeup of some experiments that illustrate item-to-item similarities from Apache Mahout using Gephi for visualization. This was under a heading that quotes Ben Fry, “Everything looks like a graph” (but almost nothing should ever be drawn as one). There was also some followup discussion on the Gephi project blog

The entry quoting Ben Fry is entitled Linked Literature, Linked TV – Everything Looks like a Graph and is a great read, both for the experiments he reports on visualizing linked data and for the visualizations that are part of the posts.

Near the end of the “Everything Still Looks Like A Graph…” Dan remarks:

There’s no single ‘correct’ view of the bibliographic landscape; what makes sense for a phd researcher, a job seeker or a schoolkid will naturally vary. This is true also of similarity measures in general, i.e. for see-also lists in plain HTML as well as fancy graph or landscape-based visualizations. There are more than metaphorical comparisons to be drawn with the kind of compositing tools we see in systems like Blender, and plenty of opportunities for putting control into end-user rather than engineering hands.

What do you make of:

There’s no single ‘correct’ view…of similarity measures in general, i.e., for see-also lists in plain HTML…

and

…plenty of opportunities for putting control into end-user rather than engineering hands.

???

Is it the case that most semantic solutions offer users “similarity measures” as applied by the authors of those solutions?

Those may or may not be the same as “similarity measures” as applied by users.

Is that why users continue to use Google? For all of its crudeness, it does offer users the freedom to make their own judgements of similarity.

So how do we create an interface that:

  • Enables users to use their own judgements of similarity, and
  • Enables users to capture those judgements of similarity for use by others, and
  • Enables users to explain/disclose their judgements of similarity (to enable other users to agree/not-agree), and
  • Does so with only a little more effort than like/dislike?

Suggestions/comments/proposals?
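One concrete answer: the data model underneath such an interface need not be complicated. A minimal sketch of the four requirements — user-supplied judgements, captured for reuse, with a disclosed rationale, at roughly like/dislike effort — might look like this (every name here is invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class SimilarityJudgement:
    """One user's claim that two items are (dis)similar, with a disclosed reason."""
    user: str
    item_a: str
    item_b: str
    similar: bool        # the like/dislike-level input
    rationale: str = ""  # optional one-line disclosure so others can agree or not

@dataclass
class JudgementStore:
    judgements: list = field(default_factory=list)

    def add(self, j: SimilarityJudgement) -> None:
        self.judgements.append(j)

    def agreement(self, item_a: str, item_b: str) -> float:
        """Fraction of users who judged the pair similar -- reusable by other users."""
        votes = [j.similar for j in self.judgements
                 if {j.item_a, j.item_b} == {item_a, item_b}]
        return sum(votes) / len(votes) if votes else 0.0

store = JudgementStore()
store.add(SimilarityJudgement("alice", "doc1", "doc2", True, "same author"))
store.add(SimilarityJudgement("bob", "doc1", "doc2", False, "different topics"))
print(store.agreement("doc1", "doc2"))  # 0.5
```

The interface question — how to collect the `rationale` without making the gesture more expensive than a like/dislike click — remains the hard part.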

July 15, 2012

Interactive Dynamics for Visual Analysis

Filed under: Graphics,Interface Research/Design,Visualization — Patrick Durusau @ 3:57 pm

Interactive Dynamics for Visual Analysis by Jeffrey Heer and Ben Shneiderman.

From the article:

The increasing scale and availability of digital data provides an extraordinary resource for informing public policy, scientific discovery, business strategy, and even our personal lives. To get the most out of such data, however, users must be able to make sense of it: to pursue questions, uncover patterns of interest, and identify (and potentially correct) errors. In concert with data-management systems and statistical algorithms, analysis requires contextualized human judgments regarding the domain-specific significance of the clusters, trends, and outliers discovered in data.

Visualization provides a powerful means of making sense of data. By mapping data attributes to visual properties such as position, size, shape, and color, visualization designers leverage perceptual skills to help users discern and interpret patterns within data. [cite omitted] A single image, however, typically provides answers to, at best, a handful of questions. Instead, visual analysis typically progresses in an iterative process of view creation, exploration, and refinement. Meaningful analysis consists of repeated explorations as users develop insights about significant relationships, domain-specific contextual influences, and causal patterns. Confusing widgets, complex dialog boxes, hidden operations, incomprehensible displays, or slow response times can limit the range and depth of topics considered and may curtail thorough deliberation and introduce errors. To be most effective, visual analytics tools must support the fluent and flexible use of visualizations at rates resonant with the pace of human thought.

The goal of this article is to assist designers, researchers, professional analysts, procurement officers, educators, and students in evaluating and creating visual analysis tools. We present a taxonomy of interactive dynamics that contribute to successful analytic dialogues. The taxonomy consists of 12 task types grouped into three high-level categories, as shown in table 1: (1) data and view specification (visualize, filter, sort, and derive); (2) view manipulation (select, navigate, coordinate, and organize); and (3) analysis process and provenance (record, annotate, share, and guide). These categories incorporate the critical tasks that enable iterative visual analysis, including visualization creation, interactive querying, multiview coordination, history, and collaboration. Validating and evolving this taxonomy is a community project that proceeds through feedback, critique, and refinement.
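The taxonomy in that last paragraph maps naturally onto a small data structure — handy if you want to check an analysis tool's feature list against it. The category and task names below are taken from the article itself:

```python
# Heer & Shneiderman's 12 task types in 3 high-level categories.
TAXONOMY = {
    "data and view specification": ["visualize", "filter", "sort", "derive"],
    "view manipulation": ["select", "navigate", "coordinate", "organize"],
    "analysis process and provenance": ["record", "annotate", "share", "guide"],
}

# Sanity check: 3 categories, 12 task types in all.
assert len(TAXONOMY) == 3
assert sum(len(tasks) for tasks in TAXONOMY.values()) == 12
```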

This rocks! I missed it earlier this year but you should not miss it now! (BTW, if you see something interesting, post a note to patrick@durusau.net. I miss lots of interesting and important things. Share what you see with others!)

Two lessons I would draw from this article:

  1. Visual analysis, enabled by the number-crunching and display capabilities of modern computers, is just in its infancy, if that far along. This is a rich area for research and experimentation.
  2. There is no “correct” visualization for any data set. Only ones that give a particular analyst more or less insight into a given data set. What visualizations work for one task or user may not be appropriate for another.

July 6, 2012

Puzzling outcomes in A/B testing

Filed under: Interface Research/Design,Users — Patrick Durusau @ 9:28 am

Puzzling outcomes in A/B testing by Greg Linden.

Greg writes:

“Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained” (PDF), has a lot of great insights into A/B testing and real issues you hit with A/B testing.

I like where Greg quotes the paper as saying:

When Bing had a bug in an experiment, which resulted in very poor results being shown to users, two key organizational metrics improved significantly: distinct queries per user went up over 10%, and revenue per user went up over 30%! …. Degrading algorithmic results shown on a search engine result page gives users an obviously worse search experience but causes users to click more on ads, whose relative relevance increases, which increases short-term revenue … [This shows] it’s critical to understand that long-term goals do not always align with short-term metrics.

I am not real sure what an “obviously worse search experience” would look like. Maybe I don’t want to know. 😉

Anyway, kudos to Greg for finding an amusing and useful paper on testing.

July 4, 2012

Designing Search (part 5): Results pages

Filed under: Interface Research/Design,Search Interface,Searching — Patrick Durusau @ 4:43 pm

Designing Search (part 5): Results pages by Tony Russell-Rose.

From the post:

In the previous post, we looked at the ways in which a response to an information need can be articulated, focusing on the various forms that individual search results can take. Each separate result represents a match for our query, and as such, has the potential to fulfil our information needs. But as we saw earlier, information seeking is a dynamic, iterative activity, for which there is often no single right answer.

A more informed approach therefore is to consider search results not as competing alternatives, but as an aggregate response to an information need. In this context, the value lies not so much with the individual results but on the properties and possibilities that emerge when we consider them in their collective form. In this section we examine the most universal form of aggregation: the search results page.

As usual, Tony illustrates each of his principles with examples drawn from actual webpages. Makes a very nice checklist to use when constructing a results page. Concludes with references and links to all the prior posts in this series.

Unless you are a UI expert, defaulting to Tony’s advice is not a bad plan. It may not be a bad plan even if you are.

June 17, 2012

User Interface Design and Implementation [MIT]

Filed under: Interface Research/Design — Patrick Durusau @ 3:00 pm

User Interface Design and Implementation

Description:

6.831/6.813 examines human-computer interaction in the context of graphical user interfaces. The course covers human capabilities, design principles, prototyping techniques, evaluation techniques, and the implementation of graphical user interfaces. Deliverables include short programming assignments and a semester-long group project. Students taking the graduate version also have readings from current literature and additional assignments.

This is a “traditional” courseware offering and not the recent Harvard/MIT edx venture.

Having said that, if you are looking for a reading list in the field, see the “recommended” books for the class.

Or for that matter, check out the lecture notes.

