Useful junk?: the effects of visual embellishment on comprehension and memorability of charts by Scott Bateman, Regan L. Mandryk, Carl Gutwin, Aaron Genest, David McDine, and Christopher Brooks.
Abstract:
Guidelines for designing information charts (such as bar charts) often state that the presentation should reduce or remove ‘chart junk’ – visual embellishments that are not essential to understanding the data. In contrast, some popular chart designers wrap the presented data in detailed and elaborate imagery, raising the questions of whether this imagery is really as detrimental to understanding as has been proposed, and whether the visual embellishment may have other benefits. To investigate these issues, we conducted an experiment that compared embellished charts with plain ones, and measured both interpretation accuracy and long-term recall. We found that people’s accuracy in describing the embellished charts was no worse than for plain charts, and that their recall after a two-to-three-week gap was significantly better. Although we are cautious about recommending that all charts be produced in this style, our results question some of the premises of the minimalist approach to chart design.
No, I didn’t just happen across this work while reading the morning paper. 😉
I started at Nathan Yau's post Nigel Holmes on explanation graphics and how he got started, then followed a link to a Column Five Media interview with Holmes, Nigel Holmes on 50 Years of Designing Infographics, because of a remark about Edward Tufte that Nathan quotes:
Recent academic studies have proved many of his theses wrong.
which finally brings us to the article I link to above.
Edward Tufte may well do better with charts designed in the minimalist style, but this article shows that other people may do better under different chart design principles.
But that's the trick, isn't it?

We start from what makes sense to us and then generalize it into the principle that makes the most sense for everyone.

I fear the same is true of the design of topic map (and other) interfaces. We start with what works for us and generalize that to "this should work for everyone."
That makes it hard to hear evidence to the contrary. "If you just try it, you will see that it works better than X."
I suspect the solution is to test interfaces with actual user populations. Perhaps even inject "randomness" into the design so we can test things we would never think of ourselves. Or even give users (shudder) the ability to draw in controls, or arrangements of controls.
You may not like the resulting interface, but do you want to market to an audience of < 5, or educate and market to a much larger audience? (Ask one of your investors if you are unsure.)