## Archive for the ‘Sensemaking’ Category

### Human Sense Making

Saturday, May 3rd, 2014

Scientists’ sense making when hypothesizing about disease mechanisms from expression data and their needs for visualization support by Barbara Mirel and Carsten Görg.

Abstract:

A common class of biomedical analysis is to explore expression data from high throughput experiments for the purpose of uncovering functional relationships that can lead to a hypothesis about mechanisms of a disease. We call this analysis expression driven, -omics hypothesizing. In it, scientists use interactive data visualizations and read deeply in the research literature. Little is known, however, about the actual flow of reasoning and behaviors (sense making) that scientists enact in this analysis, end-to-end. Understanding this flow is important because if bioinformatics tools are to be truly useful they must support it. Sense making models of visual analytics in other domains have been developed and used to inform the design of useful and usable tools. We believe they would be helpful in bioinformatics. To characterize the sense making involved in expression-driven, -omics hypothesizing, we conducted an in-depth observational study of one scientist as she engaged in this analysis over six months. From findings, we abstracted a preliminary sense making model. Here we describe its stages and suggest guidelines for developing visualization tools that we derived from this case. A single case cannot be generalized. But we offer our findings, sense making model and case-based tool guidelines as a first step toward increasing interest and further research in the bioinformatics field on scientists’ analytical workflows and their implications for tool design.

From the introduction:

In other domains, improvements in data visualization designs have relied on models of analysts’ actual sense making for a complex analysis [2]. A sense making model captures analysts’ cumulative, looped (not linear) “process [es] of searching for a representation and encoding data in that representation to answer task-specific questions” relevant to an open-ended problem [3]: 269. As an end-to-end flow of application-level tasks, a sense making model may portray and categorize analytical intentions, associated tasks, corresponding moves and strategies, informational inputs and outputs, and progression and iteration over time. The importance of sense making models is twofold: (1) If an analytical problem is poorly understood developers are likely to design for the wrong questions, and tool utility suffers; and (2) if developers do not have a holistic understanding of the entire analytical process, developed tools may be useful for one specific part of the process but will not integrate effectively in the overall workflow [4,5].

As the authors admit, a single case can't be generalized, but their methodology, with its focus on the actual workflow of a scientist, is a refreshing break from imagined and/or "ideal" workflows for scientists.

Until now, semantic software has followed someone's projection of an "ideal" workflow.

The next generation of semantic software should follow the actual workflows of people working with their data.

I first saw this in a tweet by Neil Saunders.

### Making Intelligence Systems Smarter (or Dumber)

Wednesday, May 9th, 2012

Picking the Brains of Strangers….[$507 Billion Dollar Prize (at least)] had three keys to its success:

• Use of human analysts
• Common access to data and prior efforts
• Reuse of prior efforts by human analysts

Intelligence analysts spend their days with snippets and bits of data, trying to wring sense out of it, only to pigeonhole their results in silos. Other analysts have to know about the data to even request it. Or analysts with information must understand that their information will help others with their own sensemaking. All contrary to the results in Picking the Brains of Strangers….

What information will result in sensemaking for one or more analysts is unknown. And cannot be known.

Every firewall, every silo, every compartment, every clearance level makes every intelligence agency, and the overall intelligence community, dumber.

Until now, the intelligence community has chosen to be dumber and more secure. In a time of budget cuts and calls for efficiency in government, it is time for more effective intelligence work, even if less secure.

Take the leak of the diplomatic cables. The only people unaware of the general nature of the cables were the public and perhaps the intelligence agency of Zambia. All other intelligence agencies probably had them, or their own versions, pigeonholed in their own systems.

With robust intelligence sharing, the NSA could do all the signal capture and expense it out to other agencies, rather than having duplicate systems maintained by various agencies.

And perhaps a public data flow of analysis for foreign news sources in their original languages. Members of the public may not have clearances, but they may have insights into cultures and languages that are rare in intelligence agencies.

But that presumes an interest in smarter intelligence systems, not dumber ones by design.

### Picking the Brains of Strangers….[$507 Billion Dollar Prize (at least)]

Wednesday, May 9th, 2012

Picking the Brains of Strangers Helps Make Sense of Online Information

Science Daily carried this summary (the official abstract and link are below):

People who have already sifted through online information to make sense of a subject can help strangers facing similar tasks without ever directly communicating with them, researchers at Carnegie Mellon University and Microsoft Research have demonstrated.

This process of distributed sensemaking, they say, could save time and result in a better understanding of the information needed for whatever goal users might have, whether it is planning a vacation, gathering information about a serious disease or trying to decide what product to buy.

The researchers explored the use of digital knowledge maps — a means of representing the thought processes used to make sense of information gathered from the Web. When participants in the study used a knowledge map that had been created and improved upon by several previous users, they reported that the quality of their own work was better than when they started from scratch or used a newly created knowledge map.

“Collectively, people spend more than 70 billion hours a year trying to make sense of information they have gathered online,” said Aniket Kittur, assistant professor in Carnegie Mellon’s Human-Computer Interaction Institute. “Yet in most cases, when someone finishes a project, that work is essentially lost, benefitting no one else and perhaps even being forgotten by that person. If we could somehow share those efforts, however, all of us might learn faster.”

Three takeaway points:

• “people spend more than 70 billion hours a year trying to make sense of information they have gathered online”
• “when someone finishes a project, that work is essentially lost, benefitting no one else and perhaps even being forgotten by that person”
• using knowledge maps created and improved upon by others improved the quality of participants' own work

At the current US minimum wage of $7.25, that's roughly $507,500,000,000. Since some of us make more than minimum wage, that figure should be adjusted upwards.
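The arithmetic behind that headline figure is just the quoted 70 billion hours multiplied by the minimum wage, which can be checked in a couple of lines:

```python
# Back-of-the-envelope valuation of online sensemaking time,
# using the figures quoted above: 70 billion hours per year
# at the US federal minimum wage of $7.25/hour.
hours_per_year = 70_000_000_000
minimum_wage = 7.25

value = hours_per_year * minimum_wage
print(f"${value:,.0f}")  # $507,500,000,000
```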

The key to success was improvement upon efforts already improved upon by others.

The result is based on a small sample (21 people), so there is an entire research field waiting to be explored: whether this holds true with different types of data, what group dynamics make it work best, individual characteristics that influence outcomes, interfaces (that help or hinder), processing models, software, hardware, integrating the results from different interfaces, etc.

Start here:

Distributed sensemaking: improving sensemaking by leveraging the efforts of previous users
by Kristie Fisher, Scott Counts, and Aniket Kittur.

Abstract:

We examine the possibility of distributed sensemaking: improving a user’s sensemaking by leveraging previous users’ work without those users directly collaborating or even knowing one another. We asked users to engage in sensemaking by organizing and annotating web search results into “knowledge maps,” either with or without previous users’ maps to work from. We also recorded gaze patterns as users examined others’ knowledge maps. Our findings show the conditions under which distributed sensemaking can improve sensemaking quality; that a user’s sensemaking process is readily apparent to a subsequent user via a knowledge map; and that the organization of content was more useful to subsequent users than the content itself, especially when those users had differing goals. We discuss the role distributed sensemaking can play in schema induction by helping users make a mental model of an information space and make recommendations for new tool and system development.