Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

July 10, 2013

Data visualization: ambiguity as a fellow traveler

Filed under: Ambiguity, Uncertainty, Visualization — Patrick Durusau @ 4:48 pm

Data visualization: ambiguity as a fellow traveler by Vivien Marx. (Nature Methods 10, 613–615 (2013) doi:10.1038/nmeth.2530)

From the article:

Data from an experiment may appear rock solid. Upon further examination, the data may morph into something much less firm. A knee-jerk reaction to this conundrum may be to try and hide uncertain scientific results, which are unloved fellow travelers of science. After all, words can afford ambiguity, but with visuals, “we are damned to be concrete,” says Bang Wong, who is the creative director of the Broad Institute of MIT and Harvard. The alternative is to face the ambiguity head-on through visual means.

Color or color gradients in heat maps, for example, often show degrees of data uncertainty and are, at their core, visual and statistical expressions. “Talking about uncertainty is talking about statistics,” says Martin Krzywinski, whose daily task is data visualization at the Genome Sciences Centre at the British Columbia Cancer Agency.

Statistically driven displays such as box plots can work for displaying uncertainty, but most visualizations use more ad hoc methods such as transparency or blur. Error bars are also an option, but it is difficult to convey information clearly with them, he says. “It’s likely that if something as simple as error bars is misunderstood, anything more complex will be too,” Krzywinski says.
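
Both techniques are easy to try. Here is a minimal matplotlib sketch, on synthetic data of my own (nothing below comes from the article), showing error bars next to a transparency-based uncertainty band:

```python
# Minimal sketch (synthetic data): two ways of showing per-point
# uncertainty mentioned above -- error bars and transparency.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 25)
y = np.sin(x)
err = 0.2 + 0.1 * np.random.rand(len(x))  # per-point uncertainty

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Error bars: precise, but per Krzywinski often misread.
ax1.errorbar(x, y, yerr=err, fmt="o", capsize=3)
ax1.set_title("Error bars")

# Transparency: a shaded band encodes the same uncertainty visually.
ax2.plot(x, y)
ax2.fill_between(x, y - err, y + err, alpha=0.3)
ax2.set_title("Uncertainty band (alpha)")

plt.tight_layout()
plt.show()
```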

I don’t hear “ambiguity” and “uncertainty” as the same thing.

The duck/rabbit image you will remember from Sperberg-McQueen’s presentations is ambiguous, but not uncertain.

[Image: the duck/rabbit illusion]

Granted, visualizing “uncertainty” is a difficult task, but let’s not compound it by confusing uncertainty with ambiguity.

The uncertainty issue in this article echoes Steve Pepper’s concern over binary choices for type under the current TMDM. Either a topic, for example, is of a particular type or not. There isn’t any room for uncertainty.
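
To make the binary-versus-scored contrast concrete, here is a hypothetical sketch in plain Python (not TMDM syntax, and the names are invented):

```python
# Hypothetical sketch, not TMDM syntax: under the TMDM a topic either
# has a type or it does not.
topic = {"id": "puccini", "types": {"composer"}}
print("composer" in topic["types"])  # True or False, nothing in between

# A scored variant (my illustration only) would leave room for uncertainty:
topic_scored = {"id": "puccini",
                "types": {"composer": 0.95, "librettist": 0.40}}
likely = {t for t, p in topic_scored["types"].items() if p >= 0.5}
print(likely)  # {'composer'}
```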

The article has a number of suggestions on visualizing uncertainty that I think you may find helpful.

I first saw this at: Visualizing uncertainty still unsolved problem by Nathan Yau.

March 24, 2013

Uncertainty Principle for Data

Filed under: BigData, Uncertainty — Patrick Durusau @ 1:48 pm

Rick Sherman writes in Big Data & The Wizard of Oz Syndrome:

An excellent article in the Wall Street Journal, “Big Data, Big Blunders,” discussed five mistakes commonly made by enterprises when initiating their first Big Data projects. The technology hype cycle, which reminds me a lot of The Wizard of Oz, is a contributing factor in these blunders. I’ll briefly summarize the WSJ’s points, and will suggest, based on my experience helping clients, why enterprises make these blunders.

Rick summarizes these points from the WSJ story:

  • Data for Data’s Sake
  • Talent Gap
  • Data, Data Everywhere
  • Infighting
  • Aiming Too High

Rick says that advocates of new technologies promise to solve the problems left unsolved by prior technology advances, which leads to unrealistic expectations.

I agree, but there is a persistent failure to recognize the uncertainty principle for data.

How would you know if data is clean and uniform?

By your use case for the data. Yes?

That would explain why data scientists estimate they spend 60-80% of their time munging data (cleaning, transforming, etc.).

They are making data clean and uniform for their individual use cases.

And they do that task over and over again.

The definition of clean and uniform data is like the uncertainty principle in physics.

You can have clean and uniform data for one purpose, but making it so makes it dirty and non-uniform for another purpose.
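
A minimal sketch of that principle in action, with hypothetical timestamps of my own:

```python
# Minimal sketch (hypothetical data): the cleaning step that makes the
# data uniform for one use case makes it unusable for another.
from datetime import datetime

raw_events = [
    "2013-03-24T13:48:02-05:00",
    "2013-03-24T09:15:40+00:00",
]

# Use case 1: daily counts. Truncating to a date makes the data
# clean and uniform for this report.
daily = [datetime.fromisoformat(ts).date() for ts in raw_events]
print(daily)  # [datetime.date(2013, 3, 24), datetime.date(2013, 3, 24)]

# Use case 2: cross-timezone latency analysis. The "clean" daily values
# have discarded exactly the time-of-day and offset this use case needs,
# so for it the cleaned data is dirty and the munging starts over from
# the raw records.
```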

Unless a technology outlines how it obtains data that is clean and uniform from its perspective, it has told you only part of the cost of its use.

January 27, 2013

Information field theory

Filed under: Data Analysis, Information Field Theory, Mathematics, Uncertainty — Patrick Durusau @ 5:41 pm

Information field theory

From the webpage:

Information field theory (IFT) is information theory, the logic of reasoning under uncertainty, applied to fields. A field can be any quantity defined over some space, e.g. the air temperature over Europe, the magnetic field strength in the Milky Way, or the matter density in the Universe. IFT describes how data and knowledge can be used to infer field properties. Mathematically it is a statistical field theory and exploits many of the tools developed for such. Practically, it is a framework for signal processing and image reconstruction.

IFT is fully Bayesian. How else can infinitely many field degrees of freedom be constrained by finite data?

It can be used without the knowledge of Feynman diagrams. There is a full toolbox of methods.

It reproduces many known well working algorithms. This should be reassuring.

And, there were certainly previous works in a similar spirit. See below for IFT publications and previous works.

Anyhow, in many cases IFT provides novel rigorous ways to extract information from data.

Please, have a look! The specific literature is listed below and more general highlight articles on the right hand side.
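
The “fully Bayesian” claim compresses into one line. A sketch, following the notation of the IFT literature: the posterior for a field φ given data d is just Bayes’ theorem, often rewritten with an “information Hamiltonian” H:

```latex
P(\varphi \mid d)
  = \frac{P(d \mid \varphi)\, P(\varphi)}{P(d)}
  = \frac{e^{-H(d,\varphi)}}{Z(d)},
\qquad
H(d,\varphi) \equiv -\ln P(d,\varphi),
\quad
Z(d) \equiv P(d).
```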

Just in case you want to be on the cutting edge of information extraction. 😉

And you might note that Feynman diagrams are graphic representations (maps) of complex mathematical equations.

September 24, 2012

High Dimensional Undirected Graphical Models

Filed under: Graphs, High Dimensionality, Uncertainty — Patrick Durusau @ 4:06 pm

High Dimensional Undirected Graphical Models by Larry Wasserman.

Larry discusses uncertainty in high-dimensional graphs. No answers, but he does illustrate the problem.
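
If you want to see the problem for yourself, here is a minimal sketch on synthetic data (the graphical lasso is one common estimator, not necessarily the one Larry uses) of how the estimated graph shifts with the regularization strength, which is exactly the kind of structural uncertainty at issue:

```python
# Minimal sketch (synthetic data): in high dimensions the estimated
# graph changes with the regularization strength, so which edges are
# "real" is itself uncertain.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))  # n=50 samples, p=20 variables, no true edges

for alpha in (0.1, 0.2):
    prec = GraphicalLasso(alpha=alpha).fit(X).precision_
    edges = np.count_nonzero(np.triu(prec, k=1))  # nonzero off-diagonals
    print(f"alpha={alpha}: {edges} edges estimated")
```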

October 29, 2011

IFIP Working Conference on Uncertainty Quantification in Scientific Computing

Filed under: Scientific Computing, Uncertainty — Patrick Durusau @ 7:24 pm

IFIP Working Conference on Uncertainty Quantification in Scientific Computing

From the webpage:

I just came across the following presentations at the IFIP Working Conference on Uncertainty Quantification in Scientific Computing held at the Millennium Harvest House in Boulder, on August 1-4, 2011. Here are the talks and some abstracts:

I really like the title of this blog: The Robust Mathematical Modeling Blog …When modeling Reality is not an option.

I think you will find the presentations good starting points for reviewing what we know or suspect about uncertainty.

Does anyone know of references on modeling uncertainty in the humanities?

Seems to me that our notions of subject identity should be understood along a continuum of uncertainty.
