Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

April 27, 2014

The Deadly Data Science Sin of Confirmation Bias

Filed under: Confidence Bias,Data Science,Statistics — Patrick Durusau @ 4:06 pm

The Deadly Data Science Sin of Confirmation Bias by Michael Walker.

From the post:

[Graphic: confirmation bias]

Confirmation bias occurs when people actively search for and favor information or evidence that confirms their preconceptions or hypotheses while ignoring or slighting adverse or mitigating evidence. It is a type of cognitive bias (pattern of deviation in judgment that occurs in particular situations – leading to perceptual distortion, inaccurate judgment, or illogical interpretation) and represents an error of inductive inference toward confirmation of the hypothesis under study.

Data scientists exhibit confirmation bias when they actively seek out and assign more weight to evidence that confirms their hypothesis, and ignore or underweigh evidence that could disconfirm their hypothesis. This is a type of selection bias in collecting evidence.

Note that confirmation biases are not limited to the collection of evidence: even if two (2) data scientists have the same evidence, their respective interpretations may be biased. In my experience, many data scientists exhibit a hidden yet deadly form of confirmation bias when they interpret ambiguous evidence as supporting their existing position. This is difficult and sometimes impossible to detect yet occurs frequently.

Isn’t that a great graphic? Michael goes on to list several resources that will help in spotting confirmation bias, both yours and that of others. Not 100%, but you will do better by heeding his advice.

Be aware that confirmation bias isn’t confined to statistical and/or data science methods. Decision makers, topic map authors, fact gatherers, etc., are all subject to confirmation bias.

Michael sees confirmation bias as dangerous to the credibility of data science, writing:

The evidence suggests confirmation bias is rampant and out of control in both the hard and soft sciences. Many academic or research scientists run thousands of computer simulations where all fail to confirm or verify the hypothesis. Then they tweak the data, assumptions or models until confirmatory evidence appears to confirm the hypothesis. They proceed to publish the one successful result without mentioning the failures! This is unethical, may be fraudulent and certainly produces flawed science where a significant majority of results cannot be replicated. This has created a loss of confidence and credibility for science by the public and policy makers that has serious consequences for our future.
The danger for professional data science practitioners is providing clients and employers with flawed data science results leading to bad business and policy decisions. We must learn from the academic and research scientists and proactively avoid confirmation bias or data science risks loss of credibility.

I don’t think bad business and policy decisions need any help from “flawed data science.” You may recall that, not all that many years ago, “policy makers” dismissed the failure to find weapons of mass destruction, a key motivation for war, as irrelevant in hindsight.

My suggestion would be to make your data analysis as complete and accurate as possible and always keep digitally signed and encrypted copies of data and communications with your clients.
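To see how easy it is to manufacture the kind of “confirmatory” result Michael describes, here is a minimal Python sketch (my own illustration, not code from his post): it simulates many studies in which the true effect is zero, then cherry-picks the runs that happen to cross the usual significance threshold.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

N_RUNS = 1000        # simulations kept "in the drawer"
SAMPLE_SIZE = 30     # observations per group in each simulated study
ALPHA = 0.05         # conventional significance threshold

significant = []
for run in range(N_RUNS):
    # Two groups drawn from the SAME distribution: the true effect is zero.
    a = rng.normal(loc=0.0, scale=1.0, size=SAMPLE_SIZE)
    b = rng.normal(loc=0.0, scale=1.0, size=SAMPLE_SIZE)
    t_stat, p_value = stats.ttest_ind(a, b)
    if p_value < ALPHA:
        significant.append((run, p_value))

# Selective reporting: "publish" only the confirmatory runs.
print(f"{len(significant)} of {N_RUNS} null simulations look significant "
      f"(expected about {int(ALPHA * N_RUNS)} by chance).")
if significant:
    best_run, best_p = min(significant, key=lambda item: item[1])
    print(f"Cherry-picked 'finding': run {best_run} with p = {best_p:.4f}")

About 5% of the null runs will look “significant” by chance alone, which is all the confirmation a motivated analyst needs if the other runs stay in the drawer.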

November 3, 2012

Reducing/Reinforcing Confirmation Bias in TM Interfaces

Filed under: Confidence Bias,Interface Research/Design,Users — Patrick Durusau @ 9:52 am

Recent research has demonstrated that a difficult-to-read font can reduce the influence of the “confirmation bias.”

Wikipedia on confirmation bias:

Confirmation bias (also called confirmatory bias or myside bias) is a tendency of people to favor information that confirms their beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. For example, in reading about gun control, people usually prefer sources that affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations).

A series of experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. In certain situations, this tendency can bias people’s conclusions. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another explanation is that people show confirmation bias because they are weighing up the costs of being wrong, rather than investigating in a neutral, scientific way.

Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. Poor decisions due to these biases have been found in military, political, and organizational contexts.

[one footnote reference removed]

The topic maps consumed by users can either help avoid or reinforce (depending on your agenda) the impact of confirmation bias.

The popular account of the research:

Liberals and conservatives who are polarized on certain politically charged subjects become more moderate when reading political arguments in a difficult-to-read font, researchers report in a new study. Likewise, people with induced bias for or against a defendant in a mock trial are less likely to act on that bias if they have to struggle to read the evidence against him.

The study is the first to use difficult-to-read materials to disrupt what researchers call the “confirmation bias,” the tendency to selectively see only arguments that support what you already believe, psychology professor Jesse Preston said.

The new research, reported in the Journal of Experimental Social Psychology, is one of two studies to show that subtle manipulations that affect how people take in information can reduce political polarization. The other study, which explores attitudes toward a Muslim community center near the World Trade Center site, is described in a paper in the journal Social Psychological and Personality Science.

By asking participants to read an overtly political argument about capital punishment in a challenging font, the researchers sought to disrupt participants’ usual attitudes to the subject, said graduate student Ivan Hernandez, who led the capital punishment/mock trial study with University of Illinois psychology professor Jesse Preston.

The intervention worked. Liberals and conservatives who read the argument in an easy-to-read font were much more polarized on the subject than those who had to slog through the difficult version. [Difficult-To-Read Font Reduces Political Polarity, Study Finds]

Or if you are interested in the full monty:

“Disfluency disrupts the confirmation bias.” by Ivan Hernandez and Jesse Lee Preston. Journal of Experimental Social Psychology Volume 49, Issue 1, January 2013, Pages 178–182.

Abstract:

One difficulty in persuasion is overcoming the confirmation bias, where people selectively seek evidence that is consistent with their prior beliefs and expectations. This biased search for information allows people to analyze new information in an efficient, but shallow way. The present research discusses how experienced difficulty in processing (disfluency) can reduce the confirmation bias by promoting careful, analytic processing. In two studies, participants with prior attitudes on an issue became less extreme after reading an argument on the issues in a disfluent format. The change occurred for both naturally occurring attitudes (i.e. political ideology) and experimentally assigned attitudes (i.e. positivity toward a court defendant). Importantly, disfluency did not reduce confirmation biases when participants were under cognitive load, suggesting that cognitive resources are necessary to overcome these biases. Overall, these results suggest that changing the style of an argument’s presentation can lead to attitude change by promoting more comprehensive consideration of opposing views.

I like the term “disfluency,” although “a disfluency on both your houses” doesn’t have the ring of “a plague on both your houses,” does it?*

Must be the confirmation bias.

* Romeo And Juliet Act 3, scene 1, 90–92

November 4, 2011

Confidence Bias: Evidence from Crowdsourcing

Filed under: Bias,Confidence Bias,Crowd Sourcing,Interface Research/Design — Patrick Durusau @ 6:10 pm

Confidence Bias: Evidence from Crowdsourcing by Crowdflower.

From the post:

Evidence in experimental psychology suggests that most people overestimate their own ability to complete objective tasks accurately. This phenomenon, often called confidence bias, refers to “a systematic error of judgment made by individuals when they assess the correctness of their responses to questions related to intellectual or perceptual problems.” 1 But does this hold up in crowdsourcing?

We ran an experiment to test for a persistent difference between people’s perceptions of their own accuracy and their actual objective accuracy. We used a set of standardized questions, focusing on the Verbal and Math sections of a common standardized test. For the 829 individuals who answered more than 10 of these questions, we asked for the correct answer as well as an indication of how confident they were of the answer they supplied.

We didn’t use any Gold in this experiment. Instead, we incentivized performance by rewarding those finishing in the top 10%, based on objective accuracy.

I am not sure why crowdsourcing would make a difference on the question of overestimation of ability, but now the answer is in: No. But do read the post for the details; I think you will find it useful when doing user studies.

For example, when you ask a user if some task is too complex as designed, are they likely to overestimate their ability to complete it, either to avoid being embarrassed in front of others or to avoid admitting that they really didn’t follow your explanation?

My suspicion is yes, so in addition to simply asking users whether they understand particular search or other functions in an interface, you also need to film them using the interface with no help from you (or others).

You will remember from Size Really Does Matter… that Blair and Maron reported lawyers overestimated their accuracy in document retrieval by 55%. Of course, the question of retrieval is harder to evaluate than those in the Crowdflower experiment, but it is a bias you need to keep in mind.
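If you collect both confidence ratings and objective accuracy in your own user studies or crowdsourced tasks, the arithmetic for spotting this bias is simple: compare each participant’s average stated confidence against their actual accuracy. A minimal Python sketch follows (the data and field names are made up for illustration, not taken from the Crowdflower post).

import statistics

# Hypothetical responses: (worker_id, answered_correctly, stated_confidence 0-1).
# In practice these rows would come from your task platform's export.
responses = [
    ("w1", True, 0.90), ("w1", False, 0.80), ("w1", True, 0.95),
    ("w2", False, 0.70), ("w2", False, 0.60), ("w2", True, 0.90),
    ("w3", True, 0.50), ("w3", True, 0.60), ("w3", False, 0.40),
]

# Group responses by worker.
by_worker = {}
for worker, correct, confidence in responses:
    by_worker.setdefault(worker, []).append((correct, confidence))

# Confidence bias per worker: mean stated confidence minus objective accuracy.
# Positive values indicate overconfidence.
for worker, rows in sorted(by_worker.items()):
    accuracy = sum(1 for correct, _ in rows if correct) / len(rows)
    mean_confidence = statistics.mean(conf for _, conf in rows)
    print(f"{worker}: accuracy={accuracy:.2f} "
          f"confidence={mean_confidence:.2f} bias={mean_confidence - accuracy:+.2f}")

A persistently positive gap across participants is the overconfidence the Crowdflower experiment reports; a well-calibrated participant would hover near zero.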
