Trouble at the lab, Oct. 19, 2013, The Economist.
From the web page:
“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.
Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.
The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.
…
The numbers will make you a militant data skeptic:
- Original results could be duplicated for only 6 out of 53 landmark cancer studies.
- A drug company could reproduce only a quarter of 67 “seminal studies.”
- An NIH official estimates that at least three-quarters of published biomedical findings would be hard to reproduce.
- Three-quarters of published papers in machine learning are bunk because of overfitting (see the sketch below).
Those and more examples await you in this article from The Economist.
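To make the overfitting point concrete, here is a minimal sketch (mine, not from the article) of one way inflated machine-learning results get published: selecting features on the full dataset before cross-validating leaks label information into the evaluation. The data, column counts, and scikit-learn pipeline below are illustrative assumptions; the labels are pure noise, so an honest score should sit near chance.

```python
# Sketch: how "leaky" feature selection overstates accuracy on pure noise.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))   # 100 samples, 2000 noise features
y = rng.integers(0, 2, size=100)   # random binary labels (no real signal)

# Wrong: pick the 20 "best" features using *all* the labels, then cross-validate.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean()

# Right: keep feature selection inside each cross-validation fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky protocol:  {leaky:.2f}")   # typically well above chance
print(f"honest protocol: {honest:.2f}")  # typically ~0.50, i.e. chance
```

The “leaky” number looks publishable; the honest one tells you there was nothing there. That gap is the kind of self-deception the replication failures above keep exposing.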
As the sub-heading for the article reads:
Scientists like to think of science as self-correcting. To an alarming degree, it is not
You may not mind misrepresenting facts to others, but do you want other people misrepresenting facts to you?
Do you have a professional data critic/skeptic on call?