Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 27, 2017

No Peer Review at FiveThirtyEight

Filed under: Humanities,Peer Review,Researchers,Science — Patrick Durusau @ 10:47 am

Politics Moves Fast. Peer Review Moves Slow. What’s A Political Scientist To Do? by Maggie Koerth-Baker

From the post:

Politics has a funny way of turning arcane academic debates into something much messier. We’re living in a time when so much in the news cycle feels absurdly urgent and partisan forces are likely to pounce on any piece of empirical data they can find, either to champion it or tear it apart, depending on whether they like the result. That has major implications for many of the ways knowledge enters the public sphere — including how academics publicize their research.

That process has long been dominated by peer review, which is when academic journals put their submissions in front of a panel of researchers to vet the work before publication. But the flaws and limitations of peer review have become more apparent over the past decade or so, and researchers are increasingly publishing their work before other scientists have had a chance to critique it. That’s a shift that matters a lot to scientists, and the public stakes of the debate go way up when the research subject is the 2016 election. There’s a risk, scientists told me, that preliminary research results could end up shaping the very things that research is trying to understand.

The legend of peer review catching and correcting flaws has a long history, a legend much tarnished by the Top 10 Retractions of 2017 and similar reports. Retractions are self-admissions of the failure of peer review. By the hundreds.

Withdrawal of papers isn’t the only debunking of peer review. The reports, papers, etc., on the failure of peer review include: “Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals,” Anaesthesia, Carlisle 2017, DOI: 10.1111/anae.13962; “The peer review drugs don’t work” by Richard Smith; “One in 25 papers contains inappropriately duplicated images, screen finds” by Cat Ferguson.

Koerth-Baker’s quoting of Justin Esarey to support peer review is an example of no or failed peer review at FiveThirtyEight.


But, on aggregate, 100 studies that have been peer-reviewed are going to produce higher-quality results than 100 that haven’t been, said Justin Esarey, a political science professor at Rice University who has studied the effects of peer review on social science research. That’s simply because of the standards that are supposed to go along with peer review – clearly reporting a study’s methodology, for instance – and because extra sets of eyes might spot errors the author of a paper overlooked.

Koerth-Baker acknowledges the failures of peer review, but since the article is premised upon peer review insulating the public from “bad science,” she brings in Justin Esarey, “…who has studied the effects of peer review on social science research.” One assumes his “studies” are mentioned to imbue his statements with an aura of authority.

Debunking Esarey’s authority to comment on the “…effects of peer review on social science research” doesn’t require much effort. If you scan his list of publications you will find Does Peer Review Identify the Best Papers?, which bears the sub-title, A Simulation Study of Editors, Reviewers, and the Social Science Publication Process.

Esarey’s comments on the effectiveness of peer review are not based on fact but on simulations of peer review systems. Useful work, no doubt, but hardly the confessing witness needed to exonerate peer review in view of its long history of failure.

To save you chasing the Esarey link, the abstract reads:

How does the structure of the peer review process, which can vary from journal to journal, influence the quality of papers published in that journal? In this paper, I study multiple systems of peer review using computational simulation. I find that, under any system I study, a majority of accepted papers will be evaluated by the average reader as not meeting the standards of the journal. Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results. A peer review system with an active editor (who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions) can mitigate some of these effects.
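The flavor of such a simulation is easy to reproduce. Here is a minimal sketch (a toy model of my own, not Esarey’s code, with all distributions and parameters invented) in which papers have a latent quality, three reviewers with heterogeneous standards and noisy perceptions vote, and an “average reader” then re-judges the accepted papers:

```python
# Toy model of peer review with heterogeneous reviewer standards.
# Not Esarey's code; all distributions and parameters are invented.
import numpy as np

rng = np.random.default_rng(42)
n_papers = 100_000
quality = rng.normal(0.0, 1.0, n_papers)        # latent paper quality

def noisy_view(q):
    # Every evaluation = true quality + idiosyncratic error.
    return q + rng.normal(0.0, 1.0, q.shape)

votes = np.zeros(n_papers)
for _ in range(3):                               # three referees per paper
    standard = rng.normal(1.0, 0.5, n_papers)    # heterogeneous thresholds
    votes += noisy_view(quality) > standard

accepted = votes >= 2                            # majority rule

# The "average reader" applies the mean standard (1.0) with their own noise.
reader_says_substandard = noisy_view(quality[accepted]) < 1.0

print(f"acceptance rate: {accepted.mean():.1%}")
print(f"accepted papers judged sub-standard by a reader: "
      f"{reader_says_substandard.mean():.1%}")
```

Even this crude version exhibits the abstract’s two drivers: evaluation noise lets chance dominate near the threshold, and heterogeneous standards guarantee that a large share of accepted papers fall below the average reader’s bar.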

If there were peer reviewers, editors, etc., at FiveThirtyEight, shouldn’t at least one of them have looked beyond the title Does Peer Review Identify the Best Papers? to ask Koerth-Baker what evidence Esarey has for his support of peer review? Or is agreement with Koerth-Baker sufficient?

Peer review persists for a number of unsavory reasons: prestige, professional advancement, enforcement of disciplinary ideology, the pretension of higher-quality publications. Let’s not add a false claim of serving the public.

April 23, 2017

Fraudulent Peer Review – Clue? Responded On Time!

Filed under: Peer Review,Science — Patrick Durusau @ 7:28 pm

107 cancer papers retracted due to peer review fraud by Cathleen O’Grady.

As if peer review weren’t enough of a sham, some authors took it to another level:


It’s possible to fake peer review because authors are often asked to suggest potential reviewers for their own papers. This is done because research subjects are often blindingly niche; a researcher working in a sub-sub-field may be more aware than the journal editor of who is best-placed to assess the work.

But some journals go further and request, or allow, authors to submit the contact details of these potential reviewers. If the editor isn’t aware of the potential for a scam, they then merrily send the requests for review out to fake e-mail addresses, often using the names of actual researchers. And at the other end of the fake e-mail address is someone who’s in on the game and happy to send in a friendly review.

Fake peer reviewers often “know what a review looks like and know enough to make it look plausible,” said Elizabeth Wager, editor of the journal Research Integrity & Peer Review. But they aren’t always good at faking less obvious quirks of academia: “When a lot of the fake peer reviews first came up, one of the reasons the editors spotted them was that the reviewers responded on time,” Wager told Ars. Reviewers almost always have to be chased, so “this was the red flag. And in a few cases, both the reviews would pop up within a few minutes of each other.”

I’m sure timely submission of reviews wasn’t the only basis for calling fraud, but it is an amusing one.

It’s past time to jettison the bloated machinery of peer review. Judge work by its use, not where it’s published.

January 5, 2017

Beall’s List of Predatory Publishers 2017 [Avoiding “fake” scholarship, journalists take note]

Filed under: Journalism,News,Peer Review,Publishing — Patrick Durusau @ 11:42 am

Beall’s List of Predatory Publishers 2017 by Jeffrey Beall.

From the webpage:

Each year at this time I formally announce my updated list of predatory publishers. Because the publisher list is now very large, and because I now publish four, continuously-updated lists, the annual releases do not include the actual lists but instead include statistical and explanatory data about the lists and links to them.

Jeffrey maintains four lists of highly questionable publishers/publications.

Beall’s list should be your first stop when an article arrives from an unrecognized publication.

Not that being published in Nature and/or Science is a guarantee of quality scholarship, but a publisher’s appearance on Beall’s list should raise publication-stopping red flags.

Such a publication could be true, but bears the burden of proving itself to be so.

July 2, 2016

Developing Expert p-Hacking Skills

Filed under: Peer Review,Psychology,Publishing,R,Statistics — Patrick Durusau @ 4:00 pm

Introducing the p-hacker app: Train your expert p-hacking skills by Ned Bicare.

Ned’s p-hacker app will be welcomed by everyone who publishes where p-values are accepted.

Publishers should mandate authors and reviewers to submit six p-hacker app results along with any draft that contains, or is a review of, p-values.

The p-hacker app results won’t improve a draft and/or review but, when compared to the draft, will improve the publication in which it might have appeared.

From the post:

My dear fellow scientists!

“If you torture the data long enough, it will confess.”

This aphorism, attributed to Ronald Coase, sometimes has been used in a disrespectful manner, as if it was wrong to do creative data analysis.

In fact, the art of creative data analysis has experienced despicable attacks over the last years. A small but annoyingly persistent group of second-stringers tries to denigrate our scientific achievements. They drag psychological science through the mire.

These people propagate stupid method repetitions; and what was once one of the supreme disciplines of scientific investigation – a creative data analysis of a data set – has been crippled to conducting an empty-headed step-by-step pre-registered analysis plan. (Come on: If I lay out the full analysis plan in a pre-registration, even an undergrad student can do the final analysis, right? Is that really the high-level scientific work we were trained for so hard?).

They broadcast at an annoying frequency that p-hacking leads to more significant results, and that researchers who use p-hacking have higher chances of getting things published.

What are the consequences of these findings? The answer is clear. Everybody should be equipped with these powerful tools of research enhancement!

The art of creative data analysis

Some researchers describe a performance-oriented data analysis as “data-dependent analysis”. We go one step further, and call this technique data-optimal analysis (DOA), as our goal is to produce the optimal, most significant outcome from a data set.

I developed an online app that allows you to practice creative data analysis and polish your p-values. It’s primarily aimed at young researchers who do not have our level of expertise yet, but I guess even old hands might learn one or two new tricks! It’s called “The p-hacker” (please note that ‘hacker’ is meant in a very positive way here. You should think of the cool hackers who fight for world peace). You can use the app in teaching, or to practice p-hacking yourself.

Please test the app, and give me feedback! You can also send it to colleagues: http://shinyapps.org/apps/p-hacker.

Enjoy!
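For readers who would rather see the mechanics than click through, here is a minimal sketch (mine, not the app’s code) of two classic p-hacks the app teaches: optional stopping and testing multiple dependent variables. The data contain no true effect, yet the “success” rate lands far above the nominal 5%:

```python
# Toy demonstration of p-hacking via optional stopping and multiple DVs.
# All parameters are invented; there is no true effect in the data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def hacked_study(n_start=20, n_max=100, step=5, n_dvs=3, alpha=0.05):
    # Two groups, several dependent variables, pure noise throughout.
    a = rng.normal(size=(n_start, n_dvs))
    b = rng.normal(size=(n_start, n_dvs))
    while len(a) <= n_max:
        # Test every DV; declare victory as soon as any p dips below alpha.
        pvals = [ttest_ind(a[:, j], b[:, j]).pvalue for j in range(n_dvs)]
        if min(pvals) < alpha:
            return True                      # "significant" -> write it up!
        # Otherwise collect a few more subjects and test again.
        a = np.vstack([a, rng.normal(size=(step, n_dvs))])
        b = np.vstack([b, rng.normal(size=(step, n_dvs))])
    return False

false_positive_rate = np.mean([hacked_study() for _ in range(2000)])
print(f"false positive rate with p-hacking: {false_positive_rate:.1%}")
```

Since every dataset here is noise, every “finding” the loop produces is false, which is precisely the point the satire is making.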

July 1, 2016

Open Access Journals Threaten Science – What’s Your Romesburg Number?

Filed under: Open Access,Peer Review — Patrick Durusau @ 10:35 am

When I saw the pay-per-view screen shot of this article on Twitter, I almost dismissed it as Photoshop-based humor. But anything is possible, so I searched for the title, only to find:

How publishing in open access journals threatens science and what we can do about it by H. Charles Romesburg (Department of Environment and Society, Utah State University, Logan, UT, USA).

Abstract:

The last decade has seen an enormous increase in the number of peer-reviewed open access research journals in which authors whose articles are accepted for publication pay a fee to have them made freely available on the Internet. Could this popularity of open access publishing be a bad thing? Is it actually imperiling the future of science? In this commentary, I argue that it is. Drawing upon research literature, I explain why it is almost always best to publish in society journals (i.e., those sponsored by research societies such as Journal of Wildlife Management) and not nearly as good to publish in commercial academic journals, and worst—to the point it should normally be opposed—to publish in open access journals (e.g., PLOS ONE). I compare the operating plans of society journals and open access journals based on 2 features: the quality of peer review they provide and the quality of debate the articles they publish receive. On both features, the quality is generally high for society journals but unacceptably low for open access journals, to such an extent that open access publishing threatens to pollute science with false findings. Moreover, its popularity threatens to attract researchers’ allegiance to it and away from society journals, making it difficult for them to achieve their traditionally high standards of peer reviewing and of furthering debate. I prove that the commonly claimed benefits to science of open access publishing are nonexistent or much overestimated. I challenge the notion that journal impact factors should be a key consideration in selecting journals in which to publish. I suggest ways to strengthen the Journal and keep it strong. © 2016 The Wildlife Society.

On a pay-per-view site (of course).

You know about the Erdős number, which measures your distance from collaborating with Paul Erdős.

I propose the Romesburg Number, which measures your collaboration distance from H. Charles Romesburg. The higher your number, the further removed you are from Romesburg.

I don’t have all the data, but I am hopeful my Romesburg number is 12 or higher.
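For the record, a collaboration number of either kind is just a shortest path in the coauthorship graph. A minimal sketch, with an invented graph:

```python
# Collaboration distance (Erdős- or Romesburg-style) as a shortest path
# in a coauthorship graph. The graph below is invented for illustration.
from collections import deque

coauthors = {
    "Romesburg": {"A"},
    "A": {"Romesburg", "B"},
    "B": {"A", "You"},
    "You": {"B"},
}

def collab_number(graph, source, target):
    # Breadth-first search: each hop is one coauthorship link.
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for coauthor in graph.get(person, ()):
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append((coauthor, dist + 1))
    return float("inf")   # never connected: the number is undefined

print(collab_number(coauthors, "You", "Romesburg"))  # -> 3
```

An unreachable Romesburg yields infinity, which is presumably the number to aspire to.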

April 23, 2016

Peer Review Fails, Again.

Filed under: Bioinformatics,Peer Review,Science — Patrick Durusau @ 1:51 pm

One in 25 papers contains inappropriately duplicated images, screen finds by Cat Ferguson.

From the post:

Elisabeth Bik, a microbiologist at Stanford, has for years been a behind-the-scenes force in scientific integrity, anonymously submitting reports on plagiarism and image duplication to journal editors. Now, she’s ready to come out of the shadows.

With the help of two editors at microbiology journals, she has conducted a massive study looking for image duplication and manipulation in 20,621 published papers. Bik and co-authors Arturo Casadevall and Ferric Fang (a board member of our parent organization) found 782 instances of inappropriate image duplication, including 196 published papers containing “duplicated figures with alteration.” The study is being released as a pre-print on bioRxiv.

I don’t know which is the sadder news about this paper: that three (3) journals have so far refused to publish it, or that peer reviewers of the original papers missed the duplication.

Journals are in the business of publishing, not in the business of publishing correct results, so their refusal to publish an article that establishes the poor quality of their publications is perhaps understandable. Not acceptable, but understandable.

Unless the joke is on the reading public and other researchers. Publications are just that: publications. They may or may not resemble any experiment or experience that can be duplicated by others. Rely on published results at your own peril.

Transparent access to all data, not peer review, is the only path to solving this problem.
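Bik and colleagues worked largely by eye, but a crude first-pass screen for duplicated figures can be automated. A minimal sketch using an average hash (an illustration only, not the study’s method; assumes Pillow is installed, and the file names are hypothetical):

```python
# Near-duplicate image screen via average hashing. Illustration only;
# not the method of the Bik et al. study. Requires Pillow.
from itertools import combinations
from PIL import Image

def average_hash(path, size=8):
    # Shrink to size x size grayscale, then threshold each pixel at the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [p > mean for p in pixels]

def hamming(h1, h2):
    # Number of differing hash bits; small distance suggests duplication.
    return sum(a != b for a, b in zip(h1, h2))

def find_near_duplicates(paths, max_distance=5):
    hashes = {p: average_hash(p) for p in paths}
    return [(p1, p2) for p1, p2 in combinations(paths, 2)
            if hamming(hashes[p1], hashes[p2]) <= max_distance]

# Example with hypothetical figure files:
# print(find_near_duplicates(["fig1a.png", "fig1b.png", "fig3c.png"]))
```

A screen this simple misses rotations, crops, and splices, which is why careful eyes (or more elaborate tooling) are still needed; but even this would catch verbatim reuse.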

February 12, 2016

Overlay Journals – Community-Based Peer Review?

Filed under: Open Access,Peer Review,Publishing — Patrick Durusau @ 8:31 pm

New Journals Piggyback on arXiv by Emily Conover.

From the post:

A non-traditional style of scientific publishing is gaining ground, with new journals popping up in recent months. The journals piggyback on the arXiv or other scientific repositories and apply peer review. A link to the accepted paper on the journal’s website sends readers to the paper on the repository.

Proponents hope to provide inexpensive open access publication and streamline the peer review process. To save money, such “overlay” journals typically do away with some of the services traditional publishers provide, for example typesetting and copyediting.

Not everyone is convinced. Questions remain about the scalability of overlay journals, and whether they will catch on — or whether scientists will demand the stamp of approval (and accompanying prestige) that the established, traditional journals provide.

The idea is by no means new — proposals for journals interfacing with online archives appeared as far back as the 1990s, and a few such journals are established in mathematics and computer science. But now, say proponents, it’s an idea whose time has come.

The newest such journal is the Open Journal of Astrophysics, which began accepting submissions on December 22. Editor in Chief Peter Coles of the University of Sussex says the idea came to him several years ago in a meeting about the cost of open access journals. “They were talking about charging thousands of pounds for making articles open access,” Coles says, and he thought, “I never consult journals now; I get all my papers from the arXiv.” By adding a front end onto arXiv to provide peer review, Coles says, “We can dispense with the whole paraphernalia of traditional journals.”

Authors first submit their papers to arXiv, and then input the appropriate arXiv ID on the journal’s website to indicate that they would like their paper reviewed. The journal follows a standard peer review process, with anonymous referees whose comments remain private.

When an article is accepted, a link appears on the journal’s website and the article is issued a digital object identifier (DOI). The entire process is free for authors and readers. As APS News went to press, Coles hoped to publish the first batch of a half-dozen papers at the end of January.

My Archive for the ‘Peer Review’ Category has only a few of the high profile failures of peer review over the last five years.

You are probably familiar with at least twice as many reports on the brokenness of peer review as I have covered in this blog.

If traditional peer review is a known failure, why replicate it even for overlay journals?

Why not ask the full set of peers in a discipline? That is, the readers of articles posted in public repositories?

If a book or journal article goes uncited, isn’t that evidence that it did NOT advance the discipline in a way meaningful to its peers?

What other evidence would you have that it did advance the discipline? The opinions of friends of the editor? That seems too weak to even suggest.

Citation analysis isn’t free from issues (see Are 90% of academic papers really never cited? Searching citations about academic citations reveals the good, the bad and the ugly), but it has the advantage of drawing on the entire pool of talent that comprises a discipline.

Moreover, peer review would not be limited to a one-time judgment by traditional peer reviewers but would rest on how a monograph or article fits into the intellectual development of the discipline as a whole.

Which is more persuasive: That editors and reviewers at Science or Nature accept a paper or that in the ten years following publication, an article is cited by every other major study in the field?

Citation analysis obviates the overhead costs that are raised about organizing peer review on a massive scale. Why organize peer review at all?

Peers are going to read and cite good literature and, more likely than not, skip the bad. Unless you need to create positions for gatekeepers and other barnacles on the profession, opt for citation-based peer review built on open repositories, as the sketch below shows.
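In its simplest form, that community filter is just counting citations over an open repository’s citation graph. A toy sketch with invented data:

```python
# Toy citation-based filtering over an open repository: rank papers by
# how often the rest of the community cites them. Citation lists invented.
from collections import Counter

citations = {  # paper -> papers it cites
    "p1": ["p2", "p3"],
    "p2": ["p3"],
    "p3": [],
    "p4": ["p2", "p3"],
}

cited_by = Counter(ref for refs in citations.values() for ref in refs)
for paper in sorted(citations, key=lambda p: -cited_by[p]):
    print(paper, cited_by[paper])  # p3 leads; p1 and p4 go uncited
```

Real citation analysis weights for self-citation, field size, and time, but the principle is the same: the whole discipline does the reviewing.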

I’m betting on the communities that silently vet papers and books in spite of the formalized and highly suspect mechanisms for peer review.

Overlay journals could publish preliminary lists of articles that are of interest in particular disciplines and as community-based peer review progresses, they can publish “best of…” series as the community further filters the publications.

Community-based peer review is already operating in your discipline. Why not call it out and benefit from it?

January 31, 2016

Experts, Sources, Peer Review, Bad Poetry and Flint, Michigan.

Filed under: Peer Review,Skepticism — Patrick Durusau @ 7:54 pm

Red faces at National Archive after Baldrick poem published with WW1 soldiers’ diaries.

From the post:

Officials behind the launch of a major initiative detailing lives of ordinary soldiers during the First World War were embarrassed by the discovery that they had mistakenly included the work of Blackadder character, Baldrick, in the archive release.

The work, entitled ‘The German Guns’ and attributed to Private S.O. Baldrick, was actually written by the sitcom’s writers Richard Curtis and Ben Elton some 70 years after the end of the conflict. Elton was reported to be “delighted at the news” and friends said he was already checking to see if royalty payments may be due.

Although the archive release was scrutinised by experts, it is understood that the Baldrick poem was approved after a clerk recalled hearing Education Secretary Michael Gove referring to Baldrick in relation to the Great War, and assumed that he was of contemporary cultural significance.

Another illustration that experts and peer review aren’t the gold standards of correctness.

Or to put it differently: Mistakes happen, especially without sources.

If the only surviving information was Education Secretary Michael Gove referring to Baldrick, not only would the mistake be perpetuated but it would be immune to correction.

Citing and/or pointing to a digital resource that was the origin of the poem would be more likely to trip warnings (by date of publication) or contain a currently recognizable reference, such as Blackadder.

The same lesson should be applied to reports such as Michael Moore’s claim:

1. While the Children in Flint Were Given Poisoned Water to Drink, General Motors Was Given a Special Hookup to the Clean Water. A few months after Gov. Snyder removed Flint from the clean fresh water we had been drinking for decades, the brass from General Motors went to him and complained that the Flint River water was causing their car parts to corrode when being washed on the assembly line. The governor was appalled to hear that GM property was being damaged, so he jumped through a number of hoops and quietly spent $440,000 to hook GM back up to the Lake Huron water, while keeping the rest of Flint on the Flint River water. Which means that while the children in Flint were drinking lead-filled water, there was one—and only one—address in Flint that got clean water: the GM factory.

Verification is especially important for me because I think Michael Moore is right, and that predisposes me to accept his statements without evidence.

In no particular order:

  • What “brass” from GM? Names, addresses, contact details. Links to statements?
  • What evidence did the “brass” present? Documents? Minutes of the meeting? Date?
  • What hoops did the Governor jump through? Who else in state government was aware of the request?
  • Where is the disbursement order for the $440,000 and related work orders?
  • Who was aware of any or all of these steps, in and out of government?

Those are some of the questions to ask to verify Michael Moore’s claim and, just as importantly, to lay a trail of knowledge and responsibility for the damage to the citizens of Flint.

Just because it was your job to hook GM back up to clean water, knowing that the citizens of Flint would be drinking water that corrodes auto parts, doesn’t make it right.

There are obligations that transcend personal interests or those of government.

Not poisoning innocents is one of those.

If there were sources for Michael’s account, people could start to be brought to justice. (See, sources really are important.)

January 28, 2016

Large-scale Conspiracies Fail On Revelation? – A Contrary Example

Filed under: Peer Review,Security — Patrick Durusau @ 8:00 am

Large-scale conspiracies would quickly reveal themselves, equations show

From the post:

While we can all keep a secret, a study by Dr David Robert Grimes suggests that large groups of people sharing in a conspiracy will very quickly give themselves away. The study is published online by journal PLOS ONE.

Dr Grimes, a physicist working in cancer research, is also a science writer and broadcaster. His profile means that he receives many communications from people who believe in science-related conspiracies. Those messages prompted him to look at whether large-scale collusions were actually tenable.

He explained: ‘A number of conspiracy theories revolve around science. While believing the moon landings were faked may not be harmful, believing misinformation about vaccines can be fatal. However, not every belief in a conspiracy is necessarily wrong — for example, the Snowden revelations confirmed some theories about the activities of the US National Security Agency.’

He then looked at the maximum number of people who could take part in an intrigue in order to maintain it. For a plot to last five years, the maximum was 2521 people. To keep a scheme operating undetected for more than a decade, fewer than 1000 people can be involved. A century-long deception should ideally include fewer than 125 collaborators. Even a straightforward cover-up of a single event, requiring no more complex machinations than everyone keeping their mouth shut, is likely to be blown if more than 650 people are accomplices.
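The model behind those numbers appears to be a single-parameter exposure curve: with N conspirators and a per-person, per-year leak probability p, the chance of exposure within t years is roughly 1 - exp(-N*p*t). A minimal sketch, assuming Grimes’s reported best-case p of about 4×10⁻⁶ and a 5% tolerated failure probability (both taken from descriptions of the paper; his full model also shrinks N over time as conspirators die, which is what pushes the decade figure below 1000):

```python
# Single-leak exposure model (my reconstruction, not Grimes's full code).
# Assumed: best-case leak probability ~4.09e-6 per person per year and a
# 5% tolerated exposure probability, both from reports of the paper.
import math

P_LEAK = 4.09e-6      # per person, per year
THRESHOLD = 0.05      # tolerated probability of exposure

def exposure_probability(n_people, years, p=P_LEAK):
    # P(at least one leak) = 1 - exp(-N * p * t)
    return 1.0 - math.exp(-n_people * p * years)

def max_conspirators(years, p=P_LEAK, threshold=THRESHOLD):
    # Invert the curve: N <= -ln(1 - threshold) / (p * t)
    return int(-math.log(1.0 - threshold) / (p * years))

print(max_conspirators(5))     # ~2508, close to the 2521 quoted above
print(max_conspirators(100))   # ~125, matching the century figure
```

Note what the model assumes: a leak is the end of the conspiracy. That assumption is exactly what the rest of this post disputes.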

Dr. Grimes equates revelation with “failure” of a conspiracy.

But what of conspiracies that are “revealed” that don’t fail? Conspiracies sustained in spite of revelation of the true state of affairs.

Peer review has been discredited too often to require citation. But, for the sake of tradition: NIH grants could be assigned by lottery as effectively as the present grant process (…lotteries to pick NIH research-grant recipients); editors and peer reviewers fail to catch basic errors (Science self-corrects – instantly); and replication is a hit-or-miss affair (Replication in Psychology?).

There are literally thousands of examples of peer review as preached not being realized in practice. Yet every journal in the humanities and sciences and conferences for both, continue to practice and swear by peer review, in the face of known evidence to the contrary.

Dr. Grimes fails to account for the maintenance of the peer review conspiracy, one of the most recent outrages being that falsification of research results is not misconduct (Pressure on controversial nanoparticle paper builds).

How is it that both the conspiracy and the contrary facts are revealed over and over again, yet the conspiracy attracts new adherents every year?

BTW, the conspiracy against citizens of the United States and the world continues, despite the revelations of Edward Snowden.

Perhaps revelation isn’t “failure” for a conspiracy but simply another stage in its life-cycle?

You can see this work in full at: David Robert Grimes. On the Viability of Conspiratorial Beliefs. PLOS ONE, 2016; 11 (1): e0147905 DOI: 10.1371/journal.pone.0147905.

January 27, 2016

Another Victory For Peer Review – NOT! Cowardly Science

Filed under: Chemistry,Peer Review,Science — Patrick Durusau @ 9:35 pm

Pressure on controversial nanoparticle paper builds by Anthony King.

From the post:

The journal Science has posted an expression of concern over a controversial 2004 paper on the synthesis of palladium nanoparticles, highlighting serious problems with the work. This follows an investigation by the US funding body the National Science Foundation (NSF), which decided that the authors had falsified research data in the paper, which reported that crystalline palladium nanoparticle growth could be mediated by RNA.1 The NSF’s 2013 report on the issue, and a letter of reprimand from May last year, were recently brought into the open by a newspaper article.

The chief operating officer of the NSF identified ‘an absence of care, if not sloppiness, and most certainly a departure from accepted practices’. Recommended actions included sending letters of reprimand, requiring the subjects contact the journal to make a correction and barring the two chemists from serving as a peer reviewer, adviser or consultant for the NSF for three years.

Science notes that, though the ‘NSF did not find that the authors’ actions constituted misconduct, it nonetheless concluded that there “were significant departures from research practice”.’ The NSF report noted it would no longer fund the paper’s senior authors chemists Daniel Feldheim and Bruce Eaton at the University of Colorado, Boulder, who ‘recklessly falsified research data’, unless they ‘take specific actions to address issues’ in the 2004 paper. Science said it is working with the two authors ‘to understand their response to the NSF final ruling’.

Feldheim and Eaton have been under scrutiny since 2008, when an investigation by their former employer North Carolina State University, US, concluded the 2004 paper contained falsified data. According to Retraction Watch, Science said it would retract the paper as soon as possible.

I’m not a subscriber to Science, unfortunately, but if you are, can you write to Marcia McNutt, Editor-in-Chief, to ask why findings of “recklessly falsified research data” merit only an expression of concern?

What’s with that? Concern?

In many parts of the United States, you can be murdered with impunity for DWB, Driving While Black, but you can falsify research data and only merit an expression of “concern” from Science?

Not to mention that the NSF doesn’t think that falsifying research evidence is “misconduct.”

The NSF needs to document what it thinks “misconduct” means. I don’t think it means what they think it means.

Every profession has bad apples, but what is amazing in this case is the public kid-glove handling of known falsifiers of evidence.

What is required for a swift and effective response against scientific misconduct?

Vivisection of human babies?

Or would that only count if they failed to have a petty cash account and to reconcile it on a monthly basis?

June 4, 2015

Open Review: Grammatical theory:…

Filed under: Grammar,Linguistics,Open Access,Peer Review — Patrick Durusau @ 2:22 pm

Open Review: Grammatical theory: From transformational grammar to constraint-based approaches by Stefan Müller (Author).

From the webpage:

This book is currently at the Open Review stage. You can help the author by making comments on the preliminary version: Part 1, Part 2. Read our user guide to get acquainted with the software.

This book introduces formal grammar theories that play a role in current linguistics or contributed tools that are relevant for current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Minimalism, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar, Dependency Grammar). The key assumptions are explained and it is shown how each theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language.

In a final part of the book the approaches are compared with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The nativism hypothesis that claims that humans possess genetically determined innate language-specific knowledge is examined critically and alternative models of language acquisition are discussed. In addition, this more general part addresses issues that are discussed controversially in current theory building such as the question whether flat or binary branching structures are more appropriate, the question whether constructions should be treated on the phrasal or the lexical level, and the question whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses that are suggested in the various frameworks are often translatable into each other. The book closes with a section that shows how properties that are common to all languages or to certain language classes can be captured.

(emphasis in the original)

Part of walking the walk of open access means participating in open reviews as your time and expertise permit.

Even if grammar theory isn’t your field, professionally speaking, it will be good mental exercise to see another view of the world of language.

I am intrigued by the suggestion that “the analyses that are suggested in the various frameworks are often translatable into each other.” Shades of the application of category theory to linguistics? Mappings of identifications?

May 31, 2015

The peer review drugs don’t work [Faith Based Science]

Filed under: Peer Review,Publishing,Science,Social Sciences — Patrick Durusau @ 10:44 am

The peer review drugs don’t work by Richard Smith.

From the post:

It is paradoxical and ironic that peer review, a process at the heart of science, is based on faith not evidence.

There is evidence on peer review, but few scientists and scientific editors seem to know of it – and what it shows is that the process has little if any benefit and lots of flaws.

Peer review is supposed to be the quality assurance system for science, weeding out the scientifically unreliable and reassuring readers of journals that they can trust what they are reading. In reality, however, it is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.

As Drummond Rennie, the founder of the annual International Congress on Peer Review and Biomedical Publication, says, “If peer review was a drug it would never be allowed onto the market.”

Cochrane reviews, which gather systematically all available evidence, are the highest form of scientific evidence. A 2007 Cochrane review of peer review for journals concludes: “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.”

We can see before our eyes that peer review doesn’t work because most of what is published in scientific journals is plain wrong. The most cited paper in Plos Medicine, which was written by Stanford University’s John Ioannidis, shows that most published research findings are false. Studies by Ioannidis and others find that studies published in “top journals” are the most likely to be inaccurate. This is initially surprising, but it is to be expected as the “top journals” select studies that are new and sexy rather than reliable. A series published in The Lancet in 2014 has shown that 85 per cent of medical research is wasted because of poor methods, bias and poor quality control. A study in Nature showed that more than 85 per cent of preclinical studies could not be replicated, the acid test in science.

I used to be the editor of the BMJ, and we conducted our own research into peer review. In one study we inserted eight errors into a 600-word paper and sent it to 300 reviewers. None of them spotted more than five errors, and a fifth didn’t detect any. The median number spotted was two. These studies have been repeated many times with the same result. Other studies have shown that if reviewers are asked whether a study should be published there is little more agreement than would be expected by chance.

As you might expect, the humanities are lagging far behind the sciences in acknowledging that peer review is an exercise in social status rather than quality:


One of the changes I want to highlight is the way that “peer review” has evolved fairly quietly during the expansion of digital scholarship and pedagogy. Even though some scholars, such as Kathleen Fitzpatrick, are addressing the need for new models of peer review, recognition of the ways that this process has already been transformed in the digital realm remains limited. The 2010 Center for Studies in Higher Education report (hereafter cited as Berkeley Report) comments astutely on the conventional role of peer review in the academy:

Among the reasons peer review persists to such a degree in the academy is that, when tied to the venue of a publication, it is an efficient indicator of the quality, relevance, and likely impact of a piece of scholarship. Peer review strongly influences reputation and opportunities. (Harley, et al 21)

These observations, like many of those presented in this document, contain considerable wisdom. Nevertheless, our understanding of peer review could use some reconsideration in light of the distinctive qualities and conditions associated with digital humanities.
…(Living in a Digital World: Rethinking Peer Review, Collaboration, and Open Access by Sheila Cavanagh.)

Can you think of another area where something akin to peer review is being touted?

What about internal guidelines of the CIA, NSA, FBI and secret courts reviewing actions by those agencies?

How do those differ from peer review, which is an acknowledged failure in science and should be acknowledged in the humanities?

They are quite similar in the sense that some secret group is empowered to make decisions that impact others, and members of those groups don’t want to relinquish those powers. Surprise, surprise.

Peer review should be scrapped across the board and replaced by tracked replication and use by others, both in the sciences and the humanities.

Government decisions should be open to review by all its citizens and not just a privileged few.

May 1, 2015

Replication in Psychology?

Filed under: Peer Review,Psychology,Researchers,Science — Patrick Durusau @ 8:28 pm

First results from psychology’s largest reproducibility test by Monya Baker.

From the post:

An ambitious effort to replicate 100 research findings in psychology ended last week — and the data look worrying. Results posted online on 24 April, which have not yet been peer-reviewed, suggest that key findings from only 39 of the published studies could be reproduced.

But the situation is more nuanced than the top-line numbers suggest (See graphic, ‘Reliability test’). Of the 61 non-replicated studies, scientists classed 24 as producing findings at least “moderately similar” to those of the original experiments, even though they did not meet pre-established criteria, such as statistical significance, that would count as a successful replication.

The project, known as the “Reproducibility Project: Psychology”, is the largest of a wave of collaborative attempts to replicate previously published work, following reports of fraud and faulty statistical analysis as well as heated arguments about whether classic psychology studies were robust. One such effort, the ‘Many Labs’ project, successfully reproduced the findings of 10 of 13 well-known studies.

Replication is a “hot” issue and likely to get hotter if peer review shifts to be “open.”

Do you really want to be listed as a peer reviewer for a study that cannot be replicated?

Perhaps open peer review will lead to more accountability of peer reviewers.

Yes?

April 5, 2015

Photoshopping Science? Where Was Peer Review?

Filed under: Bioinformatics,Peer Review,Science — Patrick Durusau @ 6:46 pm

Too Much to be Nothing? by Leonid Schneider.

From the post:

(March 24th, 2015) Already at an early age, Olivier Voinnet had achieved star status among plant biologists – until suspicions arose last year that more than 30 of his publications contained dubious images. Voinnet’s colleagues are shocked – and demand an explanation.

Several months ago, a small group of international plant scientists set themselves the task of combing through the relevant literature for evidence of potential data manipulation. They posted their discoveries on the post-publication peer review platform PubPeer. As one of these anonymous scientists (whose real name is known to Laborjournal/Lab Times) explained, all this detective work was accomplished simply by taking a good look at the published figures. Soon, the scientists stumbled on something unexpected: putative image manipulations in the papers of one of the most eminent scientists in the field, Sir David Baulcombe. Even more strikingly, all these suspicious publications (currently seven, including papers in Cell, PNAS and EMBO J) featured his former PhD student, Olivier Voinnet, as first or co-author.

Baulcombe’s research group at The Sainsbury Laboratory (TSL) in Norwich, England, has discovered nothing less than RNA interference (RNAi) in plants, the famous viral defence mechanism, which went on to revolutionise biomedical research as a whole and the technology of controlled gene silencing in particular. Olivier Voinnet himself also prominently contributed to this discovery, which certainly helped him, then only 33 years old, to land a research group leader position at the CNRS Institute for Plant Molecular Biology in Strasbourg, in his native country, France. During his time in Strasbourg, Voinnet won many prestigious prizes and awards, such as the ERC Starting Grant and the EMBO Young Investigator Award, plus the EMBO Gold Medal. Finally, at the end of 2010, the Swiss Federal Institute of Technology (ETH) in Zürich appointed the 38-year-old EMBO Member as Professor of RNA biology. Shortly afterwards, Voinnet was awarded the well-endowed Max Rössler Prize of the ETH.

Disturbing news from the plant sciences of evidence of photo manipulation in published articles.

The post examines the charges at length and indicates what is or is not known at this juncture. Investigations are underway and reports from those investigation will appear in the future.

A step that could be taken now, since the articles in question (about 20) have been published, would be for the journals to disclose the peer reviewers who failed to catch the photo manipulation.

The premise of peer review is holding an author responsible for the content of their article, so it is only fair to hold peer reviewers responsible for articles approved by their reviews.

Peer review isn’t much of a gatekeeper if it is unable to discover false information or even patterns of false information prior to publication.

I haven’t been reading Lab Times on a regular basis but it looks like I need to correct that oversight.

February 21, 2015

The Many Faces of Science (the journal)

Filed under: Peer Review,Publishing — Patrick Durusau @ 7:17 pm

Andy Dalby tells a chilling tale in Why I will never trust Science again.

You need to read the full account, but as a quick summary: Andy submits a paper to Science that is rejected; within weeks he finds that Science accepted another, deeply flawed paper reaching the same conclusion; and when he notified Science, it was suggested he post an online comment. Andy’s account has quotes, links to references, etc.

That is one face of Science: secretive, arbitrary, and restricted peer review of submissions. I say “restricted peer” because Science has a tiny number of reviewers, compared to your peers, who review submissions. If you want “peer review,” you should publish with an open access journal that enlists all of your peers as reviewers, not just a few.

There is another face of Science, which appeared last December without any trace of irony at all:

Does journal peer review miss best and brightest? by David Shultz, which reads in part:

Sometimes greatness is hard to spot. Before going on to lead the Chicago Bulls to six NBA championships, Michael Jordan was famously cut from his high school basketball team. Scientists often face rejection of their own—in their case, the gatekeepers aren’t high school coaches, but journal editors and peers they select to review submitted papers. A study published today indicates that this system does a reasonable job of predicting the eventual interest in most papers, but it may shoot an air ball when it comes to identifying really game-changing research.

There is a serious chink in the armor, though: All 14 of the most highly cited papers in the study were rejected by the three elite journals, and 12 of those were bounced before they could reach peer review. The finding suggests that unconventional research that falls outside the established lines of thought may be more prone to rejection from top journals, Siler says.

Science publishes research showing its methods are flawed, and yet it takes no notice. Perhaps its rejection of Andy’s paper isn’t so strange. It must not have traveled far enough down the stairs.

I first saw Andy’s paper in a tweet by Mick Watson.

July 10, 2014

Peer Review Ring

Filed under: Peer Review,Transparency — Patrick Durusau @ 10:25 am

Scholarly journal retracts 60 articles, smashes ‘peer review ring’ by Fred Barbash.

From the post:

Every now and then a scholarly journal retracts an article because of errors or outright fraud. In academic circles, and sometimes beyond, each retraction is a big deal.

Now comes word of a journal retracting 60 articles at once.

The reason for the mass retraction is mind-blowing: A “peer review and citation ring” was apparently rigging the review process to get articles published.

You’ve heard of prostitution rings, gambling rings and extortion rings. Now there’s a “peer review ring.”

Favorable reviews were entered using fake identities as part of an open peer review process. The favorable reviews resulted in publication of those articles.

This was a peer review ring that depended upon false identities.

If peer review were more transparent, publications could mine the relationships between peer reviewers and the authors whose papers, grants, and proposals they reviewed, looking for interesting patterns.
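A minimal sketch of the kind of pattern-mining such transparency would permit: given disclosed (reviewer, author) records, flag pairs who review each other. The records below are invented:

```python
# Toy pattern-mining over disclosed review records: flag pairs of
# researchers who review each other's papers. All records are invented.
from collections import Counter

reviews = [  # (reviewer, author)
    ("r1", "r2"), ("r2", "r1"), ("r1", "r2"),
    ("r3", "r4"),
    ("r5", "r6"), ("r6", "r5"),
]

counts = Counter(reviews)
# A pair is reciprocal if reviews flow in both directions.
reciprocal = sorted({tuple(sorted(p)) for p in counts
                     if (p[1], p[0]) in counts})
for a, b in reciprocal:
    print(f"{a} <-> {b}: {counts[(a, b)]} + {counts[(b, a)]} reviews")
```

Reciprocity alone proves nothing in a small subfield, but combined with review timing and outcomes it is exactly the signal a transparent system could surface.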

I first saw this in a tweet by Steven Strogatz.

April 22, 2014

Innovations in peer review:…

Filed under: Bioinformatics,Biomedical,Peer Review,Publishing — Patrick Durusau @ 9:54 am

Innovations in peer review: join a discussion with our Editors by Shreeya Nanda.

From the post:

Innovation may not be an adjective often associated with peer review, indeed commentators have claimed that peer review slows innovation and creativity in science. Preconceptions aside, publishers are attempting to shake things up a little, with various innovations in peer review, and these are the focus of a panel discussion at BioMed Central’s Editors’ Conference on Wednesday 23 April in Doha, Qatar. This follows our spirited discussion at the Experimental Biology conference in Boston last year.

The discussion last year focussed on the limitations of the traditional peer review model (you can see a video here). This year we want to talk about innovations in the field and the ways in which the limitations are being addressed. Specifically, we will focus on open peer review, portable peer review – in which we help authors transfer their manuscript, often with reviewers’ reports, to a more appropriate journal – and decoupled peer review, which is undertaken by a company or organisation independent of, or on contract from, a journal.

We will be live tweeting from the session at 11.15am local time (9.15am BST), so if you want to join the discussion or put questions to our panellists, please follow #BMCEds14. If you want to brush up on any or all of the models that we’ll be discussing, have a look at some of the content from around BioMed Central’s journals, blogs and Biome below:

This post includes pointers to a number of useful resources concerning the debate around peer review.

But there are oddities as well. First, there is the claim that peer review “slows innovation and creativity in science,” which sits oddly against recent reports that peer review is no better than random chance for grants (…lotteries to pick NIH research-grant recipients) and the not infrequent reports of false papers, fraud in actual papers, and a general inability to replicate research described in papers (Reproducible Research/(Mapping?)).

A claim doesn’t have to appear on the alt.fringe.peer.review newsgroup (imaginary newsgroup) in order to be questionable on its face.

Secondly, despite the invitation to follow and participate on Twitter, holding the meeting in Qatar means potential attendees from the United States will have to rise at:

Eastern 4:15 AM (last year’s location)

Central 3:15 AM

Mountain 2:15 AM

Western 1:15 AM

I wonder how participation levels for Qatar this year will compare to those for Boston last year.

Nothing against non-United States locations, but non-junket locations, such as major educational/research hubs, should be the sites for such meetings.

March 14, 2014

Science self-corrects – instantly

Filed under: Peer Review — Patrick Durusau @ 6:47 pm

Science self-corrects – instantly

A highly amusing account of how post-publication review uncovered serious flaws in a paper published with great fanfare in Nature.

To give you the tone of the post:

Publishing a paper is still considered a definitive event. And what could be more definitive than publishing two Nature papers back to back on the same subject? Clearly a great step forward must have occurred. Just such a seismic event happened on the 29th of January, when Haruko Obokata and colleagues described a revolutionarily simple technique for producing pluripotent cells. A short dunk in the acid bath or brief exposure to any one of a number of stressors sufficed to produce STAP (Stimulus-Triggered Acquisition of Pluripotency) cells, offering enormous simplification in stem cell research and opening new therapeutic avenues.

As you may be guessing, the “three overworked referees and a couple of editors” did not catch serious issues with the papers.

But some 4000 viewers at PubPeer did.

If traditional peer review had independent and adequately compensated peer reviewers, the results might be different. But the lack of independence and compensation is designed to produce a minimum review, not a peer review.

Ironic that electronic journals and publications aren’t given weight in scholarly circles due to a lack of “peer review,” when some “peer review” is nothing more than a hope the author has performed well. A rather vain hope in a number of cases.

I do disagree with the PubPeer policy on anonymity.

Authors could be retaliated against, but revolutions are never bloodless. What would the civil rights movement have accomplished with anonymous letters to editors? It was only the outrages and excesses of their oppressors that finally resulted in some change (an ongoing process even now).

Serious change will occur if and only if the “three overworked referees and a couple of editors” are publicly outed by named colleagues. And for that process to be repeated over and over again. Until successful peer review is a mark of quality of research and writing, not just another step at a publication mill.

October 8, 2012

Peer2ref: A new online tool for locating peer reviewers

Filed under: Peer Review,Searching — Patrick Durusau @ 9:06 am

Peer2ref: A new online tool for locating peer reviewers by Jack Cochrane.

From the post:

Findings published in the peer reviewed journal BioData Mining…” A sentence like this instantly adds credibility to a scientific article. But it isn’t simply the name of a prestigious journal that assures readers of an article’s validity; it’s the knowledge that the research has been peer reviewed.

Peer review, the process by which scientists critically evaluate their colleagues’ methods and findings, has been essential to scientific discourse for centuries. In those early days of scientific research, with fewer journals and lower levels of specialization, scientists found it relatively easy to devote their time to assessing new findings. However as the pace of research has expanded, so too has the number of articles and the number of journals set up to publish them. Scientists, already faced with increasingly full to-do-lists, have struggled to keep up.

Exacerbating this problem is the specialization of many articles, which now come from increasingly narrow fields of research. This expansion of the body of scientific knowledge and the resulting compartmentalization of many research fields means that locating qualified peer reviewers can be a major challenge.

Jack points to software developed by Miguel A. Andrade-Navarro et al. that can help solve the problem of finding peer reviewers.

From his description of the software:

This allows users to search for authors and editors in specific fields using keywords related to the subject of an article, making Peer2ref highly effective at finding experts in narrow fields of research.

Does “narrow field of research” sound appropriate for a focused topic map effort?

Identifying the experts in an area would be a good first step.
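A minimal sketch of that first step, in the spirit of (but not reproducing) Peer2ref: score candidate reviewers by the overlap between an article’s keywords and the keywords of their past papers. The profiles below are invented:

```python
# Toy keyword-based expert finding. Not Peer2ref's algorithm; the
# author profiles and keywords are invented for illustration.
author_keywords = {
    "author_a": {"rna", "nanoparticle", "synthesis", "palladium"},
    "author_b": {"peer", "review", "bibliometrics", "citation"},
    "author_c": {"rna", "interference", "plant", "silencing"},
}

def rank_reviewers(article_keywords, profiles, top_n=2):
    # Jaccard similarity: shared keywords over combined keywords.
    scores = {
        author: len(article_keywords & kws) / len(article_keywords | kws)
        for author, kws in profiles.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(rank_reviewers({"rna", "plant", "silencing"}, author_keywords))
# -> author_c first, author_a a distant second
```

The same overlap scoring would serve a topic map effort: the keyword sets are crude identity proxies for subjects, and the ranking surfaces who clusters around them.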

I first saw this in Christophe Lalanne’s A bag of tweets / September 2012

November 12, 2011

Real scientists never report fraud

Filed under: Peer Review,Publishing,Research Methods — Patrick Durusau @ 8:41 pm

Real scientists never report fraud

Daniel Lemire writes (in part):

People who want to believe that “peer reviewed work” means “correct work” will object that this is just one case. But what about the recently dismissed Harvard professor Marc Hauser? We find exactly the same story. Marc Hauser published over 200 papers in the best journals, making up data as he went. Again colleagues, journals and collaborators failed to openly challenge him: it took naive students, that is, outsiders, to report the fraud.

While I agree that other “professionals” may not have time to closely check work in the peer review process (see some of the comments), I think that illustrates the valuable role that students can play in the publication process.

Why not have a departmental requirement that papers for publication be circulated among students with an anonymous but public comment mechanism? Students are as pressed for time as anyone but they have the added incentive of wanting to become skilled at criticism of ideas and writing.

Not only would such a review process increase the likelihood of detection of fraud, but it would catch all manner of poor writing or citation practices. I regularly encounter published CS papers that incorrectly cite other published work or that cite work eventually published but under other titles. No fraud, just poor practices.
