Archive for the ‘Publishing’ Category

Maintaining Your Access to Sci-Hub

Tuesday, November 21st, 2017

A tweet today by @Sci_Hub advises:

Sci-Hub is working. To get around domain names problem, use custom Sci-Hub DNS servers 80.82.77.83 and 80.82.77.84. How to customize DNS in Windows: https://pchelp.ricmedia.com/set-custom-dns-servers-windows/

No doubt, Elsevier will continue to attempt to interfere with your access to Sci-Hub.

Already the largest, most bloated, and most insecure academic publishing presence on the Internet, Elsevier labors every day to become an even more attractive nuisance.

What corporate strategy is served by painting a flashing target on your Internet presence?

Thoughts?

PS: Do update your DNS entries while pondering that question.
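If you prefer the command line to the GUI steps in that link, the same change can be made with netsh from an elevated prompt (a sketch; the interface name "Ethernet" is an assumption, check yours with netsh interface show interface):

```shell
:: Point the "Ethernet" interface at the custom Sci-Hub DNS servers
netsh interface ip set dns name="Ethernet" static 80.82.77.83
netsh interface ip add dns name="Ethernet" 80.82.77.84 index=2
```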

The Infectiousness of Pompous Prose – #GettysburgAbstract

Sunday, October 29th, 2017

See the full size version.

Editors could improve the readability of authors' prose, but then peer reviewers might actually read the papers they are assigned to review.

Don’t wait on miracles to improve the readability of scientific articles.

Create your Gettysburg Abstract of the most important point of the paper.

Lincoln’s Gettysburg Address was 272 words long. A #GettysburgAbstract is 272 words or fewer.

If you can’t capture an article in 272 words or fewer, read the article again.
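Checking a draft against the 272-word budget takes only a few lines of Python (a trivial sketch):

```python
def gettysburg_check(abstract: str, limit: int = 272) -> bool:
    """Return True if the abstract fits the #GettysburgAbstract budget."""
    words = abstract.split()
    print(f"{len(words)} words ({limit - len(words)} to spare)")
    return len(words) <= limit

# Example: Lincoln's opening sentence is well under budget.
opening = ("Four score and seven years ago our fathers brought forth on this "
           "continent, a new nation, conceived in Liberty, and dedicated to "
           "the proposition that all men are created equal.")
print(gettysburg_check(opening))
```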

Post your #GettysburgAbstract as a comment and/or review of the article. Spread the word.

I first saw this image in a tweet by Mara Averick.

Success in Astronomy? Some Surprising Strategies

Friday, October 27th, 2017

Success in Astronomy? Some Surprising Strategies by Stacy Kim.

Kim reviews How long should an astronomical paper be to increase its impact? by K. Z. Stanek, saying:

What do you think it takes to succeed in astronomy? Some innate brilliance? Hard work? Creativity? Great communication skills?

What about writing lots of short papers? For better or for worse, one’s success as an astronomer is frequently measured in the number of papers one’s written and how well cited they are. Papers are a crucial method of communicating results to the rest of the astronomy community, and the way they’re written and how they’re published can have a significant impact on the number of citations that you receive.

There are a number of simple ways to increase the citation counts on your papers. There are things you might expect: if you’re famous within the community (e.g. a Nobel Prize winner), or are in a very hot topic like exoplanets or cosmology, you’ll tend to get cited more often. There are those that make sense: papers that are useful, such as dust maps, measurements of cosmological parameters, and large sky surveys often rank among the most-cited papers in astronomy. And then there’s the arXiv, a preprint service that is highly popular in astronomy. It’s been shown that papers that appear on the arXiv are cited twice as much as those that aren’t, and furthermore—those at the top of the astro-ph list are twice as likely to be cited than those that appear further down.

If you need a quick lesson from the article, Kim suggests posting to arXiv at 4pm, so your paper appears higher on the list.

For more publishing advice, see Kim’s review or the paper in full.

Enjoy!

Papers we Scrutinize: How to critically read papers

Thursday, April 13th, 2017

Papers we Scrutinize: How to critically read papers by Tomas Petricek.

From the post:

As someone who enjoys being at the intersection of the academic world and the world of industry, I’m very happy to see any attempts at bridging this harmful gap. For this reason, it is great to see that more people are interested in reading academic papers and that initiatives like Papers We Love are there to help.

There is one caveat with academic papers though. It is very easy to see academic papers as containing eternal and unquestionable truths, rather than as something that the reader should actively interact with. I recently remarked about this saying that “reading papers” is too passive. I also mentioned one way of doing more than just “reading”, which is to write “critical reviews” – something that we recently tried to do at the Salon des Refusés workshop. In this post, I would like to expand my remark.

First of all, it is very easy to miss the context in which papers are written. The life of an academic paper is not complete after it is published. Instead, it continues living its own life – people refer to it in various contexts, give different meanings to entities that appear in the paper and may “love” different parts of the paper than the author. This also means that there are different ways of reading papers. You can try to reconstruct the original historical context, read it according to the current main-stream interpretation or see it as an inspiration for your own ideas.

I suspect that many people, both in academia and outside, read papers without worrying about how they are reading them. You can certainly “do science” or “read papers” without reflecting on the process. That said, I think the philosophical reflection is important if we do not want to get stuck in local maxima.

Petricek goes on to define three (3) different ways to read a paper, using A Formulation of the Simple Theory of Types by Alonzo Church.

Worth reading and following, but consider more concrete guidance as well:

The requirements to make reading and peer review useful activities are well known.

Only you can prevent the failure to meet those requirements in your case. Yes?

Unpaywall (Access to Academic Publishing)

Wednesday, April 12th, 2017

How a Browser Extension Could Shake Up Academic Publishing by Lindsay McKenzie.

From the post:

Open-access advocates have had several successes in the past few weeks. The Bill & Melinda Gates Foundation started its own open-access publishing platform, which the European Commission may replicate. And librarians attending the Association of College and Research Libraries conference in March were glad to hear that the Open Access Button, a tool that helps researchers gain free access to copies of articles, will be integrated into existing interlibrary-loan arrangements.

Another initiative, called Unpaywall, is a simple browser extension, but its creators, Jason Priem and Heather Piwowar, say it could help alter the status quo of scholarly publishing.

“We’re setting up a lemonade stand right next to the publishers’ lemonade stand,” says Mr. Priem. “They’re charging $30 for a glass of lemonade, and we’re showing up right next to them and saying, ‘Lemonade for free’. It’s such a disruptive, exciting, and interesting idea, I think.”

Like the Open Access Button, Unpaywall is open-source, nonprofit, and dedicated to improving access to scholarly research. The button, devised in 2013, has a searchable database that comes into play when a user hits a paywall.

When an Unpaywall user lands on the page of a research article, the software scours thousands of institutional repositories, preprint servers, and websites like PubMed Central to see if an open-access copy of the article is available. If it is, users can click a small green tab on the side of the screen to view a PDF.

Sci-Hub gets an honorable mention as a “…pirate website…,” use of which carries “…so much fear and uncertainty….” (Disclaimer: the author of those comments, Jason Priem, is one of the creators of Unpaywall.)

Hardly. What was long suspected about academic publishing has become widely known: Peer review is a fiction, even at the best known publishers, to say nothing of lesser lights in the academic universe. The “contribution” of publishers is primarily maintaining lists of editors for padding the odd resume. (Peer Review failure: Science and Nature journals reject papers because they “have to be wrong”.)

I should not overlook publishers as a source of employment for “gatekeepers”: those unable to make a contribution of their own, who seek to prevent others from doing so and, failing that, to prevent still others from learning of those contributions.

Serfdom was abolished centuries ago; academic publishing deserves a similar fate.

PS: For some reason authors are reluctant to post the web address for Sci-Hub: https://sci-hub.cc/.

Textbook manifesto

Sunday, April 9th, 2017

Textbook manifesto by Allen B. Downey.

From the post:

My textbook manifesto is so simple it sounds stupid. Here it is:

Students should read and understand textbooks.

That’s it. It’s hard to imagine that anyone would disagree, but here’s the part I find infuriating: the vast majority of textbook authors, publishers, professors and students behave as if they do not expect students to read or understand textbooks.

Here’s how it works. Most textbook authors sit down with the goal of writing the bible of their field. Since it is meant to be authoritative, they usually stick to well-established ideas and avoid opinion and controversy. The result is a book with no personality.

For publishers, the primary virtue is coverage. They want books that can be used for many classes, so they encourage authors to include all the material for all possible classes. The result is a 1000-page book with no personality.
… (emphasis in original)

You probably know Downey from his Think Python, Think Bayes books.

Think Python, with the index, front matter, etc., runs 244 pages from tip to tail.

That is longer than his proposed 10 pages per week for a semester course (about 140 pages total for a class), but not unreasonably so.

Take this as encouragement that a useful book need not be comprehensive, just effectively communicating more than the reader knows already.

Beall’s List of Predatory Publishers 2017 [Avoiding “fake” scholarship, journalists take note]

Thursday, January 5th, 2017

Beall’s List of Predatory Publishers 2017 by Jeffrey Beall.

From the webpage:

Each year at this time I formally announce my updated list of predatory publishers. Because the publisher list is now very large, and because I now publish four, continuously-updated lists, the annual releases do not include the actual lists but instead include statistical and explanatory data about the lists and links to them.

Jeffrey maintains four lists of highly questionable publishers and publications.

Beall’s list should be your first stop when an article arrives from an unrecognized publication.

Not that being published in Nature and/or Science guarantees quality scholarship, but publication by a publisher on Beall’s list should raise publication-stopping red flags.

Such a publication could be true, but bears the burden of proving itself to be so.

The Joy of Collective Action: Elsevier Boycott – Germany

Friday, December 16th, 2016

Germany-wide consortium of research libraries announce boycott of Elsevier journals over open access by Cory Doctorow.

Cory writes:

Germany’s DEAL project, which includes over 60 major research institutions, has announced that all of its members are canceling their subscriptions to all of Elsevier’s academic and scientific journals, effective January 1, 2017.

The boycott is in response to Elsevier’s refusal to adopt “transparent business models” to “make publications more openly accessible.”

Just guessing, but I suspect the DEAL project would welcome news of other consortia and schools taking similar action.

Over the short term, scholars can tide themselves over with Sci-Hub.

Cory ends:

No full-text access to Elsevier journals to be expected from 1 January 2017 on [Göttingen State and University Library]

How many libraries will you contact by the end of this year?

Free & Interactive Online Introduction to LaTeX

Thursday, July 28th, 2016

Free & Interactive Online Introduction to LaTeX by John Lees-Miller.

From the webpage:

Part 1: The Basics

Welcome to the first part of our free online course to help you learn LaTeX. If you have never used LaTeX before, or if it has been a while and you would like a refresher, this is the place to start. This course will get you writing LaTeX right away with interactive exercises that can be completed online, so you don’t have to download and install LaTeX on your own computer.

In this part of the course, we’ll take you through the basics of how LaTeX works, explain how to get started, and go through lots of examples. Core LaTeX concepts, such as commands, environments, and packages, are introduced as they arise. In particular, we’ll cover:

  • Setting up a LaTeX Document
  • Typesetting Text
  • Handling LaTeX Errors
  • Typesetting Equations
  • Using LaTeX Packages

In part two and part three, we’ll build up to writing beautiful structured documents with figures, tables and automatic bibliographies, and then show you how to apply the same skills to make professional presentations with beamer and advanced drawings with TikZ. Let’s get started!
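If you want a taste of what the first exercises cover, a complete minimal LaTeX document is only a few lines; compiling it with pdflatex produces a one-page PDF:

```latex
\documentclass{article}

\begin{document}
Hello, \LaTeX! A typeset equation comes free with the deal:
\[
  e^{i\pi} + 1 = 0
\]
\end{document}
```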

Since I mentioned fonts earlier today in Learning a Manifold of Fonts, it seems only fair to post about the only typesetting language that can take full advantage of any font you care to use.

TeX was released in 1978 and it has yet to be equaled by any non-TeX/LaTeX system.

It’s almost forty (40) years old, widely used and still sui generis.

Web Design in 4 minutes

Thursday, July 28th, 2016

Web Design in 4 minutes by Jeremy Thomas.

From the post:

Let’s say you have a product, a portfolio, or just an idea you want to share with everyone on your own website. Before you publish it on the internet, you want to make it look attractive, professional, or at least decent to look at.

What is the first thing you need to work on?

This is more for me than you, especially if you consider my much-neglected homepage.

Over the years my blog has consumed far more of my attention than my website.

I have some new, longer material that is more appropriate for the website so this post is a reminder to me to get my act together over there!

Other web design resource suggestions welcome!

Everything You Wanted to Know about Book Sales (But Were Afraid to Ask)

Tuesday, July 5th, 2016

Everything You Wanted to Know about Book Sales (But Were Afraid to Ask) by Lincoln Michel.

From the post:

Publishing is the business of creating books and selling them to readers. And yet, for some reason we aren’t supposed to talk about the latter.

Most literary writers consider book sales a half-crass / half-mythological subject that is taboo to discuss.
While authors avoid the topic, every now and then the media brings up book sales — normally to either proclaim, yet again, the death of the novel, or to make sweeping generalizations about the attention spans of different generations. But even then, the data we are given is almost completely useless for anyone interested in fiction and literature. Earlier this year, there was a round of excited editorials about how “print is back, baby” after industry reports showed print sales increasing for the second consecutive year. However, the growth was driven almost entirely by non-fiction sales… more specifically adult coloring books and YouTube celebrity memoirs. As great as adult coloring books may be, their sales figures tell us nothing about the sales of, say, literary fiction.

Lincoln’s account mirrors my experience (twice) with a small press decades ago.

While you (rightfully) think that every sane person on the planet will forego the rent in order to purchase your book, sadly your publisher is very unlikely to share that view.

One of the comments to this post reads:

…Writing is a calling but publishing is a business.

Quite so.

Don’t be discouraged by this account but do allow it to influence your expectations, at least about the economic rewards of publishing.

Just in case I get hit with the publishing bug again, good luck to us all!

Developing Expert p-Hacking Skills

Saturday, July 2nd, 2016

Introducing the p-hacker app: Train your expert p-hacking skills by Ned Bicare.

Ned’s p-hacker app will be welcomed by everyone who publishes where p-values are accepted.

Publishers should require authors and reviewers to submit six p-hacker app results along with any draft that contains, or is a review of, p-values.

The p-hacker app results won’t improve a draft and/or review, but when compared to the draft, will improve the publication in which it might have appeared.

From the post:

My dear fellow scientists!

“If you torture the data long enough, it will confess.”

This aphorism, attributed to Ronald Coase, sometimes has been used in a disrespective manner, as if it was wrong to do creative data analysis.

In fact, the art of creative data analysis has experienced despicable attacks over the last years. A small but annoyingly persistent group of second-stringers tries to denigrate our scientific achievements. They drag psychological science through the mire.

These people propagate stupid method repetitions; and what was once one of the supreme disciplines of scientific investigation – a creative data analysis of a data set – has been crippled to conducting an empty-headed step-by-step pre-registered analysis plan. (Come on: If I lay out the full analysis plan in a pre-registration, even an undergrad student can do the final analysis, right? Is that really the high-level scientific work we were trained for so hard?).

They broadcast in an annoying frequency that p-hacking leads to more significant results, and that researchers who use p-hacking have higher chances of getting things published.

What are the consequence of these findings? The answer is clear. Everybody should be equipped with these powerful tools of research enhancement!

The art of creative data analysis

Some researchers describe a performance-oriented data analysis as “data-dependent analysis”. We go one step further, and call this technique data-optimal analysis (DOA), as our goal is to produce the optimal, most significant outcome from a data set.

I developed an online app that allows to practice creative data analysis and how to polish your p-values. It’s primarily aimed at young researchers who do not have our level of expertise yet, but I guess even old hands might learn one or two new tricks! It’s called “The p-hacker” (please note that ‘hacker’ is meant in a very positive way here. You should think of the cool hackers who fight for world peace). You can use the app in teaching, or to practice p-hacking yourself.

Please test the app, and give me feedback! You can also send it to colleagues: http://shinyapps.org/apps/p-hacker.
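The core trick the app teaches, optional stopping, is easy to demonstrate: peek at a z-test after every batch of data and stop as soon as p < 0.05, and the false-positive rate climbs well above the nominal 5%. A toy simulation in Python (not the app's code):

```python
import math
import random

def p_value(samples):
    """Two-sided one-sample z-test against mean 0, known sd = 1."""
    z = sum(samples) / math.sqrt(len(samples))
    return math.erfc(abs(z) / math.sqrt(2))

def peeking_test(rng, batch=10, max_n=100, alpha=0.05):
    """Add data in batches, testing after each; stop at 'significance'."""
    samples = []
    while len(samples) < max_n:
        samples.extend(rng.gauss(0, 1) for _ in range(batch))
        if p_value(samples) < alpha:
            return True  # "significant" despite a true null effect
    return False

rng = random.Random(42)
runs = 1000
hits = sum(peeking_test(rng) for _ in range(runs))
print(f"False-positive rate with peeking: {hits / runs:.1%}")
```

With ten peeks per experiment, the realized false-positive rate lands several times above the nominal 5%, which is exactly the "enhancement" the satire is selling.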

Enjoy!

TUGBoat – The Complete Set

Thursday, June 30th, 2016

Norm Walsh tweeted an offer of circa 1990 issues of TUGBoat for free to a good home today (30 June 2016).

On the off chance that you, like me, have only a partial set, consider the full set, TUGBoat Contents, 1980 1:1 to date.

From the TUGBoat homepage:

The TUGboat journal is a unique benefit of joining TUG. It is currently published three times a year and distributed to all TUG members (for that year). Anyone can also buy copies from the TUG store.

We post articles online after about one year for the benefit of the entire TeX community, but TUGboat is funded by member support. So please consider joining TUG if you find TUGboat useful.

TUGboat publishes the proceedings of the TUG Annual Meetings, and sometimes other conferences. A list of other publications by TUG, and by other user groups is available.

This is an opportunity to support the TeX Users Group (TUG) without looking for a future home for your printed copies of TUGBoat. Donate to TUG and read online!

Enjoy!

The No-Value-Add Of Academic Publishers And Peer Review

Tuesday, June 21st, 2016

Comparing Published Scientific Journal Articles to Their Pre-print Versions by Martin Klein, Peter Broadwell, Sharon E. Farb, Todd Grappone.

Abstract:

Academic publishers claim that they add value to scholarly communications by coordinating reviews and contributing and enhancing text during publication. These contributions come at a considerable cost: U.S. academic libraries paid $1.7 billion for serial subscriptions in 2008 alone. Library budgets, in contrast, are flat and not able to keep pace with serial price inflation. We have investigated the publishers’ value proposition by conducting a comparative study of pre-print papers and their final published counterparts. This comparison had two working assumptions: 1) if the publishers’ argument is valid, the text of a pre-print paper should vary measurably from its corresponding final published version, and 2) by applying standard similarity measures, we should be able to detect and quantify such differences. Our analysis revealed that the text contents of the scientific papers generally changed very little from their pre-print to final published versions. These findings contribute empirical indicators to discussions of the added value of commercial publishers and therefore should influence libraries’ economic decisions regarding access to scholarly publications.

The authors performed a very detailed analysis of pre-prints, 90%–95% of which were first published as open pre-prints, and conclude there is no appreciable difference between the pre-prints and the final published versions.
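The kind of comparison the paper describes can be sketched with a standard similarity measure; here Python's difflib stands in for whatever measures the authors actually used, and the example texts are hypothetical:

```python
from difflib import SequenceMatcher

def text_similarity(preprint: str, published: str) -> float:
    """Word-level similarity ratio in [0, 1]; 1.0 means identical texts."""
    return SequenceMatcher(None, preprint.split(), published.split()).ratio()

# Hypothetical example: copy-editing tweaks barely move the needle.
preprint = "We analyse the spectra of 42 galaxies and find no significant evolution."
published = "We analyze the spectra of 42 galaxies and find no significant evolution."
print(f"{text_similarity(preprint, published):.2f}")
```

A corpus-wide run of scores clustered near 1.0 is the shape of the result the authors report.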

I take “…no appreciable difference…” to mean academic publishers and the peer review process, despite claims to the contrary, contribute little or no value to academic publications.

How’s that for a bargaining chip in negotiating subscription prices?

Where Has Sci-Hub Gone?

Saturday, June 18th, 2016

While I was writing about the latest EC idiocy (link tax), I was reminded of Sci-Hub.

Just checking to see if it was still alive, I tried http://sci-hub.io/.

404 by standard DNS service.

If you are having the same problem, Mike Masnick reports in Sci-Hub, The Repository Of ‘Infringing’ Academic Papers Now Available Via Telegram, that you can now access Sci-Hub via Telegram.

I’m not on Telegram, yet, but that may be changing soon. 😉

BTW, while writing this update, I stumbled across: The New Napster: How Sci-Hub is Blowing Up the Academic Publishing Industry by Jason Shen.

From the post:


This is obviously piracy. And Elsevier, one of the largest academic journal publishers, is furious. In 2015, the company earned $1.1 billion in profits on $2.9 billion in revenue [2] and Sci-hub directly attacks their primary business model: subscription service it sells to academic organizations who pay to get access to its journal articles. Elsevier filed a lawsuit against Sci-Hub in 2015, claiming Sci-hub is causing irreparable injury to the organization and its publishing partners.

But while Elsevier sees Sci-Hub as a major threat, for many scientists and researchers, the site is a gift from the heavens, because they feel unfairly gouged by the pricing of academic publishing. Elsevier is able to boast a lucrative 37% profit margin because of the unusual (and many might call exploitative) business model of academic publishing:

  • Scientists and academics submit their research findings to the most prestigious journal they can hope to land in, without getting any pay.
  • The journal asks leading experts in that field to review papers for quality (this is called peer-review and these experts usually aren’t paid)
  • Finally, the journal turns around and sells access to these articles back to scientists/academics via the organization-wide subscriptions at the academic institution where they work or study

There’s piracy afoot, of that I have no doubt.

Elsevier:

  • Relies on research it does not sponsor
  • Research results are submitted to it for free
  • Research is reviewed for free
  • Research is published in journals of value only because of the free contributions to them
  • Elsevier makes a 37% profit off of that free content

There is piracy, but Jason fails to point to Elsevier as the pirate.

Sci-Hub/Alexandra Elbakyan is re-distributing intellectual property that was stolen by Elsevier from the academic community, for its own gain.

It’s time to bring Elsevier’s reign of terror against the academic community to an end. Support Sci-Hub in any way possible.

The Symptom of Many Formats

Monday, June 13th, 2016

Distro.Mic: An Open Source Service for Creating Instant Articles, Google AMP and Apple News Articles

From the post:

Mic is always on the lookout for new ways to reach our audience. When Facebook, Google and Apple announced their own native news experiences, we jumped at the opportunity to publish there.

While setting Mic up on these services, David Björklund realized we needed a common article format that we could use for generating content on any platform. We call this format article-json, and we open-sourced parsers for it.

Article-json got a lot of support from Google and Apple, so we decided to take it a step further. Enter DistroMic. Distro lets anyone transform an HTML article into the format mandated by one of the various platforms.

Sigh.

While I applaud the DistroMic work, I am saddened that it was necessary.

From the DistroMic page, here is the same article in three formats:

Apple:

{
  "article": [
    {
      "text": "Astronomers just announced the universe might be expanding up to 9% faster than we thought.\n",
      "additions": [
        {
          "type": "link",
          "rangeStart": 59,
          "rangeLength": 8,
          "URL": "http://hubblesite.org/newscenter/archive/releases/2016/17/text/"
        }
      ],
      "inlineTextStyles": [
        {
          "rangeStart": 59,
          "rangeLength": 8,
          "textStyle": "bodyLinkTextStyle"
        }
      ],
      "role": "body",
      "layout": "bodyLayout"
    },
    {
      "text": "It’s a surprising insight that could put us one step closer to finally figuring out what the hell dark energy and dark matter are. Or it could mean that we’ve gotten something fundamentally wrong in our understanding of physics, perhaps even poking a hole in Einstein’s theory of gravity.\n",
      "additions": [
        {
          "type": "link",
          "rangeStart": 98,
          "rangeLength": 28,
          "URL": "http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/"
        }
      ],
      "inlineTextStyles": [
        {
          "rangeStart": 98,
          "rangeLength": 28,
          "textStyle": "bodyLinkTextStyle"
        }
      ],
      "role": "body",
      "layout": "bodyLayout"
    },
    {
      "role": "container",
      "components": [
        {
          "role": "photo",
          "URL": "bundle://image-0.jpg",
          "style": "embedMediaStyle",
          "layout": "embedMediaLayout",
          "caption": {
            "text": "Source: \n NASA\n \n",
            "additions": [
              {
                "type": "link",
                "rangeStart": 13,
                "rangeLength": 4,
                "URL": "http://www.nasa.gov/mission_pages/hubble/hst_young_galaxies_200604.html"
              }
            ],
            "inlineTextStyles": [
              {
                "rangeStart": 13,
                "rangeLength": 4,
                "textStyle": "embedCaptionTextStyle"
              }
            ],
            "textStyle": "embedCaptionTextStyle"
          }
        }
      ],
      "layout": "embedLayout",
      "style": "embedStyle"
    }
  ],
  "bundlesToUrls": {
    "image-0.jpg": "http://bit.ly/1UFHdpf"
  }
}

Facebook:

<article>
  <p>Astronomers just announced the universe might be expanding
  <a href="http://hubblesite.org/newscenter/archive/releases/2016/17/text/">up to 9%</a> faster than we thought.</p>
  <p>It’s a surprising insight that could put us one step closer to finally figuring out what the hell
  <a href="http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/">dark energy and dark matter</a> are. Or it could mean that we’ve gotten something fundamentally wrong in our understanding of physics, perhaps even poking a hole in Einstein’s theory of gravity.</p>
  <figure data-feedback="fb:likes,fb:comments">
    <img src="http://bit.ly/1UFHdpf"></img>
    <figcaption><cite>
      Source: <a href="http://www.nasa.gov/mission_pages/hubble/hst_young_galaxies_200604.html">NASA</a>
    </cite></figcaption>
  </figure>
</article>

Google:

<article>
  <p>Astronomers just announced the universe might be expanding
  <a href="http://hubblesite.org/newscenter/archive/releases/2016/17/text/">up to 9%</a> faster than we thought.</p>
  <p>It’s a surprising insight that could put us one step closer to finally figuring out what the hell
  <a href="http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/">dark energy and dark matter</a> are. Or it could mean that we’ve gotten something fundamentally wrong in our understanding of physics, perhaps even poking a hole in Einstein’s theory of gravity.</p>
  <figure>
    <amp-img width="900" height="445" layout="responsive" src="http://bit.ly/1UFHdpf"></amp-img>
    <figcaption>Source:
      <a href="http://www.nasa.gov/mission_pages/hubble/hst_young_galaxies_200604.html">NASA</a>
    </figcaption>
  </figure>
</article>

All starting from the same HTML source:

<p>Astronomers just announced the universe might be expanding
<a href="http://hubblesite.org/newscenter/archive/releases/2016/17/text/">up to 9%</a> faster than we thought.</p>
<p>It’s a surprising insight that could put us one step closer to finally figuring out what the hell
<a href="http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/">dark energy and dark matter</a> are. Or it could mean that we’ve gotten something fundamentally wrong in our understanding of physics, perhaps even poking a hole in Einstein’s theory of gravity.</p>
<figure>
  <img width="900" height="445" src="http://bit.ly/1UFHdpf">
  <figcaption>Source:
    <a href="http://www.nasa.gov/mission_pages/hubble/hst_young_galaxies_200604.html">NASA</a>
  </figcaption>
</figure>

Three workflows based on what started life in one common format.

Three workflows that have their own bugs and vulnerabilities.

Three workflows that duplicate the capabilities of each other.

Three formats that require different indexing/searching.

This is not the cause of why we can’t have nice things in software, but it certainly is a symptom.

The next time someone proposes a new format for a project, challenge them to demonstrate a value-add over existing formats.
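For the record, reducing HTML like the source above to a neutral intermediate form, from which each platform's output could be generated, takes only the standard library. A hypothetical sketch (this is not Mic's actual article-json schema, just an illustration of the idea):

```python
from html.parser import HTMLParser

class ArticleExtractor(HTMLParser):
    """Reduce <p>/<a> markup to text blocks with link range annotations."""

    def __init__(self):
        super().__init__()
        self.blocks = []        # the neutral intermediate representation
        self._text = None       # text of the paragraph being built
        self._additions = []    # link annotations for that paragraph
        self._open_link = None

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._text, self._additions = "", []
        elif tag == "a" and self._text is not None:
            # Record where the link's text begins within the paragraph.
            self._open_link = {"type": "link",
                               "URL": dict(attrs).get("href"),
                               "rangeStart": len(self._text)}

    def handle_endtag(self, tag):
        if tag == "a" and self._open_link is not None:
            start = self._open_link["rangeStart"]
            self._open_link["rangeLength"] = len(self._text) - start
            self._additions.append(self._open_link)
            self._open_link = None
        elif tag == "p" and self._text is not None:
            self.blocks.append({"text": self._text,
                                "additions": self._additions})
            self._text = None

    def handle_data(self, data):
        if self._text is not None:
            self._text += data

parser = ArticleExtractor()
parser.feed('<p>The universe is expanding <a href="http://example.org/x">'
            'up to 9%</a> faster than we thought.</p>')
print(parser.blocks)
```

Each platform-specific serializer (Apple's range-annotated JSON, Facebook's and Google's HTML dialects) would then be a small function over these blocks, which is roughly the economy article-json aims for.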

Newspaper Publishers Protecting Consumers (What?)

Friday, June 3rd, 2016

Newspaper industry asks FTC to investigate “deceptive” adblockers by John Zorabedian.

From the post:

Fearing that online publishers may be on the losing side of their battle with commercial adblockers, the newspaper publishing industry is now seeking relief from the US government.

The Newspaper Association of America (NAA), an industry group representing 2000 newspapers, filed a complaint with the US Federal Trade Commission (FTC) asking the consumer watchdog to investigate adblocker companies’ “deceptive” and “unlawful” practices.

The NAA is not alleging that adblockers themselves are illegal – rather, it says that adblocker companies make misleading claims about their products, a violation of the Federal Trade Commission Act.

Do you feel safer knowing the Newspaper Association of America (NAA) is protecting you from deceptive ads by adblocker companies?

A better service would be to protect consumers from deceptive ads in their publications but I suppose that would be a conflict of interest.

The best result would be for the FTC to declare you can display (or not) content received on your computer any way you like.

You cannot, of course, re-transmit that content, but if a user chooses to combine your content with that of another site, that is entirely on their watch.

Ad-blocking, transformation of lawfully delivered content, including merging of content, are rights that every user should enjoy.

Help Defend MuckRock And Your Right To Know!

Wednesday, May 25th, 2016

A multinational demands to know who reads MuckRock and is suing to stop us from posting records about them by Michael Morisy.

Michael captures everything you need to know in his first paragraph:

A multinational owned by Toshiba is demanding MuckRock remove documents about them received under a public records act request, destroy any copies we have, and help identify MuckRock readers who saw them.

After skimming the petition and the two posted documents (Landis+Gyr Managed Services Report 2015 Final and Req 9_Security Overview), I feel like the man who remarked to George Bailey in It’s A Wonderful Life, “…you must mean two other trees,” taking George for being drunk. 😉

As far as I can tell, the posted documents contain no pricing information, no contact details, etc.

Do you disagree?

There are judges who insist that pleadings have some relationship to facts. Let’s hope that MuckRock draws one of those.

Do you wonder what other local governments are involved with Landis+Gyr?

There is a simple starting point: Landis+Gyr.

Overlay Journal – Discrete Analysis

Saturday, March 5th, 2016

The arXiv overlay journal Discrete Analysis has launched by Christian Lawson-Perfect.

From the post:

Discrete Analysis, a new open-access journal for articles which are “analytical in flavour but that also have an impact on the study of discrete structures”, launched this week. What’s interesting about it is that it’s an arXiv overlay journal founded by, among others, Timothy Gowers.

What that means is that you don’t get articles from Discrete Analysis – it just arranges peer review of papers held on the arXiv, cutting out almost all of the expensive parts of traditional journal publishing. I wasn’t really prepared for how shallow that makes the journal’s website – there’s a front page, and when you click on an article you’re shown a brief editorial comment with a link to the corresponding arXiv page, and that’s it.

But that’s all it needs to do – the opinion of Gowers and co. is that the only real value that journals add to the papers they publish is the seal of approval gained by peer review, so that’s the only thing they’re doing. Maths papers tend not to benefit from the typesetting services traditional publishers provide (or, more often than you’d like, are actively hampered by it).

One way the journal is adding value beyond a “yes, this is worth adding to the list of papers we approve of” is by providing an “editorial introduction” to accompany each article. These are brief notes, written by members of the editorial board, which introduce the topics discussed in the paper and provide some context, to help you decide if you want to read the paper. That’s a good idea, and it makes browsing through the articles – and this is something unheard of on the internet – quite pleasurable.

It’s not difficult to imagine “editorial introductions” with underlying mini-topic maps that could be explored on their own, or that “unfold” as you reach the “edge” of a particular topic map, revealing more associations/topics.

Not unlike a traditional street map of New York, which you can unfold to see general areas and then fold back up to focus more tightly on a particular area.

I hesitate to say “zoom” because in the applications I have seen (an important qualification), “zoom” uniformly reduces your field of view.

A more nuanced notion of “zoom,” for a topic map and perhaps for other maps as well, would be to hold portions of the current view stationary, say a starting point on an interstate highway and to “zoom” only a portion of the current view to show a detailed street map. That would enable the user to see a particular location while maintaining its larger context.

Pointers to applications that “zoom” but also maintain different levels of “zoom” in the same view? Given the fascination with “hairy” presentations of graphs, that would have to be a real winner.

Overlay Journals – Community-Based Peer Review?

Friday, February 12th, 2016

New Journals Piggyback on arXiv by Emily Conover.

From the post:

A non-traditional style of scientific publishing is gaining ground, with new journals popping up in recent months. The journals piggyback on the arXiv or other scientific repositories and apply peer review. A link to the accepted paper on the journal’s website sends readers to the paper on the repository.

Proponents hope to provide inexpensive open access publication and streamline the peer review process. To save money, such “overlay” journals typically do away with some of the services traditional publishers provide, for example typesetting and copyediting.

Not everyone is convinced. Questions remain about the scalability of overlay journals, and whether they will catch on — or whether scientists will demand the stamp of approval (and accompanying prestige) that the established, traditional journals provide.

The idea is by no means new — proposals for journals interfacing with online archives appeared as far back as the 1990s, and a few such journals are established in mathematics and computer science. But now, say proponents, it’s an idea whose time has come.

The newest such journal is the Open Journal of Astrophysics, which began accepting submissions on December 22. Editor in Chief Peter Coles of the University of Sussex says the idea came to him several years ago in a meeting about the cost of open access journals. “They were talking about charging thousands of pounds for making articles open access,” Coles says, and he thought, “I never consult journals now; I get all my papers from the arXiv.” By adding a front end onto arXiv to provide peer review, Coles says, “We can dispense with the whole paraphernalia with traditional journals.”

Authors first submit their papers to arXiv, and then input the appropriate arXiv ID on the journal’s website to indicate that they would like their paper reviewed. The journal follows a standard peer review process, with anonymous referees whose comments remain private.

When an article is accepted, a link appears on the journal’s website and the article is issued a digital object identifier (DOI). The entire process is free for authors and readers. As APS News went to press, Coles hoped to publish the first batch of half-dozen papers at the end of January.

My Archive for the ‘Peer Review’ Category has only a few of the high-profile failures of peer review over the last five years.

You are probably familiar with at least twice as many reports on the brokenness of peer review as I have covered in this blog.

If traditional peer review is a known failure, why replicate it even for overlay journals?

Why not ask the full set of peers in a discipline? That is, the readers of articles posted in public repositories.

If a book or journal article goes uncited, isn’t that evidence that it did NOT advance the discipline in a way meaningful to its peers?

What other evidence would you have that it did advance the discipline? The opinions of friends of the editor? That seems too weak to even suggest.

Citation analysis isn’t free from issues (see Are 90% of academic papers really never cited? Searching citations about academic citations reveals the good, the bad and the ugly), but it has the advantage of drawing on the entire pool of talent that comprises a discipline.

Moreover, peer review would not be limited to a one-time judgment by traditional peer reviewers but would rest on how a monograph or article fits into the intellectual development of the discipline as a whole.

Which is more persuasive: That editors and reviewers at Science or Nature accept a paper or that in the ten years following publication, an article is cited by every other major study in the field?

Citation analysis obviates the overhead costs raised as objections to organizing peer review on a massive scale. Why organize peer review at all?

Peers are going to read and cite good literature and, more likely than not, skip the bad. Unless you need to create positions for gatekeepers and other barnacles on the profession, opt for citation-based peer review built on open repositories.
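The community-based review argued for here amounts to a simple filter over open-repository metadata. A minimal sketch follows; everything in it is hypothetical (the records, the arXiv-style identifiers, and the threshold of five citations), and real counts would come from a citation service or repository API.

```python
# Sketch: citation-based "community review" over open-repository
# metadata. Records and threshold are invented for illustration.
papers = [
    {"id": "arXiv:1501.00001", "citations": 42},
    {"id": "arXiv:1501.00002", "citations": 0},
    {"id": "arXiv:1501.00003", "citations": 7},
]

def community_vetted(papers, min_citations=5):
    """Keep papers the community has 'reviewed' by citing them,
    most-cited first."""
    return sorted(
        (p for p in papers if p["citations"] >= min_citations),
        key=lambda p: p["citations"],
        reverse=True,
    )

for p in community_vetted(papers):
    print(p["id"], p["citations"])
```

An overlay journal could run exactly this kind of pass periodically, publishing the survivors as its “best of…” series.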

I’m betting on the communities that silently vet papers and books in spite of the formalized and highly suspect mechanisms for peer review.

Overlay journals could publish preliminary lists of articles that are of interest in particular disciplines and as community-based peer review progresses, they can publish “best of…” series as the community further filters the publications.

Community-based peer review is already operating in your discipline. Why not call it out and benefit from it?

Sci-Hub Tip: Converting Paywall DOIs to Public Access

Thursday, February 11th, 2016

In a tweet, Jon Tennant points out that:

Reminder: add “.sci-hub.io” after the .com in the URL of pretty much any paywalled paper to gain instant free access.

BTW, I tested Jon’s advice with:

http://dx.doi.org/10.****/*******

re-cast as:

http://dx.doi.org.sci-hub.io/10.****/*******

And it works!

With a little scripting, you can convert your paywall DOIs into public access with sci-hub.io.
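A minimal sketch of that scripting, which splices the mirror domain into the host name exactly as the tip describes. Assumptions: the sci-hub.io mirror is still live (mirrors change over time), and the DOI shown uses the DOI Foundation’s example prefix 10.1000 as a stand-in, not a real paper.

```python
# Sketch: rewrite a paywalled DOI resolver URL into its Sci-Hub
# equivalent by appending the mirror domain to the host name.
from urllib.parse import urlsplit, urlunsplit

def to_scihub(doi_url: str, mirror: str = "sci-hub.io") -> str:
    parts = urlsplit(doi_url)
    # "dx.doi.org" becomes "dx.doi.org.sci-hub.io"; path is unchanged.
    return urlunsplit(parts._replace(netloc=parts.netloc + "." + mirror))

print(to_scihub("http://dx.doi.org/10.1000/example"))
# http://dx.doi.org.sci-hub.io/10.1000/example
```

Point the `mirror` argument at whatever domain is current when you read this.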

This “worked for me” so if you encounter issues, please ping me so I can update this post.

Happy reading!

First Pirate – Sci-Hub?

Wednesday, February 10th, 2016

Sci-Hub romanticizes itself as:

Sci-Hub the first pirate website in the world to provide mass and public access to tens of millions of research papers. (from the about page)

I agree with:

…mass and public access to tens of millions of research papers

But Sci-Hub is hardly:

…the first pirate website in the world

I don’t remember the first gate-keeping publisher that went from stealing from the public in print to stealing from the public online.

With careful enough research I’m sure we could track that down but I’m not sure it matters at this point.

What we do know is that academic research is funded by the public, edited and reviewed by volunteers (to the extent it is reviewed at all), and then kept from the vast bulk of humanity for profit and status (gate-keeping).

It’s heady stuff to think of yourself as a bold and swashbuckling pirate, going to stick it “…to the man.”

However, gate-keeping publishers have developed stealing from the public to an art form. If you don’t believe me, take a brief look at the provisions in the Trans-Pacific Partnership that protect traditional publisher interests.

Recovering what has been stolen from the public isn’t theft at all, it’s restoration!

Use Sci-Hub, support Sci-Hub, spread the word about Sci-Hub.

Allow gate-keeping publishers to slowly, hopefully painfully, wither as opportunities for exploiting the public grow fewer and farther between.

PS: You need to read: Meet the Robin Hood of Science by Simon Oxenham to get the full background on Sci-Hub and an extraordinary person, Alexandra Elbakyan.

JATS: Journal Article Tag Suite, Navigation Update!

Monday, January 11th, 2016

I posted about the appearance of JATS: Journal Article Tag Suite, version 1.1 and then began to lazily browse the PDF.

I forget what I was looking for now but I noticed the table of contents jumped from page 42 to page 235, and again from 272 to 405. I’m thinking by this point “this is going to be a bear to find elements/attributes in.” I looked for an index only to find none. 🙁

But, there’s hope!

If you look at Chapter 7, “TAG Suite Components” (elements start on page 7, attributes on page 28), you will find:

JATS-nav

Each ✔ is a navigation link to that element (or attribute, if you are in the attribute section) under each of those divisions: Archiving, Publishing, Authoring.

Very cool but falls under “non-obvious” for me.

Pass it on so others can safely and quickly navigate JATS 1.1!

PS: It was Tommie Usdin of Balisage fame who pointed out the table in chapter 7 to me. Thanks Tommie!

JATS: Journal Article Tag Suite, version 1.1

Friday, January 8th, 2016

JATS: Journal Article Tag Suite, version 1.1

Abstract:

The Journal Article Tag Suite provides a common XML format in which publishers and archives can exchange journal content. The JATS provides a set of XML elements and attributes for describing the textual and graphical content of journal articles as well as some non-article material such as letters, editorials, and book and product reviews.

Documentation and help files: Journal Article Tag Suite.

Tommie Usdin (of Balisage fame) posted to Facebook:

JATS has added capabilities to encode:
– NISO Access License and Indicators
– additional support for multiple language documents and for Japanese documents (including Ruby)
– citation of datasets
and some other things users of version 1.0 have requested.

Another XML vocabulary that provides grist for your XQuery adventures!
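For a taste of those adventures, here is a minimal sketch in Python’s standard library (rather than XQuery) that pulls an article title out of a JATS document. The element path, front/article-meta/title-group/article-title, follows the JATS tag set; the sample document is invented and omits most required metadata.

```python
import xml.etree.ElementTree as ET

# A tiny, invented JATS fragment: one article with minimal metadata.
jats = """<article>
  <front>
    <article-meta>
      <title-group>
        <article-title>An Invented JATS Example</article-title>
      </title-group>
    </article-meta>
  </front>
</article>"""

root = ET.fromstring(jats)
# JATS nests the title at front/article-meta/title-group/article-title.
title = root.findtext("front/article-meta/title-group/article-title")
print(title)  # An Invented JATS Example
```

The same path expressions carry over almost verbatim to XQuery or XPath against real JATS archives.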

What is Scholarly HTML?

Saturday, October 31st, 2015

What is Scholarly HTML? by Robin Berjon and Sébastien Ballesteros.

Abstract:

Scholarly HTML is a domain-specific data format built entirely on open standards that enables the interoperable exchange of scholarly articles in a manner that is compatible with off-the-shelf browsers. This document describes how Scholarly HTML works and how it is encoded as a document. It is, itself, written in Scholarly HTML.

The abstract is accurate enough but the “Motivation” section provides a better sense of this project:

Scholarly articles are still primarily encoded as unstructured graphics formats in which most of the information initially created by research, or even just in the text, is lost. This was an acceptable, if deplorable, condition when viable alternatives did not seem possible, but document technology has today reached a level of maturity and universality that makes this situation no longer tenable. Information cannot be disseminated if it is destroyed before even having left its creator’s laptop.

According to the New York Times, adding structured information to their recipes (instead of exposing simply as plain text) improved their discoverability to the point of producing an immediate rise of 52 percent in traffic (NYT, 2014). At this point in time, cupcake recipes are reaping greater benefits from modern data format practices than the whole scientific endeavour.

This is not solely a loss for the high principles of knowledge sharing in science, it also has very immediate pragmatic consequences. Any tool, any service that tries to integrate with scholarly publishing has to spend the brunt of its complexity (or budget) extracting data the author would have willingly shared out of antiquated formats. This places stringent limits on the improvement of the scholarly toolbox, on the discoverability of scientific knowledge, and particularly on processes of meta-analysis.

To address these issues, we have followed an approach rooted in established best practices for the reuse of open, standard formats. The «HTML Vernacular» body of practice provides guidelines for the creation of domain-specific data formats that make use of HTML’s inherent extensibility (Science.AI, 2015b). Using the vernacular foundation overlaid with «schema.org» metadata we have produced a format for the interchange of scholarly articles built on open standards, ready for all to use.

Our high-level goals were:

  • Uncompromisingly enabling structured metadata, accessibility, and internationalisation.
  • Pragmatically working in Web browsers, even if it occasionally incurs some markup overhead.
  • Powerfully customisable for inclusion in arbitrary Web sites, while remaining easy to process and interoperable.
  • Entirely built on top of open, royalty-free standards.
  • Long-term viability as a data format.

Additionally, in view of the specific problem we addressed, in the creation of this vernacular we have favoured the reliability of interchange over ease of authoring; but have nevertheless attempted to cater to the latter as much as possible. A decent boilerplate template file can certainly make authoring relatively simple, but not as radically simple as it can be. For such use cases, Scholarly HTML provides a great output target and overview of the data model required to support scholarly publishing at the document level.

An example of an authoring format that was designed to target Scholarly HTML as an output is the DOCX Standard Scientific Style which enables authors who are comfortable with Microsoft Word to author documents that have a direct upgrade path to semantic, standard content.

Where semantic modelling is concerned, our approach is to stick as much as possible to schema.org. Beyond the obvious advantages there are in reusing a vocabulary that is supported by all the major search engines and is actively being developed towards enabling a shared understanding of many useful concepts, it also provides a protection against «ontological drift» whereby a new vocabulary is defined by a small group with insufficient input from a broader community of practice. A language that solely a single participant understands is of limited value.

In a small, circumscribed number of cases we have had to depart from schema.org, using the https://ns.science.ai/ (prefixed with sa:) vocabulary instead (Science.AI, 2015a). Our goal is to work with schema.org in order to extend their vocabulary, and we will align our usage with the outcome of these discussions.

I especially enjoyed the observation:

According to the New York Times, adding structured information to their recipes (instead of exposing simply as plain text) improved their discoverability to the point of producing an immediate rise of 52 percent in traffic (NYT, 2014). At this point in time, cupcake recipes are reaping greater benefits from modern data format practices than the whole scientific endeavour.

I don’t doubt the truth of that story, but then a large number of people are interested in baking cupcakes. In many cases, no more than three people are interested in reading any particular academic paper.

The use of schema.org will provide advantages for common concepts but to be truly useful for scholarly writing, it will require serious extension.

Take, for example, my post yesterday Deep Feature Synthesis:… [Replacing Human Intuition?, Calling Bull Shit]. What microdata from schema.org would help readers find Propositionalisation and Aggregates, 2001, which describes substantially the same technique, without claims of surpassing human intuition? (Uncited by the authors of the paper on deep feature synthesis.)

Or the 161 papers on propositionalisation that you can find at CiteSeer?

A crude classification that can be used by search engines is very useful but falls far short of the mark in terms of finding and retrieving scholarly writing.

Semantic uniformity for classifying scholarly content hasn’t been reached by scholars or librarians despite centuries of effort. Rather than taking up that Sisyphean task, let’s map across the ever increasing universe of semantic diversity.

The Future Of News Is Not An Article

Wednesday, October 21st, 2015

The Future Of News Is Not An Article by Alexis Lloyd.

Alexis challenges readers to reconsider their assumptions about the nature of “articles.” She begins with the model for articles inherited from traditional print media: whatever appeared in an article yesterday must be re-created today in any new article on the same subject. Not surprising, since print media lacks the means to transclude content from a prior article into a new one.

She saves her best argument for last:


A news organization publishes hundreds of articles a day, then starts all over the next day, recreating any redundant content each time. This approach is deeply shaped by the constraints of print media and seems unnecessary and strange when looked at from a natively digital perspective. Can you imagine if, every time something new happened in Syria, Wikipedia published a new Syria page, and in order to understand the bigger picture, you had to manually sift through hundreds of pages with overlapping information? The idea seems absurd in that context and yet, it is essentially what news publishers do every day.

While I agree fully with the advantages Alexis summarizes as Enhanced tools for journalists, Summarization and synthesis, and Adaptive Content (see her post), there are technical and non-technical roadblocks to such changes.

First and foremost, people are being paid to re-create redundant content every day, and their comfort levels, to say nothing of their remuneration for repetitive reporting of the same content, will loom large in the adoption of the technology Alexis imagines.

I recall a disturbing story from a major paper where reporters didn’t share leads or research for fear that other reporters would “scoop” them. That sort of protectionism isn’t limited to journalists. Rumor has it that Oracle sales reps refused to enter potential sales leads in a company-wide database.

I don’t understand why that sort of pettiness is tolerated but be aware that it is, both in government and corporate environments.

Second, and almost as importantly, Alexis needs to raise the question of semantic ROI for any semantic technology. Take her point about adoption of the Semantic Web:

but have not seen universal adoption because of the labor costs involved in doing so.

To adopt a single level of semantic encoding for all content, without regard to its value, either historical or in current use, is a sure budget buster. Perhaps the business community was paying closer attention to the Semantic Web than many of us thought, hence its adoption failure.

Some content may need machine driven encoding, more valuable content may require human supervision and/or encoding and some content may not be worth encoding at all. Depends on your ROI model.

I should mention that the Semantic Web manages statements about statements (in its own or other semantic systems) poorly, a.k.a. “facts about facts,” although I hate to use the term “facts.” The very notion of “facts” is misleading and tricky under the best of circumstances.

However universal (universal = among people you know) knowledge of a “fact” may seem, the better argument is that it is only a “fact” from a particular point of view. Semantic Web based systems have difficulty with such concepts.

Third, and not mentioned by Alexis, is that semantic systems should capture and preserve trails created by information explorers. Reporters at the New York Times use databases every day, but each search starts from scratch.

If re-making redundant information over and over again is absurd, repeating the same searches (more or less successfully) over and over again is insane.

Capturing search trails as data would enrich existing databases, especially if searchers could annotate their trails and data they encounter along the way. The more intensively searched a resource becomes, the richer its semantics. As it is today, all the effort of searchers is lost at the end of each search.
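Capturing a trail takes very little structure. A sketch of what such a record might look like follows; the record shape, the resource names, and the annotations are all hypothetical, invented for illustration.

```python
# Sketch: capture a search trail as annotated data so the next
# searcher can start where this one left off.
import json
import time

trail = []

def record_step(query, resource, note=""):
    """Append one search step, with the searcher's annotation."""
    trail.append({
        "time": time.time(),
        "query": query,
        "resource": resource,
        "note": note,
    })

record_step("Syria chemical weapons 2013", "internal-archive",
            note="best background piece so far")
record_step("Syria chemical weapons UN report", "wire-database",
            note="primary source for casualty figures")

# Persist the trail; re-searching it later enriches the resource itself.
print(json.dumps(trail, indent=2))
```

Even this much, aggregated across a newsroom, would turn each day’s searching into a shared asset instead of discarded effort.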

Alexis is right, let’s stop entombing knowledge in articles, papers, posts and books. It won’t be quick or easy, but worthwhile journeys rarely are.

I first saw this in a tweet by Tim Strehle.

unglue.it

Monday, August 31st, 2015

unglue.it

From the webpage:

unglue (v. t.) 2. To make a digital book free to read and use, worldwide.

New to me, possibly old to you.

I “discovered” this site while looking at Intermediate Python.

From the general FAQ:

Basics

How It Works

What is Unglue.it?

Unglue.it is a place for individuals and institutions to join together to make ebooks free to the world. We work together with authors, publishers, or other rights holders who want their ebooks to be free but also want to be able to earn a living doing so. We use Creative Commons licensing as an enabling tool to “unglue” the ebooks.

What are Ungluing Campaigns?

We have three types of Ungluing Campaigns: Pledge Campaigns, Buy-to-Unglue Campaigns and Thanks-for-Ungluing campaigns.

  • In a Pledge Campaign, book lovers pledge their support for ungluing a book. If enough support is found to reach the goal (and only then), the supporter’s credit cards are charged, and an unglued ebook is released.
  • In a Buy-to-Unglue Campaign, every ebook copy sold moves the book’s ungluing date closer to the present. And you can donate ebooks to your local library- that’s something you can’t do in the Kindle or Apple Stores!
  • In a Thanks-for-Ungluing Campaign, the ebook is already released with a Creative Commons license. Supporters can express their thanks by paying what they wish for the license and the ebook.

What is Crowdfunding?

Crowdfunding is collectively pooling contributions (or pledges) to support some cause. Using the internet for coordination means that complete strangers can work together, drawn by a common cause. This also means the number of supporters can be vast, so individual contributions can be as large or as small as people are comfortable with, and still add up to enough to do something amazing.

Want to see some examples? Kickstarter lets artists and inventors solicit funds to make their projects a reality. For instance, webcomic artist Rich Burlew sought $57,750 to reprint his comics in paper form — and raised close to a million.

In other words, crowdfunding is working together to support something you love. By pooling resources, big and small, from all over the world, we can make huge things happen.

What will supplement and then replace contemporary publishing models remains to be seen.

In terms of experiments, this one looks quite promising.

If you use unglue.it, please ping me with your experience. Thanks!

The Nation has a new publishing model

Wednesday, July 8th, 2015

Introducing the New TheNation.com by Richard Kim.

From the post:

…on July 6, 2015—exactly 150 years after the publication of our first issue—we’re relaunching TheNation.com. The new site, created in partnership with our friends at Blue State Digital and Diaspark, represents our commitment to being at the forefront of independent journalism for the next generation. The article page is designed with the Nation ambassador in mind: Beautiful, clear fonts (Mercury and Knockout) and a variety of image fields make the articles a joy to read—on desktop, tablet, and mobile. Prominent share tools, Twitter quotes, and a “highlight to e-mail/tweet” function make it easy to share them with others. A robust new taxonomy and a continuous scroll seamlessly connect readers to related content. You’ll also see color-coded touts that let readers take action on a particular issue, or donate and subscribe to The Nation.

I’m not overly fond of paywalls, as you know, but one part of the relaunch merits closer study: comments on articles are going to be open to subscribers only.

It will be interesting to learn what The Nation’s experience is with subscriber-only comments. Hopefully their tracking will be granular enough to determine what portion of subscribers subscribed simply so they could comment.

There are any number of fields where opinions run hot enough that open content, combined with paying to have comments displayed, could be a viable model for publication.

Imagine a publicly accessible topic map on the candidates for the US presidential election next year. If it had sufficient visibility, the publication of any report would spawn automatic responses from others, responses that would not appear without paying for the right to publish them.

Viable economic model?

Suggestions?

Digital Data Repositories in Chemistry…

Wednesday, July 1st, 2015

Digital Data Repositories in Chemistry and Their Integration with Journals and Electronic Notebooks by Matthew J. Harvey, Nicholas J. Mason, Henry S. Rzepa.

Abstract:

We discuss the concept of recasting the data-rich scientific journal article into two components, a narrative and separate data components, each of which is assigned a persistent digital object identifier. Doing so allows each of these components to exist in an environment optimized for purpose. We make use of a poorly-known feature of the handle system for assigning persistent identifiers that allows an individual data file from a larger file set to be retrieved according to its file name or its MIME type. The data objects allow facile visualization and retrieval for reuse of the data and facilitates other operations such as data mining. Examples from five recently published articles illustrate these concepts.

A very promising effort to integrate published content and electronic notebooks in chemistry. It is encouraging that, in addition to the technical and identity issues, the authors also point out the lack of incentives for the extra work required to achieve useful integration.

Everyone agrees that deeper integration of resources in the sciences will be a game-changer, but renewing the realization that there is no such thing as a free lunch is an important step toward that goal.

This article easily repays a close read with interesting subject identity issues and the potential that topic maps would offer to such an effort.

The peer review drugs don’t work [Faith Based Science]

Sunday, May 31st, 2015

The peer review drugs don’t work by Richard Smith.

From the post:

It is paradoxical and ironic that peer review, a process at the heart of science, is based on faith not evidence.

There is evidence on peer review, but few scientists and scientific editors seem to know of it – and what it shows is that the process has little if any benefit and lots of flaws.

Peer review is supposed to be the quality assurance system for science, weeding out the scientifically unreliable and reassuring readers of journals that they can trust what they are reading. In reality, however, it is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.

As Drummond Rennie, the founder of the annual International Congress on Peer Review and Biomedical Publication, says, “If peer review was a drug it would never be allowed onto the market.”

Cochrane reviews, which gather systematically all available evidence, are the highest form of scientific evidence. A 2007 Cochrane review of peer review for journals concludes: “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.”

We can see before our eyes that peer review doesn’t work because most of what is published in scientific journals is plain wrong. The most cited paper in Plos Medicine, which was written by Stanford University’s John Ioannidis, shows that most published research findings are false. Studies by Ioannidis and others find that studies published in “top journals” are the most likely to be inaccurate. This is initially surprising, but it is to be expected as the “top journals” select studies that are new and sexy rather than reliable. A series published in The Lancet in 2014 has shown that 85 per cent of medical research is wasted because of poor methods, bias and poor quality control. A study in Nature showed that more than 85 per cent of preclinical studies could not be replicated, the acid test in science.

I used to be the editor of the BMJ, and we conducted our own research into peer review. In one study we inserted eight errors into a 600 word paper and sent it to 300 reviewers. None of them spotted more than five errors, and a fifth didn’t detect any. The median number spotted was two. These studies have been repeated many times with the same result. Other studies have shown that if reviewers are asked whether a study should be published there is little more agreement than would be expected by chance.

As you might expect, the humanities are lagging far behind the sciences in acknowledging that peer review is an exercise in social status rather than quality:


One of the changes I want to highlight is the way that “peer review” has evolved fairly quietly during the expansion of digital scholarship and pedagogy. Even though some scholars, such as Kathleen Fitzpatrick, are addressing the need for new models of peer review, recognition of the ways that this process has already been transformed in the digital realm remains limited. The 2010 Center for Studies in Higher Education (hereafter cited as Berkeley Report) comments astutely on the conventional role of peer review in the academy:

Among the reasons peer review persists to such a degree in the academy is that, when tied to the venue of a publication, it is an efficient indicator of the quality, relevance, and likely impact of a piece of scholarship. Peer review strongly influences reputation and opportunities. (Harley, et al 21)

These observations, like many of those presented in this document, contain considerable wisdom. Nevertheless, our understanding of peer review could use some reconsideration in light of the distinctive qualities and conditions associated with digital humanities.
…(Living in a Digital World: Rethinking Peer Review, Collaboration, and Open Access by Sheila Cavanagh.)

Can you think of another area where something akin to peer review is being touted?

What about internal guidelines of the CIA, NSA, FBI and secret courts reviewing actions by those agencies?

How do those differ from peer review, which is an acknowledged failure in science and should be acknowledged as one in the humanities?

They are quite similar in the sense that some secret group is empowered to make decisions that impact others, and members of those groups don’t want to relinquish those powers. Surprise, surprise.

Peer review should be scrapped across the board and replaced by tracked replication and use by others, both in the sciences and the humanities.

Government decisions should be open to review by all its citizens and not just a privileged few.