Archive for the ‘Open Access’ Category

Alexandra Elbakyan (Sci-Hub) As Freedom Fighter

Friday, February 9th, 2018

Recognizing Alexandra Elbakyan:

Alexandra Elbakyan is the freedom fighter behind Sci-Hub, a repository of 64.5 million papers, or “two-thirds of all published research, and it [is] available to anyone.”

Ian Graber-Stiehl, in Science’s Pirate Queen, misses an opportunity to ditch the mis-framing of Elbakyan as a “pirate,” and to properly frame her as a freedom fighter.

To set the background for why you too should see Elbakyan as a freedom fighter, it’s necessary to review, briefly, the notion of “sale” and your intellectual freedom prior to widespread use of electronic texts.

When I started using libraries in the ’60s, you had to physically visit the library to use its books or journals. The library would purchase those items, what is known as first sale, and then either lend them or allow patrons to read them. No separate charge or income for the publisher upon reading. And once purchased, the item remained in the library for use by others.

With the advent of electronic texts, plus oppressive contracts and manipulation of the law, publishers began charging libraries even more than when libraries purchased and maintained access to material for their patrons. Think of it as a form of recurrent extortion: you can’t have access to materials already purchased unless you keep paying to maintain that access.

Which of course means that both libraries and individuals have lost their right to pay for an item and to maintain it separate and apart from the publisher. That’s a serious theft and it took place in full public view.

There are pirates in this story, people who stole the right of libraries and individuals to purchase items for their own storage and use. Some of the better known ones include: American Chemical Society, Reed-Elsevier (a/k/a RELX Group), Sage Publishing, Springer, Taylor & Francis, and Wiley-Blackwell.

Elbakyan is trying to recover access for everyone, access that was stolen.

That doesn’t sound like the act of a pirate. Pirates steal for their own benefit. That sounds like the pirates I listed above.

Now that you know Elbakyan is fighting to recover a right taken from you, does that make you view her fight differently?

BTW, when publishers float the canard of their professional staff/editors/reviewers, remember that their retraction rates are silent witnesses refuting their claims of competence.

Read any recent retraction for the listed publishers. Use Retraction Watch for current or past retractions. “Unread” is the best explanation for how most of them got past “staff/editors/reviewers.”

Do you support freedom fighters or publisher/pirates?

If you want to support publisher/pirates, no further action needed.

If you want to support freedom fighters, including Alexandra Elbakyan: the Sci-Hub site has a donate link; contact Elbakyan if you have extra cutting-edge equipment to offer; promote Sci-Hub on social media; etc.

For making the lives of publisher/pirates more difficult, use your imagination.

To follow Elbakyan, see her blog and Facebook page.

Maintaining Your Access to Sci-Hub

Tuesday, November 21st, 2017

A tweet today by @Sci_Hub advises:

Sci-Hub is working. To get around domain names problem, use custom Sci-Hub DNS servers and How to customize DNS in Windows:

No doubt, Elsevier will continue to attempt to interfere with your access to Sci-Hub.

Already the largest, most bloated, and least secure academic publishing presence on the Internet, Elsevier labors every day to become even more of an attractive nuisance.

What corporate strategy is served by painting a flashing target on your Internet presence?


PS: Do update your DNS entries while pondering that question.
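For readers unsure what the change looks like in practice, here is a minimal sketch. The actual Sci-Hub DNS server addresses were given in the tweet’s links and are not reproduced in this post, so placeholders stand in for them; on Windows, the same values go into the network adapter’s IPv4 DNS settings rather than a file.

```
# /etc/resolv.conf (Linux/macOS)
# On Windows: Network Adapter > Properties > IPv4 >
#   "Use the following DNS server addresses"
# Placeholders only -- substitute the servers announced by @Sci_Hub:
nameserver <SCI-HUB-DNS-1>
nameserver <SCI-HUB-DNS-2>
```

Remember that any resolver you switch to sees all of your DNS queries, not just those for Sci-Hub, so weigh that trade-off before making the change system-wide.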

Academic Torrents Update

Friday, November 3rd, 2017

When I last mentioned Academic Torrents, in early 2014, it had 1.67TB of research data.

I dropped by Academic Torrents this week to find it now has 25.53TB of research data!

Some arbitrary highlights:

Richard Feynman’s Lectures on Physics (The Messenger Lectures)

A collection of sport activity datasets for data analysis and data mining 2017a

[Coursera] Machine Learning (Stanford University) (ml)

UC Berkeley Computer Science Courses (Full Collection)

[Coursera] Mining Massive Datasets (Stanford University) (mmds)

Wikilinks: A Large-scale Cross-Document Coreference Corpus Labeled via Links to Wikipedia (Original Dataset)

Your arbitrary highlights are probably different from mine, so visit Academic Torrents to see what data captures your eye.


Gotta Minute To Help @WikiCommons?

Sunday, May 21st, 2017

Wikimedia NYC tweeted and Michael Peter Edison retweeted:

I know. Moving images from one silo to another.

But, it does increase the odds of @WikiCommons users finding the additional images. That’s a good thing.

Take a minute to visit, select the public domain facet, and grab an image to upload to Wikimedia Commons.

The process is quite painless; I uploaded The Pit of Acheron, or the Birth of the Plagues of England today.

With practice it should take less than a minute but I got diverted looking for more background on the image.

Rowlandson the Caricaturist: A Selection from His Works, with Anecdotal Descriptions of His Famous Caricatures and a Sketch of His Life, Times, and Contemporaries, Volume 1 by Joseph Grego, J. W. Bouton, New York, 1880, page 112:

January 1, 1784. The Pit of Acheron, or the Birth of the Plagues of England. —

The Pit of Acheron, if we may trust the satirist, is not situated at any considerable distance from Westminster; the precincts of that city appear through the smoke of the incantations which are carried on in the Pit. Three weird sisters, like the Witches in ‘Macbeth,’ are working the famous charm; a monstrous cauldron is supported by death’s-heads and harpies; the ingredients of the broth are various; a crucifix, a rosary, Deceit, Loans, Lotteries, and Pride, together with a fox’s head, cards, dice, daggers, and an executioner’s axe, &c., form portions of the accessories employed in these uncanny rites. Three heads are rising from the flames—the good-natured face of Lord North, the spectacled and incisive outline of Burke, and Fox’s ‘gunpowder jowl,’ which is drifting Westminster-wards. One hag, who is dropping Rebellion into the brew, is demanding, ‘Well, sister, what hast thou got for the ingredients of our charm’d pot?’ To this her fellow-witch, who is turning out certain mischievous ingredients which she has collected in her bag, is responding, ‘A beast from Scotland called an Erskine, famous for duplicity, low art, and cunning; the other a monster who’d spurn even at Charter’s Rights.’ Erskine is shot out of the bag, crying, ‘I am like a Proteus, can turn any shape, from a sailor to a lawyer, and always lean to the strongest side!’ The other member, whose tail is that of a serpent, is singing, ‘Over the water and over the lee, thro’ hell I would follow my Charlie.’

I remain uncertain about the facts and circumstances surrounding the Westminster election of 1784 that would further explain this satire. Perhaps another day.

If you can’t wait, consider reading History of the Westminster Election, containing Every Material Occurrence, from its commencement On the First of April to the Close of the Poll, on the 17th of May, to which is prefixed A Summary Account of the Proceedings of the Late Parliament by James Hartley. (562 pages)

Rowlandson was also noted for his erotica: see this collection of erotica by Rowlandson.

Unpaywall (Access to Academic Publishing)

Wednesday, April 12th, 2017

How a Browser Extension Could Shake Up Academic Publishing by Lindsay McKenzie.

From the post:

Open-access advocates have had several successes in the past few weeks. The Bill & Melinda Gates Foundation started its own open-access publishing platform, which the European Commission may replicate. And librarians attending the Association of College and Research Libraries conference in March were glad to hear that the Open Access Button, a tool that helps researchers gain free access to copies of articles, will be integrated into existing interlibrary-loan arrangements.

Another initiative, called Unpaywall, is a simple browser extension, but its creators, Jason Priem and Heather Piwowar, say it could help alter the status quo of scholarly publishing.

“We’re setting up a lemonade stand right next to the publishers’ lemonade stand,” says Mr. Priem. “They’re charging $30 for a glass of lemonade, and we’re showing up right next to them and saying, ‘Lemonade for free’. It’s such a disruptive, exciting, and interesting idea, I think.”

Like the Open Access Button, Unpaywall is open-source, nonprofit, and dedicated to improving access to scholarly research. The button, devised in 2013, has a searchable database that comes into play when a user hits a paywall.

When an Unpaywall user lands on the page of a research article, the software scours thousands of institutional repositories, preprint servers, and websites like PubMed Central to see if an open-access copy of the article is available. If it is, users can click a small green tab on the side of the screen to view a PDF.
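The lookup described above can be approximated against Unpaywall’s own public API, which returns one JSON record per DOI. A minimal sketch, assuming the v2 endpoint (`https://api.unpaywall.org/v2/{doi}?email=...`) and the `best_oa_location` / `url_for_pdf` field names; treat both as assumptions to verify against the current API documentation. The sample record below is invented for illustration:

```python
import json
from urllib.request import urlopen


def best_oa_pdf(record):
    """Return the PDF URL of the best open-access copy, or None."""
    loc = record.get("best_oa_location") or {}
    return loc.get("url_for_pdf")


def lookup(doi, email):
    # One HTTP GET per DOI; the email parameter identifies the caller.
    url = f"https://api.unpaywall.org/v2/{doi}?email={email}"
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Offline illustration with a trimmed, hypothetical response record:
sample = {
    "doi": "10.1234/example.doi",
    "is_oa": True,
    "best_oa_location": {
        "host_type": "repository",
        "url_for_pdf": "https://repository.example.edu/example.pdf",
    },
}
print(best_oa_pdf(sample))  # -> https://repository.example.edu/example.pdf
```

The point of the design is that the hard work (scouring repositories) happens server-side; the extension only needs one cheap lookup per article page.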

Sci-Hub gets an honorable mention as a “…pirate website…,” use of which carries “…so much fear and uncertainty….” (Disclaimer: the author of those comments, Jason Priem, is one of the creators of Unpaywall.)

Hardly. What was long suspected about academic publishing has become widely known: Peer review is a fiction, even at the best known publishers, to say nothing of lesser lights in the academic universe. The “contribution” of publishers is primarily maintaining lists of editors for padding the odd resume. (Peer Review failure: Science and Nature journals reject papers because they “have to be wrong”.)

I should not overlook publishers as a source of employment for “gatekeepers”: those unable to make a contribution of their own, who seek to prevent others from doing so and, failing that, to prevent still others from learning of those contributions.

Serfdom was abolished centuries ago; academic publishing deserves a similar fate.

PS: For some reason authors are reluctant to post the web address for Sci-Hub:

Sci Hub It!

Friday, April 7th, 2017

Sci Hub It!

Simple add-on to make it easier to use Sci-Hub.

If you aren’t already using this plug-in for Firefox you should be.

Quite handy!


Leak Publication: Sharing, Crediting, and Re-Using Leaks

Wednesday, March 22nd, 2017

If you substitute “leak” for “data” in this essay by Daniella Lowenberg, does it work for leaks as well?

Data Publication: Sharing, Crediting, and Re-Using Research Data by Daniella Lowenberg.

From the post:

In the most basic terms- Data Publishing is the process of making research data publicly available for re-use. But even in this simple statement there are many misconceptions about what Data Publications are and why they are necessary for the future of scholarly communications.

Let’s break down a commonly accepted definition of “research data publishing”. A Data Publication has three core features: 1 – data that are publicly accessible and are preserved for an indefinite amount of time, 2 – descriptive information about the data (metadata), and 3 – a citation for the data (giving credit to the data). Why are these elements essential? These three features make research data reusable and reproducible- the goal of a Data Publication.

As much as I admire the work of the International Consortium of Investigative Journalists (ICIJ), especially its Panama Papers project, sharing data beyond the confines of their community isn’t a value, much less a goal.

Like all secret keepers (governments, industries, organizations), the ICIJ has “reasons” for its secrecy, but none that I find any more or less convincing than those offered by other secret keepers.

Every secret keeper has an agenda their secrecy serves. Agendas that don’t include a public empowered to make judgments about their secret keeping.

The ICIJ proclaims Leak to Us.

A good place to leak, but include with your leak an unconditional demand that it be released in its entirety within a year or two of its first publication.

Help enable the public to watch all secrets and secret keepers, not just those some secret keepers choose to expose.

ESA Affirms Open Access Policy For Images, Videos And Data

Tuesday, February 21st, 2017

ESA Affirms Open Access Policy For Images, Videos And Data

From the post:

ESA today announced it has adopted an Open Access policy for its content such as still images, videos and selected sets of data.

For more than two decades, ESA has been sharing vast amounts of information, imagery and data with scientists, industry, media and the public at large via digital platforms such as the web and social media. ESA’s evolving information management policy increases these opportunities.

In particular, a new Open Access policy for ESA’s information and data will now facilitate broadest use and reuse of the material for the general public, media, the educational sector, partners and anybody else seeking to utilise and build upon it.

“This evolution in opening access to ESA’s images, information and knowledge is an important element of our goal to inform, innovate, interact and inspire in the Space 4.0 landscape,” said Jan Woerner, ESA Director General.

“It logically follows the free and open data policies we have already established and accounts for the increasing interest of the general public, giving more insight to the taxpayers in the member states who fund the Agency.”

A website pointing to sets of content already available under Open Access, a set of Frequently Asked Questions and further background information can be found at

More information on the ESA Digital Agenda for Space is available at

A great trove of images and data for exploration and development of data skills.

Launched on 1 March 2002 on an Ariane-5 rocket from Europe’s spaceport in French Guyana, Envisat was the largest Earth observation spacecraft ever built. The eight-tonne satellite orbited Earth more than 50 000 times over 10 years – twice its planned lifetime. The mission delivered thousands of images and a wealth of data used to study the workings of the Earth system, including insights into factors contributing to climate change. The end of the mission was declared on 9 May 2012, but ten years of Envisat’s archived data continues to be exploited for studying our planet.

With immediate effect, all 476 public Envisat MERIS or ASAR or AATSR images are released under the Creative Commons CC BY-SA 3.0 IGO licence, hence the credit for all images is: ESA, CC BY-SA 3.0 IGO. Follow this link.

The 476 images mentioned in the news release are images prepared over the years for public release.

For additional Envisat data under the Open Access license, see: EO data distributed by ESA.

I registered for an ESA Earth Observation Single User account, quite easy as registration forms go.

I’ll wander about for a bit and report back on the resources I find.


PS: Not only should you use and credit the ESA as a data source, laudatory comments about the Open Access license may encourage others to do the same.

Open Science: Too Much Talk, Too Little Action [Lessons For Political Opposition]

Monday, February 6th, 2017

Open Science: Too Much Talk, Too Little Action by Björn Brembs.

From the post:

Starting this year, I will stop traveling to any speaking engagements on open science (or, more generally, infrastructure reform), as long as these events do not entail a clear goal for action. I have several reasons for this decision, most of them boil down to a cost/benefit estimate. The time spent traveling does not seem worth the hardly noticeable benefits any more.

I got involved in Open Science more than 10 years ago. Trying to document the point when it all started for me, I found posts about funding all over my blog, but the first blog posts on publishing were from 2005/2006, the announcement of me joining the editorial board of newly founded PLoS ONE late 2006 and my first post on the impact factor in 2007. That year also saw my first post on how our funding and publishing system may contribute to scientific misconduct.

In an interview on the occasion of PLoS ONE’s ten-year anniversary, PLoS mentioned that they thought the publishing landscape had changed a lot in these ten years. I replied that, looking back ten years, not a whole lot had actually changed:

  • Publishing is still dominated by the main publishers which keep increasing their profit margins, sucking the public teat dry
  • Most of our work is still behind paywalls
  • You won’t get a job unless you publish in high-ranking journals.
  • Higher ranking journals still publish less reliable science, contributing to potential replication issues
  • The increase in number of journals is still exponential
  • Libraries are still told by their faculty that subscriptions are important
  • The digital functionality of our literature is still laughable
  • There are no institutional solutions to sustainably archive and make accessible our narratives other than text, or our code or our data

The only difference in the last few years really lies in the fraction of available articles, but that remains a small minority, less than 30% total.

So the work that still needs to be done is exactly the same as it was at the time Stevan Harnad published his “Subversive Proposal” , 23 years ago: getting rid of paywalls. This goal won’t be reached until all institutions have stopped renewing their subscriptions. As I don’t know of a single institution without any subscriptions, that task remains just as big now as it was 23 years ago. Noticeable progress has only been on the margins and potentially in people’s heads. Indeed, now only few scholars haven’t heard of “Open Access”, yet, but apparently without grasping the issues, as my librarian colleagues keep reminding me that their faculty believe open access has already been achieved because they can access everything from the computer in their institute.

What needs to be said about our infrastructure has been said, both in person, and online, and in print, and on audio, and on video. Those competent individuals at our institutions who make infrastructure decisions hence know enough to be able to make their rational choices. Obviously, if after 23 years of talking about infrastructure reform, this is the state we’re in, our approach wasn’t very effective and my contribution is clearly completely negligible, if at all existent. There is absolutely no loss if I stop trying to tell people what they already should know. After all, the main content of my talks has barely changed in the last eight or so years. Only more recent evidence has been added and my conclusions have become more radical, i.e., trying to tackle the radix (Latin: root) of the problem, rather than palliatively care for some tangential symptoms.

The line:

What needs to be said about our infrastructure has been said, both in person, and online, and in print, and on audio, and on video.

is especially relevant in light of the 2016 presidential election and the fund raising efforts of organizations that form the “political opposition.”

You have seen the ads in email, on Facebook, Twitter, etc., all pleading for funding to oppose the current US President.

I agree the current US President should be opposed.

But the organizations seeking funding failed to stop his rise to power.

Whether their failure was due to organizational defects or poor strategies is really beside the point. They failed.

Why should I enable them to fail again?

One data point: the Women’s March on Washington was NOT organized by organizations with permanent staff and offices in Washington or elsewhere.

Is your contribution supporting staffs and offices of the self-righteous (the primary function of old line organizations) or investigation, research, reporting and support of boots on the ground?

Government excesses are not stopped by bewailing our losses but by making government agents bewail theirs.

ODI – Access To Legal Data News

Friday, January 13th, 2017

Strengthening our legal data infrastructure by Amanda Smith.

Amanda recounts an effort between the Open Data Institute (ODI) and Thomson Reuters to improve access to legal data.

From the post:

Paving the way for a more open legal sector: discovery workshop

In September 2016, Thomson Reuters and the ODI gathered publishers of legal data, policy makers, law firms, researchers, startups and others working in the sector for a discovery workshop. Its aims were to explore important data types that exist within the sector, and map where they sit on the data spectrum, discuss how they flow between users and explore the opportunities that taking a more open approach could bring.

The notes from the workshop explore current mechanisms for collecting, managing and publishing data, benefits of wider access and barriers to use. There are certain questions that remain unanswered – for example, who owns the copyright for data collected in court. The notes are open for comments, and we invite the community to share their thoughts on these questions, the data types discussed, how to make them more open and what we might have missed.

Strengthening data infrastructure in the legal sector: next steps

Following this workshop we are working in partnership with Thomson Reuters to explore data infrastructure – datasets, technologies and processes and organisations that maintain them – in the legal sector, to inform a paper to be published later in the year. The paper will focus on case law, legislation and existing open data that could be better used by the sector.

The Ministry of Justice have also started their own data discovery project, which the ODI have been contributing to. You can keep up to date on their progress by following the MOJ Digital and Technology blog and we recommend reading their data principles.

Get involved

We are looking to the legal and data communities to contribute opinion pieces and case studies to the paper on data infrastructure for the legal sector. If you would like to get involved, contact us.
…(emphasis in original)

Encouraging news, especially for those interested in building value-added tools on top of data that is made available publicly. At least they can avoid the cost of collecting data already collected by others.

Take the opportunity to comment on the notes and participate as you are able.

If you think you have seen use cases for topic maps before, consider that the Code of Federal Regulations (US), as of December 12, 2016, contains 54,938 separate, but not unique, definitions of “person.” The impact of each regulation depends upon its definition of that term.

Other terms have similar semantic difficulties both in the Code of Federal Regulations as well as the US Code.
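The scoped-definition problem is easy to see in miniature. A toy sketch of what a topic map buys you here, mapping one term to definitions that are each valid only within their own regulatory scope; the citations and wording below are invented for illustration, not quotations from the CFR:

```python
# One term, many scoped definitions: a lookup is meaningless without
# knowing which regulation's scope applies. (All data here is hypothetical.)
definitions = {
    "person": {
        "26 CFR 301.7701": "an individual, trust, estate, partnership, or corporation",
        "31 CFR 1010.100": "an individual, corporation, or other legal entity",
    },
}


def define(term, scope):
    """Return the definition of `term` valid within `scope`, if any."""
    return definitions.get(term, {}).get(scope)


print(len(definitions["person"]))  # -> 2 (two scoped senses of one term)
print(define("person", "26 CFR 301.7701"))
```

Scale that dictionary to 54,938 scoped senses of a single term and the case for machine-readable scoping, rather than human memory, makes itself.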

Humanities Digital Library [A Ray of Hope]

Friday, January 13th, 2017

Humanities Digital Library (Launch Event)

From the webpage:

17 Jan 2017, 18:00 to 17 Jan 2017, 19:00


IHR Wolfson Conference Suite, NB01/NB02, Basement, IHR, Senate House, Malet Street, London WC1E 7HU



About the Humanities Digital Library

The Humanities Digital Library is a new Open Access platform for peer reviewed scholarly books in the humanities.

The Library is a joint initiative of the School of Advanced Study, University of London, and two of the School’s institutes—the Institute of Historical Research and the Institute of Advanced Legal Studies.

From launch, the Humanities Digital Library offers scholarly titles in history, law and classics. Over time, the Library will grow to include books from other humanities disciplines studied and researched at the School of Advanced Study. Partner organisations include the Royal Historical Society whose ‘New Historical Perspectives’ series will appear in the Library, published by the Institute of Historical Research.

Each title is published as an open access PDF, with copies also available to purchase in print and EPUB formats. Scholarly titles come in several formats—including monographs, edited collections and longer and shorter form works.
(emphasis in the original)

Timely evidence that not everyone in the UK is barking mad! “Barking mad” being the only explanation I can offer for the Investigatory Powers Bill.

I won’t be attending but if you can, do and support the Humanities Digital Library after it opens.

OpenTOC (ACM SIG Proceedings – Free)

Sunday, January 1st, 2017


From the webpage:

ACM OpenTOC is a unique service that enables Special Interest Groups to generate and post Tables of Contents for proceedings of their conferences enabling visitors to download the definitive version of the contents from the ACM Digital Library at no charge.

Downloads of these articles are captured in official ACM statistics, improving the accuracy of usage and impact measurements. Consistently linking to definitive versions of ACM articles should reduce user confusion over article versioning.

Conferences are listed by year (2014–2016) and by event.

A step in the right direction.

Do you know if the digital library allows bulk downloading of search result metadata?

It didn’t the last time I had a digital library subscription. Contacting the secret ACM committee that decides on web features was verboten.

Enjoy this improvement in access while waiting for ACM access bottlenecks to wither and die.

Guarantees Of Public Access In Trump Administration (A Perfect Data Storm)

Saturday, December 31st, 2016

I read hand wringing over the looming secrecy of the Trump administration on a daily basis.

More truthfully, I skip over daily hand wringing over the looming secrecy of the Trump administration.

For two reasons.

First, as reported in US government subcontractor leaks confidential military personnel data by Charlie Osborne, government data doesn’t require hacking, just a little initiative.

In this particular case, it was rsync without a username or password that made this data leak possible.

Editors should ask their reporters before funding FOIA suits: “Have you tried rsync?”

Second, the alleged-to-be-Trump-nominees for cabinet and lesser positions, remind me of this character from Dilbert: November 2, 1992:


Trump appointees may have mastered the pointy end of pencils but their ability to use cyber-security will be as shown.

When you add up the cyber-security incompetence of Trump appointees, complaints from Inspectors General about agency security, and agencies leaking to protect their positions/turf, you have the conditions for a perfect data storm.

A perfect data storm that may see the US government hemorrhaging data like never before.

PS: You know my preference, post leaks on receipt in their entirety. As for “consequences,” consider those a down payment on what awaits people who betray humanity, their people, colleagues and family. They could have chosen differently and didn’t. What more can one say?

The Joy of Collective Action: Elsevier Boycott – Germany

Friday, December 16th, 2016

Germany-wide consortium of research libraries announce boycott of Elsevier journals over open access by Cory Doctorow.

Cory writes:

Germany’s DEAL project, which includes over 60 major research institutions, has announced that all of its members are canceling their subscriptions to all of Elsevier’s academic and scientific journals, effective January 1, 2017.

The boycott is in response to Elsevier’s refusal to adopt “transparent business models” to “make publications more openly accessible.”

Just guessing but I suspect the DEAL project would welcome news of other consortia and schools taking similar action.

Over the short term, scholars can tide themselves over with Sci-Hub.

Cory ends:

No full-text access to Elsevier journals to be expected from 1 January 2017 on [Göttingen State and University Library]

How many libraries will you contact by the end of this year?

Version 2 of the Hubble Source Catalog [Model For Open Access – Attn: Security Researchers]

Friday, September 30th, 2016

Version 2 of the Hubble Source Catalog

From the post:

The Hubble Source Catalog (HSC) is designed to optimize science from the Hubble Space Telescope by combining the tens of thousands of visit-based source lists in the Hubble Legacy Archive (HLA) into a single master catalog.

Version 2 includes:

  • Four additional years of ACS source lists (i.e., through June 9, 2015). All ACS source lists go deeper than in version 1. See current HLA holdings for details.
  • One additional year of WFC3 source lists (i.e., through June 9, 2015).
  • Cross-matching between HSC sources and spectroscopic COS, FOS, and GHRS observations.
  • Availability of magauto values through the MAST Discovery Portal. The maximum number of sources displayed has increased from 10,000 to 50,000.

The HSC v2 contains members of the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR Source Extractor source lists from HLA version DR9.1 (data release 9.1). The crossmatching process involves adjusting the relative astrometry of overlapping images so as to minimize positional offsets between closely aligned sources in different images. After correction, the astrometric residuals of crossmatched sources are significantly reduced, to typically less than 10 mas. The relative astrometry is supported by using Pan-STARRS, SDSS, and 2MASS as the astrometric backbone for initial corrections. In addition, the catalog includes source nondetections. The crossmatching algorithms and the properties of the initial (Beta 0.1) catalog are described in Budavari & Lubow (2012).


There are currently three ways to access the HSC as described below. We are working towards having these interfaces consolidated into one primary interface, the MAST Discovery Portal.

  • The MAST Discovery Portal provides a one-stop web access to a wide variety of astronomical data. To access the Hubble Source Catalog v2 through this interface, select Hubble Source Catalog v2 in the Select Collection dropdown, enter your search target, click search and you are on your way. Please try Use Case Using the Discovery Portal to Query the HSC
  • The HSC CasJobs interface permits you to run large and complex queries, phrased in the Structured Query Language (SQL).
  • HSC Home Page

    – The HSC Summary Search Form displays a single row entry for each object, as defined by a set of detections that have been cross-matched and hence are believed to be a single object. Averaged values for magnitudes and other relevant parameters are provided.

    – The HSC Detailed Search Form displays an entry for each separate detection (or nondetection if nothing is found at that position) using all the relevant Hubble observations for a given object (i.e., different filters, detectors, separate visits).
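For the CasJobs route, here is a sketch of the kind of SQL query that interface accepts, wrapped in a small query builder. The table and column names (`SumMagAper2CatView`, `MatchID`, `MatchRA`, `MatchDec`, `NumImages`) are assumptions for illustration; check the HSC schema inside CasJobs for the real ones before submitting anything. Note this is a crude box search, not a true cone search:

```python
def hsc_cone_query(ra, dec, radius_arcsec, limit=50000):
    """Build a crude box search around (ra, dec), both in degrees.

    `limit` defaults to 50,000, matching the v2 display maximum
    mentioned in the release notes above.
    """
    r = radius_arcsec / 3600.0  # arcseconds -> degrees
    return (
        f"SELECT TOP {limit} MatchID, MatchRA, MatchDec, NumImages\n"
        f"FROM SumMagAper2CatView\n"
        f"WHERE MatchRA BETWEEN {ra - r} AND {ra + r}\n"
        f"  AND MatchDec BETWEEN {dec - r} AND {dec + r}"
    )


# A 10-arcsecond box around the approximate position of M101:
print(hsc_cone_query(210.8025, 54.3489, 10))
```

A real cone search should also correct the RA width by cos(dec), and CasJobs typically provides a built-in cross-match function for that purpose; the box form above is only meant to show the shape of a query.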

Amazing, isn’t it?

The astronomy community long ago vanquished data hoarding and constructed tools to avoid moving very large data sets across the network.

All while enabling more and not less access and research using the data.

Contrast that to the sorry state of security research, where example code is condemned, if not actually prohibited by law.

Yet, if you believe current news reports (always an iffy proposition), cybercrime is growing by leaps and bounds. (PwC Study: Biggest Increase in Cyberattacks in Over 10 Years)

How successful is the “data hoarding” strategy of the security research community?

Do Your Part! Illegally Download Scientific Papers

Sunday, August 28th, 2016


From Rob Beschizza’s post at: Do Your Part! Illegally Download Scientific Papers, which has a poster size, 1940 x 2521 pixel resolution, version.

NASA just made all its research available online for free (Really?)

Saturday, August 20th, 2016

NASA just made all its research available online for free by Tim Walker.

Caution: The green colored links in the original post are pop-up ads and not links to content.

From the post:

Care to learn more about 400-foot tsunamis on Mars? Now you can, after Nasa announced it is making all its publicly funded research available online for free. The space agency has set up a new public web portal called Pubspace, where the public can find Nasa-funded research articles on everything from the chances of life on one of Saturn’s moons to the effects of space station living on the hair follicles of astronauts.

In 2013, the White House Office of Science and Technology Policy directed Nasa and other agencies to increase access to their research, which in the past was often available (if it was available online at all) only via a paywall. Now, it is Nasa policy that any research articles funded by the agency have to be posted on Pubspace within a year of publication.

There are some exceptions, such as research that relates to national security. Nonetheless, there are currently a little over 850 articles available on the website with many more to come.

NASA was created in 1958, yet all of its research “available online for free” amounts to approximately 850 documents?

Even starting in 2013, 850 documents seems a bit light.

Truth of the matter is that NASA has created yet another information silo of NASA data.

Here are just a few of the other NASA silos that come to mind right off hand:

Johnson Space Center Document Index System

NASA Aeronautics and Space Database

NASA Documents Online


NASA Technical Report Server

I don’t know if any of those include data repositories from NASA missions or not. Plus any other information silos NASA has constructed over the years.

I applaud NASA for making sponsored research public, but building yet another silo to do so seems wrong-headed.

Conversion and replacement of any of these silos is obviously out of the question.

Undertaking to map all of them together, for some undefined ROI, seems equally unlikely.

Suggestions on how to approach such a large, extant silo problem?

Congressional Research Service Fiscal 2015 – Full Report List

Saturday, August 6th, 2016

Congressional Research Service Fiscal 2015

The Director’s Message:

From international conflicts and humanitarian crises, to immigration, transportation, and secondary education, the Congressional Research Service (CRS) helped every congressional office and committee navigate the wide range of complex and controversial issues that confronted Congress in FY2015.

We kicked off the year strongly, preparing for the newly elected Members of the 114th Congress with the tenth biannual CRS Seminar for New Members, and wrapped up 2015 supporting the transition to a new Speaker and the crafting of the omnibus appropriations bill. In between, CRS experts answered over 62,000 individual requests; hosted over 7,400 Congressional participants at seminars, briefings and trainings; provided over 3,600 new or refreshed products; and summarized over 8,000 pieces of legislation.

While the CRS mission remains the same, Congress and the environment in which it works are continually evolving. To ensure that the Service is well positioned to anticipate and meet the information and research needs of a 21st-century Congress, we launched a comprehensive strategic planning effort that has identified the most critical priorities, goals, and objectives that will enable us to most efficiently and effectively serve Congress as CRS moves into its second century.

Responding to the increasingly rapid pace of congressional business, and taking advantage of new technologies, we continued to explore new and innovative ways to deliver authoritative information and timely analysis to Congress. For example, we introduced shorter report formats and added infographics to our website to better serve congressional needs.

It is an honor and privilege to work for the U.S. Congress. With great dedication, our staff creatively supports Members, staff and committees as they help shape and direct the legislative process and our nation’s future. Our accomplishments in fiscal 2015 reflect that dedication.

All true but also true that the funders of all those wonderful efforts, taxpayers, have spotty and/or erratic access to those research goodies.

Perhaps that will change in the not too distant future.

But until then, the list of all the new CRS products in 2015, which runs from page 47 to page 124, may be of interest.

Not all entries are unique as they may appear under different categories.

Sadly the only navigation you are offered is by chunky categories like “Health” and “Law and Justice.”

Hmmm, perhaps that can be fixed, at least to some degree.

Watch for more CRS news this coming week.

Open Access Journals Threaten Science – What’s Your Romesburg Number?

Friday, July 1st, 2016

When I saw the pay-per-view screen shot of this article on Twitter, I almost dismissed it as Photoshop-based humor. But, anything is possible so I searched for the title, only to find:

How publishing in open access journals threatens science and what we can do about it by H. Charles Romesburg (Department of Environment and Society, Utah State University, Logan, UT, USA).


The last decade has seen an enormous increase in the number of peer-reviewed open access research journals in which authors whose articles are accepted for publication pay a fee to have them made freely available on the Internet. Could this popularity of open access publishing be a bad thing? Is it actually imperiling the future of science? In this commentary, I argue that it is. Drawing upon research literature, I explain why it is almost always best to publish in society journals (i.e., those sponsored by research societies such as Journal of Wildlife Management) and not nearly as good to publish in commercial academic journals, and worst—to the point it should normally be opposed—to publish in open access journals (e.g., PLOS ONE). I compare the operating plans of society journals and open access journals based on 2 features: the quality of peer review they provide and the quality of debate the articles they publish receive. On both features, the quality is generally high for society journals but unacceptably low for open access journals, to such an extent that open access publishing threatens to pollute science with false findings. Moreover, its popularity threatens to attract researchers’ allegiance to it and away from society journals, making it difficult for them to achieve their traditionally high standards of peer reviewing and of furthering debate. I prove that the commonly claimed benefits to science of open access publishing are nonexistent or much overestimated. I challenge the notion that journal impact factors should be a key consideration in selecting journals in which to publish. I suggest ways to strengthen the Journal and keep it strong. © 2016 The Wildlife Society.

On a pay-per-view site (of course):


You know about the Erdős number, which measures your distance from collaborating with Paul Erdős.

I propose the Romesburg Number, which measures your collaboration distance from H. Charles Romesburg. The higher your number, the further removed you are from Romesburg.
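Like the Erdős number, a Romesburg number is just the shortest path in the coauthorship graph, computable with a breadth-first search. A minimal sketch, using an entirely made-up graph:

```python
from collections import deque

def collaboration_distance(coauthors, start, target):
    """Shortest path in a coauthorship graph (Erdős/Romesburg style).

    `coauthors` maps each author to the set of people they have
    coauthored with. Returns the number of coauthorship hops from
    `start` to `target`, or None if no chain connects them."""
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        author, dist = queue.popleft()
        for peer in coauthors.get(author, ()):
            if peer == target:
                return dist + 1
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, dist + 1))
    return None

# Toy graph with invented author names:
graph = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B", "Romesburg"},
    "Romesburg": {"C"},
}
print(collaboration_distance(graph, "A", "Romesburg"))  # 3
```

On this definition, never having collaborated within reach of Romesburg gives you no number at all, which for present purposes may be the most desirable outcome.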

I don’t have all the data, but I am hopeful my Romesburg number is 12 or higher.

The No-Value-Add Of Academic Publishers And Peer Review

Tuesday, June 21st, 2016

Comparing Published Scientific Journal Articles to Their Pre-print Versions by Martin Klein, Peter Broadwell, Sharon E. Farb, Todd Grappone.


Academic publishers claim that they add value to scholarly communications by coordinating reviews and contributing and enhancing text during publication. These contributions come at a considerable cost: U.S. academic libraries paid $1.7 billion for serial subscriptions in 2008 alone. Library budgets, in contrast, are flat and not able to keep pace with serial price inflation. We have investigated the publishers’ value proposition by conducting a comparative study of pre-print papers and their final published counterparts. This comparison had two working assumptions: 1) if the publishers’ argument is valid, the text of a pre-print paper should vary measurably from its corresponding final published version, and 2) by applying standard similarity measures, we should be able to detect and quantify such differences. Our analysis revealed that the text contents of the scientific papers generally changed very little from their pre-print to final published versions. These findings contribute empirical indicators to discussions of the added value of commercial publishers and therefore should influence libraries’ economic decisions regarding access to scholarly publications.

The authors have performed a very detailed analysis of pre-prints, 90% – 95% of which are published as open pre-prints first, to conclude there is no appreciable difference between the pre-prints and the final published versions.
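The “standard similarity measures” the study mentions can be as simple as a word-level edit ratio. A rough sketch using Python’s difflib (the paper’s actual pipeline is more elaborate, and the sample texts here are invented):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Word-level similarity ratio in [0, 1]; 1.0 means identical text."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

preprint  = "we find the effect is significant at the five percent level"
published = "we find that the effect is significant at the five percent level"

print(similarity(preprint, preprint))            # 1.0 -- text unchanged
print(round(similarity(preprint, published), 2)) # ~0.96 -- one word added
```

Scores near 1.0 across a large corpus are exactly the pattern the authors report: the published version barely differs from the pre-print.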

I take “…no appreciable difference…” to mean academic publishers and the peer review process, despite claims to the contrary, contribute little or no value to academic publications.

How’s that for a bargaining chip in negotiating subscription prices?

Where Has Sci-Hub Gone?

Saturday, June 18th, 2016

While I was writing about the latest EC idiocy (link tax), I was reminded of Sci-Hub.

Just checking to see if it was still alive, I tried

404 by standard DNS service.

If you are having the same problem, Mike Masnick reports in Sci-Hub, The Repository Of ‘Infringing’ Academic Papers Now Available Via Telegram, you can access Sci-Hub via:

I’m not on Telegram, yet, but that may be changing soon. 😉

BTW, while writing this update, I stumbled across: The New Napster: How Sci-Hub is Blowing Up the Academic Publishing Industry by Jason Shen.

From the post:

This is obviously piracy. And Elsevier, one of the largest academic journal publishers, is furious. In 2015, the company earned $1.1 billion in profits on $2.9 billion in revenue [2] and Sci-hub directly attacks their primary business model: subscription service it sells to academic organizations who pay to get access to its journal articles. Elsevier filed a lawsuit against Sci-Hub in 2015, claiming Sci-hub is causing irreparable injury to the organization and its publishing partners.

But while Elsevier sees Sci-Hub as a major threat, for many scientists and researchers, the site is a gift from the heavens, because they feel unfairly gouged by the pricing of academic publishing. Elsevier is able to boast a lucrative 37% profit margin because of the unusual (and many might call exploitative) business model of academic publishing:

  • Scientists and academics submit their research findings to the most prestigious journal they can hope to land in, without getting any pay.
  • The journal asks leading experts in that field to review papers for quality (this is called peer-review and these experts usually aren’t paid)
  • Finally, the journal turns around and sells access to these articles back to scientists/academics via the organization-wide subscriptions at the academic institution where they work or study

There’s piracy afoot, of that I have no doubt. Consider Elsevier, which:


  • Relies on research it does not sponsor
  • Research results are submitted to it for free
  • Research is reviewed for free
  • Research is published in journals of value only because of the free contributions to them
  • Elsevier makes a 37% profit off of that free content

There is piracy but Jason fails to point to Elsevier as the pirate.

Sci-Hub/Alexandra Elbakyan is re-distributing intellectual property that was stolen by Elsevier from the academic community, for its own gain.

It’s time to bring Elsevier’s reign of terror against the academic community to an end. Support Sci-Hub in any way possible.

If You Believe In OpenAccess, Do You Practice OpenAccess?

Wednesday, June 15th, 2016


From the webpage:

CSC Open-Access Library aim to maintain and develop access to journal publication collections as a research resource for students, teaching staff, researchers and industrialists.

You can see a complete listing of the journals here.

Before you protest these are not Science or Nature, remember that Science and Nature did not always have the reputations they do today.

Let the quality of your work bolster the reputations of open access publications and attract others to them.

Reproducible Research Resources for Research(ing) Parasites

Friday, June 3rd, 2016

Reproducible Research Resources for Research(ing) Parasites by Scott Edmunds.

From the post:

Two new research papers on scabies and tapeworms published today showcase a new collaboration with This demonstrates a new way to share scientific methods that allows scientists to better repeat and build upon these complicated studies on difficult-to-study parasites. It also highlights a new means of writing all research papers with citable methods that can be updated over time.

While there has been recent controversy (and hashtags in response) from some of the more conservative sections of the medical community calling those who use or build on previous data “research parasites”, as data publishers we strongly disagree with this. And also feel it is unfair to drag parasites into this when they can teach us a thing or two about good research practice. Parasitology remains a complex field given the often extreme differences between parasites, which all fall under the umbrella definition of an organism that lives in or on another organism (host) and derives nutrients at the host’s expense. Published today in GigaScience are articles on two parasitic organisms, scabies and on the tapeworm Schistocephalus solidus. Not only are both papers in parasitology, but the way in which these studies are presented showcase a new collaboration with that provides a unique means for reporting the Methods that serves to improve reproducibility. Here the authors take advantage of their open access repository of scientific methods and a collaborative protocol-centered platform, and we for the first time have integrated this into our submission, review and publication process. We now also have a groups page on the portal where our methods can be stored.

A great example of how sharing data advances research.

Of course, that assumes that one of your goals is to advance research and not solely yourself, your funding and/or your department.

Such self-centered as opposed to research-centered individuals do exist, but I would not malign true parasites by describing them as such, even colloquially.

The days of science data hoarders are numbered and one can only hope that the same is true for the “gatekeepers” of humanities data, manuscripts and artifacts.

The only known contribution of hoarders or “gatekeepers” has been to the retarding of their respective disciplines.

Given the choice of advancing your field along with yourself, or only yourself, which one will you choose?

Academic, Not Industrial Secrecy

Saturday, March 12th, 2016

Data too important to share: do those who control the data control the message? by Peter Doshi (BMJ 2016;352:i1027).

Read Peter’s post for the details but the problem in a nutshell:

“The main concern we had was that Fresenius was involved in the process,” Myburgh explained. He said there was never any question of Krumholz’s independence or credentials. Rather, it was a “concern that this was a way for Fresenius to get the data once they were in the public domain. We want restrictions on who could do the analyses.”

Under the YODA model Krumholz proposed, the data would be reanalysed by independent parties before being made more broadly available.

“We have no issue with the concept of data sharing,” Myburgh said. “The concerns we have come down to the people with ulterior motives which contradict or do not adhere to the scientific principles we adhere to. That’s the danger.”

Myburgh described himself as an impartial scientist, in contrast to those who have challenged his study. “I’ve heard some of the protagonists of starch. Senior figures wanted to make a point. We do research to answer a question. They do analyses to prove a point.” (emphasis added)

You can hear the echoes of Myburgh’s position of:

We want restrictions on who could do the analyses.

in every government claim for not releasing data that supports government conclusions.

If “terrorists” really are the danger the government claims, don’t you think releasing the data on which that claim is based would convince everyone? Or nearly everyone?

Ah, but some of us might not think opposing corrupt, puppet governments in the Middle East is the same thing as “terrorism.”

And still others of us might not think opposing an oppressive theocracy is the same as “terrorism.”

Yes, more data could lead to more informed discussion, but it could also lead to inconvenient questions.

If Myburgh and colleagues were to find themselves denied further funding from any source unless and until they released this and other trial data, they would sing a different tune.

Anyone with a list of the funders for Myburgh and his colleagues?

Email addresses would be a good start.

Overlay Journals – Community-Based Peer Review?

Friday, February 12th, 2016

New Journals Piggyback on arXiv by Emily Conover.

From the post:

A non-traditional style of scientific publishing is gaining ground, with new journals popping up in recent months. The journals piggyback on the arXiv or other scientific repositories and apply peer review. A link to the accepted paper on the journal’s website sends readers to the paper on the repository.

Proponents hope to provide inexpensive open access publication and streamline the peer review process. To save money, such “overlay” journals typically do away with some of the services traditional publishers provide, for example typesetting and copyediting.

Not everyone is convinced. Questions remain about the scalability of overlay journals, and whether they will catch on — or whether scientists will demand the stamp of approval (and accompanying prestige) that the established, traditional journals provide.

The idea is by no means new — proposals for journals interfacing with online archives appeared as far back as the 1990s, and a few such journals are established in mathematics and computer science. But now, say proponents, it’s an idea whose time has come.

The newest such journal is the Open Journal of Astrophysics, which began accepting submissions on December 22. Editor in Chief Peter Coles of the University of Sussex says the idea came to him several years ago in a meeting about the cost of open access journals. “They were talking about charging thousands of pounds for making articles open access,” Coles says, and he thought, “I never consult journals now; I get all my papers from the arXiv.” By adding a front end onto arXiv to provide peer review, Coles says, “We can dispense with the whole paraphernalia with traditional journals.”

Authors first submit their papers to arXiv, and then input the appropriate arXiv ID on the journal’s website to indicate that they would like their paper reviewed. The journal follows a standard peer review process, with anonymous referees whose comments remain private.

When an article is accepted, a link appears on the journal’s website and the article is issued a digital object identifier (DOI). The entire process is free for authors and readers. As APS News went to press, Coles hoped to publish the first batch of half-dozen papers at the end of January.

My Archive for the ‘Peer Review’ Category has only a few of the high profile failures of peer review over the last five years.

You are probably familiar with at least twice as many reports as I have reported in this blog on the brokenness of peer review.

If traditional peer review is a known failure, why replicate it even for overlay journals?

Why not ask the full set of peers in a discipline, that is, the readers of articles posted in public repositories?

If a book or journal article goes uncited, isn’t that evidence that it did not advance the discipline in a way meaningful to its peers?

What other evidence would you have that it did advance the discipline? The opinions of friends of the editor? That seems too weak to even suggest.

Citation analysis isn’t free from issues (see Are 90% of academic papers really never cited? Searching citations about academic citations reveals the good, the bad and the ugly), but it has the advantage of drawing on the entire pool of talent that comprises a discipline.

Moreover, peer review would not be limited to a one-time judgment by traditional peer reviewers but would rest on how a monograph or article fits into the intellectual development of the discipline as a whole.

Which is more persuasive: That editors and reviewers at Science or Nature accept a paper or that in the ten years following publication, an article is cited by every other major study in the field?

Citation analysis obviates the overhead costs that are raised about organizing peer review on a massive scale. Why organize peer review at all?

Peers are going to read and cite good literature and more likely than not, skip the bad. Unless you need to create positions for gate keepers and other barnacles on the profession, opt for citation based peer review based on open repositories.
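As a toy illustration of citation-based vetting, counting how often the community cites each paper is enough to produce a ranking; uncited work simply never appears (all paper names below are invented):

```python
from collections import Counter

def citation_rank(citations):
    """Rank papers by how often the rest of the field cites them.

    `citations` is a list of (citing_paper, cited_paper) pairs --
    the community-wide 'review' argued for above. Returns papers
    most-cited first; papers nobody cites never appear at all."""
    counts = Counter(cited for _, cited in citations)
    return counts.most_common()

edges = [
    ("paper-B", "paper-A"),
    ("paper-C", "paper-A"),
    ("paper-C", "paper-B"),
]
print(citation_rank(edges))  # [('paper-A', 2), ('paper-B', 1)]
```

Real citation analysis must also handle self-citation, citation cartels, and field-size effects, but the core signal is this simple.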

I’m betting on the communities that silently vet papers and books in spite of the formalized and highly suspect mechanisms for peer review.

Overlay journals could publish preliminary lists of articles that are of interest in particular disciplines and as community-based peer review progresses, they can publish “best of…” series as the community further filters the publications.

Community-based peer review is already operating in your discipline. Why not call it out and benefit from it?

Sci-Hub Tip: Converting Paywall DOIs to Public Access

Thursday, February 11th, 2016

In a tweet Jon Tenn@nt points out that:

Reminder: add “” after the .com in the URL of pretty much any paywalled paper to gain instant free access.

BTW, I tested Jon’s advice with:****/*******

re-cast as:****/*******

And it works!

With a little scripting, you can convert your paywall DOIs into public access with

This “worked for me” so if you encounter issues, please ping me so I can update this post.

Happy reading!

Tackling Zika

Thursday, February 11th, 2016

F1000Research launches rapid, open, publishing channel to help scientists tackle Zika

From the post:

ZAO provides a platform for scientists and clinicians to publish their findings and source data on Zika and its mosquito vectors within days of submission, so that research, medical and government personnel can keep abreast of the rapidly evolving outbreak.

The channel provides diamond-access: it is free to access and articles are published free of charge. It also accepts articles on other arboviruses such as Dengue and Yellow Fever.

The need for the channel is clearly evidenced by a recent report on the global response to the Ebola virus by the Harvard-LSHTM (London School of Hygiene & Tropical Medicine) Independent Panel.

The report listed ‘Research: production and sharing of data, knowledge, and technology’ among its 10 recommendations, saying: “Rapid knowledge production and dissemination are essential for outbreak prevention and response, but reliable systems for sharing epidemiological, genomic, and clinical data were not established during the Ebola outbreak.”

Dr Megan Coffee, an infectious disease clinician at the International Rescue Committee in New York, said: “What’s published six months, or maybe a year or two later, won’t help you – or your patients – now. If you’re working on an outbreak, as a clinician, you want to know what you can know – now. It won’t be perfect, but working in an information void is even worse. So, having a way to get information and address new questions rapidly is key to responding to novel diseases.”

Dr. Coffee is also a co-author of an article published in the channel today, calling for rapid mobilisation and adoption of open practices in an important strand of the Zika response: drug discovery –

Sean Ekins, of Collaborative Drug Discovery, and lead author of the article, which is titled ‘Open drug discovery for the Zika virus’, said: “We think that we would see rapid progress if there was some call for an open effort to develop drugs for Zika. This would motivate members of the scientific community to rally around, and centralise open resources and ideas.”

Another co-author of the article, Lucio Freitas-Junior of the Brazilian Biosciences National Laboratory, added: “It is important to have research groups working together and sharing data, so that scarce resources are not wasted in duplication. This should always be the case for neglected diseases research, and even more so in the case of Zika.”

Rebecca Lawrence, Managing Director, F1000, said: “One of the key conclusions of the recent Harvard-LSHTM report into the global response to Ebola was that rapid, open data sharing is essential in disease outbreaks of this kind and sadly it did not happen in the case of Ebola.

“As the world faces its next health crisis in the form of the Zika virus, F1000Research has acted swiftly to create a free, dedicated channel in which scientists from across the globe can share new research and clinical data, quickly and openly. We believe that it will play a valuable role in helping to tackle this health crisis.”


For more information:

Andrew Baud, Tala (on behalf of F1000), +44 (0) 20 3397 3383 or +44 (0) 7775 715775

Excellent news for researchers but a direct link to the new channel would have been helpful as well: Zika & Arbovirus Outbreaks (ZAO).

See this post: The Zika & Arbovirus Outbreaks channel on F1000Research by Thomas Ingraham.

News organizations should note that as of today, 11 February 2016, ZAO offers 9 articles, 16 posters and 1 set of slides. Those numbers are likely to increase rapidly.

Oh, did I mention the ZAO channel is free?

Unlike some journals, payment, prestige, privilege, are not pre-requisites for publication.

Useful research on Zika & Arboviruses is the only requirement.

I know, sounds like a dangerous precedent but defeating a disease like Zika will require taking risks.

First Pirate – Sci-Hub?

Wednesday, February 10th, 2016

Sci-Hub romanticizes itself as:

Sci-Hub the first pirate website in the world to provide mass and public access to tens of millions of research papers. (from the about page)

I agree with:

…mass and public access to tens of millions of research papers

But Sci-Hub is hardly:

…the first pirate website in the world

I don’t remember the first gate-keeping publisher that went from stealing from the public in print to stealing from the public online.

With careful enough research I’m sure we could track that down but I’m not sure it matters at this point.

What we do know is that academic research is funded by the public, edited and reviewed by volunteers (to the extent it is reviewed at all), and then kept from the vast bulk of humanity for profit and status (gate-keeping).

It’s heady stuff to think of yourself as a bold and swashbuckling pirate, going to stick it “…to the man.”

However, gate-keeping publishers have developed stealing from the public to an art form. If you don’t believe me, take a brief look at the provisions in the Trans-Pacific Partnership that protect traditional publisher interests.

Recovering what has been stolen from the public isn’t theft at all, it’s restoration!

Use Sci-Hub, support Sci-Hub, spread the word about Sci-Hub.

Allow gate-keeping publishers to slowly, hopefully painfully, wither as opportunities for exploiting the public grow fewer and farther between.

PS: You need to read: Meet the Robin Hood of Science by Simon Oxenham to get the full background on Sci-Hub and an extraordinary person, Alexandra Elbakyan.

Addressing The Concerns Of The Selfish

Monday, January 25th, 2016

A burnt hand didn’t teach any lessons to Dr. Jeffrey M. Drazen of the New England Journal of Medicine (NEJM).

Just last week Jeffrey and a co-conspirator took to the editorial page of the NEJM to denounce as “parasites,” scientists who reuse data developed by others. Especially, if the data developers weren’t included in the new work. See: Parasitic Re-use of Data? Institutionalizing Toadyism.

Overly sensitive, as protectors of the greedy tend to be, Jeffrey returns to the editorial page to say:

In the process of formulating our policy, we spoke to clinical trialists around the world. Many were concerned that data sharing would require them to commit scarce resources with little direct benefit. Some of them spoke pejoratively in describing data scientists who analyze the data of others.3 To make data sharing successful, it is important to acknowledge and air those concerns.(Data Sharing and The Journal)

On target with concerns about data sharing requiring “…scarce resources with little direct benefit.”

Except Jeffrey forgot to mention that in his editorial about “parasites.”

Not a single word. The “cost free” myth of sharing data persists and the NEJM’s voice could be an important one in dispelling that myth.

But not Jeffrey, he took up his lance to defend the concerns of the selfish.

I will post separately on the issue of the cost of data sharing, etc., which as I say, is a legitimate concern.

We don’t need to resort to toadyism to satisfy the concerns of scientists over re-use of their data.

Create all the needed mechanisms to compensate for the sharing of data and if anyone objects or has “concerns” about re-use of data, cease funding them and/or any project of which they are a member.

There is no right to public funding for research, especially for scientists who have developed a sense of entitlement to public funding, for their own benefit.

You might want to compare the NEJM position to that of the radio astronomy community which shares both raw and processed data with anyone who wants to download it.

It’s a question of “privilege,” and not public safety, etc.

It’s annoying enough that people are selfish with research data, don’t be dishonest as well.

Parasitic Re-use of Data? Institutionalizing Toadyism.

Thursday, January 21st, 2016

Data Sharing by Dan L. Longo, M.D., and Jeffrey M. Drazen, M.D, N Engl J Med 2016; 374:276-277 January 21, 2016 DOI: 10.1056/NEJMe1516564.

This editorial in the New England Journal of Medicine advocates the following for re-use of medical data:

How would data sharing work best? We think it should happen symbiotically, not parasitically. Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested. What is learned may be beautiful even when seen from close up.

I had to check my calendar to make sure April the 1st hadn’t slipped up on me.

This is one of the most bizarre and malignant proposals on data re-use that I have seen.

If you have an original idea, you have to approach other researchers as a suppliant and ask them to benefit from your idea, possibly using their data in new and innovative ways?

Does that smack of a “good old boys/girls” club to you?

If anyone uses the term parasitic or parasite with regard to data re-use, be sure to respond with the question:

How much do dogs in the manger contribute to science?

That phenomenon is not unknown in the humanities nor in biblical studies. There was a wave of very disgusting dissertations that began with “…X entrusted me with this fragment of the Dead Sea Scrolls….”

I suppose those professors knew their ability to attract students based on merit versus their hoarding of original text fragments better than I did. You should judge them by their choices.