Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

May 5, 2018

Sci-Hub Needs Your Help

Filed under: Open Access,Open Science,Science — Patrick Durusau @ 4:40 pm

Sci-Hub ‘Pirate Bay For Science’ Security Certs Revoked by Comodo by Andy.

From the post:

Sci-Hub, often known as ‘The Pirate Bay for Science’, has lost control of several security certificates after they were revoked by Comodo CA, the world’s largest certification authority. Comodo CA informs TorrentFreak that the company responded to a court order which compelled it to revoke four certificates previously issued to the site.

Sci-Hub is often referred to as the “Pirate Bay of Science”. Like its namesake, it offers masses of unlicensed content for free, mostly against the wishes of copyright holders.

While The Pirate Bay will index almost anything, Sci-Hub is dedicated to distributing tens of millions of academic papers and articles, something which has turned itself into a target for publishing giants like Elsevier.

Sci-Hub and its Kazakhstan-born founder Alexandra Elbakyan have been under sustained attack for several years but more recently have been fending off an unprecedented barrage of legal action initiated by the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry.

While ACS has certainly caused problems for Sci-Hub, the platform is extremely resilient and remains online.

The domains https://sci-hub.is and https://sci-hub.nu are fully operational with certificates issued by Let’s Encrypt, a free and open certificate authority supported by the likes of Mozilla, EFF, Chrome, Private Internet Access, and other prominent tech companies.

It’s unclear whether these certificates will be targeted in the future but Sci-Hub doesn’t appear to be in the mood to back down.

There are any number of obvious ways you can assist Sci-Hub. Others you will discover in conversations with your friends and other Sci-Hub supporters.

Go carefully.

February 22, 2018

If You Like “Fake News,” You Will Love “Fake Science”

Filed under: Fake News,Media,Science,Skepticism — Patrick Durusau @ 4:53 pm

Prestigious Science Journals Struggle to Reach Even Average Reliability by Björn Brembs.

Abstract:

In which journal a scientist publishes is considered one of the most crucial factors determining their career. The underlying common assumption is that only the best scientists manage to publish in a highly selective tier of the most prestigious journals. However, data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with increasing rank of the journal. On the contrary, an accumulating body of evidence suggests the inverse: methodological quality and, consequently, reliability of published research works in several fields may be decreasing with increasing journal rank. The data supporting these conclusions circumvent confounding factors such as increased readership and scrutiny for these journals, focusing instead on quantifiable indicators of methodological soundness in the published literature, relying on, in part, semi-automated data extraction from often thousands of publications at a time. With the accumulating evidence over the last decade grew the realization that the very existence of scholarly journals, due to their inherent hierarchy, constitutes one of the major threats to publicly funded science: hiring, promoting and funding scientists who publish unreliable science eventually erodes public trust in science.

Facts, even “scientific facts,” should be questioned, tested and never blindly accepted.

Knowing a report appears in Nature, or Science, or (zine of your choice), helps you find it. Beyond that, you have to read and evaluate the publication to credit it with more than a place of publication.

Read beyond abstracts and click-bait headlines; check footnotes and procedures. Do those things often enough and you will be in danger of becoming a critical reader. Careful!

January 15, 2018

The Art & Science Factory

Filed under: Art,Complexity,Science — Patrick Durusau @ 8:10 pm

The Art & Science Factory

From the about page:


The Art & Science Factory was started in 2008 by Dr. Brian Castellani to organize the various artistic, scientific and educational endeavours he and different collaborators have engaged in to address the growing complexity of global life.

Dr. Castellani is a complexity scientist/artist.

He is internationally recognized for his expertise in complexity science and its history and for his development of the SACS Toolkit, a case-based, mixed-methods, computationally-grounded framework for modeling complex systems. Dr. Castellani’s main area of study is applying complexity science and the SACS Toolkit to various topics in health and healthcare, including community health and medical education.

In terms of visual complexity, Castellani is recognized around the world for his creation of the complexity map, which can be found on Wikipedia and on this website. He is also recognized for his blog on “all things complexity science and art,” the Sociology and Complexity Science Blog.
… (emphasis in original)

Dr. Castellani apparently dislikes searchable text; the about page quote above was hand-transcribed from an image that makes up that page.

Unexpectedly, the SACS Toolkit and the other projects mentioned were not hyperlinked on that page, so here they are: SACS toolkit, complexity map, and Sociology and Complexity Science Blog, respectively.

January 11, 2018

The art of writing science

Filed under: Conferences,Science,Writing — Patrick Durusau @ 4:21 pm

The art of writing science by Kevin W. Plaxco

From the post:

The value of writing well should not be underestimated. Imagine, for example, that you hold in your hand two papers, both of which describe precisely the same set of experimental results. One is long, dense, and filled with jargon. The other is concise, engaging, and easy to follow. Which are you more likely to read, understand, and cite? The answer to this question hits directly at the value of good writing: writing well leverages your work. That is, while even the most skillful writing cannot turn bad science into good science, clear and compelling writing makes good science more impactful, and thus more valuable.

The goal of good writing is straightforward: to make your reader’s job as easy as possible. Realizing this goal, though, is not so simple. I, for one, was not a natural-born writer; as a graduate student, my writing was weak and rambling, taking forever to get to the point. But I had the good fortune to postdoc under an outstanding scientific communicator, who taught me the above-described lesson that writing well is worth the considerable effort it demands. Thus inspired, I set out to teach myself how to communicate more effectively, an effort that, some fifteen years later, I am still pursuing.

Along the way I have learned a thing or two that I believe make my papers easier to read, a few of which I am pleased to share with you here. Before I share my hard-won tips, though, I have an admission: there is no single, correct way to write. In fact, there are a myriad of solutions to the problem of writing well (see, e.g., Refs.1–4). The trick, then, is not to copy someone else’s voice, but rather to study what works—and what does not—in your own writing and that of others to formulate your own guide to effective communication. Thus, while I present here some of my most cherished writing conventions (i.e., the rules that I force on my own students), I do not mean to imply that they represent the only acceptable approach. Indeed, you (or your mentor) may disagree strongly with many of the suggestions I make below. This, though, is perfectly fine: my goal is not to convince you that I have found the one true way, but instead simply to get people thinking and talking about writing. I do so in the hope that this will inspire a few more young scientists to develop their own effective styles.

The best way to get the opportunity to do a great presentation for Balisage 2018 is to write a great paper for Balisage 2018. A great paper is step one towards being accepted and having a chance to bask in the admiration of other markup geeks.

OK, so it’s not so much basking as trying to see by star light on a cloudy night.

Still, a great paper will impress the reviewers and, if accepted, readers when it appears in this year’s proceedings.

Strong suggestion: Try Plaxco’s first-sentence-of-the-paragraph test on your paper (or any you are reviewing). If it fails, start over.

I volunteer to do peer review for Balisage so I’m anticipating some really well-written papers this year.

December 27, 2017

No Peer Review at FiveThirtyEight

Filed under: Humanities,Peer Review,Researchers,Science — Patrick Durusau @ 10:47 am

Politics Moves Fast. Peer Review Moves Slow. What’s A Political Scientist To Do? by Maggie Koerth-Baker

From the post:

Politics has a funny way of turning arcane academic debates into something much messier. We’re living in a time when so much in the news cycle feels absurdly urgent and partisan forces are likely to pounce on any piece of empirical data they can find, either to champion it or tear it apart, depending on whether they like the result. That has major implications for many of the ways knowledge enters the public sphere — including how academics publicize their research.

That process has long been dominated by peer review, which is when academic journals put their submissions in front of a panel of researchers to vet the work before publication. But the flaws and limitations of peer review have become more apparent over the past decade or so, and researchers are increasingly publishing their work before other scientists have had a chance to critique it. That’s a shift that matters a lot to scientists, and the public stakes of the debate go way up when the research subject is the 2016 election. There’s a risk, scientists told me, that preliminary research results could end up shaping the very things that research is trying to understand.

The legend of peer review catching and correcting flaws has a long history. It is a legend much tarnished by the Top 10 Retractions of 2017 and similar reports. Retractions are self-admissions of the failure of peer review. By the hundreds.

Withdrawal of papers isn’t the only debunking of peer review. The reports, papers, etc., on the failure of peer review include: “Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals,” Anaesthesia, Carlisle 2017, DOI: 10.1111/anae.13962; “The peer review drugs don’t work” by Richard Smith; “One in 25 papers contains inappropriately duplicated images, screen finds” by Cat Ferguson.

Koerth-Baker’s quoting of Justin Esarey to support peer review is an example of no or failed peer review at FiveThirtyEight.


But, on aggregate, 100 studies that have been peer-reviewed are going to produce higher-quality results than 100 that haven’t been, said Justin Esarey, a political science professor at Rice University who has studied the effects of peer review on social science research. That’s simply because of the standards that are supposed to go along with peer review – clearly reporting a study’s methodology, for instance – and because extra sets of eyes might spot errors the author of a paper overlooked.

Koerth-Baker acknowledges the failures of peer review but, since the article is premised upon peer review insulating the public from “bad science,” she brings in Justin Esarey, “…who has studied the effects of peer review on social science research.” One assumes his “studies” are mentioned to imbue his statements with an aura of authority.

Debunking Esarey’s authority to comment on the “…effects of peer review on social science research” doesn’t require much effort. If you scan his list of publications you will find Does Peer Review Identify the Best Papers?, which bears the sub-title, A Simulation Study of Editors, Reviewers, and the Social Science Publication Process.

Esarey’s comments on the effectiveness of peer review are not based on fact but on simulations of peer review systems. Useful work no doubt but hardly the confessing witness needed to exonerate peer review in view of its long history of failure.

To save you chasing the Esarey link, the abstract reads:

How does the structure of the peer review process, which can vary from journal to journal, influence the quality of papers published in that journal? In this paper, I study multiple systems of peer review using computational simulation. I find that, under any system I study, a majority of accepted papers will be evaluated by the average reader as not meeting the standards of the journal. Moreover, all systems allow random chance to play a strong role in the acceptance decision. Heterogeneous reviewer and reader standards for scientific quality drive both results. A peer review system with an active editor (who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions) can mitigate some of these effects.
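
Esarey’s simulation result is easy to sketch for yourself. Below is a minimal Monte Carlo in Python; it is not Esarey’s model (the distributions and parameters are invented for illustration), but it shows how heterogeneous reviewer and reader standards lead to accepted papers that an average reader judges substandard:

  import random

  def simulate(n_papers=10_000, n_reviewers=3, sd=1.0):
      """Toy peer review: papers have a latent quality; each reviewer
      and each reader applies a personal, randomly drawn standard."""
      accepted = judged_below = 0
      for _ in range(n_papers):
          quality = random.gauss(0, 1)
          # majority vote of reviewers, each seeing quality with noise
          votes = sum(
              quality + random.gauss(0, 0.5) > random.gauss(0, sd)
              for _ in range(n_reviewers)
          )
          if 2 * votes > n_reviewers:
              accepted += 1
              # does an average reader judge the accepted paper below par?
              if quality < random.gauss(0, sd):
                  judged_below += 1
      return accepted, judged_below

  accepted, judged_below = simulate()
  print(f"{accepted} accepted; {judged_below / max(accepted, 1):.0%} "
        f"judged below standard by a reader")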

If there were peer reviewers, editors, etc., at FiveThirtyEight, shouldn’t at least one of them have looked beyond the title Does Peer Review Identify the Best Papers? and asked Koerth-Baker what evidence Esarey has for his support of peer review? Or is agreement with Koerth-Baker sufficient?

Peer review persists for a number of unsavory reasons: prestige, professional advancement, enforcement of disciplinary ideology, and the pretension of higher-quality publications. Let’s not add a false claim of serving the public to the list.

December 7, 2017

A Guide to Reproducible Code in Ecology and Evolution

Filed under: Bioinformatics,Biology,Replication,Research Methods,Science — Patrick Durusau @ 3:33 pm

A Guide to Reproducible Code in Ecology and Evolution by British Ecological Society.

Natalie Cooper, Natural History Museum, UK and Pen-Yuan Hsing, Durham University, UK, write in the introduction:

The way we do science is changing — data are getting bigger, analyses are getting more complex, and governments, funding agencies and the scientific method itself demand more transparency and accountability in research. One way to deal with these changes is to make our research more reproducible, especially our code.

Although most of us now write code to perform our analyses, it is often not very reproducible. We have all come back to a piece of work we have not looked at for a while and had no idea what our code was doing or which of the many “final_analysis” scripts truly was the final analysis! Unfortunately, the number of tools for reproducibility and all the jargon can leave new users feeling overwhelmed, with no idea how to start making their code more reproducible. So, we have put together this guide to help.

A Guide to Reproducible Code covers all the basic tools and information you will need to start making your code more reproducible. We focus on R and Python, but many of the tips apply to any programming language. Anna Krystalli introduces some ways to organise files on your computer and to document your workflows. Laura Graham writes about how to make your code more reproducible and readable. François Michonneau explains how to write reproducible reports. Tamora James breaks down the basics of version control. Finally, Mike Croucher describes how to archive your code. We have also included a selection of helpful tips from other scientists.

True reproducibility is really hard. But do not let this put you off. We would not expect anyone to follow all of the advice in this booklet at once. Instead, challenge yourself to add one more aspect to each of your projects. Remember, partially reproducible research is much better than completely non-reproducible research.

Good luck!
… (emphasis in original)

Not counting front and back matter, 39 pages total. A lot to grasp in one reading but if you don’t already have reproducible research habits, keep a copy of this publication on top of your desk. Yes, on top of the incoming mail, today’s newspaper, forms and chart requests from administrators, etc. On top means just that, on top.

At some future date, when the pages are too worn, creased, folded, dog eared and annotated to be read easily, reprint it and transfer your annotations to a clean copy.

I first saw this in David Smith’s The British Ecological Society’s Guide to Reproducible Science.

PS: The same rules apply to data science.

November 21, 2017

Maintaining Your Access to Sci-Hub

Filed under: Open Access,Open Data,Open Science,Publishing,Science — Patrick Durusau @ 4:42 pm

A tweet today by @Sci_Hub advises:

Sci-Hub is working. To get around domain names problem, use custom Sci-Hub DNS servers 80.82.77.83 and 80.82.77.84. How to customize DNS in Windows: https://pchelp.ricmedia.com/set-custom-dns-servers-windows/
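
If you would rather test those servers before changing any system settings, a quick check is possible from Python with the dnspython package. A sketch only; the domain queried below is an assumption, so substitute whichever Sci-Hub domain is current:

  import dns.resolver  # pip install dnspython

  # query the custom Sci-Hub DNS servers directly, leaving your
  # system-wide resolver settings untouched
  resolver = dns.resolver.Resolver(configure=False)
  resolver.nameservers = ["80.82.77.83", "80.82.77.84"]

  # note: older dnspython versions use resolver.query() instead
  for rr in resolver.resolve("sci-hub.is", "A"):
      print(rr.address)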

No doubt, Elsevier will continue to attempt to interfere with your access to Sci-Hub.

Already the largest, most bloated and insecure academic publishing presence on the Internet, Elsevier labors every day to become more of an attractive nuisance.

What corporate strategy is served by painting a flashing target on your Internet presence?

Thoughts?

PS: Do update your DNS entries while pondering that question.

November 15, 2017

A Docker tutorial for reproducible research [Reproducible Reporting In The Future?]

Filed under: R,Replication,Reporting,Science — Patrick Durusau @ 10:07 am

R Docker tutorial: A Docker tutorial for reproducible research.

From the webpage:

This is an introduction to Docker designed for participants with knowledge about R and RStudio. The introduction is intended to help people who need Docker for a project. We first explain what Docker is and why it is useful. Then we go into the details on how to use it for a reproducible transportable project.

Six lessons, instructions for installing Docker, plus zip/tar ball of the materials. What more could you want?

Science has paid lip service to the idea of replication of results for centuries but with the sharing of data and analysis, reproducible research is becoming a reality.

Is reproducible reporting in the near future? Reporters preparing their analysis and releasing raw data and their extraction methods?

Or will selective releases of data, when raw data is released at all, continue to be the norm?

Please let @ICIJorg know how you feel about data hoarding, #ParadisePapers, #PanamaPapers, when data and code sharing are becoming the norm in science.

June 6, 2017

John Carlisle Hunts Bad Science (you can too!)

Filed under: Data Science,Science,Statistics — Patrick Durusau @ 6:42 pm

Carlisle’s statistics bombshell names and shames rigged clinical trials by Leonid Schneider.

From the post:

John Carlisle is a British anaesthesiologist, who works in a seaside Torbay Hospital near Exeter, at the English Channel. Despite not being a professor or in academia at all, he is a legend in medical research, because his amazing statistics skills and his fearlessness to use them exposed scientific fraud of several of his esteemed anaesthesiologist colleagues and professors: the retraction record holder Yoshitaka Fujii and his partner Yuhji Saitoh, as well as Scott Reuben and Joachim Boldt. This method needs no access to the original data: the numbers presented in the published paper suffice to check if they are actually real. Carlisle was fortunate also to have the support of his journal, Anaesthesia, when evidence of data manipulations in their clinical trials was found using his methodology. Now, the editor Carlisle dropped a major bomb by exposing many likely rigged clinical trial publications not only in his own Anaesthesia, but in five more anaesthesiology journals and two “general” ones, the stellar medical research outlets NEJM and JAMA. The clinical trials exposed in the latter for their unrealistic statistics are therefore from various fields of medicine, not just anaesthesiology. The medical publishing scandal caused by Carlisle now is perfect, and the elite journals had no choice but to announce investigations which they even intend to coordinate. Time will show how seriously their effort is meant.

Carlisle’s bombshell paper “Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals” was published today in Anaesthesia, Carlisle 2017, DOI: 10.1111/anae.13962. It is accompanied by an explanatory editorial, Loadsman & McCulloch 2017, doi: 10.1111/anae.13938. A Guardian article written by Stephen Buranyi provides the details. There is also another, earlier editorial in Anaesthesia, which explains Carlisle’s methodology rather well (Pandit, 2012).

… (emphasis in original)

Cutting to the chase, Carlisle found 90 papers with statistical patterns unlikely to occur by chance in 5,087 clinical trials.
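
The intuition behind Carlisle’s method can be illustrated with nothing more than the summary statistics reported in a paper. Under genuine randomization, baseline comparisons between trial arms differ by chance alone, so their p-values should be spread uniformly between 0 and 1; a pile-up near 1 (groups suspiciously alike) or near 0 is a red flag. Here is a toy version in Python, emphatically not Carlisle’s actual procedure (see his paper for that):

  from scipy import stats

  # p-values from baseline comparisons (age, weight, heart rate, ...)
  # reported in one trial -- invented numbers, not from any real paper
  baseline_p = [0.97, 0.99, 0.94, 0.98, 0.96, 0.99, 0.95, 0.97]

  # under honest randomization these should be Uniform(0, 1); a
  # Kolmogorov-Smirnov test quantifies how implausible the pile-up is
  stat, p = stats.kstest(baseline_p, "uniform")
  print(f"KS statistic {stat:.2f}, p-value {p:.2g}")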

There is a wealth of science papers to be investigated. Sarah Boon, in 21st Century Science Overload (2016), points out that 2.5 million new scientific papers are published every year, in 28,100 active scholarly peer-reviewed journals (2014).

Since Carlisle has done eight (8) journals, that leaves ~28,092 for your review. 😉

Happy hunting!

PS: I can easily imagine an exercise along these lines being the final project for a data mining curriculum. You?

May 29, 2017

Launch of the PhilMath Archive

Filed under: Mathematics,Philosophy,Philosophy of Science,Science — Patrick Durusau @ 8:39 pm

Launch of the PhilMath Archive: preprint server specifically for philosophy of mathematics

From the post:

PhilSci-Archive is pleased to announce the launch of the PhilMath-Archive, http://philsci-archive.pitt.edu/philmath.html a preprint server specifically for the philosophy of mathematics. The PhilMath-Archive is offered as a free service to the philosophy of mathematics community. Like the PhilSci-Archive, its goal is to promote communication in the field by the rapid dissemination of new work. We aim to provide an accessible repository in which scholarly articles and monographs can find a permanent home. Works posted here can be linked to from across the web and freely viewed without the need for a user account.

PhilMath-Archive invites submissions in all areas of philosophy of mathematics, including general philosophy of mathematics, history of mathematics, history of philosophy of mathematics, history and philosophy of mathematics, philosophy of mathematical practice, philosophy and mathematics education, mathematical applicability, mathematical logic and foundations of mathematics.

For your reference, the PhilSci-Archive.

Enjoy!

April 23, 2017

Fraudulent Peer Review – Clue? Responded On Time!

Filed under: Peer Review,Science — Patrick Durusau @ 7:28 pm

107 cancer papers retracted due to peer review fraud by Cathleen O’Grady.

As if peer review weren’t enough of a sham, some authors took it to another level:


It’s possible to fake peer review because authors are often asked to suggest potential reviewers for their own papers. This is done because research subjects are often blindingly niche; a researcher working in a sub-sub-field may be more aware than the journal editor of who is best-placed to assess the work.

But some journals go further and request, or allow, authors to submit the contact details of these potential reviewers. If the editor isn’t aware of the potential for a scam, they then merrily send the requests for review out to fake e-mail addresses, often using the names of actual researchers. And at the other end of the fake e-mail address is someone who’s in on the game and happy to send in a friendly review.

Fake peer reviewers often “know what a review looks like and know enough to make it look plausible,” said Elizabeth Wager, editor of the journal Research Integrity & Peer Review. But they aren’t always good at faking less obvious quirks of academia: “When a lot of the fake peer reviews first came up, one of the reasons the editors spotted them was that the reviewers responded on time,” Wager told Ars. Reviewers almost always have to be chased, so “this was the red flag. And in a few cases, both the reviews would pop up within a few minutes of each other.”

I’m sure timely submission of reviews wasn’t the only basis for calling fraud but it is an amusing one.

It’s past time to jettison the bloated machinery of peer review. Judge work by its use, not where it’s published.

April 17, 2017

Every NASA Image In One Archive – Crowd Sourced Index?

Filed under: Astroinformatics,BigData,Image Processing,NASA,Science — Patrick Durusau @ 8:49 pm

NASA Uploaded Every Picture It Has to One Amazing Online Archive by Will Sabel Courtney.

From the post:

Over the last five decades and change, NASA has launched hundreds of men and women from the planet’s surface into the great beyond. But America’s space agency has had an emotional impact on millions, if not billions, of others who’ve never gone past the Kármán line separating Earth from space, thanks to the images, audio, and video generated by its astronauts and probes. NASA has given us our best glimpses at distant galaxies and nearby planets—and in the process, helped us appreciate our own world even more.

And now, the agency has placed them all in one place for everyone to see: images.nasa.gov.

No, viewing this site will not be considered an excuse for a late tax return. 😉

On the other hand, it’s an impressive bit of work, although a search-only interface seems a bit thin to me.

The API docs don’t offer much comfort:

Name               Description
q                  (optional) Free text search terms to compare to all indexed metadata.
center             (optional) NASA center which published the media.
description        (optional) Terms to search for in “Description” fields.
keywords           (optional) Terms to search for in “Keywords” fields. Separate multiple values with commas.
location           (optional) Terms to search for in “Location” fields.
media_type         (optional) Media types to restrict the search to. Available types: [“image”, “audio”]. Separate multiple values with commas.
nasa_id            (optional) The media asset’s NASA ID.
photographer       (optional) The primary photographer’s name.
secondary_creator  (optional) A secondary photographer/videographer’s name.
title              (optional) Terms to search for in “Title” fields.
year_start         (optional) The start year for results. Format: YYYY.
year_end           (optional) The end year for results. Format: YYYY.

With no index, your results depend on blindly guessing the metadata entered by a NASA staffer.

Well, for “moon” I would expect “the Moon,” but the results are likely to include moons of other worlds, etc.
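
You can see exactly what the metadata search returns by querying the endpoint yourself. A sketch using the requests package, with the endpoint and response fields as I read the API docs (verify against the current documentation):

  import requests

  # free-text search of the NASA image and video library
  resp = requests.get(
      "https://images-api.nasa.gov/search",
      params={"q": "moon", "media_type": "image"},
  )
  resp.raise_for_status()

  # print the first few hits: NASA ID and title
  for item in resp.json()["collection"]["items"][:5]:
      data = item["data"][0]
      print(data["nasa_id"], "::", data["title"])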

Indexing this collection has all the marks of a potential crowd sourcing project:

  1. Easy to access data
  2. Free data
  3. Interesting data
  4. Metadata

Interested?

February 17, 2017

5 Million Fungi

Filed under: Open Science,Science — Patrick Durusau @ 3:15 pm

5 Million Fungi – Every living thing is crawling with microorganisms — and you need them to survive by Dan Fost.

Fungus is growing in Brian Perry’s refrigerator — and not the kind blooming in someone’s forgotten lunch bag.

No, the Cal State East Bay assistant professor has intentionally packed his shelves with 1,500 Petri dishes, each containing a tiny sample of fungus from native and endemic Hawaiian plant leaves. The 45-year-old mycologist (a person who studies the genetic and biochemical properties of fungi, among many other things) figures hundreds of those containers hold heretofore-unknown species.

The professor’s work identifying and cataloguing fungal endophytes — microscopic fungi that live inside plants — carries several important implications. Scientists know little about the workings of these fungi, making them a particularly exciting frontier for examination: Learning about endophytes’ relationships to their host plants could save many endangered species; farmers have begun tapping into their power to help crops build resistance to pathogens; and researchers are interested in using them to unlock new compounds to make crucial medicines for people.

The only problem — finding, naming, and preserving them before it’s too late.
… (emphasis in original)

According to Naveed Davoodian in A Long Way to Go: Protecting and Conserving Endangered Fungi, you don’t need to travel to exotic locales to contribute to our knowledge of fungi in the United States.

Willow Nero, editor of McIlvainea: Journal of American Amateur Mycology writes in Commit to Mycology:


I hope you’ll do your part as a NAMA member by renewing your commitment to mycology—the science, that is. When we convene at the North American foray later this year, our leadership will present (and later publish in this journal) clear guidelines so mycologists everywhere can collect reliable data about fungi as part of the North American Mycoflora Project. We will let you know where to start and how to carry your momentum. All we ask is that you join us. Catalogue them all! Or at least set an ambitious goal for yourself or your local NAMA-affiliated club.

I did peek at the North American Mycoflora Project, which has this challenging slogan:

Without a sequenced specimen, it’s a rumor

Sounds like your kind of folks. 😉

Mycology as a hobby has three distinct positives: One, you are not in front of your computer monitor. Two, you are gaining knowledge. Three, (hopefully) you will decide to defend fellow residents who cannot defend themselves.

January 31, 2017

Repulsion On A Galactic Scale (Really Big Data/Visualization)

Filed under: Astroinformatics,BigData,Science,Scientific Computing,Visualization — Patrick Durusau @ 10:14 am

Newly discovered intergalactic void repels Milky Way by Roy Gal.

From the post:

For decades, astronomers have known that our Milky Way galaxy—along with our companion galaxy, Andromeda—is moving through space at about 1.4 million miles per hour with respect to the expanding universe. Scientists generally assumed that dense regions of the universe, populated with an excess of galaxies, are pulling us in the same way that gravity made Newton’s apple fall toward earth.

In a groundbreaking study published in Nature Astronomy, a team of researchers, including Brent Tully from the University of Hawaiʻi Institute for Astronomy, reports the discovery of a previously unknown, nearly empty region in our extragalactic neighborhood. Largely devoid of galaxies, this void exerts a repelling force, pushing our Local Group of galaxies through space.

Astronomers initially attributed the Milky Way’s motion to the Great Attractor, a region of a half-dozen rich clusters of galaxies 150 million light-years away. Soon after, attention was drawn to a much larger structure called the Shapley Concentration, located 600 million light-years away, in the same direction as the Great Attractor. However, there has been ongoing debate about the relative importance of these two attractors and whether they suffice to explain our motion.

The work appears in the January 30 issue of Nature Astronomy and can be found online here.

Additional images, video, and links to previous related productions can be found at http://irfu.cea.fr/dipolerepeller.

If you are looking for processing/visualization of data on a galactic scale, this work by Yehuda Hoffman, Daniel Pomarède, R. Brent Tully & Hélène M. Courtois, hits the spot!

It is also a reminder that when you look up from your social media device, there is a universe waiting to be explored.

January 26, 2017

Twistance – “Rogue” Twitter Accounts – US Federal Science Agencies

Filed under: Government,Science,Twitter — Patrick Durusau @ 2:07 pm

Alice Stollmeyer has put together Twistance:

Twitter + resistance = #Twistance. “Rogue” Twitter accounts from US federal science agencies.

As of 26 January 2017, 44 members and 5,133 subscribers.

A long overdue step towards free speech for government employees and voters making decisions on what is known inside the federal government.

Caution:

A claim to be an “alternative” account may or may not be true. As with the official accounts, evaluate factual claims for yourself. Use good security practices when communicating with unknown accounts. (Some of the account names are very close in spelling but are separate accounts.)

  • Alt Hi Volcanoes NP The Unofficial “Resistance” team of Hawaii Volcanoes National Park. Not taxpayer funded.
  • Alt HHS Unofficial and unaffiliated resistance account by concerned scientists for humanity.
  • The Alt NPS and EPA Real news regarding the NPS, EPA, climate science and environmentalism
  • Alt Science Raising awareness of climate change and other threats posed by science denial. Not affiliated with the US gov. #Resist
  • Alternative CDC Unofficial unaffiliated resistance account by concerned scientists for humanity.
  • Alternative HeHo A parody account for the Herbert Hoover National Historic Site
  • Alternative NIH Unofficial group of science advocates. Stand up for science, rights, equality, social justice, & ultimately, for the health of humanity.
  • Alternative NOAA The Unofficial “Resistance” team of the NOAA. Account not tax payer subsidized. We study the oceans, and the atmosphere to understand our planet. #MASA
  • AltBadlandsNatPark You’ll never shut us down, Drumpf!
  • Alt-Badlands NPS Bigly fake #badlandsnationalpark. ‘Sad!’ – Donald J Trump. #badlands #climate #science #datarefuge #resist #resistance
  • AltEPA He can take our official Twitter but he’ll never take our FREEDOM. UNOFFICIALLY resisting.
  • altEPA The Unofficial “Resistance” team of U.S. Environmental Protection Agency. Not taxpayer subsidised! Environmental conditions may vary from alternative facts.
  • AltFDA Uncensored FDA
  • AltGlacierNPS The unofficial Twitter site for Glacier National Park of Science Fact.
  • AltHot Springs NP The Resistance Account of America’s First Resort and Preserve. Account Run By Friends of HSNP.
  • AltLassenVolcanicNP The Unofficial “Resistance” team. Within peaceful mountain forests you will find hissing fumaroles and boiling mud pots and people ready to fight for science.
  • AltMountRainierNPS Unofficial “Resistance” Team from the Mount Rainier National Park Service. Protecting what’s important..
  • AltNASA The unofficial #resist team of the National Aeronautics and Space Administration.
  • AltOlympicNPS Unofficial resistance team of the Olympic National Park. protecting what’s important and fighting fascism with science.
  • AltRockyNPS Unofficial account that is being held for people associated with RMNP. DM if you might be interested in it.
  • AltUSARC USARC’s main duties are to develop an integrated national Arctic research policy and to assist in establishing an Arctic research plan to implement it.
  • AltUSDA Resisting the censorship of facts and science. Truth wins in the end.
  • AltUSForestService The unofficial, and unsanctioned, “Resistance” team for the U.S. Forest Service. Not an official Forest Service account, not publicly funded, citizen run.
  • AltUSFWS The Alt U.S. Fish Wildlife Service (AltUSFWS) is dedicated to the conservation, protection and enhancement of fish, wildlife and plants and their habitats
  • AltUSFWSRefuge The Alt U.S. Fish Wildlife Service (AltUSFWSRefuge) is dedicated to the conservation, protection and enhancement of fish, wildlife and plants and their habitats
  • ALTUSNatParkSer The Unofficial team of U.S. National Park Service. Not taxpayer subsidised! Come for rugged scenery, fossil beds, 89 million acres of landscape
  • AltUSNatParkService The Unofficial #Resistance team of U.S. National Park Service. Not taxpayer subsidised! Come for rugged scenery, facts & 89 million acres of landscape #climate
  • AltNWS The Unofficial Resistance team of U.S. National Weather Service. Not taxpayer subsidized! Come for non-partisan science-based weather, water, and climate info.
  • AltYellowstoneNatPar We are a group of employees and scientists in Yellowstone national park. We are here to continue providing the public with important information
  • AltYosemiteNPS “Unofficial” Resistance Team. Reporting facts & protecting what’s important!
  • Angry National Park Preserving the ecological and historical integrity of National Parks while also making them available and accessible for public use and enjoyment dammit all.
  • BadHombreLands NPS Unofficial feed of Badlands NP. Protecting rugged scenery, fossil beds, 244,000 acres of mixed-grass prairie & wildlife from two-bit cheetoh-hued despots.
  • BadlandsNPSFans Shmofficial fake feed of South Dakota’s Badlands National Park (Great Again™ Edition) Account not run by park employees, current or former, so leave them alone.
  • GlacierNPS The alternative Twitter site for Glacier National Park.
  • March for Science Planning a March for Science. Date TBD. We’ll let you know when official merchandise is out to cover march costs.
  • NOAA (uncensored)
  • Resistance_NASA We are a #Resist sect of the National Aeronautics and Space Administration.
  • Rogue NASA The unofficial “Resistance” team of NASA. Not an official NASA account. Not managed by gov’t employees. Come for the facts, stay for the snark.
  • NatlParksUnderground We post the information Donald Trump censors #FindYourPark #NPS100
  • NWS Podunk We’re the third wheel of forecast offices. We still use WSR-57. Winner of Biggest Polygon at the county fair. Not an actual NWS office…but we should be.
  • Rogue NOAA Research on our climate, oceans, and marine resources should be subject to peer [not political] review. *Not an official NOAA account*
  • Stuff EPA Would Say We post info that Donald Trump censors. We report what the U.S. Environmental Protection Agency would say. Chime in w/ #StuffEPAWouldSay
  • U.S. EPA – Ungagged Ungagged news, links, tips, and conversation that the U.S. Environmental Protection Agency is unable to tell you. Not directly affiliated with @EPA.
  • U.S. Science Service Uncensored & unofficial tweets re: the science happening at the @EPA, @USDA, @NatParkService, @NASA, @NOAA etc. #ClimateChangeIsReal #DefendScience

December 21, 2016

The Course of Science

Filed under: Humor,Science — Patrick Durusau @ 7:52 pm

No doubt you will recognize “other” scientists in this description:

[Image: “The Course of Science” flowchart]

I should point out that “facts” and “truth” have been debated recently in the news media without a Jesuit in sight. So, science isn’t the only area with “iffy” processes and results.

Posted by AlessondraSpringmann on Twitter.

August 26, 2016

A Reproducible Workflow

Filed under: Science,Workflow — Patrick Durusau @ 7:07 pm

The video is 104 seconds and highly entertaining!

From the description:

Reproducible science not only reduces errors, but speeds up the process of re-running your analysis and auto-generating updated documents with the results. More info at: www.bit.ly/reprodu

How are you making your data analysis reproducible?

Enjoy!

July 14, 2016

Neil deGrasse Tyson and the Religion of Science

Filed under: Science — Patrick Durusau @ 7:53 pm

The next time you see Neil deGrasse Tyson chanting “holy, holy, holy” at the altar of science, re-read The 7 biggest problems facing science, according to 270 scientists by Julia Belluz, Brad Plumer, and Brian Resnick.

From the post:


The scientific process, in its ideal form, is elegant: Ask a question, set up an objective test, and get an answer. Repeat. Science is rarely practiced to that ideal. But Copernicus believed in that ideal. So did the rocket scientists behind the moon landing.

But nowadays, our respondents told us, the process is riddled with conflict. Scientists say they’re forced to prioritize self-preservation over pursuing the best questions and uncovering meaningful truths.

Ah, a quick correction to: “So did the rocket scientists behind the moon landing.”

Not!

The post Did Politics Fuel the Space Race? points to a White House transcript that reveals politics drove the race to the moon:

The speakers: James Webb, NASA Administrator, and President Kennedy.


James Webb: All right, then let me say this: if I go out and say that this is the number-one priority and that everything else must give way to it, I’m going to lose an important element of support for your program and for your administration.

President Kennedy [interrupting]: By who? Who? What people? Who?

James Webb: By a large number of people.

President Kennedy: Who? Who?

James Webb: Well, particularly the brainy people in industry and in the universities who are looking at a solid base.

President Kennedy: But they’re not going to pay the kind of money to get that position that we are [who we are] spending it. I say the only reason you can justify spending this tremendous…why spend five or six billion dollars a year when all these other programs are starving to death?

James Webb: Because in Berlin you spent six billion a year adding to your military budget because the Russians acted the way they did. And I have some feeling that you might not have been as successful on Cuba if we hadn’t flown John Glenn and demonstrated we had a real overall technical capability here.

President Kennedy: We agree. That’s why we wanna put this program…. That’s the dramatic evidence that we’re preeminent in space.

The rocket to the moon wasn’t about science, it was about “…dramatic evidence that we’re preeminent in space.”

If you need a not so recent example, consider the competition between Edison and Westinghouse in what Wikipedia titles: War of Currents.

Science has always been a mixture of personal ambition, politics, funding, etc.

That’s not to take anything away from science but a caution to remember it is and always has been a human enterprise.

Tyson’s claims for science should be questioned and judged like all other claims.

June 21, 2016

The No-Value-Add Of Academic Publishers And Peer Review

Filed under: Open Access,Open Data,Open Science,Publishing,Science — Patrick Durusau @ 9:33 pm

Comparing Published Scientific Journal Articles to Their Pre-print Versions by Martin Klein, Peter Broadwell, Sharon E. Farb, Todd Grappone.

Abstract:

Academic publishers claim that they add value to scholarly communications by coordinating reviews and contributing and enhancing text during publication. These contributions come at a considerable cost: U.S. academic libraries paid $1.7 billion for serial subscriptions in 2008 alone. Library budgets, in contrast, are flat and not able to keep pace with serial price inflation. We have investigated the publishers’ value proposition by conducting a comparative study of pre-print papers and their final published counterparts. This comparison had two working assumptions: 1) if the publishers’ argument is valid, the text of a pre-print paper should vary measurably from its corresponding final published version, and 2) by applying standard similarity measures, we should be able to detect and quantify such differences. Our analysis revealed that the text contents of the scientific papers generally changed very little from their pre-print to final published versions. These findings contribute empirical indicators to discussions of the added value of commercial publishers and therefore should influence libraries’ economic decisions regarding access to scholarly publications.

The authors have performed a very detailed analysis of pre-prints, 90% – 95% of which are published as open pre-prints first, to conclude there is no appreciable difference between the pre-prints and the final published versions.

I take “…no appreciable difference…” to mean academic publishers and the peer review process, despite claims to the contrary, contribute little or no value to academic publications.
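
You can run a crude version of the same comparison yourself. The paper applies more sophisticated similarity measures, but even the Python standard library gives a first approximation:

  from difflib import SequenceMatcher

  def similarity(preprint: str, published: str) -> float:
      """Return a ratio in [0, 1]; 1.0 means the texts are identical."""
      return SequenceMatcher(None, preprint, published).ratio()

  # toy strings -- substitute the full text of a real pre-print
  # and its final published version
  print(similarity("We measured the effect of X on Y.",
                   "We measured the effect of X on Y and Z."))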

How’s that for a bargaining chip in negotiating subscription prices?

June 18, 2016

Where Has Sci-Hub Gone?

Filed under: Open Access,Open Data,Open Science,Publishing,Science — Patrick Durusau @ 3:24 pm

While I was writing about the latest EC idiocy (link tax), I was reminded of Sci-Hub.

Just checking to see if it was still alive, I tried http://sci-hub.io/.

404 by standard DNS service.

If you are having the same problem, Mike Masnick reports in Sci-Hub, The Repository Of ‘Infringing’ Academic Papers Now Available Via Telegram, that you can access Sci-Hub via Telegram.

I’m not on Telegram, yet, but that may be changing soon. 😉

BTW, while writing this update, I stumbled across: The New Napster: How Sci-Hub is Blowing Up the Academic Publishing Industry by Jason Shen.

From the post:


This is obviously piracy. And Elsevier, one of the largest academic journal publishers, is furious. In 2015, the company earned $1.1 billion in profits on $2.9 billion in revenue [2] and Sci-hub directly attacks their primary business model: subscription service it sells to academic organizations who pay to get access to its journal articles. Elsevier filed a lawsuit against Sci-Hub in 2015, claiming Sci-hub is causing irreparable injury to the organization and its publishing partners.

But while Elsevier sees Sci-Hub as a major threat, for many scientists and researchers, the site is a gift from the heavens, because they feel unfairly gouged by the pricing of academic publishing. Elsevier is able to boast a lucrative 37% profit margin because of the unusual (and many might call exploitative) business model of academic publishing:

  • Scientists and academics submit their research findings to the most prestigious journal they can hope to land in, without getting any pay.
  • The journal asks leading experts in that field to review papers for quality (this is called peer-review and these experts usually aren’t paid)
  • Finally, the journal turns around and sells access to these articles back to scientists/academics via the organization-wide subscriptions at the academic institution where they work or study

There’s piracy afoot, of that I have no doubt.

Elsevier:

  • Relies on research it does not sponsor
  • Research results are submitted to it for free
  • Research is reviewed for free
  • Research is published in journals of value only because of the free contributions to them
  • Elsevier makes a 37% profit off of that free content

There is piracy but Jason fails to point to Elsevier as the pirate.

Sci-Hub/Alexandra Elbakyan is re-distributing intellectual property that was stolen by Elsevier from the academic community, for its own gain.

It’s time to bring Elsevier’s reign of terror against the academic community to an end. Support Sci-Hub in any way possible.

June 17, 2016

Volumetric Data Analysis – yt

Filed under: Astroinformatics,Data Analysis,Python,Science,Visualization — Patrick Durusau @ 7:23 pm

One of those rotating homepages:

Volumetric Data Analysis – yt

yt is a python package for analyzing and visualizing volumetric, multi-resolution data from astrophysical simulations, radio telescopes, and a burgeoning interdisciplinary community.

Quantitative Analysis and Visualization

yt is more than a visualization package: it is a tool to seamlessly handle simulation output files to make analysis simple. yt can easily knit together volumetric data to investigate phase-space distributions, averages, line integrals, streamline queries, region selection, halo finding, contour identification, surface extraction and more.

Many formats, one language

yt aims to provide a simple uniform way of handling volumetric data, regardless of where it is generated. yt currently supports FLASH, Enzo, Boxlib, Athena, arbitrary volumes, Gadget, Tipsy, ART, RAMSES and MOAB. If your data isn’t already supported, why not add it?

From the non-rotating part of the homepage:

To get started using yt to explore data, we provide resources including documentation, workshop material, and even a fully-executable quick start guide demonstrating many of yt’s capabilities.

But if you just want to dive in and start using yt, we have a long list of recipes demonstrating how to do various tasks in yt. We even have sample datasets from all of our supported codes on which you can test these recipes. While yt should just work with your data, here are some instructions on loading in datasets from our supported codes and formats.

Professional astronomical data and tools like yt put exploration of the universe at your fingertips!
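
Getting a first image out of yt takes only a few lines. A sketch using one of the project’s sample datasets, with the dataset path as given in the yt documentation (details may vary between releases):

  import yt  # pip install yt

  # load a sample dataset (downloadable from https://yt-project.org/data/)
  ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

  # slice the gas density field along the z-axis and save an image
  slc = yt.SlicePlot(ds, "z", ("gas", "density"))
  slc.save("galaxy_density_slice.png")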

Enjoy!

June 12, 2016

Ten Simple Rules for Effective Statistical Practice

Filed under: Data Science,Science,Statistics — Patrick Durusau @ 7:17 pm

Ten Simple Rules for Effective Statistical Practice by Robert E. Kass, Brian S. Caffo, Marie Davidian, Xiao-Li Meng, Bin Yu, Nancy Reid (Citation: Kass RE, Caffo BS, Davidian M, Meng X-L, Yu B, Reid N (2016) Ten Simple Rules for Effective Statistical Practice. PLoS Comput Biol 12(6): e1004961. doi:10.1371/journal.pcbi.1004961)

From the post:

Several months ago, Phil Bourne, the initiator and frequent author of the wildly successful and incredibly useful “Ten Simple Rules” series, suggested that some statisticians put together a Ten Simple Rules article related to statistics. (One of the rules for writing a PLOS Ten Simple Rules article is to be Phil Bourne [1]. In lieu of that, we hope effusive praise for Phil will suffice.)

I started to copy out the “ten simple rules,” sans the commentary, but that would be a disservice to my readers.

Nodding past a ten bullet point listing isn’t going to make your statistics more effective.

Re-write the commentary on all ten rules to apply them to every project. Focusing the rules on your work will yield specific advice and examples for your field.

Who knows? Perhaps you will be writing a ten simple rule article in your specific field, sans Phil Bourne as a co-author. (Do be sure and cite Phil.)

PS: For the curious: Ten Simple Rules for Writing a PLOS Ten Simple Rules Article by Harriet Dashnow, Andrew Lonsdale, Philip E. Bourne.

June 5, 2016

Software Carpentry Bug BBQ (June 13th, 2016)

Filed under: Programming,Research Methods,Researchers,Science — Patrick Durusau @ 9:02 pm

Software Carpentry Bug BBQ

From the post:

Software Carpentry is having a Bug BBQ on June 13th

Software Carpentry is aiming to ship a new version (5.4) of the Software Carpentry lessons by the end of June. To help get us over the finish line we are having a Bug BBQ on June 13th to squash as many bugs as we can before we publish the lessons. The June 13th Bug BBQ is also an opportunity for you to engage with our world-wide community. For more info about the event, read-on and visit our Bug BBQ website.

How can you participate? We’re asking you, members of the Software Carpentry community, to spend a few hours on June 13th to wrap up outstanding tasks to improve the lessons. Ahead of the event, the lesson maintainers will be creating milestones to identify all the issues and pull requests that need to be resolved before we wrap up version 5.4. In addition to specific fixes laid out in the milestones, we also need help to proofread and bugtest the lessons.

Where will this be? Join in from where you are: No need to go anywhere – if you’d like to participate remotely, start by having a look at the milestones on the website to see what tasks are still open, and send a pull request with your ideas to the corresponding repo. If you’d like to get together with other people working on these lessons live, we have created this map for live sites that are being organized. And if there’s no site listed near you, organize one yourself and let us know you are doing that here so that we can add your site to the map!

The Bug BBQ is going to be a great chance to get the community together, get our latest lessons over the finish line, and wrap up a product that gives you and all our contributors credit for your hard work with a citable object – we will be minting a DOI for this on publication.

A community BBQ that is open to everyone, dietary restrictions or not!

And the organizers have removed distance as a consideration for “attending.”

For those of us on non-BBQ diets, a unique opportunity to participate with others in the community for a worthy cause.

Mark your calendars today!

June 3, 2016

Reproducible Research Resources for Research(ing) Parasites

Filed under: Open Access,Open Data,Open Science,Research Methods,Researchers,Science — Patrick Durusau @ 3:58 pm

Reproducible Research Resources for Research(ing) Parasites by Scott Edmunds.

From the post:

Two new research papers on scabies and tapeworms published today showcase a new collaboration with protocols.io. This demonstrates a new way to share scientific methods that allows scientists to better repeat and build upon these complicated studies on difficult-to-study parasites. It also highlights a new means of writing all research papers with citable methods that can be updated over time.

While there has been recent controversy (and hashtags in response) from some of the more conservative sections of the medical community calling those who use or build on previous data “research parasites”, as data publishers we strongly disagree with this. And also feel it is unfair to drag parasites into this when they can teach us a thing or two about good research practice. Parasitology remains a complex field given the often extreme differences between parasites, which all fall under the umbrella definition of an organism that lives in or on another organism (host) and derives nutrients at the host’s expense. Published today in GigaScience are articles on two parasitic organisms, scabies and on the tapeworm Schistocephalus solidus. Not only are both papers in parasitology, but the way in which these studies are presented showcase a new collaboration with protocols.io that provides a unique means for reporting the Methods that serves to improve reproducibility. Here the authors take advantage of their open access repository of scientific methods and a collaborative protocol-centered platform, and we for the first time have integrated this into our submission, review and publication process. We now also have a groups page on the portal where our methods can be stored.

A great example of how sharing data advances research.

Of course, that assumes that one of your goals is to advance research and not solely yourself, your funding and/or your department.

Such self-centered as opposed to research-centered individuals do exist, but I would not malign true parasites by describing them as such, even colloquially.

The days of science data hoarders are numbered and one can only hope that the same is true for the “gatekeepers” of humanities data, manuscripts and artifacts.

The only known contribution of hoarders or “gatekeepers” has been to the retarding of their respective disciplines.

Given the choice of advancing your field along with yourself, or only yourself, which one will you choose?

April 30, 2016

MATISSE – Solar System Exploration

Filed under: Astroinformatics,Panama Papers,Science,Visualization — Patrick Durusau @ 3:37 pm

MATISSE: A novel tool to access, visualize and analyse data from planetary exploration missions by Angelo Zinzi, Maria Teresa Capria, Ernesto Palomba, Paolo Giommi, Lucio Angelo Antonelli.

Abstract:

The increasing number and complexity of planetary exploration space missions require new tools to access, visualize and analyse data to improve their scientific return.

ASI Science Data Center (ASDC) addresses this request with the web-tool MATISSE (Multi-purpose Advanced Tool for the Instruments of the Solar System Exploration), allowing the visualization of single observation or real-time computed high-order products, directly projected on the three-dimensional model of the selected target body.

Using MATISSE it will be no longer needed to download huge quantity of data or to write down a specific code for every instrument analysed, greatly encouraging studies based on joint analysis of different datasets.

In addition the extremely high-resolution output, to be used offline with a Python-based free software, together with the files to be read with specific GIS software, makes it a valuable tool to further process the data at the best spatial accuracy available.

MATISSE’s modular structure permits addition of new missions or tasks and, thanks to dedicated future developments, it would be possible to make it compliant with the Planetary Virtual Observatory standards currently under definition. In this context, an interface to the NASA ODE REST API, by which it is possible to access public repositories, has recently been developed.

Continuing a long tradition of making big data and tools for processing big data freely available online (hint, hint, Panama Papers hoarders), this paper describes MATISSE (Multi-purpose Advanced Tool for the Instruments of the Solar System Exploration), which you can find online at:

http://tools.asdc.asi.it/matisse.jsp
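If you would rather script your access than click through the web interface, the NASA ODE REST API mentioned in the abstract is one starting point. A minimal sketch in Python (the endpoint and parameter names below are assumptions drawn from the public ODE REST documentation; verify them before relying on this):

import requests

# Query NASA's Orbital Data Explorer (ODE) REST interface for product
# metadata. NOTE: the endpoint and parameter names are assumptions based
# on the public ODE REST documentation; verify them before use.
ODE_URL = "https://oderest.rsl.wustl.edu/live2/"  # assumed ODE REST endpoint

params = {
    "query": "product",  # ask for data products
    "target": "moon",    # one of MATISSE's targets
    "output": "JSON",    # response format
}

response = requests.get(ODE_URL, params=params, timeout=30)
response.raise_for_status()
print(response.json())  # inspect the returned product metadata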

Data currently available:

MATISSE currently ingests both public and proprietary data from 4 missions (ESA Rosetta, NASA Dawn, Chinese Chang’e-1 and Chang’e-2), 4 targets (4 Vesta, 21 Lutetia, 67P Churyumov-Gerasimenko, the Moon) and 6 instruments (GIADA, OSIRIS and VIRTIS-M, all onboard Rosetta; VIR onboard Dawn; elemental abundance maps from the Gamma Ray Spectrometer, Digital Elevation Models from the Laser Altimeter and Digital Orthophotos from the CCD Camera onboard Chang’e-1 and Chang’e-2).

If those names don’t sound familiar (links to mission pages):

4 Vesta – asteroid (NASA)

21 Lutetia – asteroid (ESA)

67P Churyumov-Gerasimenko – comet (ESA)

the Moon – As in “our” moon.

You can do professional-level research on extra-worldly data, but with worldly data (Panama Papers), not so much. Don’t be deceived by the forthcoming May 9th dribble of corporate data from the Panama Papers. Without the details contained in the documents, it’s little more than a suspect’s list.

April 23, 2016

Loading the Galaxy Network of the “Cosmic Web” into Neo4j

Filed under: Astroinformatics,Graphs,Neo4j,Science — Patrick Durusau @ 6:50 pm

Loading the Galaxy Network of the “Cosmic Web” into Neo4j by Michael Hunger.

Cypher script for loading “Cosmic Web” into Neo4j.

You remember “Cosmic Web:”

[Image: the cosmic web galaxy network, full visualization by Kim Albrecht]
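If you would rather drive the load from Python than paste Cypher into the shell, here is a minimal sketch using the official neo4j driver. The Galaxy label, LINKS_TO relationship type and coordinate properties are assumptions for illustration, not the schema in Hunger’s script:

# Load a toy galaxy network into Neo4j; labels and properties are
# assumptions, not the schema used in Hunger's Cypher script.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

galaxies = [
    {"id": 1, "x": 0.0, "y": 0.0, "z": 0.0},
    {"id": 2, "x": 1.2, "y": 0.4, "z": 3.1},
]
links = [(1, 2)]  # pairs of galaxy ids connected in the cosmic web

with driver.session() as session:
    # MERGE makes the load idempotent: re-running it won't duplicate nodes.
    for g in galaxies:
        session.run(
            "MERGE (n:Galaxy {id: $id}) SET n.x = $x, n.y = $y, n.z = $z",
            **g,
        )
    for a, b in links:
        session.run(
            "MATCH (a:Galaxy {id: $a}), (b:Galaxy {id: $b}) "
            "MERGE (a)-[:LINKS_TO]->(b)",
            a=a, b=b,
        )

driver.close()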

Enjoy!

300 Terabytes of Raw Collider Data

Filed under: BigData,Physics,Science — Patrick Durusau @ 2:22 pm

CERN Just Dropped 300 Terabytes of Raw Collider Data to the Internet by Andrew Liptak.

From the post:

Yesterday, the European Organization for Nuclear Research (CERN) dropped a staggering amount of raw data from the Large Hadron Collider on the internet for anyone to use: 300 terabytes worth.

The data includes 100 TB “of data from proton collisions at 7 TeV, making up half the data collected at the LHC by the CMS detector in 2011.” The release follows another infodump from 2014, and you can take a look at all of this information through the CERN Open Data Portal. Some of the information released is simply the raw data that CERN’s own scientists have been using, while another segment is already processed, with the anticipated audience being high school science courses.

It’s not the same as having your own cyclotron in the backyard with a bubble chamber, but it’s the next best thing!

If you have been looking for “big data” to stretch your limits, this fits the bill nicely.
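At these volumes you will want to stream downloads to disk rather than read them into memory. A minimal sketch, assuming a placeholder record URL (browse the CERN Open Data Portal for real file paths):

# Stream a (placeholder) file from the CERN Open Data Portal to disk
# without holding terabyte-scale data in memory. The URL is illustrative
# only; browse http://opendata.cern.ch for real record/file paths.
import requests

url = "http://opendata.cern.ch/record/EXAMPLE/files/example.root"  # placeholder
out_path = "example.root"

with requests.get(url, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open(out_path, "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)

print(f"saved {out_path}")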

Peer Review Fails, Again.

Filed under: Bioinformatics,Peer Review,Science — Patrick Durusau @ 1:51 pm

One in 25 papers contains inappropriately duplicated images, screen finds by Cat Ferguson.

From the post:

Elisabeth Bik, a microbiologist at Stanford, has for years been a behind-the-scenes force in scientific integrity, anonymously submitting reports on plagiarism and image duplication to journal editors. Now, she’s ready to come out of the shadows.

With the help of two editors at microbiology journals, she has conducted a massive study looking for image duplication and manipulation in 20,621 published papers. Bik and co-authors Arturo Casadevall and Ferric Fang (a board member of our parent organization) found 782 instances of inappropriate image duplication, including 196 published papers containing “duplicated figures with alteration.” The study is being released as a pre-print on bioRxiv.

I don’t know which is the sadder news about this paper: that three (3) journals have so far refused to publish it, or that the peer reviewers of the original papers missed the duplications.

Journals are in the business of publishing, not in the business of publishing correct results, so their refusal to publish an article that establishes the poor quality of their publications is perhaps understandable. Not acceptable, but understandable.

Unless the joke is on the reading public and other researchers: publications are just that, publications. They may or may not resemble any experiment or experience that can be duplicated by others. Rely on published results at your own peril.

Transparent access to all data, not peer review, is the only path to solving this problem.
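Bik and her co-authors screened figures largely by eye, but a crude automated first pass is easy to sketch. Assuming the third-party Pillow and imagehash packages, perceptual hashes can flag images that are identical or nearly so; this illustrates the general technique, not the authors’ method:

# Flag near-duplicate figures in a directory with perceptual hashing.
# A toy first pass, not the (largely manual) screening used by Bik,
# Casadevall and Fang. Requires: pip install pillow imagehash
from itertools import combinations
from pathlib import Path

import imagehash
from PIL import Image

THRESHOLD = 5  # max Hamming distance to call two images "duplicates"

hashes = {}
for path in Path("figures").glob("*.png"):
    hashes[path] = imagehash.phash(Image.open(path))

for (p1, h1), (p2, h2) in combinations(hashes.items(), 2):
    distance = h1 - h2  # ImageHash subtraction gives Hamming distance
    if distance <= THRESHOLD:
        print(f"possible duplicate: {p1} <-> {p2} (distance {distance})")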

March 12, 2016

Laypersons vs. Scientists – “…laypersons may be prone to biases…”

Filed under: Humanities,Psychology,Science — Patrick Durusau @ 9:09 pm

The “distinction” between laypersons and scientists is more a matter of world view on particular topics than of “all scientists are rational” or “all laypersons are irrational.” Scientists and laypersons can be equally rational and/or irrational, depending upon the topic at hand.

Having said that, The effects of social identity threat and social identity affirmation on laypersons’ perception of scientists by Peter Nauroth, et al., finds, unsurprisingly, that if a layperson’s social identity is threatened by research, they have a less favorable view of the scientists involved.

Abstract:

Public debates about socio-scientific issues (e.g. climate change or violent video games) are often accompanied by attacks on the reputation of the involved scientists. Drawing on the social identity approach, we report a minimal group experiment investigating the conditions under which scientists are perceived as non-prototypical, non-reputable, and incompetent. Results show that in-group affirming and threatening scientific findings (compared to a control condition) both alter laypersons’ evaluations of the study: in-group affirming findings lead to more positive and in-group threatening findings to more negative evaluations. However, only in-group threatening findings alter laypersons’ perceptions of the scientists who published the study: scientists were perceived as less prototypical, less reputable, and less competent when their research results imply a threat to participants’ social identity compared to a non-threat condition. Our findings add to the literature on science reception research and have implications for understanding the public engagement with science.

Perceived attacks on personal identity have negative consequences for the “reception” of science.

Implications for public engagement with science

Our findings have immediate implications for public engagement with science activities. When laypersons perceive scientists as less competent, less reputable, and not representative of the scientific community and the scientist’s opinion as deviating from the current scientific state-of-the-art, laypersons might be less willing to participate in constructive discussions (Schrodt et al., 2009). Furthermore, our mediation analysis suggests that these negative perceptions deepen the trench between scientists and laypersons concerning the current scientific state-of-the-art. We speculate that these biases might actually even lead engagement activities to backfire: instead of developing a mutual understanding they might intensify laypersons’ misconceptions about the scientific state-of-the-art. Corroborating this hypothesis, Binder et al. (2011) demonstrated that discussions about controversial science topics may in fact polarize different groups around a priori positions. Additional preliminary support for this hypothesis can also be found in case studies about public engagement activities in controversial socio-scientific issues. Some of these reports (for two examples, see Lezaun and Soneryd, 2007) indicate problems in maintaining a productive atmosphere between laypersons and experts in the discussion sessions.

Besides these practical implications, our results also add further evidence to the growing body of literature questioning the validity of the deficit model in science communication according to which people’s attitudes toward science are mainly determined by their knowledge about science (Sturgis and Allum, 2004). We demonstrated that social identity concerns profoundly influence laypersons’ perceptions and evaluations of scientific results regardless of laypersons’ knowledge. However, our results also question whether involving laypersons in policy decision processes based upon scientific evidence is reasonable in all socio-scientific issues. Particularly when the scientific evidence has potential negative consequences for social groups, our research suggests that laypersons may be prone to biases based upon their social affiliations. For example, if regular video game players were involved in decision-making processes concerning potential sales restrictions of violent video games, they would be likely to perceive scientific evidence demonstrating detrimental effects of violent video games as shoddy and the respective researchers as disreputable (Greitemeyer, 2014; Nauroth et al., 2014, 2015).(emphasis added)

The principal failure of this paper is that it does not study the scientific community itself, and how scientists react to research that attacks the personal identity of its participants.

I don’t think it is reading too much into the post: Academic, Not Industrial Secrecy, where one group said:

We want restrictions on who could do the analyses.

to say that attacks on personal identity lead to boorish behavior on the part of scientists.

Laypersons and scientists emit a never-ending stream of examples of prejudice, favoritism, sycophancy and sloppy reasoning, to say nothing of careless and/or low-quality work.

Reception of science among laypersons might improve if the scientific community abandoned its facade of “it’s objective, it’s science.”

That facade was tiresome by WWII and to keep repeating it now is a disservice to the scientific community.

All of our efforts, in any field, are human endeavors and thus subject to the vagaries and uncertainties of human interaction.

Live with it.

February 26, 2016

How to read and understand a scientific paper….

Filed under: Reading,Science — Patrick Durusau @ 3:15 pm

How to read and understand a scientific paper: a guide for non-scientists by Jennifer Raff.

From the post:

Last week’s post (The truth about vaccinations: Your physician knows more than the University of Google) sparked a very lively discussion, with comments from several people trying to persuade me (and the other readers) that their paper disproved everything that I’d been saying. While I encourage you to go read the comments and contribute your own, here I want to focus on the much larger issue that this debate raised: what constitutes scientific authority?

It’s not just a fun academic problem. Getting the science wrong has very real consequences. For example, when a community doesn’t vaccinate children because they’re afraid of “toxins” and think that prayer (or diet, exercise, and “clean living”) is enough to prevent infection, outbreaks happen.

“Be skeptical. But when you get proof, accept proof.” –Michael Specter

What constitutes enough proof? Obviously everyone has a different answer to that question. But to form a truly educated opinion on a scientific subject, you need to become familiar with current research in that field. And to do that, you have to read the “primary research literature” (often just called “the literature”). You might have tried to read scientific papers before and been frustrated by the dense, stilted writing and the unfamiliar jargon. I remember feeling this way! Reading and understanding research papers is a skill which every single doctor and scientist has had to learn during graduate school. You can learn it too, but like any skill it takes patience and practice.

I want to help people become more scientifically literate, so I wrote this guide for how a layperson can approach reading and understanding a scientific research paper. It’s appropriate for someone who has no background whatsoever in science or medicine, and based on the assumption that he or she is doing this for the purpose of getting a basic understanding of a paper and deciding whether or not it’s a reputable study.

Copy each of Jennifer’s steps into a notebook as you follow them, along with your results from applying them. That will not only help you remember the steps but also capture your understanding of the paper.

BTW, there is also a fully worked example of applying these rules to a vaccine safety study.

Compare this post to Keshav’s How to Read a Paper.

Their techniques vary but both lead to a greater understanding of any paper you read.
