Archive for the ‘Statistically Improbable Phrases (SIPs)’ Category

Identifying duplicate content using statistically improbable phrases

Friday, November 18th, 2011

Identifying duplicate content using statistically improbable phrases by Mounir Errami, Zhaohui Sun, Angela C. George, Tara C. Long, Michael A. Skinner, Jonathan D. Wren and Harold R. Garner.


Motivation: Document similarity metrics such as PubMed’s ‘Find related articles’ feature, which have been primarily used to identify studies with similar topics, can now also be used to detect duplicated or potentially plagiarized papers within literature reference databases. However, the CPU-intensive nature of document comparison has limited MEDLINE text similarity studies to the comparison of abstracts, which constitute only a small fraction of a publication’s total text. Extending searches to include text archived by online search engines would drastically increase comparison ability. For large-scale studies, submitting short phrases encased in direct quotes to search engines for exact matches would be optimal for both individual queries and programmatic interfaces. We have derived a method of analyzing statistically improbable phrases (SIPs) for assistance in identifying duplicate content.

Results: When applied to MEDLINE citations, this method substantially improves upon previous algorithms in the detection of duplicate citations, yielding a precision and recall of 78.9% (versus 50.3% for eTBLAST) and 99.6% (versus 99.8% for eTBLAST), respectively.

Availability: Similar citations identified by this work are freely accessible in the Déjà vu database, under the SIP discovery method category at

I ran across this article today while looking for other material on the Déjà vu database.

Why should Amazon have all the fun? 😉

Depending on the breadth of the search, I can imagine creating graphs of search data that display more than one SIP per article, allowing researchers to choose paths through the literature. That goes beyond what the authors intend here, but adapting their work to the search and refinement of research data seems like a natural extension.
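To make the idea concrete, here is a toy sketch of what a SIP-style score might look like: rank a document's word n-grams by how frequent they are in the document relative to a background corpus, so that phrases common in the document but rare everywhere else float to the top. This is my own illustration (the function names, smoothing, and scoring are assumptions), not the authors' actual method.

```python
from collections import Counter

def ngrams(text, n=3):
    """Return the word n-grams of lowercase text."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def improbable_phrases(doc, background_docs, n=3, top_k=5):
    """Rank a document's n-grams by rarity against a background corpus.

    Toy approximation of a SIP score: phrases frequent in `doc` but
    absent or rare in `background_docs` score highest.
    """
    background = Counter()
    for other in background_docs:
        background.update(ngrams(other, n))
    scores = {}
    for phrase, count in Counter(ngrams(doc, n)).items():
        # Add-one smoothing so unseen background phrases don't divide by zero.
        scores[phrase] = count / (1 + background[phrase])
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Two papers sharing one of each other's top phrases would then be candidates for the kind of exact-match search-engine query the abstract describes.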

And depending on how finely data from sensors or other automatic sources is segmented, it isn’t hard to imagine something similar for sensor data. Not really plagiarism, but duplication that might warrant further investigation.