Archive for the ‘Document Classification’ Category

A document classifier for medicinal chemistry publications trained on the ChEMBL corpus

Tuesday, September 9th, 2014

A document classifier for medicinal chemistry publications trained on the ChEMBL corpus by George Papadatos et al. (Journal of Cheminformatics 2014, 6:40)



The large increase in the number of scientific publications has fuelled a need for semi- and fully automated text mining approaches in order to assist in the triage process, both for individual scientists and also for larger-scale data extraction and curation into public databases. Here, we introduce a document classifier, which is able to successfully distinguish between publications that are ‘ChEMBL-like’ (i.e. related to small molecule drug discovery and likely to contain quantitative bioactivity data) and those that are not. The unprecedented size of the medicinal chemistry literature collection, coupled with the advantage of manual curation and mapping to chemistry and biology make the ChEMBL corpus a unique resource for text mining.


The method has been implemented as a data protocol/workflow for both Pipeline Pilot (version 8.5) and KNIME (version 2.9). Both workflows and models are freely available at: webcite. These can be readily modified to include additional keyword constraints to further focus searches.


Large-scale machine learning document classification was shown to be very robust and flexible for this particular application, as illustrated in four distinct text-mining-based use cases. The models are readily available on two data workflow platforms, which we believe will allow the majority of the scientific community to apply them to their own data.

While the abstract mentions “the triage process,” it fails to capture the main goal of this paper:

…the main goal of our project diverges from the goal of the tools mentioned. We aim to meet the following criteria: ranking and prioritising the relevant literature using a fast and high performance algorithm, with a generic methodology applicable to other domains and not necessarily related to chemistry and drug discovery. In this regard, we present a method that builds upon the manually collated and curated ChEMBL document corpus, in order to train a Bag-of-Words (BoW) document classifier.

In more detail, we have employed two established classification methods, namely Naïve Bayesian (NB) and Random Forest (RF) approaches [12]-[14]. The resulting classification score, henceforth referred to as ‘ChEMBL-likeness’, is used to prioritise relevant documents for data extraction and curation during the triage process.

In other words, the focus of this paper is a classifier to help prioritize curation of papers. I take that as being different from classifiers used at other stages or for other purposes in the curation process.
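To give a feel for the Bag-of-Words approach the authors describe, here is a minimal, self-contained multinomial Naïve Bayesian BoW classifier in Python. This is an illustration only, not the paper's actual model (which was built as Pipeline Pilot and KNIME workflows and also used Random Forests); the class name, toy documents, and the "chembl"/"other" labels are invented for the example.

```python
import math
from collections import Counter

def tokenize(text):
    """Crudest possible bag-of-words tokenization."""
    return text.lower().split()

class BowNaiveBayes:
    """Minimal multinomial Naive Bayes over bag-of-words counts."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        # Log prior from class frequencies in the training set
        self.priors = {c: math.log(labels.count(c) / len(labels))
                       for c in self.classes}
        # Per-class word counts
        self.counts = {c: Counter() for c in self.classes}
        for doc, label in zip(docs, labels):
            self.counts[label].update(tokenize(doc))
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def score(self, doc, c):
        """Laplace-smoothed log-likelihood plus log prior."""
        total = sum(self.counts[c].values()) + len(self.vocab)
        s = self.priors[c]
        for w in tokenize(doc):
            s += math.log((self.counts[c][w] + 1) / total)
        return s

    def predict(self, doc):
        return max(self.classes, key=lambda c: self.score(doc, c))
```

Ranking unseen documents by `score(doc, "chembl") - score(doc, "other")` would give a crude analogue of the paper's "ChEMBL-likeness" score for triage prioritisation.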

I first saw this in a tweet by ChemConnector.

If you could make a computer do anything with documents,…

Monday, March 17th, 2014

If you could make a computer do anything with documents, what would you make it do?

The OverviewProject has made a number of major improvements in the last year, and now they are asking for your opinion on what to do next.

They have funding and developers, and they are pushing out new features. I take all of those to be positive signs.

There is no guarantee that what you ask for is possible with their resources, or even of any interest to them.

But, you won’t know if you don’t ask.

I will be posting my answer to that question on this blog this coming Friday, 21 March 2014.

Spread the word! Get other people to try Overview and to answer the survey.

How to Build a Text Mining, Machine Learning….

Monday, May 13th, 2013

How to Build a Text Mining, Machine Learning Document Classification System in R! by Timothy D’Auria.

From the description:

We show how to build a machine learning document classification system from scratch in less than 30 minutes using R. We use a text mining approach to identify the speaker of unmarked presidential campaign speeches. Applications in brand management, auditing, fraud detection, electronic medical records, and more.

Well-made video introduction to R and text mining.

Document Mining with Overview:…

Friday, March 15th, 2013

Document Mining with Overview:… A Digital Tools Tutorial by Jonathan Stray.

The slides from the Overview presentation I mentioned yesterday.

One of the few webinars I have ever attended where nodding off was not a problem! Interesting stuff.

It is designed for the use case where there “…is too much material to read on deadline.”

A cross between document mining and document management.

A cross that hides a lot of the complexity from the user.

Definitely a project to watch.

Technology-Assisted Review Boosted in TREC 2011 Results

Friday, July 20th, 2012

Technology-Assisted Review Boosted in TREC 2011 Results by Evan Koblentz.

From the post:

TREC Legal Track, an annual government-sponsored project for evaluating document review methods, on Friday released its 2011 results containing a virtual vote of confidence for technology-assisted review.

“[T]he results show that the technology-assisted review efforts of several participants achieve recall scores that are about as high as might reasonably be measured using current evaluation methodologies. These efforts require human review of only a fraction of the entire collection, with the consequence that they are far more cost-effective than manual review,” the report states.

The term “technology-assisted review” refers to “any semi-automated process in which a human codes documents as relevant or not, and the system uses that information to code or prioritize further documents,” said TREC co-leader Gordon Cormack, of the University of Waterloo. Its meaning is far wider than just the software method known as predictive coding, he noted.

As such, “There is still plenty of room for improvement in the efficiency and effectiveness of technology-assisted review efforts, and, in particular, the accuracy of intra-review recall estimation tools, so as to support a reasonable decision that ‘enough is enough’ and to declare the review complete. Commensurate with improvements in review efficiency and effectiveness is the need for improved external evaluation methodologies,” the report states.

Good snapshot of current results, plus fertile data sets for testing alternative methodologies.

The report mentions that the 100 GB data set size was a problem for some participants (Overview of the TREC 2011 Legal Track, page 2).

Suggestion: Post the 2013 data set as a public data set on AWS. It would be available to everyone, and participants without local clusters could fire up capacity on demand. A more realistic scenario than local data processing.

Perhaps an informal survey of the amortized cost of processing by different methods (cloud, local cluster) would be of interest to the legal community.

I can hear the claims of “security, security” from here. The question to ask is: what disclosed premium is your client willing to pay for security on data you are going to give to the other side anyway, if it is responsive and non-privileged? 25%? 50%? 125% or more?

BTW, looking forward to the 2013 competition. Particularly if it gets posted to the AWS or similar cloud.

Let me know if you are interested in forming an ad hoc team or investigating the potential for an ad hoc team.

Introducing DocDiver

Thursday, November 3rd, 2011

Introducing DocDiver by Al Shaw, on the ProPublica Nerd Blog.

From the post:

Today [4 Oct. 2011] we’re launching a new feature that lets readers work alongside ProPublica reporters—and each other—to identify key bits of information in documents, and to share what they’ve found. We call it DocDiver [1].

Here’s how it works:

DocDiver is built on top of DocumentViewer [2] from DocumentCloud [3]. It frames the DocumentViewer embed and adds a new right-hand sidebar with options for readers to browse findings and to add their own. The “overview” tab shows, at a glance, who is talking about this document and “key findings”—ones that our editors find especially illuminating or noteworthy. The “findings” tab shows all reader findings to the right of each page near where readers found interesting bits.

Graham Moore (Networkedplanet) mentioned earlier today that the topic map working group should look for technologies and projects where topic maps can make a real difference for a minimal amount of effort. (I’m paraphrasing, so if I got it wrong, blame me, not Graham.)

This looks like a case where an application is very close to having topic map capabilities, but not quite. The project already has users and developers, and I suspect they would be interested in anything that would improve their software without starting over. The critical part would be to leverage existing software and imbue it with subject identity as we understand the concept, to the benefit of the software’s current users.

Rapid-I: Report the Future

Wednesday, October 19th, 2011

Rapid-I: Report the Future

Source of:

RapidMiner: Professional open source data mining made easy.

Analytical ETL, Data Mining, and Predictive Reporting with a single solution

RapidAnalytics: Collaborative data analysis power.

No 1 in open source business analytics

The key product for business critical predictive analysis

RapidDoc: Web-based solution for document retrieval and analysis.

Classify text, identify trends as well as emerging topics

Easy to use and configure

From About Rapid-I:

Rapid-I provides software, solutions, and services in the fields of predictive analytics, data mining, and text mining. The company concentrates on automatic intelligent analyses on a large-scale base, i.e. for large amounts of structured data like database systems and unstructured data like texts. The open-source data mining specialist Rapid-I enables other companies to use leading-edge technologies for data mining and business intelligence. The discovery and leverage of unused business intelligence from existing data enables better informed decisions and allows for process optimization.

The main product of Rapid-I, the data analysis solution RapidMiner is the world-leading open-source system for knowledge discovery and data mining. It is available as a stand-alone application for data analysis and as a data mining engine which can be integrated into own products. By now, thousands of applications of RapidMiner in more than 30 countries give their users a competitive edge. Among the users are well-known companies as Ford, Honda, Nokia, Miele, Philips, IBM, HP, Cisco, Merrill Lynch, BNP Paribas, Bank of America, mobilkom austria, Akzo Nobel, Aureus Pharma, PharmaDM, Cyprotex, Celera, Revere, LexisNexis, Mitre and many medium-sized businesses benefitting from the open-source business model of Rapid-I.

Data mining/analysis is the first part of any topic map project, however large or small. These tools, which I have not (yet) tried, are likely to prove useful in such projects. Comments welcome.

A Term Association Inference Model for Single Documents:….

Monday, November 22nd, 2010

A Term Association Inference Model for Single Documents: A Stepping Stone for Investigation through Information Extraction

Author(s): Sukanya Manna and Tom Gedeon

Keywords: Information retrieval, investigation, Gain of Words, Gain of Sentences, term significance, summarization


In this paper, we propose a term association model which extracts significant terms as well as the important regions from a single document. This model is a basis for a systematic form of subjective data analysis which captures the notion of relatedness of different discourse structures considered in the document, without having a predefined knowledge-base. This is a paving stone for investigation or security purposes, where possible patterns need to be figured out from a witness statement or a few witness statements. This is unlikely to be possible in predictive data mining where the system can not work efficiently in the absence of existing patterns or large amount of data. This model overcomes the basic drawback of existing language models for choosing significant terms in single documents. We used a text summarization method to validate a part of this work and compare our term significance with a modified version of Salton’s [1].

Excellent work that illustrates how re-thinking of fundamental assumptions of data mining can lead to useful results.
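Since the authors validate part of their work with text summarization, a frequency-based sentence-scoring sketch in Python may help fix the idea of term significance in a single document. Note this is the classic Luhn-style heuristic, not the paper’s Gain of Words / Gain of Sentences measures; the stopword list and scoring function are simplifications of my own.

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "of", "is", "in", "to", "and"})

def sentences(text):
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def significant_words(text):
    """All lowercase word tokens minus a tiny stopword list."""
    return [w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOPWORDS]

def top_sentences(text, k=2):
    """Rank sentences by the mean document-level frequency of their
    significant words (Luhn-style term significance)."""
    freq = Counter(significant_words(text))

    def score(sent):
        toks = significant_words(sent)
        return sum(freq[w] for w in toks) / (len(toks) or 1)

    return sorted(sentences(text), key=score, reverse=True)[:k]
```

On a witness statement, sentences that share repeated significant terms with the rest of the document rise to the top, which is the intuition behind using term significance to locate the important regions of a single document.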


  1. Create an annotated bibliography of citations to this article.
  2. Which items in the bibliography have been cited since this paper (2008)? List and annotate.
  3. How would you use this approach with a document archive project? (3-5 pages, no citations)

T-Rex Information Extraction

Friday, October 15th, 2010

T-Rex (Trainable Relation Extraction).

Tools for document classification, entity and relation (read association) extraction.

Topic maps of any size are going to be constructed from mining of “data” and in a lot of cases that will mean “documents” (to the extent that is a meaningful distinction).

Interesting toolkit for that purpose but apparently not being maintained. Parked at Sourceforge after having been funded by the EU.

Does anyone have a status update on this project?