Archive for the ‘Coreference Resolution’ Category

Stanford CoreNLP v3.7.0 beta is out! [Time is short, comments, bug reports, now!]

Thursday, November 3rd, 2016

Stanford CoreNLP v3.7.0 beta

The tweets I saw from Stanford NLP Group read:

Stanford CoreNLP v3.7.0 beta is out—improved coreference, dep parsing—KBP relation annotator—Arabic pipeline #NLProc

We’re doing an official CoreNLP beta release this time, so bugs, comments, and fixes especially appreciated over the next two weeks!

OK, so, what are you waiting for? 😉

Oh, the standard blurb for your boss on why Stanford CoreNLP should be taking up your time:

Stanford CoreNLP provides a set of natural language analysis tools. It can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, and mark up the structure of sentences in terms of phrases and word dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract open-class relations between mentions, etc.

Choose Stanford CoreNLP if you need:

  • An integrated toolkit with a good range of grammatical analysis tools
  • Fast, reliable analysis of arbitrary texts
  • The overall highest quality text analytics
  • Support for a number of major (human) languages
  • Interfaces available for various major modern programming languages
  • Ability to run as a simple web service

Stanford CoreNLP is an integrated framework. Its goal is to make it very easy to apply a bunch of linguistic analysis tools to a piece of text. A CoreNLP tool pipeline can be run on a piece of plain text with just two lines of code. It is designed to be highly flexible and extensible. With a single option you can change which tools should be enabled and which should be disabled. Stanford CoreNLP integrates many of Stanford’s NLP tools, including the part-of-speech (POS) tagger, the named entity recognizer (NER), the parser, the coreference resolution system, sentiment analysis, bootstrapped pattern learning, and the open information extraction tools. Its analyses provide the foundational building blocks for higher-level and domain-specific text understanding applications.

Using the standard blurb about the Stanford CoreNLP has these advantages:

  • It’s copy-n-paste; you didn’t have to write it
  • It appeals to authority (Stanford)
  • It’s truthful

The truthful point is a throw-away these days, but I thought I should mention it. 😉
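The blurb claims a pipeline can run "with just two lines of code" and that "a single option" controls which tools are enabled. CoreNLP itself is Java (a `StanfordCoreNLP` object driven by a `Properties` listing of annotators), but the design is easy to see in a toy sketch. Everything below is invented for illustration; it is not the CoreNLP API:

```python
# Toy sketch of an annotator-pipeline design: a single option (the list of
# annotator names) decides which stages run. NOT the real CoreNLP API.

def tokenize(doc):
    doc["tokens"] = doc["text"].split()
    return doc

def pos_tag(doc):
    # Dummy tagger: label every token as a noun.
    doc["pos"] = [(tok, "NN") for tok in doc["tokens"]]
    return doc

STAGES = {"tokenize": tokenize, "pos": pos_tag}

def annotate(text, annotators):
    """Run only the stages named in `annotators`, in order."""
    doc = {"text": text}
    for name in annotators:
        doc = STAGES[name](doc)
    return doc

doc = annotate("Hermione knows the book", ["tokenize", "pos"])
```

Dropping `"pos"` from the annotator list disables tagging without touching any other code, which is the flexibility the blurb is describing.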

Coreference Resolution Tools : A first look

Tuesday, December 18th, 2012

Coreference Resolution Tools : A first look by Sharmila G Sivakumar.

From the post:

Coreference is where two or more noun phrases refer to the same entity. It is an integral part of natural languages, used to avoid repetition, demonstrate possession or relation, etc.

Eg: Harry wouldn’t bother to read “Hogwarts: A History” as long as Hermione is around. He knows she knows the book by heart.

The different types of coreference include:

  • Noun phrases: Hogwarts: A History <- the book
  • Pronouns: Harry <- He
  • Possessives: her, his, their
  • Demonstratives: this boy

Coreference resolution, or anaphora resolution, is determining what an entity is referring to. This has profound applications in NLP tasks such as semantic analysis, text summarisation, sentiment analysis, etc.

In spite of extensive research, the number of tools available for CR and the level of their maturity are much lower than for more established NLP tasks such as parsing. This is due to the inherent ambiguities in resolution.

A bit dated (2010) now, but a useful starting point for updating. (For work specific to medical records, see: Evaluating the state of the art in coreference resolution for electronic medical records. Other references you would recommend?)

Sharmila goes on to compare the results of using the tools on a set text so you can get a feel for the tools.
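The coreference chains in Sharmila's Harry Potter example can be written out by hand, which is a useful way to see exactly what a resolver is expected to produce. The chains below are built manually for illustration; an actual tool would infer these links from the text:

```python
# Hand-built coreference chains for: "Harry wouldn't bother to read
# 'Hogwarts: A History' as long as Hermione is around. He knows she
# knows the book by heart."  A real resolver infers these links.

chains = [
    ["Harry", "He"],                      # pronoun
    ["Hermione", "she"],                  # pronoun
    ["Hogwarts: A History", "the book"],  # noun phrase
]

def referent_of(mention):
    """Map a mention to the first (representative) mention of its chain."""
    for chain in chains:
        if mention in chain:
            return chain[0]
    return mention  # an unresolved mention stands for itself

referent_of("He")        # -> "Harry"
referent_of("the book")  # -> "Hogwarts: A History"
```

Comparing tools then amounts to comparing the chains each one recovers against a hand-built gold standard like this.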

Capturing/Defining/Interchanging Coreference Resolutions (Topic Maps!)

Tuesday, December 18th, 2012

While listening to Ruslan Mitkov presentation: Coreference Resolution: to What Extent Does it Help NLP Applications?, the thought occurred to me that coreference resolution lies at the core of topic maps.

A topic map can:

  • Capture a coreference resolution in one representative by merging it with another representative that “pick out the same referent.”
  • Define a coreference resolution by defining representatives that “pick out the same referent.”
  • Interchange coreference resolutions by defining the representation of referents that “pick out the same referent.”

Not to denigrate associations or occurrences, but they depend upon the presence of topics, that is, representatives that “pick out a referent.”

Merged topics being two or more topics that individually “picked out the same referent,” perhaps using different means of identification.

Rather than starting every coreference resolution application at zero, to test its algorithmic prowess, a topic map could easily prime the pump, as it were, with known coreference resolutions.

Enabling coreference resolution systems to accumulate resolutions, much as human users do.*

*This may be useful because coreference resolution is a recognized area of research in computational linguistics, unlike topic maps.
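The capture/merge idea above can be made concrete with a small sketch. The identifiers and data structures here are invented for illustration; topic map implementations vary, but the essential move is the same: merging two representatives that "pick out the same referent" yields one topic carrying the union of their identifiers, which a resolver can then consult as a store of known resolutions:

```python
# Minimal sketch (invented identifiers) of merging topics that "pick out
# the same referent", and of a resolver consulting the merged result.

# Two topics identifying the same referent by different means.
topic_a = {"identifiers": {"http://example.org/people/twain"}}
topic_b = {"identifiers": {"http://example.org/authors/clemens"}}

def merge(*topics):
    """Merge topics into one representative holding all identifiers."""
    return {"identifiers": set().union(*(t["identifiers"] for t in topics))}

known = merge(topic_a, topic_b)

def corefer(id1, id2, merged_topics):
    """True if two identifiers are already known to share a referent."""
    return any(id1 in t["identifiers"] and id2 in t["identifiers"]
               for t in merged_topics)

corefer("http://example.org/people/twain",
        "http://example.org/authors/clemens", [known])  # -> True
```

This is the "priming the pump" step: resolutions accumulated in the map are available as lookups, so the resolution algorithm only has to work on mentions the map has never seen.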

Coreference Resolution: to What Extent Does it Help NLP Applications?

Tuesday, December 18th, 2012

Coreference Resolution: to What Extent Does it Help NLP Applications? by Ruslan Mitkov. (presentation – audio only)

The paper from the same conference:

Coreference Resolution: To What Extent Does It Help NLP Applications? by Ruslan Mitkov, Richard Evans, Constantin Orăsan, Iustin Dornescu, Miguel Rios. (Text, Speech and Dialogue, 15th International Conference, TSD 2012, Brno, Czech Republic, September 3-7, 2012. Proceedings, pp. 16-27)

Abstract:

This paper describes a study of the impact of coreference resolution on NLP applications. Further to our previous study [1], in which we investigated whether anaphora resolution could be beneficial to NLP applications, we now seek to establish whether a different, but related task—that of coreference resolution, could improve the performance of three NLP applications: text summarisation, recognising textual entailment and text classification. The study discusses experiments in which the aforementioned applications were implemented in two versions, one in which the BART coreference resolution system was integrated and one in which it was not, and then tested in processing input text. The paper discusses the results obtained.

In the presentation and in the paper, Mitkov distinguishes between anaphora and coreference resolution (from the paper):

While some authors use the terms coreference (resolution) and anaphora (resolution) interchangeably, it is worth noting that they are completely distinct terms or tasks [3]. Anaphora is cohesion which points back to some previous item, with the ‘pointing back’ word or phrase called an anaphor, and the entity to which it refers, or for which it stands, its antecedent. Coreference is the act of picking out the same referent in the real world. A specific anaphor and more than one of the preceding (or following) noun phrases may be coreferential, thus forming a coreferential chain of entities which have the same referent.

I am not sure why the “real world” is necessary in: “Coreference is the act of picking out the same referent in the real world.”

For topic maps, I would shorten it to: Coreference is the act of picking out the same referent. (full stop)

The paper is a useful review of coreference systems and quite unusually, reports a negative result:

This study sought to establish whether or not coreference resolution could have a positive impact on NLP applications, in particular on text summarisation, recognising textual entailment, and text categorisation. The evaluation results presented in Section 6 are in line with previous experiments conducted both by the present authors and other researchers: there is no statistically significant benefit brought by automatic coreference resolution to these applications. In this specific study, the employment of the coreference resolution system distributed in the BART toolkit generally evokes slight but not significant increases in performance and in some cases it even evokes a slight deterioration in the performance results of these applications. We conjecture that the lack of a positive impact is due to the success rate of the BART coreference resolution system which appears to be insufficient to boost performance of the aforementioned applications.

My conjecture is that topic maps can boost coreference resolution enough to improve the performance of NLP applications, including text summarisation, recognising textual entailment, and text categorisation.

What do you think?

How would you suggest testing that conjecture?

Evaluating the state of the art in coreference resolution for electronic medical records

Thursday, August 9th, 2012

Evaluating the state of the art in coreference resolution for electronic medical records by Ozlem Uzuner, Andreea Bodnari, Shuying Shen, Tyler Forbush, John Pestian, and Brett R South. (J Am Med Inform Assoc 2012; 19:786-791 doi:10.1136/amiajnl-2011-000784)

Abstract:

Background The fifth i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records conducted a systematic review on resolution of noun phrase coreference in medical records. Informatics for Integrating Biology and the Bedside (i2b2) and the Veterans Affairs (VA) Consortium for Healthcare Informatics Research (CHIR) partnered to organize the coreference challenge. They provided the research community with two corpora of medical records for the development and evaluation of the coreference resolution systems. These corpora contained various record types (ie, discharge summaries, pathology reports) from multiple institutions.

Methods The coreference challenge provided the community with two annotated ground truth corpora and evaluated systems on coreference resolution in two ways: first, it evaluated systems for their ability to identify mentions of concepts and to link together those mentions. Second, it evaluated the ability of the systems to link together ground truth mentions that refer to the same entity. Twenty teams representing 29 organizations and nine countries participated in the coreference challenge.

Results The teams’ system submissions showed that machine-learning and rule-based approaches worked best when augmented with external knowledge sources and coreference clues extracted from document structure. The systems performed better in coreference resolution when provided with ground truth mentions. Overall, the systems struggled in solving coreference resolution for cases that required domain knowledge.

That systems “struggled in solving coreference resolution for cases that required domain knowledge” isn’t surprising.

But, as we saw in > 4,000 Ways to say “You’re OK” [Breast Cancer Diagnosis], for any given diagnosis, there is a finite number of ways to say it.

Usually far fewer than 4,000. If we capture the ways as they are encountered, our systems don’t need “domain knowledge.”
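Capturing the ways "as they are encountered" needs nothing fancier than a lookup table from surface form to canonical diagnosis. The phrases and labels below are invented examples, not real clinical data, but they show why no domain knowledge is required at resolution time:

```python
# Sketch of accumulating surface forms as they are encountered (invented
# example data): a plain lookup table maps each variant phrasing to a
# canonical diagnosis, so resolution is a dictionary hit, not reasoning.

forms = {}  # surface form -> canonical diagnosis

def capture(surface, diagnosis):
    """Record a phrasing the first time a human resolves it."""
    forms[surface.lower()] = diagnosis

def resolve(surface):
    """Return the known diagnosis, or None if this phrasing is new."""
    return forms.get(surface.lower())

capture("no evidence of malignancy", "negative")
capture("benign findings", "negative")

resolve("No evidence of malignancy")  # -> "negative"
resolve("suspicious calcifications")  # -> None, not yet captured
```

Each `None` is a prompt to capture one more phrasing; with a finite number of ways to say a given diagnosis, the table converges and the system stays as "dumb as a bag of hammers."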

As the lead character in O Brother, Where Art Thou? says, our applications can be as “dumb as a bag of hammers.”

PS: Apologies, but I could not find an accessible version of this article. I will run down the details on the coreference workshop tomorrow and hopefully find some accessible materials on it.

Reconcile – Coreference Resolution Engine

Sunday, June 3rd, 2012

Reconcile – Coreference Resolution Engine

While we are on the topic of NLP tools:

Reconcile is an automatic coreference resolution system that was developed to provide a stable test-bed for researchers to implement new ideas quickly and reliably. It achieves roughly state of the art performance on many of the most common coreference resolution test sets, such as MUC-6, MUC-7, and ACE. Reconcile comes ready out of the box to train and test on these common data sets (though the data sets are not provided) as well as the ability to run on unlabeled texts. Reconcile utilizes supervised machine learning classifiers from the Weka toolkit, as well as other language processing tools such as the Berkeley Parser and Stanford Named Entity Recognition system.

The source language is Java, and it is freely available under the GPL.

Just in case you want to tune/tweak your coreference resolution against your data sets.

FreeLing 3.0 – An Open Source Suite of Language Analyzers

Sunday, June 3rd, 2012

FreeLing 3.0 – An Open Source Suite of Language Analyzers

Features:

Main services offered by FreeLing library:

  • Text tokenization
  • Sentence splitting
  • Morphological analysis
  • Suffix treatment, retokenization of clitic pronouns
  • Flexible multiword recognition
  • Contraction splitting
  • Probabilistic prediction of unknown word categories
  • Named entity detection
  • Recognition of dates, numbers, ratios, currency, and physical magnitudes (speed, weight, temperature, density, etc.)
  • PoS tagging
  • Chart-based shallow parsing
  • Named entity classification
  • WordNet based sense annotation and disambiguation
  • Rule-based dependency parsing
  • Nominal coreference resolution

[Not all features are supported for all languages, see Supported Languages.]

TOC for the user manual.

Something for your topic map authoring toolkit!

(Source: Jack Park)