Archive for the ‘Textual Entailment’ Category

Coreference Resolution: to What Extent Does it Help NLP Applications?

Tuesday, December 18th, 2012

Coreference Resolution: to What Extent Does it Help NLP Applications? by Ruslan Mitkov. (presentation – audio only)

The paper from the same conference:

Coreference Resolution: To What Extent Does It Help NLP Applications? by Ruslan Mitkov, Richard Evans, Constantin Orăsan, Iustin Dornescu, Miguel Rios. (Text, Speech and Dialogue, 15th International Conference, TSD 2012, Brno, Czech Republic, September 3-7, 2012. Proceedings, pp. 16-27)

Abstract:

This paper describes a study of the impact of coreference resolution on NLP applications. Further to our previous study [1], in which we investigated whether anaphora resolution could be beneficial to NLP applications, we now seek to establish whether a different, but related task—that of coreference resolution, could improve the performance of three NLP applications: text summarisation, recognising textual entailment and text classification. The study discusses experiments in which the aforementioned applications were implemented in two versions, one in which the BART coreference resolution system was integrated and one in which it was not, and then tested in processing input text. The paper discusses the results obtained.

In the presentation and in the paper, Mitkov distinguishes between anaphora and coreference resolution (from the paper):

While some authors use the terms coreference (resolution) and anaphora (resolution) interchangeably, it is worth noting that they are completely distinct terms or tasks [3]. Anaphora is cohesion which points back to some previous item, with the ‘pointing back’ word or phrase called an anaphor, and the entity to which it refers, or for which it stands, its antecedent. Coreference is the act of picking out the same referent in the real world. A specific anaphor and more than one of the preceding (or following) noun phrases may be coreferential, thus forming a coreferential chain of entities which have the same referent.

I am not sure why the “real world” is necessary in: “Coreference is the act of picking out the same referent in the real world.”

For topic maps, I would shorten it to: Coreference is the act of picking out the same referent. (full stop)
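To make the distinction concrete, here is a toy sketch of the two notions as data structures. The sentence, mention strings, and function name are my own invention for illustration, not from Mitkov's paper:

```python
# Toy illustration of anaphora vs. coreference.
# All strings and structures here are invented for this example.
sentence = "Sophia Loren says she will always be grateful to Bono."

# Anaphora: "she" points back to a previous item, its antecedent.
anaphor = {"text": "she", "antecedent": "Sophia Loren"}

# Coreference: mentions grouped into a chain that picks out the same referent.
chains = [["Sophia Loren", "she"]]

def same_referent(m1, m2, chains):
    """True if two mentions appear in the same coreferential chain."""
    return any(m1 in chain and m2 in chain for chain in chains)

print(same_referent("she", "Sophia Loren", chains))  # True
```

Note that the anaphoric link is a directed "points back" relation between two items, while the chain is an undirected grouping by referent; an anaphor and several preceding noun phrases can all land in the same chain.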

The paper is a useful review of coreference systems and quite unusually, reports a negative result:

This study sought to establish whether or not coreference resolution could have a positive impact on NLP applications, in particular on text summarisation, recognising textual entailment, and text categorisation. The evaluation results presented in Section 6 are in line with previous experiments conducted both by the present authors and other researchers: there is no statistically significant benefit brought by automatic coreference resolution to these applications. In this specific study, the employment of the coreference resolution system distributed in the BART toolkit generally evokes slight but not significant increases in performance and in some cases it even evokes a slight deterioration in the performance results of these applications. We conjecture that the lack of a positive impact is due to the success rate of the BART coreference resolution system which appears to be insufficient to boost performance of the aforementioned applications.

My conjecture is that topic maps can boost coreference resolution enough to improve the performance of NLP applications, including text summarisation, recognising textual entailment, and text categorisation.


What do you think?

How would you suggest testing that conjecture?
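One option, following the paper's own with/without experimental design: run each application twice, once with topic-map-backed coreference and once without, score both versions on the same test items, and check whether the difference is statistically significant. A paired bootstrap test is a common choice for this; the sketch below assumes per-item scores are already in hand, and the function name and parameters are mine, not from the paper:

```python
import random

def paired_bootstrap(scores_without, scores_with, n_resamples=10000, seed=0):
    """Paired bootstrap: resample the test set with replacement and
    count how often the 'with coreference' system beats the baseline.
    A return value near 1.0 suggests the improvement is not chance."""
    rng = random.Random(seed)
    n = len(scores_without)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        delta = sum(scores_with[i] - scores_without[i] for i in idx) / n
        if delta > 0:
            wins += 1
    return wins / n_resamples

# Hypothetical per-document scores for the two system versions:
baseline = [0.61, 0.58, 0.64, 0.57, 0.60, 0.63, 0.59, 0.62]
with_tm  = [0.63, 0.60, 0.64, 0.59, 0.62, 0.64, 0.60, 0.63]
print(paired_bootstrap(baseline, with_tm))
```

The same harness applies to all three applications, so one could see whether the topic-map version clears the significance bar that BART, per the paper's conclusion, did not.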

Text Analysis Conference (TAC)

Sunday, November 21st, 2010

Text Analysis Conference (TAC)

From the website:

The Text Analysis Conference (TAC) is a series of evaluation workshops organized to encourage research in Natural Language Processing and related applications, by providing a large test collection, common evaluation procedures, and a forum for organizations to share their results. TAC comprises sets of tasks known as “tracks,” each of which focuses on a particular subproblem of NLP. TAC tracks focus on end-user tasks, but also include component evaluations situated within the context of end-user tasks.

  • Knowledge Base Population

    The goal of the Knowledge Base Population track is to develop systems that can augment an existing knowledge representation (based on Wikipedia infoboxes) with information about entities that is discovered from a collection of documents.

  • Recognizing Textual Entailment

    The goal of the RTE Track is to develop systems that recognize when one piece of text entails another.

  • Summarization

    The goal of the Summarization Track is to develop systems that produce short, coherent summaries of text.

Sponsored by the U.S. Department of Defense.

Rumor has it that one intelligence analysis group won a DoD contract without hiring an ex-general. If you get noticed by a prime contractor here, perhaps you won’t have to either. The primes have lots of ex-generals/colonels, etc.

Questions:

  1. Select a paper from one of the TAC conferences. Update on the status of that research. (3-5 pages, citations)
  2. For the authors of #1, annotated bibliography of publications since the paper.
  3. How would you use the technique from #1 in the construction of a topic map? How does it inform your understanding of, and selection of data for, that map? (3-5 pages, no citations)

(Yes, I stole the questions from my DUC conference posting. ;-))