From the webpage:
A participant system is given a piece of newswire text as input and returns discourse relations in the form of a discourse connective (explicit or implicit) taking two arguments (which can be clauses, sentences, or multi-sentence segments). Specifically, the participant system needs to i) locate both explicit (e.g., “because”, “however”, “and”) and implicit discourse connectives (often signaled by periods) in the text, ii) identify the spans of text that serve as the two arguments for each discourse connective, and iii) predict the sense of the discourse connectives (e.g., “Cause”, “Condition”, “Contrast”). Understanding such discourse relations is clearly an important part of natural language understanding that benefits a wide range of natural language applications.
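The task output described above (a connective, two argument spans, and a sense label) can be sketched as a simple record. This is a hypothetical illustration of the structure, not the shared task's actual data format; the class and field names are my own, and the sense labels shown are the examples from the announcement:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscourseRelation:
    """One shallow discourse relation: a connective linking two text spans."""
    connective: Optional[str]  # the connective token, or None for implicit relations
    arg1: str                  # first argument span (clause, sentence, or segment)
    arg2: str                  # second argument span
    sense: str                 # predicted sense label, e.g. "Cause", "Contrast"

# An explicit relation signaled by "because":
explicit = DiscourseRelation(
    connective="because",
    arg1="The market fell",
    arg2="investors were nervous",
    sense="Cause",
)

# An implicit relation between adjacent sentences (no overt connective):
implicit = DiscourseRelation(
    connective=None,
    arg1="The forecast was grim.",
    arg2="Stocks rallied anyway.",
    sense="Contrast",
)

print(explicit.sense, implicit.connective)
```

A participant system would emit one such record per discovered relation, which the scorer could then compare against gold annotations.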
- January 26, 2015: Registration begins; release of training set and scorer.
- March 1, 2015: Registration deadline.
- April 20, 2015: Test set available.
- April 24, 2015: Systems collected.
- May 1, 2015: System results due to participants.
- May 8, 2015: System papers due.
- May 18, 2015: Reviews due.
- May 21, 2015: Notification of acceptance.
- May 28, 2015: Camera-ready version of system papers due.
- July 30-31, 2015: CoNLL conference (Beijing, China).
You have to admire the ambiguity of the title.
Does it mean the parsing of shallow discourse (my first bet), or does it mean shallow parsing of discourse (my unlikely second choice)?
What do you think?
With the recent advances in deep learning, I am curious whether the Turing test could be passed by an algorithm trained on the last two or three years of sitcom dialogue.
Would you use regular TV viewers as part of the test, or people who rarely watch TV? That could make a difference in the outcome of the test.
I first saw this in a tweet by Jason Baldridge.