Due date: 1 April 2011
From the website:
The AI mashup challenge accepts and awards mashups that use AI technology, including but not restricted to machine learning and data mining, machine vision, natural language processing, reasoning, ontologies and the semantic web.
Imagine for example:
- Information extraction or automatic text summarization to create a task-oriented overview mashup for mobile devices.
- Semantic Web technology and data sources adapting to user and task-specific configurations.
- Semantic background knowledge (such as ontologies, WordNet or Cyc) to improve search and content combination.
- Machine translation for mashups that cross language borders.
- Machine vision technology for novel ways of aggregating images, for instance mixing real and virtual environments.
- Intelligent agents taking over simple household planning tasks.
- Text-to-speech technology creating a voice mashup with intelligent and emotional intonation.
- The display of PubMed articles on a map based on geographic entity detection referring to diseases or health centers.
The emphasis is not on providing and consuming semantic markup, but rather on using intelligence to mash up these resources in a more powerful way.
This looks like an opportunity for an application that assists users in explicitly identifying subjects, or in confirming identifications that have already been made.
Rather than auto-correcting, human-correcting.
Assuming we can capture those corrections, wouldn’t our apps incrementally get “smarter,” rather than starting from ground zero with each request? (True, a lot of analysis goes on with logs, etc. But why not just ask?)
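The capture-the-corrections idea could be sketched as a small store that remembers which subject a user confirmed for an ambiguous term, and suggests the most-confirmed candidate on later requests. This is a hypothetical sketch; the class and method names are my assumptions, not anything from the challenge:

```python
from collections import defaultdict

class CorrectionStore:
    """Remembers user-confirmed subject identifications so later
    requests start from accumulated corrections, not from zero.
    (Hypothetical sketch -- names and structure are assumptions.)"""

    def __init__(self):
        # term -> {candidate subject: number of user confirmations}
        self._confirmations = defaultdict(lambda: defaultdict(int))

    def confirm(self, term, subject):
        """Record that a user confirmed `term` refers to `subject`."""
        self._confirmations[term][subject] += 1

    def suggest(self, term):
        """Return the most-confirmed subject for `term`, or None
        if no corrections have been captured for it yet."""
        candidates = self._confirmations.get(term)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

store = CorrectionStore()
store.confirm("Java", "Java (programming language)")
store.confirm("Java", "Java (programming language)")
store.confirm("Java", "Java (island)")
print(store.suggest("Java"))  # most users confirmed the language sense
print(store.suggest("Ruby"))  # nothing captured yet -> None
```

Each confirmation is one small act of “just asking” the user, and the store is what lets the next request benefit from it.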