A multimodal dialogue mashup for medical image semantics Authors: Daniel Sonntag and Manuel Möller Keywords: collaborative environments, design, touchscreen interface
Abstract:
This paper presents a multimodal dialogue mashup where different users are involved in the use of different user interfaces for the annotation and retrieval of medical images. Our solution is a mashup that integrates a multimodal interface for speech-based annotation of medical images and dialogue-based image retrieval with a semantic image annotation tool for manual annotations on a desktop computer. A remote RDF repository connects the annotation and querying task into a common framework and serves as the semantic backend system for the advanced multimodal dialogue a radiologist can use.
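To make the described architecture concrete, here is a minimal sketch (not the authors' implementation) of how a shared RDF vocabulary can connect annotation and retrieval: both the speech-based front-end and the desktop tool write statements into the same repository, and the dialogue system retrieves images with a SPARQL query. The namespace, predicate names, and image identifiers are hypothetical, and an in-memory rdflib graph stands in for the remote repository.

```python
# Hypothetical sketch: shared RDF vocabulary for annotation and dialogue-based retrieval.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/medico#")   # hypothetical annotation vocabulary
g = Graph()                                    # stands in for the remote RDF repository

# Annotation added via the speech-based multimodal interface
g.add((EX.image_001, EX.showsFinding, EX.Lymphoma))
# Annotation added manually in the desktop tool, using the same vocabulary
g.add((EX.image_001, EX.bodyRegion, EX.Neck))

# Dialogue-based retrieval, e.g. "show me images with lymphoma findings in the neck"
results = g.query("""
    PREFIX ex: <http://example.org/medico#>
    SELECT ?img WHERE {
        ?img ex:showsFinding ex:Lymphoma ;
             ex:bodyRegion   ex:Neck .
    }
""")
for row in results:
    print(row.img)
```

Because both interfaces commit to the same vocabulary, the query side needs no translation layer between the two annotation sources.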
With regard to the semantics of the interface, the authors say:
In a complex interaction system, a common ground of terms and structures is absolutely necessary. A shared representation and a common knowledge base ease the dataflow within the system and avoid costly and error-prone transformation processes.
I disagree with both statements but concede that, for particular use cases, the question of dataflow cost will be resolved differently.
I like the article as an example of interface design.