Multimodal Interaction Strategies in a Multi-Device Environment using Natural Speech

Christian Husodo Schulz, Daniel Sonntag, Markus Weber, Takumi Toyama

In: Proceedings of the Companion Publication of the 2013 International Conference on Intelligent User Interfaces (IUI 2013 Companion), IUI Workshop on Interactive Machine Learning, located at IUI 2013, March 19-22, Santa Monica, CA, United States. ACM Press, 2013.


In this paper, we present an intelligent user interface that combines a speech-based interface with several other input modalities. The integration of multiple devices into a working environment should provide greater flexibility in the daily routine of, for example, medical experts. To this end, we introduce a medical cyber-physical system that demonstrates the use of a bidirectional connection between a speech-based interface and a head-mounted see-through display. We show examples of how multiple input modalities can be exploited to increase the usability of a speech-based interaction system.
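The bidirectional connection mentioned above can be illustrated with a minimal sketch. The paper does not specify its protocol or APIs; the event names, classes, and the publish/subscribe bus below are hypothetical, chosen only to show how a speech interface and a head-mounted display might exchange events in both directions (gaze context flowing to the recognizer, recognized commands flowing to the display):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class EventBus:
    """Minimal publish/subscribe bus linking two devices in both directions."""
    handlers: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self.handlers.get(topic, []):
            handler(payload)

class SpeechInterface:
    def __init__(self, bus: EventBus):
        self.bus = bus
        self.last_context = None
        # HMD -> speech: gaze context can inform interpretation of commands.
        bus.subscribe("hmd/gaze_context", self.on_gaze_context)

    def on_gaze_context(self, payload: dict) -> None:
        self.last_context = payload["object"]

    def recognize(self, utterance: str) -> None:
        # Speech -> HMD: the recognized command is sent to the display.
        self.bus.publish("speech/command",
                         {"text": utterance, "context": self.last_context})

class HeadMountedDisplay:
    def __init__(self, bus: EventBus):
        self.bus = bus
        self.shown: List[str] = []
        bus.subscribe("speech/command", self.on_command)

    def on_command(self, payload: dict) -> None:
        self.shown.append(f"{payload['text']} @ {payload['context']}")

    def report_gaze(self, obj: str) -> None:
        self.bus.publish("hmd/gaze_context", {"object": obj})

bus = EventBus()
speech = SpeechInterface(bus)
hmd = HeadMountedDisplay(bus)
hmd.report_gaze("patient_record")    # HMD informs the speech layer
speech.recognize("show last X-ray")  # speech result appears on the HMD
print(hmd.shown[0])                  # → show last X-ray @ patient_record
```

The point of the sketch is the symmetry: each device both publishes and subscribes, so either side can initiate an interaction, which is what distinguishes a bidirectional connection from a one-way speech command channel.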

German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI)