Multimodal Input in the Car, Today and Tomorrow

Christian Müller, Garrett Weinberg

In: IEEE Multimedia, vol. 18, no. 1, pages 98-102, IEEE Computer Society, January 2011.


After a surge in horrific automobile accidents in which distracted driving was proven to be a factor, 38 US states have enacted texting-while-driving bans. While nearly everyone can agree that pecking out a love note on a tiny mobile phone keypad while simultaneously trying to operate a vehicle is a bad idea, what about the other activities that we perform on a day-to-day basis using the electronic devices either built in or brought in to our cars? Finding a nearby restaurant acceptable to the vegetarian in the back seat? Locating and queuing up that new album you downloaded to your iPod? This article offers a brief overview of multimodal (speech, touch, gaze, etc.) input theory as it pertains to common in-vehicle tasks and devices. After a brief introduction, we walk through a sample multimodal interaction, detailing the steps involved and how the information necessary to the interaction can be obtained by combining input modes in various ways. We also discuss how contemporary in-vehicle systems take advantage of multimodality (or fail to do so), and how the capabilities of such systems might be broadened in the future via clever multimodal input mechanisms.

Deutsches Forschungszentrum für Künstliche Intelligenz (German Research Center for Artificial Intelligence)