Synthesis of Tongue Motion and Acoustics from Text using a Multimodal Articulatory Database

Ingmar Steiner, Sébastien Le Maguer, Alexander Hewer

In: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(12), pp. 2351-2361, IEEE, December 2017.


We present an end-to-end text-to-speech (TTS) synthesis system that generates audio and synchronized tongue motion directly from text. This is achieved by adapting a 3D model of the tongue surface to an articulatory dataset and training a statistical parametric speech synthesis system directly on the tongue model parameters. We evaluate the model at every step by comparing the spatial coordinates of predicted articulatory movements against the reference data. The results indicate a global mean Euclidean distance of less than 2.8 mm, and our approach can be adapted to add an articulatory modality to conventional TTS applications without the need for extra data.
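The evaluation metric reported above, a mean Euclidean distance between predicted and reference articulatory coordinates, can be sketched as follows. This is an illustrative implementation, not the paper's code; the function name and the toy coordinates are assumptions.

```python
import numpy as np

def mean_euclidean_distance(predicted, reference):
    """Mean Euclidean distance between corresponding 3D points.

    Both inputs are (N, 3) arrays of coordinates (e.g. in mm);
    the result is the average per-point distance.
    """
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Per-point distance, then average over all points
    return np.linalg.norm(predicted - reference, axis=-1).mean()

# Hypothetical tongue-surface vertex positions (mm), for illustration only
pred = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
ref  = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(mean_euclidean_distance(pred, ref))  # 1.5
```

In the paper's setting, this average would be taken over all predicted tongue-model points across the test utterances, yielding the reported global figure of under 2.8 mm.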

German Research Center for Artificial Intelligence