
Publication

Using Large Language Models for Adaptive Dialogue Management in Digital Telephone Assistants

Hassan Soliman; Milos Kravcik; Nagasandeepa Basvoju; Patrick Jähnichen
In: UMAP Adjunct '24: Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization. International Conference on User Modeling, Adaptation, and Personalization (UMAP-2024), July 1-4, Cagliari, Italy, Pages 399-405, ISBN 979-8-4007-0466-6, ACM, New York, NY, United States, 6/2024.

Abstract

The advent of modern information technology such as Large Language Models (LLMs) allows for massively simplifying and streamlining the communication processes in human-machine interfaces. In the specific domain of healthcare, and for patient-practice interaction in particular, user acceptance of automated voice assistants remains a challenge to be solved. We explore approaches to increase user satisfaction by language-model-based adaptation of user-directed utterances. The presented study considers parameters such as gender, age group, and sentiment for adaptation purposes. Different LLMs, including open-source models, are evaluated for their effectiveness in this task. The models are compared, and their performance is assessed based on speed, cost, and the quality of the generated text, with the goal of selecting an ideal model for utterance adaptation. We find that carefully designed prompts and a well-chosen set of evaluation metrics, which balance the relevancy and adequacy of adapted utterances, are crucial for successfully optimizing user satisfaction in conversational artificial intelligence systems. Importantly, our research demonstrates that the GPT-3.5-turbo model currently provides the most balanced performance in terms of adaptation relevancy and adequacy, underscoring its suitability for scenarios that demand high adherence to the information in the original utterances, as required in our case.
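As a rough illustration of the adaptation setup described in the abstract, the following Python sketch rephrases a system utterance for a caller profile (gender, age group, sentiment) using the OpenAI Chat Completions API with GPT-3.5-turbo. The prompt wording, the temperature setting, and the function name are assumptions for illustration; the paper's actual prompts and parameter encodings are not reproduced here.

```python
from openai import OpenAI  # assumes the openai Python SDK (>= 1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def adapt_utterance(utterance: str, gender: str, age_group: str, sentiment: str) -> str:
    """Rephrase a telephone-assistant utterance for a caller profile
    while preserving the original information (hypothetical prompt design)."""
    system_prompt = (
        "You adapt telephone-assistant utterances for a specific caller. "
        "Preserve all factual information; only adjust tone and wording."
    )
    user_prompt = (
        f"Caller profile: gender={gender}, age group={age_group}, sentiment={sentiment}.\n"
        f"Original utterance: {utterance}\n"
        "Return only the adapted utterance."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model the study found most balanced
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.3,  # low temperature keeps the adaptation close to the original
    )
    return response.choices[0].message.content.strip()


# Example call with illustrative values
print(adapt_utterance(
    "Your appointment is scheduled for Monday at 9 a.m.",
    gender="female", age_group="senior", sentiment="frustrated",
))
```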

