
Publication

Mapping and Calibrating User Trust with LLMs: First Steps Towards Developing a Framework for Shaping Trust

Samuel Hill; Joy Belgassem; Felix Nadolni
In: Asbjørn Følstad; Sebastian Hobert; Symeon Papadopoulos; Effie L.-C. Law; Theo Araujo; Petter Bae Brandtzæg (Eds.). Proceedings of the 9th International Symposium on Chatbots and Human-Centred AI (CONVERSATIONS-2025), November 12-13, Lübeck, Germany. Lecture Notes in Computer Science (LNCS), Springer, 11/2025.

Abstract

Trust in Large Language Models (LLMs) is critical for their ethical and effective deployment, especially in high-stakes public sector contexts. This study combines a literature review with a qualitative user study involving public administration professionals to explore how user trust in LLMs can be mapped and calibrated. The result is a Trust Areas/Trust Dimensions (TA/TD) framework that identifies key factors influencing trust, including accuracy, transparency, privacy, and ethical considerations. The framework supports trust calibration by aligning user expectations with system capabilities, and it informs future design and governance strategies. It offers a structured, adaptable tool for evaluating and guiding trust in LLMs across evolving technological and societal landscapes.
