Linguistic-Based Reflection on Trust Calibration in Conversations with LLM-Based Chatbots
DOI: https://doi.org/10.11576/dataninja-1160

Keywords: trust calibration, linguistic trust cues, LLM-based chatbots

Abstract
This paper presents a linguistic approach to trust in human conversations with LLM-based chatbots. Taking the concept of trust calibration as a starting point, we address the question of how to increase users' AI literacy and prevent both misuse of and overtrust in the information provided by LLM-based chatbots in educational contexts. We propose a linguistic-based model of trust calibration that supports users in adopting a critical perspective on trust calibration and in controlling their trust level. The method combines previous studies on trust in human interaction, specifically on the linguistic trust cues that human trustors display to indicate their level of trustworthiness in naturally occurring contexts, with studies on proactive human-computer interaction and on the social influence of a conversational agent's embodiment in educational contexts.
License
Copyright (c) 2024 Milena Belosevic, Hendrik Buschmeier
This work is licensed under a Creative Commons Attribution 4.0 International License.