Linguistic-Based Reflection on Trust Calibration in Conversations with LLM-Based Chatbots

Authors

  • Milena Belosevic
  • Hendrik Buschmeier

DOI:

https://doi.org/10.11576/dataninja-1160

Keywords:

trust calibration, linguistic trust cues, LLM-based chatbots

Abstract

This paper presents a linguistic approach to trust in human conversations with LLM-based chatbots. Taking the concept of trust calibration as a starting point, we address the question of how to increase users' AI literacy and prevent both misuse of, and overtrust in, the information provided by LLM-based chatbots in educational contexts. We propose a linguistically grounded model of trust calibration that supports users in adopting a critical perspective on trust calibration and in controlling their level of trust. The method combines previous studies of trust in human interaction (specifically, the linguistic trust cues that human trustors display to indicate their level of trust in naturally occurring contexts) with studies of proactive human–computer interaction and of the social influence of conversational agents' embodiment in educational contexts.

Published

2024-10-11