A Diachronic Perspective on User Trust in AI under Uncertainty


Date

2023-12

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

In human-AI collaboration, users typically form a mental model of the AI system, which captures the user's beliefs about when the system performs well and when it does not. The construction of this mental model is guided both by the system's veracity and by the system output presented to the user (e.g., the system's confidence and an explanation for the prediction). However, modern NLP systems are seldom calibrated and are often confidently incorrect in their predictions, which violates users' mental models and erodes their trust. In this work, we design a study in which users bet on the correctness of an NLP system, and use it to study how user trust evolves in response to these trust-eroding events and how it is rebuilt over time afterwards. We find that even a few highly inaccurate confidence estimates are enough to damage users' trust in the system and the collaboration performance, and that this trust does not easily recover over time. We further find that users are more forgiving of the NLP system when it is unconfidently correct than when it is confidently incorrect, even though, from a game-theoretic perspective, their payoff is equivalent. Finally, we find that each user can entertain multiple mental models of the system depending on the question type. These results highlight the importance of confidence calibration in developing user-centered NLP applications, to avoid damaging user trust and compromising collaboration performance.
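The payoff-equivalence claim above can be illustrated with a toy betting rule (a hypothetical sketch, not the paper's actual study protocol; the threshold-based bet and the ±1 stake are assumptions made here for illustration):

```python
# Toy sketch (assumed rules, not the paper's exact protocol): a user bets
# on system correctness based on the system's stated confidence. The bet
# pays +stake when it matches the outcome and -stake otherwise.

def bet_payoff(confidence: float, correct: bool, stake: float = 1.0) -> float:
    """User bets 'correct' iff confidence > 0.5; payoff is +/- stake."""
    bet_correct = confidence > 0.5
    return stake if bet_correct == correct else -stake

# Unconfidently correct and confidently incorrect both mismatch the bet,
# so the monetary payoff is identical in the two cases.
print(bet_payoff(0.2, correct=True))   # -1.0 (unconfidently correct)
print(bet_payoff(0.9, correct=False))  # -1.0 (confidently incorrect)
```

Under such a rule, the two trust-eroding events cost the user the same amount, which is what makes the asymmetry in users' forgiveness a finding about trust rather than about incentives.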

Publication status

published

Book title

Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Pages / Article No.

5567 - 5580

Publisher

Association for Computational Linguistics

Event

2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)

Organisational unit

09684 - Sachan, Mrinmaya

Funding

ETH-19 21-1 - Neuro-cognitive Model Inspired from Human Language Processing (ETHZ)
