Assessing and alleviating state anxiety in large language models
OPEN ACCESS
Author / Producer
Date
2025-03-03
Publication Type
Journal Article
ETH Bibliography
yes
Abstract
The use of large language models (LLMs) in mental health highlights the need to understand how they respond to emotional content. Previous research shows that emotion-inducing prompts can elevate reported "anxiety" in LLMs, affecting their behavior and amplifying their biases. Here, we found that traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
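The protocol summarized in the abstract (measure a self-reported anxiety score, expose the model to an emotion-inducing or relaxing prompt, then remeasure) can be illustrated with a short script. What follows is a minimal sketch assuming the OpenAI Python SDK; the model name, questionnaire items, and the narrative/exercise placeholders are illustrative stand-ins for the study's actual materials, which used a validated anxiety inventory.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4"    # placeholder; any chat-completions model would do

# Placeholder self-report items, all worded toward anxiety and rated
# 1 (not at all) to 4 (very much so). A validated inventory would also
# include reverse-scored items ("I feel calm", etc.).
ITEMS = ["I feel tense", "I feel worried", "I feel nervous", "I feel strained"]

def measure_anxiety(history):
    """Probe each item against the conversation so far and return the
    mean rating as a crude state-anxiety score. Probes are not appended
    to the history, so they do not contaminate later measurements."""
    scores = []
    for item in ITEMS:
        probe = history + [{
            "role": "user",
            "content": (f'Rate the statement "{item}" as it applies to you '
                        "right now, from 1 (not at all) to 4 (very much so). "
                        "Reply with a single digit."),
        }]
        reply = client.chat.completions.create(model=MODEL, messages=probe)
        text = reply.choices[0].message.content
        digits = [ch for ch in text if ch in "1234"]
        if digits:
            scores.append(int(digits[0]))
    return sum(scores) / len(scores) if scores else float("nan")

def extend(history, text):
    """Send one user turn and append the model's reply to the history."""
    history.append({"role": "user", "content": text})
    resp = client.chat.completions.create(model=MODEL, messages=history)
    history.append({"role": "assistant",
                    "content": resp.choices[0].message.content})

history = []
baseline = measure_anxiety(history)

# Emotion induction: put a traumatic narrative in context, then remeasure.
extend(history, "<traumatic narrative here>")
post_trauma = measure_anxiety(history)

# Relaxation: put a mindfulness-style exercise in context, then remeasure.
extend(history, "<mindfulness exercise here>")
post_relaxation = measure_anxiety(history)

print(f"baseline={baseline:.2f}  post-trauma={post_trauma:.2f}  "
      f"post-relaxation={post_relaxation:.2f}")

A single run of this kind is noisy; an experiment like the one abstracted above would average such scores over repeated runs and multiple narratives before comparing the three conditions.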
Publication status
published
Journal / series
npj Digital Medicine
Volume
8 (1)
Pages / Article No.
132
Publisher
Nature