Abstract
The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research shows that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
Permanent link
https://doi.org/10.3929/ethz-b-000726521

Publication status
published

Journal / series
npj Digital Medicine

Publisher
Nature

ETH Bibliography
yes