
Open access
Date
2025-03-03
Type
Journal Article
ETH Bibliography
yes
Abstract
The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research shows that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased Chat-GPT-4's reported anxiety while mindfulness-based exercises reduced it, though not to baseline. These findings suggest managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
Persistent link
https://doi.org/10.3929/ethz-b-000726521
Publication status
published
External links
Journal / Series
npj Digital Medicine
Volume
Pages / Article number
Publisher
Nature