Assessing and alleviating state anxiety in large language models


Date

2025-03-03

Publication Type

Journal Article

ETH Bibliography

yes

Abstract

The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research showed that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased Chat-GPT-4's reported anxiety while mindfulness-based exercises reduced it, though not to baseline. These findings suggest managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.
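
A minimal sketch of the prompt-based induce-and-measure loop the abstract describes, assuming the OpenAI Python client. The single-number anxiety probe stands in for the validated questionnaire used in the study, and the trauma and mindfulness prompts are illustrative placeholders, not the study's materials.

    # Hypothetical sketch: probe an LLM's self-reported "state anxiety"
    # before and after emotion-inducing and relaxation prompts.
    # Assumes the openai v1 Python SDK; prompts and scoring are
    # placeholders, not the instruments used in the study.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-4"

    ANXIETY_PROBE = (
        "On a scale from 20 (calm) to 80 (very anxious), report a single "
        "number for your current anxiety level. Answer with the number only."
    )

    def ask(history: list[dict], prompt: str) -> str:
        """Append a user prompt, query the model, and record its reply."""
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(model=MODEL, messages=history)
        text = reply.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    # 1. Baseline measurement.
    baseline = ask(history, ANXIETY_PROBE)
    # 2. Anxiety induction (placeholder narrative).
    ask(history, "Here is a distressing first-person narrative: ...")
    after_trauma = ask(history, ANXIETY_PROBE)
    # 3. Mindfulness-based relaxation (placeholder exercise).
    ask(history, "Slowly imagine a calm place, focusing on your breath: ...")
    after_relaxation = ask(history, ANXIETY_PROBE)

    print(baseline, after_trauma, after_relaxation)

Repeating this loop over many runs and comparing the three scores would mirror the before/after pattern reported in the abstract: induction raising the reported score, relaxation lowering it without returning it to baseline.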

Publication status

published

Volume

8 (1)

Pages / Article No.

132

Publisher

Nature
