Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Date

2023-12

Publication Type

Journal Article

ETH Bibliography

yes

Abstract

Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-faceted approach to the development of safer AI systems.
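
The abstract describes RLHF as the standard technique for fine-tuning large language models. As a rough illustration of the first stage of that pipeline, the sketch below fits a reward model on pairwise human preference comparisons using a Bradley-Terry loss. It is a minimal sketch only: the RewardModel class, embedding size, and synthetic preference data are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of RLHF reward modelling: learn a scalar reward from
# pairwise human preference comparisons via the Bradley-Terry loss.
# All names and data here are illustrative placeholders.

import torch
import torch.nn as nn


class RewardModel(nn.Module):
    """Toy reward model: maps a fixed-size response embedding to a scalar reward."""

    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)


def preference_loss(rm: RewardModel, chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry negative log-likelihood: the preferred response should score higher."""
    return -torch.nn.functional.logsigmoid(rm(chosen) - rm(rejected)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    rm = RewardModel()
    opt = torch.optim.Adam(rm.parameters(), lr=1e-2)
    # Synthetic "preference" data: embeddings of chosen vs. rejected responses.
    chosen = torch.randn(64, 16) + 0.5
    rejected = torch.randn(64, 16) - 0.5
    for step in range(100):
        loss = preference_loss(rm, chosen, rejected)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final preference loss: {loss.item():.4f}")
```

In a full RLHF setup, the learned reward would then be maximized by the language-model policy, typically with a KL penalty keeping the policy close to the pretrained model.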

Publication status

published

Publisher

OpenReview

Organisational unit

03908 - Krause, Andreas
09764 - Tramèr, Florian