Challenges for Using Impact Regularizers to Avoid Negative Side Effects



Date

2022

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Designing reward functions for reinforcement learning is difficult: besides specifying which behavior is rewarded for a task, the reward function also has to discourage undesired outcomes. Misspecified reward functions can lead to unintended negative side effects and overall unsafe behavior. To overcome this problem, recent work proposed to augment the specified reward function with an impact regularizer that discourages behavior that has a large impact on the environment. Although initial results with impact regularizers seem promising for mitigating some types of side effects, important challenges remain. In this paper, we examine the main current challenges of impact regularizers and relate them to fundamental design decisions. We discuss in detail which challenges recent approaches address and which remain unsolved. Finally, we explore promising directions for overcoming the unsolved challenges in preventing negative side effects with impact regularizers.
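As an illustration of the idea the abstract describes, the sketch below shows one common form of an impact-regularized reward: the task reward minus a penalty proportional to how much the current state deviates from a baseline state. This is a minimal toy example, not the paper's method; the names `deviation`, `regularized_reward`, and the coefficient `beta` are hypothetical, and the deviation measure is deliberately simplistic.

```python
# Illustrative sketch (hypothetical, not from the paper):
# an impact-regularized reward of the form R'(s) = R(s) - beta * d(s, s_baseline).

def deviation(state, baseline_state):
    """Toy deviation measure: count of environment features that
    differ from the baseline (e.g., the initial state)."""
    return sum(1 for a, b in zip(state, baseline_state) if a != b)

def regularized_reward(task_reward, state, baseline_state, beta=1.0):
    """Task reward minus an impact penalty scaled by beta.

    Larger beta penalizes environmental impact more strongly,
    trading off task performance against side-effect avoidance."""
    return task_reward - beta * deviation(state, baseline_state)

# Example: the agent earns task reward 1.0 but changed two features
# of the environment relative to the baseline, so with beta = 0.5
# the effective reward is 1.0 - 0.5 * 2 = 0.0.
r = regularized_reward(1.0, state=(1, 0, 1), baseline_state=(0, 0, 0), beta=0.5)
```

The choice of baseline (initial state, inaction rollout, or stepwise counterfactual) and of the deviation measure are exactly the design decisions whose challenges the paper examines.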

Publication status

published

Event

35th AAAI Conference on Artificial Intelligence, Workshop on Artificial Intelligence Safety (SafeAI 2022)

Organisational unit

09479 - Grewe, Benjamin
