Addressing the Long-term Impact of ML Decisions via Policy Regret
METADATA ONLY
Date
2021-08
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Machine Learning (ML) increasingly informs the allocation of opportunities to individuals and communities in areas such as lending, education, employment, and beyond. Such decisions often impact their subjects' future characteristics and capabilities in an a priori unknown fashion. The decision-maker, therefore, faces exploration-exploitation dilemmas akin to those in multi-armed bandits. Following prior work, we model communities as arms. To capture the long-term effects of ML-based allocation decisions, we study a setting in which the reward from each arm evolves every time the decision-maker pulls that arm. We focus on reward functions that are initially increasing in the number of pulls but may become (and remain) decreasing after a certain point. We argue that an acceptable sequential allocation of opportunities must take an arm's potential for growth into account. We capture these considerations through the notion of policy regret, a much stronger notion than the often-studied external regret, and present an algorithm with provably sub-linear policy regret for sufficiently long time horizons. We empirically compare our algorithm with several baselines and find that it consistently outperforms them, in particular for long time horizons.
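The abstract's key modeling point is that each arm's reward depends on how many times that arm has been pulled so far: initially increasing, then possibly decreasing past a peak. Because pulling an arm changes its own future rewards, evaluating an alternative policy requires replaying the whole alternative pull sequence (policy regret) rather than comparing against the best arm per step (external regret). The sketch below illustrates only the reward setting, not the paper's algorithm; the piecewise-linear reward shape, the peak locations, and the round-robin baseline are all invented for illustration.

```python
# Illustrative sketch of the evolving-reward bandit setting (assumed
# shapes; NOT the paper's algorithm or reward model).
def reward(arm_peak, n_pulls):
    """Expected reward of an arm as a function of its own pull count:
    rises linearly to 1.0 at `arm_peak` pulls, then decays linearly."""
    if n_pulls <= arm_peak:
        return n_pulls / arm_peak
    return max(0.0, 1.0 - 0.05 * (n_pulls - arm_peak))

def run_round_robin(peaks, horizon):
    """Pull arms cyclically (a fixed policy) and accumulate the
    noise-free reward; each pull advances that arm's own counter."""
    pulls = [0] * len(peaks)
    total = 0.0
    for t in range(horizon):
        arm = t % len(peaks)      # round-robin choice
        pulls[arm] += 1           # this pull changes the arm's future rewards
        total += reward(peaks[arm], pulls[arm])
    return total

# Two hypothetical communities: one matures quickly, one has more
# long-term growth potential (a later, slower-to-reach peak).
total = run_round_robin(peaks=[5, 20], horizon=100)
```

Note that comparing two policies here requires rerunning the whole simulation per policy, since the reward an arm yields at time t depends on the entire history of pulls it received — this is exactly why the weaker external-regret counterfactual (swap a single action, hold rewards fixed) is inadequate in this setting.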
Publication status
published
Book title
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21
Pages / Article No.
537–544
Publisher
International Joint Conferences on Artificial Intelligence
Event
30th International Joint Conference on Artificial Intelligence (IJCAI 2021)
Organisational unit
03908 - Krause, Andreas