Abstract
In reinforcement learning (RL), rewards of states are typically considered additive, and, following the Markov assumption, they are independent of previously visited states. In many important applications, such as coverage control, experiment design, and informative path planning, rewards naturally have diminishing returns, i.e., their value decreases in light of similar states visited previously. To tackle this, we propose Submodular RL (subRL), a paradigm which seeks to optimize more general, non-additive (and history-dependent) rewards modelled via submodular set functions, which capture diminishing returns. Unfortunately, in general, even in tabular settings, we show that the resulting optimization problem is hard to approximate. On the other hand, motivated by the success of greedy algorithms in classical submodular optimization, we propose subPO, a simple policy gradient-based algorithm for subRL that handles non-additive rewards by greedily maximizing marginal gains. Indeed, under some assumptions on the underlying Markov Decision Process (MDP), subPO recovers optimal constant-factor approximations of submodular bandits. Moreover, we derive a natural policy gradient approach for locally optimizing subRL instances even in large state and action spaces. We showcase the versatility of our approach by applying subPO to several applications, such as biodiversity monitoring, Bayesian experiment design, informative path planning, and coverage maximization. Our results demonstrate sample efficiency, as well as scalability to high-dimensional state-action spaces.
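To make the core idea concrete, the following is a minimal sketch (not the paper's implementation, which is not reproduced here) of policy-gradient optimization with a submodular reward: the per-step reward is the marginal gain F(S ∪ {s}) − F(S) of a coverage function F(S) = |S| over the set of states visited so far, optimized with a tabular REINFORCE update. The toy chain environment, the horizon, and all names below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, HORIZON = 8, 2, 6

def step(s, a):
    # Toy deterministic chain: action 1 moves right, action 0 moves left.
    return max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))

def marginal_gain(visited, s):
    # Coverage function F(S) = |S|: revisiting a state yields zero gain,
    # capturing the diminishing-returns property described in the abstract.
    return 0.0 if s in visited else 1.0

theta = np.zeros((N_STATES, N_ACTIONS))  # tabular softmax policy parameters

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

for episode in range(2000):
    s, visited, traj, ret = 0, set(), [], 0.0
    for _ in range(HORIZON):
        a = rng.choice(N_ACTIONS, p=policy(s))
        ret += marginal_gain(visited, s)  # non-additive, history-dependent reward
        visited.add(s)
        traj.append((s, a))
        s = step(s, a)
    # REINFORCE: ascend grad log pi(a|s), weighted by the episode return.
    for st, at in traj:
        grad = -policy(st)
        grad[at] += 1.0
        theta[st] += 0.01 * ret * grad

Because the reward depends on the set of previously visited states, it is not additive in the Markov sense; the sketch illustrates how marginal gains can nonetheless serve as per-step reward signals for a standard policy gradient update.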
Permanent link: https://doi.org/10.3929/ethz-b-000637315
Publication status: published
Book title: Sixteenth European Workshop on Reinforcement Learning
Publisher: OpenReview
Subject: Reinforcement learning; submodular optimization; Complex objectives in RL; Policy gradient
Organisational unit:
03908 - Krause, Andreas / Krause, Andreas
09563 - Zeilinger, Melanie / Zeilinger, Melanie
02219 - ETH AI Center / ETH AI Center
Related publications and datasets
Is new version of: https://doi.org/10.48550/arXiv.2307.13372
Notes: Presentation held on September 16, 2023
ETH Bibliography: yes