On Imitation in Mean-field Games
Date
2024-07
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
We explore the problem of imitation learning (IL) in the context of mean-field games (MFGs), where the goal is to imitate the behavior of a population of agents following a Nash equilibrium policy according to some unknown payoff function. IL in MFGs presents new challenges compared to single-agent IL, particularly when both the reward function and the transition kernel depend on the population distribution. In this paper, departing from the existing literature on IL for MFGs, we introduce a new solution concept called the Nash imitation gap. We then show that when only the reward depends on the population distribution, IL in MFGs can be reduced to single-agent IL with similar guarantees. However, when the dynamics are population-dependent, we provide a novel upper bound that suggests IL is harder in this setting. To address this issue, we propose a new adversarial formulation in which the reinforcement learning problem is replaced by a mean-field control (MFC) problem, suggesting that progress in IL within MFGs may have to build upon MFC.
Publication status
published
Book title
Advances in Neural Information Processing Systems 36
Pages / Article No.
40426–40437
Publisher
Curran
Event
37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023)
Subject
Machine Learning (cs.LG); Computer Science and Game Theory (cs.GT); FOS: Computer and information sciences
Organisational unit
09729 - He, Niao / He, Niao
Notes
Poster presented on December 13, 2023.
Related publications and datasets
Is new version of: https://openreview.net/forum?id=RPFd3D3P3L