dc.contributor.author: Ramponi, Giorgia
dc.contributor.author: Kolev, Pavel
dc.contributor.author: Pietquin, Olivier
dc.contributor.author: He, Niao
dc.contributor.author: Laurière, Mathieu
dc.contributor.author: Geist, Matthieu
dc.contributor.editor: Oh, Alice
dc.contributor.editor: Naumann, Tristan
dc.contributor.editor: Globerson, Amir
dc.contributor.editor: Saenko, Kate
dc.contributor.editor: Hardt, Moritz
dc.contributor.editor: Levine, Sergey
dc.date.accessioned: 2024-07-24T12:21:09Z
dc.date.available: 2024-01-27T08:47:32Z
dc.date.available: 2024-02-05T13:07:54Z
dc.date.available: 2024-07-24T12:21:09Z
dc.date.issued: 2024-07
dc.identifier.isbn: 978-1-7138-9992-1 [en_US]
dc.identifier.uri: http://hdl.handle.net/20.500.11850/655722
dc.description.abstract: We explore the problem of imitation learning (IL) in the context of mean-field games (MFGs), where the goal is to imitate the behavior of a population of agents following a Nash equilibrium policy according to some unknown payoff function. IL in MFGs presents new challenges compared to single-agent IL, particularly when both the reward function and the transition kernel depend on the population distribution. In this paper, departing from the existing literature on IL for MFGs, we introduce a new solution concept called the Nash imitation gap. Then we show that when only the reward depends on the population distribution, IL in MFGs can be reduced to single-agent IL with similar guarantees. However, when the dynamics is population-dependent, we provide a novel upper-bound that suggests IL is harder in this setting. To address this issue, we propose a new adversarial formulation where the reinforcement learning problem is replaced by a mean-field control (MFC) problem, suggesting progress in IL within MFGs may have to build upon MFC. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Curran [en_US]
dc.subject: Machine Learning (cs.LG) [en_US]
dc.subject: Computer Science and Game Theory (cs.GT) [en_US]
dc.subject: FOS: Computer and information sciences [en_US]
dc.title: On Imitation in Mean-field Games [en_US]
dc.type: Conference Paper
ethz.book.title: Advances in Neural Information Processing Systems 36 [en_US]
ethz.pages.start: 40426 [en_US]
ethz.pages.end: 40437 [en_US]
ethz.event: 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023) [en_US]
ethz.event.location: New Orleans, LA, USA [en_US]
ethz.event.date: December 10-16, 2023 [en_US]
ethz.notes: Poster presented on December 13, 2023. [en_US]
ethz.identifier.wos:
ethz.publication.place: Red Hook, NY [en_US]
ethz.publication.status: published [en_US]
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02150 - Dep. Informatik / Dep. of Computer Science::02661 - Institut für Maschinelles Lernen / Institute for Machine Learning::09729 - He, Niao / He, Niao [en_US]
ethz.leitzahl.certified: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02150 - Dep. Informatik / Dep. of Computer Science::02661 - Institut für Maschinelles Lernen / Institute for Machine Learning::09729 - He, Niao / He, Niao [en_US]
ethz.identifier.url: https://neurips.cc/virtual/2023/poster/71662
ethz.relation.isNewVersionOf: https://openreview.net/forum?id=RPFd3D3P3L
ethz.date.deposited: 2024-01-27T08:47:32Z
ethz.source: FORM
ethz.eth: yes [en_US]
ethz.availability: Metadata only [en_US]
ethz.rosetta.installDate: 2024-07-24T12:21:12Z
ethz.rosetta.lastUpdated: 2025-02-14T12:22:21Z
ethz.rosetta.versionExported: true
ethz.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=On%20Imitation%20in%20Mean-field%20Games&rft.date=2024-07&rft.spage=40426&rft.epage=40437&rft.au=Ramponi,%20Giorgia&Kolev,%20Pavel&Pietquin,%20Olivier&He,%20Niao&Lauri%C3%A8re,%20Mathieu&rft.isbn=978-1-7138-9992-1&rft.genre=proceeding&rft.btitle=Advances%20in%20Neural%20Information%20Processing%20Systems%2036
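
The Nash imitation gap named in the abstract plays the role that exploitability plays in mean-field games, measured under the expert's unknown reward. Below is a minimal sketch of one such formalization, assuming standard MFG notation not present in this record (an unknown expert reward r*, the mean field induced when the whole population plays a policy, and a representative-agent return J); the paper's exact definition may differ in its details.

\[
\mathcal{E}(\hat{\pi}) \;=\; \max_{\pi'} \, J_{r^*}\!\big(\pi', \mu^{\hat{\pi}}\big) \;-\; J_{r^*}\!\big(\hat{\pi}, \mu^{\hat{\pi}}\big)
\]

Here $r^*$ is the unknown expert reward, $\mu^{\hat{\pi}}$ is the population distribution induced when every agent follows the imitating policy $\hat{\pi}$, and $J_{r^*}(\pi', \mu)$ is the expected return of a single agent playing $\pi'$ against a fixed population distribution $\mu$. The gap is zero exactly when $\hat{\pi}$ is a Nash equilibrium policy under $r^*$, which matches the abstract's stated goal of imitating a population at equilibrium.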