The Importance of Non-Markovianity in Maximum State Entropy Exploration
dc.contributor.author
Mutti, Mirco
dc.contributor.author
De Santi, Riccardo
dc.contributor.author
Restelli, Marcello
dc.contributor.editor
Chaudhuri, Kamalika
dc.contributor.editor
Jegelka, Stefanie
dc.contributor.editor
Song, Le
dc.contributor.editor
Szepesvari, Csaba
dc.contributor.editor
Niu, Gang
dc.contributor.editor
Sabato, Sivan
dc.date.accessioned
2023-05-16T07:52:08Z
dc.date.available
2023-05-05T03:07:52Z
dc.date.available
2023-05-16T07:52:08Z
dc.date.issued
2022
dc.identifier.issn
2640-3498
dc.identifier.uri
http://hdl.handle.net/20.500.11850/610765
dc.description.abstract
In the maximum state entropy exploration framework, an agent interacts with a reward-free environment to learn a policy that maximizes the entropy of the expected state visitations it induces. Hazan et al. (2019) noted that the class of Markovian stochastic policies is sufficient for the maximum state entropy objective, and exploiting non-Markovianity is generally considered pointless in this setting. In this paper, we argue that non-Markovianity is instead paramount for maximum state entropy exploration in a finite-sample regime. In particular, we recast the objective to target the expected entropy of the induced state visitations in a single trial. Then, we show that the class of non-Markovian deterministic policies is sufficient for the introduced objective, while Markovian policies suffer non-zero regret in general. However, we prove that the problem of finding an optimal non-Markovian policy is NP-hard. Despite this negative result, we discuss avenues to address the problem in a tractable way, and how non-Markovian exploration could benefit the sample efficiency of online reinforcement learning in future work.
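To make the recast objective concrete, here is a minimal sketch (not from the paper; the two-state example is hypothetical) contrasting the infinite-trial objective, the entropy of the expected visitation distribution H(E[d]), with the single-trial objective introduced above, the expected entropy of the visitation distribution E[H(d)]. By Jensen's inequality the latter is never larger, and the gap can be extreme.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # drop zero-probability entries to avoid log(0)
    return float(-(p * np.log(p)).sum())

# Hypothetical two-state example: a stochastic policy whose single-trial
# visitation distribution is either (1, 0) or (0, 1), each with probability 1/2.
trial_dists = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
weights = [0.5, 0.5]

# Infinite-trial objective: entropy of the *expected* visitations, H(E[d]).
expected_d = sum(w * d for w, d in zip(weights, trial_dists))
print(entropy(expected_d))  # log(2) ~ 0.693: looks maximally explorative

# Single-trial objective: *expected* entropy of the visitations, E[H(d)].
print(sum(w * entropy(d) for w, d in zip(weights, trial_dists)))  # 0.0
```

Here the expected visitations are uniform, yet every individual trial concentrates on a single state and has zero entropy; this is the kind of gap that motivates optimizing the single-trial objective with non-Markovian policies.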
dc.language.iso
en
dc.publisher
PMLR
dc.title
The Importance of Non-Markovianity in Maximum State Entropy Exploration
dc.type
Conference Paper
ethz.book.title
Proceedings of the 39th International Conference on Machine Learning
ethz.journal.title
Proceedings of Machine Learning Research
ethz.journal.volume
162
ethz.pages.start
16223
ethz.pages.end
16239
ethz.event
39th International Conference on Machine Learning (ICML 2022)
ethz.event.location
Baltimore, MD, USA
ethz.event.date
July 17-23, 2022
ethz.identifier.wos
ethz.publication.place
Cambridge, MA
ethz.publication.status
published
ethz.identifier.url
https://proceedings.mlr.press/v162/mutti22a.html
ethz.date.deposited
2023-05-05T03:07:57Z
ethz.source
WOS
ethz.eth
yes
ethz.availability
Metadata only
ethz.rosetta.installDate
2024-02-02T23:12:05Z
ethz.rosetta.lastUpdated
2024-02-02T23:12:05Z
ethz.rosetta.versionExported
true
Files in this item
There are no files associated with this item.
Publication type
Conference Paper