Metadata only
Date
2023-06-27
Type
- Working Paper
ETH Bibliography
yes
Abstract
Policy learning is an important component of many real-world learning systems. A major challenge in policy learning is how to adapt efficiently to unseen environments or tasks. Recently, it has been suggested to exploit invariant conditional distributions to learn models that generalize better to unseen environments. However, assuming invariance of entire conditional distributions (which we call full invariance) may be too strong of an assumption in practice. In this paper, we introduce a relaxation of full invariance called effect-invariance (e-invariance for short) and prove that it is sufficient, under suitable assumptions, for zero-shot policy generalization. We also discuss an extension that exploits e-invariance when we have a small sample from the test environment, enabling few-shot policy generalization. Our work does not assume an underlying causal graph or that the data are generated by a structural causal model; instead, we develop testing procedures to test e-invariance directly from data. We present empirical results using simulated data and a mobile health intervention dataset to demonstrate the effectiveness of our approach.
Publication status
published
Journal / series
arXiv
Pages / Article No.
Publisher
Cornell University
Edition / version
v2
Subject
Machine Learning (stat.ML); Machine Learning (cs.LG); FOS: Computer and information sciences
Organisational unit
09798 - Peters, Jonas / Peters, Jonas
Related publications and datasets
Is previous version of: https://doi.org/10.3929/ethz-b-000670427