Metadata only
Date
2021-03
Type
- Conference Paper
Abstract
As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also to suggest actions to achieve a favorable decision. Counterfactual explanations ("how the world would have (had) to be different for a desirable outcome to occur") aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, it has largely been overlooked that, ultimately, one of the main objectives is to allow people to act rather than just understand. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions, shifting the focus from explanations to interventions. © 2021 ACM
Publication status
published
Book title
FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
Publisher
Association for Computing Machinery
Subject
algorithmic recourse; explainable artificial intelligence; causal inference; counterfactual explanations; contrastive explanations; consequential recommendations; minimal interventions
Organisational unit
09462 - Hofmann, Thomas / Hofmann, Thomas
09664 - Schölkopf, Bernhard / Schölkopf, Bernhard
Notes
Due to the Coronavirus (COVID-19) pandemic, the conference was conducted virtually.