Metadata only
Date
2021-03
Type
- Conference Paper
Abstract
As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also to suggest actions to achieve a favorable decision. Counterfactual explanations ("how the world would have (had) to be different for a desirable outcome to occur") aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, it has largely been overlooked that, ultimately, one of the main objectives is to allow people to act rather than just understand. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions, moving the focus from explanations to interventions. © 2021 ACM
Publication status
published
External links
Book title
FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
Pages / Article number
Publisher
ACM
Conference
Topic
algorithmic recourse; explainable artificial intelligence; causal inference; counterfactual explanations; contrastive explanations; consequential recommendations; minimal interventions
Organisational unit
09462 - Hofmann, Thomas / Hofmann, Thomas
09664 - Schölkopf, Bernhard / Schölkopf, Bernhard
Notes
Due to the COVID-19 pandemic, the conference was conducted virtually.