Conference Paper
As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision and also to suggest actions that achieve a favorable decision. Counterfactual explanations, which describe "how the world would have (had) to be different for a desirable outcome to occur," aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, it has largely been overlooked that, ultimately, one of the main objectives is to allow people to act rather than merely understand. In layman's terms, counterfactual explanations tell an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions, shifting the focus from explanations to interventions. © 2021 ACM
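The distinction the abstract draws can be made concrete with a toy example. The following sketch is purely illustrative and not from the paper: it assumes a hypothetical two-variable structural causal model in which feature X2 is caused by feature X1, so acting on X1 also changes X2 downstream, whereas a nearest counterfactual explanation treats the features as independently mutable.

```python
# Illustrative toy example (hypothetical; not from the paper): a two-variable
# structural causal model where intervening on X1 propagates to X2, so a
# recommended intervention can differ from the nearest counterfactual point.

def x2_from_x1(x1):
    # Assumed structural equation: X2 := 0.5 * X1 (noise omitted for clarity)
    return 0.5 * x1

def approved(x1, x2):
    # Hypothetical decision rule: approve iff X1 + X2 >= 9
    return x1 + x2 >= 9

# Factual individual: X1 = 4, X2 = 2  ->  rejected (4 + 2 < 9)
x1, x2 = 4.0, 2.0
assert not approved(x1, x2)

# Nearest counterfactual explanation: find the smallest change to the feature
# vector that crosses the boundary, e.g. raise X2 alone to 5.
cfe = (x1, 5.0)
assert approved(*cfe)
# The counterfactual point is valid, but it ignores that X2 is caused by X1:
# it tells the individual *where* to get to, not *how* to get there.

# Minimal intervention: act on X1 only; X2 then follows the structural equation.
x1_new = 6.0                      # do(X1 := 6)
x2_new = x2_from_x1(x1_new)       # downstream effect: X2 becomes 3
assert approved(x1_new, x2_new)   # 6 + 3 >= 9: recourse via intervention
```

Here the intervention on X1 achieves the favorable outcome through the causal model's downstream effects, which is the shift of focus the paper advocates.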
Book title: FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
Pages / article number:
Keywords: algorithmic recourse; explainable artificial intelligence; causal inference; counterfactual explanations; contrastive explanations; consequential recommendations; minimal interventions
Organizational units: 09462 - Hofmann, Thomas / Hofmann, Thomas
09664 - Schölkopf, Bernhard / Schölkopf, Bernhard
Notes: Due to the Coronavirus (COVID-19) pandemic, the conference was conducted virtually.