Fairness-Aware PAC Learning from Corrupted Data
OPEN ACCESS
Date
2022
Publication Type
Journal Article
ETH Bibliography
yes
Abstract
Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the robustness of these methods to data corruption. In this work we consider fairness-aware learning under worst-case data manipulations. We show that an adversary can, in some situations, force any learner to return an overly biased classifier, regardless of the sample size and with or without degrading accuracy, and that the strength of this excess bias increases for learning problems with underrepresented protected groups in the data. We also prove that our hardness results are tight up to constant factors. To this end, we study two natural learning algorithms that optimize for both accuracy and fairness, and show that these algorithms enjoy guarantees that are order-optimal in terms of the corruption ratio and the protected groups' frequencies in the large-data limit.
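The abstract refers to learners that optimize jointly for accuracy and fairness. As an illustrative sketch only (not the paper's actual algorithms), one simple instance of this idea is empirical risk minimization over a finite hypothesis class with a demographic-parity penalty: pick the classifier minimizing empirical error plus a weighted fairness gap. The threshold class, the penalty weight `lam`, and the function names below are all illustrative assumptions.

```python
import numpy as np

def dp_gap(preds, groups):
    # Demographic-parity gap: |P(h(x)=1 | a=0) - P(h(x)=1 | a=1)|,
    # estimated empirically from the predictions of each protected group.
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def fair_erm(X, y, groups, thresholds, lam=1.0):
    # Fairness-penalized ERM over a finite class of threshold classifiers
    # h_t(x) = 1[x >= t]: minimize empirical error + lam * fairness gap.
    best_t, best_obj = None, np.inf
    for t in thresholds:
        preds = (X >= t).astype(int)
        err = (preds != y).mean()          # empirical 0-1 error
        obj = err + lam * dp_gap(preds, groups)
        if obj < best_obj:
            best_t, best_obj = t, obj
    return best_t, best_obj

# Toy usage: one real feature, binary labels, binary protected attribute.
X = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([0, 0, 1, 1])
groups = np.array([0, 1, 0, 1])
t, obj = fair_erm(X, y, groups, thresholds=[0.0, 0.5, 1.0], lam=0.5)
```

On this toy data the threshold 0.5 attains zero error and zero parity gap, so it minimizes the penalized objective. Under the corruption model studied in the paper, an adversary may manipulate a fraction of the training points, which is precisely what makes the fairness term of such objectives fragile for small protected groups.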
Publication status
published
Volume
23
Pages / Article No.
160
Publisher
Microtome Publishing
Subject
Fairness; robustness; data poisoning; trustworthy machine learning; PAC learning
Organisational unit
02219 - ETH AI Center / ETH AI Center