Loss Minimization Yields Multicalibration for Large Neural Networks


Date

2024

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Multicalibration is a notion of fairness for predictors that requires them to provide calibrated predictions across a large set of protected groups. Multicalibration is known to be a distinct goal from loss minimization, even for simple predictors such as linear functions. In this work, we consider the setting where the protected groups can be represented by neural networks of size k, and the predictors are neural networks of size n > k. We show that minimizing the squared loss over all neural nets of size n implies multicalibration for all but a bounded number of unlucky values of n. We also give evidence that, for our proof technique, our bound on the number of unlucky values is tight. Previously, results of the flavor that loss minimization yields multicalibration were known only for predictors that were near the ground truth, and hence were rather limited in applicability. Unlike these, our results rely on the expressivity of neural nets and utilize the representation of the predictor.

Publication status

published

Book title

15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

Volume

287

Publisher

Schloss Dagstuhl – Leibniz-Zentrum für Informatik

Event

15th Innovations in Theoretical Computer Science Conference (ITCS 2024)

Subject

Multi-group fairness; loss minimization; neural networks

Organisational unit

09622 - Steurer, David / Steurer, David
