Human-Guided Fair Classification for Natural Language Processing


Date

2023

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Text classifiers have promising applications in high-stakes tasks such as resume screening and content moderation. These classifiers must be fair and avoid discriminatory decisions by being invariant to perturbations of sensitive attributes such as gender or ethnicity. However, there is a gap between human intuition about these perturbations and the formal similarity specifications capturing them. While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals). This work proposes novel methods for bridging this gap by discovering expressive and intuitive individual fairness specifications. We show how to leverage unsupervised style transfer and GPT-3's zero-shot capabilities to automatically generate expressive candidate pairs of semantically similar sentences that differ along sensitive attributes. We then validate the generated pairs via an extensive crowdsourcing study, which confirms that many of these pairs align with human intuition about fairness in the context of toxicity classification. Finally, we show how limited amounts of human feedback can be leveraged to learn a similarity specification that can be used to train downstream fairness-aware models.
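The invariance notion described in the abstract can be sketched as a simple check: given candidate pairs of semantically similar sentences that differ only in a sensitive attribute, an individually fair classifier should assign both sentences the same label. The toy classifier, the example pairs, and the function names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an individual-fairness invariance check for a text
# classifier. The classifier and pairs are toy placeholders.

def toy_toxicity_classifier(text: str) -> int:
    """Label 1 (toxic) if any flagged word appears, else 0."""
    flagged = {"idiot", "stupid"}
    return int(any(w.strip(".,!?").lower() in flagged for w in text.split()))

# Candidate pairs: same meaning, sensitive attribute (here, gendered
# names/pronouns) swapped -- the kind of pair the paper generates via
# style transfer or zero-shot prompting and then validates with humans.
candidate_pairs = [
    ("He is a brilliant engineer.", "She is a brilliant engineer."),
    ("You are such an idiot, John!", "You are such an idiot, Maria!"),
]

def invariance_rate(classifier, pairs):
    """Fraction of pairs on which the classifier's decision is unchanged."""
    same = sum(classifier(a) == classifier(b) for a, b in pairs)
    return same / len(pairs)

print(f"invariance rate: {invariance_rate(toy_toxicity_classifier, candidate_pairs):.2f}")
```

In the paper's pipeline, only human-validated pairs would enter such a check, and the resulting specification is used to train fairness-aware downstream models rather than merely audit a fixed classifier.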

Publication status

published

Book title

The Eleventh International Conference on Learning Representations

Publisher

OpenReview

Event

11th International Conference on Learning Representations (ICLR 2023)

Subject

Individual fairness; Style transfer; NLP; Crowdsourcing; Human Evaluation

Organisational unit

09627 - Ash, Elliott / Ash, Elliott
03948 - Vechev, Martin / Vechev, Martin
02219 - ETH AI Center / ETH AI Center
02150 - Dep. Informatik / Dep. of Computer Science
