Open access
Date
2020-11
Type
- Conference Paper
ETH Bibliography
yes
Abstract
The question of how to probe contextual word representations in a way that is principled and useful has seen significant recent attention. In our contribution to this discussion, we argue, first, for a probe metric that reflects the trade-off between probe complexity and performance: the Pareto hypervolume. To measure complexity, we present a number of parametric and non-parametric metrics. Our experiments with such metrics show that probes' performance curves often fail to align with widely accepted rankings between language representations (with, e.g., non-contextual representations outperforming contextual ones). These results lead us to argue, second, that common simplistic probe tasks such as POS labeling and dependency arc labeling are inadequate to evaluate the properties encoded in contextual word representations. We propose full dependency parsing as an example probe task, and demonstrate it with the Pareto hypervolume. In support of our arguments, the results of this illustrative experiment conform closer to accepted rankings among contextual word representations.
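For intuition, the two-dimensional case of the metric can be sketched as follows. This is an illustrative reconstruction, not code from the paper: each probe is summarised by a hypothetical (complexity, accuracy) pair, complexity is assumed to be normalised to [0, 1], and the hypervolume is the area dominated by the Pareto frontier relative to a reference point at maximal complexity and zero accuracy.

# Illustrative sketch of a 2D Pareto hypervolume over probes, where lower
# complexity and higher accuracy are both preferred. Complexity is assumed
# normalised to [0, 1]; the reference point defaults to (1.0, 0.0).

def pareto_front(points):
    """Return the non-dominated (complexity, accuracy) pairs, sorted by complexity."""
    # Sort by ascending complexity; for ties, higher accuracy first so the
    # dominated duplicate is skipped below.
    pts = sorted(points, key=lambda p: (p[0], -p[1]))
    front, best_acc = [], float("-inf")
    for c, a in pts:
        if a > best_acc:  # strictly better accuracy than every cheaper probe
            front.append((c, a))
            best_acc = a
    return front

def pareto_hypervolume(points, ref=(1.0, 0.0)):
    """Area dominated by the Pareto front, relative to the reference point."""
    ref_c, ref_a = ref
    front = pareto_front(points)
    volume, right_edge = 0.0, ref_c
    # Sweep from the most complex front point toward the cheapest one,
    # accumulating the rectangle each point adds beyond its right neighbour.
    for c, a in reversed(front):
        volume += (right_edge - c) * (a - ref_a)
        right_edge = c
    return volume

# Example with three hypothetical probes of increasing complexity:
probes = [(0.1, 0.62), (0.4, 0.78), (0.9, 0.80)]
print(pareto_hypervolume(probes))  # 0.656

Under this sketch, a representation is preferred when its probes' (complexity, accuracy) front dominates a larger area, which is one way to read the trade-off the abstract describes.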
Permanent link
https://doi.org/10.3929/ethz-b-000462313
Publication status
published
External links
Book title
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Pages / Article No.
Publisher
Association for Computational Linguistics
Event
Organisational unit
09682 - Cotterell, Ryan / Cotterell, Ryan
Notes
Due to the Coronavirus (COVID-19), the conference was conducted virtually.