On Isotropy Calibration of Transformers
OPEN ACCESS
Date
2022-05
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Several studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic: the embeddings are distributed in a narrow cone. Meanwhile, static word representations (e.g., Word2Vec or GloVe) have been shown to benefit from isotropic spaces. Therefore, previous work has developed methods to calibrate the embedding space of transformers to enforce isotropy. However, a recent study (Cai et al., 2021) shows that the embedding space of transformers is locally isotropic, which suggests that these models already exploit the expressive capacity of their embedding space. In this work, we conduct an empirical evaluation of state-of-the-art methods for isotropy calibration on transformers and find that they do not provide consistent improvements across models and tasks. These results support the thesis that, given their local isotropy, transformers do not benefit from additional isotropy calibration.
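For context, a common diagnostic in this literature quantifies (an)isotropy as the expected cosine similarity between randomly sampled embeddings: a value near 0 indicates an isotropic space, while a value near 1 indicates the "narrow cone" the abstract describes (Ethayarajh, 2019). The sketch below illustrates that diagnostic on synthetic data; it is not code from the paper, and the function name and toy vectors are assumptions for illustration.

```python
import numpy as np

def avg_pairwise_cosine(embeddings: np.ndarray, n_pairs: int = 10_000,
                        seed: int = 0) -> float:
    """Estimate anisotropy as the mean cosine similarity between random
    pairs of embeddings: ~0 for an isotropic space, ~1 for a narrow cone."""
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    i = rng.integers(0, n, size=n_pairs)   # first vector of each pair
    j = rng.integers(0, n, size=n_pairs)   # second vector of each pair
    a, b = embeddings[i], embeddings[j]
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1)
                                   * np.linalg.norm(b, axis=1))
    return float(np.mean(cos))

# Toy check: isotropic Gaussian vectors vs. the same vectors shifted by a
# common offset, which crudely mimics a narrow-cone distribution.
rng = np.random.default_rng(1)
iso = rng.standard_normal((5000, 768))
aniso = iso + 5.0  # shared offset pushes all vectors toward one direction
print(avg_pairwise_cosine(iso))    # close to 0
print(avg_pairwise_cosine(aniso))  # close to 1
```

On real contextual embeddings (e.g., final-layer outputs of a transformer collected over a corpus), the same function applies unchanged; isotropy calibration methods aim to push this score toward 0.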
Publication status
published
Book title
Proceedings of the Third Workshop on Insights from Negative Results in NLP
Pages / Article No.
1–9
Publisher
Association for Computational Linguistics
Event
3rd Workshop on Insights from Negative Results in NLP (Insights 2022)
Organisational unit
03604 - Wattenhofer, Roger