All Roads Lead to Rome? Exploring the Invariance of Transformers' Representations
Abstract
Transformer models have propelled advances across NLP tasks, prompting extensive interpretability research on the representations they learn. However, we raise a fundamental question about the reliability of these representations. Specifically, we investigate whether transformers learn essentially isomorphic representation spaces, or spaces that are sensitive to the random seeds used in pretraining. In this work, we formulate the Bijection Hypothesis, which suggests using bijective methods to align different models' representation spaces. We propose BERT-INN, a model based on invertible neural networks, to learn the bijection more effectively than existing bijective methods such as canonical correlation analysis (CCA). We demonstrate the advantage of BERT-INN both theoretically and through extensive experiments, and apply it to align the embeddings of reproduced BERT pretraining runs, drawing insights meaningful for interpretability research. Our code is at https://github.com/twinkle0331/BERT-similarity.
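The abstract describes aligning representation spaces from different pretraining runs with an invertible (bijective) mapping. The following is a minimal, hypothetical sketch of that idea using a RealNVP-style affine coupling network in PyTorch; the class names, layer sizes, and MSE alignment loss are illustrative assumptions, not the paper's BERT-INN implementation (see the linked repository for that).

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # RealNVP-style coupling layer: invertible in closed form by construction.
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(torch.tanh(s)) + t  # bounded log-scale keeps training stable
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        x2 = (y2 - t) * torch.exp(-torch.tanh(s))
        return torch.cat([y1, x2], dim=-1)

class InvertibleAligner(nn.Module):
    # Stack of coupling layers mapping embeddings from one pretraining run into
    # another run's space; each layer (and the roll) is invertible, so the whole
    # map is a bijection.
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
            x = torch.roll(x, shifts=x.shape[-1] // 2, dims=-1)  # let both halves be transformed
        return x

# Toy usage: align hidden states from two BERT seeds computed on the same tokens.
dim = 768
aligner = InvertibleAligner(dim)
opt = torch.optim.Adam(aligner.parameters(), lr=1e-3)
emb_seed_a = torch.randn(1024, dim)  # placeholder for seed-A embeddings
emb_seed_b = torch.randn(1024, dim)  # placeholder for seed-B embeddings
for step in range(200):
    loss = ((aligner(emb_seed_a) - emb_seed_b) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()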
Permanent link
https://doi.org/10.3929/ethz-b-000653513
Publication status
published
Journal / series
arXiv
Pages / Article No.
Publisher
Cornell University
Edition / version
v1
Subject
Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); FOS: Computer and information sciences
Organisational unit
09684 - Sachan, Mrinmaya / Sachan, Mrinmaya
Funding
ETH-19 21-1 - Neuro-cognitive Model Inspired from Human Language Processing (ETHZ)
ETH Bibliography
yes