All Roads Lead to Rome? Exploring the Invariance of Transformers' Representations


Date

2023-05-23

Publication Type

Working Paper

ETH Bibliography

yes

Abstract

Transformer models have propelled advances across a wide range of NLP tasks, prompting extensive interpretability research on their learned representations. However, we raise a fundamental question regarding the reliability of these representations. Specifically, we investigate whether transformers learn essentially isomorphic representation spaces, or spaces that are sensitive to the random seeds used in pretraining. In this work, we formulate the Bijection Hypothesis, which suggests using bijective methods to align different models' representation spaces. We propose BERT-INN, a model based on invertible neural networks, to learn the bijection more effectively than existing bijective methods such as canonical correlation analysis (CCA). We demonstrate the advantage of BERT-INN both theoretically and through extensive experiments, and apply it to align the embeddings of reproduced BERT models, drawing insights that are meaningful for interpretability research. Our code is at https://github.com/twinkle0331/BERT-similarity.
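
As a concrete illustration of the bijective-alignment idea described in the abstract, below is a minimal, hypothetical sketch of an invertible network built from affine coupling layers (RealNVP-style) that maps embeddings from one pretraining run into the space of another. The class names, layer choices, and the MSE alignment objective are assumptions made purely for illustration; they are not the authors' BERT-INN implementation, which is available in the linked repository.

```python
# Illustrative sketch only (not the authors' BERT-INN): learn an invertible
# map f that aligns embeddings from pretraining run A with those from run B.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible affine coupling layer (RealNVP-style)."""
    def __init__(self, dim, hidden=512):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)           # bound the scales for stability
        y2 = x2 * torch.exp(log_s) + t      # invertible given x1
        return torch.cat([x1, y2], dim=-1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=-1)

class InvertibleAligner(nn.Module):
    """Stack of coupling layers with fixed permutations between them."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        # fixed random permutations so every dimension is eventually transformed
        self.perms = [torch.randperm(dim) for _ in range(n_layers)]

    def forward(self, x):
        for layer, perm in zip(self.layers, self.perms):
            x = layer(x[:, perm])
        return x

# Toy usage: align paired embeddings from two runs (random stand-ins here;
# a real experiment would use token embeddings of the same inputs).
dim = 768
emb_a, emb_b = torch.randn(256, dim), torch.randn(256, dim)
model = InvertibleAligner(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(emb_a), emb_b)  # assumed alignment objective
    loss.backward()
    opt.step()
```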

Publication status

published

Pages / Article No.

2305.14555

Publisher

Cornell University

Edition / version

v1

Subject

Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); FOS: Computer and information sciences

Organisational unit

09684 - Sachan, Mrinmaya / Sachan, Mrinmaya

Funding

ETH-19 21-1 - Neuro-cognitive Model Inspired from Human Language Processing (ETHZ)
