Metadata only
Author(s)
Date
2020
Type
- Conference Paper
ETH Bibliography
yes
Abstract
In this paper we delve deep into the Transformer architecture by investigating two of its core components: self-attention and contextual embeddings. In particular, we study the identifiability of attention weights and token embeddings, and the aggregation of context into hidden tokens. We show that, for sequences longer than the attention head dimension, attention weights are not identifiable. We propose effective attention as a complementary tool for improving explanatory interpretations based on attention. Furthermore, we show that input tokens retain their identity to a large degree across the model. We also find evidence suggesting that identity information is mainly encoded in the angle of the embeddings and gradually decreases with depth. Finally, we demonstrate strong mixing of input information in the generation of contextual embeddings by means of a novel quantification method based on gradient attribution. Overall, we show that self-attention distributions are not directly interpretable and present tools to better understand and further investigate Transformer models.
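The non-identifiability claim rests on a linear-algebra fact: a head's output A @ V cannot distinguish attention matrices that differ by rows lying in the left null space of the value matrix V, and that null space is nontrivial whenever the sequence length n exceeds the head dimension d. The following NumPy sketch illustrates this and the projection idea behind effective attention; it is not the paper's code, the names A, V, n, d are illustrative, and it ignores the row-sum and non-negativity constraints on attention weights, which the paper's full argument accounts for.

import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4  # sequence length n exceeds the attention head dimension d

# Hypothetical head: row-stochastic attention weights A and value matrix V.
V = rng.normal(size=(n, d))
A = rng.random(size=(n, n))
A /= A.sum(axis=1, keepdims=True)

# Since n > d, V has a nontrivial left null space. The last n - d left
# singular vectors of V span null(V^T).
U, _, _ = np.linalg.svd(V)
null_basis = U[:, d:]  # shape (n, n - d)

# Any perturbation whose rows lie in null(V^T) leaves the head output A @ V
# unchanged, so the weights cannot be identified from the output.
D = rng.normal(size=(n, n - d)) @ null_basis.T
A_tilde = A + 0.1 * D
print(np.allclose(A @ V, A_tilde @ V))  # True: identical head output
print(np.allclose(A, A_tilde))          # False: different attention weights

# Effective attention (sketch): drop the null-space component of each row
# of A, keeping only the part that can actually influence the output.
P_null = null_basis @ null_basis.T      # orthogonal projector onto null(V^T)
A_eff = A @ (np.eye(n) - P_null)
print(np.allclose(A_eff @ V, A @ V))    # True: same output as A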
Publication status
published
Publisher
International Conference on Learning Representations
Conference
Subject
Attention; Generation; Interpretability; NLP; Self attention; Transformer
Organisational unit
03604 - Wattenhofer, Roger / Wattenhofer, Roger
Notes
Due to the coronavirus (COVID-19), the conference was conducted virtually.