Tokenization and the Noiseless Channel


Date

2023-07

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

Subword tokenization is a key part of most NLP pipelines. However, little is known about why some tokenizer and hyperparameter combinations lead to improved downstream model performance over others. We propose that good tokenizers lead to efficient channel usage, where the channel is the means by which some input is conveyed to the model and efficiency can be quantified in information-theoretic terms as the ratio of the Shannon entropy to the maximum entropy of the subword distribution. Nevertheless, an optimal encoding according to Shannon entropy assigns extremely long codes to low-frequency subwords and very short codes to high-frequency subwords. Defining efficiency in terms of Rényi entropy, on the other hand, penalizes distributions with either very high- or very low-frequency subwords. We posit that (1) extremely high-frequency subwords are problematic because their meaning is not distinct and (2) low-frequency subwords may not appear frequently enough for their meaning to be learned properly; encodings that induce unigram distributions with either can harm model performance. In machine translation, we find that across multiple tokenizers, the Rényi entropy has a very strong correlation with BLEU: 0.82, in comparison to just -0.30 for compressed length.
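
To make the efficiency measure described in the abstract concrete, here is a minimal sketch in Python, assuming a unigram distribution estimated from subword counts in a tokenized corpus. The function name renyi_efficiency, the default order alpha, and the toy inputs are illustrative assumptions, not the paper's released code.

import math
from collections import Counter

def renyi_efficiency(tokens, alpha=2.5):
    # Estimate the subword unigram distribution from raw counts.
    counts = Counter(tokens)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    # Renyi entropy of order alpha; the alpha -> 1 limit is Shannon entropy.
    if alpha == 1.0:
        h = -sum(p * math.log2(p) for p in probs)
    else:
        h = math.log2(sum(p ** alpha for p in probs)) / (1.0 - alpha)
    # Normalize by the maximum entropy log2(|V|), which is attained by
    # the uniform distribution (assumes at least two distinct subwords).
    return h / math.log2(len(probs))

# A distribution dominated by one very frequent subword scores lower
# than a balanced one; a uniform distribution has efficiency 1.0.
print(renyi_efficiency(["the"] * 90 + ["cat", "sat", "mat"] * 3 + ["rug"]))  # ~0.11
print(renyi_efficiency(["the", "cat", "sat", "on", "mat"] * 20))             # 1.0

For alpha > 1, raising probabilities to the power alpha amplifies the weight of high-frequency subwords, so the normalized score penalizes exactly the skewed distributions the abstract argues are harmful.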

Publication status

published

Book title

Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers

Pages / Article No.

5184–5207

Publisher

Association for Computational Linguistics

Event

61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)

Organisational unit

09684 - Sachan, Mrinmaya / Sachan, Mrinmaya
09682 - Cotterell, Ryan / Cotterell, Ryan
