Notice
This record has been edited as far as possible; missing data will be added when the version of record is issued.
Abstract
Modern neural language models widely used across NLP tasks risk memorizing sensitive information from their training data. As models continue to scale up in parameters, training data, and compute, understanding memorization in language models is both important from a learning-theoretic point of view and practically crucial in real-world applications. An open question in previous studies of memorization in language models is how to filter out "common" memorization. In fact, most memorization criteria strongly correlate with the number of occurrences in the training set, capturing "common" memorization such as familiar phrases, public knowledge, or templated texts. In this paper, we provide a principled perspective inspired by a taxonomy of human memory in psychology. From this perspective, we formulate a notion of counterfactual memorization, which characterizes how a model's predictions change if a particular document is omitted during training. We identify and study counterfactually-memorized training examples in standard text datasets. We further estimate the influence of each training example on the validation set and on generated texts, and show that this can provide direct evidence of the source of memorization at test time.
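The counterfactual memorization described in the abstract can be estimated empirically by training many models on random subsets of the data and comparing, for each example, the performance of models that did see it against models that did not. The sketch below is a minimal illustration of that subset-resampling estimator with a toy "model" and scoring function; the function and parameter names (`train_model`, `score`, `subset_frac`) are illustrative assumptions, not the paper's actual implementation.

```python
import random

def counterfactual_memorization(examples, train_model, score,
                                n_models=40, subset_frac=0.5, seed=0):
    """Estimate mem(x) ≈ E[score | x in train set] - E[score | x held out].

    Trains `n_models` models, each on a random subset of `examples`,
    and averages per-example scores conditioned on subset membership.
    """
    rng = random.Random(seed)
    in_scores = {x: [] for x in examples}
    out_scores = {x: [] for x in examples}
    for _ in range(n_models):
        subset = set(rng.sample(examples, int(len(examples) * subset_frac)))
        model = train_model(subset)
        for x in examples:
            (in_scores if x in subset else out_scores)[x].append(score(model, x))
    mem = {}
    for x in examples:
        if in_scores[x] and out_scores[x]:  # need both conditions observed
            mem[x] = (sum(in_scores[x]) / len(in_scores[x])
                      - sum(out_scores[x]) / len(out_scores[x]))
    return mem

# Toy demo: a "model" that perfectly recalls its training set; "common"
# examples are also predictable from background knowledge, "rare" ones are not.
examples = ["common-0", "common-1", "rare-0", "rare-1"]
train_model = lambda subset: subset
score = lambda model, x: 1.0 if x in model else (0.9 if x.startswith("common") else 0.0)
mem = counterfactual_memorization(examples, train_model, score)
```

In this toy setting, "common" examples get a memorization score near 0.1 (they are predicted well even when held out), while "rare" examples score near 1.0 — mirroring the abstract's point that counterfactual memorization filters out commonly-occurring content.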
Publication status
Accepted
Subject
Machine Learning; Privacy
Organisational unit
09764 - Tramèr, Florian
Notes
Poster presentation on December 12, 2023
ETH Bibliography
yes