Open access
Date
2023-11-21
Type
- Working Paper
ETH Bibliography
yes
Abstract
Language models only really need to use an exponential fraction of their neurons for individual inferences. As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs). While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights. (https://github.com/pbelcak/UltraFastBERT)
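The abstract's "12 out of 4095 neurons" figure follows from the fast-feedforward design: the layer's neurons form a binary tree, and each inference evaluates only the single root-to-leaf path (12 nodes for a tree of 4095 nodes), with the sign of each node's pre-activation choosing the next child. The sketch below illustrates this conditional execution for a single input vector; it is a minimal illustration, not the UltraFastBERT implementation, and the class name, depth, and initialization are assumptions chosen for clarity.

```python
import torch
from torch import nn


class FFFSketch(nn.Module):
    """Minimal sketch of a fast feedforward network (FFF) at inference
    time for one input vector. Illustrative only; hyperparameters and
    names are not taken from the UltraFastBERT code base."""

    def __init__(self, dim: int, depth: int = 11):
        super().__init__()
        self.depth = depth
        n_nodes = 2 ** (depth + 1) - 1  # 4095 nodes for depth 11
        # one single-neuron node per tree position: input and output weights
        self.w_in = nn.Parameter(torch.randn(n_nodes, dim) / dim ** 0.5)
        self.w_out = nn.Parameter(torch.randn(n_nodes, dim) / dim ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (dim,) -- a single token; batching omitted for clarity
        y = torch.zeros_like(x)
        node = 0
        for _ in range(self.depth + 1):  # visits depth + 1 = 12 nodes
            act = x @ self.w_in[node]  # scalar pre-activation of this node
            y = y + torch.nn.functional.gelu(act) * self.w_out[node]
            # the sign of the pre-activation picks which child to descend to
            node = 2 * node + (1 if act > 0 else 2)
        return y


# usage: only 12 of the 4095 node neurons are touched for this input
fff = FFFSketch(dim=768)
out = fff(torch.randn(768))
```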
Permanent link
https://doi.org/10.3929/ethz-b-000642884
Publication status
published
Journal / series
arXiv
Pages / Article No.
Publisher
Cornell University
Edition / version
v2
Subject
Language models; Feedforward neural network; Fast feedforward network; Model Acceleration
Organisational unit
03604 - Wattenhofer, Roger / Wattenhofer, Roger
Related publications and datasets
Is supplemented by: https://github.com/pbelcak/UltraFastBERT