Exponentially Faster Language Modelling
dc.contributor.author: Belcak, Peter
dc.contributor.author: Wattenhofer, Roger
dc.date.accessioned: 2023-11-27T07:11:51Z
dc.date.available: 2023-11-20T18:48:10Z
dc.date.available: 2023-11-21T07:40:57Z
dc.date.available: 2023-11-21T14:20:30Z
dc.date.available: 2023-11-27T07:11:51Z
dc.date.issued: 2023-11-21
dc.identifier.other: 10.48550/arXiv.2311.10770
dc.identifier.uri: http://hdl.handle.net/20.500.11850/642884
dc.identifier.doi: 10.3929/ethz-b-000642884
dc.description.abstract: Language models only really need to use an exponential fraction of their neurons for individual inferences. As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs). While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving a 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering a 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights. (https://github.com/pbelcak/UltraFastBERT)
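Note on the technique described in the abstract: in a fast feedforward network the 4095 intermediate neurons of a layer are arranged as a binary tree of depth 12, and each inference evaluates only the 12 neurons on a single root-to-leaf path, with the sign of each neuron's pre-activation choosing the next branch. The PyTorch sketch below illustrates this conditional execution; it is a minimal illustration under assumptions (the class name FFFSketch, random initialization, GeLU activation, and per-sample path selection are illustrative), not the authors' released implementation, which is linked in the record below.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FFFSketch(nn.Module):
    # Hypothetical sketch of a fast feedforward (FFF) layer, for illustration
    # only. With depth 12 the tree holds 2^12 - 1 = 4095 neurons, of which a
    # single forward pass touches exactly 12 (one per tree level).
    def __init__(self, width: int, depth: int = 12):
        super().__init__()
        self.depth = depth
        n_nodes = 2 ** depth - 1
        self.w_in = nn.Parameter(torch.randn(n_nodes, width) / width ** 0.5)
        self.w_out = nn.Parameter(torch.randn(n_nodes, width) / width ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, width); each sample descends its own root-to-leaf path.
        node = torch.zeros(x.shape[0], dtype=torch.long, device=x.device)
        y = torch.zeros_like(x)
        for _ in range(self.depth):
            w1 = self.w_in[node]                    # (batch, width) gather
            pre = (x * w1).sum(dim=-1)              # pre-activation of the chosen neuron
            y = y + F.gelu(pre).unsqueeze(-1) * self.w_out[node]
            node = 2 * node + 1 + (pre > 0).long()  # sign picks left or right child
        return y

# Usage: 768-wide hidden states as in BERT-base; 12 of 4095 neurons fire per sample.
layer = FFFSketch(width=768)
out = layer(torch.randn(4, 768))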
dc.format: application/pdf
dc.language.iso: en
dc.publisher: Cornell University
dc.rights.uri: http://rightsstatements.org/page/InC-NC/1.0/
dc.subject: Language models
dc.subject: Feedforward neural network
dc.subject: Fast feedforward network
dc.subject: Model Acceleration
dc.title: Exponentially Faster Language Modelling
dc.type: Working Paper
dc.rights.license: In Copyright - Non-Commercial Use Permitted
ethz.journal.title: arXiv
ethz.pages.start: 2311.10770
ethz.size: 6 p.
ethz.version.edition: v2
ethz.identifier.arxiv: 2311.10770
ethz.publication.place: Ithaca, NY
ethz.publication.status: published
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02640 - Inst. f. Technische Informatik und Komm. / Computer Eng. and Networks Lab.::03604 - Wattenhofer, Roger / Wattenhofer, Roger
ethz.relation.isSupplementedBy: https://github.com/pbelcak/UltraFastBERT
ethz.date.deposited: 2023-11-20T18:48:10Z
ethz.source: FORM
ethz.eth: yes
ethz.availability: Open access
ethz.rosetta.installDate: 2023-11-27T07:11:57Z
ethz.rosetta.lastUpdated: 2023-11-27T07:11:57Z
ethz.rosetta.versionExported: true