Metadata only
Date
2016-06-24
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Since its introduction, Word2Vec and its variants have been widely used to learn semantics-preserving representations of words or entities in an embedding space, which can be used to produce state-of-the-art results for various Natural Language Processing tasks. Existing implementations aim to learn efficiently by running multiple threads in parallel while operating on a single model in shared memory, ignoring incidental memory update collisions. We show that these collisions can degrade the efficiency of parallel learning, and we propose a straightforward caching strategy that improves efficiency by a factor of 4.
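The abstract only sketches the idea of buffering embedding updates per thread instead of writing every update directly into the shared model. The following is a minimal illustrative sketch of that general caching pattern, not the paper's actual implementation; the vocabulary size, embedding dimensionality, flush interval, and placeholder gradient are all assumptions made for the example.

```cpp
// Hedged sketch: each training thread accumulates embedding updates in a
// private cache and only periodically writes them back to the shared model,
// reducing how often threads collide on the same memory locations.
// All constants and names below are illustrative assumptions.
#include <thread>
#include <unordered_map>
#include <vector>

constexpr int kDim = 100;          // embedding dimensionality (assumption)
constexpr int kFlushInterval = 64; // buffered updates before write-back (assumption)

std::vector<std::vector<float>> shared_model; // one row per word, shared by all threads

void train_thread(const std::vector<int>& word_stream) {
    // Per-thread cache: word id -> accumulated update vector.
    std::unordered_map<int, std::vector<float>> cache;
    int updates = 0;

    for (int w : word_stream) {
        // Placeholder gradient; real Word2Vec code would compute this via
        // negative sampling or hierarchical softmax.
        std::vector<float> grad(kDim, 0.01f);

        // Accumulate into the private cache instead of touching the shared
        // rows on every step.
        auto& acc = cache.try_emplace(w, std::vector<float>(kDim, 0.0f)).first->second;
        for (int d = 0; d < kDim; ++d) acc[d] += grad[d];

        if (++updates % kFlushInterval == 0) {
            // Flush the buffered updates to the shared model in one pass.
            for (auto& [id, vec] : cache)
                for (int d = 0; d < kDim; ++d)
                    shared_model[id][d] += vec[d];
            cache.clear();
        }
    }
}

int main() {
    shared_model.assign(1000, std::vector<float>(kDim, 0.0f)); // 1000-word vocab (assumption)
    std::vector<int> stream(10000);
    for (size_t i = 0; i < stream.size(); ++i) stream[i] = static_cast<int>(i % 1000);

    std::thread t1(train_thread, stream), t2(train_thread, stream);
    t1.join();
    t2.join();
    return 0;
}
```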
Publication status
published
External links
Journal / series
arXiv
Pages / Article No.
Publisher
Cornell University
Event
Organisational unit
09462 - Hofmann, Thomas / Hofmann, Thomas