Conference Paper
Rights / license: In Copyright - Non-Commercial Use Permitted
Abstract: Low-precision neural networks represent both weights and activations with few bits, drastically reducing multiplication complexity. Nonetheless, these products are accumulated using high-precision (typically 32-bit) additions, an operation that dominates the arithmetic complexity of inference when extreme quantization (e.g., binary weights) is used. To further optimize inference, we propose WrapNet, which adapts neural networks to use low-precision (8-bit) additions in the accumulators while achieving classification accuracy comparable to their 32-bit counterparts. We achieve resilience to low-precision accumulation by inserting a cyclic activation layer and an overflow penalty regularizer. We demonstrate the efficacy of our approach on both software and hardware platforms.
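The two mechanisms named in the abstract can be illustrated with a minimal PyTorch-style sketch. The names `CyclicActivation` and `overflow_penalty`, and the exact functional forms below, are assumptions for illustration based only on the abstract's description; they are not taken from the paper itself.

```python
import torch
import torch.nn as nn

class CyclicActivation(nn.Module):
    """Sketch of a cyclic activation: wraps pre-activations into the
    signed range of a b-bit accumulator, mimicking integer wraparound
    so the network can learn to tolerate overflow (assumed form)."""
    def __init__(self, bits=8):
        super().__init__()
        self.half_range = 2 ** (bits - 1)  # e.g., 128 for 8 bits

    def forward(self, x):
        period = 2 * self.half_range
        # Modular wraparound into [-half_range, half_range)
        return torch.remainder(x + self.half_range, period) - self.half_range

def overflow_penalty(pre_acts, bits=8):
    """Sketch of an overflow penalty regularizer: penalizes
    pre-activation magnitudes that exceed the b-bit accumulator
    range, discouraging overflow during training (assumed form)."""
    half_range = 2 ** (bits - 1)
    excess = torch.relu(pre_acts.abs() - half_range)
    return excess.mean()

# Example: 300 and -200 overflow an 8-bit signed accumulator and wrap.
acts = torch.tensor([100.0, 300.0, -200.0])
print(CyclicActivation(bits=8)(acts))   # tensor([100.,  44.,  56.])
print(overflow_penalty(acts, bits=8))   # nonzero: two values exceed 128
```

In this sketch, the penalty term would be added to the task loss during training so that pre-activations are pushed back into the representable range, while the cyclic activation keeps the forward pass consistent with wraparound arithmetic at inference time.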
Organisational unit: 09695 - Studer, Christoph
Notes: Due to the coronavirus (COVID-19), the conference will be conducted virtually.