Tensor Decompositions in Deep Learning


Author / Producer

Date

2022

Publication Type

Doctoral Thesis

ETH Bibliography

yes

Abstract

Tensor decomposition is a subdomain of multilinear algebra concerned with the dimensionality reduction and analysis of multi-dimensional arrays (tensors). The field has numerous applications in physics, chemistry, and the life sciences, and, more recently, in machine learning, computer vision, and graphics. Despite the maturity of the field, much progress has been made in recent years thanks to affordable parallel compute, which drives empirical research. Deep learning is a young subdomain of machine learning concerned with fitting deep, non-linear parametric models in a non-convex optimization setting with abundant data. Interest in deep learning reached a tipping point when a neural network (AlexNet) set a record score on a popular image classification benchmark (ImageNet), promising to solve long-standing computer vision problems.

In recent years, most breakthroughs in deep learning have come from finding smarter ways to increase model size and complexity. However, the need to deploy deep models on edge devices, for example for computational photography on mobile phones, has set a new direction: finding lean models. At the same time, many high-potential deep learning techniques, such as Neural Radiance Fields (NeRF) and vision transformers, leave a large margin for efficiency improvements at inception.

In this thesis, we investigate the use of tensor decompositions in the context of modern deep learning techniques. We aim to improve two types of efficiency: memory footprint and runtime performance, measured in parameters and floating-point operations (FLOPs), respectively. We begin by exploring neural network layer compression schemes and propose a tensorized representation with a basis tensor shared among layers and per-layer coefficients. Subsequently, we study the manifold of fixed-rank Tensor Train (TT) tensors in the context of parameterizing the layers of Generative Adversarial Networks (GANs) and demonstrate the ability to compress networks while maintaining training stability. Finally, we utilize the TT parameterization to learn compressed NeRFs and devise sampling schemes with support for automatic differentiation to facilitate training. Unlike most previous works on tensor decompositions, we treat decompositions as models in the deep learning sense and update their parameters through backpropagation and optimization. As in prior work, the tensorized formats admit certain efficient algebraic operations, making them an appealing object at the intersection of two prominent research directions.
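To make the central object of the thesis concrete, below is a minimal sketch (in PyTorch) of a linear layer whose weight matrix is stored as Tensor Train cores and materialized on the fly, so the cores are trained directly by backpropagation, in the spirit of the abstract. The class name TTLinear, the mode/rank arguments, and the reshaping scheme are illustrative assumptions, not the exact formulation used in the thesis.

# Minimal sketch: a linear layer whose weight is stored as Tensor Train (TT)
# cores and trained end-to-end via autograd. Illustrative only; names and
# shapes are assumptions, not the thesis implementation.
import math
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    # The (M x N) weight, M = prod(out_modes), N = prod(in_modes), is viewed
    # as a tensor with merged modes m_k * n_k; core k has shape
    # (r_{k-1}, m_k * n_k, r_k), with boundary ranks r_0 = r_d = 1.
    def __init__(self, in_modes, out_modes, ranks):
        super().__init__()
        assert len(in_modes) == len(out_modes) == len(ranks) - 1
        assert ranks[0] == ranks[-1] == 1
        self.in_modes, self.out_modes = list(in_modes), list(out_modes)
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(ranks[k], out_modes[k] * in_modes[k], ranks[k + 1]))
            for k in range(len(in_modes))
        ])

    def materialize_weight(self):
        # Contract cores left to right; w keeps shape (1, running_prod, r_k).
        w = self.cores[0]
        for core in self.cores[1:]:
            w = torch.einsum("imr,rns->imns", w, core)
            w = w.reshape(1, -1, w.shape[-1])
        d = len(self.in_modes)
        # (1, prod_k m_k*n_k, 1) -> (m_1, n_1, ..., m_d, n_d)
        interleaved = [s for pair in zip(self.out_modes, self.in_modes) for s in pair]
        w = w.reshape(interleaved)
        # Group output modes before input modes, then flatten to the dense (M, N) weight.
        perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
        M, N = math.prod(self.out_modes), math.prod(self.in_modes)
        return w.permute(perm).reshape(M, N)

    def forward(self, x):
        # Gradients flow through the contraction into the TT cores.
        return x @ self.materialize_weight().t()

For instance, TTLinear(in_modes=[4, 8, 8, 4], out_modes=[4, 8, 8, 4], ranks=[1, 8, 8, 8, 1]) represents a 1024 x 1024 weight matrix (about 1.05 M dense parameters) with only 8448 core parameters, roughly a 124x reduction, which illustrates why such parameterizations are attractive for lean models.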

Publication status

published

Contributors

Examiner: Van Gool, Luc
Examiner: Rakhuba, Maxim
Examiner: Ballester-Ripoll, Rafael
Examiner: Pajarola, Renato
Examiner: Novikov, Alexander

Publisher

ETH Zurich

Subject

Deep Learning; Machine Learning; Tensor Decompositions; Network Compression; Neural Rendering

Organisational unit

03514 - Van Gool, Luc (emeritus)
