Abstract
Tensor network formats are an efficient tool for numerical computations in many dimensions, yet even this tool often becomes too time- and memory-consuming for a single compute node when applied to problems of scientific interest. To overcome such limitations, we present and analyse a parallelisation scheme for algorithms based on the Hierarchical Tucker Representation which distributes the network vertices and their associated computations over a set of distributed-memory processors. We then propose a modified version of the alternating least squares (ALS) algorithm for solving linear systems that is amenable to parallelisation according to the aforementioned scheme, and we highlight technical considerations important for obtaining an efficient and stable implementation. Our numerical experiments support the theoretical assertion that the parallel scaling of this algorithm is constrained only by the dimensionality and the rank uniformity of the targeted problem.
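The vertex-distribution idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the helper names, the balanced splitting of modes, and the round-robin rank assignment are all assumptions made for illustration only.

```python
# Sketch: distribute the vertices of a balanced Hierarchical Tucker (HT)
# dimension tree over a set of processor ranks. Illustrative only; the
# paper's actual distribution scheme may differ.

def build_dimension_tree(dims):
    """Return the vertices of a balanced binary dimension tree.

    Each vertex is represented by the tuple of tensor modes it covers;
    leaves cover a single mode, the root covers all of them.
    """
    root = tuple(dims)
    vertices = [root]
    frontier = [root]
    while frontier:
        v = frontier.pop()
        if len(v) > 1:  # split every interior vertex into two children
            mid = len(v) // 2
            left, right = v[:mid], v[mid:]
            vertices.extend([left, right])
            frontier.extend([left, right])
    return vertices

def assign_ranks(vertices, num_procs):
    """Round-robin map: vertex -> processor rank (a stand-in for an
    MPI rank; chosen here purely for illustration)."""
    return {v: i % num_procs for i, v in enumerate(vertices)}

# A d = 8 dimensional tensor yields a tree with 2*8 - 1 = 15 vertices.
tree = build_dimension_tree(range(8))
ranks = assign_ranks(tree, num_procs=4)
```

Because each vertex's contractions touch only its parent and children, such a mapping lets the per-vertex computations proceed concurrently, which is consistent with the abstract's claim that scaling is limited mainly by the dimensionality and rank uniformity of the problem.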
Publication status
published
Journal / Series
Research Report
Volume
Publisher
Seminar für Angewandte Mathematik, ETH
Organisational unit
03435 - Schwab, Christoph / Schwab, Christoph
ETH Bibliography
yes