Multilevel higher order Quasi-Monte Carlo Bayesian Estimation


METADATA ONLY

Date

2016-07

Publication Type

Report

ETH Bibliography

yes

Abstract

We provide deterministic approximation algorithms for Bayesian inverse problems for operator equations with "noisy" input data. The algorithms use a multilevel (ML) approach based on deterministic, higher order quasi-Monte Carlo (HoQMC) quadrature to approximate the high-dimensional expectations that arise in the Bayesian estimators, and a Petrov-Galerkin (PG) method to approximate the solution of the underlying partial differential equation (PDE). This extends the previous single-level approach of [J. Dick, R. N. Gantner, Q. T. Le Gia and Ch. Schwab, Higher order Quasi-Monte Carlo integration for Bayesian Estimation, Report 2016, Seminar for Applied Mathematics, ETH Zurich (in review)]. Compared to the single-level approach, the present convergence analysis of the multilevel method requires stronger assumptions on the holomorphy and regularity of the countably-parametric uncertainty-to-observation maps of the forward problem. As in the single-level case and in the affine-parametric case analyzed in [J. Dick, F. Y. Kuo, Q. T. Le Gia and Ch. Schwab, Multi-level higher order QMC Galerkin discretization for affine parametric operator equations, accepted for publication in SIAM J. Numer. Anal., 2016], we obtain sufficient conditions under which arbitrarily high, algebraic convergence rates in terms of work can be achieved, independent of the dimension of the parameter space. The convergence rates are limited only by the spatial regularity of the forward problem, by the discretization order achieved by the Petrov-Galerkin discretization, and by the sparsity of the uncertainty parametrization. We provide detailed numerical experiments for linear elliptic problems in two space dimensions, with s = 1024 parameters characterizing the uncertain input, confirming the theory and showing that the ML HoQMC algorithms outperform both multilevel Monte Carlo (MLMC) methods and single-level (SL) HoQMC methods in terms of error versus computational work.
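The multilevel idea summarized in the abstract replaces a single expensive quadrature at the finest discretization level by the standard telescoping sum over levels, spending many QMC points on cheap coarse levels and few on expensive fine ones. The Python sketch below illustrates this structure on a toy problem only: the rank-1 lattice rule with generating vector `z`, the point counts, and the level-l map `G` (which mimics a Petrov-Galerkin solve with an O(4^-l) discretization error) are illustrative placeholders, not the CBC-constructed interlaced polynomial lattice rules or PDE solvers analyzed in the report.

```python
import numpy as np

def lattice_points(n, z, shift):
    # Randomly shifted rank-1 lattice rule: x_i = frac(i * z / n + shift)
    i = np.arange(n)[:, None]
    return (i * z / n + shift) % 1.0

def ml_qmc_estimate(G, L, n0, z, seed=0):
    """Multilevel QMC estimator of E[G_L] via the telescoping sum
       E[G_L] = E[G_0] + sum_{l=1..L} E[G_l - G_{l-1}],
    halving the number of QMC points with each refinement level."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for l in range(L + 1):
        n = max(n0 >> l, 8)                  # fewer points on finer levels
        pts = lattice_points(n, z, rng.random(len(z)))
        if l == 0:
            vals = G(pts, 0)                 # coarsest-level term
        else:
            vals = G(pts, l) - G(pts, l - 1) # level-l correction
        total += vals.mean()
    return total

# Toy "uncertainty-to-observation" map: a linear functional q(y) of the
# parameters, with an O(4^{-l}) error standing in for the PG discretization.
weights = 0.5 ** np.arange(1, 5)             # decaying parameter influence
def G(y, l):
    q = y @ weights                          # exact observation functional
    return q * (1.0 + 4.0 ** (-l))           # level-l approximation of q

z = np.array([1, 17, 29, 43])                # toy generating vector, s = 4
est = ml_qmc_estimate(G, L=4, n0=512, z=z)
exact = 0.5 * weights.sum()                  # E[q] = 0.46875
```

Under the assumed error decay, the coarse levels capture the bulk of the expectation and the level corrections shrink geometrically, which is what allows the multilevel estimator to beat the single-level one at equal work.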

Publication status

published

Volume

2016-34

Publisher

Seminar for Applied Mathematics, ETH Zurich

Subject

Higher order quasi-Monte Carlo; Parametric operator equations; Infinite-dimensional quadrature; Bayesian inverse problems; Uncertainty quantification; CBC construction; SPOD weights

Organisational unit

03435 - Schwab, Christoph / Schwab, Christoph
