Statistical Mechanics and Information Theory in Approximate Robust Inference
Doctoral Thesis
Rights / license: In Copyright - Non-Commercial Use Permitted
Dealing with noisy inputs to optimization problems has been one of the central tasks in the field of inference since its inception. The reason is clear: no data comes without measurement uncertainty. Robustly optimizing under such uncertainty is the first cornerstone of this thesis. The second cornerstone stems from the idea of viewing uncertain optimization problems (combinatorial ones in our case) as large disordered systems and analyzing their optimization behavior with the methods of statistical mechanics. Questions concerning optimization methods per se, e.g. those related to a specific optimization procedure or guarantees thereof, are not considered in this work.

More precisely, we first address robust optimization under uncertainty by means of an approximation set-based approach to inference. Here, the approximation set-based approach refers to a family of methods that regularize Empirical Risk Minimization (ERM) by sampling from a set of near-optimal solutions instead of returning the ERM optimizer. We describe this approach and design proof-of-concept experiments which support its usability. In addition, we address one of the known computational bottlenecks of the approach by proving and testing theoretical results.

Second, we extend the approximation set-based approach to algorithmic combinatorial optimization under uncertain input, picking as an example its application to the Minimum Spanning Tree (MST) problem. In short, the aim is to perform approximation by means of optimal stopping of the algorithm. We derive a fast computation pipeline which is free of the computational bottlenecks mentioned above; we then carry out experiments revealing that our approach not only robustly solves MST problems, but also yields a score that ranks various algorithms according to their expected localization error, as in model validation.

Next, we gradually shift our focus to Maximum Entropy inference for combinatorial optimization problems. Viewing them as large disordered particle systems (Mezard and Montanari, 2009) driven by energy optimization via Gibbs distributions, we rigorously study a so-called Gibbs relaxation of the approximation set-based approach. Surprisingly, this study leads to a prominent mathematical problem, the asymptotically precise computation of the free energy, which we then fully solve for a special class of combinatorial optimization problems. Thus, in this part of the thesis we fulfill a twofold task: on the one hand, we study properties of the Gibbs relaxation of approximate robust optimization; on the other, we prove a theoretical result which is of independent interest in statistical mechanics.

Last, we move away from approximate inference while remaining in the statistical-mechanical setting. Inspired by the theoretical results described in the previous paragraph, we ask fundamental questions about the statistical mechanics of combinatorial optimization problems and how their properties define their solution structure. We pick a combinatorial optimization problem, the sparse Minimum Bisection Problem (sMBP), and compare its asymptotic behavior with that of the well-known Random Energy Model (REM, Derrida, 1981), leading to interesting conjectures about differences in their search complexity.
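To illustrate the approximation set-based idea described above, the following is a minimal sketch, not the thesis' actual pipeline: instead of returning the single ERM optimizer, one collects the candidates whose empirical risk lies within a margin of the optimum (the approximation set) and samples one of them under a Gibbs distribution. The function name and the parameters gamma and beta are hypothetical placeholders.

import math
import random

def sample_from_approximation_set(candidates, risk, gamma=0.1, beta=5.0):
    """Sample a near-optimal solution instead of returning the ERM optimizer."""
    risks = [risk(c) for c in candidates]
    best = min(risks)
    # Approximation set: candidates whose empirical risk is within gamma of the optimum.
    approx = [(c, r) for c, r in zip(candidates, risks) if r <= best + gamma]
    # Gibbs weights: lower-risk members dominate more strongly as beta grows.
    weights = [math.exp(-beta * (r - best)) for _, r in approx]
    return random.choices([c for c, _ in approx], weights=weights, k=1)[0]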
External links: Search print copy at ETH Library
Organisational unit: 03659 - Buhmann, Joachim M.