Journal: Information and Inference: A Journal of the IMA


Publisher: Oxford University Press

ISSN: 2049-8764, 2049-8772

Search Results

Publications 1 - 10 of 14
  • Kireeva, Anastasia; Mourrat, Jean-Christophe (2024)
    Information and Inference: A Journal of the IMA
    Let S and S̃ be two independent and identically distributed random variables, which we interpret as the signal, and let P1 and P2 be two communication channels. We can choose between two measurement scenarios: either we observe S through P1 and P2, and also S̃ through P1 and P2; or we observe S twice through P1, and S̃ twice through P2. In which of these two scenarios do we obtain more information about the signal (S, S̃)? While the first scenario always yields more information when P1 and P2 are additive Gaussian channels, we give examples showing that this property does not extend to arbitrary channels. As a consequence of this result, we show that the continuous-time mutual information arising in the setting of community detection on sparse stochastic block models is not concave, even in the limit of large system size. This stands in contrast to the case of models with diverging average degree, and brings additional challenges to the analysis of the asymptotic behavior of this quantity.
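As a concrete illustration of the comparison in this abstract, the following minimal sketch evaluates both measurement scenarios exactly for binary symmetric channels. The crossover probabilities are hypothetical choices for illustration; the sketch does not reproduce the paper's counterexamples.

```python
import numpy as np
from itertools import product

def bsc(p):
    """Binary symmetric channel with crossover probability p: row s is P(y | s)."""
    return np.array([[1.0 - p, p], [p, 1.0 - p]])

def mutual_information(channels):
    """I(S; Y_1, ..., Y_k) in bits for S uniform on {0, 1} sent once through each
    channel, with the outputs conditionally independent given S."""
    pS = np.array([0.5, 0.5])
    outputs = list(product([0, 1], repeat=len(channels)))
    joint = np.zeros((2, len(outputs)))
    for s in (0, 1):
        for j, ys in enumerate(outputs):
            joint[s, j] = pS[s] * np.prod([ch[s, y] for ch, y in zip(channels, ys)])
    pY = joint.sum(axis=0)
    return float(np.sum(joint * np.log2(joint / (pS[:, None] * pY[None, :]))))

P1, P2 = bsc(0.1), bsc(0.3)   # hypothetical crossover probabilities

# Scenario 1: S through P1 and P2, and independently S~ through P1 and P2.
I1 = 2 * mutual_information([P1, P2])
# Scenario 2: S twice through P1, and S~ twice through P2.
I2 = mutual_information([P1, P1]) + mutual_information([P2, P2])
print(f"scenario 1: {I1:.4f} bits, scenario 2: {I2:.4f} bits")
```

Because (S, its observations) is independent of (S̃, its observations), the total mutual information in each scenario is the sum of the per-signal terms, which is what the two expressions above compute.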
  • Abdalla, Pedro; Kümmerle, Christian (2022)
    Information and Inference: A Journal of the IMA
    The recovery of signals that are sparse not in a given basis, but rather sparse with respect to an over-complete dictionary is one of the most flexible settings in the field of compressed sensing with numerous applications. As in the standard compressed sensing setting, it is possible that the signal can be reconstructed efficiently from a few linear measurements, for example, by the so-called ℓ1-synthesis method. However, it has been less well understood which measurement matrices provably work for this setting. Whereas in the standard setting, it has been shown that even certain heavy-tailed measurement matrices can be used in the same sample complexity regime as Gaussian matrices, comparable results are only available for the restrictive class of sub-Gaussian measurement matrices as far as the recovery of dictionary-sparse signals via ℓ1-synthesis is concerned. In this work, we fill this gap and establish optimal guarantees for the recovery of vectors that are (approximately) sparse with respect to a dictionary via the ℓ1-synthesis method from linear, potentially noisy measurements for a large class of random measurement matrices. In particular, we show that random measurements that fulfill only a small-ball assumption and a weak moment assumption, such as random vectors with i.i.d. Student-t entries with a logarithmic number of degrees of freedom, lead to comparable guarantees as (sub-)Gaussian measurements. Our results apply for a large class of both random and deterministic dictionaries. As a technical tool, we show a bound on the expectation of the sum of squared order statistics under very general assumptions, which might be of independent interest. As a corollary of our results, we also obtain a slight improvement on the weakest assumption on a measurement matrix with i.i.d. rows sufficient for uniform recovery in standard compressed sensing, improving on results by Lecué and Mendelson and by Dirksen, Lecué and Rauhut.
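The following is a minimal sketch of ℓ1-synthesis recovery, assuming an ISTA solver for the Lagrangian form min_z ½‖ADz − y‖² + λ‖z‖₁ with heavy-tailed (Student-t) measurements as mentioned in the abstract. All sizes, the regularization weight and the degrees of freedom below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_synthesis(A, D, y, lam, n_iter=2000):
    """ISTA for min_z 0.5 * ||A D z - y||^2 + lam * ||z||_1; returns x_hat = D z_hat."""
    B = A @ D
    step = 1.0 / np.linalg.norm(B, 2) ** 2      # 1 / Lipschitz constant of the gradient
    z = np.zeros(B.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - step * (B.T @ (B @ z - y)), step * lam)
    return D @ z

rng = np.random.default_rng(0)
n, d, m, s = 64, 128, 40, 4                     # illustrative sizes
D = rng.standard_normal((n, d))
D /= np.linalg.norm(D, axis=0)                  # over-complete dictionary, unit columns
z_true = np.zeros(d)
z_true[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
x_true = D @ z_true                             # dictionary-sparse signal
A = rng.standard_t(df=6, size=(m, n)) / np.sqrt(m)   # heavy-tailed Student-t rows
y = A @ x_true + 0.01 * rng.standard_normal(m)

x_hat = l1_synthesis(A, D, y, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```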
  • Debiased LASSO under Poisson-Gauss model
    Item type: Journal Article
    Abdalla, Pedro; Kur, Gil (2025)
    Information and Inference: A Journal of the IMA
    Quantifying uncertainty in high-dimensional sparse linear regression is a fundamental task in statistics that arises in various applications. One of the most successful methods for quantifying uncertainty is the debiased LASSO, which has a solid theoretical foundation but is restricted to settings where the noise is purely additive. Motivated by real-world applications, we study the so-called Poisson inverse problem with additive Gaussian noise and propose a debiased LASSO algorithm that only requires $n \gg s\log^{2} p$ samples, which is optimal up to a logarithmic factor.
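For context, the classical debiasing step for purely additive noise takes β̂_d = β̂ + M Xᵀ(y − Xβ̂)/n, where M approximates the inverse sample covariance. The sketch below implements this standard recipe, not the Poisson-Gauss algorithm proposed in the paper; the ridge surrogate for M and all constants are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso(X, y, lam=0.1, ridge=0.05):
    """Standard one-step debiasing for additive noise:
    beta_d = beta_hat + M X^T (y - X beta_hat) / n, with M ~ Sigma^{-1}.
    (Not the Poisson-Gauss variant studied in the paper.)"""
    n, p = X.shape
    beta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    M = np.linalg.inv(X.T @ X / n + ridge * np.eye(p))  # crude precision surrogate
    return beta, beta + M @ X.T @ (y - X @ beta) / n

rng = np.random.default_rng(1)
n, p, s = 200, 50, 5                         # illustrative sizes
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta_hat, beta_d = debiased_lasso(X, y)
# The LASSO shrinks active coordinates toward zero; the one-step correction
# removes most of that bias on the support.
print("mean bias on support, lasso:   ", (beta_hat - beta_true)[:s].mean())
print("mean bias on support, debiased:", (beta_d - beta_true)[:s].mean())
```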
  • Grohs, Philipp; Sprecher, Markus (2016)
    Information and Inference: A Journal of the IMA
  • Super-resolution radar
    Item type: Journal Article
    Heckel, Reinhard; Morgenshtern, Veniamin I.; Soltanolkotabi, Mahdi (2016)
    Information and Inference: A Journal of the IMA
  • Loeffler, Matthias; Wein, Alexander S.; Bandeira, Afonso S. (2022)
    Information and Inference: A Journal of the IMA
  • Riegler, Erwin; Koliander, Günther; Bölcskei, Helmut (2023)
    Information and Inference: A Journal of the IMA
    This paper is concerned with the lossy compression of general random variables, specifically with rate-distortion theory and quantization of random variables taking values in general measurable spaces such as manifolds and fractal sets. Manifold structures are prevalent in data science, e.g., in compressed sensing, machine learning, image processing and handwritten digit recognition. Fractal sets find application in image compression and in the modeling of Ethernet traffic. Our main contributions are bounds on the rate-distortion function and the quantization error. These bounds are very general and essentially only require the existence of reference measures satisfying certain regularity conditions in terms of small ball probabilities. To illustrate the wide applicability of our results, we particularize them to random variables taking values in (i) manifolds, namely, hyperspheres and Grassmannians, and (ii) self-similar sets characterized by iterated function systems satisfying the weak separation property.
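A toy empirical counterpart to these quantization bounds, assuming a plain Lloyd (k-means) codebook for a uniform random variable on the hypersphere S²; it only illustrates how empirical distortion decays as the codebook size (and hence the rate) grows, and does not implement the paper's bounds.

```python
import numpy as np

def sphere_sample(n, d, rng):
    """n points drawn uniformly on the unit sphere S^{d-1}."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def lloyd_distortion(X, R, rng, n_iter=50):
    """Plain Lloyd / k-means: fit an R-point codebook, return mean squared distortion."""
    C = X[rng.choice(len(X), R, replace=False)].copy()
    for _ in range(n_iter):
        assign = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for r in range(R):
            if np.any(assign == r):
                C[r] = X[assign == r].mean(axis=0)
    return ((X - C[assign]) ** 2).sum(-1).mean()

rng = np.random.default_rng(2)
X = sphere_sample(2000, 3, rng)          # a random variable supported on S^2
for R in (4, 16, 64):
    print(f"rate {np.log2(R):.0f} bits: distortion {lloyd_distortion(X, R, rng):.4f}")
```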
  • Heckel, Reinhard; Tschannen, Michael; Bölcskei, Helmut (2017)
    Information and Inference: A Journal of the IMA
    Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations and dimensions are all unknown. In practice, one may have access to dimensionality-reduced observations of the data only, resulting, e.g., from undersampling due to complexity and speed constraints on the acquisition device or mechanism. More pertinently, even if the high-dimensional data set is available, it is often desirable to first project the data points into a lower-dimensional space and to perform clustering there; this reduces storage requirements and computational cost. The purpose of this article is to quantify the impact of dimensionality reduction through random projection on the performance of three subspace clustering algorithms, all of which are based on principles from sparse signal recovery. Specifically, we analyze the thresholding-based subspace clustering (TSC) algorithm, the sparse subspace clustering (SSC) algorithm and an orthogonal matching pursuit variant thereof (SSC-OMP). We find, for all three algorithms, that dimensionality reduction down to the order of the subspace dimensions is possible without incurring significant performance degradation. Moreover, these results are order-wise optimal in the sense that reducing the dimensionality further leads to a fundamentally ill-posed clustering problem. Our findings carry over to the noisy case as illustrated through analytical results for TSC and simulations for SSC and SSC-OMP. Extensive experiments on synthetic and real data complement our theoretical findings.
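A minimal sketch of the TSC algorithm applied after random projection, under assumed dimensions and neighborhood size q; the exponential edge weights follow the usual TSC construction, but every constant here is illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score

def tsc(X, n_clusters, q):
    """Thresholding-based subspace clustering: keep the q largest absolute inner
    products per point, weight them, and spectrally cluster the resulting graph."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)      # columns are data points
    G = np.abs(Xn.T @ Xn)
    np.fill_diagonal(G, 0.0)
    A = np.zeros_like(G)
    for i in range(G.shape[0]):
        nn = np.argsort(G[i])[-q:]                         # q nearest neighbors
        A[i, nn] = np.exp(-2.0 * np.arccos(np.clip(G[i, nn], 0.0, 1.0)))
    A = A + A.T                                            # symmetric adjacency
    return SpectralClustering(n_clusters, affinity="precomputed").fit_predict(A)

rng = np.random.default_rng(3)
ambient, subdim, per_subspace, k = 100, 5, 80, 3           # illustrative sizes
X = np.hstack([rng.standard_normal((ambient, subdim))
               @ rng.standard_normal((subdim, per_subspace)) for _ in range(k)])
proj_dim = 4 * subdim          # reduce to the order of the subspace dimensions
P = rng.standard_normal((proj_dim, ambient)) / np.sqrt(proj_dim)
labels = tsc(P @ X, n_clusters=k, q=6)
truth = np.repeat(np.arange(k), per_subspace)
print("adjusted Rand index:", adjusted_rand_score(truth, labels))
```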
  • Liu, Ping; Ammari, Habib (2023)
    Information and Inference: A Journal of the IMA
  • Graczyk, Robert; Lapidoth, Amos; Wigger, Michèle (2022)
    Information and Inference: A Journal of the IMA
    Two variations on Wyner’s common information are proposed: conditional common information and relevant common information. These are shown to have operational meanings analogous to those of Wyner’s common information in appropriately defined distributed problems of compression, simulation and channel synthesis. For relevant common information, an additional operational meaning is identified: on a multiple-access channel with private and common messages, it is the minimal common-message rate that enables communication at the maximum sum-rate under a weak coordination constraint on the inputs and output. En route, the weak-coordination problem over a Gray-Wyner network is solved under the no-excess-rate constraint.
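For reference, the classical quantity that both variations build on can be stated as follows; the conditional and relevant versions defined in the paper are not restated here.

```latex
% Wyner's common information of a pair (X, Y) with joint law P_{XY}:
\[
  C(X;Y) \;=\; \min_{P_{W \mid XY}\,:\; X - W - Y} \; I(X,Y\,;W),
\]
% the least rate of an auxiliary variable W making X and Y conditionally
% independent (X - W - Y denotes a Markov chain).
```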