ID: 1705.05070

Statistical Physics and Information Theory Perspectives on Linear Inverse Problems

May 15, 2017


Similar papers 2

A Statistical view of Iterative Methods for Linear Inverse Problems

April 4, 2005

88% Match
Ana K. Fermin, Carenne Ludena
Statistics Theory

In this article we study the problem of recovering the unknown solution of a linear ill-posed problem, via iterative regularization methods. We review the problem of projection-regularization from a statistical point of view. A basic purpose of the paper is the consideration of adaptive model selection for determining regularization parameters. This article introduces a new regularized estimator which has the best possible adaptive properties for a wide range of linear functi...
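As a point of reference for the iterative regularization methods this abstract surveys (an illustrative sketch, not the paper's own adaptive estimator), the classical Landweber iteration regularizes an ill-posed linear problem by early stopping:

```python
import numpy as np

def landweber(A, y, n_iter=100, step=None):
    """Landweber iteration x_{k+1} = x_k + step * A^T (y - A x_k).

    The iteration count n_iter acts as the regularization parameter:
    stopping early suppresses the amplification of noise by small
    singular values of A.
    """
    if step is None:
        # Convergence requires 0 < step < 2 / ||A||_2^2.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)
    return x

# Mildly ill-conditioned synthetic example (columns scaled toward zero)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(np.linspace(1.0, 0.01, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = landweber(A, y, n_iter=500)
```

Choosing n_iter adaptively from the data is exactly the kind of model-selection question the paper addresses.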


Computational and Statistical Tradeoffs via Convex Relaxation

November 5, 2012

88% Match
Venkat Chandrasekaran, Michael I. Jordan
Statistics Theory
Information Theory
Optimization and Control

In modern data analysis, one is frequently faced with statistical inference problems involving massive datasets. Processing such large datasets is usually viewed as a substantial computational challenge. However, if data are a statistician's main resource then access to more data should be viewed as an asset rather than as a burden. In this paper we describe a computational framework based on convex relaxation to reduce the computational complexity of an inference procedure w...


Optimal subgradient algorithms with application to large-scale linear inverse problems

February 28, 2014

88% Match
Masoud Ahookhosh
Optimization and Control

This study addresses some algorithms for solving structured unconstrained convex optimization problems using first-order information where the underlying function includes high-dimensional data. The primary aim is to develop an implementable algorithmic framework for solving problems with multi-term composite objective functions involving linear mappings using the optimal subgradient algorithm, OSGA, proposed by Neumaier in [49]. To this end, we propose some prox-functions...


Probabilistic Linear Solvers for Machine Learning

October 19, 2020

88% Match
Jonathan Wenger, Philipp Hennig
Machine Learning
Numerical Analysis

Linear systems are the bedrock of virtually all numerical computation. Machine learning poses specific challenges for the solution of such systems due to their scale, characteristic structure, stochasticity and the central role of uncertainty in the field. Unifying earlier work we propose a class of probabilistic linear solvers which jointly infer the matrix, its inverse and the solution from matrix-vector product observations. This class emerges from a fundamental set of des...
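As a deterministic point of comparison for solvers that observe a system only through matrix-vector products (illustrative, not the paper's probabilistic construction), plain conjugate gradients has the same access pattern:

```python
import numpy as np

def cg(matvec, b, n_iter=50, tol=1e-10):
    """Conjugate gradients for A x = b, where A is symmetric positive
    definite and is accessed only through the callable matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)       # residual
    p = r.copy()            # search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
M = rng.standard_normal((15, 15))
A = M @ M.T + 15 * np.eye(15)  # symmetric positive definite
b = rng.standard_normal(15)
x = cg(lambda v: A @ v, b)
```

The probabilistic solvers in the paper generalize this setting by also returning calibrated uncertainty over the matrix, its inverse, and the solution.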


Performance Trade-Offs in Multi-Processor Approximate Message Passing

April 10, 2016

88% Match
Junan Zhu, Ahmad Beirami, Dror Baron
Information Theory
Distributed, Parallel, and Cluster Computing
Machine Learning

We consider large-scale linear inverse problems in Bayesian settings. Our general approach follows a recent line of work that applies the approximate message passing (AMP) framework in multi-processor (MP) computational systems by storing and processing a subset of rows of the measurement matrix along with corresponding measurements at each MP node. In each MP-AMP iteration, nodes of the MP system and its fusion center exchange lossily compressed messages pertaining to their ...


Spectral Inference Methods on Sparse Graphs: Theory and Applications

October 14, 2016

87% Match
Alaa Saade
Disordered Systems and Neural Networks
Information Theory
Machine Learning

In an era of unprecedented deluge of (mostly unstructured) data, graphs are proving more and more useful, across the sciences, as a flexible abstraction to capture complex relationships between complex objects. One of the main challenges arising in the study of such networks is the inference of macroscopic, large-scale properties affecting a large number of objects, based solely on the microscopic interactions between their elementary constituents. Statistical physics, precis...


Sensing Theorems for Unsupervised Learning in Linear Inverse Problems

March 23, 2022

87% Match
Julián Tachella, Dongdong Chen, Mike Davies
Machine Learning
Image and Video Processing

Solving an ill-posed linear inverse problem requires knowledge about the underlying signal model. In many applications, this model is a priori unknown and has to be learned from data. However, it is impossible to learn the model using observations obtained via a single incomplete measurement operator, as there is no information about the signal model in the nullspace of the operator, resulting in a chicken-and-egg problem: to learn the model we need reconstructed signals, but...


Modern Regularization Methods for Inverse Problems

January 30, 2018

87% Match
Martin Benning, Martin Burger
Numerical Analysis

Regularization methods are a key tool in the solution of inverse problems. They are used to introduce prior knowledge and make the approximation of ill-posed (pseudo-)inverses feasible. In the last two decades interest has shifted from linear towards nonlinear regularization methods even for linear inverse problems. The aim of this paper is to provide a reasonably comprehensive overview of this development towards modern nonlinear regularization methods, including their analy...
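The classical linear baseline from which the surveyed nonlinear methods depart is Tikhonov regularization; a minimal sketch (illustrative, not taken from the paper):

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Solve min_x ||A x - y||^2 + alpha ||x||^2 via the normal
    equations (A^T A + alpha I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
x0 = tikhonov(A, y, alpha=0.0)   # plain least squares (alpha -> 0)
x1 = tikhonov(A, y, alpha=10.0)  # larger alpha shrinks the solution
```

Replacing the quadratic penalty ||x||^2 with a nonsmooth or nonconvex functional (total variation, l1, Bregman iterations) yields the modern nonlinear methods the paper reviews.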


Least squares approximations in linear statistical inverse learning problems

November 22, 2022

87% Match
Tapio Helin
Statistics Theory
Numerical Analysis

Statistical inverse learning aims at recovering an unknown function $f$ from randomly scattered and possibly noisy point evaluations of another function $g$, connected to $f$ via an ill-posed mathematical model. In this paper we blend statistical inverse learning theory with the classical regularization strategy of applying finite-dimensional projections. Our key finding is that coupling the number of random point evaluations with the choice of projection dimension, one can d...


AMP-Inspired Deep Networks for Sparse Linear Inverse Problems

December 4, 2016

87% Match
Mark Borgerding, Philip Schniter, Sundeep Rangan
Information Theory

Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: th...
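The classical, non-learned algorithm that such unrolled networks build on is iterative soft thresholding (ISTA) for sparse recovery; a hedged sketch of that baseline (not the paper's proposed architectures):

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """ISTA for the lasso problem min_x 0.5 ||A x - y||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + (A.T @ (y - A @ x)) / L                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Noiseless sparse recovery: 3 nonzeros, 40 measurements, 100 unknowns
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=500)
```

Networks like LISTA and the AMP-inspired variants in this paper learn the step sizes and thresholds of exactly this kind of iteration, layer by layer.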
