Similar papers 2
July 8, 2016
We consider the estimation of a signal from the knowledge of its noisy linear random Gaussian projections, a problem relevant to compressed sensing, sparse superposition codes, and code division multiple access, to cite just a few. A number of works have considered the mutual information for this problem using the heuristic replica method from statistical physics. Here we put these considerations on a firm rigorous basis. First, we show, using a Guerra-type interpolatio...
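For orientation, the measurement model in question can be sketched as follows (the notation is ours, not the paper's, and the scaling is the one typically assumed in this literature):

$$ \mathbf{y} = \mathbf{\Phi}\,\mathbf{x} + \sigma\,\mathbf{z}, \qquad \Phi_{\mu i} \sim \mathcal{N}(0, 1/n) \ \text{i.i.d.}, \qquad \mathbf{z} \sim \mathcal{N}(0, \mathbf{I}_m), $$

with the object of interest being the per-component mutual information $\frac{1}{n} I(\mathbf{X}; \mathbf{Y})$ in the limit $n, m \to \infty$ at fixed measurement ratio $\alpha = m/n$.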
January 30, 2021
The integration and transfer of information from multiple sources to multiple targets is a core motif of neural systems. The emerging field of partial information decomposition (PID) provides a novel information-theoretic lens into these mechanisms by identifying synergistic, redundant, and unique contributions to the mutual information between one and several variables. While many works have studied aspects of PID for Gaussian and discrete distributions, the case of general...
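For two sources $S_1, S_2$ and a target $T$, the bookkeeping behind PID is conventionally written as below; this is the standard Williams-and-Beer-style decomposition, sketched here for context rather than quoted from the abstract:

$$ I(T; S_1, S_2) = \mathrm{Unq}(S_1) + \mathrm{Unq}(S_2) + \mathrm{Red} + \mathrm{Syn}, $$
$$ I(T; S_i) = \mathrm{Unq}(S_i) + \mathrm{Red}, \qquad i = 1, 2, $$

so that fixing any one of the four atoms (e.g., redundancy) determines the other three from ordinary mutual informations.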
November 24, 2022
Neural networks encode information through their collective spiking activity in response to external stimuli. This population response is noisy and strongly correlated, with a complex interplay between correlations induced by the stimulus and correlations caused by shared noise. Understanding how these correlations affect information transmission has so far been limited to pairs or small groups of neurons, because the curse of dimensionality impedes the evaluation of mutual in...
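To see where the curse of dimensionality enters, recall the definition of the stimulus-response mutual information (standard notation, not specific to this paper):

$$ I(S; \mathbf{R}) = \sum_{s} p(s) \sum_{\mathbf{r}} p(\mathbf{r} \mid s) \log_2 \frac{p(\mathbf{r} \mid s)}{p(\mathbf{r})}. $$

For $N$ neurons with binarized spike words $\mathbf{r} \in \{0,1\}^N$, the inner sum runs over $2^N$ response patterns, so directly estimating $p(\mathbf{r} \mid s)$ from data becomes infeasible beyond small groups of neurons.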
May 11, 2007
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under LDPC network coding and joint decoding. The saddle point equations for the replica symmetric solution are found for particular realizations of this channel, including small and large numbers of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and the sy...
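As a sketch of the setting (our notation; the normalization is an assumption, chosen so the analysis has a sensible large-system limit): $N$ transmitters send binary symbols through a Gaussian channel to $M$ receivers,

$$ \mathbf{y} = \frac{1}{\sqrt{N}}\, \mathbf{H}\mathbf{x} + \sigma \mathbf{n}, \qquad \mathbf{x} \in \{-1, +1\}^N, \qquad \mathbf{n} \sim \mathcal{N}(0, \mathbf{I}_M), $$

where $\mathbf{H}$ is the $M \times N$ channel matrix; the replica analysis then reduces the typical free energy to a set of saddle-point equations in a few order parameters.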
August 19, 2016
One of the most controversial problems in neural decoding is quantifying the information loss caused by ignoring noise correlations during optimal brain computations. For more than a decade, the measure here called $\Delta I^{DL}$ has been believed to be exact. However, we have recently shown that it can exceed the information loss $\Delta I^{B}$ caused by optimal decoders constructed ignoring noise correlations. Unfortunately, the different information notions underlying $\De...
August 12, 2010
We study the optimality conditions of information transfer in systems with memory in the low signal-to-noise ratio regime of vanishing input amplitude. We find that the mutual information is maximized by a maximum-variance signal time course whose correlation structure is determined by the Fisher information matrix. We illustrate the method on a simple biologically inspired model of an electro-sensory neuron. Our general results apply also to the study...
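A hedged sketch of the weak-signal structure described here: for a small random input with covariance matrix $C$ sent through a channel whose Fisher information matrix at the operating point is $J$, the leading-order expansion of the mutual information is

$$ I \;\approx\; \tfrac{1}{2}\, \mathrm{tr}\!\left( J C \right), $$

so under a fixed variance budget $\mathrm{tr}(C) \le P$ the optimum concentrates the signal variance along the principal eigenvector of $J$, i.e., a maximum-variance time course whose correlations are shaped by the Fisher information matrix. (This is the generic low-SNR heuristic; the paper's exact statement may differ in its details.)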
June 5, 1998
The influence of a macroscopic time-dependent threshold on the retrieval process of three-state extremely diluted neural networks is examined. If the threshold is chosen appropriately as a function of the noise and the pattern activity of the network, adapting itself in the course of the time evolution, it guarantees autonomous functioning of the network. It is found that this self-control mechanism considerably improves the retrieval quality, especially in the limit of low ...
March 4, 2019
Although the Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we conside...
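The Fisher-information approximation alluded to is usually written, for a scalar continuous stimulus $\theta$ with prior $p(\theta)$, in the standard asymptotic form (e.g., Brunel and Nadal), sketched here rather than quoted from the abstract:

$$ I(\Theta; \mathbf{R}) \;\approx\; H(\Theta) + \frac{1}{2} \int p(\theta) \log \frac{J(\theta)}{2\pi e}\, d\theta, \qquad J(\theta) = \mathbb{E}\!\left[ \left( \partial_\theta \log p(\mathbf{r} \mid \theta) \right)^2 \right]. $$

Because $J(\theta)$ requires differentiating the likelihood with respect to $\theta$, this route is unavailable for discrete encoded variables, which is the limitation the abstract points to.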
May 8, 2017
In recent years, important progress has been achieved towards proving the validity of the replica predictions for the (asymptotic) mutual information (or "free energy") in Bayesian inference problems. The proof techniques that have emerged appear to be quite general, despite having been worked out on a case-by-case basis. Unfortunately, a common point between all these schemes is their relatively high level of technicality. We present a new proof scheme that is quite straig...
April 14, 2020
Replica-mean-field models have been proposed to decipher the activity of neural networks via a multiply-and-conquer approach. In this approach, one considers limit networks made of infinitely many replicas with the same basic neural structure as that of the network of interest, but exchanging spikes in a randomized manner. The key point is that these replica-mean-field networks are tractable versions that retain important features of the finite structure of interest. To date,...