Similar papers
December 24, 2007
Precise characterization of the mutual information of MIMO systems is required to assess the throughput of wireless communication channels in the presence of Rician fading and spatial correlation. Here, we present an asymptotic approach that approximates the distribution of the mutual information by a Gaussian distribution, yielding both the average achievable rate and the outage probability. More precisely, the mean and variance of the mutual information of t...
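Under the Gaussian approximation, the outage probability at a target rate reduces to a Gaussian tail evaluation once the mean and variance of the mutual information are in hand. A minimal sketch; the function name and numeric values are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def outage_probability(rate, mi_mean, mi_var):
    """P(I < rate) when the mutual information I is approximated as
    Gaussian with the given mean and variance."""
    return norm.cdf((rate - mi_mean) / np.sqrt(mi_var))

# Illustrative numbers: mean 8 bits/s/Hz, variance 1.5, target rate 6.
print(outage_probability(rate=6.0, mi_mean=8.0, mi_var=1.5))
```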
October 12, 2017
Multilayer (or deep) networks are powerful probabilistic models based on multiple stages of a linear transform followed by a non-linear (possibly random) function. In general, the linear transforms are defined by matrices and the non-linear functions are defined by information channels. These models have gained great popularity due to their ability to characterize complex probabilistic relationships arising in a wide variety of inference problems. The contribution of this pap...
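The model class is straightforward to state in code: each stage applies a linear transform and then passes the result through a (possibly random) channel. A minimal two-stage sketch; the dimensions and the noisy-sign channel are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sign_channel(z, noise_std):
    # A simple random non-linearity: add Gaussian noise, then take the sign.
    return np.sign(z + noise_std * rng.standard_normal(z.shape))

x = rng.standard_normal(100)                        # input signal
W1 = rng.standard_normal((80, 100)) / np.sqrt(100)  # stage-1 linear transform
W2 = rng.standard_normal((50, 80)) / np.sqrt(80)    # stage-2 linear transform

h = noisy_sign_channel(W1 @ x, noise_std=0.1)       # first stage
y = noisy_sign_channel(W2 @ h, noise_std=0.1)       # second stage (output)
```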
February 1, 2021
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative and general arguments, we discuss, categorize, and...
January 20, 2017
We consider the estimation of a signal from the knowledge of its noisy linear random Gaussian projections. A few examples where this problem is relevant are compressed sensing, sparse superposition codes, and code division multiple access. There have been a number of works considering the mutual information for this problem using the replica method from statistical physics. Here we put these considerations on a firm rigorous basis. First, we show, using a Guerra-Toninelli type...
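The measurement model is the standard y = Ax + noise with an i.i.d. Gaussian matrix A. A small sketch of generating one instance; the sparsity level, noise scale, and 1/sqrt(m) normalization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

m, n, k = 200, 120, 15                    # signal dim, measurements, nonzeros
x = np.zeros(m)                           # k-sparse signal (one possible prior)
x[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, m)) / np.sqrt(m)   # random Gaussian projections
y = A @ x + 0.05 * rng.standard_normal(n)      # noisy observations
```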
May 24, 2006
The inclusion of a macroscopic adaptive threshold is studied for the retrieval dynamics of layered feedforward neural network models with synaptic noise. It is shown that if the threshold is chosen appropriately as a function of the cross-talk noise and of the activity of the stored patterns, adapting itself automatically in the course of the recall process, autonomous functioning of the network is guaranteed. This self-control mechanism considerably improves the quality of...
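The self-control mechanism can be sketched as a threshold that tracks an estimate of the cross-talk noise at every time step of the recall dynamics. The functional form c(a) = sqrt(-2 ln a) below is a sparse-coding heuristic and an assumption here, not necessarily the expression derived in the paper:

```python
import numpy as np

def self_control_threshold(crosstalk_std, activity):
    """Adaptive threshold theta_t = c(a) * Delta_t, where Delta_t is the
    current cross-talk noise level and a is the pattern activity.
    c(a) = sqrt(-2 ln a) is an assumed sparse-coding form."""
    return np.sqrt(-2.0 * np.log(activity)) * crosstalk_std

# Example: sparse patterns (activity a = 0.05) with unit cross-talk noise.
print(self_control_threshold(crosstalk_std=1.0, activity=0.05))
```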
November 4, 2016
While Shannon's mutual information has widespread applications in many disciplines, for practical applications it is often difficult to calculate its value accurately for high-dimensional variables because of the curse of dimensionality. This paper is focused on effective approximation methods for evaluating mutual information in the context of neural population coding. For large but finite neural populations, we derive several information-theoretic asymptotic bounds and appr...
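One widely used asymptotic of this kind relates the mutual information to the Fisher information of the population, in the style of Brunel and Nadal; whether it coincides with the paper's bounds is not assumed. A sketch in nats:

```python
import numpy as np

def mi_fisher_approx(stim_entropy_nats, logdet_fisher_samples, stim_dim):
    """Large-population approximation, in nats:
    I(X;R) ~ H(X) + E_x[0.5 * log det J(x)] - (d/2) * log(2*pi*e)."""
    return (stim_entropy_nats
            + 0.5 * np.mean(logdet_fisher_samples)
            - 0.5 * stim_dim * np.log(2.0 * np.pi * np.e))

# Example: 1-D stimulus uniform on [0, 1] (H = 0 nats) and N neurons each
# contributing a constant Fisher information j0 (illustrative numbers).
N, j0 = 1000, 2.0
print(mi_fisher_approx(0.0, np.log([N * j0]), stim_dim=1))
```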
August 2, 2007
The inclusion of a macroscopic adaptive threshold is studied for the retrieval dynamics of both layered feedforward and fully connected neural network models with synaptic noise. These two architectures require different numerical methods to solve. In both cases it is shown that, if the threshold is chosen appropriately as a function of the cross-talk noise and of the activity of the stored patterns, adapting itself automatically in the course of the recall pr...
September 27, 2021
We present an information-theoretic formalism to study signal transduction in four architectural variants of a model two-step cascade with increasing input population. Our results divide these four variants into two classes according to how activation and repression affect mutual information, net synergy, and signal-to-noise ratio. Within the Gaussian framework and using the linear noise approximation, we derive the analytic expressions for these metrics t...
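Within the Gaussian framework these metrics reduce to determinants of covariance submatrices: I(A;B) = (1/2) log(det C_AA det C_BB / det C_AB), and net synergy is Delta I = I(S; X,Y) - I(S; X) - I(S; Y). A sketch with an illustrative covariance matrix, not the cascade model's:

```python
import numpy as np

def gaussian_mi(cov, idx_a, idx_b):
    """I(A;B) in nats for jointly Gaussian variables with joint covariance
    `cov`: 0.5 * log(det C_AA * det C_BB / det C_{A,B joint})."""
    a = np.ix_(idx_a, idx_a)
    b = np.ix_(idx_b, idx_b)
    ab = np.ix_(idx_a + idx_b, idx_a + idx_b)
    return 0.5 * np.log(np.linalg.det(cov[a]) * np.linalg.det(cov[b])
                        / np.linalg.det(cov[ab]))

# Signal S at index 0, outputs X and Y at indices 1 and 2 (values illustrative).
C = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
net_synergy = (gaussian_mi(C, [0], [1, 2])
               - gaussian_mi(C, [0], [1])
               - gaussian_mi(C, [0], [2]))
print(net_synergy)   # negative values indicate redundancy
```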
December 20, 2022
Estimating the Shannon information associated with individual neurons is a non-trivial problem. Three key methods used to estimate the mutual information between neuron inputs and outputs are described, and a list of further readings is provided.
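The simplest of the standard estimators is the plug-in (histogram) estimator; whether it is among the three methods described in the paper is not assumed, and it is known to be biased for small samples. A minimal sketch:

```python
import numpy as np

def plugin_mi(x, y, bins=8):
    """Plug-in (histogram) estimate of I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

# Example: a noisy neuron whose output tracks its input current.
rng = np.random.default_rng(2)
stim = rng.standard_normal(5000)
resp = stim + 0.5 * rng.standard_normal(5000)
print(plugin_mi(stim, resp))
```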
November 11, 2014
To fully characterize the information that two `source' variables carry about a third `target' variable, one must decompose the total information into redundant, unique and synergistic components, i.e. obtain a partial information decomposition (PID). However, Shannon's theory of information does not provide formulae to fully determine these quantities. Several recent studies have begun addressing this. Some possible definitions for PID quantities have been proposed, and some ...
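One concrete proposal is the Williams-Beer I_min redundancy measure; the sketch below computes a full PID for the XOR distribution, where all of the information is synergistic. I_min is one candidate definition among several, not a consensus choice:

```python
import numpy as np

def mi(pxy):
    """I(X;Y) in bits from a joint probability table p[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px * py)[mask])))

def i_min(p):
    """Williams-Beer redundancy for p[s1, s2, t] (assumes full support in s)."""
    pt = p.sum(axis=(0, 1))
    red = 0.0
    for t in range(p.shape[2]):
        specs = []
        for p_st in (p.sum(axis=1), p.sum(axis=0)):   # p(s1,t), p(s2,t)
            ps = p_st.sum(axis=1)                     # p(s)
            ps_given_t = p_st[:, t] / pt[t]           # p(s|t)
            pt_given_s = p_st[:, t] / ps              # p(t|s)
            m = ps_given_t > 0
            specs.append(np.sum(ps_given_t[m] * np.log2(pt_given_s[m] / pt[t])))
        red += pt[t] * min(specs)                     # minimum specific info
    return red

# XOR target: T = S1 xor S2 with uniform sources.
p = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p[s1, s2, (s1 + s2) % 2] = 0.25

I_joint, I1, I2 = mi(p.reshape(4, 2)), mi(p.sum(axis=1)), mi(p.sum(axis=0))
R = i_min(p)
print(dict(redundancy=R, unique1=I1 - R, unique2=I2 - R,
           synergy=I_joint - I1 - I2 + R))
```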