ID: cond-mat/0107313

Numerical simulation of a binary communication channel: Comparison between a replica calculation and an exact solution

July 14, 2001


Similar papers 4

Which Boolean Functions are Most Informative?

February 11, 2013

83% Match
Gowtham R. Kumar, Thomas A. Courtade
Information Theory

We introduce a simply stated conjecture regarding the maximum mutual information a Boolean function can reveal about noisy inputs. Specifically, let $X^n$ be i.i.d. Bernoulli(1/2), and let $Y^n$ be the result of passing $X^n$ through a memoryless binary symmetric channel with crossover probability $\alpha$. For any Boolean function $b:\{0,1\}^n\rightarrow \{0,1\}$, we conjecture that $I(b(X^n);Y^n)\leq 1-H(\alpha)$. While the conjecture remains open, we provide substantial ev...
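The conjectured bound is attained by the dictator function $b(x^n) = x_1$, for which $I(b(X^n);Y^n) = 1 - H(\alpha)$ exactly. A minimal brute-force check of this fact for small $n$ (illustrative only, not code from the paper):

```python
# Brute-force check that the dictator function b(x) = x_1 attains I(b(X^n); Y^n) = 1 - H(alpha)
# when X^n is i.i.d. Bernoulli(1/2) and Y^n is X^n passed through a BSC(alpha).
import itertools
import math

def entropy(p):
    """Entropy (in bits) of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def mutual_information_dictator(n=3, alpha=0.1):
    """I(b(X^n); Y^n) for b(x) = x[0], computed by exhaustive enumeration."""
    joint = {}  # joint distribution over (b(x), y)
    for x in itertools.product((0, 1), repeat=n):
        px = 0.5 ** n
        for y in itertools.product((0, 1), repeat=n):
            flips = sum(xi != yi for xi, yi in zip(x, y))
            p = px * (alpha ** flips) * ((1 - alpha) ** (n - flips))
            joint[(x[0], y)] = joint.get((x[0], y), 0.0) + p
    pb, py = {}, {}
    for (b, y), p in joint.items():
        pb[b] = pb.get(b, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # I(B;Y) = H(B) + H(Y) - H(B,Y)
    return entropy(pb.values()) + entropy(py.values()) - entropy(joint.values())

alpha = 0.1
print(mutual_information_dictator(3, alpha))                                  # ~0.531
print(1 - entropy([alpha, 1 - alpha]))                                        # 1 - H(alpha), also ~0.531
```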


Storage capacity of correlated perceptrons

October 22, 1996

83% Match
D. Malzahn, A. Engel, I. Kanter
Disordered Systems and Neural Networks

We consider an ensemble of $K$ single-layer perceptrons exposed to random inputs and investigate the conditions under which the couplings of these perceptrons can be chosen such that prescribed correlations between the outputs occur. A general formalism is introduced, using a multi-perceptron cost function, that allows one to determine the maximal number of random inputs as a function of the desired values of the correlations. Replica-symmetric results for $K=2$ and $K=3$ are compar...


Entropy-Constrained Maximizing Mutual Information Quantization

January 7, 2020

83% Match
Thuan Nguyen, Thinh Nguyen
Information Theory

In this paper, we investigate the quantization of the output of a binary-input discrete memoryless channel that maximizes the mutual information between the input and the quantized output under an entropy constraint on the quantized output. A polynomial-time algorithm is introduced that can find the true globally optimal quantizer. These results hold for binary-input channels with an arbitrary number of quantized outputs. Finally, we extend these results to binary input conti...
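As a rough, self-contained illustration of this setup (a brute-force search over deterministic quantizers on a made-up toy channel, not the paper's polynomial-time algorithm):

```python
# Brute-force illustration of entropy-constrained, MI-maximizing quantization of a
# binary-input DMC output. The channel matrix and entropy budget below are hypothetical.
import itertools
import math

def H(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

# P[y | x] for x in {0,1} and y in {0,1,2,3}: a made-up 2x4 channel matrix.
P = [[0.5, 0.3, 0.15, 0.05],
     [0.05, 0.15, 0.3, 0.5]]
px = [0.5, 0.5]
K = 3            # number of quantizer levels
H_budget = 1.2   # entropy constraint on the quantized output (bits)

best = None
for labels in itertools.product(range(K), repeat=len(P[0])):   # deterministic quantizers
    # joint distribution over (x, quantized output z)
    pxz = [[sum(px[x] * P[x][y] for y in range(len(P[0])) if labels[y] == z)
            for z in range(K)] for x in range(2)]
    pz = [sum(pxz[x][z] for x in range(2)) for z in range(K)]
    if H(pz) > H_budget:
        continue                                                # violates the entropy constraint
    mi = H(px) + H(pz) - H([p for row in pxz for p in row])     # I(X;Z)
    if best is None or mi > best[0]:
        best = (mi, labels)

print("best quantizer (MI, labels):", best)
```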


Binary autoencoder with random binary weights

April 30, 2020

83% Match
Viacheslav Osaulenko
Machine Learning

Here we present an analysis of an autoencoder with binary activations $\{0, 1\}$ and binary $\{0, 1\}$ random weights. Such a setup places this model at the intersection of several fields: neuroscience, information theory, sparse coding, and machine learning. It is shown that sparse activation of the hidden layer arises naturally in order to preserve information between layers. Furthermore, with a large enough hidden layer, it is possible to get zero reconstruction error...


Statistical Mechanical Approach to Error Exponents of Lossy Data Compression

November 6, 2003

82% Match
Tadaaki Hosaka, Yoshiyuki Kabashima
Statistical Mechanics
Disordered Systems and Neural Networks

We present a scheme to accurately evaluate the error exponents of a lossy data compression problem, which characterize the average probabilities, over a code ensemble, of compression failure above and compression success below a critical compression rate, utilizing the replica method (RM). Although the existing method used in information theory (IT) is, in practice, limited to ensembles of randomly constructed codes, the proposed RM-based approach can be applie...


High-SNR Asymptotics of Mutual Information for Discrete Constellations with Applications to BICM

December 28, 2012

82% Match
Alex Alvarado, Fredrik Brannstrom, ... , Tobias Koch
Information Theory

Asymptotic expressions of the mutual information between any discrete input and the corresponding output of the scalar additive white Gaussian noise channel are presented in the limit as the signal-to-noise ratio (SNR) tends to infinity. Asymptotic expressions of the symbol-error probability (SEP) and the minimum mean-square error (MMSE) achieved by estimating the channel input given the channel output are also developed. It is shown that for any input distribution, the condi...
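For comparison at finite SNR, the mutual information of equiprobable BPSK over the AWGN channel can be estimated by Monte Carlo; the sketch below is illustrative only and is not the paper's closed-form asymptotic expansion:

```python
# Monte Carlo estimate of I(X;Y) for equiprobable BPSK over AWGN, Y = X + N.
import math
import random

def bpsk_mi(snr_db, num_samples=200_000, seed=0):
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / snr)          # unit-energy symbols
    h_cond = 0.0                        # running estimate of H(X|Y) in bits
    for _ in range(num_samples):
        x = rng.choice((-1.0, 1.0))
        y = x + rng.gauss(0.0, sigma)
        # posterior P(X = +1 | y) under equiprobable inputs
        p = 1.0 / (1.0 + math.exp(-2.0 * y / sigma ** 2))
        for q in (p, 1.0 - p):
            if q > 0:
                h_cond -= q * math.log2(q) / num_samples
    return 1.0 - h_cond                 # I(X;Y) = H(X) - H(X|Y)

for snr_db in (0, 5, 10, 15):
    print(snr_db, "dB:", round(bpsk_mi(snr_db), 4))
```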


Information-Theoretic Bounds and Approximations in Neural Population Coding

November 4, 2016

82% Match
Wentao Huang, Kechen Zhang
Information Theory
Machine Learning

While Shannon's mutual information has widespread applications in many disciplines, for practical applications it is often difficult to calculate its value accurately for high-dimensional variables because of the curse of dimensionality. This paper is focused on effective approximation methods for evaluating mutual information in the context of neural population coding. For large but finite neural populations, we derive several information-theoretic asymptotic bounds and appr...


Estimation in the spiked Wigner model: A short proof of the replica formula

January 5, 2018

82% Match
Ahmed El Alaoui, Florent Krzakala
Information Theory
Probability
Statistics Theory

We consider the problem of estimating a rank-one perturbation of a Wigner matrix in a setting of low signal-to-noise ratio. This serves as a simple model for principal component analysis in high dimensions. The mutual information per variable between the spike and the observed matrix, or equivalently, the normalized Kullback-Leibler divergence between the planted and null models are known to converge to the so-called {\em replica-symmetric} formula, the properties of which de...
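As a quick illustration of the model itself (not of the replica-symmetric formula), one can simulate $Y = \sqrt{\lambda/n}\,xx^{\top} + W$ and observe the spectral transition at $\lambda = 1$ through the overlap of the leading eigenvector with the planted spike:

```python
# Simulate the spiked Wigner model Y = sqrt(lambda/n) * x x^T + W and measure the
# squared overlap between the leading eigenvector and the planted spike x.
# Illustrative sketch only; the paper studies the mutual information, not this estimator.
import numpy as np

def spiked_wigner_overlap(n=1000, lam=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=n)          # Rademacher spike
    W = rng.normal(size=(n, n))
    W = (W + W.T) / np.sqrt(2)                   # symmetric noise, off-diagonal variance 1
    Y = np.sqrt(lam / n) * np.outer(x, x) + W
    _, eigvecs = np.linalg.eigh(Y)
    v = eigvecs[:, -1]                           # leading eigenvector
    return float(np.dot(v, x) ** 2 / n)          # squared overlap in [0, 1]

for lam in (0.5, 1.0, 2.0, 4.0):
    print(lam, round(spiked_wigner_overlap(lam=lam), 3))
```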


Optimal Errors and Phase Transitions in High-Dimensional Generalized Linear Models

August 10, 2017

82% Match
Jean Barbier, Florent Krzakala, Nicolas Macris, ... , Lenka Zdeborová
Information Theory
Disordered Systems and Neural Networks
Artificial Intelligence
Machine Learning
Mathematical Physics

Generalized linear models (GLMs) arise in high-dimensional machine learning, statistics, communications and signal processing. In this paper we analyze GLMs when the data matrix is random, as relevant in problems such as compressed sensing, error-correcting codes or benchmark models in neural networks. We evaluate the mutual information (or "free entropy") from which we deduce the Bayes-optimal estimation and generalization errors. Our analysis applies to the high-dimensional...


Data-Driven Estimation of Capacity Upper Bounds

May 13, 2022

82% Match
Christian Häger, Erik Agrell
Information Theory
Machine Learning
Signal Processing

We consider the problem of estimating an upper bound on the capacity of a memoryless channel with unknown channel law and continuous output alphabet. A novel data-driven algorithm is proposed that exploits the dual representation of capacity where the maximization over the input distribution is replaced with a minimization over a reference distribution on the channel output. To efficiently compute the required divergence maximization between the conditional channel and the re...
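The dual bound in question is $C \le \max_x D(P_{Y|X=x} \,\|\, Q_Y)$, valid for any reference output distribution $Q_Y$; a minimal discrete-alphabet sketch of this bound follows (the paper's algorithm instead learns the bound from samples for continuous-output channels with unknown law):

```python
# Dual capacity upper bound C <= max_x D(P_{Y|X=x} || Q_Y), valid for any reference
# output distribution Q_Y. Toy discrete example with a BSC(alpha).
import math

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dual_capacity_bound(channel, q_ref):
    """channel: list of rows P[y|x]; q_ref: reference distribution on the output."""
    return max(kl(row, q_ref) for row in channel)

alpha = 0.1
bsc = [[1 - alpha, alpha],
       [alpha, 1 - alpha]]

# With the uniform reference, the bound equals the true BSC capacity 1 - H(alpha).
print(dual_capacity_bound(bsc, [0.5, 0.5]))
# A mismatched reference only loosens the bound.
print(dual_capacity_bound(bsc, [0.7, 0.3]))
```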
