ID: math/0610712

A Linear Programming Inequality with Applications to Concentration of Measure

October 24, 2006

Leonid Kontorovich
Mathematics
Functional Analysis
Probability

We prove an elementary yet useful inequality bounding the maximal value of certain linear programs. This leads directly to a bound on the martingale difference for arbitrarily dependent random variables, providing a generalization of some recent concentration of measure results. The linear programming inequality may be of independent interest.
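
The paper's specific inequality is not reproduced in this summary, but the generic device for certifying an upper bound on a linear program's optimum is weak duality: any feasible point of the dual program yields a bound. As a sketch of the flavor of such results (not the paper's statement):

\[
\max\{\, c^{\top}x : Ax \le b,\ x \ge 0 \,\} \;\le\; b^{\top}y
\qquad \text{for every } y \ge 0 \text{ with } A^{\top}y \ge c .
\]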

Similar papers

Concentration of measure without independence: a unified approach via the martingale method

February 1, 2016

87% Match
Aryeh Kontorovich, Maxim Raginsky
Probability
Information Theory

The concentration of measure phenomenon may be summarized as follows: a function of many weakly dependent random variables that is not too sensitive to any of its individual arguments will tend to take values very close to its expectation. This phenomenon is most completely understood when the arguments are mutually independent random variables, and there exist several powerful complementary methods for proving concentration inequalities, such as the martingale method, the en...
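
In the independent case mentioned here, the canonical formalization is McDiarmid's bounded-differences inequality: if changing the $i$-th argument of $f$ changes its value by at most $c_i$, then for independent $X_1, \ldots, X_n$,

\[
\mathbb{P}\bigl( \lvert f(X_1,\ldots,X_n) - \mathbb{E} f(X_1,\ldots,X_n) \rvert \ge t \bigr)
\;\le\; 2\exp\!\Bigl( -\frac{2t^2}{\sum_{i=1}^{n} c_i^2} \Bigr).
\]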


An Introduction to Matrix Concentration Inequalities

January 7, 2015

87% Match
Joel A. Tropp
Probability
Data Structures and Algorithms
Information Theory
Numerical Analysis
Machine Learning

In recent years, random matrices have come to play a major role in computational mathematics, but most of the classical areas of random matrix theory remain the province of experts. Over the last decade, with the advent of matrix concentration inequalities, research has advanced to the point where we can conquer many (formerly) challenging problems with a page or two of arithmetic. The aim of this monograph is to describe the most successful methods from this area along with ...
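
A representative result of the kind the monograph covers, stated here from the standard literature rather than quoted from the text, is the matrix Bernstein inequality: for independent, centered, self-adjoint $d \times d$ random matrices $S_1, \ldots, S_n$ with $\lVert S_k \rVert \le L$,

\[
\mathbb{P}\Bigl( \Bigl\lVert \sum_{k=1}^{n} S_k \Bigr\rVert \ge t \Bigr)
\;\le\; 2d \exp\!\Bigl( -\frac{t^2/2}{v + Lt/3} \Bigr),
\qquad v = \Bigl\lVert \sum_{k=1}^{n} \mathbb{E}\,S_k^2 \Bigr\rVert .
\]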


On the Missing Factor in Some Concentration Inequalities for Martingales

December 29, 2024

87% Match
Arun Kumar Kuchibhotla
Probability
Statistics Theory
Applications

In this note, we improve some concentration inequalities for martingales with bounded increments. These results recover the missing factor in Freedman-style inequalities and are near optimal. We also provide minor refinements of concentration inequalities for functions of independent random variables. These proofs use techniques from the works of Bentkus and Pinelis.
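
For orientation, the classical Freedman-style bound being sharpened (stated in its textbook Bernstein form, not as this paper's refined result) is: for a martingale $(M_k)$ with $M_0 = 0$, increments bounded by $R$, and predictable quadratic variation $\langle M \rangle_n$,

\[
\mathbb{P}\bigl( M_n \ge t,\ \langle M \rangle_n \le \sigma^2 \bigr)
\;\le\; \exp\!\Bigl( -\frac{t^2}{2(\sigma^2 + Rt/3)} \Bigr).
\]

The "missing factor" refers, in Talagrand's sense, to the Gaussian-type prefactor that plain exponential bounds of this shape lose relative to the true tail.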


Concentration inequalities for dependent random variables via the martingale method

September 29, 2006

87% Match
Leonid (Aryeh) Kontorovich, Kavita Ramanan
Probability
Functional Analysis

The martingale method is used to establish concentration inequalities for a class of dependent random sequences on a countable state space, with the constants in the inequalities expressed in terms of certain mixing coefficients. Along the way, bounds are obtained on martingale differences associated with the random sequences, which may be of independent interest. As applications of the main result, concentration inequalities are also derived for inhomogeneous Markov chains a...
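
Roughly speaking, the engine underneath this approach is the Azuma-Hoeffding inequality: once the martingale differences are shown to satisfy $\lvert M_k - M_{k-1} \rvert \le c_k$, where in the dependent setting the constants $c_k$ absorb the mixing coefficients, one obtains

\[
\mathbb{P}\bigl( \lvert M_n - M_0 \rvert \ge t \bigr)
\;\le\; 2\exp\!\Bigl( -\frac{t^2}{2\sum_{k=1}^{n} c_k^2} \Bigr).
\]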


A Note on Matrix Concentration Inequalities via the Method of Exchangeable Pairs

December 10, 2012

87% Match
Daniel Paulin
Probability
Functional Analysis

The aim of this paper is to prove an improved version of the bounded differences inequality for matrix valued functions, by developing the methods of Mackey et al.: "Matrix Concentration Inequalities via the Method of Exchangeable Pairs". Along the way, we prove new trace inequalities for the matrix exponential.
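
For comparison, one earlier matrix bounded-differences bound from the literature (constants differ across references, and this statement is recalled from memory, not taken from this paper) reads: if $H$ is a self-adjoint-matrix-valued function of independent $X_1, \ldots, X_n$ with $(H(x) - H(x'))^2 \preceq A_i^2$ whenever $x, x'$ differ only in coordinate $i$, then with $\sigma^2 = \lVert \sum_i A_i^2 \rVert$,

\[
\mathbb{P}\bigl( \lambda_{\max}\bigl( H(X) - \mathbb{E}\,H(X) \bigr) \ge t \bigr)
\;\le\; d \exp\!\Bigl( -\frac{t^2}{8\sigma^2} \Bigr).
\]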


Anti-Concentration Inequalities for the Difference of Maxima of Gaussian Random Vectors

August 23, 2024

87% Match
Alexandre Belloni, Ethan X. Fang, Shuting Shen
Statistics Theory
Probability
Statistics Theory

We derive novel anti-concentration bounds for the difference between the maximal values of two Gaussian random vectors across various settings. Our bounds are dimension-free, scaling with the dimension of the Gaussian vectors only through the smaller expected maximum of the Gaussian subvectors. Moreover, our bounds hold under degenerate covariance structures, which previous results do not cover. In addition, we show that our conditions are sharp under the homogeneous c...
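
In this context, an anti-concentration bound controls the Lévy concentration function of the random variable in question, here the difference of two maxima $Z = \max_i X_i - \max_j Y_j$: one seeks an estimate of the generic shape

\[
\sup_{x \in \mathbb{R}} \; \mathbb{P}\bigl( \lvert Z - x \rvert \le \varepsilon \bigr) \;\le\; C\,\varepsilon ,
\]

where, per the abstract, the constant $C$ does not depend on the ambient dimension. (This is the general form of such bounds, not the paper's exact statement.)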


PAC-Bayesian Inequalities for Martingales

October 31, 2011

87% Match
Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, ..., Peter Auer
Machine Learning
Information Theory

We present a set of high-probability inequalities that control the concentration of weighted averages of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. Our results extend the PAC-Bayesian analysis in learning theory from the i.i.d. setting to martingales, opening the way for its application to importance weighted sampling, reinforcement learning, and other interactive learning domains, as well as many other domains in probability t...
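
The change-of-measure step at the heart of PAC-Bayesian arguments (recalled here for orientation; it is not this paper's contribution) is the Donsker-Varadhan variational inequality: for any distributions $\rho, \pi$ over hypotheses and any measurable $f$,

\[
\mathbb{E}_{h \sim \rho}\bigl[ f(h) \bigr]
\;\le\; \mathrm{KL}(\rho \,\Vert\, \pi) + \ln \mathbb{E}_{h \sim \pi}\bigl[ e^{f(h)} \bigr].
\]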


Concentration Inequalities for Bounded Random Vectors

August 30, 2013

86% Match
Xinjia Chen
Probability
Machine Learning
Statistics Theory

We derive simple concentration inequalities for bounded random vectors, which generalize Hoeffding's inequalities for bounded scalar random variables. As applications, we apply the general results to multinomial and Dirichlet distributions to obtain multivariate concentration inequalities.
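
The scalar baseline being generalized is Hoeffding's inequality: for independent $X_i \in [a_i, b_i]$ and $S_n = \sum_{i=1}^{n} X_i$,

\[
\mathbb{P}\bigl( \lvert S_n - \mathbb{E}\,S_n \rvert \ge t \bigr)
\;\le\; 2\exp\!\Bigl( -\frac{2t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \Bigr).
\]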


Concentration of Measure Inequalities and Their Communication and Information-Theoretic Applications

October 10, 2015

86% Match
Maxim Raginsky, Igal Sason
Information Theory
Probability

During the last two decades, concentration of measure has been a subject of various exciting developments in convex geometry, functional analysis, statistical physics, high-dimensional statistics, probability theory, information theory, communications and coding theory, computer science, and learning theory. One common theme which emerges in these fields is probabilistic stability: complicated, nonlinear functions of a large number of independent or weakly dependent random va...


Obtaining Measure Concentration from Markov Contraction

November 7, 2007

86% Match
Leonid (Aryeh) Kontorovich
Probability
Functional Analysis

Concentration bounds for non-product, non-Haar measures are fairly recent: the first such result was obtained for contracting Markov chains by Marton in 1996 via the coupling method. The work that followed, with few exceptions, also used coupling. Although this technique is of unquestionable utility as a theoretical tool, it is not always simple to apply. As an alternative to coupling, we use the elementary Markov contraction lemma to obtain simple, useful, and apparently nov...
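
The Markov contraction lemma invoked here is the standard fact that a Markov kernel $P$ contracts total variation at a rate given by its Dobrushin coefficient $\theta(P) = \max_{i,j} \lVert P(i,\cdot) - P(j,\cdot) \rVert_{\mathrm{TV}}$:

\[
\lVert \mu P - \nu P \rVert_{\mathrm{TV}} \;\le\; \theta(P)\, \lVert \mu - \nu \rVert_{\mathrm{TV}}
\qquad \text{for all probability measures } \mu, \nu .
\]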
