Similar papers 2
April 22, 2024
In this work we study symmetric random matrices whose variance profile satisfies certain conditions. We establish convergence of the operator norm of these matrices to the largest element of the support of the limiting empirical spectral distribution. We prove that a finite $4$-th moment of the entries suffices for convergence in probability, and a finite $(4+\epsilon)$-th moment for almost sure convergence. Our approach de...
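As a quick numerical illustration of this kind of edge convergence (a toy sketch with a flat variance profile and Gaussian entries, not the paper's general setting): for a Wigner matrix with entry variance $1/n$, the limiting spectral distribution is the semicircle law on $[-2,2]$, so the operator norm should approach $2$.

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner_norm(n, rng):
    """Operator norm of an n x n symmetric Gaussian matrix scaled so
    that off-diagonal entries have variance 1/n (flat variance profile)."""
    A = rng.standard_normal((n, n))
    H = (A + A.T) / np.sqrt(2 * n)
    return np.linalg.norm(H, 2)

# The limiting spectral distribution is the semicircle law on [-2, 2],
# so the norm approaches 2, the edge of the support, as n grows.
for n in (100, 400, 1600):
    print(n, round(wigner_norm(n, rng), 3))
```

The convergence rate at the edge is of order $n^{-2/3}$, so even moderate $n$ gives values close to $2$.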
October 2, 2010
Let H=A+UBU* where A and B are two N-by-N Hermitian matrices and U is a Haar-distributed random unitary matrix, and let \mu_H, \mu_A, and \mu_B be the empirical measures of the eigenvalues of the matrices H, A, and B, respectively. Then it is known (see, for example, Pastur-Vasilchuk, CMP, 2000, v.214, pp.249-286) that for large N, the measure \mu_H is close to the free convolution of the measures \mu_A and \mu_B, where the free convolution is a non-linear operation on probability measures. T...
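A hedged numerical sketch of this statement, for one case where the free convolution is known in closed form: if A and B both have spectrum {+1, -1} with equal weights, the free convolution of two symmetric Bernoulli laws is the arcsine law on [-2, 2], so the eigenvalues of H = A + UBU* should fill that interval.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400

def haar_unitary(N, rng):
    """Haar-distributed unitary matrix via QR of a complex Ginibre matrix."""
    Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))  # fix the phases so the law is exactly Haar

# A and B with spectra {+1, -1}; the free convolution of two symmetric
# Bernoulli laws is the arcsine law on [-2, 2].
a = np.repeat([1.0, -1.0], N // 2)
A, B = np.diag(a), np.diag(rng.permutation(a))
U = haar_unitary(N, rng)
H = A + U @ B @ U.conj().T
eigs = np.linalg.eigvalsh(H)
print(eigs.min(), eigs.max())  # close to the edges -2 and 2
```

A histogram of `eigs` reproduces the arcsine density, which blows up at the edges, so the extreme eigenvalues sit very close to ±2.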
June 15, 2015
In contemporary applied and computational mathematics, a frequent challenge is to bound the expectation of the spectral norm of a sum of independent random matrices. This quantity is controlled by the norm of the expected square of the random matrix and the expectation of the maximum squared norm achieved by one of the summands; there is also a weak dependence on the dimension of the random matrix. The purpose of this paper is to give a complete, elementary proof of this impo...
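The quantities in this abstract can be probed numerically in a toy case. The sketch below (a Hermitian Gaussian series with hypothetical coefficient matrices, not the paper's general setting) compares a Monte Carlo estimate of the expected spectral norm against the matrix variance parameter; the sqrt(2 log d) factor is the weak dimensional dependence mentioned, in the form of the matrix Khintchine inequality for Gaussian series.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 40, 100

# Fixed Hermitian coefficients A_i (a toy choice for illustration).
G = rng.standard_normal((n, d, d))
A = (G + np.transpose(G, (0, 2, 1))) / (2 * np.sqrt(n * d))

# "Norm of the expected square": sigma^2 = || sum_i A_i^2 ||.
sigma = np.sqrt(np.linalg.norm(np.einsum('kij,kjl->il', A, A), 2))

# Monte Carlo estimate of E || sum_i g_i A_i ||, g_i i.i.d. standard normal.
est = np.mean([
    np.linalg.norm(np.einsum('k,kij->ij', rng.standard_normal(n), A), 2)
    for _ in range(25)
])

# Weak dimensional dependence: E||Z|| <= sqrt(2 log d) * sigma
# for a Hermitian Gaussian series.
print(est, np.sqrt(2 * np.log(d)) * sigma)
```

In this example the estimate sits well below the bound; the logarithmic factor is what is "sometimes, but not always, necessary", as the next listed paper discusses.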
January 11, 2013
Non-asymptotic theory of random matrices strives to investigate the spectral properties of random matrices that hold with high probability for matrices of a large fixed size. Results obtained in this framework find applications in high-dimensional convexity, the analysis of convergence of algorithms, and random matrix theory itself. In these notes we survey some recent results in this area and describe the techniques aimed at obtaining explicit probability...
January 2, 2022
Considering a random matrix $X \in \mathcal M_{p,n}$ with independent columns satisfying the convex concentration properties arising from a famous theorem of Talagrand, we express the linear concentration of the resolvent $Q = (I_p - \frac{1}{n}XX^T)^{-1}$ around a classical deterministic equivalent with a good observable diameter for the nuclear norm. The general proof relies on a decomposition of the resolvent as a series of powers of $X$.
April 22, 2015
Matrix concentration inequalities give bounds for the spectral-norm deviation of a random matrix from its expected value. These results have a weak dimensional dependence that is sometimes, but not always, necessary. This paper identifies one of the sources of the dimensional term and exploits this insight to develop sharper matrix concentration inequalities. In particular, this analysis delivers two refinements of the matrix Khintchine inequality that use information beyond ...
March 25, 2012
Let $X,X_1,...,X_n$ be independent identically distributed random variables. The paper deals with the behavior of the concentration function of the random variable $\sum_{k=1}^{n}a_k X_k$ in terms of the arithmetic structure of the coefficients $a_k$. Recently, interest in this question has increased significantly due to the study of the distributions of eigenvalues of random matrices. In this paper we formulate and prove some refinements of the results of Frie...
May 3, 2013
This paper derives exponential tail bounds and polynomial moment inequalities for the spectral norm deviation of a random matrix from its mean value. The argument depends on a matrix extension of Stein's method of exchangeable pairs for concentration of measure, as introduced by Chatterjee. Recent work of Mackey et al. uses these techniques to analyze random matrices with additive structure, while the enhancements in this paper cover a wider class of matrix-valued random elem...
March 8, 2016
We prove estimates for the expected value of operator norms of Gaussian random matrices with independent and mean-zero entries, acting as operators from $\ell_{p^*}^m$ to $\ell_q^n$, $1\leq p^* \leq 2 \leq q \leq \infty$.
April 22, 2010
We give a new, elementary proof of a key inequality used by Rudelson in the derivation of his well-known bound for random sums of rank-one operators. Our approach is based on Ahlswede and Winter's technique for proving operator Chernoff bounds. We also prove a concentration inequality for sums of random matrices of rank one with explicit constants.
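A minimal sketch of the setting (a toy instance with isotropic bounded vectors, not Rudelson's general statement): sums of independent rank-one matrices $x_i x_i^T$ average out to the identity, and the inequalities discussed above control the operator-norm deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 30, 3000

# Isotropic bounded vectors: i.i.d. random-sign coordinates, E[x x^T] = I_d.
X = rng.choice([-1.0, 1.0], size=(n, d))

# Empirical average of the rank-one matrices x_i x_i^T; Rudelson-type
# inequalities control its operator-norm deviation from the identity.
S = X.T @ X / n
dev = np.linalg.norm(S - np.eye(d), 2)
print(dev)  # small once n >> d log d
```

Here the deviation is of order $\sqrt{d/n}\approx 0.1$, consistent with the sample-size requirement $n \gtrsim d\log d$ that such bounds yield.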