Similar papers
March 9, 2012
The frequentist behavior of nonparametric Bayes estimates, more specifically, rates of contraction of the posterior distributions to shrinking $L^r$-norm neighborhoods, $1\le r\le\infty$, of the unknown parameter, is studied. A theorem for nonparametric density estimation is proved under general approximation-theoretic assumptions on the prior. The result is applied to a variety of common examples, including Gaussian process, wavelet series, normal mixture and histogram prio...
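A minimal sketch of the last of these priors, assuming a random histogram with a symmetric Dirichlet prior on the bin probabilities: conjugacy makes the posterior mean density available in closed form. The bin count, the parameter alpha and the Beta sample below are illustrative choices, not taken from the paper.

import numpy as np

def histogram_posterior_mean(x, bins=20, alpha=1.0, lo=0.0, hi=1.0):
    """Posterior mean density under a random-histogram prior on [lo, hi]
    with bin probabilities ~ Dirichlet(alpha, ..., alpha); by conjugacy
    the posterior is Dirichlet(alpha + n_1, ..., alpha + n_B)."""
    edges = np.linspace(lo, hi, bins + 1)
    counts, _ = np.histogram(x, bins=edges)
    post = alpha + counts                  # Dirichlet posterior parameters
    probs = post / post.sum()              # posterior mean bin probabilities
    width = (hi - lo) / bins
    return edges, probs / width            # piecewise-constant density heights

# toy usage on a Beta(2, 5) sample
rng = np.random.default_rng(0)
edges, heights = histogram_posterior_mean(rng.beta(2, 5, size=500))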
February 1, 2008
We consider nonparametric Bayesian estimation of a probability density $p$ based on a random sample of size $n$ from this density using a hierarchical prior. The prior consists, for instance, of prior weights on the regularity of the unknown density combined with priors that are appropriate given that the density has this regularity. More generally, the hierarchy consists of prior weights on an abstract model index and a prior on a density model for each model index. We prese...
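To make the hierarchy concrete, here is a hedged sketch in the random-histogram setting: the abstract model index is the number of bins J with uniform prior weights, and the Dirichlet-multinomial marginal likelihood yields the posterior weights over J. The ranges and the Dirichlet parameter are assumptions for illustration, not the paper's construction.

import numpy as np
from scipy.special import gammaln

def log_marginal(x, J, alpha=1.0, lo=0.0, hi=1.0):
    """Log marginal likelihood of a J-bin random histogram on [lo, hi]
    under a symmetric Dirichlet(alpha) prior on the bin probabilities."""
    counts, _ = np.histogram(x, bins=np.linspace(lo, hi, J + 1))
    width = (hi - lo) / J
    log_ev = (gammaln(J * alpha) - gammaln(J * alpha + len(x))
              + np.sum(gammaln(alpha + counts) - gammaln(alpha)))
    return log_ev - len(x) * np.log(width)   # 1/width density factor per point

def model_posterior(x, models=tuple(range(2, 51))):
    """Posterior weights over the model index J under uniform prior weights."""
    logm = np.array([log_marginal(x, J) for J in models])
    w = np.exp(logm - logm.max())
    return np.array(models), w / w.sum()

rng = np.random.default_rng(0)
models, weights = model_posterior(rng.beta(2, 5, size=500))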
November 19, 2021
In this paper, we study the learning rate of generalized Bayes estimators in a general setting where the hypothesis class can be uncountable and have an irregular shape, the loss function can have heavy tails, and the optimal hypothesis may not be unique. We prove that under the multi-scale Bernstein's condition, the generalized posterior distribution concentrates around the set of optimal hypotheses and the generalized Bayes estimator can achieve a fast learning rate. Our resu...
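A generalized (Gibbs) posterior can be sketched on a finite grid of hypotheses: it reweights a prior by exp(-beta n R_n), with R_n the empirical risk. The absolute-error loss, the heavy-tailed Student-t sample, the flat grid prior and the temperature beta below are all illustrative assumptions, not the paper's setting.

import numpy as np

def generalized_posterior(data, thetas, beta=1.0):
    """Gibbs posterior on a grid: pi_n(theta) proportional to
    prior(theta) * exp(-beta * n * R_n(theta)), with empirical risk
    R_n under absolute-error loss and a flat prior on the grid."""
    risks = np.mean(np.abs(data[:, None] - thetas[None, :]), axis=0)
    log_post = -beta * len(data) * risks
    log_post -= log_post.max()                 # stabilize the exponentials
    w = np.exp(log_post)
    return w / w.sum()

rng = np.random.default_rng(1)
data = rng.standard_t(df=2, size=200) + 3.0    # heavy-tailed sample
thetas = np.linspace(0.0, 6.0, 601)
post = generalized_posterior(data, thetas)
theta_hat = float(thetas @ post)               # generalized Bayes (posterior mean)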
November 27, 2020
We study the posterior contraction rates of a Bayesian method with Gaussian process priors in nonparametric regression and its plug-in property for differential operators. For a general class of kernels, we establish convergence rates of the posterior measure of the regression function and its derivatives, which are both minimax optimal up to a logarithmic factor for functions in certain classes. Our calculation shows that the rate-optimal estimation of the regression functio...
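The plug-in idea can be sketched for the posterior mean: differentiating the kernel in the test argument reads off the posterior mean of f' from the same weights used for f. The squared-exponential kernel, length-scale and noise level below are assumptions for illustration, not the general kernel class of the paper.

import numpy as np

def rbf(s, t, ell=0.2):
    d = s[:, None] - t[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_mean_and_derivative(X, y, Xs, ell=0.2, noise=0.1):
    """Posterior means of f and f' in GP regression with a
    squared-exponential kernel; the derivative uses the kernel's
    derivative in the test argument as cross-covariance."""
    K = rbf(X, X, ell) + noise**2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    Ks = rbf(Xs, X, ell)
    dKs = -((Xs[:, None] - X[None, :]) / ell**2) * Ks   # d k(x*, x_i)/d x*
    return Ks @ alpha, dKs @ alpha

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(80)
f_hat, df_hat = gp_mean_and_derivative(X, y, np.linspace(0, 1, 200))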
April 5, 2013
Building on ideas from Castillo and Nickl [Ann. Statist. 41 (2013) 1999-2028], a method is provided to study nonparametric Bayesian posterior convergence rates when "strong" measures of distance, such as the sup-norm, are considered. In particular, we show that likelihood methods can achieve optimal minimax sup-norm rates in density estimation on the unit interval. The introduced methodology is used to prove that commonly used families of prior distributions on densities, na...
September 29, 2011
We show that rate-adaptive multivariate density estimation can be performed using Bayesian methods based on Dirichlet mixtures of normal kernels with a prior distribution on the kernel's covariance matrix parameter. We derive sufficient conditions on the prior specification that guarantee convergence to a true density at a rate that is minimax-optimal for the smoothness class to which the true density belongs. No prior knowledge of smoothness is assumed. The sufficient condit...
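A practical stand-in for such a Dirichlet mixture of normals with a prior on the covariance matrix is scikit-learn's truncated Dirichlet-process Gaussian mixture; note that this is a variational approximation to the posterior, not the exact posterior analyzed in the paper, and the truncation level is an illustrative choice.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# 2-D sample from a two-component mixture
rng = np.random.default_rng(3)
X = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 300),
               rng.multivariate_normal([3, 3], [[1.0, 0.5], [0.5, 1.0]], 300)])

dpm = BayesianGaussianMixture(
    n_components=20,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",                           # prior on full covariances
    max_iter=500,
).fit(X)

log_density = dpm.score_samples(X)                    # fitted log-density at the data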
August 3, 2020
Optimality results for two outstanding Bayesian estimation problems are given in this paper: the estimation of the sampling distribution under the squared total variation loss and the estimation of the density under the $L^1$-squared loss. The posterior predictive distribution provides the solution to both problems. Some examples are presented to illustrate these results. The Bayesian estimation problem of a distribution function is also addressed. Consistency of the estimator...
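A hedged sketch of the posterior predictive density in the simplest conjugate case, a normal model with known variance and a normal prior on the mean; the hyperparameters and sample are illustrative, and the paper's optimality claims concern general models rather than this particular one.

import numpy as np
from scipy.stats import norm

def posterior_predictive_density(x_new, data, sigma=1.0, m0=0.0, tau0=1.0):
    """Posterior predictive density for X ~ N(mu, sigma^2) with known
    sigma and prior mu ~ N(m0, tau0^2): a N(m_n, v_n + sigma^2) density."""
    prec = 1 / tau0**2 + len(data) / sigma**2
    m_n = (m0 / tau0**2 + data.sum() / sigma**2) / prec
    return norm.pdf(x_new, loc=m_n, scale=np.sqrt(1 / prec + sigma**2))

rng = np.random.default_rng(4)
data = rng.normal(1.5, 1.0, size=50)
dens = posterior_predictive_density(np.linspace(-3, 6, 400), data)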
December 13, 2013
A novel block prior is proposed for adaptive Bayesian estimation. The prior does not depend on the smoothness of the function or the sample size. It puts sufficient prior mass near the true signal and automatically concentrates on its effective dimension. Rate-optimal posterior contraction is obtained in a general framework, which includes density estimation, the white noise model, the Gaussian sequence model, Gaussian regression and spectral density estimation.
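One way to picture such a prior, under assumptions that are mine rather than the paper's: group sequence-model coefficients into dyadic blocks, draw one variance per block from a scale prior, and fill the block with i.i.d. normals, so that prior mass concentrates on however many blocks the signal effectively occupies.

import numpy as np

def sample_block_prior(n_coeffs=256, rng=None):
    """Draw sequence-model coefficients from a dyadic block prior:
    block B_k = {2^k, ..., 2^(k+1) - 1} shares one variance A_k drawn
    from an (illustrative) exponential scale prior."""
    rng = rng or np.random.default_rng()
    theta = np.zeros(n_coeffs)
    theta[0] = rng.normal()
    k = 0
    while 2**k < n_coeffs:
        block = np.arange(2**k, min(2**(k + 1), n_coeffs))
        A_k = rng.exponential(scale=2.0 ** (-k))    # block-level variance
        theta[block] = rng.normal(0.0, np.sqrt(A_k), size=len(block))
        k += 1
    return theta

theta = sample_block_prior()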
July 17, 2008
Upper bounds for rates of convergence of posterior distributions associated with Gaussian process priors are obtained by van der Vaart and van Zanten in [14] and expressed in terms of a concentration function involving the reproducing kernel Hilbert space of the Gaussian prior. Here lower-bound counterparts are obtained. As a corollary, we obtain the precise rate of convergence of posteriors for Gaussian priors in various settings. Additionally, we extend the upper-bound result...
November 17, 2014
We prove that the convex least squares estimator (LSE) attains an $n^{-1/2}$ pointwise rate of convergence in any region where the truth is linear. In addition, the asymptotic distribution can be characterized by a modified invelope process. Analogous results hold when one uses the derivative of the convex LSE to perform derivative estimation. These asymptotic results facilitate a new consistent test of linearity against a convex alternative. Moreover, we show...
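The estimator itself is a quadratic program: minimize the residual sum of squares over fitted values whose piecewise-linear slopes are nondecreasing. Below is a minimal sketch with cvxpy on a toy design where the truth is linear on part of the domain; the solver defaults and the data are illustrative choices.

import cvxpy as cp
import numpy as np

def convex_lse(x, y):
    """Convex least squares estimator: fitted values at the sorted design
    points, minimizing ||y - theta||^2 subject to convexity, i.e.
    nondecreasing slopes between consecutive points."""
    theta = cp.Variable(len(x))
    slopes = cp.multiply(cp.diff(theta), 1.0 / np.diff(x))
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)),
                      [cp.diff(slopes) >= 0])
    prob.solve()
    return theta.value

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-1, 1, 100))
y = np.maximum(x, 0.0) + 0.1 * rng.standard_normal(100)  # truth linear on [-1, 0]
fit = convex_lse(x, y)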