ID: 1204.5243

Repulsive Mixtures

April 24, 2012


Similar papers (page 2)

Relabelling Algorithms for Large Dataset Mixture Models

March 10, 2014

87% Match
Wanchuang Zhu, Yanan Fan
Applications

Mixture models are flexible tools in density estimation and classification problems. Bayesian estimation of such models typically relies on sampling from the posterior distribution using Markov chain Monte Carlo. Label switching arises because the posterior is invariant to permutations of the component parameters. Methods for dealing with label switching have been studied fairly extensively in the literature, with the most popular approaches being those based on loss function...

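The loss-based relabelling these approaches build on can be sketched concretely: permute the labels of each MCMC draw so that it best matches a reference draw. A minimal sketch, assuming univariate component means and using the Hungarian algorithm from SciPy as the assignment solver; the pivot-matching scheme and all names here are illustrative, not the paper's specific algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel_draws(mu_draws, pivot):
    """Permute component labels of each MCMC draw to match a pivot draw.

    mu_draws : (n_draws, K) array of component means, one row per draw
    pivot    : (K,) reference means, e.g. from the highest-posterior draw
    """
    relabelled = np.empty_like(mu_draws)
    for t, mu in enumerate(mu_draws):
        # cost[i, j] = squared distance from pivot component i to draw component j
        cost = (pivot[:, None] - mu[None, :]) ** 2
        _, cols = linear_sum_assignment(cost)  # Hungarian algorithm
        relabelled[t] = mu[cols]               # apply the best permutation
    return relabelled

# Toy usage: the second draw is the same mixture with permuted labels
draws = np.array([[0.1, 5.0, 9.9],
                  [9.8, 0.2, 5.1]])
print(relabel_draws(draws, pivot=np.array([0.0, 5.0, 10.0])))
```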

Repulsive Mixture Models of Exponential Family PCA for Clustering

April 7, 2020

87% Match
Maoying Qiao, Tongliang Liu, Jun Yu, ..., Dacheng Tao
Machine Learning

The mixture extension of exponential family principal component analysis (EPCA) was designed to encode much more structural information about the data distribution than traditional EPCA does. For example, because EPCA is essentially linear, it cannot easily handle nonlinear cluster structures, whereas the mixture extension models them explicitly. However, the traditional mixture of local EPCAs suffers from model redundancy, i.e., overlaps among mixing co...

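The model redundancy described above is exactly what repulsive priors target: a penalty that is negligible for well-separated components and diverges as two components coincide. A minimal sketch of one common form, a product of pairwise repulsion terms over component means; the functional form and the scale parameter tau are assumptions, not the paper's exact construction.

```python
import numpy as np

def repulsion_log_prior(means, tau=1.0):
    """Log of a simple repulsive prior on component means:
    sum of log(1 - exp(-||mu_i - mu_j||^2 / tau)) over pairs,
    which tends to -inf as any two means coincide."""
    total = 0.0
    K = len(means)
    for i in range(K):
        for j in range(i + 1, K):
            d2 = np.sum((means[i] - means[j]) ** 2)
            total += np.log1p(-np.exp(-d2 / tau))
    return total

means_far = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
means_close = np.array([[0.0, 0.0], [0.1, 0.1], [10.0, 0.0]])
print(repulsion_log_prior(means_far) > repulsion_log_prior(means_close))  # True
```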

Computational Solutions for Bayesian Inference in Mixture Models

December 18, 2018

87% Match
Gilles Celeux, Kaniav Kamary, Gertraud Malsiner-Walli, ..., Christian P. Robert
Computation

This chapter surveys the most standard Monte Carlo methods available for simulating from a posterior distribution associated with a mixture, and conducts some experiments on the robustness of the Gibbs sampler in high-dimensional Gaussian settings. This is a chapter prepared for the forthcoming 'Handbook of Mixture Analysis'.

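For concreteness, here is a minimal Gibbs sampler of the kind the chapter surveys, for a univariate Gaussian mixture with known component variance; the conjugate priors and hyperparameters are illustrative choices, not the chapter's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_gmm(x, K, n_iter=1000, sigma=1.0, tau=10.0, alpha=1.0):
    """Gibbs sampler for a K-component Gaussian mixture with known
    component variance sigma^2, N(0, tau^2) priors on the means and a
    symmetric Dirichlet(alpha) prior on the weights."""
    mu = rng.normal(0.0, tau, K)
    w = np.full(K, 1.0 / K)
    draws = []
    for _ in range(n_iter):
        # 1. Allocations given means and weights
        logp = np.log(w)[None, :] - 0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=pi) for pi in p])
        # 2. Weights given allocations
        counts = np.bincount(z, minlength=K)
        w = rng.dirichlet(alpha + counts)
        # 3. Means given allocations (conjugate normal update)
        for k in range(K):
            var = 1.0 / (counts[k] / sigma**2 + 1.0 / tau**2)
            mu[k] = rng.normal(var * x[z == k].sum() / sigma**2, np.sqrt(var))
        draws.append(mu.copy())
    return np.array(draws)

x = np.concatenate([rng.normal(-3, 1, 100), rng.normal(3, 1, 100)])
print(gibbs_gmm(x, K=2, n_iter=200)[-5:])  # last few draws of the two means
```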

Minimum Message Length Clustering Using Gibbs Sampling

January 16, 2013

87% Match
Ian Davidson
Machine Learning

The K-means and EM algorithms are popular in clustering and mixture modelling due to their simplicity and ease of implementation. However, they have several significant limitations: both converge to a local optimum of their respective objective functions (ignoring the uncertainty in the model space), require the a priori specification of the number of classes/clusters, and are inconsistent. In this work we overcome these limitations by using the Minimum Message Length (MML) pri...

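The MML idea is a two-part trade-off: the cost of stating the model plus the cost of stating the data given the model. The paper develops a proper MML construction; the sketch below substitutes a cruder BIC-style parameter cost purely to show how such a trade-off selects the number of components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def two_part_length(x, K):
    """Crude two-part code length (in nats) for a K-component Gaussian
    mixture: a parameter cost that grows with K plus a data cost that
    shrinks as the fit improves. A BIC-style stand-in for MML."""
    gm = GaussianMixture(n_components=K, random_state=0).fit(x)
    n, d = x.shape
    n_params = (K - 1) + K * d + K * d * (d + 1) // 2  # weights, means, covariances
    model_cost = 0.5 * n_params * np.log(n)
    data_cost = -gm.score(x) * n  # negative total log-likelihood
    return model_cost + data_cost

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(m, 1, (150, 2)) for m in (-4, 0, 4)])
print(min(range(1, 7), key=lambda K: two_part_length(x, K)))  # typically 3
```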

Bayesian Inference for Latent Biologic Structure with Determinantal Point Processes (DPP)

June 27, 2015

87% Match
Yanxun Xu, Peter Mueller, Donatello Telesca
Methodology
Applications

We discuss the use of the determinantal point process (DPP) as a prior for latent structure in biomedical applications, where inference often centers on the interpretation of latent features as biologically or clinically meaningful structure. Typical examples include mixture models, when the terms of the mixture are meant to represent clinically meaningful subpopulations (of patients, genes, etc.). Another class of examples are feature allocation models. We propose the DPP pr...

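The repulsion a DPP prior induces comes from a determinant: the prior density of a set of component locations is proportional to the determinant of a kernel matrix over those locations, and that determinant collapses when two locations nearly coincide. A minimal sketch with a squared-exponential kernel; the kernel choice and length scale are assumptions.

```python
import numpy as np

def dpp_log_density(locations, length_scale=1.0):
    """Unnormalized log-density of a DPP over component locations:
    log-determinant of a squared-exponential Gram matrix. Near-duplicate
    locations make the matrix nearly singular, so the density collapses."""
    d2 = np.sum((locations[:, None, :] - locations[None, :, :]) ** 2, axis=-1)
    L = np.exp(-d2 / (2.0 * length_scale**2))
    return np.linalg.slogdet(L)[1]

spread = np.array([[0.0], [3.0], [6.0]])
clumped = np.array([[0.0], [0.1], [6.0]])
print(dpp_log_density(spread) > dpp_log_density(clumped))  # True
```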

Proximity penalty priors for Bayesian mixture models

July 27, 2011

87% Match
Matthew Sperrin
Methodology

When using mixture models, the modeller may have a priori beliefs or desires about what the components of the mixture should represent. For example, if a mixture of normal densities is to be fitted to some data, it may be desirable for components to focus on capturing differences in location rather than scale. We introduce a framework called proximity penalty priors (PPPs) that allows this preference to be made explicit in the prior information. The approach...

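A proximity penalty of this kind can be made explicit as a prior term that activates only when two components sit closer than some window in the penalized aspect (here, location). The windowed linear form and the parameters delta and lam below are illustrative, not Sperrin's specification.

```python
def proximity_log_penalty(means, delta=1.0, lam=5.0):
    """Log proximity penalty: each pair of component means closer than
    delta is penalized in proportion to how far inside the window it
    sits, pushing components to differ in location."""
    total = 0.0
    K = len(means)
    for i in range(K):
        for j in range(i + 1, K):
            d = abs(means[i] - means[j])
            if d < delta:
                total -= lam * (delta - d)  # active only when components are close
    return total

print(proximity_log_penalty([0.0, 0.2, 5.0]))  # -4.0: one pair inside the window
print(proximity_log_penalty([0.0, 2.0, 5.0]))  # 0.0: all pairs separated
```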

Bayesian inference of Gaussian mixture models with noninformative priors

May 19, 2014

87% Match
Colin J. Stoneking
Methodology

This paper deals with Bayesian inference of a mixture of Gaussian distributions. A novel formulation of the mixture model is introduced, which includes the prior constraint that each Gaussian component is always assigned a minimal number of data points. This enables noninformative improper priors such as the Jeffreys prior to be placed on the component parameters. We demonstrate difficulties involved in specifying a prior for the standard Gaussian mixture model, and show how ...

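The minimal-occupancy constraint can be pictured as a guard inside the Gibbs allocation step: a label move is blocked whenever it would drop a component below the floor. The sketch below only shows where the constraint bites; a full sampler would need to account for the blocked moves to remain a valid MCMC scheme, and min_size and the inputs are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_allocation_step(z, probs, min_size=3):
    """One sweep of allocation updates respecting a minimum component
    size: a reassignment is skipped whenever it would drop the point's
    current component below min_size.

    z     : (n,) current component labels
    probs : (n, K) full-conditional allocation probabilities
    """
    K = probs.shape[1]
    counts = np.bincount(z, minlength=K)
    for i in range(len(z)):
        k_new = rng.choice(K, p=probs[i])
        k_old = z[i]
        if k_new != k_old and counts[k_old] - 1 < min_size:
            continue  # move would violate the size floor; keep the old label
        counts[k_old] -= 1
        counts[k_new] += 1
        z[i] = k_new
    return z

z = np.array([0, 0, 0, 1, 1, 1, 1])
probs = np.full((7, 2), 0.5)
print(constrained_allocation_step(z, probs))  # every component keeps >= 3 points
```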

Bayesian Clustering via Fusing of Localized Densities

March 31, 2023

86% Match
Alexander Dombowsky, David B. Dunson
Methodology

Bayesian clustering typically relies on mixture models, with each component interpreted as a different cluster. After defining a prior for the component parameters and weights, Markov chain Monte Carlo (MCMC) algorithms are commonly used to produce samples from the posterior distribution of the component labels. The data are then clustered by minimizing the expectation of a clustering loss function that favours similarity to the component labels. Unfortunately, although these...

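The final step described above, minimizing an expected clustering loss over the posterior of the labels, is commonly computed through the posterior similarity matrix. A minimal sketch using the Binder loss, one standard choice for this step; the paper's own proposal differs, so treat this as the generic baseline.

```python
import numpy as np

def posterior_similarity(label_draws):
    """Entry (i, j) is the fraction of MCMC draws in which observations
    i and j share a component label (invariant to label switching)."""
    T, n = label_draws.shape
    sim = np.zeros((n, n))
    for z in label_draws:
        sim += (z[:, None] == z[None, :])
    return sim / T

def binder_loss(partition, sim):
    """Expected Binder loss of a candidate partition: penalizes pairs
    clustered together that the posterior separates, and vice versa."""
    same = partition[:, None] == partition[None, :]
    return np.sum(same * (1 - sim) + (~same) * sim)

draws = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [1, 1, 0, 0]])  # label-switched, but the same partition
sim = posterior_similarity(draws)
print(binder_loss(np.array([0, 0, 1, 1]), sim))  # 0.0: agrees with every draw
print(binder_loss(np.array([0, 1, 0, 1]), sim))  # larger: disagrees with the posterior
```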

Bayesian density regression for discrete outcomes

March 31, 2016

86% Match
Georgios Papageorgiou
Methodology

We develop Bayesian models for density regression with emphasis on discrete outcomes. The problem of density regression is approached by considering methods for multivariate density estimation of mixed scale variables, and obtaining conditional densities from the multivariate ones. The approach to multivariate mixed scale outcome density estimation that we describe represents discrete variables, either responses or covariates, as discretised versions of continuous latent vari...

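The device of representing discrete variables as discretised latent continuous ones can be shown in a few lines: an observed discrete value is just the bin its latent continuous draw falls into. The cut-points and latent distribution below are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A discrete outcome as a discretised latent continuous variable:
# y = k  iff  the latent y* falls in [cuts[k], cuts[k+1]).
cuts = np.array([-np.inf, 0.0, 1.0, 2.0, np.inf])    # illustrative cut-points
latent = rng.normal(loc=0.8, scale=1.0, size=10)     # latent continuous draws
y = np.searchsorted(cuts, latent, side="right") - 1  # observed discrete values
print(latent.round(2))
print(y)  # each observation is the bin index of its latent draw
```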

Covariate-dependent hierarchical Dirichlet process

July 2, 2024

86% Match
Huizi Zhang, Sara Wade, Natalia Bochkina
Methodology

Contemporary real datasets are often intricate enough to demand more advanced statistical models. In this article we study the problem of identifying clusters across related groups when additional covariate information is available. We formulate a novel Bayesian nonparametric approach based on mixture models, integrating ideas from the hierarchical Dirichlet process and "single-atoms" dependent Dirichlet process. The proposed...

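The hierarchical Dirichlet process machinery referenced here rests on stick-breaking: every group shares one global set of atoms but carries its own weights over them. The sketch below draws each group's weights by independent truncated stick-breaking, which only mimics the shared-atoms structure; in a real HDP the group-level weights are coupled through the global measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, K):
    """Truncated stick-breaking weights for a Dirichlet process:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{l<k} (1 - v_l).
    (Truncated at K sticks, so the weights sum to slightly less than 1.)"""
    v = rng.beta(1.0, alpha, K)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

atoms = rng.normal(0, 3, 10)  # one global set of component locations
group_weights = [stick_breaking(2.0, 10) for _ in range(3)]
for w in group_weights:
    # Each group mixes the SAME atoms with its OWN weights
    print(np.round(w[:4], 3), "-> group mean:", np.round(w @ atoms, 2))
```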