Jiang and Tanner (2008) consider a method of classification using the Gibbs posterior, which is constructed directly from the empirical classification errors. They propose an algorithm to sample from the Gibbs posterior that uses a smoothed approximation of the empirical classification error, via a Gibbs sampler with augmented latent variables. In this paper, we note some drawbacks of this algorithm and propose an alternative method for sampling from the Gibbs posterior, based on the Metropolis algorithm. The numerical performance of the algorithms is examined and compared on simulated data. We find that the Metropolis algorithm produces good classification results with improved computational speed.
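The Metropolis approach described above can be sketched as follows for a linear classifier: the target is an unnormalized Gibbs posterior proportional to exp(-λ × empirical error) times a prior, which a symmetric random-walk Metropolis sampler can explore without smoothing the error (no gradients are needed). The synthetic data, Gaussian prior, temperature λ, and step size here are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic binary classification data (assumption, not the paper's setup)
n = 200
X = rng.normal(size=(n, 2))
true_w = np.array([1.5, -2.0])
y = np.where(X @ true_w + 0.3 * rng.normal(size=n) > 0, 1, -1)

def empirical_error(w):
    """Fraction of points misclassified by the linear rule sign(X @ w)."""
    return np.mean(np.sign(X @ w) != y)

def log_gibbs_posterior(w, lam=50.0):
    """Unnormalized log Gibbs posterior: -lam * empirical risk + standard normal log-prior."""
    return -lam * empirical_error(w) - 0.5 * np.dot(w, w)

def metropolis(n_iter=5000, step=0.3):
    """Random-walk Metropolis targeting the Gibbs posterior.

    The empirical error is a step function of w, so no smoothing or latent
    variables are required: the accept/reject step only evaluates the target.
    """
    w = np.zeros(2)
    lp = log_gibbs_posterior(w)
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        prop = w + step * rng.normal(size=2)  # symmetric Gaussian proposal
        lp_prop = log_gibbs_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
            w, lp = prop, lp_prop
        samples[t] = w
    return samples

samples = metropolis()
w_hat = samples[2500:].mean(axis=0)  # posterior mean after burn-in
```

A posterior-mean classifier built this way only requires evaluating the empirical error at each proposal, which is what makes the Metropolis route cheaper than data augmentation per iteration.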