Parallel Markov chain Monte Carlo

Bibliographic Details
Main Author: Tibbits, Matthew M
Format: Dissertation
Language: English
Published: ProQuest Dissertations & Theses, 01-01-2011
Description
Summary: Markov chain Monte Carlo (MCMC) sampling has garnered wide acceptance as a very general approach for approximating integrals (expectations) with respect to a wide range of distributions. As the complexity of statistical models has increased and datasets have grown in size, it has become critical to design computationally efficient MCMC sampling algorithms. Further, because MCMC algorithms are, by design, serial in nature, the efficient use of modern, parallel computing hardware is non-trivial. It is also important to develop 'off-the-shelf' or automated MCMC algorithms that require minimal user intervention, giving a broader range of users access to these methods without extensive knowledge of the optimal tuning parameters for either the MCMC algorithm or its implementation within a parallelized environment. These driving forces necessitate the development of efficient MCMC sampling techniques that can be decomposed and distributed across parallel architectures irrespective of the statistical model in question. Methods for parallelizing MCMC algorithms tend to focus on parallelizing the evolution of a single Markov chain or, when those computations are not easily decomposed, on employing multiple chains in a cooperative fashion. In this dissertation, we investigate three parallel sampling approaches. The first is an extension of the parallel multivariate slice sampler in which sampling occurs in a transformed space to simultaneously reduce posterior autocorrelation in successive draws and facilitate an automated tuning procedure. The second is a parallelized single-chain Langevin-Hastings sampler in which the gradient vector and Hessian matrix are approximated in parallel when analytically intractable. The third is a parallelized multi-chain method that constructs an efficient interpolator for an expensive likelihood computation, which can then be sampled efficiently with a single chain. These parallel samplers are compared in terms of sampling and computational efficiency in the context of several examples, including linear and generalized linear spatial models.
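To make the second approach more concrete, the following is a minimal sketch (in Python) of a Langevin-Hastings (MALA) step in which the log-posterior gradient is approximated by central finite differences, with each gradient component evaluated as an independent parallel task. This is not the dissertation's implementation: the toy log-posterior log_post, the step size eps, and the use of concurrent.futures.ProcessPoolExecutor are illustrative assumptions, and the Hessian-preconditioned variant described in the abstract is omitted.

```python
# Minimal sketch: MALA with a parallel finite-difference gradient approximation.
# All names (log_post, eps, h) are illustrative assumptions, not from the dissertation.
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def log_post(theta):
    """Toy log-posterior: standard multivariate normal."""
    return -0.5 * np.dot(theta, theta)


def _fd_component(args):
    """Central finite-difference estimate of one gradient component."""
    theta, i, h = args
    e = np.zeros_like(theta)
    e[i] = h
    return (log_post(theta + e) - log_post(theta - e)) / (2.0 * h)


def approx_grad(theta, pool, h=1e-5):
    """Approximate the gradient; each component is an independent parallel task."""
    tasks = [(theta, i, h) for i in range(theta.size)]
    return np.array(list(pool.map(_fd_component, tasks)))


def mala_step(theta, pool, eps=0.5, rng=None):
    """One Metropolis-adjusted Langevin step using the approximated gradient."""
    if rng is None:
        rng = np.random.default_rng()
    g = approx_grad(theta, pool)
    prop = theta + 0.5 * eps ** 2 * g + eps * rng.standard_normal(theta.size)
    g_prop = approx_grad(prop, pool)

    def log_q(to, frm, grad_frm):
        # Log density of the Langevin proposal q(to | frm), up to a constant.
        diff = to - frm - 0.5 * eps ** 2 * grad_frm
        return -np.dot(diff, diff) / (2.0 * eps ** 2)

    log_alpha = (log_post(prop) + log_q(theta, prop, g_prop)
                 - log_post(theta) - log_q(prop, theta, g))
    return prop if np.log(rng.uniform()) < log_alpha else theta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = np.zeros(5)
    draws = []
    with ProcessPoolExecutor(max_workers=4) as pool:
        for _ in range(1000):
            theta = mala_step(theta, pool, rng=rng)
            draws.append(theta.copy())
    print("posterior mean estimate:", np.mean(draws, axis=0))
```

For a toy target like this one the process-pool overhead dominates; the parallel gradient approximation only pays off when each likelihood evaluation is expensive, which is the setting the dissertation targets.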
ISBN: 9781303552076; 1303552078