MCMC Error
Markov chain Monte Carlo standard errors
Posted by Andrew on 2 April 2007, 12:36 am

Galin Jones sent me this paper (by James Flegal, Murali Haran, and himself), which he said started with a suggestion I once made to him long ago. That's pretty cool! Here's the abstract:

Current reporting of results based on Markov chain Monte Carlo computations could be improved. In particular, a measure of the accuracy of the resulting estimates is rarely reported in the literature. Thus the reader has little ability to objectively assess the quality of the reported estimates. This paper is an attempt to address this issue in that we discuss why Monte Carlo standard errors are important, how they can be easily calculated
in Markov chain Monte Carlo and how they can be used to decide when to stop the simulation. We compare their use to a popular alternative in the context of two examples.

This is a clear paper with some interesting results. My main suggestion is to distinguish two goals: estimating a parameter in a model and estimating an expectation. To use Bayesian notation, if we have simulations theta_1, …, theta_L from a posterior distribution p(theta|y), the two goals are estimating theta or estimating E(theta|y). (Assume for simplicity here that theta is a scalar, or a scalar summary of a vector parameter.)

Inference for theta or inference for E(theta)

When the goal is to estimate theta, all you really need is to estimate theta to more accuracy than its standard error (in Bayesian terms, its posterior standard deviation). For example, if a parameter is estimated at 3.5 +/- 1.2, that's fine. There's no point in knowing that the posterior mean is 3.538. To put it another way, as we draw more simulations, we can estimate that "3.538" more precisely (our standard error on E(theta|y) will approach zero), but that 1.2 ain't going down much. The standard error on theta (that is, sd(theta|y)) is what it is. This is a general issue in simulation (not just using Markov chains), and we discuss it on page 277 of Bayesian Data Analysis (second edition): if the goal is inference about theta, and you have 100 or more independent simulation draws, then the Monte Carlo error adds almost nothing to the uncertainty coming from the actual posterior variance.

On the other hand, if your goal is to estimate E(theta|y) to some precision, then you might want lots and lots of simulation draws. This would be the example where you actually want to know that it's 3.538, rather than simply 3.5. In my applications, I want inference for theta.
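The distinction between the two goals is easy to see numerically. Here is a minimal sketch in Python/numpy, using the illustrative "3.5 +/- 1.2" posterior from the text as a stand-in (the normal posterior and the seed are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior: theta | y ~ Normal(3.5, 1.2^2), matching the
# "3.5 +/- 1.2" example in the text (illustrative values, not a real model).
for L in (100, 10_000):
    draws = rng.normal(3.5, 1.2, size=L)
    post_sd = draws.std(ddof=1)        # estimate of sd(theta|y): stays near 1.2
    mcse_mean = post_sd / np.sqrt(L)   # Monte Carlo SE of the posterior-mean estimate
    print(L, round(draws.mean(), 3), round(post_sd, 2), round(mcse_mean, 3))
```

Going from 100 to 10,000 draws shrinks the Monte Carlo standard error on E(theta|y) by a factor of 10, while the estimated posterior standard deviation barely moves: more simulation sharpens the "3.538" but not the "1.2".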
S.E. reflects the fact that the posterior mean is calculated from a finite sample drawn from the posterior distribution. S.E. is estimated by the method of batch means; by default, 20 batches are used to estimate S.E. To change the number of batches, click View → Options → MCMC. (From the Amos help: http://amosdevelopment.com/webhelp/imp-post-labelse.htm)
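The batch-means method mentioned above is straightforward to implement: split the chain into consecutive batches and use the spread of the batch means, which accounts for autocorrelation in the chain. A minimal sketch in Python/numpy (the `batch_means_se` helper and the AR(1) example chain are invented for illustration, not taken from Amos):

```python
import numpy as np

def batch_means_se(chain, n_batches=20):
    """Monte Carlo standard error of the chain's mean via batch means.

    Splits the chain into n_batches consecutive batches (20 by default,
    as in the description above) and estimates the variance of the
    overall mean from the variance of the batch means.
    """
    chain = np.asarray(chain, dtype=float)
    batch_size = len(chain) // n_batches
    chain = chain[: batch_size * n_batches]            # drop any remainder
    means = chain.reshape(n_batches, batch_size).mean(axis=1)
    # var(overall mean) ~ var(batch means) / n_batches
    return means.std(ddof=1) / np.sqrt(n_batches)

# Example on a correlated AR(1) chain, a stand-in for MCMC output:
rng = np.random.default_rng(1)
x = np.empty(20_000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal()
print(batch_means_se(x))
```

For a positively autocorrelated chain like this, the batch-means estimate is noticeably larger than the naive sd/sqrt(n), which is exactly why the naive formula should not be trusted for MCMC output.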
Inferring the error variance in Metropolis-Hastings MCMC
Posted on February 7, 2013 by dovstekel
https://dovlab.wordpress.com/2013/02/07/inferring-the-error-variance-in-metropolis-hastings-mcmc/

One of the great joys of working with two talented post-docs in the research group, Mike Stout and Mudassar Iqbal, as well as a great collaboration with Theodore Kypraios, is that they are often one step ahead of me and I am playing catch-up. Recently, Theo has discussed with them how to estimate the error variance associated with the data used in Metropolis-Hastings MCMC simulations.

The starting point, usually, is that we have some data, let us say y_i for i = 1, …, n, and a model (usually, in our case, a dynamical system) which we are trying to fit to the data. For any given set of parameters theta, our model will provide estimates for the data points that we will call f_i(theta). Now, assuming uniform Gaussian errors, our likelihood function looks like:

L(theta, sigma^2) = (2 pi sigma^2)^(-n/2) exp( -sum_i (y_i - f_i(theta))^2 / (2 sigma^2) ),

where sigma^2 is the error variance associated with the data. Now, when I first started using MCMC, I naively thought that we could use values for sigma^2 provided by our experimental collaborators, and so we could use different values of sigma^2 according to how confident our collaborators were in the measurements, equipment etc. What I found in practice was that these values rarely worked (in terms of convergence of the Markov chain) and we have had to make up error variances using trial and error. So I was delighted when I heard that Theo had briefed both Mike and Mudassar about a method for estimating the error variance as part of the MCMC. Since I have not tried it before, I thought I would give it a go. I am posting the theory and some of my simulations, which give helpful results.

Theory

The theory behind estimating sigma^2 is as follows.
First, set tau = 1/sigma^2. We can then re-write the likelihood, now as a function of the model parameters theta and also the unknown value tau, as

L(theta, tau) proportional to tau^(n/2) exp( -tau s / 2 ),

where s = sum_i (y_i - f_i(theta))^2 is the sum of squared residuals between the data and the model predictions. Now observe that this has the functional form of a Gamma distribution for tau, as the p.d.f. for a Gamma distribution with parameters alpha and beta is given by:

p(tau) = ( beta^alpha / Gamma(alpha) ) tau^(alpha - 1) exp( -beta tau ).

So if we set a prior distribution for tau as a Gamma distribution with parameters alpha and beta, then the conditional posterior distribution for tau is given by:

p(tau | theta, y) proportional to tau^(alpha + n/2 - 1) exp( -(beta + s/2) tau ).

We observe that this is itself a Gamma distribution, with parameters alpha + n/2 and beta + s/2. Thus the parameter tau can be sampled directly from this distribution within the MCMC, as a Gibbs step.
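The conjugate Gamma update for the precision tau = 1/sigma^2, with conditional posterior Gamma(alpha + n/2, beta + s/2), is easy to check numerically. A minimal sketch in Python/numpy on synthetic data (the sine curve, the true noise level 0.5, and the weak prior alpha = beta = 1 are all invented for illustration, not the authors' actual model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: a known curve plus Gaussian noise with true variance 0.5^2.
n = 200
t = np.linspace(0, 1, n)
f = np.sin(2 * np.pi * t)              # model predictions at the "true" parameters
y = f + rng.normal(0, 0.5, size=n)     # observed data

# Conjugate update: with tau = 1/sigma^2 and a Gamma(alpha, beta) prior,
# the conditional posterior is Gamma(alpha + n/2, beta + s/2),
# where s is the sum of squared residuals.
alpha, beta = 1.0, 1.0                 # weak prior (assumed values)
s = np.sum((y - f) ** 2)
# numpy's gamma sampler is parameterised by shape and scale = 1/rate:
tau_draws = rng.gamma(shape=alpha + n / 2, scale=1.0 / (beta + s / 2), size=5_000)
sigma2_draws = 1.0 / tau_draws
print(sigma2_draws.mean())             # should sit near the true 0.25
```

Inside a Metropolis-Hastings sampler, this draw would be interleaved with the usual proposal steps for theta (a Metropolis-within-Gibbs scheme), with s recomputed from the current residuals at each iteration.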