$\begingroup$

In short: Could you share any references that explore the assessment of "convergence" and mean-estimate precision in MCMC by means of quantiles (or related quantities), rather than of variance?

Longer explanation:

All common measures for assessing "convergence" of Markov-chain Monte Carlo, as well as the error of the mean estimate obtained from it, seem to be essentially based on computing variances/standard deviations, or estimates thereof. This seems to hold, for instance, both for methods based on batches and for those based on time series and autocorrelations. Examples are batch means, the Geweke diagnostic, Geyer's initial-sequence estimator, and others; see for instance the Handbook of Markov Chain Monte Carlo.

I'm looking for works that explore the use of estimators and diagnostics based on quantiles rather than variances; or on related quantities such as interquartile range or median absolute deviation. Grateful to anyone who can share relevant references!

NB: I'm not speaking about estimation of quantiles, but about estimation by means of quantiles.

Edit: Why?

  • First of all, just curiosity

  • The use of the Monte Carlo Standard Error is related to various beautiful theorems about its (possible) normality as the number of MCMC samples increases. As far as I know, all of these theorems assume that the stationary distribution of the Markov chain has finite variance (besides other assumptions about ergodicity etc.). But what if it doesn't? Suppose for instance that it's a t-distribution with between 1 and 2 degrees of freedom (finite mean, infinite variance). (Edit: interesting recent paper about this!)

  • Error-related quantities like the IQR and the MAD are more robust to outliers than the standard deviation.

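To make the infinite-variance point concrete, here is a small illustration (a sketch, not part of any of the cited methods: i.i.d. draws stand in for an MCMC chain, and the t(1.5) sampler uses the usual normal-over-root-chi-square construction). The sample standard deviation is dominated by rare huge draws and never settles, while the sample IQR and MAD converge to their finite population values:

```python
import random
import statistics

random.seed(42)

def student_t(df):
    """One Student-t draw: z / sqrt(chi2/df), with the chi-square
    generated via its gamma(df/2, scale=2) representation so that
    fractional degrees of freedom work."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)
    return z / (chi2 / df) ** 0.5

# i.i.d. draws standing in for a chain targeting t(1.5):
# finite mean, infinite variance.
draws = [student_t(1.5) for _ in range(20000)]

sd = statistics.stdev(draws)            # unstable: driven by the extreme draws
q1, _, q3 = statistics.quantiles(draws, n=4)
iqr = q3 - q1                           # converges to the population IQR
med = statistics.median(draws)
mad = statistics.median(abs(x - med) for x in draws)  # converges likewise

print(f"sd  = {sd:.2f}")
print(f"IQR = {iqr:.2f}")
print(f"MAD = {mad:.2f}")
```

Rerunning with a larger sample (or a different seed) leaves the IQR and MAD essentially unchanged while the standard deviation jumps around, which is exactly the pathology the variance-based error estimates inherit.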
$\endgroup$
  • $\begingroup$ Could you clarify the possible connection between "mean-estimate precision" and quantiles? The two would seem to be only distantly related. $\endgroup$ Commented Jul 3 at 13:12
  • $\begingroup$ @whuber In the sense that the "precision" of the MC-estimate of the mean is usually quantified as an (estimated) standard deviation; but one could also use an IQR. Tricky with terminology here... $\endgroup$ Commented Jul 3 at 13:38
  • $\begingroup$ @whuber changed "precision" to "error" and modified the wording slightly. Not sure if it's more comprehensible now. $\endgroup$ Commented Jul 3 at 13:40
  • $\begingroup$ Thanks. There is a related technical sense of "precision" in this context: it is the reciprocal of the variance of the mean. I mention that because it highlights the unique relevance of the variance for assessing any estimate of any parameter and leads me to wonder, what do you hope to learn from studying quantiles of a Markov chain? $\endgroup$ Commented Jul 3 at 14:05
  • $\begingroup$ @whuber thank you for the useful terminological explanation – I wasn't aware! I'm going to answer your "why?" question in my original question, will edit it now. $\endgroup$ Commented Jul 3 at 15:02

1 Answer

$\begingroup$

If I understand the question correctly, then the improved rank-based $\hat{R}$ of Vehtari et al. and the associated effective sample size estimators (for mean, median, quantiles, ...) are exactly what you are looking for: https://doi.org/10.1214/20-BA1221

It is also arguably a pretty standard diagnostic, being the default in Stan and associated packages.
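For reference, the bulk version of that diagnostic is short enough to sketch in plain Python. This is a simplified illustration under stated assumptions, not the reference implementation: it ignores tied draws (the paper uses average ranks), and it omits the folded/tail $\hat{R}$ and the final maximum over bulk and tail that the paper prescribes:

```python
import statistics
from statistics import NormalDist

def split_rhat(chains):
    """Classic split-R-hat: split each chain in half, then compare
    between- and within-chain variances."""
    halves = []
    for c in chains:
        m = len(c) // 2
        halves += [c[:m], c[m:2 * m]]
    n = len(halves[0])
    means = [statistics.fmean(h) for h in halves]
    w = statistics.fmean(statistics.variance(h) for h in halves)  # within
    b = n * statistics.variance(means)                            # between
    var_plus = (n - 1) / n * w + b / n
    return (var_plus / w) ** 0.5

def rank_normalize(chains):
    """Pool all draws, rank them, and map ranks through the normal
    quantile function with the (r - 3/8)/(S + 1/4) offset used in the
    paper. Ties are broken arbitrarily in this sketch."""
    pooled = sorted((x, (i, j)) for i, c in enumerate(chains)
                                for j, x in enumerate(c))
    s = len(pooled)
    z = [[0.0] * len(c) for c in chains]
    for rank, (_, (i, j)) in enumerate(pooled, start=1):
        z[i][j] = NormalDist().inv_cdf((rank - 0.375) / (s + 0.25))
    return z

def rank_rhat(chains):
    """Bulk rank-normalized split-R-hat: well defined even when the
    target distribution has infinite variance, since only ranks enter."""
    return split_rhat(rank_normalize(chains))

if __name__ == "__main__":
    import random
    random.seed(0)
    chains = [[random.gauss(0.0, 1.0) for _ in range(1000)] for _ in range(4)]
    print(f"rank-normalized split-Rhat (well-mixed): {rank_rhat(chains):.3f}")
```

On well-mixed chains this returns values very close to 1, while chains stuck around different modes push it well above the 1.01 threshold recommended in the paper. Because the draws enter only through their ranks, the diagnostic behaves sensibly even for infinite-variance targets such as the t-distributions mentioned in the question.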

$\endgroup$
  • $\begingroup$ Thank you very much for pointing this out. Not being a fan of the Gelman-Rubin R, and thinking this was just some changes to it, I skipped it altogether. But it has some nice ideas, especially as regards infinite variance, that are independent of that particular diagnostic. $\endgroup$ Commented Jul 4 at 10:47
  • $\begingroup$ @pglpm nice to hear. As a practitioner, my experience over many models (mostly Stan but also others) is that the current $\hat{R}$ version produces almost 0 false positives (when it is > 1.01, there is a problem) while being very sensitive (when there are problems signalled by other diagnostics, $\hat{R}$ is also very frequently high). So I can recommend. $\endgroup$ Commented Jul 4 at 12:38
  • $\begingroup$ MUCH appreciated comment, thank you! $\endgroup$ Commented Jul 4 at 14:05
  • $\begingroup$ I'll wait a bit before selecting this as answer, to invite others to share other possible references. $\endgroup$ Commented Jul 4 at 14:07
