
  • I am confused. In what sort of case would you know that your prior should depend on the model parameterization? Commented Apr 30, 2013 at 19:42
  • If we want to predict longevity as a function of body weight using a GLM, we know that the conclusion should not depend on whether we weigh the subject in kg or lb; if you use a simple uniform prior over the weights, you might get a different outcome depending on the units of measurement. Commented May 1, 2013 at 10:08
  • That's a case where you know that it shouldn't be affected. What is a case where it should? Commented May 3, 2013 at 17:21
  • I think you are missing my point. Say we don't know anything about the attributes, not even that they have units of measurement to which the analysis should be invariant. In that case your prior would encode less information about the problem than the Jeffreys prior, hence the Jeffreys prior is not completely uninformative. There may or may not be situations where the analysis should not be invariant to some transformation, but that is beside the point. Commented May 3, 2013 at 17:51
  • N.B. According to the BUGS book (p. 83), Jeffreys himself referred to such transformation-invariant priors as "minimally informative", which implies that he saw them as encoding some information about the problem. Commented May 3, 2013 at 18:06
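The invariance point raised in these comments can be illustrated numerically: a prior that is "flat" in one parameterization is generally not flat after a nonlinear reparameterization, so "uniform" is not a parameterization-free notion of ignorance. Below is a minimal Monte Carlo sketch (standard library only; the logit transform is chosen here purely for illustration, it is not from the comments above):

```python
import random
import math

random.seed(0)
N = 100_000

# Draw theta uniformly on (0, 1): a "flat" prior in the theta parameterization.
thetas = [random.random() for _ in range(N)]

# Reparameterize: phi = logit(theta). If flatness were invariant,
# the induced distribution of phi would also be flat -- it is not.
phis = [math.log(t / (1 - t)) for t in thetas]

# Estimate the induced density of phi in two unit-wide bins.
near_0 = sum(1 for p in phis if -0.5 < p < 0.5) / N   # mass near phi = 0
near_3 = sum(1 for p in phis if 2.5 < p < 3.5) / N    # mass near phi = 3

print(near_0, near_3)  # the density near 0 is several times that near 3
```

By the change-of-variables formula the induced density is the logistic sigmoid's derivative, which peaks at phi = 0 and decays in the tails, so the "uninformative" uniform prior on theta is strongly informative about phi. The Jeffreys prior is constructed precisely so that this kind of mismatch cannot occur under reparameterization.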