18 events
Apr 17, 2024 at 6:27 comment added Dikran Marsupial "there would have been no contradiction" there is no contradiction - that is the point! A 95% confidence interval does not CLAIM to contain the "true value" with probability 95%.
Apr 17, 2024 at 6:27 comment added Dikran Marsupial @jginestet I think you are missing the point. There may be multiple CIs, but the point is not that the CI is invalid, just that it doesn't answer the same question as a credible interval. The fact that you can make ANY valid CI that is guaranteed not to contain the true value makes that point. There is another example in David MacKay's book where a 75% CI has only a 50% chance of containing the true value. The example isn't a fallacy, it is just making a valid point which is that credible and confidence intervals are not the same thing.
Apr 16, 2024 at 22:23 comment added jginestet ...cont. That counterexample is a fallacy. It is complex enough that it sounds convincing, but the argument relies on a priori information about the probability model which was not used to construct the CI; not fair! And thus not a counterexample. Just like your own counterexample, in the linked post, which suffers from exactly the same issue: not using all the a priori known model information...
Apr 16, 2024 at 22:18 comment added jginestet ...cont. The CI should have been constructed to comply with ALL the information of the model (including that the low bound of the CI has an upper bound by definition), and only then minimized in width. This is basically a variant of the Texas sharpshooter fallacy: I challenge you to hit a barn, you shoot the wall in front of you, and then I paint a target on the side wall and tell you "you missed!". Sorry, no one missed; I withheld information from you. If Jaynes had constructed his CI properly (with the constraint on the low bound), there would have been no contradiction. ...cont
Apr 16, 2024 at 22:14 comment added jginestet @DikranMarsupial, except that Jaynes' "counterexample" is not valid. He proposes a probability model, then derives a formula for constructing 95% CIs of a parameter of his model. That formula allows for multiple possible CIs. He then proposes to pick the CI which has the narrowest width (a reasonable proposal in many cases). He then declares this CI to be invalid because its low bound is greater than $x_1$, which is nonsensical given the properties of the model. QED? No, because when he created that CI, he did not use that property. So of course his CI will not be compatible. ...cont
Apr 16, 2024 at 16:51 comment added Dikran Marsupial "that one, single CI has at least a 100⋅(1−α) chance" unless it is the single CI in ET Jaynes' example, in which case it definitely doesn't. For me there is usually a good chance of the CI containing the "true value"; the only problem is that there is no direct connection between that probability and $\alpha$ (because confidence and credible intervals answer different questions, but the answers are usually similar). All we really need is for the reader to understand CIs well enough to understand that, and then they can use their common sense.
Apr 16, 2024 at 14:04 comment added whuber There are some picky voters here ;-). You might weather the vagaries of voting by refining your characterization "that one, single CI has a 100⋅(1−α) chance" to "that one, single CI has at least a 100⋅(1−α) chance (in the sense of the preceding paragraph)..."//Although I agree that this might be a mathematical "distinction without a difference," the two different ways of phrasing the event (that a CI covers the parameter) are understood by English speakers in substantially different ways. To succeed with statistical analysis we need to communicate well, so we need to be sensitive to that.
Apr 16, 2024 at 0:04 comment added Christian Hennig @jginestet Do you know about fiducial inference en.wikipedia.org/wiki/Fiducial_inference ? How is your interpretation different, "common sense" or not? (It's not so "common" by the way, as many disagree with this.)
Apr 15, 2024 at 23:13 comment added jginestet @ChristianHennig, are you commenting on the use of the full information from the model? Then no, this is not "fiducial", this is simply common sense: one cannot make some statements based on some information, then another statement based on less information, and then claim that there is a contradiction. This is the sharpshooter fallacy (I tell a shooter to hit the pre-drawn target, but I draw my target after the shot...).
Apr 15, 2024 at 22:36 comment added Christian Hennig This is Fisher's fiducial argument, isn't it? Neyman thought it's wrong but it refuses to die... (also in the literature).
Apr 15, 2024 at 19:58 comment added jginestet ...cont... But if I make use of the full model when interpreting a single CI (if the width is 0, confidence is 50%, otherwise it is 100%), then there is no contradiction. You need to find a better counter-example, if you can... Just as a question: if I say a 95% CI has a 95% chance of containing θ, or θ has a 95% chance of being contained in the CI, are the 2 statements: 1) both wrong 2) both correct 3) only 1st is correct 4) only 2nd is correct 5) unknowable? What is your take?
Apr 15, 2024 at 19:58 comment added jginestet @DikranMarsupial, unfortunately your example in the linked post is not valid. When you say your set of 4 CIs is a 75% confidence set, this ignores information you have about the probability model. But then when you say the interval [29,29] is only a 50% interval (btw, why change θ?), you are making use of all the information from the model. Apples and oranges. The set of 4 possible CIs is in fact 2 sets: a set of 2 with 50% confidence, and a set of 2 with 100%. It averages to 75%, but that ignores that, when min = max, it is really only a 50% CI. ...cont...
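The kind of two-point example debated in these comments can be checked with a short simulation. This is a sketch: the procedure below (midpoint if the two observations differ, otherwise guess low) and the value theta = 28 are illustrative assumptions with the same shape as the discussion, not taken from the linked post.

```python
import random

random.seed(0)

# Two-point model: each observation equals theta - 1 or theta + 1
# with probability 1/2; theta is a hypothetical true value.
def draw(theta):
    return theta + random.choice([-1, 1])

# Interval procedure: if the observations differ, their midpoint is
# exactly theta (always correct); if they coincide, guess x1 - 1
# (correct half the time). Overall coverage: 1/2 * 1 + 1/2 * 1/2 = 75%.
def interval(x1, x2):
    if x1 != x2:
        mid = (x1 + x2) / 2
        return (mid, mid)
    return (x1 - 1, x1 - 1)

theta = 28
trials = 100_000
hits = hits_equal = n_equal = 0
for _ in range(trials):
    x1, x2 = draw(theta), draw(theta)
    lo, hi = interval(x1, x2)
    c = lo <= theta <= hi
    hits += c
    if x1 == x2:
        n_equal += 1
        hits_equal += c

print(round(hits / trials, 2))         # overall coverage, ~0.75
print(round(hits_equal / n_equal, 2))  # coverage when x1 == x2, ~0.50
```

The procedure is a valid 75% confidence procedure overall, yet conditional on seeing two equal observations the realized coverage is only 50%, which is exactly the tension both sides are arguing about.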
Apr 15, 2024 at 10:58 comment added Dikran Marsupial Confusing confidence intervals with Bayesian credible intervals is often benign because there is a choice of prior for which the two are numerically the same. However practitioners need to understand the distinction for the cases where it is not benign, because otherwise they can fall into a pitfall without knowing it was there. It is not just a matter of purism - they are answers to different questions.
Apr 15, 2024 at 10:55 comment added Dikran Marsupial This answer is incorrect (see stats.stackexchange.com/questions/26450/… ): it is possible to construct a valid p% confidence interval for a sample of data that you can be absolutely certain does not contain the true value. I think the error immediately follows "Now, let's continue from here." Your probability refers to the imaginary population of experiments that you didn't perform, not the specific experiment that you actually did.
S Apr 15, 2024 at 9:36 history suggested CommunityBot CC BY-SA 4.0 (comment: "MathJax corrections")
Apr 15, 2024 at 7:43 comment added Graham Bornholt Unfortunately, your "correct" interpretation is not correct. Sure, pre-sampling, the random CI may cover the parameter with, say, 95% probability. However, post-sample, there is no randomness left, so the only probability attached to the sample CI is 0 or 1. From a frequentist perspective, your uncertainty about the true value of a parameter is not enough to justify any other probability value. Hence the term "confidence" is used.
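The pre-sampling coverage guarantee described in the comment above can be illustrated with a quick simulation. This is a sketch under assumed parameters (MU, SIGMA, N are illustrative, and a known-sigma interval is used for simplicity):

```python
import random
import statistics

random.seed(1)

# Illustrative parameters (assumptions, not from the thread).
MU, SIGMA, N = 10.0, 2.0, 30
Z = 1.96          # standard normal quantile for a 95% interval
trials = 20_000

covered = 0
for _ in range(trials):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    m = statistics.fmean(sample)
    half = Z * SIGMA / N ** 0.5   # known-sigma half-width, for simplicity
    if m - half <= MU <= m + half:
        covered += 1

print(round(covered / trials, 3))  # long-run coverage, ~0.95
```

The 95% is a property of the procedure over repeated samples; each individual realized interval either contains MU or it does not, which is the pre-sampling vs post-sample distinction being made.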
Apr 15, 2024 at 4:37 review Suggested edits S Apr 15, 2024 at 9:36
Apr 14, 2024 at 23:07 history answered jginestet CC BY-SA 4.0