- It's worth mentioning, in view of Lindley's paradox, that your inferences here are going to depend a lot on your prior. The uniform prior makes it really hard to reject when the true probability is close to, but not quite, 1/2, which is probably the case with an actual coin. – guy, Aug 26, 2018 at 20:14
- True, but it is inherently difficult to distinguish parameter values that are close together, and this naturally requires a lot of data. So I don't really see that as a drawback of the method; it is just a natural aspect of statistics. – Ben, Aug 27, 2018 at 0:37
- It's hard to distinguish points that are close together, sure. But Bayesian inference under a uniform prior diverges drastically from frequentist inference in this particular case. I can decrease the Bayes factor in favor of the alternative by a factor of 5 just by considering a Uniform(0.4, 0.6) prior under the alternative rather than a Uniform(0, 1); this applies in settings where the Uniform(0.4, 0.6) prior and the Uniform(0, 1) prior result in essentially the same inference about $\pi$ when you don't include a point mass at $1/2$. – guy, Aug 27, 2018 at 2:11
- Let me see if I can dumb down your answer a bit: we basically consider not just the probability of observing our result under the null hypothesis; we also consider the probability of observing the result we got given a whole range of heads probabilities from 0 to 1. – moonman239, Sep 15, 2018 at 2:59
- @moonman239: Yes, that is the essence of Bayesian analysis: we have an unknown probability of heads, represented by a parameter $\theta$, and we give this a prior distribution and then determine the posterior from the data. In hypothesis testing this generally entails giving a distribution under specific values, not just the general alternative hypothesis. – Ben, Sep 15, 2018 at 3:39
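
To make the prior-sensitivity point in these comments concrete, here is a minimal sketch in Python (not from the original answer). It computes the Bayes factor for the point null $\theta = 1/2$ against a continuous prior on $\theta$ under the alternative, once with the Uniform(0, 1) prior and once with the Uniform(0.4, 0.6) prior mentioned above. The flip counts `n` and `k` are made up for illustration, and the exact factor by which the two priors disagree depends on the data.

```python
# Sketch: Bayes factors for a point null theta = 1/2 versus a
# continuous prior on theta under the alternative, for binomial data.
import numpy as np
from scipy.special import betaln
from scipy.stats import beta

n, k = 1000, 527  # hypothetical data: 527 heads in 1000 flips

# Log marginal likelihood under H0: theta = 1/2 exactly.
log_m0 = k * np.log(0.5) + (n - k) * np.log(0.5)

# Log marginal likelihood under H1 with a Uniform(0, 1) prior:
# integral of theta^k (1 - theta)^(n - k) dtheta = Beta(k + 1, n - k + 1).
log_m1_flat = betaln(k + 1, n - k + 1)

# Log marginal likelihood under H1 with a Uniform(0.4, 0.6) prior:
# (1 / 0.2) times the same integral restricted to (0.4, 0.6),
# evaluated via the regularized incomplete beta function (beta.cdf).
mass = beta.cdf(0.6, k + 1, n - k + 1) - beta.cdf(0.4, k + 1, n - k + 1)
log_m1_narrow = betaln(k + 1, n - k + 1) + np.log(mass) - np.log(0.2)

print("BF10, Uniform(0, 1) prior:    ", np.exp(log_m1_flat - log_m0))
print("BF10, Uniform(0.4, 0.6) prior:", np.exp(log_m1_narrow - log_m0))
```

Because the binomial likelihood is conjugate to the beta family, both marginal likelihoods have closed forms via the (incomplete) Beta function, so no numerical integration is needed. Running this with data generated near $\theta = 1/2$ typically shows the two priors yielding Bayes factors that differ by a substantial multiplicative factor, which is the sensitivity to the prior that the first comment attributes to Lindley's paradox.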