Timeline for Why can we assume that samples $X_i$'s are independent if the parameter is fixed (though unknown)?
Current License: CC BY-SA 3.0
7 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Jan 29, 2015 at 17:18 | history | suggested | jaradniemi | CC BY-SA 3.0 | capitalization |
| Jan 29, 2015 at 16:36 | review | Suggested edits | | | Jan 29, 2015 at 17:18 |
| Jan 27, 2015 at 6:39 | comment | added | Charlie Parker | I see @Sven, thanks, that makes sense now. I think I also see how things make sense in Bayesianism: there we are updating our belief about $\theta$ (which is represented as a probability), and that belief obviously depends on the data. Our uncertainty about the parameter has to change as we receive more data, otherwise this whole process is silly. Thanks! | |
| Jan 27, 2015 at 6:34 | comment | added | Sven | I think Xi'an said it before but: in the frequentist setting, there is no randomness in $\theta$. Each outcome might affect our estimate $\hat{\theta}$ of $\theta$, but $Y$ is drawn not using $\hat{\theta}$, but $\theta$. | |
| Jan 27, 2015 at 6:19 | comment | added | Charlie Parker | I think I see what you mean, but I am still confused about one thing. Is the reason that the following doesn't apply in the frequentist setting — "Each toss tells us something about the parameter $\theta$, and thereby about the probability of the next toss" — given in your last paragraph? Or why? It seems like a sentence that holds true regardless of which view you hold, no? Every coin toss affects our estimate of $\theta$. Also, why does that sentence hold even more in a Bayesian setting? | |
| Jan 27, 2015 at 5:57 | history | edited | Yair Daon | CC BY-SA 3.0 | summary |
| Jan 27, 2015 at 5:47 | history | answered | Yair Daon | CC BY-SA 3.0 |
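The distinction debated in the comments above can be written out explicitly. This is a sketch of the standard formulation, not taken verbatim from the answer: conditional on a fixed parameter $\theta$, the joint density of the samples factorizes, so the $X_i$ are independent; under a Bayesian prior $\pi(\theta)$, the samples are only conditionally independent given $\theta$, and marginally they are dependent, which is why each toss informs the next.

```latex
% Frequentist view: \theta is a fixed (though unknown) constant,
% so the joint density factorizes and the X_i are independent:
p(x_1, \dots, x_n \mid \theta) = \prod_{i=1}^{n} p(x_i \mid \theta).

% Bayesian view: integrating out \theta against a prior \pi(\theta)
% couples the samples, so marginally they are NOT independent:
p(x_1, \dots, x_n)
  = \int \prod_{i=1}^{n} p(x_i \mid \theta)\, \pi(\theta)\, d\theta
  \neq \prod_{i=1}^{n} p(x_i).

% Hence "each toss tells us about the next": the predictive density of
% x_{n+1} is averaged over the posterior, which the data have updated:
p(x_{n+1} \mid x_1, \dots, x_n)
  = \int p(x_{n+1} \mid \theta)\, \pi(\theta \mid x_1, \dots, x_n)\, d\theta.
```

In the frequentist setting the data update only the estimate $\hat{\theta}$, not the distribution generating the next draw, which still uses the fixed $\theta$; in the Bayesian setting the predictive distribution itself changes with each observation.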