Does the multiple hypothesis testing problem apply to the calculation of an F statistic for joint significance? It seems to me that the more variables you include in your test for joint significance, the greater the chance that at least one of the underlying tests produces a false positive, right?
If so:
- For all intents and purposes, is it really that big of a deal?
- Is there a way to get around this?
EDIT: The responses make me think that I am misunderstanding either the multiple hypothesis testing problem or F tests. Here's the conflict in my head, which may be incorrect.
My understanding of the multiple hypothesis testing problem is this: our alpha level (e.g., $\alpha = 0.05$) is our accepted Type I error rate, which is sort of a theoretical concept. If we are testing multiple hypotheses simultaneously, like $H_1: x_1 = 0$ and $H_2: x_2 = 0$, then we are implicitly testing the joint null $H_0: x_1 = 0 \text{ and } x_2 = 0$, right? In which case we would add $0.05 + 0.05$ to get the probability of making a Type I error for the joint test, right?
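To make the arithmetic I'm doing explicit (assuming the two individual tests are independent and each run at $\alpha = 0.05$):

$$
P(\text{at least one Type I error}) = 1 - (1 - 0.05)^2 \approx 0.0975,
$$

which is roughly the $0.05 + 0.05 = 0.10$ I get by adding; the sum is the upper bound in general.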
And my assumption is that, theoretically, this is what an F test does. So if you are running an F test on 20 variables, for example, then you are theoretically guaranteed to get a Type I error, right?
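Here is a small simulation sketch of the scenario I have in mind, written with numpy and statsmodels (the sample size, seed, and number of simulations are just placeholders I picked): regress pure noise on 20 pure-noise regressors many times and record how often the overall F test rejects at $\alpha = 0.05$.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, k, n_sims = 200, 20, 2000   # observations, regressors, simulated data sets
rejections = 0

for _ in range(n_sims):
    # All 20 regressors and y are independent noise, so the joint null is true
    X = rng.standard_normal((n, k))
    y = rng.standard_normal(n)
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    # f_pvalue is the p-value of the overall F test of joint significance
    if fit.f_pvalue < 0.05:
        rejections += 1

print(f"Empirical Type I error of the joint F test: {rejections / n_sims:.3f}")
```

If my reasoning above were right, this fraction should be close to 1 for 20 variables; if the F test controls the error rate for the joint null, it should stay close to 0.05.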
Or, now that I think about it, perhaps I am understanding Type I error incorrectly. Any help would be appreciated.