In the Monobit Test for randomness, the pass/fail threshold is based on a confidence interval. I understand that a 1% significance level (99% confidence) results in a larger threshold than a 5% significance level (95% confidence) does.
However, this seems counterintuitive: since higher confidence should mean less deviation from the expected 50:50 ratio of ones to zeros, why does the test allow a larger difference between the number of ones and zeros? Shouldn't a stricter test (lower alpha) demand a smaller deviation instead?
Could someone clarify how the threshold is mathematically set and why this behavior occurs?
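For reference, here is my current understanding of how the threshold is set (assuming the NIST SP 800-22 frequency/monobit formulation, where $n$ is the number of bits tested):

$$S_n = \#\text{ones} - \#\text{zeros}, \qquad s_{\text{obs}} = \frac{|S_n|}{\sqrt{n}}$$

$$\text{pass at significance level } \alpha \iff s_{\text{obs}} \le z_{1-\alpha/2} \iff |S_n| \le z_{1-\alpha/2}\,\sqrt{n}$$

with $z_{0.975} \approx 1.96$ (for $\alpha = 0.05$) and $z_{0.995} \approx 2.576$ (for $\alpha = 0.01$).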
What I Tried: I reviewed the Monobit Test formula, which sets the threshold based on the Z-score corresponding to the chosen confidence level. I also looked into how the critical values for 95% and 99% confidence are derived from the normal distribution.
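To make the comparison concrete, here is a minimal sketch of the check I did, assuming the formulation above. It uses Python's `statistics.NormalDist` for the normal quantile; `n = 20_000` is just an illustrative sequence length, not a value taken from any standard.

```python
from math import sqrt
from statistics import NormalDist


def monobit_threshold(n: int, alpha: float) -> float:
    """Largest |#ones - #zeros| that still passes at significance level alpha.

    Under the null hypothesis (unbiased bits), S = #ones - #zeros is
    approximately N(0, n), so the two-sided critical value for |S| is
    z_{1 - alpha/2} * sqrt(n).
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical z-score
    return z * sqrt(n)


n = 20_000  # illustrative length only (hypothetical, not from the question)
for alpha in (0.05, 0.01):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"alpha={alpha}: z={z:.3f}, allowed |ones - zeros| <= {monobit_threshold(n, alpha):.1f}")
```

Running this, alpha = 0.05 allows roughly |ones - zeros| <= 277 while alpha = 0.01 allows roughly <= 364 for n = 20000, so the 1% level really does tolerate the larger imbalance, which is exactly the behavior I'm asking about.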
What I Expected: I expected that a higher confidence level (99%) would result in a stricter test, meaning a smaller allowed difference between the number of ones and zeros.
What Actually Happened: Instead, I found that the higher confidence level (99%) allows a larger difference between ones and zeros than 95% does. This seems counterintuitive because I assumed that a more stringent test should tolerate less deviation. I'd like clarification on why this happens mathematically.