I am new to statistics, so please bear with my question. There are some similar questions to mine, but I didn't get a clear answer from reading them. I have a program that simulates coin tossing: the user enters a guess, and the program determines whether that guess is correct. Each guess is treated as the null hypothesis. For example, the user enters: "the number of heads in 1000 coin flips is 400, assuming the probability of heads on each flip is 0.5." The program then computes the lower and upper critical values at a significance level of 0.05 and tells the user whether their guess is rejected. Here is the approach I can think of to code my program:
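To make the critical values concrete, here is a minimal sketch (my own, not from any particular library) of how the lower and upper bounds could be computed exactly from the binomial distribution under the null hypothesis, using only the Python standard library. The function name `critical_values` and its interface are assumptions for illustration:

```python
import math

def critical_values(n, p=0.5, alpha=0.05):
    """Two-sided acceptance region [lo, hi] for X ~ Binomial(n, p):
    fail to reject H0 when lo <= heads <= hi.
    lo is the smallest k with P(X <= k) > alpha/2;
    hi is the largest k with P(X >= k) > alpha/2."""
    # exact binomial pmf via math.comb (Python >= 3.8)
    pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    cdf = []
    total = 0.0
    for q in pmf:
        total += q
        cdf.append(total)
    lo = next(k for k in range(n + 1) if cdf[k] > alpha / 2)
    hi = next(k for k in range(n, -1, -1)
              if (1 - (cdf[k - 1] if k > 0 else 0.0)) > alpha / 2)
    return lo, hi

lo, hi = critical_values(1000)
print(lo, hi)  # roughly 469 and 531 for a fair coin and n = 1000
```

With these bounds, a guess of 400 heads in 1000 flips would fall below the lower critical value and be rejected.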
The program receives the number of flips from the user (1000 in this example), runs the 1000-flip experiment 100 separate times, and, after all experiments are done, calculates the mean number of heads:
$mean = \frac{1}{100} \sum_{i=1}^{100} a_i$
where $a_i$ is the number of heads in the $i$-th run of 1000 flips. Finally, if the calculated mean falls between the lower and upper critical values, we fail to reject the user's guess; otherwise it is rejected.
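The procedure above can be sketched as follows. This is only an illustration of the described steps, not a judgment on whether they are statistically sound; the function name `run_experiment` and the critical values plugged in at the end are assumptions (the bounds are placeholders for whatever the program computes for 1000 flips at a 0.05 significance level):

```python
import random

def run_experiment(n_flips=1000, n_repeats=100, p=0.5, seed=None):
    """Repeat the n_flips coin-toss experiment n_repeats times
    and return the mean head count across the repeats."""
    rng = random.Random(seed)
    counts = [sum(rng.random() < p for _ in range(n_flips))
              for _ in range(n_repeats)]
    return sum(counts) / n_repeats

# placeholder critical values for 1000 flips at alpha = 0.05,
# assumed to be precomputed elsewhere in the program
lower, upper = 469, 531

mean_heads = run_experiment(seed=42)
if lower <= mean_heads <= upper:
    print("fail to reject the user's guess")
else:
    print("reject the user's guess")
```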
Now I wonder: is this the correct way to do it?
Thanks.