I don't understand why the standard error of the mean does not depend on the number of samples of the mean that you take. To clarify, let's use a simplified version of the example in this answer. Two samples, of size 10 and 100 respectively, are taken from the same set of values and the mean of each sample is computed. We expect those means to be different, and the standard error describes the variability in those estimates of the true population mean. My assumption was that if you took 100 samples of varying size, you'd get 100 estimates of the mean, which would reduce your standard error. But the number of samples never appears in the standard error formula: $\sigma/\sqrt{n}$.
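To make my confusion concrete, here is a quick simulation sketch (the normal population, its parameters, and the numbers of repetitions are arbitrary choices of mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0  # made-up population mean and standard deviation

for n in (10, 100):          # the two sample sizes from the example
    for m in (100, 10_000):  # how many samples of that size are drawn
        # draw m independent samples of size n and compute each sample's mean
        means = rng.normal(mu, sigma, size=(m, n)).mean(axis=1)
        print(f"n={n:3d}  m={m:6d}  sd of means={means.std(ddof=1):.3f}  "
              f"sigma/sqrt(n)={sigma / np.sqrt(n):.3f}")
```

My intuition says the spread of `means` should shrink as `m` grows, but the formula says it should stay close to $\sigma/\sqrt{n}$ no matter how large `m` is, and that is exactly what I don't get.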
In terms of intuition, the standard error of the mean describes the variability of the estimated mean. But if you compute the mean over the whole sample instead of over several sub-samples, then as I understand it you only have one estimate. What does the variability of a single value represent?
In terms of maths, let's call $m$ the number of samples (2 in the example above) and $n_i$ the size of sample $i$ (10 and 100 in the example), with $i \in [1,m]$. Let's also call the total sample size $n$, with $n = \sum_{i=1}^m n_i$. Using the formula for the standard deviation, I would have written the standard error, i.e., the standard deviation of the mean, as:
$$SE = \sqrt{ \frac{\sum_{i=1}^m (\bar{x}_i - \bar{x})^2}{m}} $$
with $\bar{x}_i = \frac{\sum_{j = 1}^{n_i} x_j}{n_i}$ the mean of sample $i$.
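For reference, the usual i.i.d. argument (assuming the $x_j$ are independent with common variance $\sigma^2$) gives the spread of a single sample mean without ever mentioning $m$:

$$\operatorname{Var}(\bar{x}_i) = \operatorname{Var}\!\left(\frac{1}{n_i}\sum_{j=1}^{n_i} x_j\right) = \frac{1}{n_i^2}\sum_{j=1}^{n_i}\operatorname{Var}(x_j) = \frac{\sigma^2}{n_i} \quad\Rightarrow\quad SE(\bar{x}_i) = \frac{\sigma}{\sqrt{n_i}}.$$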
But I can't figure out how to get from my formula above to the usual $\sigma/\sqrt{n}$, nor why $m$ does not appear in the final formula. This answer clarifies some of the math, but it does not go all the way to showing whether $\frac{1}{N}(\frac{\sigma^2}{n} + \sigma_G^2)$ is actually equal to $\sigma^2/n$ (the square of the standard error). Considering that the mean of the whole sample is not necessarily equal to the mean of the sub-sample means, I'm thinking maybe the two aren't actually equal. But at that point I'm a bit lost haha.
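To illustrate that last point with made-up numbers (again just a sketch, using an arbitrary normal population):

```python
import numpy as np

rng = np.random.default_rng(1)
small = rng.normal(5.0, 2.0, size=10)   # sub-sample of size 10
large = rng.normal(5.0, 2.0, size=100)  # sub-sample of size 100

overall_mean  = np.concatenate([small, large]).mean()
mean_of_means = (small.mean() + large.mean()) / 2               # unweighted
weighted_mean = (10 * small.mean() + 100 * large.mean()) / 110  # weighted by size

print(overall_mean, mean_of_means, weighted_mean)
```

The size-weighted average of the sub-sample means reproduces the overall mean exactly, while the plain average of the two means generally does not, which is part of why I suspect the two expressions above aren't literally equal.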
Thank you for your help.