I noticed that, in simple linear regression, the variances of the slope and intercept estimators involve no $y_i$'s at all. While I can follow the calculation, I am wondering whether there is a simple intuitive argument for this. I assume the key is that the errors are i.i.d. with a fixed variance regardless of $x$ or $y$, but I suspect there is something deeper that I am missing. It seems like there ought to be a calculation-free, logical explanation of this.
The model is
\begin{equation} y_i = \theta_0 + \theta_1x_i + \epsilon_i, \end{equation}
where the errors $\epsilon_i$ are i.i.d. with mean zero and variance $\sigma^2$, and the variances in question are
\begin{equation} \operatorname{Var}[\hat{\theta}_0]=\frac{\sigma^2\sum\limits_{i=1}^{n}x_i^2}{n\sum\limits_{i=1}^{n}x_i^2-\left(\sum\limits_{i=1}^{n}x_i\right)^2}, \end{equation}
\begin{equation} \operatorname{Var}[\hat{\theta}_1]=\frac{n\sigma^2}{n\sum\limits_{i=1}^{n}x_i^2-\left(\sum\limits_{i=1}^{n}x_i\right)^2}. \end{equation}
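For concreteness, here is a small Monte Carlo check (a sketch using NumPy; the design points, parameter values, and replication count are arbitrary choices of mine, not part of the question) that the formulae above, which depend only on the $x_i$'s and $\sigma^2$, do match the empirical variances of the estimators over repeated draws of the $y_i$'s with the $x_i$'s held fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed design: the same x_i's are used in every replication.
n = 20
x = np.linspace(0.0, 10.0, n)
theta0, theta1, sigma = 2.0, 0.5, 1.5

# Closed-form variances from the formulae above (y-free).
Sxx = n * np.sum(x**2) - np.sum(x) ** 2
var_intercept_formula = sigma**2 * np.sum(x**2) / Sxx
var_slope_formula = n * sigma**2 / Sxx

# Monte Carlo: draw many y-vectors and refit OLS each time.
reps = 100_000
Y = theta0 + theta1 * x + sigma * rng.standard_normal((reps, n))

# OLS estimates for every replication at once:
# slope = (n * sum(x_i y_i) - sum(x_i) sum(y_i)) / Sxx
slopes = (n * (Y * x).sum(axis=1) - x.sum() * Y.sum(axis=1)) / Sxx
intercepts = Y.mean(axis=1) - slopes * x.mean()

print("slope:     formula", var_slope_formula, "empirical", slopes.var())
print("intercept: formula", var_intercept_formula, "empirical", intercepts.var())
```

The empirical variances agree with the closed-form values up to Monte Carlo error, even though each individual fit of course uses the $y_i$'s.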