The reference to undergraduates suggests the most elementary possible solution is sought. I will therefore rely solely on (a) high-school level algebra and (b) basic properties of variances and covariances: namely, their bilinearity.
The point of this presentation is to demonstrate both the computational and conceptual value of choosing an appropriate way to express the variables: namely, to standardize them to unit variance and (in the case of the explanatory variables) to zero mean. This enables you to see what the solution must be before you do any algebraic work at all. It also leads to effective algorithms in case you need to code a solution.
Begin with a basic operation: standardizing the variables. Changing the units of measure, express the variables as $y_i = \sigma\eta_i$ (thereby rescaling the error terms in the model to have unit variance) and $x_i = m + s \xi_i$ (rescaled and centered), where $$0 = n\bar \xi = \sum_i \xi_i$$ and $$\sum_i \xi_i^2 = n.$$ As usual, we find
$$m=\bar x;\ s^2 = \frac{1}{n}\sum_i (x_i-\bar x)^2;\ \text{and }\xi_i = \frac{x_i - \bar x}{s}.\tag{*}$$
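As a quick numerical sketch (with hypothetical data), the standardization $(*)$ and its two defining identities can be checked directly:

```python
import numpy as np

# Hypothetical explanatory values, for illustration only.
x = np.array([1.0, 2.0, 4.0, 5.0, 8.0])
n = len(x)

# The standardization (*): m is the mean, s the (population) standard
# deviation, and xi the centered and rescaled explanatory values.
m = x.mean()
s = np.sqrt(((x - m) ** 2).mean())
xi = (x - m) / s

# The defining identities: sum(xi) = 0 and sum(xi^2) = n.
print(xi.sum(), (xi ** 2).sum())
```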
In these units, and writing $\tau = 1/\sigma$ for convenience, the model is
$$\eta_i/\tau = y_i = \beta_0+\beta_1 x_i + \varepsilon_i = \beta_0 + \beta_1(m + s\xi_i) + \varepsilon_i.$$
Equivalently, multiplying both sides by $\tau$ and expanding,
$$\eta_i = \tau(\beta_0 + m\beta_1) + (\tau s \beta_1)\xi_i + \delta_i = \alpha_0 + \alpha_1 \xi_i + \delta_i,$$
where $\delta_i = \tau\varepsilon_i,$ so that $\operatorname{Var}(\delta_i) = \tau^2 \sigma^2 = 1.$
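Because the change of variables is affine, least squares commutes with it: fitting $\eta$ on $\xi$ yields estimates related to those from fitting $y$ on $x$ by the same linear map. A minimal check, using hypothetical data and an assumed error scale:

```python
import numpy as np

# Hypothetical data and an assumed error scale sigma, for illustration only.
x = np.array([1.0, 2.0, 3.0, 5.0, 7.0])
y = np.array([2.1, 2.9, 4.2, 6.0, 8.3])
sigma = 1.3
tau = 1.0 / sigma

# Standardize as in (*): center and rescale x, rescale y.
m, s = x.mean(), np.sqrt(((x - x.mean()) ** 2).mean())
xi, eta = (x - m) / s, tau * y

# Ordinary least squares fits in both coordinate systems
# (polyfit returns coefficients in increasing degree order).
b0, b1 = np.polynomial.polynomial.polyfit(x, y, 1)
a0, a1 = np.polynomial.polynomial.polyfit(xi, eta, 1)

# alpha-hats are the beta-hats pushed through the affine map; and because
# xi is centered, the intercept of the standardized fit is the mean of eta.
print(np.allclose([a0, a1], [tau * (b0 + m * b1), tau * s * b1]))
print(np.isclose(a0, eta.mean()))
```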
If we employ the usual least squares solutions (not the weighted least squares solutions!) then their linearity implies
$$\tau(\hat\beta_0 + m\hat\beta_1) = \hat\alpha_0 = \bar\eta\ \text{ and }\ \tau s \hat\beta_1 = \hat\alpha_1 = \frac{\sum_i \eta_i \xi_i}{\sum_i \xi_i^2} = \frac{1}{n}\sum_i \eta_i \xi_i.$$

(The first equality restates the familiar fact that the least squares line passes through the point of averages, $\hat\beta_0 + \bar x\,\hat\beta_1 = \bar y,$ together with $\tau \bar y = \bar\eta.$)

Use these to compute the variances and covariances of the $\hat\alpha_*:$

$\operatorname{Var}(\bar\eta) = \operatorname{Var}(\hat\alpha_0) = \tau^2\color{red}{\operatorname{Var}(\hat\beta_0)} + 2\tau(\tau m) \color{red}{\operatorname{Cov}(\hat\beta_0,\hat\beta_1)} + (\tau m)^2 \color{red}{\operatorname{Var}(\hat\beta_1)}.\tag{1}$

$\operatorname{Cov}(\bar\eta,\hat\alpha_1) = \operatorname{Cov}(\hat\alpha_0,\hat\alpha_1) = \tau(\tau s)\color{red}{\operatorname{Cov}(\hat\beta_0,\hat\beta_1)} + (\tau m)(\tau s)\color{red}{\operatorname{Var}(\hat\beta_1)}.\tag{2}$

$\operatorname{Var}(n\hat\alpha_1) = (n \tau s)^2\color{red}{\operatorname{Var}(\hat\beta_1)} = \operatorname{Var}\left(\sum_i \eta_i\xi_i\right) = \color{blue}{\sum_i \sum_j \xi_i\xi_j \operatorname{Cov}(\delta_i,\delta_j)}.\tag{3}$

The left-hand sides are likewise determined by the covariances of the $\delta_i,$ because $\bar\eta = \alpha_0 + \bar\delta$ and $n\hat\alpha_1 = n\alpha_1 + \sum_i \xi_i\delta_i$ give $$\operatorname{Var}(\bar\eta) = \frac{1}{n^2}\sum_i \sum_j \operatorname{Cov}(\delta_i,\delta_j)\ \text{ and }\ \operatorname{Cov}(\bar\eta,\hat\alpha_1) = \frac{1}{n^2}\sum_i \sum_j \xi_j \operatorname{Cov}(\delta_i,\delta_j).$$

These are three simultaneous linear equations for the variances and covariances of $(\hat\beta_0,\hat\beta_1)$ (shown in red type). Moreover, they are triangular, making them straightforward to solve:

From $(3)$ we find $$\operatorname{Var}(\hat\beta_1) = \frac{1}{(n\tau s)^2}\color{blue}{\sum_i \sum_j \xi_i\xi_j \operatorname{Cov}(\delta_i,\delta_j)}.$$

With that solution in hand, from $(2)$ we find $$\operatorname{Cov}(\hat\beta_0,\hat\beta_1) = \frac{1}{\tau^2 s}\operatorname{Cov}(\bar\eta,\hat\alpha_1) - m\operatorname{Var}(\hat\beta_1).$$

At this point we have all we need, but it's worth noticing that $\operatorname{Var}(\hat\beta_0)$ can be found by plugging the two preceding solutions into $(1).$
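This triangular solution can be verified numerically against the exact sandwich covariance of the least squares estimator, $(X'X)^{-1}X'\,\Sigma\,X(X'X)^{-1}.$ The sketch below uses hypothetical values of $n,$ $x,$ $\sigma,$ and $\rho,$ with errors correlated only at adjacent observations:

```python
import numpy as np

# Hypothetical settings: sample size, error SD, adjacent-error covariance.
n, sigma, rho = 6, 2.0, 0.3
x = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 6.0])
m, s = x.mean(), np.sqrt(((x - x.mean()) ** 2).mean())
xi = (x - m) / s

# Error covariance: sigma^2 on the diagonal, rho on adjacent entries.
Sigma = sigma ** 2 * np.eye(n)
for i in range(n - 1):
    Sigma[i, i + 1] = Sigma[i + 1, i] = rho

# Exact covariance of the OLS estimates (beta-hat = A y is linear in y).
X = np.column_stack([np.ones(n), x])
A = np.linalg.inv(X.T @ X) @ X.T
exact = A @ Sigma @ A.T

# The triangular solution.  Cov(delta_i, delta_j) = Sigma_{ij} / sigma^2.
C = Sigma / sigma ** 2
ones = np.ones(n)
var_b1 = sigma ** 2 * (xi @ C @ xi) / (n * s) ** 2                    # from (3)
cov_b0b1 = sigma ** 2 * (ones @ C @ xi) / (n ** 2 * s) - m * var_b1   # from (2)
var_b0 = (sigma ** 2 * (ones @ C @ ones) / n ** 2                     # from (1)
          - 2 * m * cov_b0b1 - m ** 2 * var_b1)

print(np.allclose([var_b0, cov_b0b1, var_b1],
                  [exact[0, 0], exact[0, 1], exact[1, 1]]))  # prints True
```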
The main quantity requiring calculation is the blue double sum, which is determined by the covariance assumptions in the question: for adjacent observations, $\rho = \operatorname{Cov}(\varepsilon_i, \varepsilon_{i+1}) = \sigma^2 \operatorname{Cov}(\delta_i, \delta_{i+1}),$ while covariances at all longer lags vanish and $\operatorname{Var}(\delta_i) = 1.$ Thus
$$\color{blue}{\sum_i \sum_j \xi_i\xi_j \operatorname{Cov}(\delta_i,\delta_j)} = \sum_{i=1}^n \xi_i^2 + \left(\frac{\rho}{\sigma^2}\right) 2 \sum_{i=1}^{n-1} \xi_i \xi_{i+1} = n + 2\left(\frac{\rho}{\sigma^2}\right) \sum_{i=1}^{n-1} \xi_i \xi_{i+1}.$$
Only the final term is new to this problem: everything else is the algebra of ordinary least squares regression. Clearly, it arises from the correlations between successive observations.
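The collapse of the double sum to its diagonal and adjacent terms can be confirmed numerically. A short sketch, again with hypothetical values:

```python
import numpy as np

# Hypothetical settings, for illustration only.
n, sigma, rho = 7, 1.5, 0.4
x = np.linspace(0.0, 3.0, n) ** 2      # any explanatory values will do
m, s = x.mean(), np.sqrt(((x - x.mean()) ** 2).mean())
xi = (x - m) / s

# Cov(delta_i, delta_j): 1 on the diagonal, rho/sigma^2 on adjacent entries.
C = np.eye(n)
for i in range(n - 1):
    C[i, i + 1] = C[i + 1, i] = rho / sigma ** 2

# The blue double sum versus its closed form n + 2(rho/sigma^2) sum xi_i xi_{i+1}.
double_sum = xi @ C @ xi
closed_form = n + 2 * (rho / sigma ** 2) * (xi[:-1] * xi[1:]).sum()
print(np.isclose(double_sum, closed_form))  # prints True
```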
You can see the purported solution (presented in the question) taking shape here. The rest is merely a matter of plugging everything in from the solutions to $(1)$ - $(3)$ along with $(*)$ and doing the algebraic simplification, which can be left to the undergraduates to perform, because they have already seen it in the context of ordinary least squares ;-).