By definition, for a random variable $X$, $Var(X) = E[(X-E[X])^2] = \sum_{x} (x - E[X])^2 P(X=x)$.
This formula can be rewritten in an often more convenient form: $Var(X) = E[X^2] - E^2[X]$.
The derivation is as follows:
$$
\begin{aligned}
Var(X) &= \sum_{x} (x - E[X])^2 P(X=x) \\
&= \sum_{x} \left(x^2 - 2xE[X] + E^2[X]\right) P(X=x) \\
&= \sum_{x} x^2 P(X=x) - 2E[X] \sum_{x} x P(X=x) + E^2[X] \sum_{x} P(X=x) \\
&= E[X^2] - 2E^2[X] + E^2[X] \\
&= E[X^2] - E^2[X],
\end{aligned}
$$
where the last step but one uses $\sum_{x} x P(X=x) = E[X]$ and $\sum_{x} P(X=x) = 1$.
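The identity can be checked numerically. The sketch below uses an arbitrary, hypothetical discrete distribution (the values and probabilities are just illustrative) and computes the variance both from the definition and from the shortcut formula:

```python
# Hypothetical discrete distribution, chosen only for illustration.
values = [1.0, 2.0, 5.0]
probs = [0.2, 0.5, 0.3]  # must sum to 1

e_x = sum(x * p for x, p in zip(values, probs))      # E[X]
e_x2 = sum(x**2 * p for x, p in zip(values, probs))  # E[X^2]

# Definition: sum over x of (x - E[X])^2 * P(X = x)
var_def = sum((x - e_x)**2 * p for x, p in zip(values, probs))

# Shortcut: E[X^2] - E[X]^2
var_shortcut = e_x2 - e_x**2

# Both computations agree (up to floating-point rounding).
assert abs(var_def - var_shortcut) < 1e-12
```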
For a normal distribution with mean (expectation) $\mu = \beta x_i$ and variance $\sigma^2$ (where $\sigma$ is the standard deviation), the expression you mention is not true in general.
To see this, let us attempt a proof by contradiction.
Assume the statement holds: $Var(y_i) = E[y_i^2]$.
Combining this with $Var(y_i) = E[y_i^2] - E^2[y_i]$ gives $E^2[y_i] = 0$, which implies $E[y_i] = 0$.
However, we know that $E[y_i] = \beta E[x_i]$, which follows from the formula for $y_i$ and the linearity of expectation (and reduces to $\beta x_i$ when $x_i$ is fixed). So unless $\beta$ or $E[x_i]$ equals zero, we have a contradiction, and the statement does not hold.
The only case in which your statement is true is when the expectation of the random variable is indeed $0$, i.e. when its probability distribution is centered at $0$.