  • $\begingroup$ So the variance of a layer is just kind of a measure of how "big" the values in the layer are? $\endgroup$ Commented Oct 12, 2020 at 20:48
  • $\begingroup$ Yes it is, but we have to assume that the mean of our variable of interest is $0$ (which is normally the case at the moment of initialization of the weights). Just for visualization purposes, imagine that $Var(x_1) = 2\,Var(x_2)$; then if $\bar{x}_1=\bar{x}_2=0$, the magnitude of $x_1$ (measured by its absolute value) tends to be bigger because it is more spread around the value $0$. $\endgroup$ Commented Oct 12, 2020 at 20:55
  • $\begingroup$ In the case where the mean $\neq 0$, we could no longer treat the variance as a measure of how big the values in the layers are. Imagine a big mean value: then even a big variance can go together with small values, so the variance alone says little about magnitude. But whatever the mean is, if layers with the same variance also share the same mean, then we would have similar rhythms of learning, as we saw in the post (see the small numerical sketch after these comments). $\endgroup$ Commented Oct 12, 2020 at 21:02
  • $\begingroup$ Beautiful Answer ❤️ +1! Kudos! $\endgroup$ Commented Oct 13, 2020 at 8:25
  • $\begingroup$ Would you be able to say a bit more about why we care about the variances of the activations? I can understand that the variances of the differentials are directly related to unstable gradients, but in many sources, such as here and in the articles linked to in the question, people focus a lot on the variance of the activations (the outputs of the layers). $\endgroup$ Commented Oct 26, 2020 at 22:26
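
The following is a minimal numerical sketch (not part of the original thread) illustrating the two comments above: with zero mean, a layer's variance tracks the typical magnitude of its values, while with a large shared mean it no longer does. The variable names and the chosen means/variances are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Zero-mean case: Var(x1) = 2 * Var(x2), both means are 0.
x1 = rng.normal(loc=0.0, scale=np.sqrt(2.0), size=n)   # Var = 2
x2 = rng.normal(loc=0.0, scale=1.0, size=n)            # Var = 1
print(np.mean(np.abs(x1)), np.mean(np.abs(x2)))        # ~1.13 vs ~0.80: x1 tends to be "bigger"

# Nonzero-mean case: same variances, but a large shared mean dominates the
# magnitude, so the variance alone no longer tells us how big the values are.
y1 = rng.normal(loc=10.0, scale=np.sqrt(2.0), size=n)
y2 = rng.normal(loc=10.0, scale=1.0, size=n)
print(np.mean(np.abs(y1)), np.mean(np.abs(y2)))        # both ~10: similar magnitudes
```

In the zero-mean case the expected absolute value is proportional to the standard deviation ($\mathbb{E}|X| = \sigma\sqrt{2/\pi}$ for a Gaussian), which is why variance acts as a proxy for "how big" the values are at initialization; once the mean dominates, that proxy breaks down.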