  • But in backpropagation (neural networks), the derivative also equals zero when the expected output equals the target, as in this equation (non-linear sigmoid function): visualstudiomagazine.com/articles/2017/06/01/~/media/ECG/… Does this case also have a single global minimum? Commented Jul 27, 2020 at 10:07
  • Having output = target for each term in the sum is not the only way that the sum can equal zero (see the sketch after these comments). Commented Jul 27, 2020 at 10:14
  • Suppose training on a single example leads to a single minimum; then in stochastic gradient descent there would be no local minima! Commented Jul 27, 2020 at 10:42
  • $\begingroup$ A single example can't be used for training if there's more than one parameter in the model... $\endgroup$ Commented Jul 29, 2020 at 5:56
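A minimal numerical sketch of the second comment's point, under assumptions not in the thread: a hypothetical one-parameter model y = sigmoid(w·x) and two made-up training pairs that share an input but have different targets. At w = 0 the per-example gradient terms cancel, so the gradient of the summed squared error is zero even though output ≠ target for either example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical one-parameter model: y = sigmoid(w * x).
# Two made-up examples share the input x = 1 but have different targets,
# so no single w can make output = target for both simultaneously.
x = np.array([1.0, 1.0])
t = np.array([0.3, 0.7])

def loss_grad(w):
    y = sigmoid(w * x)
    # d/dw of the sum of squared errors: for each example,
    # 2 * (y - t) * sigmoid'(w*x) * x, with sigmoid'(z) = y * (1 - y).
    return np.sum(2.0 * (y - t) * y * (1.0 - y) * x)

# At w = 0, sigmoid(0) = 0.5, so the residuals are +0.2 and -0.2:
# the per-example gradient terms cancel and the total gradient is zero,
# even though output != target for every example.
print(loss_grad(0.0))    # 0.0
print(sigmoid(0.0) - t)  # [ 0.2 -0.2]
```

The same cancellation is why a vanishing gradient of the total loss does not imply that every individual error term is zero, which is the distinction the comments are debating.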