  • $\begingroup$ It would help folks if you wrote out the optimization problem you're solving, the gradient, as well as the error distribution (and how they're combined) contributing to the gradient in both the approx and non-approx forms. $\endgroup$ Commented Sep 13, 2016 at 18:10
  • $\begingroup$ @MarkL.Stone thanks. Are you asking me to revise the question, or giving me clues toward the answer? $\endgroup$ Commented Sep 13, 2016 at 18:13
  • $\begingroup$ Suggesting you edit the question to make the math clear without going through code. I have lots of hints, but am busy, so will probably leave the answer to others. $\endgroup$ Commented Sep 13, 2016 at 18:15
  • $\begingroup$ @hxd1011 another thing to note is that for SGD you are updating the residual $r=Ax-b$ every time you update $x$ over the SGD epoch, so SGD explores different gradients on the curved error surface. In this way, SGD vs. BGD is reminiscent of Gauss-Seidel vs. Jacobi iteration, two basic iterative methods for linear systems. When applied to the normal equations in your linear LSQR case, this comparison may be exact ($\Delta x$ update directions for SGD=Gauss-Seidel and for BGD=Jacobi). This is discussed in sec 8.2 of this book. $\endgroup$ Commented Sep 13, 2016 at 22:52
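The SGD-vs-BGD distinction in the last comment can be made concrete with a small sketch (not from the thread; the matrix size, step size, and epoch count are illustrative assumptions). Batch GD computes one gradient from the full residual $r = Ax - b$ per step, like Jacobi iteration using only the old iterate; cyclic SGD recomputes each row's residual with the current $x$, so later rows in an epoch see earlier updates, like Gauss-Seidel:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true  # consistent system, so both methods can converge to x_true

def bgd(A, b, lr=1e-2, epochs=500):
    # Batch GD: residual r = Ax - b computed once per step from the
    # current iterate, then one full-gradient update (Jacobi-like).
    x = np.zeros(A.shape[1])
    for _ in range(epochs):
        r = A @ x - b          # full residual, fixed for this step
        x -= lr * A.T @ r      # full gradient step
    return x

def sgd(A, b, lr=1e-2, epochs=500):
    # Cyclic SGD: each row's residual uses the up-to-date x, so updates
    # within an epoch build on each other (Gauss-Seidel-like).
    x = np.zeros(A.shape[1])
    for _ in range(epochs):
        for i in range(A.shape[0]):
            r_i = A[i] @ x - b[i]   # per-row residual with current x
            x -= lr * r_i * A[i]    # single-row gradient step
    return x

print(np.linalg.norm(bgd(A, b) - x_true))
print(np.linalg.norm(sgd(A, b) - x_true))
```

Both reach the least-squares solution here; the point is only that SGD's per-row residual updates explore different gradient directions within an epoch, as the comment describes.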