  • I think the term "batch GD" is sometimes used to refer to "mini-batch GD", and "stochastic GD" may likewise refer to "mini-batch GD". My impression is that these terms are used inconsistently, so readers should take context into account. Commented Jul 31, 2021 at 19:57
  • @Yahya, when you say for batch GD that "we take the average of all the training data", do you mean that the loss is aggregated over all independent training samples and averaged, so that a single gradient of the model parameters, obtained by backpropagation, produces a single update? Is that correct? Could you also include in your answer the analogous description of how this is done in mini-batch GD? Commented Jan 31, 2023 at 2:47
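
To make the contrast raised in these comments concrete, here is a minimal sketch (not from the original post) of the two update rules for linear regression with a mean-squared-error loss; the data, learning rate, and batch size are illustrative assumptions. Batch GD averages the gradient over every training sample before making one parameter update per epoch, while mini-batch GD makes one such averaged update per mini-batch.

```python
import numpy as np

def gradient(w, X, y):
    """Average gradient of the MSE loss over the rows of (X, y).
    The constant factor 2 from differentiating the square is
    absorbed into the learning rate."""
    residuals = X @ w - y               # shape: (n_samples,)
    return X.T @ residuals / len(y)     # average over the samples used

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # hypothetical training inputs
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000)
w = np.zeros(5)
lr = 0.1

# Batch GD: one update per epoch, gradient averaged over ALL samples.
w_batch = w - lr * gradient(w, X, y)

# Mini-batch GD: several updates per epoch, each averaging the
# gradient over one mini-batch of the data.
batch_size = 32
w_mini = w.copy()
for start in range(0, len(y), batch_size):
    Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
    w_mini = w_mini - lr * gradient(w_mini, Xb, yb)
```

Setting `batch_size = 1` in the loop above recovers stochastic GD in its strict one-sample-per-update sense, which is one reason the terms blur together in practice.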