
  • $\begingroup$ After reading all the other answers, this answer made it "click" for me! You train with the training set, check with the validation set that you're not overfitting (and that the model and hyperparameters work on "unknown data"), and then assess with the test set, the "new data", whether you actually have any predictive power. $\endgroup$ Commented Mar 15, 2017 at 22:17
  • $\begingroup$ This is a fair way to look at it, in the sense that the test data should never be part of the training process: if we treat it as "future" data, then that becomes an impossible mistake to make. $\endgroup$ Commented Mar 20, 2019 at 4:03
  • $\begingroup$ This is the best answer, IMO. It gives the core reason behind all of this. $\endgroup$ Commented Apr 5, 2021 at 8:48
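The workflow described in these comments can be sketched as a three-way split. This is a minimal illustration with a made-up toy dataset; the split sizes and all names are hypothetical, not taken from the answer:

```python
import random

random.seed(0)

# Hypothetical toy dataset: 100 (x, y) pairs with a little noise.
data = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(100)]
random.shuffle(data)

# Three disjoint splits: the model only ever "sees" the training set
# while fitting; the validation set guides model/hyperparameter choices
# and flags overfitting; the test set stays untouched until the very
# end, acting as "new data" for the final performance estimate.
train = data[:60]         # fit model parameters here
validation = data[60:80]  # tune hyperparameters / check for overfitting
test = data[80:]          # evaluate exactly once, at the end

assert len(train) + len(validation) + len(test) == len(data)
```

Treating the test split as "future" data, as the second comment suggests, makes it structurally impossible to leak it into training.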