
  • 1
    $\begingroup$ So you don't advocate cross-validation through splitting of large data-sets for predictive model testing / validation? $\endgroup$ Commented Dec 15, 2014 at 3:42
  • 12
    $\begingroup$ No, unless the dataset is huge or the signal:noise ratio is high. Cross-validation is not as precise as the bootstrap in my experience, and it does not use the whole sample size. In many cases you have to repeat cross-validation 50-100 times to achieve adequate precision (a sketch of repeated cross-validation follows these comments). But if your dataset has > 20,000 subjects, simple approaches such as split-sample validation are often OK. $\endgroup$ Commented Dec 15, 2014 at 4:17
  • 2
    $\begingroup$ That's really good to know! Thanks. And coming from you, that's a great "source" of info. Cheers! $\endgroup$ Commented Dec 15, 2014 at 4:43
  • 3
    $\begingroup$ Split-sample validation often performs worse than rigorous bootstrapping. Create an outer bootstrap loop that repeats all supervised learning steps (all steps that use Y). The Efron-Gong optimism bootstrap estimates how much the predictive model falls apart in data not seen by the algorithm, without holding back data (see the sketch after these comments). $\endgroup$ Commented Dec 5, 2018 at 0:06
  • 2
    $\begingroup$ Yes, with emphasis on repeating. It's the single split that is problematic. $\endgroup$ Commented May 28, 2019 at 17:56
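
The repeated cross-validation mentioned in the comments could be sketched as below. This is a minimal illustration only: the synthetic dataset, logistic model, AUC metric, and the choice of 10 folds repeated 50 times are assumptions, not anything specified in the thread.

```python
# Minimal sketch of repeated k-fold cross-validation (an assumed setup, not the
# commenter's actual analysis): 10-fold CV repeated 50 times so the performance
# estimate is averaged over many random splits rather than a single split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=50, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       scoring="roc_auc", cv=cv)

print(f"mean AUC = {aucs.mean():.3f}, SD across folds/repeats = {aucs.std():.3f}")
```

Averaging over the 500 held-out folds is what reduces the Monte Carlo noise that makes a single split unreliable, which is the "emphasis on repeating" point above.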
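
The Efron-Gong optimism bootstrap described in the later comments might look roughly like this. The logistic model, the AUC metric, and the 300 bootstrap repetitions are illustrative assumptions; the key idea from the comment is that every supervised-learning step is repeated inside the bootstrap loop and no data are held back.

```python
# Hedged sketch of the Efron-Gong optimism bootstrap (assumed model/metric/data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

def fit_and_auc(X_train, y_train, X_eval, y_eval):
    """Repeat every supervised step on the training data, then score on the eval data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

# Apparent performance: fit and evaluate on the same (full) sample
apparent = fit_and_auc(X, y, X, y)

rng = np.random.default_rng(0)
optimism = []
for _ in range(300):                         # number of bootstrap repetitions (assumption)
    idx = rng.integers(0, len(y), len(y))    # resample rows with replacement
    Xb, yb = X[idx], y[idx]
    auc_boot = fit_and_auc(Xb, yb, Xb, yb)   # performance on the bootstrap sample itself
    auc_orig = fit_and_auc(Xb, yb, X, y)     # same model scored on the original sample
    optimism.append(auc_boot - auc_orig)     # how much the model "falls apart"

corrected = apparent - np.mean(optimism)     # optimism-corrected performance estimate
print(f"apparent AUC = {apparent:.3f}, optimism-corrected AUC = {corrected:.3f}")
```

In practice any step that uses Y (variable selection, hyperparameter tuning, and so on) would also go inside `fit_and_auc` so the outer bootstrap loop repeats all of them, as the comment emphasizes.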