3 events
Nov 4, 2022 at 23:52 comment added usεr11852 @MxML Because essentially the model itself is a hyperparameter to tune. So when we are comparing models, we need a "second" validation procedure. In general, even with a single model, doing a nested CV is not wrong. As Michael says (+1), for a single model, using a single test fold/split is mostly fine. Ideally, we can then compare the CV-procedure error against the test-fold error; the two should be relatively close to each other.
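A minimal sketch of the nested CV the comment describes, assuming scikit-learn and a synthetic dataset (the estimator, parameter grid, and fold counts are illustrative choices, not from the thread):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic data standing in for the user's dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Inner loop: tunes the hyperparameter C on each outer training split.
inner = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.1, 1.0, 10.0]},
    cv=3,
)

# Outer loop: estimates the generalization error of the *whole tuning
# procedure*, which is why no separate train_test_split is required.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```

Because the outer folds never touch the inner tuning, all 100% of the data contributes to the final error estimate, which is the point raised in the question above.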
Nov 4, 2022 at 23:29 comment added MxML It is not the different variations of CV that I am confused by. I am just wondering why one would NOT use 100% of the dataset for CV, and instead do a train_test_split followed by CV, potentially risking an unbalanced distribution.
Nov 4, 2022 at 21:57 history answered Michael M CC BY-SA 4.0