Timeline for Why is the outer loop of nested cross validation needed? [duplicate]
Current License: CC BY-SA 4.0
8 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| May 8, 2020 at 16:53 | history | closed | Firebug, kjetil b halvorsen♦, Frans Rodenburg, Peter Flom | | Duplicate of Nested cross validation for model selection |
| May 7, 2020 at 19:49 | history | edited | Sean | CC BY-SA 4.0 | added 127 characters in body |
| May 7, 2020 at 18:32 | review | Close votes | | | completed May 8, 2020 at 16:53 |
| May 7, 2020 at 18:31 | history | edited | Sean | CC BY-SA 4.0 | added 70 characters in body; edited title |
| May 7, 2020 at 18:20 | comment | added | Sean | | @Firebug I've updated the end of my question to hopefully make what I'm asking a bit clearer. |
| May 7, 2020 at 18:20 | history | edited | Sean | CC BY-SA 4.0 | added 175 characters in body |
| May 7, 2020 at 18:16 | comment | added | Sean | | @Firebug Thanks, but not really. It talks more about what the inner and outer loops do, which I understand. It also discusses selecting the model at the end. I understand that the purpose of nested cross-validation is not to select an exact model, but to generate each algorithm's or model's test performance in order to compare it against the others and select the best one. I was hoping for an explanation of why the outer loop is required, rather than what it does. More specifically, why do we need to separate model selection and performance estimation into two separate loops? |
| May 7, 2020 at 18:04 | history | asked | Sean | CC BY-SA 4.0 | |
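The question the comments circle around — why model selection (inner loop) and performance estimation (outer loop) must be separate — can be sketched in a few lines. This is a minimal toy illustration, not the poster's actual setup: the data, the ridge-like one-parameter model, and the candidate penalties `lams` are all made up for the example. The key point is that the inner loop only ever sees the outer training split, so the outer test fold gives an estimate of the whole tune-then-fit procedure that the tuning step never touched.

```python
# Toy nested cross-validation sketch (hypothetical data and model).
# Inner loop: choose a hyperparameter. Outer loop: score the entire
# "tune on train, then fit and predict" procedure on held-out data.
import random

random.seed(0)
# Synthetic data: y ≈ 2x plus noise (assumption for illustration only).
data = [(x, 2.0 * x + random.gauss(0, 0.5)) for x in [i / 10 for i in range(40)]]

def fit(train, lam):
    # Toy ridge-like slope estimate for y ≈ w*x with penalty lam.
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    return sxy / (sxx + lam)

def mse(w, fold):
    return sum((y - w * x) ** 2 for x, y in fold) / len(fold)

def folds(rows, k):
    return [rows[i::k] for i in range(k)]

def tune(train, lams, k=3):
    # Inner loop: model selection only, using inner train/validation splits.
    def inner_score(lam):
        fs = folds(train, k)
        return sum(
            mse(fit([r for j, f in enumerate(fs) if j != i for r in f], lam), fs[i])
            for i in range(len(fs))
        ) / k
    return min(lams, key=inner_score)

lams = [0.0, 0.1, 1.0, 10.0]
outer = folds(data, 5)
scores = []
for i, test in enumerate(outer):
    train = [r for j, f in enumerate(outer) if j != i for r in f]
    best = tune(train, lams)                    # inner loop never sees `test`
    scores.append(mse(fit(train, best), test))  # outer loop: generalization estimate

print(sum(scores) / len(scores))
```

If you instead reported the best inner-loop validation score, you would be quoting the score of the winner of a search, which is optimistically biased; the outer fold exists precisely to score the selection procedure on data it never optimized against.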