  • *There is no reason to think that the variables selected based on the training data are the right variables for making predictions on new data* — this is way too harsh. If so, then there is no reason to believe anything based on training data, and we can trash all of statistics and machine learning... – Commented Mar 22, 2023 at 9:33
  • @RichardHardy The claim is not that training means nothing for new data. The claim is that instability during training casts doubt, with good reason, on what follows. This is not specific to feature selection. If I did any kind of repeated testing (cross-validation, bootstrap) and got results that were all over the place, I would be skeptical about how the modeling would perform going forward. – Commented Mar 22, 2023 at 13:26
  • To me, there is a big difference between "casts doubt" and "there is no reason to think," and I cannot help but interpret your statement the way I outlined in my first comment. As currently phrased, I find it more misleading than useful. – Commented Mar 22, 2023 at 13:28
  • @RichardHardy I have made an edit that I hope softens the tone. – Commented Mar 22, 2023 at 13:33
  • I think it is phrased better now. Also, instable → unstable. – Commented Mar 22, 2023 at 14:08
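The repeated-testing check mentioned above (rerun the selection step on bootstrap resamples and see whether the same features keep coming back) can be sketched as follows. This is a minimal illustration, not code from the post: the synthetic data, the toy top-k correlation selector, and all names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: 2 informative features out of 10.
n, p = 60, 10
X = rng.standard_normal((n, p))
y = X[:, 0] + X[:, 1] + rng.standard_normal(n)

def select_top_k(X, y, k=2):
    """Toy selector: keep the k features most correlated (in absolute value) with y."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(corr)[-k:])

# Rerun the selection on B bootstrap resamples and count
# how often each feature is chosen.
B = 200
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)  # sample rows with replacement
    for j in select_top_k(X[idx], y[idx]):
        counts[j] += 1

freq = counts / B  # per-feature selection frequency across resamples
print(np.round(freq, 2))
```

If the frequencies concentrate on a stable subset, the selection is reproducible; if they are spread out ("all over the place"), that is exactly the instability the comment says should make one skeptical of the downstream model.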