  • The effect may be small, but it may not be that small. As you say, it's like pre-scaling your independent variables before CV, which uses "the future" (the test data) to help scale "the present" (the training data), something that won't happen in the real world. If you have random folds (not time series, stratification, etc.), the effect is smaller, but why break the train/test barrier at all? (Commented Oct 13, 2016 at 18:35)
  • @Wayne I certainly agree with you that whenever possible, it's best not to break the train/test barrier. Personally, I've never encountered real-world cases where this made a difference (w.r.t. unsupervised feature selection and/or normalization), but I have encountered cases where it was absolutely infeasible to do feature selection the "right way", i.e., within each fold (see the sketch after these comments). However, I see from your fine answer (which I'm upvoting) that you have encountered the opposite case, so apparently both scenarios exist. (Commented Oct 13, 2016 at 18:41)
  • I'm not sure that I've encountered CV results where normalization made a difference either, which I attribute to usually doing 10-fold CV: each test fold is only 10% of the data, so its effect on the scaling statistics is small. I have seen a difference with something like a 67/33 or even 75/25 non-CV split. (Commented Oct 13, 2016 at 18:55)
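
Below is a minimal sketch of the two approaches these comments contrast. The library (scikit-learn), the synthetic dataset, and the logistic-regression model are my own illustrative assumptions; the comments don't prescribe any of them.

```python
# Sketch: scaling before CV (leaky) vs. scaling inside each fold (clean).
# All names below are illustrative choices, not anything from the thread.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)

# Leaky: fit the scaler once on ALL rows, so every training fold is scaled
# with statistics that include its test fold ("the future" leaking into
# "the present").
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(LogisticRegression(), X_leaky, y, cv=cv)

# Clean: put the scaler inside a Pipeline, so that within each fold it is
# fit on the training part only and merely applied to the held-out part.
clean_scores = cross_val_score(
    make_pipeline(StandardScaler(), LogisticRegression()), X, y, cv=cv
)

print(f"leaky: {leaky_scores.mean():.3f}  clean: {clean_scores.mean():.3f}")
```

Consistent with the last comment above, the two mean scores typically differ only slightly under 10-fold CV, since each held-out fold contributes just 10% of the rows to the leaked scaling statistics; the gap tends to widen with a single large holdout such as a 75/25 split.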