5 events
when | what | by | license | comment
Apr 13, 2017 at 12:44 history edited CommunityBot
replaced http://stats.stackexchange.com/ with https://stats.stackexchange.com/
Jun 16, 2016 at 12:18 comment added Firebug No, you misunderstood. You use cross-validation to obtain an estimate of the generalization performance of your model, and that estimate has both bias and variance. Reducing both would be ideal. The variance itself is not an indicator of overfitting; it just makes the estimate less precise. The optimistic bias, on the other hand, is what leads to overfitting, because your model-building strategy optimizes hyperparameters based on those same performance estimates (the sketch after the timeline illustrates both points).
Jun 16, 2016 at 8:00 comment added maia First, thanks for answering all my questions. When using k-fold cross-validation, how much variance is a good indicator that I have overcome overfitting? I mean, as I understand it, if all 10 repetitions of training and evaluating the classifier in k-fold give nearly the same performance, that indicates we have overcome the overfitting issue, which otherwise, without k-fold, was the main drawback of dealing with the same number of features and observations?
Jun 12, 2016 at 17:03 history edited Firebug CC BY-SA 3.0
added 181 characters in body
Jun 12, 2016 at 16:34 history answered Firebug CC BY-SA 3.0
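
To make the exchange above concrete, here is a minimal sketch (not code from the answer itself; it assumes scikit-learn and a synthetic dataset) showing the two points Firebug raises: the spread of fold scores measures the variance of the CV estimate, not overfitting, while nested cross-validation addresses the optimistic bias that comes from tuning hyperparameters on the same estimates.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Synthetic data standing in for whatever dataset maia is using.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Plain 10-fold CV: the spread across folds is the variance of the
# performance estimate. Low spread means a precise estimate, not a
# guarantee against overfitting.
scores = cross_val_score(SVC(C=1.0), X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(f"plain CV: {scores.mean():.3f} +/- {scores.std():.3f}")

# Naive tuning: choosing C by these same CV scores makes the best score
# optimistically biased, because the estimate itself guided the choice.
grid = GridSearchCV(SVC(), param_grid={"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
grid.fit(X, y)
print(f"best inner-CV score (optimistic): {grid.best_score_:.3f}")

# Nested CV: the outer loop scores the whole tuning procedure on data the
# inner search never saw, removing that optimistic bias.
outer = cross_val_score(grid, X, y,
                        cv=KFold(n_splits=5, shuffle=True, random_state=1))
print(f"nested CV: {outer.mean():.3f} +/- {outer.std():.3f}")
```

Passing the GridSearchCV object as the estimator to cross_val_score is what makes the procedure nested: each outer fold re-runs the entire hyperparameter search on its training portion only.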