  • $\begingroup$ Off the top of my head, conditions likely to produce better accuracy with a validation set include a small N and a narrow distribution of the outcome variable (e.g., 98% "0" and 2% "1"). $\endgroup$ Commented Nov 28, 2015 at 1:44
  • $\begingroup$ I have 26% of "0" in the training set and 27% of "0" in the validation set, so I think the classes are roughly equally distributed. The training set has 1151 samples and the test set 291 samples. $\endgroup$ Commented Nov 28, 2015 at 1:56
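A quick sanity check on whether those two class proportions really differ: a sketch using the normal approximation for the difference of two proportions (the counts 1151/26% and 291/27% are taken from the comment above; everything else is a standard textbook formula, not something from this thread).

```python
import math

def prop_diff_z(p1, n1, p2, n2):
    # z-score for the difference between two observed proportions,
    # using the normal approximation (adequate for counts this large)
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p2 - p1) / se

# 26% "0" in 1151 training samples vs. 27% "0" in 291 test samples
z = prop_diff_z(0.26, 1151, 0.27, 291)
print(f"z = {z:.2f}")  # far below 1.96, so the class distributions look comparable
```

With z around 0.34, the 1-percentage-point gap is well within sampling noise, supporting the "equally distributed" reading.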
  • $\begingroup$ For this type of question, the absolute counts are important: the total cases in both classes for the test set and for cross validation, as well as the absolute counts of correctly recognized cases under both validation schemes. The precision of sensitivity & specificity depends on the absolute number of cases in the denominator. $\endgroup$ Commented Nov 29, 2015 at 15:02
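To illustrate why the absolute denominator matters: a sketch computing a 95% Wilson score interval for a proportion such as sensitivity. The counts below are hypothetical (not from the question) and chosen only to show that the same 90% point estimate is far less precise when it rests on few positive cases.

```python
import math

def wilson_ci(k, n, z=1.96):
    # 95% Wilson score interval for a proportion k/n,
    # e.g. sensitivity = true positives / all actual positives
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Hypothetical counts: identical 90% sensitivity, 10x different denominators.
print(wilson_ci(18, 20))    # wide interval: only 20 positive cases
print(wilson_ci(180, 200))  # much tighter interval with 200 cases
```

This is why reporting "sensitivity = 90%" alone is not enough: the interval from 20 cases can easily overlap an interval from a differently sized validation set even when the point estimates disagree.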