
  • I don't have the same output. Mine is 0.9 AUC in both V1 and V2. — Commented Aug 19 at 16:17
  • @underflow Could you please provide the complete output, and I will take a look. — Commented Aug 19 at 18:31
  • @robert First of all, thank you so much for this magnificent explanation! I feel more confident, especially since I was opting for strategy 1 all the way. Now I will investigate other strategies. My question is: when dealing with supervised learning use cases, can we limit the process to model performance monitoring (AUC, ROC, FPR, F1-score, accuracy) and ignore the statistical metrics and statistical tests? — Commented Aug 25 at 12:55
  • @RobertLong Sorry for the delay, and sorry, it was my mistake — the code works well. — Commented Aug 30 at 12:10
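The distinction raised in the comments — performance monitoring versus statistical drift tests — can be illustrated with a minimal sketch. Performance metrics such as AUC require ground-truth labels, which in production often arrive late or never, whereas a statistical test compares feature distributions alone. The data, variable names, and drift magnitude below are all made up for illustration; this is not the answer's original code.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical reference (training-time) and production feature windows,
# with a simulated mean shift standing in for covariate drift.
x_ref = rng.normal(loc=0.0, scale=1.0, size=1000)
x_prod = rng.normal(loc=0.5, scale=1.0, size=1000)

# Statistical drift test: needs no labels, can run as soon as data arrives.
stat, p_value = ks_2samp(x_ref, x_prod)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")

# Performance monitoring: needs true labels, which may be delayed.
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, size=1000), 0.0, 1.0)
auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.3f}")
```

In practice the two signals are complementary rather than interchangeable: the KS test above flags the distribution shift immediately, while the AUC only degrades once enough labeled production data has accumulated to reveal it.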