- The threshold of 0.5 is the assumption. The probabilistic prediction $q$ is mapped to a classification by using the threshold, and the misclassification loss is then only a function of this classification. You could calculate the misclassification loss equally for any other classification, e.g., one that rolls a die and assigns an instance to class A if we roll a 1 or 2. I did my best to explain what is a complicated and often misunderstood topic (and I do feel that everything I write about is relevant); I am sorry if I did not succeed. I would be happy to discuss any remaining points. — Commented Aug 1, 2018 at 6:13
- As for the relevancy comment, I apologize if it came off the wrong way. I tried to focus the scope of the question to be specifically about proper vs. improper, not discontinuous/misleading/etc. I am well acquainted with the links you provided and have no issues with your comments on misclassification costs or bottom line. I am just seeking a more rigorous explanation of the statement "accuracy is improper", especially given that this paper suggests otherwise for the common use case of binary outcomes. I appreciate you taking the time to discuss this with me and share your detailed thoughts. — Commented Aug 1, 2018 at 9:47
- After further reflection, I think I have a clearer grasp of the point you are making. If we consider the same step function with the step at 0.6 (corresponding to classification at a threshold of 0.6), then the scoring rule is improper, because the expected loss will no longer be minimized by the honest prediction $q = p$ for any true probability $p$ in the range $[0.5, 0.6)$. More generally, it will be improper at every threshold other than 0.5, and in practice we often want to use other thresholds due to asymmetric costs of misclassification, as you pointed out. — Commented Aug 1, 2018 at 12:32
- I concur that accuracy is clearly a bad metric for evaluating probabilities, even when a threshold of 0.5 is justified. I did say as much at the end of my original post, but this helped clear up the specific details I was having trouble with: namely, reconciling a result I misunderstood as showing that accuracy is proper for binary outcomes (when in reality it only applies to the very specific case of a 0.5 threshold) with the seemingly black-and-white statement "accuracy is improper" that I have been seeing a lot. Thanks for your help and patience. — Commented Aug 1, 2018 at 12:56
- @PasserBy: that is absolutely a valid point. My answer: because what "hard" classifications implicitly do is not classify or predict, but decide on actions, "treat instance X as belonging to class A". What accuracy et al. do is conflate classification accuracy with optimal decisions. I believe it is usually (not necessarily always!) better to separate the "classification" step from the decision. And for that, it makes more sense to have calibrated probabilistic predictions, and cost-optimal thresholds: stats.stackexchange.com/a/312124/1352 — Commented May 29, 2024 at 10:55
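The threshold argument in the comments above can be checked numerically. Below is a minimal sketch (the function name `expected_loss` and the example values `p = 0.55`, `t = 0.6` are illustrative choices, not from the thread): for a binary outcome with true class-1 probability $p$, a prediction $q$ thresholded at $t$ incurs expected 0–1 loss $1-p$ if classified as 1 and $p$ otherwise. A scoring rule is proper when the honest report $q = p$ minimizes this expectation; that holds at $t = 0.5$ but fails at $t = 0.6$.

```python
def expected_loss(q, p, t):
    """Expected misclassification (0-1) loss of reporting q when the
    true probability of class 1 is p and the threshold is t."""
    # Thresholding: predict class 1 iff q >= t.
    return (1 - p) if q >= t else p

p = 0.55  # true probability of class 1 (illustrative value)

# Threshold 0.5: the honest report q = p attains the minimum expected loss,
# so accuracy behaves like a proper (though not strictly proper) rule here.
honest = expected_loss(p, p, t=0.5)
best = min(expected_loss(i / 100, p, t=0.5) for i in range(101))
assert honest == best

# Threshold 0.6: the honest report q = p = 0.55 is classified as 0 and
# incurs loss p = 0.55, while the dishonest report q = 0.7 incurs only
# 1 - p = 0.45. Honesty is strictly beaten, so the rule is improper.
assert expected_loss(0.7, p, t=0.6) < expected_loss(p, p, t=0.6)
```

This is exactly the observation in the third comment: honest predictions in $[0.5, 0.6)$ are penalized relative to exaggerated ones once the step moves to 0.6.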