  • (+18) That is a fascinating example, really some food for thought. (Commented Dec 14, 2016 at 21:59)
  • (+4) @CagdasOzgenc I have tried implementing this example and unfortunately I don't think it is correct. I think the blog author has used the hinge loss to fit a logistic regression model to the probability of class membership, rather than to data drawn from the implied distribution. That way all of the data to the left of x=0 have p < 0.5 and all of the data to the right have p > 0.5. However, if you sample data from that distribution, you will have labels of 0 and 1 on both sides of x=0, and you do not get the result shown. So unfortunately, without that detail, it is a bit misleading. (Commented Mar 22, 2022 at 14:36)
  • (+4) I have done a bit more work on this and posted it here: stats.stackexchange.com/questions/568821/… It still works as a demo that an improper scoring rule can give better accuracy, but it isn't nearly as clear-cut as the example from the blog. (Commented Mar 29, 2022 at 16:01)
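The sampling point raised in the comments above can be illustrated with a short sketch. Note the logistic link, the x-range, and the sample size below are all assumptions for illustration, not the blog's actual setup: the idea is only that when labels are drawn from an implied probability P(y=1|x) that crosses 0.5 at x=0, both label values appear on both sides of x=0, unlike fitting directly to the probabilities themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed implied distribution: P(y=1 | x) rises smoothly through 0.5 at x = 0.
# A logistic link is used here purely as a stand-in; the blog's distribution may differ.
def p_of_x(x):
    return 1.0 / (1.0 + np.exp(-x))

# Draw covariates, then sample binary labels from the implied distribution.
x = rng.uniform(-3, 3, size=10_000)
y = rng.binomial(1, p_of_x(x))

# Both label values occur on both sides of x = 0, which is the detail the
# comment argues the blog example glosses over.
left, right = y[x < 0], y[x >= 0]
print(sorted(set(left)), sorted(set(right)))
```

With this setup, fitting to the sampled labels (rather than to the noiseless probabilities) is what makes the hinge-loss-versus-log-loss comparison meaningful, since any classifier must then cope with overlapping classes near x = 0.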