
I'm quite new to machine learning, so sorry if this is a simple question. When constructing a regression model (in the form of a neural network), we might use MSE as the main metric for comparing models. When constructing classification models, however, where the outputs are encoded as one-hot target vectors, cross-entropy is a better loss function for training, and we would generally assess models with accuracy/precision/recall metrics. Surely, though, the final cross-entropy loss of a trained model would still tell you how well different models are capable of fitting the data, and could point a hyperparameter search in the right direction? Are there any caveats to be aware of when interpreting the cross-entropy loss of a trained model and using it to compare different neural networks?
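For concreteness, here is a minimal sketch of the kind of comparison I have in mind (the validation-set probabilities are made up, and scikit-learn's `log_loss` is assumed to be available):

```python
import numpy as np
from sklearn.metrics import log_loss

# Hypothetical predicted class probabilities from two trained models
# on the same held-out set (4 examples, 3 classes), plus true labels.
y_val = np.array([0, 2, 1, 2])
probs_a = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.2, 0.7],
                    [0.3, 0.5, 0.2],
                    [0.2, 0.2, 0.6]])
probs_b = np.array([[0.5, 0.3, 0.2],
                    [0.3, 0.3, 0.4],
                    [0.4, 0.4, 0.2],
                    [0.3, 0.3, 0.4]])

# Mean cross-entropy (log loss) on the held-out set; lower is better.
print("model A:", log_loss(y_val, probs_a))
print("model B:", log_loss(y_val, probs_b))
```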

  • Cross-entropy is a loss function. Commented Apr 17, 2018 at 12:00
  • I know, but you can still compute the summed loss over all of your inputs and use it as a measure of network performance, just as with MSE. Commented Apr 17, 2018 at 12:05
  • You can use accuracy/precision/recall or cross-entropy (or even ROC AUC!). These are all methods that people use to compare models and evaluate model fit. Commented Apr 24, 2018 at 23:20
  • Cross-entropy is an example of a strictly proper scoring rule: stats.stackexchange.com/questions/312780/…. Kolassa and another Cross Validated member, Frank Harrell (among others), advocate heavily for proper scoring rules instead of threshold-based metrics like accuracy, precision, recall, sensitivity, specificity, and F1 score. Commented Jul 14, 2020 at 14:39

1 Answer


We use cross-entropy for training a model, but not for comparing different models. Cross-entropy is a loss function, i.e., a function of the model's parameters.

Accuracy, precision, recall, etc., on the other hand, are independent of the parametrization: they are computed from the model's hard predictions alone.
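As a toy illustration of this distinction (a sketch assuming NumPy, with made-up predictions): two sets of predictions can have identical accuracy while their cross-entropies differ, since accuracy depends only on the predicted class, whereas cross-entropy depends on the predicted probabilities themselves:

```python
import numpy as np

y = np.array([0, 1])

# Two prediction sets with the SAME argmax (hence identical accuracy)
# but different confidence, hence different cross-entropy.
confident = np.array([[0.99, 0.01],
                      [0.05, 0.95]])
hesitant  = np.array([[0.55, 0.45],
                      [0.45, 0.55]])

for name, p in [("confident", confident), ("hesitant", hesitant)]:
    acc = np.mean(p.argmax(axis=1) == y)                 # threshold-based metric
    ce = -np.mean(np.log(p[np.arange(len(y)), y]))       # mean cross-entropy
    print(f"{name}: accuracy={acc:.2f}, cross-entropy={ce:.3f}")
```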

  • Yikes! Newcomers, please disregard this. Cross Validated supports proper scoring rules (see, for instance, Frank Harrell or Stephan Kolassa). Commented Jul 14, 2020 at 14:28
  • @Dave? Sorry, I had no idea I even answered this question. What's the matter? Commented Jul 14, 2020 at 14:29
  • Cross-entropy is an example of a strictly proper scoring rule: stats.stackexchange.com/questions/312780/…. Accuracy, precision, recall, and F1 score are threshold-based and are not proper scoring rules; a numerical sketch follows below. Commented Jul 14, 2020 at 14:35
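To make the proper-scoring-rule point above concrete, here is a quick numerical sketch (assuming NumPy; the value of q and the grid are made up). The expected cross-entropy of a probability forecast p for a Bernoulli(q) event is minimized exactly at p = q, which is what "strictly proper" means:

```python
import numpy as np

# Expected log loss of forecasting probability p for a binary event
# whose true probability is q.  A strictly proper scoring rule is
# minimized, in expectation, exactly at p = q.
q = 0.7
p = np.linspace(0.01, 0.99, 981)
expected_loss = -(q * np.log(p) + (1 - q) * np.log(1 - p))
print("expected loss minimized at p =", round(p[expected_loss.argmin()], 2))  # -> 0.7
```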
