
I have an object detection model with my labels and images. I am trying to use the TensorFlow Ranking metric for MAP, https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/metrics/MeanAveragePrecisionMetric. The metric is accepted when I compile the model, but this is the result I get:

```
Epoch 2/220
92/92 [==============================] - 22s 243ms/step - loss: 0.0027 - mean_average_precision_metric: 0.0000e+00 - val_loss: 0.0019 - val_mean_average_precision_metric: 0.0000e+00
Epoch 3/220
92/92 [==============================] - 22s 245ms/step - loss: 0.0014 - mean_average_precision_metric: 0.0000e+00 - val_loss: 7.5579e-04 - val_mean_average_precision_metric: 0.0000e+00
Epoch 4/220
92/92 [==============================] - 23s 247ms/step - loss: 8.7288e-04 - mean_average_precision_metric: 0.0000e+00 - val_loss: 6.7357e-04 - val_mean_average_precision_metric: 0.0000e+00
Epoch 5/220
92/92 [==============================] - 23s 248ms/step - loss: 7.3901e-04 - mean_average_precision_metric: 0.0000e+00 - val_loss: 5.3464e-04 - val_mean_average_precision_metric: 0.0000e+00
```

My labels and images are also normalized according to my image dimensions:

```python
train_images /= 255
val_images /= 255
test_images /= 255

train_targets /= TARGET_SIZE
val_targets /= TARGET_SIZE
test_targets /= TARGET_SIZE

model.compile(loss='mse',
              optimizer='adam',
              metrics=[tfr.keras.metrics.MeanAveragePrecisionMetric()])
```

Am I using the metric incorrectly, or is it perhaps not meant for my data?


1 Answer


I would look into whether your loss function is correct. Mean squared error is a regression loss, while precision is a classification metric. Something like categorical cross-entropy is probably better suited.
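To illustrate the distinction (all names and numbers here are made up), a quick NumPy sketch of how the two losses score the same predictions: MSE just matches numbers, while cross-entropy scores the probability assigned to the true class.

```python
import numpy as np

# Hypothetical 3-class example: one-hot ground truth vs predicted probabilities.
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])

# Regression view: mean squared error treats each probability as a number to match.
mse = np.mean((y_true - y_prob) ** 2)

# Classification view: categorical cross-entropy is the negative log-probability
# of the true class, averaged over samples.
cce = -np.mean(np.sum(y_true * np.log(y_prob), axis=1))
```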

Either way, as a sanity check, you can always train the model for, say, 10 epochs, then run predictions and calculate the precision manually (or with scikit-learn's built-in method).
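For the scikit-learn route, a minimal sketch with made-up scores (note that `average_precision_score` takes the binary ground-truth labels first, then the predicted scores):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy binary example: 4 samples, 2 of which are positives.
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# Ranks samples by score and averages precision at each positive hit.
ap = average_precision_score(y_true, y_score)
```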

  • I have tried changing the loss function, but there was no change in the result. However, I have tried to calculate the AP with ```y_pred = np.array([106, 86, 115, 92]) y_truth = np.array([105, 85, 114, 91]) average_precision_score(y_pred, y_truth)```, and I get the error ```multiclass format is not supported```. This is with my predictions and ground truth labels. – Commented Sep 7, 2022 at 8:14
  • @NevMthw This is because average_precision_score expects binary labels for classification. If you have 100 classes then you need to one-hot encode them to give something like this: ```y_pred = np.array([[1,0,0],[1,0,0],[1,0,0],[0,1,0],[0,0,1]]) y_truth = np.array([[1,0,0],[0,1,0],[1,0,0],[0,1,0],[0,0,1]]) average_precision_score(y_pred, y_truth)``` – Commented Sep 14, 2022 at 10:22
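As a runnable version of the one-hot idea from the comment above (same made-up arrays): with multilabel-indicator inputs, `average_precision_score` computes a per-class AP and averages it. One caveat worth noting: scikit-learn's signature is `average_precision_score(y_true, y_score)`, i.e. ground truth first.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# One-hot encoded 3-class labels for 5 samples, as in the comment above.
y_true = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
y_pred = np.array([[1, 0, 0], [1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])

# Ground truth goes first; by default this macro-averages AP over the classes.
ap = average_precision_score(y_true, y_pred)
```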
