I have been testing different approaches to building NN models (TensorFlow, Keras) and noticed something strange with the metric passed to `model.compile`.
I compared two ways:
```python
model.compile(
    loss=keras.losses.CategoricalCrossentropy(),
    optimizer=keras.optimizers.Adam(),
    metrics=keras.metrics.Accuracy()
)
```

and

```python
model.compile(
    loss=keras.losses.CategoricalCrossentropy(),
    optimizer=keras.optimizers.Adam(),
    metrics=["accuracy"]
)
```

Result of the first approach:
```
Epoch 1/2
1875/1875 - 2s - loss: 0.0494 - accuracy: 0.0020
Epoch 2/2
1875/1875 - 2s - loss: 0.0401 - accuracy: 0.0030
<tensorflow.python.keras.callbacks.History at 0x7f9c00bc06d8>
```

Result of the second approach:
```
Epoch 1/2
1875/1875 - 2s - loss: 0.0368 - accuracy: 0.9884
Epoch 2/2
1875/1875 - 2s - loss: 0.0303 - accuracy: 0.9913
<tensorflow.python.keras.callbacks.History at 0x7f9bfd7d35c0>
```

This is quite strange. I thought `"accuracy"` was exactly the same as `keras.metrics.Accuracy()`. At least that is how the `loss` and `optimizer` arguments work, e.g. `"adam"` is the same as `keras.optimizers.Adam()`. Does anybody know why the results differ, or did I miss something?
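For context, here is a minimal NumPy sketch of what I *suspect* the two metrics compute (the arrays are made-up examples, and this is my guess at the behavior, not something from the docs): raw element-wise equality versus an argmax comparison.

```python
import numpy as np

# Made-up one-hot labels and softmax-style predictions
y_true = np.array([[0, 0, 1], [0, 1, 0]], dtype=float)
y_pred = np.array([[0.1, 0.2, 0.7], [0.05, 0.9, 0.05]])

# What keras.metrics.Accuracy() seems to compute: the fraction of
# entries where the raw prediction exactly equals the label --
# floats almost never match one-hot values, hence ~0
exact_match = float(np.mean(y_true == y_pred))

# What "accuracy" seems to report with one-hot labels and this loss
# (categorical accuracy): compare argmax of labels vs. predictions
cat_acc = float(np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1)))
```

With these example arrays, `exact_match` comes out 0.0 while `cat_acc` is 1.0, which would explain the near-zero accuracy in the first run.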
Edit:
Wrapping the metric in a list gives strange results too:
```python
model.compile(
    loss=keras.losses.CategoricalCrossentropy(),
    optimizer=keras.optimizers.Adam(),
    metrics=[keras.metrics.Accuracy()]
)
```

```
Epoch 1/2
1875/1875 - 2s - loss: 0.2996 - accuracy: 0.0000e+00
Epoch 2/2
1875/1875 - 2s - loss: 0.1431 - accuracy: 1.8333e-05
<tensorflow.python.keras.callbacks.History at 0x7f9bfd1045f8>
```
```python
metrics=[keras.metrics.categorical_accuracy]
```