I am building an MLP with TensorFlow 2.0. I am plotting the learning curve and also using keras.evaluate() on both the training and test data to see how well the model performs. The code I'm using:
    history = model.fit(X_train, y_train, batch_size=32, epochs=200, validation_split=0.2, verbose=0)

    # evaluate the model on the training and test sets
    eval_result_tr = model.evaluate(X_train, y_train)
    eval_result_te = model.evaluate(X_test, y_test)
    print("[training loss, training accuracy]:", eval_result_tr)
    print("[test loss, test accuracy]:", eval_result_te)
    # [training loss, training accuracy]: [0.5734676122665405, 0.9770742654800415]
    # [test loss, test accuracy]: [0.7273344397544861, 0.9563318490982056]

    # plot the learning curve
    import matplotlib.pyplot as plt
    plt.plot(history.history["loss"], label='training')
    plt.plot(history.history['val_loss'], label='validation')
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.title("Learning curve of the base model")
    plt.legend()
    plt.show()

The output is:

[learning curve plot of training and validation loss produced by the code above]
My question is: how does keras.evaluate() arrive at a training loss of 0.5734676122665405? I took the average of history.history["loss"], but it gives a different value (0.7975356701016426).
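For reference, this is roughly how I computed that average (a minimal sketch, assuming numpy is available; it is not part of the training code above):

    import numpy as np

    # mean of the per-epoch training losses recorded by model.fit()
    mean_epoch_loss = np.mean(history.history["loss"])
    print(mean_epoch_loss)  # ~0.7975, unlike the 0.5734 reported by model.evaluate()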
Or am I mistaken to begin with in trying to evaluate the model's performance on the training data with eval_result_tr = model.evaluate(X_train, y_train)?
