For my trained model, this code:

```python
model(x[0].reshape(1, 784).cuda())
```

returns:

```
tensor([[-1.9903, -4.0458, -4.1143, -4.0074, -3.5510,  7.1074]], device='cuda:0')
```

My network model is defined as:
```python
# Hyper-parameters
input_size = 784
hidden_size = 50
num_classes = 6
num_epochs = 5000
batch_size = 1
learning_rate = 0.0001

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out
```

I'm attempting to understand the returned value:
```
tensor([[-1.9903, -4.0458, -4.1143, -4.0074, -3.5510,  7.1074]], device='cuda:0')
```

Is the value 7.1074 the most probable, since it is the maximum value in the tensor? As 7.1074 is at position 5, does this mean that the class being predicted for input x[0] is 5? If so, what is the intuition behind this?
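To make my question concrete, here is how I understand the mapping from these raw scores to a prediction. A minimal plain-Python sketch (no torch dependency, computing softmax by hand rather than with `F.softmax`, so this is my assumption of what the framework does):

```python
import math

# Raw scores (logits) returned by the network for x[0]
logits = [-1.9903, -4.0458, -4.1143, -4.0074, -3.5510, 7.1074]

# Softmax maps logits to probabilities: exp(z_i) / sum_j exp(z_j)
exps = [math.exp(z) for z in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted class is the index of the largest logit,
# which is also the index of the largest probability
pred = max(range(len(logits)), key=lambda i: logits[i])
print(pred)         # → 5
print(probs[pred])  # close to 1.0 here, since 7.1074 dominates
```

Is this the right mental model, i.e. the outputs are unnormalized logits because the last layer has no softmax, and `nn.CrossEntropyLoss` applies the softmax internally during training?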