
my code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultivariateLinearRegressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(3, 1)

    def forward(self, x):
        # print(1)
        return self.linear(x)

x_train = torch.FloatTensor([[73, 80, 75],
                             [93, 88, 93],
                             [89, 91, 90],
                             [96, 98, 100],
                             [73, 66, 70]])
y_train = torch.FloatTensor([[152], [185], [180], [196], [142]])

model = MultivariateLinearRegressionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

# print(222)
ep = 2000
for epoch in range(ep + 1):
    hypothesis = model(x_train)
    cost = F.mse_loss(hypothesis, y_train)

    if epoch % 100 == 0:
        print('Epoch {:4d}/{} Cost: {:.6f}'.format(epoch, ep, cost.item()))

    optimizer.zero_grad()
    cost.backward()
    optimizer.step()

my problem:

this code is my own MultivariateLinearRegressionModel.

But in the for loop, why is hypothesis = model(x_train) the same as hypothesis = model.forward(x_train)?

I don't understand why these two statements are equivalent. Is this a Python language feature?

1 Answer

Because your model MultivariateLinearRegressionModel inherits from nn.Module, calling model(x_train) invokes nn.Module's __call__ method, which (after running any registered hooks) executes the forward method you defined in your class. In Python, writing instance(...) calls the instance's __call__ method, and nn.Module implements __call__ to dispatch to forward.

That's why model(x_train) and model.forward(x_train) give the same result. (Note that calling model(x_train) is preferred, since calling forward directly skips the hooks.)
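To illustrate the mechanism without PyTorch, here is a minimal sketch of how nn.Module's dispatch works: a base class defines __call__, which delegates to the subclass's forward. (This is a simplification; the real nn.Module.__call__ also runs pre- and post-forward hooks.)

```python
class Module:
    """Minimal stand-in for nn.Module: __call__ delegates to forward."""
    def __call__(self, *args, **kwargs):
        # The real nn.Module also runs registered hooks around this call.
        return self.forward(*args, **kwargs)

class Doubler(Module):
    def forward(self, x):
        return 2 * x

m = Doubler()
print(m(21))          # instance(...) triggers Module.__call__ -> Doubler.forward
print(m.forward(21))  # calling forward directly gives the same result
```

Both calls print 42, which is exactly the behavior you observed with model(x_train) and model.forward(x_train).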
