
I load features and labels from my training dataset. Both are originally NumPy arrays, but I convert them to torch tensors using torch.from_numpy(features.copy()) and torch.tensor(labels.astype(np.bool)).
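For reference, a minimal sketch of that conversion (the array shapes and values here are illustrative stand-ins for the real dataset; plain bool is used instead of np.bool, which is deprecated in recent NumPy):

```python
import numpy as np
import torch

# Illustrative data standing in for the real training set
features = np.random.rand(8, 4).astype(np.float32)
labels = np.array([0, 1, 1, 0, 1, 0, 0, 1])

# from_numpy shares memory with the source array, hence the .copy()
features_t = torch.from_numpy(features.copy())
# np.bool is deprecated in recent NumPy; plain bool behaves the same here
labels_t = torch.tensor(labels.astype(bool))

print(features_t.shape, features_t.dtype)  # torch.Size([8, 4]) torch.float32
print(labels_t.dtype)                      # torch.bool
```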

I've also noticed that torch.autograd.Variable seems to be something like a placeholder in TensorFlow.

When I train my network, I first tried

features = features.cuda()
labels = labels.cuda()
outputs = Config.MODEL(features)
loss = Config.LOSS(outputs, labels)

Then I tried

features = features.cuda()
labels = labels.cuda()
input_var = Variable(features)
target_var = Variable(labels)
outputs = Config.MODEL(input_var)
loss = Config.LOSS(outputs, target_var)

Both blocks run and train the network, but I worry that there might be a subtle difference between them.

1 Answer


According to this question, you no longer need Variables to use PyTorch autograd.

Thanks to @skytree, we can make this even more explicit: Variables have been deprecated, i.e. you're not supposed to use them anymore.

Autograd automatically supports Tensors with requires_grad set to True.
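A minimal sketch of what that means in practice (the function y = sum(x²) is just an example):

```python
import torch

# A tensor created with requires_grad=True is tracked by autograd directly;
# no Variable wrapper is needed
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = 4 + 9 = 13
y.backward()         # populates x.grad with dy/dx = 2x

print(x.grad)  # tensor([4., 6.])
```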

And more importantly

Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables.

This means that if your features and labels are already tensors (which they appear to be in your example), Variable(features) and Variable(labels) simply return the tensors again.
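You can check this directly; since PyTorch 0.4, wrapping a tensor in Variable just hands the tensor back (possibly with a deprecation warning):

```python
import torch
from torch.autograd import Variable

t = torch.ones(3)
v = Variable(t)  # deprecated, but still works

# The "Variable" is really just a Tensor again
print(type(v))             # <class 'torch.Tensor'>
print(torch.is_tensor(v))  # True
print(v.requires_grad)     # False -- wrapping changed nothing
```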

The original purpose of Variables was to be able to use automatic differentiation (Source):

Variables are just wrappers for the tensors so you can now easily auto compute the gradients.


2 Comments

The Variable API has been deprecated. pytorch.org/docs/stable/autograd.html
Well, I kind of said that in my first sentence. But I will edit my answer to make it clearer.
