
I can't use torch.Tensor() with requires_grad parameter (torch version : 0.4.1)

without requires_grad :

x = torch.Tensor([[.5, .3, 2.1]])
print(x)
> tensor([[0.5000, 0.3000, 2.1000]])

with requires_grad=True or requires_grad=False :

x = torch.Tensor([[.5, .3, 2.1]], requires_grad=False)
print(x)
Traceback (most recent call last):
  File "D:/_P/dev/ai/pytorch/notes/tensor01.py", line 4, in <module>
    x = torch.Tensor([[.5, .3, 2.1]], requires_grad=False)
TypeError: new() received an invalid combination of arguments - got (list, requires_grad=bool), but expected one of:
 * (torch.device device)
 * (torch.Storage storage)
 * (Tensor other)
 * (tuple of ints size, torch.device device)
      didn't match because some of the keywords were incorrect: requires_grad
 * (object data, torch.device device)
      didn't match because some of the keywords were incorrect: requires_grad

1 Answer


You are creating the tensor x using the torch.Tensor class constructor, which doesn't take a requires_grad flag. Instead, use the torch.tensor() (lowercase 't') factory function:

x = torch.tensor([[.5, .3, 2.1]], requires_grad=False) 

Edit: adding a link to docs: torch.Tensor
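To illustrate, a minimal sketch of how torch.tensor() with requires_grad=True then behaves in an autograd pass (the values and operations here are illustrative, not from the question):

```python
import torch

# torch.tensor() (lowercase) accepts requires_grad directly
x = torch.tensor([[.5, .3, 2.1]], requires_grad=True)

# gradients now flow through operations on x
y = (x * 2).sum()
y.backward()
print(x.grad)  # tensor([[2., 2., 2.]])
```

On older versions where you must start from torch.Tensor, the in-place setter x.requires_grad_() achieves the same flag after construction.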


3 Comments

Wondering why there are both torch.tensor and torch.Tensor.
@EduardoReis I agree, this is very confusing behavior.
@Eduardo Reis, @afarley The torch.Tensor class constructor always creates an uninitialised tensor with the default torch.float32 data type. The torch.tensor() method, which is a factory method, tries to figure out the data type intelligently from its argument and can be used to create properly initialised tensors.
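The dtype difference described in the comment above can be seen directly (a small sketch, assuming a recent PyTorch where both callables still exist):

```python
import torch

# torch.Tensor is the class constructor: it always uses the
# default dtype (torch.float32), regardless of the input values
a = torch.Tensor([1, 2, 3])
print(a.dtype)  # torch.float32

# torch.tensor() is a factory function: it infers the dtype
# from the data it is given
b = torch.tensor([1, 2, 3])
print(b.dtype)  # torch.int64

c = torch.tensor([1., 2., 3.])
print(c.dtype)  # torch.float32
```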
