
What is the volatile attribute of a Variable in PyTorch? Here's a sample of code defining a variable in PyTorch.

from torch.autograd import Variable  # pre-0.4.0 API

datatensor = Variable(data, volatile=True)

2 Answers


Basically: set the input to a network to volatile if you are doing inference only and won't be running backpropagation; this conserves memory.

From the docs:

Volatile is recommended for purely inference mode, when you’re sure you won’t be even calling .backward(). It’s more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False.

Edit: The volatile keyword has been deprecated as of PyTorch version 0.4.0.
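Since 0.4.0, the documented replacement for `volatile=True` is the `torch.no_grad()` context manager. A minimal sketch of the modern inference-only pattern (the toy `Linear` model and input shapes here are just for illustration):

```python
import torch

model = torch.nn.Linear(4, 2)   # toy model, for illustration only
data = torch.randn(3, 4)

# Old (pre-0.4.0) style, now removed:
#   datatensor = Variable(data, volatile=True)
#   output = model(datatensor)

# Modern equivalent: disable autograd for the whole inference pass.
with torch.no_grad():
    output = model(data)

# No graph was recorded, so the output does not require gradients.
print(output.requires_grad)  # False
```

Like `volatile`, this skips building the autograd graph entirely, so intermediate buffers needed only for `.backward()` are never kept.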




For versions of PyTorch prior to 0.4.0, Variable and Tensor were two different entities. For Variables, you could specify two flags: volatile and requires_grad. Both were used for fine-grained exclusion of subgraphs from gradient computation.

The difference between volatile and requires_grad is in how the flag propagates to the outputs of an operation. If even a single input to an operation is a Variable with volatile = True, its output is also marked volatile. With requires_grad it is the other way around: all inputs to the operation must have requires_grad = False for the output to be flagged requires_grad = False as well.
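The volatile flag no longer exists, but the requires_grad propagation rule described above still holds in modern PyTorch (≥ 0.4.0) and can be checked directly; the tensors below are arbitrary examples:

```python
import torch

a = torch.randn(2, 2, requires_grad=True)
b = torch.randn(2, 2)  # requires_grad defaults to False

# A single input requiring gradients is enough to make the output require them.
c = a + b
print(c.requires_grad)  # True

# Only when *all* inputs have requires_grad=False is the output excluded
# from gradient computation.
d = b * b
print(d.requires_grad)  # False
```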

From PyTorch 0.4.0, Tensors and Variables have merged, and the volatile flag is deprecated.

