
Recently I deployed a program using libtorch (the PyTorch C++ API). The program runs as expected, but it gives me a warning:

Warning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters(). 

How do I disable the warning?
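
For reference, the RNN usage looks roughly like this (a minimal sketch with illustrative names and sizes, not the actual deployed program). The warning, when it appears, is emitted by the forward call whenever the module's weights are not laid out in one contiguous chunk of memory:

    #include <torch/torch.h>

    int main() {
      // Illustrative LSTM; sizes are arbitrary.
      torch::nn::LSTM lstm(torch::nn::LSTMOptions(/*input_size=*/32, /*hidden_size=*/64));

      // Run on the GPU if one is available.
      torch::Device device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);
      lstm->to(device);

      // Input shape is (seq_len, batch, input_size) with the default options.
      auto input = torch::randn({10, 4, 32}, device);

      // The warning, if any, is printed on this call when the weights are not in a
      // single contiguous chunk (e.g. after device moves or loading a checkpoint).
      auto output = lstm->forward(input);
      return 0;
    }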

  • Naive solution, but why not fix the warning by actually calling flatten_parameters? You'd save on memory, right? Commented Dec 9, 2019 at 8:19
  • Because the problem is not solved yet: github.com/pytorch/pytorch/issues/19053 Commented Dec 9, 2019 at 9:03

1 Answer


I had the same problem recently, but the warning wasn't shown on every forward call of the RNN.

It turned out that the warning was only thrown on a forward call after I had previously moved the model from the CUDA GPU to the CPU and back to the GPU. I resolved it with some workaround code and ended up just leaving the models on the GPU.
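
The workaround itself isn't shown here. A plausible shape for it (a sketch rather than the actual code, assuming a reasonably recent libtorch where torch::nn::LSTM exposes flatten_parameters()) is to recompact the weights after moving the module back to the GPU:

    #include <torch/torch.h>

    // Sketch of a CUDA -> CPU -> CUDA round trip followed by recompacting the
    // RNN weights so that the next forward call does not warn.
    void round_trip_and_reflatten(torch::nn::LSTM& lstm) {
      lstm->to(torch::kCPU);   // e.g. to inspect or serialize on the host
      lstm->to(torch::kCUDA);  // back to the GPU for inference

      // After the round trip the weight tensors may no longer share one
      // contiguous chunk of memory; flatten_parameters() compacts them again.
      lstm->flatten_parameters();
    }

If the model can simply stay on the GPU, as I ended up doing, the round trip never happens and the warning does not appear in the first place.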

