
How can I know whether a TensorFlow tensor is on CUDA (GPU) or on the CPU? Take this very simple example:

import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# Place tensors on the GPU
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

    # print tensor a
    print(a)

    # Run on the GPU
    c = tf.matmul(a, b)
    print(c)

The code runs fine. Here, I am explicitly placing tensors 'a' and 'b' on the GPU. When printing 'a', I get:

tf.Tensor(
[[1. 2. 3.]
 [4. 5. 6.]], shape=(2, 3), dtype=float32)

It does not give any info about whether 'a' is on the CPU or the GPU. Now, suppose there is an intermediate tensor like 'c' that gets created during some operation. How can I know whether tensor 'c' is a CPU or a GPU tensor? Also, suppose the tensor is placed on the GPU. How can I move it to the CPU?

2 Answers


As of TensorFlow 2.3 you can use the .device property of a Tensor:

import tensorflow as tf

a = tf.constant([1, 2, 3])
print(a.device)  # /job:localhost/replica:0/task:0/device:CPU:0

A more detailed explanation can be found here.
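
For the second part of the question (moving a tensor to the CPU), here is a minimal sketch, assuming TF 2.x with eager execution and a visible GPU: wrapping tf.identity in a tf.device('/CPU:0') scope copies the tensor into host memory.

import tensorflow as tf

with tf.device('/device:GPU:0'):  # assumes a GPU is available
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)

print(c.device)  # .../device:GPU:0

# tf.identity materializes a copy on whatever device the
# surrounding scope pins it to, here the CPU.
with tf.device('/CPU:0'):
    c_cpu = tf.identity(c)

print(c_cpu.device)  # .../device:CPU:0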




You may be thinking of memory management in PyTorch, where you explicitly define which memory a tensor is stored in. To my knowledge, this is not supported in TensorFlow (talking about 2.x); you work either on the CPU or on the GPU. This is decided, depending on your TF version, at the first declaration of a tensor. As far as I know, the GPU is used by default; otherwise it has to be specified explicitly before you start any graph operations.

Rule of thumb: if you have a working CUDA environment and a TF version that supports the GPU by default, tensors will always be placed on the GPU, otherwise on the CPU, unless you define the placement manually.

Referring to the answer by Patwie on SO.
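
A quick way to check how this rule of thumb plays out on your own machine is to list the visible GPUs and see where the result of an op lands when nothing is specified; a minimal sketch, assuming TF 2.x:

import tensorflow as tf

# Which GPUs does this TF build actually see?
print(tf.config.list_physical_devices('GPU'))

# Where does the result of an op land by default?
x = tf.random.uniform((2, 2))
y = tf.matmul(x, x)
print(y.device)  # .../device:GPU:0 on a working CUDA setup, .../device:CPU:0 otherwise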

2 Comments

You mentioned memory management in PyTorch, which gives the option to place a tensor on the CPU or the GPU. While I was studying a model repository in PyTorch, the code specifically moved some of the data from GPU to CPU, stating that the specific operation is faster on the CPU. I was looking for that kind of feature, but I guess it is not present.
I am afraid not. However, what might help you is calling .numpy() on an eager tensor. It returns a NumPy array, which is then stored in CPU memory.
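
A minimal sketch of the .numpy() route mentioned above, assuming TF 2.x with eager execution:

import tensorflow as tf

t = tf.constant([1.0, 2.0, 3.0]) * 2.0  # eager tensor, placed by TF
print(t.device)

# .numpy() copies the values into host (CPU) memory as a NumPy ndarray
arr = t.numpy()
print(type(arr), arr)  # <class 'numpy.ndarray'> [2. 4. 6.]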
