How can I know whether a TensorFlow tensor is on CUDA (GPU) or the CPU? Take this very simple example:
    import tensorflow as tf
    tf.debugging.set_log_device_placement(True)

    # Place tensors on the GPU
    with tf.device('/device:GPU:0'):
        a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
        b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

    # print tensor a
    print(a)

    # Run on the GPU
    c = tf.matmul(a, b)
    print(c)

The code runs fine. Here, I am physically placing tensors 'a' and 'b' on the GPU. When printing 'a', I get:
    tf.Tensor(
    [[1. 2. 3.]
     [4. 5. 6.]], shape=(2, 3), dtype=float32)

This does not give any information about whether 'a' is on the CPU or the GPU. Now, suppose there is an intermediate tensor, like tensor 'c', which gets created during some operation. How can I tell whether tensor 'c' is a CPU or a GPU tensor? Also, suppose a tensor is placed on the GPU. How can I move it to the CPU?
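For reference, here is a minimal sketch of what I have tried so far, assuming that the .device attribute and tf.identity under a device scope are the relevant tools; I am not sure this is the recommended way:

    import tensorflow as tf

    with tf.device('/device:GPU:0'):
        a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

    # Assumption: the .device attribute reports the device string of an eager tensor,
    # e.g. '/job:localhost/replica:0/task:0/device:GPU:0'
    print(a.device)

    # Assumption: re-creating the tensor under a CPU device scope with tf.identity
    # copies it to the CPU
    with tf.device('/device:CPU:0'):
        a_cpu = tf.identity(a)

    print(a_cpu.device)

Is inspecting .device and copying with tf.identity the right approach, or is there a more idiomatic way to query and change a tensor's device?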