
I'm new to TensorFlow. I installed CUDA 7.5 and cuDNN v4 following the instructions on the TensorFlow website, adjusted the TensorFlow configuration, and then ran the following example from the website:

python -m tensorflow.models.image.mnist.convolutional 

I'm pretty sure TensorFlow is using one of my GPUs rather than the other, but I'd like it to use the faster one. Does this example code just default to the first GPU it finds? If so, how can I choose which GPU to use in my TensorFlow Python code?

The messages I get when running the example code are:

ldt-tesla:~$ python -m tensorflow.models.image.mnist.convolutional
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: Tesla K20c
major: 3 minor: 5 memoryClockRate (GHz) 0.7055
pciBusID 0000:03:00.0
Total memory: 4.63GiB
Free memory: 4.57GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:572] creating context when one is currently active; existing: 0x2f27390
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 1 with properties:
name: Quadro K2200
major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:02:00.0
Total memory: 3.95GiB
Free memory: 3.62GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 0 to device ordinal 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:59] cannot enable peer access from device ordinal 1 to device ordinal 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y N
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 1:   N Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K20c, pci bus id: 0000:03:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:793] Ignoring gpu device (device: 1, name: Quadro K2200, pci bus id: 0000:02:00.0) with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
Initialized!

2 Answers


You can set the CUDA_VISIBLE_DEVICES environment variable to expose only the GPUs you want. Quoting this example on masking GPUs:

CUDA_VISIBLE_DEVICES=1       Only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1     Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1"   Same as above, quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3   Devices 0, 2, 3 will be visible; device 1 is masked
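
If you would rather do this from inside your Python script than on the command line, a minimal sketch is below. The ordinal "1" is only illustrative; the key point is that the variable has to be set before TensorFlow initializes CUDA, so set it before the import:

import os

# Must be set before TensorFlow touches the GPUs, so do it before the import.
# "1" is just an example ordinal; pick the device number you want from the
# "Found device N" lines in the log.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf

# The one remaining visible GPU now appears inside TensorFlow as /gpu:0.
with tf.Session() as sess:
    print(sess.run(tf.constant(42)))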

2 Comments

Thanks! That seems to do the job and get rid of that error :). I also get a message that says "Ignoring gpu device with cuda multiprocessor count 5. The minimum required count is 8. You can adjust this requirement with the...". Using the same approach you suggest, I can set that environment variable to change the required count, but I'm not sure what it means. What does the count/minimum count refer to? Thank you!
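
For reference, that message comes from the TF_MIN_GPU_MULTIPROCESSOR_COUNT check mentioned in the log above. If you did want TensorFlow to accept the Quadro as well, the threshold can be lowered the same way on the command line (the value shown is only illustrative):

TF_MIN_GPU_MULTIPROCESSOR_COUNT=5 python -m tensorflow.models.image.mnist.convolutional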

You can also choose which GPU to run the program on at run time instead of hard-coding it into your scripts. This avoids problems when the script runs on machines that don't have multiple GPUs, or that have fewer GPUs than expected.

Say you want to run on GPU #3; you can do that like this:

CUDA_VISIBLE_DEVICES=3 python -m tensorflow.models.image.mnist.convolutional
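
If you instead do want to pin operations from inside the script, a rough sketch using TensorFlow's device placement is below. The "/gpu:1" string is only an example; allow_soft_placement lets TensorFlow fall back to another device if that GPU isn't present:

import tensorflow as tf

# Place these ops on the second visible GPU.
with tf.device("/gpu:1"):
    a = tf.constant([1.0, 2.0, 3.0], name="a")
    b = tf.constant([4.0, 5.0, 6.0], name="b")
    c = a * b

# allow_soft_placement falls back to another device if /gpu:1 is missing;
# log_device_placement prints where each op actually ended up.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))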

