243

I plan to use distributed TensorFlow, and I saw that TensorFlow can use GPUs for training and testing. In a cluster environment, each machine could have 0, 1, or more GPUs, and I want to run my TensorFlow graph on GPUs on as many machines as possible.

I found that when running tf.Session(), TensorFlow gives information about the GPUs in log messages like the ones below:

I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)

My question is: how do I get information about the currently available GPUs from TensorFlow? I can get loaded GPU information from the log, but I want to do it in a more sophisticated, programmatic way. I can also restrict GPUs intentionally using the CUDA_VISIBLE_DEVICES environment variable, so I don't want a way of getting GPU information from the OS kernel.

In short, I want a function like tf.get_available_gpus() that will return ['/gpu:0', '/gpu:1'] if there are two GPUs available in the machine. How can I implement this?

1 Comment

why aren't simple things just easier in tensorflow? Commented Oct 1, 2021 at 20:17

16 Answers

310

There is an undocumented method called device_lib.list_local_devices() that enables you to list the devices available in the local process. (N.B. As an undocumented method, this is subject to backwards incompatible changes.) The function returns a list of DeviceAttributes protocol buffer objects. You can extract a list of string device names for the GPU devices as follows:

from tensorflow.python.client import device_lib

def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

Note that (at least up to TensorFlow 1.4), calling device_lib.list_local_devices() will run some initialization code that, by default, will allocate all of the GPU memory on all of the devices (GitHub issue). To avoid this, first create a session with an explicitly small per_process_gpu_memory_fraction, or allow_growth=True, to prevent all of the memory being allocated. See this question for more details.
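
For reference, a minimal TF 1.x sketch of that workaround might look like this (a hedged example, not from the original answer; the exact memory fraction is arbitrary):

import tensorflow as tf
from tensorflow.python.client import device_lib

# Create a session with a small footprint first, so that the device query
# below does not allocate all of the GPU memory.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively: config.gpu_options.per_process_gpu_memory_fraction = 0.05
sess = tf.Session(config=config)

print([d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU'])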


9 Comments

PS, if this method ever gets moved/renamed, I would look inside tensorflow/python/platform/test.py:is_gpu_available since that's being used quite a bit
Is there a way to get the devices' free and total memory? I see that there is a memory_limit field in DeviceAttributes, and I think it is the free memory, not the total.
I remember that in versions earlier than 1.0, tensorflow would print some info about GPUs when it was imported in Python. Have those messages been removed in the newer tensorflow versions (hence your suggestion being the only way to check GPU stuff)?
@CharlieParker I believe we still print one log line per GPU device on startup in TF1.1.
@aarbelle - using the above mentioned method to return all attributes includes a field Free memory for me, using tensorflow1.1. In python: from tensorflow.python.client import device_lib, then device_lib.list_local_devices()
172

You can check the full device list using the following code:

from tensorflow.python.client import device_lib

device_lib.list_local_devices()

4 Comments

@Kulbear because it contains strictly less information than the existing answer.
Still prefer this answer due to its simplicity. I am using it directly from bash: python3 -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
I agree, this answer saved me time. I just copy/pasted the code without having to read the longer official answer. I know the details, just needed the line of code. It already wasn't picked as the answer and that's sufficient. No need to downvote.
getting error cannot import name 'format_exc' from 'traceback'
68

Since TensorFlow 2.1, you can use tf.config.list_physical_devices('GPU'):

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, "  Type:", gpu.device_type)

If you have two GPUs installed, it outputs this:

Name: /physical_device:GPU:0   Type: GPU
Name: /physical_device:GPU:1   Type: GPU

In TF 2.0, you must add experimental:

gpus = tf.config.experimental.list_physical_devices('GPU') 
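
If you want device names closer to the '/gpu:0' strings the question asks for, one possible sketch (using tf.config.list_logical_devices, available since TF 2.1) is:

import tensorflow as tf

def get_available_gpus():
    # Logical device names look like '/device:GPU:0' rather than '/gpu:0'.
    return [d.name for d in tf.config.list_logical_devices('GPU')]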


2 Comments

Command worked great. I had to change 'GPU' to 'XLA_GPU'.
This is the right answer for tensorflow 2.1+
66

There is also a method in the test utilities, so all that has to be done is:

tf.test.is_gpu_available() 

and/or

tf.test.gpu_device_name() 

Look up the Tensorflow docs for arguments.

6 Comments

This returns just GPU:0
@Tal that means you have 1 GPU available (at PCI slot ID 0). So tf.test.is_gpu_available() will return True
The OP requested a method that returns a list of available GPUS. At least on my multi-GPU setup, tf.test.gpu_device_name() returns only the name of the first one.
This method is now deprecated, use tf.config.list_physical_devices('GPU') instead
is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use tf.config.list_physical_devices('GPU') instead.
25

The accepted answer gives you the number of GPUs, but it also allocates all the memory on those GPUs, which may be unwanted for some applications. You can avoid this by creating a session with a fixed lower memory limit before calling device_lib.list_local_devices().

I ended up using nvidia-smi to get the number of GPUs without allocating any memory on them.

import subprocess

n = str(subprocess.check_output(["nvidia-smi", "-L"])).count('UUID')
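
A variant of the same idea that is arguably easier to sanity-check is to decode the output and count lines instead of 'UUID' substrings (a hedged sketch; nvidia-smi -L prints one line per GPU):

import subprocess

# Each line of `nvidia-smi -L` looks like:
# GPU 0: GeForce GTX 1080 (UUID: GPU-xxxxxxxx-...)
lines = subprocess.check_output(["nvidia-smi", "-L"]).decode().strip().splitlines()
print("GPU count:", len(lines))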

3 Comments

Such a list does not match the TensorFlow list; the enumeration can be different.
Another thing: after setting tf.config.set_visible_devices(), the aforementioned commands still see all the GPUs in the machine.
Also, if your CUDA driver version does not match what your TensorFlow version requires, this will still show some number of GPUs, while the number of GPUs usable by TensorFlow will be zero.
10

Apart from the excellent explanation by mrry, who suggested using device_lib.list_local_devices(), I can show you how to check for GPU-related information from the command line.

Because currently only Nvidia GPUs work with NN frameworks, the answer covers only them. Nvidia has a page documenting how you can use the /proc filesystem interface to obtain run-time information about the driver, any installed NVIDIA graphics cards, and the AGP status.

/proc/driver/nvidia/gpus/0..N/information

Provide information about each of the installed NVIDIA graphics adapters (model name, IRQ, BIOS version, Bus Type). Note that the BIOS version is only available while X is running.

So you can run cat /proc/driver/nvidia/gpus/0/information from the command line and see information about your first GPU. It is easy to run this from Python, and you can also check the second, third, and fourth GPU until it fails.

Mrry's answer is definitely more robust, and I am not sure whether mine will work on non-Linux machines, but Nvidia's page provides other interesting information that not many people know about.
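
A minimal Python sketch of that idea, assuming a Linux machine with the Nvidia driver's /proc interface (note that newer drivers name the per-GPU directories by PCI bus ID rather than 0..N, so this enumerates whatever is present):

import os

GPU_PROC_DIR = "/proc/driver/nvidia/gpus"

def read_nvidia_proc_info():
    # Collect the 'information' file for every GPU directory that exists.
    infos = {}
    if not os.path.isdir(GPU_PROC_DIR):
        return infos  # no Nvidia /proc interface on this machine
    for entry in sorted(os.listdir(GPU_PROC_DIR)):
        path = os.path.join(GPU_PROC_DIR, entry, "information")
        if os.path.isfile(path):
            with open(path) as f:
                infos[entry] = f.read()
    return infos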

Comments

8

The following works in tensorflow 2:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, "  Type:", gpu.device_type)

From 2.1, you can drop experimental:

gpus = tf.config.list_physical_devices('GPU')

https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices

2 Comments

Does this work when I use a scaleTier of BASIC_GPU too? When I run this code it gives me just the CPUs.
Duplicate answer of MiniQuark (but with less detail..)
5

I have a GPU called NVIDIA GeForce GTX 1650 Ti in my machine, with tensorflow-gpu==2.2.0.

Run the following two lines of code:

import tensorflow as tf

print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

Output:

Num GPUs Available: 1 

Comments

5

In TensorFlow Core v2.3.0, the following code should work.

import tensorflow as tf

visible_devices = tf.config.get_visible_devices()
for device in visible_devices:
    print(device)

Depending on your environment, this code will produce the following results:

PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')
PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')
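
Note that get_visible_devices reflects any restriction you applied earlier. A small sketch of that interplay (tf.config.set_visible_devices must be called before the GPUs are initialized):

import tensorflow as tf

physical_gpus = tf.config.list_physical_devices('GPU')
if physical_gpus:
    # Restrict TensorFlow to the first GPU; get_visible_devices will then
    # report only that GPU (plus the CPU).
    tf.config.set_visible_devices(physical_gpus[:1], 'GPU')
for device in tf.config.get_visible_devices():
    print(device)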

Comments

2

This is the approach recommended by the latest TensorFlow versions:

tf.config.list_physical_devices('GPU') 

Comments

1

I am working with TF 2.1 and torch, so I don't want to tie this automatic selection to any single ML framework. I just use plain nvidia-smi and os.environ to find a vacant GPU.

import os
import subprocess

def auto_gpu_selection(usage_max=0.01, mem_max=0.05):
    """Auto set CUDA_VISIBLE_DEVICES to a vacant GPU.

    :param usage_max: max fraction of GPU utilization
    :param mem_max: max fraction of GPU memory in use
    """
    os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
    # Parse the per-GPU rows of the default `nvidia-smi` table.
    log = str(subprocess.check_output("nvidia-smi", shell=True)).split(r"\n")[6:-1]
    gpu = 0
    # Maximum number of GPUs; 8 is enough for most machines.
    for i in range(8):
        idx = i * 3 + 2
        if idx > len(log) - 1:
            break
        inf = log[idx].split("|")
        if len(inf) < 3:
            break
        usage = int(inf[3].split("%")[0].strip())
        mem_now = int(str(inf[2].split("/")[0]).strip()[:-3])
        mem_all = int(str(inf[2].split("/")[1]).strip()[:-3])
        if usage < 100 * usage_max and mem_now < mem_max * mem_all:
            os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu)
            print("\nAuto choosing vacant GPU-%d : Memory:[%dMiB/%dMiB] , GPU-Util:[%d%%]\n"
                  % (gpu, mem_now, mem_all, usage))
            return
        print("GPU-%d is busy: Memory:[%dMiB/%dMiB] , GPU-Util:[%d%%]"
              % (gpu, mem_now, mem_all, usage))
        gpu += 1
    print("\nNo vacant GPU, use CPU instead\n")
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

If it can find a vacant GPU, it sets CUDA_VISIBLE_DEVICES to the bus ID of that GPU:

GPU-0 is busy: Memory:[5738MiB/11019MiB] , GPU-Util:[60%]
GPU-1 is busy: Memory:[9688MiB/11019MiB] , GPU-Util:[78%]

Auto choosing vacant GPU-2 : Memory:[1MiB/11019MiB] , GPU-Util:[0%]

Otherwise, it sets it to -1 to use the CPU:

GPU-0 is busy: Memory:[8900MiB/11019MiB] , GPU-Util:[95%]
GPU-1 is busy: Memory:[4674MiB/11019MiB] , GPU-Util:[35%]
GPU-2 is busy: Memory:[9784MiB/11016MiB] , GPU-Util:[74%]

No vacant GPU, use CPU instead

Note: Call this function before you import any ML framework that requires a GPU; it can then automatically choose one. It also makes it easy to run multiple tasks.
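
For example, a typical usage sketch would be:

# Pick a GPU (or fall back to CPU) before the framework initializes CUDA,
# so that CUDA_VISIBLE_DEVICES takes effect.
auto_gpu_selection()

import tensorflow as tf  # now sees only the chosen GPU, or none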

Comments

0

Use the following to check all the relevant pieces:

from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

version = tf.__version__
executing_eagerly = tf.executing_eagerly()
hub_version = hub.__version__
available = tf.config.experimental.list_physical_devices("GPU")

print("Version: ", version)
print("Eager mode: ", executing_eagerly)
print("Hub Version: ", hub_version)
print("GPU is", "available" if available else "NOT AVAILABLE")

Comments

0

Ensure you have the latest TensorFlow 2.x GPU build installed on a machine with GPU support, then execute the following code in Python:

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf

print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

You will get output that looks like this:

2020-02-07 10:45:37.587838: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-07 10:45:37.588896: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0, 1, 2, 3, 4, 5, 6, 7
Num GPUs Available: 8

Comments

0

Run the following in any shell

python -c "import tensorflow as tf; print(\"Num GPUs Available: \", len(tf.config.list_physical_devices('GPU')))" 

Comments

0

You can use the following code to show each device's name, type, memory, and locality.

from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())

Comments

0

The accepted answer gives you a device description like:

['/device:GPU:0'] 

If you want more details you can use tf.config.experimental.get_device_details()

import tensorflow as tf

def get_available_gpus():
    physical_gpus = tf.config.list_physical_devices(device_type="GPU")
    return [(x, tf.config.experimental.get_device_details(x)) for x in physical_gpus]

This will give you details on device_name and compute_capability, e.g.:

[(PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), {'device_name': 'NVIDIA T500', 'compute_capability': (7, 5)})] 

Comments
