74

I've tried TensorFlow on both CUDA 7.5 and 8.0, without cuDNN (my GPU is old; cuDNN doesn't support it).

When I execute device_lib.list_local_devices(), there is no GPU in the output. Theano sees my GPU and works fine with it, and the examples in /usr/share/cuda/samples work fine as well.

I installed TensorFlow through pip install. Is my GPU (a GTX 460) too old for TensorFlow to support?

  • A couple of quick suggestions: 1. Did you install the GPU-enabled pip package (e.g. pip install tensorflow-gpu)? 2. Are there any log messages about loading the CUDA libraries the first time you create a tf.Session? Commented Dec 30, 2016 at 20:56
  • Thanks for the quick reply. I installed tensorflow-gpu. During session initialization it wrote to the terminal that the minimum CUDA compute capability is 3.0, while my card has 2.1 :( Commented Dec 30, 2016 at 22:08
  • I have that error with the gpu version. Commented Jul 13, 2018 at 20:33
  • @mrry tensorflow-gpu doesn't actually install a GPU build anymore. Mine installed an MKL build, for instance. You have to specify a GPU build of the tensorflow package instead. stackoverflow.com/a/71809780/125507 Commented Apr 9, 2022 at 18:02
  • Note: when I installed tensorflow-gpu (2.9), uninstalled the older tensorflow (2.6), and then installed the corresponding new tensorflow (also 2.9), the GPU became visible and usable. Commented Jun 21, 2022 at 8:10

10 Answers

44

I came across this same issue in Jupyter notebooks. This could be an easy fix.

$ pip uninstall tensorflow
$ pip install tensorflow-gpu

You can check if it worked with:

import tensorflow as tf
tf.test.gpu_device_name()  # returns something like '/device:GPU:0' when a GPU is visible, or '' otherwise

Update 2020

It seems like TensorFlow 2.0+ ships with GPU capabilities included, so pip install tensorflow should be enough.
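
A quick way to tell the two failure modes apart (a CPU-only package installed, versus a GPU build that cannot find CUDA) is a minimal sketch like this, assuming TF 2.1 or newer:

import tensorflow as tf

if not tf.test.is_built_with_cuda():
    # The installed package is a CPU-only build; reinstall a GPU-enabled
    # package as described above.
    print("CPU-only TensorFlow build installed")
elif not tf.config.list_physical_devices('GPU'):
    # CUDA-enabled build, but no usable GPU was detected; check the driver
    # and the CUDA/cuDNN versions instead.
    print("CUDA build, but no GPU detected")
else:
    print("GPU is visible to TensorFlow")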


4 Comments

Helped me to fix a problem after something was broken with conda install tensorflow-gpu
This worked perfectly for me. I had everything configured correctly but had both tensorflow and tensorflow-gpu installed. I guess it was using tensorflow only and hence earlier listed only my CPU. I uninstalled both and then installed just tensorflow-gpu. Now I can see both CPU and GPU as a result of calling device_lib.list_local_devices()
Not applicable for Tensorflow 2.0+
Yep, doesn't work for newer versions of TensorFlow. Looks like tensorflow-gpu is no longer a package. I think 2.2 supports GPUs out of the box.
38

Note: if you use Windows, install TensorFlow version 2.10 at most; otherwise use Linux or WSL (TensorFlow after 2.10 does not support GPU on Windows).

Summary:

  1. check if tensorflow sees your GPU (optional)
  2. check if your videocard can work with tensorflow (optional)
  3. find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version
  4. install CUDA Toolkit
  5. check active CUDA version and switch it (if necessary)
  6. install cuDNN SDK
  7. pip uninstall tensorflow; pip install tensorflow-gpu
  8. check if tensorflow sees your GPU

* source - https://www.tensorflow.org/install/gpu

Detailed instruction:

  1. check if tensorflow sees your GPU (optional)

     from tensorflow.python.client import device_lib

     def get_available_devices():
         local_device_protos = device_lib.list_local_devices()
         return [x.name for x in local_device_protos]

     print(get_available_devices())
     # my output was => ['/device:CPU:0']
     # good output must be => ['/device:CPU:0', '/device:GPU:0']
  2. check if your videocard can work with tensorflow (optional; a short sketch for checking its compute capability follows this list)

  3. find versions of CUDA Toolkit and cuDNN SDK, that you need

    a) find your tf version

     import sys
     print(sys.version)
     # 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]

     import tensorflow as tf
     print(tf.__version__)
     # my output was => 1.13.1

    b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version

     https://www.tensorflow.org/install/source#linux
     * it is written for linux, but it worked in my case
     see that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4
  4. install CUDA Toolkit

    a) install CUDA Toolkit 10.0

     https://developer.nvidia.com/cuda-toolkit-archive
     select: CUDA Toolkit 10.0 and download the base installer (2 GB)
     installation settings: select only CUDA
       (my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development)

    b) add environment variables:

     system variables / path must have:
       D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin
       D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp
       D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64
       D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include
  5. check active CUDA version and switch it (if necessary)

    a) run in cmd:

     nvcc --version

     This shows the currently active CUDA version on the system. Restart cmd after each change to the environment variables.

    b) if you have multiple CUDA versions installed and want to switch to 11.5, do this:

     - system variables / CUDA_PATH must have:
         C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5
     - system variables / path must have:
         all lines with v11.5 at the top (use the "move up" button)
  6. install cuDNN SDK

    a) download cuDNN SDK v7.4

     https://developer.nvidia.com/rdp/cudnn-archive
     (needs registration, but it is simple)
     select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0"

    b) add path to 'bin' folder into "environment variables / system variables / path":

     D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin 
  7. pip uninstall tensorflow; pip install tensorflow-gpu

  8. check if tensorflow sees your GPU

     - restart your PC
     - print(get_available_devices())
     - # now this code should return => ['/device:CPU:0', '/device:GPU:0']
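
Regarding step 2 (checking whether your videocard can work with TensorFlow): here is a minimal sketch, assuming the card is at least detected by TensorFlow. The physical_device_desc field returned by device_lib.list_local_devices() includes the compute capability, which you can compare with the minimum your TensorFlow version requires; if no GPU is printed at all, look the model up at https://developer.nvidia.com/cuda-gpus instead.

from tensorflow.python.client import device_lib

# Each GPU entry's physical_device_desc ends with its compute capability, e.g.
# "device: 0, name: ..., pci bus id: ..., compute capability: 6.1"
for dev in device_lib.list_local_devices():
    if dev.device_type == 'GPU':
        print(dev.name, '->', dev.physical_device_desc)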

9 Comments

This is an extremely complex post to format; hope it's OK for you. Note that code blocks need a 4-space indentation. If the code block is inside an (un)ordered list, it must be indented 8 spaces to render properly. Stack Overflow markdown is a bit different from GitHub's, for example; you can check the help topic about it here
Thank you, brasofilo! Now the post looks great! I will try to apply your recommendations in the next post.
@endolith, cause they are in software requirements. AFAIK, TF uses CUDA to access GPU functionality. CUDNN contains examples of popular networks written in CUDA.
@endolith, I'm sorry, but I don't know how to help you now. I haven't tested the solution for 3 years. If I have time, I'll test it again. Maybe worth going through the instructions from tensorflow.org/install/gpu.
Worked (the non-detailed instructions); the only outdated part is installing tensorflow-gpu, since it is no longer needed. Now tensorflow-gpu is part of tensorflow.
26

If you are using conda, you might have installed the CPU version of TensorFlow. Check the package list (conda list) of the environment to see if this is the case. If so, remove the package with conda remove tensorflow and install keras-gpu instead (conda install -c anaconda keras-gpu). This will install everything you need to run your machine learning code on the GPU. Cheers!

P.S. You should first check that you have installed the drivers correctly using nvidia-smi. By default it is not on your PATH, so you may need to add its folder to your PATH. The .exe file can be found at C:\Program Files\NVIDIA Corporation\NVSMI
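
A minimal sketch of that driver check from Python (assuming nvidia-smi is on your PATH, or that you pass its full path as noted above):

import subprocess

try:
    # If the driver is installed correctly, this prints the driver/CUDA versions
    # and the detected GPUs; a failure here means the driver, not TensorFlow,
    # is the first thing to fix.
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True, check=True).stdout)
except (FileNotFoundError, subprocess.CalledProcessError) as err:
    print("nvidia-smi failed:", err)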

2 Comments

This fixed it for me as well. Main because many of the dependencies I had were the wrong versions
Hi, is this solution for tensorflow 2.0.+ versions?
18

When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1. (Can be checked through https://developer.nvidia.com/cuda-gpus) Unfortunately, TensorFlow needs a GPU with minimum CUDA Compute Capability 3.0. https://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux

You might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU.

Comments

12

I had a problem because I didn't specify the version of TensorFlow, so I ended up with 2.11. After many hours I found that my problem is described in the install guide:

Caution: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin

Before that, I read most of the answers to this and similar questions. I followed @AndrewPt answer. I already had installed CUDA but updated the version just in case, installed cudNN, and restarted the computer.

The easiest solution for me was to downgrade to 2.10 (you can try different options mentioned in the install guide). I first uninstalled all of these packages (probably it's not necessary, but I didn't want to see how pip messed up versions at 2 am):

pip uninstall keras
pip uninstall tensorflow-io-gcs-filesystem
pip uninstall tensorflow-estimator
pip uninstall tensorflow
pip uninstall Keras-Preprocessing
pip uninstall tensorflow-intel

because I wanted only the packages required for the old version, and I didn't do it for all of the packages required by version 2.11. After that I installed TensorFlow 2.10:

pip install "tensorflow<2.11"

and it worked.

I used this code to check if GPU is visible:

import tensorflow as tf

print(tf.__version__)  # should now report 2.10.x
print(tf.config.list_physical_devices('GPU'))

Comments

10

So as of 2022-04, the tensorflow package contains both CPU and GPU builds. To install a GPU build, search to see what's available:

λ conda search tensorflow
Loading channels: done
# Name                       Version           Build  Channel
tensorflow                    0.12.1                py35_1  conda-forge
tensorflow                    0.12.1                py35_2  conda-forge
tensorflow                     1.0.0                py35_0  conda-forge
…
tensorflow                     2.5.0    mkl_py39h1fa1df6_0  pkgs/main
tensorflow                     2.6.0  eigen_py37h37bbdb1_0  pkgs/main
tensorflow                     2.6.0  eigen_py38h63d3545_0  pkgs/main
tensorflow                     2.6.0  eigen_py39h855417c_0  pkgs/main
tensorflow                     2.6.0    gpu_py37h3e8f0e3_0  pkgs/main
tensorflow                     2.6.0    gpu_py38hc0e8100_0  pkgs/main
tensorflow                     2.6.0    gpu_py39he88c5ba_0  pkgs/main
tensorflow                     2.6.0    mkl_py37h9623b36_0  pkgs/main
tensorflow                     2.6.0    mkl_py38hdc16138_0  pkgs/main
tensorflow                     2.6.0    mkl_py39h31650da_0  pkgs/main

You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for MKL (Intel CPU), Eigen, or GPU.

To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:

λ conda search tensorflow=2*=gpu*
Loading channels: done
# Name                       Version           Build  Channel
tensorflow                     2.0.0    gpu_py36hfdd5754_0  pkgs/main
tensorflow                     2.0.0    gpu_py37h57d29ca_0  pkgs/main
tensorflow                     2.1.0    gpu_py36h3346743_0  pkgs/main
tensorflow                     2.1.0    gpu_py37h7db9008_0  pkgs/main
tensorflow                     2.5.0    gpu_py37h23de114_0  pkgs/main
tensorflow                     2.5.0    gpu_py38h8e8c102_0  pkgs/main
tensorflow                     2.5.0    gpu_py39h7dc34a2_0  pkgs/main
tensorflow                     2.6.0    gpu_py37h3e8f0e3_0  pkgs/main
tensorflow                     2.6.0    gpu_py38hc0e8100_0  pkgs/main
tensorflow                     2.6.0    gpu_py39he88c5ba_0  pkgs/main

To install a specific version in an otherwise empty environment, you can use a command like:

λ conda activate tf
(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0
…
The following NEW packages will be INSTALLED:

  _tflow_select      pkgs/main/win-64::_tflow_select-2.1.0-gpu
  …
  cudatoolkit        pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2
  cudnn              pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0
  …
  tensorflow         pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0
  tensorflow-base    pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0
  …

As you can see, if you install a GPU build, it will automatically also install compatible cudatoolkit and cudnn packages. You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on the official website.
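
If you want to double-check which CUDA and cuDNN versions a given TensorFlow build was compiled against, a small sketch is below (assuming TF 2.3 or newer, where tf.sysconfig.get_build_info() is available; the exact keys can vary between versions):

import tensorflow as tf

# Build-time metadata of the installed package; on GPU builds this typically
# includes entries such as 'cuda_version' and 'cudnn_version'.
for key, value in tf.sysconfig.get_build_info().items():
    print(key, '=', value)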

After installation, confirm that it worked and it sees the GPU by running:

λ python
Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
>>> tf.config.list_physical_devices()
[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
 PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Getting conda to install a GPU build and other packages you want to use is another story, however, because there are a lot of package incompatibilities for me. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.

This tries to install any TF 2.x version that's built for GPU and that has dependencies compatible with Spyder and matplotlib's dependencies, for instance:

λ conda install tensorflow=2*=gpu* spyder matplotlib 

For me, this ended up installing a two year old GPU version of tensorflow:

  matplotlib         pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1
  spyder             pkgs/main/win-64::spyder-5.1.5-py37haa95532_1
  tensorflow         pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0

I had previously been using the tensorflow-gpu package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow or the CUDA dependencies:

λ conda list
…
cookiecutter              1.7.2            pyhd3eb1b0_0
cryptography              3.4.8            py38h71e12ea_0
cycler                    0.11.0           pyhd3eb1b0_0
dataclasses               0.8              pyh6d0b6a4_7
…
tensorflow                2.3.0            mkl_py38h8557ec7_0
tensorflow-base           2.3.0            eigen_py38h75a453f_0
tensorflow-estimator      2.6.0            pyh7b7c402_0
tensorflow-gpu            2.3.0            he13fc11_0

2 Comments

I had the very same configuration and the same problem. Since it seems like conda does not have a GPU version of TensorFlow 2.3 in its repository, I solved it by just installing tensorflow-gpu==2.3 through pip
Thanks a lot, I wasted so much time wondering why my 4090 couldn't be seen by TensorFlow 2.10.0. I thought my CUDA+cuDNN was badly installed, but instead it was simply the fact that I was getting a CPU build. Fixed it with conda install tensorflow=2.10.0=gpu_py39h9bca9fa_0
7

The following worked for me on an HP laptop with an NVIDIA card of CUDA compute capability 3.0, running Windows 7.

pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe uninstall tensorflow-gpu
pip3.6.exe install tensorflow-gpu

Comments

1

I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was

conda install cudatoolkit==11.2
pip install tensorflow-gpu==2.8.0

Although I checked that the CUDA toolkit version was compatible with the TensorFlow version, it was still returning an error where libcudart.so.11.0 was not found. As a result, GPUs were not visible. The remedy was to set the environment variable LD_LIBRARY_PATH to point to your anaconda3/envs/<your_tensorflow_environment>/lib with this command

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib 

Unless you make it permanent, you will need to create this variable every time you start a terminal prior to a session (jupyter notebook). It can be conveniently automated by following this procedure from conda's official website.
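
To confirm the missing-library error is actually gone before launching a full session, a small sketch (assuming Linux and a CUDA 11.x runtime, hence the libcudart.so.11.0 name) is:

import ctypes

try:
    # Loads the CUDA runtime the same way TensorFlow will; raises OSError
    # if it cannot be found via LD_LIBRARY_PATH or the system loader path.
    ctypes.CDLL("libcudart.so.11.0")
    print("libcudart.so.11.0 found")
except OSError as err:
    print("still not found:", err)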

Comments

0

In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using:

pip uninstall tensorflow-gpu==1.14
pip install tensorflow-gpu==1.14

Comments

0

I experienced the same problem on my Windows OS. I followed TensorFlow's instructions on installing CUDA, cuDNN, etc., and tried the suggestions in the answers above, with no success. What solved my issue was updating my GPU drivers. You can update them via:

  1. Pressing windows-button + r
  2. Entering devmgmt.msc
  3. Right-Clicking on "Display adapters" and clicking on the "Properties" option
  4. Going to the "Driver" tab and selecting "Updating Driver".
  5. Finally, click on "Search automatically for updated driver software"
  6. Restart your machine and run the following check again:
from tensorflow.python.client import device_lib

local_device_protos = device_lib.list_local_devices()
[x.name for x in local_device_protos]
Sample output:

2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189
pciBusID: 0000:01:00.0
2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0
2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N
2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)
>>> [x.name for x in local_device_protos]
['/device:CPU:0', '/device:GPU:0']

Comments
