
When I run glxgears, I get the following error:

libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Error: couldn't get an RGB, Double-buffered visual

My system is Ubuntu 16.04 running as a Docker image, nvidia/cuda:8.0-runtime-ubuntu16.04.

The image contains VirtualGL and TurboVNC and is started with the following parameters:

docker run --runtime=nvidia --privileged -d -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 -e USE_DISPLAY="7" my_image

There is no problem if I change the base image to nvidia/cuda:10.2-runtime-ubuntu18.04, but the application this container is built for needs CUDA 8.

I found some advice to remove the library with sudo rm /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1, but it does not work.

Ubuntu 16.04, CUDA 8:

user@host:/opt/noVNC$ sudo ldconfig -p | grep -i libGL.so
        libGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1
        libGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGL.so
        libGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa/libGL.so

user@host:/usr/lib/x86_64-linux-gnu$ ll libGL*
lrwxrwxrwx 1 root root      13 Jun 14  2018 libGL.so -> mesa/libGL.so
lrwxrwxrwx 1 root root      32 May 25 14:14 libGLESv1_CM_nvidia.so.1 -> libGLESv1_CM_nvidia.so.440.33.01
-rw-r--r-- 1 root root   63696 Nov 12  2019 libGLESv1_CM_nvidia.so.440.33.01
lrwxrwxrwx 1 root root      29 May 25 14:14 libGLESv2_nvidia.so.2 -> libGLESv2_nvidia.so.440.33.01
-rw-r--r-- 1 root root  111416 Nov 12  2019 libGLESv2_nvidia.so.440.33.01
-rw-r--r-- 1 root root  911218 Oct 23  2015 libGLU.a
lrwxrwxrwx 1 root root      15 Oct 23  2015 libGLU.so -> libGLU.so.1.3.1
lrwxrwxrwx 1 root root      15 Oct 23  2015 libGLU.so.1 -> libGLU.so.1.3.1
-rw-r--r-- 1 root root  453352 Oct 23  2015 libGLU.so.1.3.1
lrwxrwxrwx 1 root root      26 May 25 14:14 libGLX_indirect.so.0 -> libGLX_nvidia.so.440.33.01
lrwxrwxrwx 1 root root      26 May 25 14:14 libGLX_nvidia.so.0 -> libGLX_nvidia.so.440.33.01
-rw-r--r-- 1 root root 1114496 Nov 12  2019 libGLX_nvidia.so.440.33.01

user@host:/usr/lib/x86_64-linux-gnu$ ll mesa
-rw-r--r-- 1 root root     31 Jun 14  2018 ld.so.conf
lrwxrwxrwx 1 root root     14 Jun 14  2018 libGL.so -> libGL.so.1.2.0
lrwxrwxrwx 1 root root     14 Jun 14  2018 libGL.so.1 -> libGL.so.1.2.0
-rw-r--r-- 1 root root 471680 Jun 14  2018 libGL.so.1.2.0

Ubuntu 18.04, CUDA 10:

user@host:/opt/noVNC$ sudo ldconfig -p | grep -i libGL.so
        libGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGL.so.1

user@host:/usr/lib/x86_64-linux-gnu$ ll libGL*
lrwxrwxrwx 1 root root      14 May 10  2019 libGL.so.1 -> libGL.so.1.0.0
-rw-r--r-- 1 root root  567624 May 10  2019 libGL.so.1.0.0
lrwxrwxrwx 1 root root      32 May 20 16:43 libGLESv1_CM_nvidia.so.1 -> libGLESv1_CM_nvidia.so.440.33.01
-rw-r--r-- 1 root root   63696 Nov 12  2019 libGLESv1_CM_nvidia.so.440.33.01
lrwxrwxrwx 1 root root      29 May 20 16:43 libGLESv2_nvidia.so.2 -> libGLESv2_nvidia.so.440.33.01
-rw-r--r-- 1 root root  111416 Nov 12  2019 libGLESv2_nvidia.so.440.33.01
lrwxrwxrwx 1 root root      15 May 21  2016 libGLU.so.1 -> libGLU.so.1.3.1
-rw-r--r-- 1 root root  453352 May 21  2016 libGLU.so.1.3.1
lrwxrwxrwx 1 root root      15 May 10  2019 libGLX.so.0 -> libGLX.so.0.0.0
-rw-r--r-- 1 root root   68144 May 10  2019 libGLX.so.0.0.0
lrwxrwxrwx 1 root root      16 Feb 19 05:09 libGLX_indirect.so.0 -> libGLX_mesa.so.0
lrwxrwxrwx 1 root root      20 Feb 19 05:09 libGLX_mesa.so.0 -> libGLX_mesa.so.0.0.0
-rw-r--r-- 1 root root  488344 Feb 19 05:09 libGLX_mesa.so.0.0.0
lrwxrwxrwx 1 root root      26 May 20 16:43 libGLX_nvidia.so.0 -> libGLX_nvidia.so.440.33.01
-rw-r--r-- 1 root root 1114496 Nov 12  2019 libGLX_nvidia.so.440.33.01
lrwxrwxrwx 1 root root      22 May 10  2019 libGLdispatch.so.0 -> libGLdispatch.so.0.0.0
-rw-r--r-- 1 root root  612792 May 10  2019 libGLdispatch.so.0.0.0

user@host:/usr/lib/x86_64-linux-gnu$ ll mesa
ls: cannot access 'mesa': No such file or directory

The host has CUDA 10.2, but I do not know whether that matters or could be causing the problem.

I have no idea how to solve this problem.

Thank you for any advice.

3 Answers


The two errors also appear when running ROS with a GUI in Docker under Windows Subsystem for Linux 2 (WSL2).

The error libGL error: No matching fbConfigs or visuals found can be fixed with:

export LIBGL_ALWAYS_INDIRECT=1

The error libGL error: failed to load driver: swrast can be fixed with:

sudo apt-get install -y mesa-utils libgl1-mesa-glx
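
If you want both fixes baked into the image rather than applied by hand, a minimal sketch of the corresponding Dockerfile lines (an assumption on my part, for a Debian/Ubuntu-based image like the one in the question) could look like this:

# install the Mesa GLX client library and utilities (provides glxgears/glxinfo)
RUN apt-get update && \
    apt-get install -y --no-install-recommends mesa-utils libgl1-mesa-glx && \
    rm -rf /var/lib/apt/lists/*

# ask libGL to use indirect GLX rendering instead of a local DRI driver
ENV LIBGL_ALWAYS_INDIRECT=1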


Probably irrelevant side-note:

For "the ROS with GUI on Docker guide" to run, you also have to install dbus.

sudo apt-get update
sudo apt-get install -y dbus

I do not think this is relevant here, since you will only see the two errors in question after having installed dbus, but I do not know the background of the question, so perhaps it helps. Installing dbus gets rid of the error D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open “/var/lib/dbus/machine-id”.


The solution is to replace the nvidia/cuda:8.0-runtime-ubuntu16.04 image with nvidia/opengl:1.0-glvnd-runtime-ubuntu16.04 and install CUDA 8 manually.

CUDA 8 installation: https://gitlab.com/nvidia/container-images/cuda/-/blob/ubuntu16.04/8.0/runtime/Dockerfile

Do not forget to add display to the NVIDIA driver capabilities variable:

ENV NVIDIA_DRIVER_CAPABILITIES compute,utility,display 

https://hub.docker.com/r/nvidia/opengl
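
Putting this together, the start of such a Dockerfile might look roughly like the sketch below; the actual CUDA 8 installation steps are omitted and should be taken from the Dockerfile linked above:

FROM nvidia/opengl:1.0-glvnd-runtime-ubuntu16.04

# expose graphics (GLX/EGL) in addition to compute when run with the NVIDIA runtime
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility,display

# ... install the CUDA 8 runtime here, following the linked nvidia/cuda 8.0 runtime Dockerfile ...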


I had similar errors on a Docker container (Ubuntu 18) on a host (Ubuntu 22) with an NVIDIA GPU. Here is what I did.

Step 1 (on the host machine): install nvidia-container-toolkit:

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu22.04/nvidia-docker.list > /etc/apt/sources.list.d/nvidia-docker.list
apt update
apt -y install nvidia-container-toolkit
systemctl restart docker

Test:

docker run --gpus all nvidia/cuda:11.5.2-base-ubuntu20.04 nvidia-smi 

If you find issues with the test, restart your host machine.

Step 2 (running Docker): run with the flags -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all --gpus all.

In my case:

docker run -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=all --gpus all -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY myimage 
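
Side note (an assumption about the setup, not part of the original answer): when bind-mounting /tmp/.X11-unix like this, the host's X server must also accept connections from the container; one common, if permissive, way to allow that is:

# on the host, allow local (non-network) clients to connect to the X server
xhost +local: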

Step 3 (if you are using a VS Code devcontainer):

Provide the following in runArgs in devcontainer.json:

"runArgs": [ "-e","DISPLAY=:1", "-e","NVIDIA_VISIBLE_DEVICES=all", "-e","NVIDIA_DRIVER_CAPABILITIES=all", "--gpus","all", "--runtime=nvidia", ] 

In the mounts section of devcontainer.json:

"mounts": [ "source=/tmp/.X11-unix,target=/tmp/.X11-unix,type=bind", ], 
  • Sometimes you may also need to mount D-Bus in the container; to do so, add the volume mount "source=/var/run/dbus,target=/var/run/dbus,type=bind". Commented Jan 21, 2023 at 13:01
