I am trying to use Docker (Docker Desktop for Windows 10 Pro) with the WSL 2 backend (Windows Subsystem for Linux (WSL), Ubuntu 20.04.4 LTS).
That part seems to be working fine, except I would like to pass my GPU (an Nvidia RTX A5000) through to my Docker container.
Before I even get that far, I am still trying to set things up. I found a very good tutorial aimed at Ubuntu 18.04, but found all the steps are the same for 20.04, just with some version numbers bumped.
At the end, I can see that my CUDA versions do not match.
The real issue is when I try to run the test command as shown on the docker website:
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
I get this error:
--> docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380:
starting container process caused: process_linux.go:545: container init caused: Running
hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli:
requirement error: unsatisfied condition: cuda>=11.6, please update your driver to a
newer version, or use an earlier cuda container: unknown.
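From the error text, the container requires CUDA >= 11.6, while my host driver apparently supports something lower. Below is a minimal sketch of that version check, assuming `sort -V` is available and that the host's maximum supported CUDA version has been read off the `nvidia-smi` banner (the `host_cuda` value here is a placeholder, not my actual output):

```shell
# Compare the driver's supported CUDA version against the container's
# requirement ("cuda>=11.6" from the error message) using version sort.
host_cuda="11.3"   # substitute the "CUDA Version" that nvidia-smi reports
required="11.6"    # from the container's unsatisfied condition

# If the required version sorts first, the driver is too old for this image.
if [ "$(printf '%s\n' "$required" "$host_cuda" | sort -V | head -n1)" = "$required" ]; then
  echo "driver OK for this container"
else
  echo "driver too old: update the driver or use an older CUDA image"
fi
```

With a host value below 11.6 this prints the "driver too old" branch, which matches the error the daemon returns.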
… and I just don’t know what to do, or how I can fix this.
Can someone explain how to get the GPU to pass through to a Docker container successfully?
2 Answers
The comment from @RobertCrovella resolved this:
Downloading the most current Nvidia driver:
Now my driver supports CUDA 11.6, and the test from the Docker documentation works:
Thank you for the quick response!
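For anyone verifying the same fix: the maximum CUDA version the installed driver supports appears in the `nvidia-smi` banner. A parsing sketch over a sample banner line (the version numbers here are illustrative, not actual output):

```shell
# Sample nvidia-smi banner line (illustrative values only).
banner="| NVIDIA-SMI 510.06    Driver Version: 510.06    CUDA Version: 11.6 |"

# Extract the "CUDA Version" field; in practice, pipe real nvidia-smi output
# through the same sed expression.
cuda=$(echo "$banner" | sed -n 's/.*CUDA Version: \([0-9.]*\).*/\1/p')
echo "driver supports CUDA ${cuda}"
```

If the extracted value is at least 11.6, the `cuda-sample:nbody` test image from the question should run.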
I had the same issue on Ubuntu when I tried to run the container:
In my case, it occurred when I tried to launch a Docker image whose CUDA version was higher than the one installed on my host.
When I checked the CUDA version installed on my host, I found it was 11.3.
So when I run an image with the same CUDA version (11.3), it works well:
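A sketch of picking a matching image, assuming the host driver tops out at CUDA 11.3; the `nvidia/cuda` tag below follows NVIDIA's published tag naming, but tags change over time, so check Docker Hub for what is currently available:

```shell
# Build the docker command for an image whose CUDA version matches the host.
host_cuda="11.3"   # placeholder: the value nvidia-smi reports on the host
image="nvidia/cuda:${host_cuda}.1-base-ubuntu20.04"

# Dry run: print the command to execute. Running it requires a working
# Docker + NVIDIA Container Toolkit setup, so it is only echoed here.
echo "docker run --rm --gpus all ${image} nvidia-smi"
```

Running the printed command should show the GPU from inside the container instead of the "unsatisfied condition" error.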