I’m having trouble getting PyTorch to recognize CUDA on my system. Here are the details:
System Information:
- OS: Ubuntu 22.04.4 LTS (x86_64) running on WSL2
- Python version: 3.7.16
- PyTorch version: 1.12.0+cu113
- GPU: NVIDIA GeForce GTX 1650 with Max-Q Design
- Nvidia driver version: 537.13
Environment Information:
python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.12.0+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.17
Python version: 3.7.16 (default, Jan 17 2023, 22:20:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-debian-bookworm-sid
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 with Max-Q Design
Nvidia driver version: 537.13
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.6
[pip3] torch==1.12.0+cu113
[pip3] torchaudio==0.12.0+cu113
[pip3] torchvision==0.13.0+cu113
[conda] numpy 1.21.6 pypi_0 pypi
[conda] torch 1.12.0+cu113 pypi_0 pypi
[conda] torchaudio 0.12.0+cu113 pypi_0 pypi
[conda] torchvision 0.13.0+cu113 pypi_0 pypi
Steps I’ve Taken:
- Verified that CUDA is installed correctly:
nvcc --version
Output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0
- Set environment variables:
export PATH=/usr/local/cuda-11.3/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.3/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
OR
echo 'export PATH=/usr/local/cuda-11.3/bin${PATH:+:${PATH}}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.3/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
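As a quick sanity check that these variables actually took effect in the current shell, something like the following can be run (a sketch using only the standard library; `check_cuda_env` is a hypothetical helper name, and the CUDA path is the one assumed in the exports above):

```python
import os
import shutil

# CUDA lib dir assumed from the export lines above.
CUDA_LIB_DIR = "/usr/local/cuda-11.3/lib64"

def check_cuda_env(lib_dir=CUDA_LIB_DIR):
    """Report whether nvcc resolves from PATH and whether the CUDA
    library directory is present on LD_LIBRARY_PATH."""
    ld_path = os.environ.get("LD_LIBRARY_PATH", "")
    return {
        "nvcc_on_path": shutil.which("nvcc") is not None,
        "lib_dir_on_ld_library_path": lib_dir in ld_path.split(":"),
    }

for key, ok in check_cuda_env().items():
    print(f"{key}: {ok}")
```

Note that PyTorch wheels bundle their own CUDA runtime, so these variables mostly matter for `nvcc` and locally built extensions; still, if either check is False in the shell where Python runs, the exports were not picked up.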
- Verified NVIDIA driver status:
nvidia-smi
Output:
Sun May 19 03:03:53 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.103                Driver Version: 537.13       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1650 ...    On  | 00000000:02:00.0 Off |                  N/A |
| N/A   50C    P0              13W / 35W  |      0MiB / 4096MiB  |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
- Checked if CUDA is available in PyTorch:
import torch
print(torch.cuda.is_available())
Output:
False
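Since `torch.cuda.is_available()` returns False without saying why, one more check worth doing is whether the CUDA driver library itself can be loaded at all, independently of PyTorch. On WSL2 this is exactly the library (`libcuda`) that the linked symlink issue breaks. A minimal sketch using only the standard library (`cuda_driver_loadable` is a hypothetical helper name):

```python
import ctypes

def cuda_driver_loadable():
    """Try to dlopen libcuda, the NVIDIA driver library.

    PyTorch's CUDA check ultimately depends on this library being
    loadable; if neither name can be opened, is_available() will be
    False regardless of how PyTorch was installed.
    """
    for name in ("libcuda.so.1", "libcuda.so"):
        try:
            ctypes.CDLL(name)
            return True
        except OSError:
            continue
    return False

print("libcuda loadable:", cuda_driver_loadable())
```

If this prints False while `nvidia-smi` works, the driver is fine but the library is not visible to the dynamic loader, which points at the WSL2 symlink/ldconfig problem rather than at the PyTorch install.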
Questions:
- Why is torch.cuda.is_available() returning False?
- What additional checks or steps should I perform to resolve this issue?
Thank you in advance for your help!
2 Answers
I solved the problem with the link below!
https://forums.developer.nvidia.com/t/wsl2-libcuda-so-and-libcuda-so-1-should-be-symlink/236301
I suggest you check this page for CUDA compatibility.
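For reference, the problem described in that thread is that on some WSL2 setups `/usr/lib/wsl/lib/libcuda.so` and `libcuda.so.1` are plain file copies instead of symlinks to the real driver library `libcuda.so.1.1`, which trips up `ldconfig`. A sketch of the repair in Python (must run with root privileges; `fix_libcuda_symlinks` is a hypothetical helper, and the directory and file names are the ones from the forum thread):

```python
import os

def fix_libcuda_symlinks(lib_dir="/usr/lib/wsl/lib"):
    """Replace libcuda.so / libcuda.so.1 with proper symlinks.

    Mirrors the fix from the NVIDIA forum thread: the real driver
    library is libcuda.so.1.1; the other two names should be symlinks
    (libcuda.so.1 -> libcuda.so.1.1, libcuda.so -> libcuda.so.1).
    """
    target = "libcuda.so.1.1"
    if not os.path.exists(os.path.join(lib_dir, target)):
        raise FileNotFoundError(f"{target} not found in {lib_dir}")
    for link, dest in (("libcuda.so.1", target),
                       ("libcuda.so", "libcuda.so.1")):
        path = os.path.join(lib_dir, link)
        if os.path.lexists(path):
            os.remove(path)  # drop the stale copy or broken link
        os.symlink(dest, path)

# Intended usage on WSL2 (as root):
#   fix_libcuda_symlinks()   # then run: sudo ldconfig
```

After recreating the links, run `sudo ldconfig` so the loader cache is rebuilt, then retest `torch.cuda.is_available()`.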
With PyTorch it's important to follow all the dependencies recommended on the installation page; if you skip even one step, torch won't start.
In your case, driver 537.13 is compatible with CUDA 12.2.
Recheck the installation steps; you may have skipped something.
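To make the compatibility check concrete: the highest CUDA version the installed driver supports is printed in the `nvidia-smi` header, and the PyTorch wheel's CUDA version (`cu113` = 11.3 here) must not exceed it. A small sketch that pulls the value out programmatically (`driver_cuda_version` is a hypothetical helper; the sample string is taken from the header in the question above):

```python
import re
import subprocess

def driver_cuda_version(smi_output=None):
    """Extract the 'CUDA Version' the installed driver supports.

    If no text is given, runs nvidia-smi and parses its header line.
    """
    if smi_output is None:
        smi_output = subprocess.run(
            ["nvidia-smi"], capture_output=True, text=True
        ).stdout
    match = re.search(r"CUDA Version:\s*([\d.]+)", smi_output)
    return match.group(1) if match else None

# Header line from the question's nvidia-smi output:
sample = "| NVIDIA-SMI 535.103  Driver Version: 537.13  CUDA Version: 12.2 |"
print(driver_cuda_version(sample))  # 12.2
```

Here the driver supports up to CUDA 12.2 and the wheel needs only 11.3, so the versions themselves are compatible; that is why the libcuda symlink issue above is the more likely culprit.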