This project is a utility tool that helps with PyTorch installation on Linux 64-bit (linux-64) machines.
`pytorch_compute_capabilities.py` checks the compute capabilities of each PyTorch package in the PyTorch conda channel by running `cuobjdump` from the CUDA Toolkit on the included `*.so` files.
For the results, see `table.csv`.
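The core idea can be sketched roughly as follows; this is a simplified illustration rather than the actual script, it assumes `cuobjdump` from the CUDA Toolkit is on your `PATH`, and the library path is only a hypothetical example.

```python
# Simplified sketch of the cuobjdump-based check (not the actual script).
import re
import subprocess


def compute_capabilities(so_path):
    """Return the compute capabilities (e.g. {'sm_70', 'sm_80'}) embedded in a shared library."""
    # cuobjdump prints one "arch = sm_XX" line per embedded cubin.
    result = subprocess.run(
        ["cuobjdump", so_path], capture_output=True, text=True, check=False
    )
    return set(re.findall(r"arch = (sm_\d+)", result.stdout))


if __name__ == "__main__":
    # Hypothetical path to a library inside an extracted PyTorch conda package.
    print(sorted(compute_capabilities("torch/lib/libtorch_cuda.so")))
```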
Run `streamlit run app.py` in your terminal, or visit this web page.
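As a rough idea of what such a page does, a minimal Streamlit sketch over `table.csv` might look like the following; the real `app.py` may be structured differently, and the column names (`pytorch`, `python`) are assumptions rather than the actual headers of `table.csv`.

```python
# Minimal Streamlit sketch of a selection page (the real app.py may differ).
# The column names "pytorch" and "python" are assumed, not taken from table.csv.
import pandas as pd
import streamlit as st

df = pd.read_csv("table.csv", dtype=str)

pytorch = st.multiselect("PyTorch version", sorted(df["pytorch"].unique()))
python = st.multiselect("Python version", sorted(df["python"].unique()))

filtered = df
if pytorch:
    filtered = filtered[filtered["pytorch"].isin(pytorch)]
if python:
    filtered = filtered[filtered["python"].isin(python)]

# Show the remaining compatible combinations.
st.dataframe(filtered)
```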
Just select the PyTorch, Python, or CUDA version (or the compute capability) you have, and the page will show the available combinations.
For example, if you want to install PyTorch v1.7.1 through conda, the Python version of your conda environment is 3.8, and your GPU is a Tesla V100, you can choose the following options to see the environment constraints.
If there is no output, the requested combination probably cannot be satisfied.
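The same lookup can also be done offline by filtering `table.csv` with pandas, as in the sketch below; the column names are assumptions about the CSV, while a Tesla V100 corresponds to compute capability 7.0 (`sm_70`).

```python
# Query table.csv directly for the example above (PyTorch 1.7.1, Python 3.8, Tesla V100).
# Column names are assumptions about table.csv; a Tesla V100 is compute capability 7.0 (sm_70).
import pandas as pd

df = pd.read_csv("table.csv", dtype=str)

matches = df[
    (df["pytorch"] == "1.7.1")
    & (df["python"] == "3.8")
    & (df["compute_capabilities"].str.contains("sm_70", na=False))
]

# An empty result means the requested combination is probably not available.
print(matches if not matches.empty else "No compatible package found.")
```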
If you cannot find an older cuDNN version in the conda channels (such as `-c nvidia`, `-c conda-forge`, `-c anaconda`, or `-c main`), you can find archived packages here.
This repo is derived from https://github.com/moi90/pytorch_compute_capabilities by Simon-Martin Schröder. I reuse `pytorch_compute_capabilities.py` to collect the version constraints.
Here are some great resources that provide insight into CUDA compatibility:
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#major-components
https://jia.je/software/2021/12/26/nvidia-cuda/
https://jia.je/software/2022/07/06/install-nvidia-cuda/
https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/