From David's Wiki


I suggest using conda to install CUDA so that each project can pin its own version.

Note that nvidia-smi lists the maximum CUDA version supported by the GPU driver, not the installed version of CUDA.
You can have a different version of CUDA installed in each conda environment, independently of the version supported by the GPU driver.
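To see the difference, compare the `CUDA Version` field in the `nvidia-smi` header (the driver's ceiling) with `nvcc --version` (the toolkit in the active environment). The sketch below extracts the driver-side number from a hard-coded sample header line, since the parsing is the same either way (the sample text is an assumption; on a real machine you would pipe `nvidia-smi` itself):

```shell
# Sample nvidia-smi header line (hard-coded for illustration; on a real
# machine, replace the printf with `nvidia-smi | head -n 3`):
sample='| NVIDIA-SMI 545.29.06   Driver Version: 545.29.06   CUDA Version: 12.3 |'

# Extract the driver's maximum *supported* CUDA version -- this is not
# necessarily the version of any installed toolkit:
driver_max=$(printf '%s\n' "$sample" | grep -o 'CUDA Version: [0-9.]*' | awk '{print $3}')
echo "$driver_max"   # → 12.3
```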


See nvidia/cuda-toolkit and nvidia/cuda-libraries-dev

For example:

# Install the runtime only
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
# Install the runtime and the development tools
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit cuda-libraries-dev cuda-nvcc


CUDA Toolkit


See CUDA Ubuntu Installation

# Set UBUNTU_VERSION to 2004 or 2204
UBUNTU_VERSION=$(lsb_release -sr | sed -e 's/\.//g')

# Install nvidia driver
sudo apt install nvidia-driver-545

# Add NVIDIA package repositories
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/cuda-ubuntu${UBUNTU_VERSION}.pin
sudo mv cuda-ubuntu${UBUNTU_VERSION}.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/3bf863cc.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_VERSION}/x86_64/ /"
sudo apt update

# Install cuda.
sudo apt install cuda
# Reboot and check that the drivers are working with nvidia-smi
sudo reboot

# Install cudnn if needed
sudo apt install libcudnn8 libcudnn8-dev
  • For machine learning, prefer CUDA from conda or a Docker image rather than the system install, since different versions of TensorFlow and PyTorch require different CUDA versions.
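The UBUNTU_VERSION line above simply strips the dot from the Ubuntu release string so that it matches NVIDIA's repository naming (e.g. ubuntu2204). Shown here on a sample value (the hard-coded release is an assumption; on a real system `lsb_release -sr` prints it):

```shell
# lsb_release -sr prints e.g. "22.04"; the sed strips the dot so the
# result matches NVIDIA's repo directory names (ubuntu2204).
release="22.04"   # sample value standing in for $(lsb_release -sr)
UBUNTU_VERSION=$(printf '%s\n' "$release" | sed -e 's/\.//g')
echo "$UBUNTU_VERSION"   # → 2204
```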

You may need to add LD_LIBRARY_PATH=/usr/local/cuda/lib64 to your environment variables.
In PyCharm, you can also set this per run configuration.
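One way to set this is in your shell startup file (e.g. ~/.bashrc); a minimal sketch, assuming the default /usr/local/cuda install prefix:

```shell
# Append CUDA's library directory, preserving any existing value.
# (/usr/local/cuda is a symlink the installer points at the active version.)
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
echo "$LD_LIBRARY_PATH"
```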

GCC Versions

nvcc sometimes only supports older gcc/g++ versions.
To make it use those by default, create the following symlinks:

  • sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
  • sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++

Alternatively, you can pass -ccbin to nvcc to point it at a compatible compiler without creating symlinks:

-ccbin /usr/bin/gcc-6