Install the NVIDIA CUDA Toolkit
To take advantage of the powerful parallel processing capabilities offered by GPU instances equipped with NVIDIA Quadro RTX cards, you first need to install NVIDIA’s CUDA Toolkit. This guide walks you through deploying a GPU instance and installing the CUDA Toolkit.
Deploy a GPU Compute Instance using Cloud Manager, the Linode CLI, or the Linode API. It's recommended to follow the instructions in the deployment guide for whichever method you choose; a sample Linode CLI command is shown after the note below.
Be sure to select a distribution that’s compatible with the NVIDIA CUDA Toolkit. Review NVIDIA’s System Requirements to learn which distributions are supported.
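As a reference point only, the following is a minimal sketch of deploying a GPU instance with the Linode CLI. The plan ID (g1-gpu-rtx6000-1), region, label, and image shown here are example values and assumptions; confirm available GPU plans and regions first, and substitute your own values.

# List available plans and note a GPU plan ID (plans in the "gpu" class)
linode-cli linodes types

# Create the instance; type, region, image, and label below are placeholders
linode-cli linodes create \
  --type g1-gpu-rtx6000-1 \
  --region us-east \
  --image linode/ubuntu22.04 \
  --label cuda-gpu \
  --root_pass 'use-a-strong-password-here'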
Upgrade your system and install the kernel headers and development packages for your distribution. See NVIDIA’s Pre-installation Actions for additional information.
Ubuntu and Debian
sudo apt update && sudo apt upgrade
sudo apt install build-essential linux-headers-$(uname -r)
CentOS/RHEL 8, AlmaLinux 8, Rocky Linux 8, and Fedora
sudo dnf upgrade
sudo dnf install gcc kernel-devel-$(uname -r) kernel-headers-$(uname -r)
CentOS/RHEL 7
sudo yum update
sudo yum install gcc kernel-devel-$(uname -r) kernel-headers-$(uname -r)
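After the packages install, you can optionally confirm that the headers match the running kernel. This is just a quick sanity check; the paths below are the typical locations and may differ on your distribution.

# Show the running kernel version
uname -r

# You should see a matching directory: linux-headers-<version> on Ubuntu/Debian,
# or /usr/src/kernels/<version> on RHEL-based distributions
ls /usr/src/
ls /usr/src/kernels/ 2>/dev/null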
Install the NVIDIA CUDA Toolkit software that corresponds to your distribution.
Navigate to the NVIDIA CUDA Toolkit Download page. This page provides the installation instructions for the latest version of the CUDA Toolkit.
Under the Select Target Platform section, choose the following options:
- Operating System: Linux
- Architecture: x86_64
- Distribution: Select the distribution you have installed on your GPU instance (such as Ubuntu).
- Version: Select the distribution version that’s installed (such as 22.04).
- Installer Type: Select from one of the following methods:
  - rpm (local) or deb (local): Stand-alone installer that contains all dependencies. This has a much larger initial download size but is recommended for most users.
  - rpm (network) or deb (network): Smaller initial download size, as dependencies are managed separately through the package management system. Some distributions may not provide the needed dependencies, and you may receive an error when installing the CUDA package.
  - runfile (local): Installs the software outside of your package management system, which is typically not desired or recommended.
Warning: If you decide to use the runfile installation method, you may need to install gcc and other dependencies before running the installer file. You also need to disable the nouveau drivers that are installed by default on most distributions. The runfile method is not covered in this guide; instead, reference NVIDIA's runfile installation instructions for Ubuntu, Debian, CentOS, Fedora, or openSUSE.
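If you do choose the runfile method, disabling nouveau typically involves blacklisting the module and rebuilding the initramfs. The following is a minimal sketch for Ubuntu/Debian based on NVIDIA's documented procedure; follow NVIDIA's instructions for your specific distribution.

# Blacklist the nouveau driver (runfile installations only)
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
echo "options nouveau modeset=0" | sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf

# Rebuild the initramfs and reboot so the change takes effect
sudo update-initramfs -u
sudo reboot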
The Download Installer (or similar) section should appear and display a list of commands needed to download and install the CUDA Toolkit. Run each command listed there.
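For example, at the time of writing, the deb (network) commands for Ubuntu 22.04 look similar to the following. The exact repository URL, keyring version, and package name come from NVIDIA's download page and may differ, so always run the commands the page generates for your selections.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
# The package name may instead be a versioned cuda-toolkit-<version> on newer download pages
sudo apt-get -y install cuda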
Reboot the GPU instance after all the commands have completed successfully.
Run nvidia-smi to verify that the NVIDIA drivers and CUDA Toolkit are installed successfully. This command should output details about the driver version, CUDA version, and the GPU itself.
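You can also check the toolkit's compiler directly. Depending on the toolkit version and installer, the binaries typically live under /usr/local/cuda/bin (a versioned path such as /usr/local/cuda-12.4/bin is also common), so add that directory to your PATH if nvcc isn't found.

# Add the CUDA binaries to your PATH for the current session (adjust the path to match your installation)
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}

# Print the installed CUDA compiler version
nvcc --version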
You should now be ready to run your CUDA-optimized workloads. You can optionally download NVIDIA’s CUDA code samples and review CUDA’s Programming Guide to learn more about developing software to take advantage of a GPU instance.
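As a quick end-to-end check under the assumptions above (nvcc on your PATH), the following writes a minimal CUDA program, compiles it, and runs it on the GPU. It is only a sketch; NVIDIA's code samples and the Programming Guide are the authoritative starting points.

# Write a minimal CUDA program that launches a kernel and prints from GPU threads
cat <<'EOF' > hello.cu
#include <cstdio>

// A trivial kernel: each thread prints its index
__global__ void hello_kernel() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello_kernel<<<1, 4>>>();   // launch 1 block of 4 threads
    cudaDeviceSynchronize();    // wait for the kernel (and its printf output) to finish
    return 0;
}
EOF

# Compile with the CUDA compiler and run the resulting binary
nvcc hello.cu -o hello
./hello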
Optional: After you have completed the installation, you can capture a custom image of the Compute Instance and use it the next time you need to deploy a GPU instance.
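If you prefer the CLI for this step as well, capturing an image is a minimal sketch along these lines. The Linode ID, disk ID, and label are placeholders you need to replace, and image size limits apply to the captured disk.

# Find the ID of the instance's primary disk (replace $LINODE_ID with your instance's ID)
linode-cli linodes disks-list $LINODE_ID

# Capture the disk as a reusable custom image (replace $DISK_ID and choose your own label)
linode-cli images create --disk_id $DISK_ID --label cuda-toolkit-base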