Configure CUDA, cuDNN and TensorRT environments under Ubuntu 22.04

Some time ago I configured the CUDA, cuDNN and TensorRT environments under Ubuntu 18.04 and later installed ROS 2. The ROS 2 distribution for Ubuntu 18.04 is Dashing, which is no longer maintained, and in practice many Dashing commands are not supported. So I simply upgraded the system to Ubuntu 22.04 and had to configure the environment again.

At first I installed CUDA 11.3 and cuDNN 8.2.1 (although these versions do not officially target this Ubuntu release, the packages for 18.04 can still be installed), but there were problems installing TensorRT, mainly because the Ubuntu version is too new. So I installed other versions of CUDA and cuDNN instead, as shown in the table below.

CUDA:     cuda_11.7.0_515.43.04_linux.run
cuDNN:    cudnn-linux-x86_64-8.4.1.50_cuda11.6-archive.tar.xz
TensorRT: TensorRT-8.4.3.1.Linux.x86_64-gnu.cuda-11.6.cudnn8.4.tar.gz

Contents

Install the driver

Install CUDA

Install cuDNN

Install TensorRT

Install PyTorch

Some errors during installation

References

Install the driver

Check the graphics card model first, then install the recommended driver:

ubuntu-drivers devices
sudo ubuntu-drivers autoinstall

Restart after installation.  
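
After the reboot, nvidia-smi can be used to confirm that the driver is loaded; it lists the GPU and the installed driver version:

nvidia-smi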

Install CUDA

Download the offline CUDA installation package from the official website; currently only CUDA 11.7 supports Ubuntu 22.04. The three offline packages in the table above were downloaded on Windows and then installed under Ubuntu. Enter the directory containing the CUDA installation package and install CUDA:

sudo sh cuda_11.7.0_515.43.04_linux.run

Follow the prompts and enter accept first. Since the NVIDIA driver is already installed, deselect the driver component when asked whether to install it. Select Install to start the installation and restart after it completes. Then configure the CUDA environment variables: open ~/.bashrc and add the following at the end of the file:

export PATH=/usr/local/cuda-11.7/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64:$LD_LIBRARY_PATH
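
Reload the shell configuration and check that the CUDA compiler is on the PATH; nvcc should report release 11.7:

source ~/.bashrc
nvcc --version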

Install cuDNN

Download the cuDNN archive for CUDA 11.x (which covers CUDA 11.7) from the official website. Extract it:

tar -xvf cudnn-linux-x86_64-8.4.1.50_cuda11.6-archive.tar.xz

After extraction, copy the extracted files into the CUDA installation directory (the .tar.xz archive unpacks into a directory named after the archive rather than cuda/):

sudo cp cudnn-linux-x86_64-8.4.1.50_cuda11.6-archive/lib/* /usr/local/cuda-11.7/lib64/
sudo cp cudnn-linux-x86_64-8.4.1.50_cuda11.6-archive/include/* /usr/local/cuda-11.7/include/
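
If the copied headers and libraries end up without read permission for all users, they can be made world-readable, as NVIDIA's tar-file install guide does:

sudo chmod a+r /usr/local/cuda-11.7/include/cudnn*.h /usr/local/cuda-11.7/lib64/libcudnn*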

Then, use the following command to view cuDNN version information:

cat /usr/local/cuda-11.7/include/cudnn_version.h | grep CUDNN_MAJOR -A 2

Install TensorRT

Download the TensorRT archive corresponding to CUDA 11.7 from the official website, then extract it:

tar -xvf TensorRT-8.4.3.1.Linux.x86_64-gnu.cuda-11.6.cudnn8.4.tar.gz

Configure the TensorRT environment variable by adding the following at the end of ~/.bashrc, then reload it:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/bao/Softwares/TensorRT-8.4.3.1/lib
source ~/.bashrc

Test whether the installation is successful

cd TensorRT-8.4.3.1/samples/sampleMNIST
make -j4
cd ../../
./bin/sample_mnist

Successful execution means successful installation.

Enter the TensorRT-8.4.3.1/python directory, which contains wheels for multiple Python versions. My Python is 3.7.10, so I install the cp37 wheel:

pip install --force-reinstall tensorrt-8.4.3.1-cp37-none-linux_x86_64.whl
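
A quick way to check that the Python binding works is to import it and print its version:

python -c "import tensorrt; print(tensorrt.__version__)"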

Install PyTorch

At the time of writing, PyTorch supports CUDA 11.6, so cudatoolkit 11.6 is used here. When installing, check that the GPU build of PyTorch is the one that gets installed; the offline PyTorch packages can also be downloaded.

conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge

Check whether the GPU version is installed:

python
import torch
print(torch.cuda.is_available())
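
If this prints True, the CUDA version PyTorch was built against and the detected GPU can also be inspected, for example:

python -c "import torch; print(torch.version.cuda); print(torch.cuda.get_device_name(0))"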

Some errors during installation

1. libnvinfer.so.8: cannot open shared object file: No such file or directory

Configure /etc/ld.so.conf to add the path of the TensorRT lib directory:

sudo gedit /etc/ld.so.conf

Add a line at the end of the file:

/home/bao/Softwares/TensorRT-8.4.3.1/lib

Execute the following command so that the system can find the new shared libraries.

sudo ldconfig
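
To confirm the library is now visible to the dynamic linker, it can be looked up in the ldconfig cache:

ldconfig -p | grep libnvinfer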

2. libnvinfer_builder_resource.so.8.4.3 cannot be found

sudo cp ./TensorRT-8.4.3.1/lib/libnvinfer_builder_resource.so.8.4.3 /usr/lib

References

1. Install drivers, CUDA, cudnn and TensorRT under Ubuntu 22.04

2. cuda11.7+python3.7+pytorch GPU
