1. Version Combination Overview
Ubuntu: 22.04 LTS
Python: 3.10
Clang: 16
Nvidia Driver: any driver meeting the minimum version required by the CUDA Toolkit release being installed is fine (I simply installed the latest version)
CUDA Toolkit: 11.8
cuDNN: 8.6.0.163
TensorRT: 8.5.3.1
TensorFlow: 2.13
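The whole stack hinges on these versions matching; a minimal sketch of a preflight check for the one component Python can verify directly, the interpreter itself (the `EXPECTED` table just restates this section's values and is not a full compatibility matrix):

```python
import sys

# Expected versions from the combination above -- this mirrors the
# section's own table, it is not an exhaustive compatibility matrix.
EXPECTED = {
    "python": (3, 10),
    "cuda": "11.8",
    "cudnn": "8.6.0",
    "tensorrt": "8.5.3",
    "tensorflow": "2.13",
}

def python_matches(expected=EXPECTED["python"]):
    """True if the running interpreter matches the expected major.minor."""
    return sys.version_info[:2] == expected

print("Python", sys.version.split()[0],
      "matches" if python_matches() else "does NOT match",
      "the expected %d.%d" % EXPECTED["python"])
```

The other components are checked later in this post, after each one is installed.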
2. Download
CUDA Toolkit 11.8 Downloads
# wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
cuDNN (Nvidia account required): https://developer.nvidia.com/rdp/cudnn-archive#a-collapse860-118
TensorRT (Nvidia account required): https://developer.nvidia.com/nvidia-tensorrt-8x-download
3. Install the Nvidia Driver
Disable nouveau
# sudo vi /etc/modprobe.d/blacklist-nvidia-nouveau.conf
blacklist nouveau
options nouveau modeset=0
Save the file.
# sudo update-initramfs -u
Reboot.
Install the Nvidia driver from the graphics-drivers PPA
# sudo apt-get install linux-headers-$(uname -r) dkms
# sudo dpkg --add-architecture i386
# sudo add-apt-repository ppa:graphics-drivers/ppa
# sudo apt update
# sudo apt install libc6:i386
# sudo apt install build-essential libglvnd-dev pkg-config
# sudo ubuntu-drivers devices
# sudo ubuntu-drivers autoinstall (or: sudo apt install nvidia-driver-535)
Reboot.
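After the reboot, `nvidia-smi` should see the GPU. A small sketch that reports the driver version and degrades gracefully when the driver is not installed yet (it assumes `nvidia-smi` supports the `--query-gpu=driver_version` flag, which recent drivers do):

```python
import shutil
import subprocess

def driver_version():
    """Return the installed Nvidia driver version string, or None when
    nvidia-smi is not on PATH (e.g. the driver is not installed yet)."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True)
    return out.stdout.strip() or None

print("Nvidia driver:", driver_version())
```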
4-1. Clang
# wget https://apt.llvm.org/llvm.sh
# chmod +x llvm.sh
# sudo ./llvm.sh 16 all
# sudo vi /etc/profile.d/llvm.sh
export LLVM_HOME=/lib/llvm-16
export PATH=$PATH:$LLVM_HOME/bin
Save the file.
# source /etc/profile.d/llvm.sh
4-2. Python
# sudo apt update
# sudo apt install python3-dev python3-pip python3-venv
# sudo apt-get update
# sudo apt-get install idle3
# sudo apt install python3-testresources
# sudo apt install libxcb-xinerama0
# python3 -m pip install --user --upgrade pip setuptools wheel packaging requests opt_einsum
# python3 -m pip install --user --upgrade keras_preprocessing --no-deps
# python3 -m pip install --user --upgrade numpy scipy matplotlib ipython h5py jupyter spyder pandas sympy nose
# python3 -m pip install --user --upgrade scikit-learn ipyparallel pydot pydotplus pydot_ng graphviz
# sudo apt install graphviz
# echo 'export PATH=$PATH:'$(python3 -c 'import site; print(site.USER_BASE)')'/bin' >> ~/.bash_profile
# source ~/.bash_profile
# python3 -m pip freeze --user > ~/requirements.txt
(To reinstall everything above in one step later, run # python3 -m pip install --user -r requirements.txt)
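All of the pip installs above use `--user`, so console scripts land under the per-user base directory; the `PATH` line appended to ~/.bash_profile is derived from it. A tiny sketch of what that export resolves to:

```python
import os
import site

# `pip install --user` puts console scripts under <USER_BASE>/bin on
# Linux -- this is the same path the PATH export above appends.
user_bin = os.path.join(site.USER_BASE, "bin")
print("user base:   ", site.USER_BASE)
print("user scripts:", user_bin)
```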
5. Install CUDA Toolkit
# sudo apt-get install gcc g++ freeglut3 freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev
# sudo apt-get install libglu1-mesa libglu1-mesa-dev libfreeimage3 libfreeimage-dev libxcb-xinput-dev
# cd /home/userid/Downloads/
(Change to the directory where the CUDA Toolkit runfile was downloaded.)
# chmod 744 cuda_11.8.0_520.61.05_linux.run
# sudo ./cuda_11.8.0_520.61.05_linux.run
(Adjust the component selection so that the driver bundled with the CUDA Toolkit is NOT installed. See below.)
Select Continue and press Enter.
Type accept and press Enter.
Deselect the Driver entry, then highlight Options and press Enter.
Select Driver Options and press Enter.
Check every "Do not install..." entry, select Done, and press Enter.
Select Done and press Enter.
Select Install and press Enter.
A summary is printed at the end; the full install log is available at /var/log/cuda-installer.log.
(Note that /usr/local/cuda-<version>/ is normally symlinked to /usr/local/cuda/.)
# sudo vi /etc/profile.d/cuda.sh
export CUDADIR=/usr/local/cuda
export PATH=$PATH:$CUDADIR/bin
Save the file.
# source /etc/profile.d/cuda.sh
# sudo vi /etc/ld.so.conf.d/cuda.conf
/usr/local/cuda/lib64
/usr/local/cuda/extras/CUPTI/lib64
Save the file.
# sudo ldconfig
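With the PATH and linker config in place, `nvcc --version` should report release 11.8. A minimal parser for that output; the sample string below only illustrates nvcc's usual format and was not captured from this machine:

```python
import re

# Illustrative sample of `nvcc --version` output (format as commonly
# printed by CUDA 11.x; not captured from this machine).
SAMPLE = """nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Cuda compilation tools, release 11.8, V11.8.89"""

def cuda_release(text):
    """Extract the 'release X.Y' version from nvcc --version output."""
    m = re.search(r"release (\d+\.\d+)", text)
    return m.group(1) if m else None

print(cuda_release(SAMPLE))  # → 11.8
```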
6. Install cuDNN
# sudo apt-get install zlib1g
# tar -xvf cudnn-linux-x86_64-8.6.0.163_cuda11-archive.tar.xz
# sudo cp cudnn-linux-x86_64-8.6.0.163_cuda11-archive/include/cudnn*.h /usr/local/cuda/include
# sudo cp -P cudnn-linux-x86_64-8.6.0.163_cuda11-archive/lib/libcudnn* /usr/local/cuda/lib64
# sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
# sudo ldconfig
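cuDNN 8.x records its version as `#define`s in cudnn_version.h, which the copy step above placed under /usr/local/cuda/include. A sketch that parses those defines; the inlined header excerpt is a sample in that style, not read from disk:

```python
import re

# Excerpt in the style of /usr/local/cuda/include/cudnn_version.h
# (cuDNN 8.x keeps its version in these #defines).
SAMPLE_HEADER = """
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 0
"""

def cudnn_version(header_text):
    """Return 'major.minor.patch' parsed from cudnn_version.h text."""
    parts = []
    for key in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(r"#define\s+%s\s+(\d+)" % key, header_text)
        if not m:
            return None
        parts.append(m.group(1))
    return ".".join(parts)

print(cudnn_version(SAMPLE_HEADER))  # → 8.6.0
```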
7. Install TensorRT
# tar xzvf TensorRT-8.5.3.1.Linux.x86_64-gnu.cuda-11.8.cudnn8.6.tar.gz
# cd /usr/local
# sudo cp -rP /home/userid/Downloads/TensorRT-8.5.3.1/ .
# sudo vi /etc/ld.so.conf.d/TensorRT.conf
/usr/local/TensorRT-8.5.3.1/lib
Save the file.
# sudo ldconfig
# cd /usr/local/TensorRT-8.5.3.1/python
# python3 -m pip install --user tensorrt-8.5.3.1-cp310-none-linux_x86_64.whl
# cd ../uff
# python3 -m pip install --user uff-0.6.9-py2.py3-none-any.whl
# which convert-to-uff
# cd ../graphsurgeon
# python3 -m pip install --user graphsurgeon-0.4.6-py2.py3-none-any.whl
# cd ../onnx_graphsurgeon
# python3 -m pip install --user onnx_graphsurgeon-0.3.12-py2.py3-none-any.whl
# python3 -m pip install --user --upgrade pyyaml requests tqdm
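A quick way to confirm all four wheels installed above are visible to Python, without actually importing them (useful because a broken native library would otherwise crash the check):

```python
import importlib.util

# The Python packages installed from the TensorRT tarball above.
WHEELS = ["tensorrt", "uff", "graphsurgeon", "onnx_graphsurgeon"]

def installed(modules=WHEELS):
    """Map each module name to True/False depending on whether the
    import machinery can find it (nothing is actually imported)."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

for name, ok in installed().items():
    print("%-18s %s" % (name, "found" if ok else "MISSING"))
```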
8. Install TensorFlow
# python3 -m pip install --user --upgrade tensorflow==2.13
# python3 -m pip install --user --upgrade tensorflow_datasets
# python3 -m pip install --user --upgrade tensorflow-text==2.13
# python3 -m pip install --user --upgrade tensorboard==2.13
# python3 -m pip install --user --upgrade onnx
# python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
# python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
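The two verification one-liners above can be rolled into a single guarded script that degrades gracefully when TensorFlow or the GPU is missing, which makes it safe to run at any point during the install:

```python
import importlib.util

def gpu_report():
    """Return a short status string describing TensorFlow's GPU view.
    Guarded so it also runs on machines without TensorFlow."""
    if importlib.util.find_spec("tensorflow") is None:
        return "TensorFlow is not installed"
    import tensorflow as tf  # deferred: only imported when present
    gpus = tf.config.list_physical_devices("GPU")
    if not gpus:
        return "TensorFlow %s: no GPU visible" % tf.__version__
    return "TensorFlow %s: %d GPU(s) %s" % (
        tf.__version__, len(gpus), [g.name for g in gpus])

print(gpu_report())
```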
※ To install PyTorch alongside the stack above, see below:
# python3 -m pip install --user --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# python3 -c "import torch; print(torch.cuda.is_available())"
# python3 -c "import torch; print(torch.cuda.get_device_name())"
# python3 -c "import torch; x = torch.rand(5, 3); print(x)"
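In training scripts the checks above usually collapse into the standard device-selection idiom; a guarded sketch that also works on machines where torch is not installed yet:

```python
import importlib.util

def pick_device():
    """Return the device string a training script would use: 'cuda'
    when PyTorch can see the GPU stack above, otherwise 'cpu'
    (with a note when torch itself is missing)."""
    if importlib.util.find_spec("torch") is None:
        return "cpu (torch not installed)"
    import torch  # deferred import: only when torch is present
    return "cuda" if torch.cuda.is_available() else "cpu"

print("selected device:", pick_device())
```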
https://pytorch.org/get-started/locally/
※ Library version references:
TensorFlow, build from source (tested GPU configurations): https://www.tensorflow.org/install/source?hl=ko#gpu
CUDA Toolkit Release Notes: docs.nvidia.com
cuDNN 8.6.0 Support Matrix: https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-860/support-matrix/index.html
TensorRT 8.5.3 Support Matrix: https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-853/support-matrix/index.html
'Programming > Python' 카테고리의 다른 글
[Python] random 모듈 사용 예제 (0) | 2023.11.08 |
---|---|
[Python] calendar, datetime 모듈 사용 예제 (0) | 2023.09.07 |
[Python] Jupyter Notebook Remote Configuration (0) | 2023.01.31 |
[Python] TensorFlow(2.11) Installation (Ubuntu, Nvidia GPU, + PyTorch) (1) | 2022.11.10 |
[Python] TensorRT Installation(tar file) (0) | 2022.10.31 |