Deploying YOLOv8 with DeepStream 6.3
Background:
I connect from machine A via Sunlogin to a jump host (machine B), then use Xshell on the jump host to control the server.
The YOLOv8 environment is already set up, and I have a trained model ready.
I. Set up the DeepStream environment (dGPU)
This part follows the Quickstart Guide — DeepStream 6.3 Release documentation throughout (if you pick a different DeepStream version, find the matching guide).
Start from the dGPU model / platform / OS compatibility table: look up your GPU model, choose the DeepStream version it supports, then upgrade or install the matching software.
Reference: http://t.csdn.cn/Y1j86. Installation order: Ubuntu -> driver -> CUDA -> TensorRT -> DeepStream.
1. Check the GPU model and details
Run:
nvidia-smi
It shows two GPUs, both NVIDIA A30. "Driver Version" is the display driver version (the "Display driver" column in the compatibility table); "CUDA Version" corresponds to the CUDA release.
Conclusion: the A30 is supported by DS 6.1 through 6.3; I chose DS 6.3.
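To print just those fields, nvidia-smi's query mode also works (a minimal sketch using standard nvidia-smi options):
nvidia-smi --query-gpu=index,name,driver_version --format=csv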
2. Check installed package versions (most of these can also be found with conda list in the current environment)
The notes below are just my own records; you can skip straight to 3. Install the dependencies.
(1) Ubuntu
Run: cat /etc/lsb-release
The Ubuntu version is 20.04, which meets the requirement (both DS 6.1.1 and DS 6.3 target Ubuntu 20.04).
(2) GStreamer
Check the version: gst-inspect-1.0 --version. Mine is higher than the required 1.16.2; what to do? (the documentation asks for this exact version)
Find the install location: whereis gst-launch-1.0
(3) GCC version
Run gcc --version; it reports 9.4.0, which meets the requirement.
(4) CUDA version
Run:
nvcc -V
It reports that the command is not found, meaning no system-wide CUDA is installed.
Run conda list to list the packages in the current virtual environment; the remaining software versions can all be found there.
Or check through PyTorch:
python3
import torch
print(torch.cuda.is_available())        # is CUDA usable
print(torch.version.cuda)               # CUDA version
print(torch.backends.cudnn.version())   # cuDNN version
My CUDA version is 11.7 and the cuDNN version is 8500 (i.e. 8.5.0).
DS 6.1.1 calls for CUDA 11.7.1 and cuDNN 8.4.1.50. (Note: DS 6.3, the version installed here, is built against CUDA 12.1, which is why the compile step later uses CUDA_VER=12.1.)
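For convenience, here is a sketch that bundles all of the checks from this step (it assumes you are inside the conda env that has PyTorch installed):
cat /etc/lsb-release              # Ubuntu release
gst-inspect-1.0 --version         # GStreamer version
gcc --version | head -n1          # GCC version
nvcc -V 2>/dev/null || echo "nvcc not found (no system-wide CUDA)"
python3 - <<'EOF'
import torch
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)
print("cuDNN version:", torch.backends.cudnn.version())
EOF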
3. Install the dependencies
cd into the install directory of the environment you need.
For example, my YOLOv8 anaconda3 env directory is /home/yyt/anaconda3/envs/yolov8
Run the install commands there, for example: sudo apt install libssl1.1
Packages to install (a combined one-line install command follows the list):
libssl1.1
libgstreamer1.0-0
gstreamer1.0-tools
gstreamer1.0-plugins-good
gstreamer1.0-plugins-bad
gstreamer1.0-plugins-ugly
gstreamer1.0-libav
libgstreamer-plugins-base1.0-dev
libgstrtspserver-1.0-0
libjansson4
libyaml-cpp-dev
gcc
make
git
python3
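All of the above can be installed with one command (same package list):
sudo apt install -y libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools \
    gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 \
    libjansson4 libyaml-cpp-dev gcc make git python3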
I had already installed these, so apt just reported they were up to date.
While I was at it, I also added the env to my PATH (optional):
vim ~/.bashrc
Add the line: export PATH="/home/yyt/anaconda3/envs/yolov8/bin:$PATH"
source ~/.bashrc
4. Install the software
When I got to this step my supervisor took over (he was worried I might break the server), so the software installation below was done by him.
In general, follow the Quickstart Guide — DeepStream 6.3 Release documentation,
plus this tutorial:
deepstream6.1-YOLOv5部署_deepstream yolov5_爱吃油淋鸡的莫何的博客-CSDN博客
After the installation completes, run:
deepstream-app --version-all
If it prints the version information for DeepStream and its components, the setup succeeded.
Add the DeepStream 6.3 libraries to the system library path:
sudo vi /etc/ld.so.conf
Append this path at the end of the file: /opt/nvidia/deepstream/deepstream-6.3/lib/
Save the file, then run:
sudo ldconfig
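The same edit can also be done non-interactively, and you can verify that the linker now sees the DeepStream libraries (the grep pattern is only illustrative):
echo "/opt/nvidia/deepstream/deepstream-6.3/lib/" | sudo tee -a /etc/ld.so.conf
sudo ldconfig
ldconfig -p | grep nvds    # DeepStream libraries (libnvds...) should be listed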
Everything went smoothly up to this point.
II. Install and use DeepStream-Yolo
1. Download the project files
First download the DeepStream-Yolo project: https://github.com/marcoslucianops/DeepStream-Yolo
My download path is /home/yyt/DeepStream-Yolo-master (the exact project path does not matter much).
2. Generate the .onnx file
Copy the trained best.pt model out of your YOLOv8 project; you can rename it, and I named mine yolov8m_best.pt.
Put the model in the root directory of the YOLOv8 project (the top level of the repo, not a subfolder).
Under /DeepStream-Yolo-master/utils there is an export_yoloV8.py script; copy it into the YOLOv8 root as well.
You can do the copy with a command like this, adjusting the paths:
cp (your DeepStream-Yolo path)/DeepStream-Yolo-master/utils/export_yoloV8.py (yolov8 root)/ultralytics/
cd into the YOLOv8 root and run the export (substitute your own weight file; mine is yolov8m_best.pt):
python3 export_yoloV8.py -w yolov8m_best.pt --simplify
This generates labels.txt and a .onnx file (yolov8m_best.onnx in my case).
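Optionally, sanity-check the exported model before moving on (a sketch; it assumes the onnx Python package is installed):
python3 - <<'EOF'
import onnx
model = onnx.load("yolov8m_best.onnx")        # substitute your own filename
onnx.checker.check_model(model)               # raises an error if the graph is malformed
print([o.name for o in model.graph.output])   # expect outputs like boxes / scores / classes
EOF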
3. Compile
cd into the .../DeepStream-Yolo-master/ directory and run the build, adjusting the CUDA version to yours:
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo
# change 12.1 to match your CUDA version
After a successful build, the nvdsinfer_custom_impl_Yolo directory contains the plugin library libnvdsinfer_custom_impl_Yolo.so.
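If you are unsure of the exact version string, it can be read from nvcc before building (a sketch; it assumes a system-wide nvcc, which my machine did not have, so I typed the version by hand):
CUDA_VER=$(nvcc --version | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "Detected CUDA ${CUDA_VER}"
CUDA_VER=${CUDA_VER} make -C nvdsinfer_custom_impl_Yolo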
4. Edit the config files
Edit /DeepStream-Yolo-master/config_infer_primary_yoloV8.txt; the entries to change are:
onnx-file=/path/to/your/yolov8m_best.onnx
model-engine-file=yolov8m_best.onnx_b1_gpu0_fp32.engine
num-detected-classes=9 (your number of classes)
labelfile-path=/path/to/labels.txt (generated earlier in the YOLOv8 root)
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
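Put together, the relevant part of the [property] section looks roughly like this (paths are placeholders; substitute your own):
[property]
onnx-file=/your/path/yolov8m_best.onnx
model-engine-file=yolov8m_best.onnx_b1_gpu0_fp32.engine
labelfile-path=/your/path/labels.txt
num-detected-classes=9
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so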
Then edit /DeepStream-Yolo-master/deepstream_app_config_yoloV8.txt; the entry to change is:
config-file=config_infer_primary_yoloV8.txt
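In the app config this key lives in the [primary-gie] section, which looks roughly like this (a sketch of the relevant section only):
[primary-gie]
enable=1
config-file=config_infer_primary_yoloV8.txt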
After cd ../DeepStream-Yolo-master, run the app, passing the app config you just edited (adjust the name if yours differs):
deepstream-app -c deepstream_app_config_yoloV8.txt
When a file with the .engine suffix appears in the DeepStream-Yolo-master folder, the deployment worked.
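A quick check from the project directory:
ls -lh *.engine    # the serialized TensorRT engine should be listed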
Errors encountered
1.Config file path: /home/yyt/DeepStream-Yolo-master/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <parse_config_file:642>: parse_config_file failed
** ERROR: <main:687>: Failed to parse config file 'deepstream_app_config_yolov8_drone.txt'
Quitting
App run failed
(base) root@wsjdy-08:/home/yyt/DeepStream-Yolo-master# deepstream-app -c deepstream_app_config.txt
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/yyt/DeepStream-Yolo-master/yolov8m.onnx_b1_gpu0
0:00:03.788108872 188438 0x5642611efe00 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from ngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/yyt/DeepStream-Yolo-master/yolov8m.onnx_b1_gpu0_fp32.e
0:00:03.919250024 188438 0x5642611efe00 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from endContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/yyt/DeepStream-Yolo-master/yolov8m.onnrebuild
0:00:03.919329835 188438 0x5642611efe00 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to ca
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
Building the TensorRT Engine
Building complete
0:05:03.550611374 188438 0x5642611efe00 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /home/yyt/DeepStream-Yolo-master/model_b16_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BAT return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1
0:05:03.566493516 188438 0x5642611efe00 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from Params() <nvdsinfer_context_impl.cpp:1920> [UID = 1]: Backend has maxBatchSize 1 whereas 16 has been requested
0:05:03.566505451 188438 0x5642611efe00 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsdsinfer_context_impl.cpp:2052> [UID = 1]: deserialized backend context :/home/yyt/DeepStream-Yolo-master/model_b16_gpu0_fp32.engine failed to match confi
0:05:03.793129765 188438 0x5642611efe00 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsontext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:05:03.793162562 188438 0x5642611efe00 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:05:03.793290824 188438 0x5642611efe00 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContex
0:05:03.793297155 188438 0x5642611efe00 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<primary_gie> error: Config file path: /home/yytfer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/yyt/DeepStream-Yolo-master/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
Fix: the model-engine-file in config_infer_primary_yoloV8.txt was still yolov8m_best_b1_gpu0_fp32.engine instead of yolov8m_best.onnx_b1_gpu0_fp32.engine; the .onnx part has to be added. (Because the configured engine file did not exist, deepstream-app rebuilt an engine under its default name with batch size 16, which then failed to match the batch-1 ONNX model, as the log above shows.)
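In other words, the engine name in the config must keep the .onnx part that DeepStream-Yolo embeds in the generated file name:
# wrong (file does not exist, so deepstream-app triggers a rebuild):
model-engine-file=yolov8m_best_b1_gpu0_fp32.engine
# correct:
model-engine-file=yolov8m_best.onnx_b1_gpu0_fp32.engine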
2. ** ERROR: <main:733>: Could not open X Display
This error appears at the end of an otherwise complete run.
It happens because SSH clients like Xshell cannot open graphical applications; it does not affect the generation of the .engine file.
Solve it only if you actually need the on-screen output.
Fix: see the post ** (java:10104): WARNING **: Could not open X display (MobaXterm无法打开smartgit).
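Alternatively, if you do not need an on-screen window over SSH, you can switch the output sink in the app config to a fakesink so that no X display is required (a sketch; type=1 is FakeSink in the standard deepstream-app sink section):
[sink0]
enable=1
type=1   # 1=FakeSink, 2=EglSink (needs a display), 3=save to file
sync=0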