A Detailed Guide to Converting YOLOv5 from PyTorch to ONNX to Caffe to WK on the Hi3516
1. Environment setup
VMware Workstation Pro 16.2.4
Ubuntu 18.04.6
PyTorch 1.10.0
Caffe-cpu 1.0
Python 3.6
OpenCV 3.2.0
2. Download and install the VMware virtual machine
License key:
ZF3R0-FHED2-M80TY-8QYGC-NPKYF
3. Download and install Ubuntu 18.04
Download link: https://releases.ubuntu.com/18.04/ubuntu-18.04.6-desktop-amd64.iso
4. Install PyTorch 1.10.0. Note that Ubuntu's default python points to Python 2; all commands below use Python 3.
4.1 Switch the pip3 package index
sudo apt update
sudo apt upgrade
sudo apt install vim
sudo apt install python3-pip
sudo pip3 install --upgrade pip
Edit the index configuration:
mkdir ~/.pip
vim ~/.pip/pip.conf
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
Or:
pip config set global.index-url https://pypi.mirrors.ustc.edu.cn/simple/
To verify: vim ~/.config/pip/pip.conf or pip config list
Optionally install PyTorch here, or install it later together with YOLOv5's requirements.txt:
pip3 install torch==1.10.0+cpu torchvision==0.11.0+cpu torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
5. Download YOLOv5 v4.0, modify the network structure, and train (alternatively, use YOLOv5 v6.0; see section 11)
5.1 Download YOLOv5 v4.0
Download link: https://codeload.github.com/ultralytics/yolov5/zip/refs/tags/v4.0
5.2 Install dependencies
cd yolov5-4.0
pip3 install -r requirements.txt
5.3 Add the dataset and its config file
cd yolov5-4.0
vim data/roadsign_voc.yaml
train: ../data/train.txt # train images (relative to 'path') 128 images
val: ../data/valid.txt # val images (relative to 'path') 128 images
test: # test images (optional)

# Classes
nc: 4 # number of classes
names: ['speedlimit', 'crosswalk', 'trafficlight', 'stop'] # class names
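train.txt and valid.txt are plain text files listing one image path per line. A minimal sketch that generates such a list (the directory layout and file names here are hypothetical; adapt the paths to your own dataset):

```python
import os

def write_image_list(image_dir, out_txt, exts=(".jpg", ".png")):
    """Write one image path per line, the format train.txt/valid.txt expect."""
    names = sorted(n for n in os.listdir(image_dir) if n.lower().endswith(exts))
    with open(out_txt, "w") as f:
        for n in names:
            f.write(os.path.join(image_dir, n) + "\n")
    return len(names)  # number of images listed
```

Run it once per split (train/valid) and point the yaml entries at the resulting files.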
5.4 Modify the model configuration
cd yolov5-4.0
vim models/yolov5s.yaml
# parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [
   #[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [64, 3, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, C3, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   #[-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [-1, 1, nn.ConvTranspose2d, [256, 256, 2, 2]],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13
   [-1, 1, Conv, [256, 1, 1]],
   #[-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [-1, 1, nn.ConvTranspose2d, [128, 128, 2, 2]],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
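Two substitutions are made in this yaml because the NNIE op set reportedly supports neither the slice-based Focus layer nor nearest-neighbor nn.Upsample: Focus(64, 3) becomes a stride-2 Conv, and each Upsample becomes a ConvTranspose2d. Both swaps preserve the feature-map sizes, which the standard shape formulas confirm (conv_out and deconv_out are illustrative helpers, not repo code):

```python
def conv_out(n, k, s, p):
    """Spatial size after a Conv2d layer (floor mode)."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p=0):
    """Spatial size after a ConvTranspose2d layer."""
    return (n - 1) * s - 2 * p + k

# Focus(64, 3) halves 640 -> 320 by slicing; Conv(64, 3, 2) with autopad p=1 matches:
assert conv_out(640, k=3, s=2, p=1) == 320

# nn.Upsample(scale_factor=2) doubles 40 -> 80; ConvTranspose2d(k=2, s=2) matches:
assert deconv_out(40, k=2, s=2) == 80
```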
5.5 Modify the network modules
cd yolov5-4.0
vim models/common.py
class Conv(nn.Module):
    # Standard convolution
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Conv, self).__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        # self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
        self.act = nn.ReLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def fuseforward(self, x):
        return self.act(self.conv(x))


class BottleneckCSP(nn.Module):
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super(BottleneckCSP, self).__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
        self.cv4 = Conv(2 * c_, c2, 1, 1)
        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)
        # self.act = nn.LeakyReLU(0.1, inplace=True)
        self.act = nn.ReLU()
        self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])

    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))


class SPP(nn.Module):
    # Spatial pyramid pooling layer used in YOLOv3-SPP
    def __init__(self, c1, c2, k=(5, 9, 13)):
        super(SPP, self).__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
        # self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2, ceil_mode=True) for x in k])

    def forward(self, x):
        x = self.cv1(x)
        return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
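The only functional change to SPP here is ceil_mode=True on the max-pools. Caffe computes pooling output sizes with ceiling while PyTorch defaults to floor, so the flag keeps shapes consistent after conversion; for SPP's stride-1, padding-k//2 pools the size happens to be identical either way. A quick check with the standard pooling formula (pool_out is an illustrative helper, not repo code):

```python
import math

def pool_out(n, k, s, p, ceil_mode=False):
    """Spatial size after nn.MaxPool2d with the given kernel/stride/padding."""
    f = math.ceil if ceil_mode else math.floor
    return f((n + 2 * p - k) / s) + 1

# SPP's stride-1 pools keep the 20x20 P5 map unchanged under either rounding mode
for k in (5, 9, 13):
    assert pool_out(20, k, 1, k // 2) == 20
    assert pool_out(20, k, 1, k // 2, ceil_mode=True) == 20

# The modes only diverge when the division is inexact, e.g. a strided pool:
assert pool_out(6, 3, 2, 0) == 2
assert pool_out(6, 3, 2, 0, ceil_mode=True) == 3
```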
5.6 Train the model
cd yolov5-4.0/weights
wget https://github.com/ultralytics/yolov5/releases/download/v4.0/yolov5s.pt
python3 train.py --img 640 --batch 16 --epochs 1 --data data/roadsign_voc.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --noautoanchor
6. Export the ONNX model
pip3 install onnx==1.8.1
pip3 install onnx-simplifier
cd yolov5-4.0/
vim models/export.py
model.model[-1].export = True
torch.onnx.export(model, img, f, verbose=False, opset_version=10, input_names=['images'], output_names=['classes', 'boxes'] if y is None else ['output'])
python3 models/export.py --weights runs/train/exp/weights/last.pt
python3 -m onnxsim runs/train/exp/weights/last.onnx runs/train/exp/weights/yolov5s_sim.onnx
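With model.model[-1].export = True, the Detect layer returns the three raw feature maps instead of decoded boxes. For a 640x640 input with nc classes and 3 anchors per scale, the expected head shapes can be computed ahead of time as a sanity check before the Caffe conversion (head_shapes is a hypothetical helper, not repo code):

```python
def head_shapes(img_size=640, nc=4, na=3, strides=(8, 16, 32)):
    """Per-scale output shape (batch, na*(nc+5), H, W) of the exported heads."""
    return [(1, na * (nc + 5), img_size // s, img_size // s) for s in strides]

# nc=4 (roadsign dataset): 3 anchors x (4 classes + 4 box + 1 obj) = 27 channels
assert head_shapes() == [(1, 27, 80, 80), (1, 27, 40, 40), (1, 27, 20, 20)]
```

If the simplified ONNX reports different shapes, the export flag or the class count is wrong.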
7. Download yolov5_caffe and build caffe and pycaffe
7.1 Download link:
https://codeload.github.com/Wulingtian/yolov5_caffe/zip/refs/heads/master
7.2 Install dependencies
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install python3-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
sudo apt-get install python3-opencv
7.3 Edit the build configuration and compile the CPU-only version of Caffe
cd yolov5_caffe
vim Makefile.config
# USE_CUDNN := 1
CPU_ONLY := 1
OPENCV_VERSION := 3
BLAS := atlas
# CUDA_DIR := /usr
# CUDA_ARCH := -gencode arch=compute_61,code=compute_61
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
#     $(ANACONDA_HOME)/include/python3.6m \
#     $(ANACONDA_HOME)/lib/python3.6/site-packages/numpy/core/include
PYTHON_LIBRARIES := boost_python3 python3.6m
PYTHON_INCLUDE := /usr/include/python3.6m \
    /usr/lib/python3.6/dist-packages/numpy/core/include
# PYTHON_LIBRARIES := boost_python-py35
# PYTHON_INCLUDE := /usr/include/python3.5m \
#     /usr/lib/python3.5/dist-packages/numpy/core/include
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib
WITH_PYTHON_LAYER := 1
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# TEST_GPUID := 0
Q ?= @
vim Makefile
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
Verify the build:
make clean
make all -j8
make test -j8
make runtest -j8
7.4 Install pycaffe
cd yolov5_caffe/python
for req in $(cat requirements.txt); do pip install $req; done
Or:
sudo apt-get install python-numpy python-scipy python-matplotlib python-sklearn python-skimage python-h5py python-protobuf python-leveldb python-networkx python-nose python-pandas python-gflags Cython ipython
sudo apt-get install protobuf-c-compiler protobuf-compiler
cd yolov5_caffe/
make pycaffe -j8
vim ~/.bashrc
export PYTHONPATH=.../yolov5_caffe/python:$PYTHONPATH
source ~/.bashrc
Verify the installation:
python3
import caffe
8. Download yolov5_onnx2caffe and convert the ONNX model to a Caffe model
Download link:
https://codeload.github.com/Hiwyl/yolov5_onnx2caffe/zip/refs/heads/master
cd yolov5_onnx2caffe/
vim convertCaffe.py
onnx_path = "./weights/yolov5s_sim.onnx"
prototxt_path = "./weights/yolov5s_sim.prototxt"
caffemodel_path = "./weights/yolov5s_sim.caffemodel"
convertToCaffe(graph, prototxt_path, caffemodel_path)
9. Validate the converted Caffe model
cd yolov5_caffe/tools
Set the input parameters and model paths:
vim caffe_yolov5s.cpp
#define INPUT_W 640
#define INPUT_H 640
#define IsPadding 1
#define NUM_CLASS 4
#define NMS_THRESH 0.6
#define CONF_THRESH 0.3

std::string prototxt_path = "./weights/yolov5s_sim.prototxt";
std::string caffemodel_path = "./weights/yolov5s_sim.caffemodel";
std::string pic_path = "./weights/road580.png";
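The post-processing keeps detections above CONF_THRESH and then applies greedy IoU-based NMS with NMS_THRESH. A minimal Python sketch of that logic, with boxes as (x1, y1, x2, y2, score) tuples (function names are illustrative, not taken from the repo's C++):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter > 0 else 0.0

def nms(dets, conf_thresh=0.3, nms_thresh=0.6):
    """Greedy NMS over (x1, y1, x2, y2, score) tuples, highest score first."""
    dets = sorted((d for d in dets if d[4] >= conf_thresh), key=lambda d: -d[4])
    keep = []
    for d in dets:
        if all(iou(d, k) < nms_thresh for k in keep):
            keep.append(d)
    return keep
```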
Build caffe_yolov5s:
cd yolov5_caffe/
sudo apt install cmake
make
Run caffe_yolov5s:
cp ../yolov5_onnx2caffe/weights/yolov5s_sim.caffemodel weights/
cp ../yolov5_onnx2caffe/weights/yolov5s_sim.prototxt weights/
./build/tools/caffe_yolov5s
View the detection result:
xdg-open result.jpg
10. Install the NNIE mapper and convert the Caffe model to a WK model
10.1 Find nnie_mapper_12 in the material supplied with the Hi3516DV300; for installation, follow the "Linux NNIE mapper installation" section of the HiSVP Development Guide
nnie_mapper_12 is located in SVP_PC\HiSVP_PC_V1.2.2.2\tools\nnie\linux\mapper
Inspect nnie_mapper_12's shared-library dependencies with readelf; it requires protobuf 3.6.1 and OpenCV 3.4:
readelf -d nnie_mapper_12
10.2 Install OpenCV 3.4.2
Download link: https://github.com/opencv/opencv/archive/3.4.2.zip
Install dependencies:
sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libtiff-dev libjasper-dev libdc1394-22-dev
Build and install OpenCV:
mkdir yolov5_caffe2nnie
cd yolov5_caffe2nnie
mkdir 3rd
cd 3rd
unzip 3.4.2.zip
cd opencv-3.4.2
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=.../yolov5_caffe2nnie ../
make -j8
make install
Add the environment paths:
vim ~/.bashrc
export PATH=.../yolov5_caffe2nnie/bin:$PATH
export LD_LIBRARY_PATH=.../yolov5_caffe2nnie/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=.../yolov5_caffe2nnie/lib/pkgconfig
source ~/.bashrc
Verify the OpenCV installation:
Run opencv_version; it should print 3.4.2.
Build a sample OpenCV program:
sudo apt-get install libcanberra-gtk-module
cd opencv-3.4.2/samples/cpp/
gcc `pkg-config --cflags opencv` -o facedetect facedetect.cpp `pkg-config --libs opencv` -lstdc++
./facedetect ../data/lena.jpg
10.3 Install protobuf 3.6.1
Building protobuf 3.6.1 requires gcc 4.8, but Ubuntu 18.04 ships with gcc 7.5, so install multiple gcc versions side by side:
sudo apt update
sudo apt install build-essential
sudo apt install software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install gcc-4.8 g++-4.8 gcc-7 g++-7 gcc-8 g++-8 gcc-9 g++-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 48 --slave /usr/bin/g++ g++ /usr/bin/g++-4.8
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70 --slave /usr/bin/g++ g++ /usr/bin/g++-7
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 80 --slave /usr/bin/g++ g++ /usr/bin/g++-8
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 --slave /usr/bin/g++ g++ /usr/bin/g++-9
Switch to gcc 4.8:
sudo update-alternatives --config gcc
gcc -v
Download protobuf 3.6.1:
https://github.com/google/protobuf/releases/download/v3.6.1/protobuf-all-3.6.1.tar.gz
tar -xvf protobuf-all-3.6.1.tar.gz
cd protobuf-3.6.1
./configure --prefix=.../yolov5_caffe2nnie
make
make check
make install
Verify protobuf 3.6.1:
protoc --version
It should print libprotoc 3.6.1.
Switch back to gcc 7.5:
sudo update-alternatives --config gcc
gcc -v
10.4 Install nnie_mapper_12
Copy nnie_mapper_12 to .../yolov5_caffe2nnie/bin
Verify nnie_mapper_12:
Copy SVP_PC\HiSVP_PC_V1.2.2.2\software\data to .../yolov5_caffe2nnie/
cd .../yolov5_caffe2nnie/data
Run nnie_mapper_12 classification/alexnet/alexnet_no_group_inst.cfg
If the following output appears, the installation succeeded:
Mapper Version 1.2.2.1_B030 (NNIE_1.2) 19090610466402
begin net parsing....
end net parsing
begin prev optimizing....
end prev optimizing....
begin net quantalizing(CPU)....
end quantalizing
begin optimizing....
end optimizing
begin NNIE[0] mem allocation....
end NNIE[0] memory allocating
begin NNIE[0] instruction generating....
end NNIE[0] instruction generating
begin parameter compressing....
end parameter compressing
begin compress index generating....
end compress index generating
begin binary code generating....
end binary code generating
begin quant files writing....
end quant files writing
10.5 Modify the YOLOv5 Caffe network structure and convert it to a WK model
Copy the Caffe model produced by the ONNX conversion to .../yolov5_caffe2nnie/
cp ../yolov5_onnx2caffe/weights/yolov5s_sim.caffemodel ./weights/
cp ../yolov5_onnx2caffe/weights/yolov5s_sim.prototxt ./weights/
Modify the three output heads in the YOLOv5 Caffe prototxt:
vim yolov5s_sim.prototxt
Delete the three Permute layers.
Change the three Reshape layers to the following format:
layer {
  name: "Reshape_151"
  type: "Reshape"
  bottom: "265"
  top: "277"
  reshape_param {
    shape {
      dim: 0
      dim: 3
      dim: 9
      dim: 6400
    }
  }
}
layer {
  name: "Reshape_165"
  type: "Reshape"
  bottom: "279"
  top: "291"
  reshape_param {
    shape {
      dim: 0
      dim: 3
      dim: 9
      dim: 1600
    }
  }
}
layer {
  name: "Reshape_179"
  type: "Reshape"
  bottom: "293"
  top: "305"
  reshape_param {
    shape {
      dim: 0
      dim: 3
      dim: 9
      dim: 400
    }
  }
}
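The dim values above are not arbitrary: with nc = 4 classes, each anchor predicts nc + 5 = 9 values, each scale has 3 anchors, and the last dim is the flattened HxW of that scale's grid for a 640x640 input. A small helper (hypothetical, useful for recomputing the dims for your own class count) reproduces them:

```python
def reshape_dims(img_size=640, nc=4, na=3, strides=(8, 16, 32)):
    """dim values (0, na, nc+5, H*W) for the three NNIE Reshape layers."""
    return [(0, na, nc + 5, (img_size // s) ** 2) for s in strides]

assert reshape_dims() == [(0, 3, 9, 6400), (0, 3, 9, 1600), (0, 3, 9, 400)]
```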
Write the conversion config file, using the YOLOv3 config as a template; for the full parameter list, see the nnie_mapper configuration-file description in the HiSVP Development Guide:
vim weights/yolov5s_sim.cfg
[prototxt_file] ./weights/yolov5s_sim.prototxt
[caffemodel_file] ./weights/yolov5s_sim.caffemodel
[batch_num] 1
[net_type] 0
[sparse_rate] 0
[compile_mode] 1
[is_simulation] 0
[log_level] 2
[instruction_name] ./weights/yolov5s_sim
[RGB_order] BGR
[data_scale] 0.0039062
[internal_stride] 16
[image_list] ../../data/valid.txt
[image_type] 1
[mean_file] null
[norm_type] 3
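A note on [data_scale]: 0.0039062 is 1/256 truncated to seven digits, mapping 8-bit pixel values into roughly [0, 1). Some pipelines normalize by 1/255 (0.0039216) instead; use whichever matches your training preprocessing:

```python
scale_256 = 1 / 256  # 0.00390625, written as 0.0039062 in the cfg
scale_255 = 1 / 255  # 0.0039216, the alternative normalization

assert abs(scale_256 - 0.0039062) < 1e-6
assert abs(255 * scale_255 - 1.0) < 1e-9  # 1/255 maps pixel 255 exactly to 1.0
```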
11. Download YOLOv5 v6.0, modify the network structure, and train
11.1 Download YOLOv5 v6.0
Download link: https://codeload.github.com/ultralytics/yolov5/zip/refs/tags/v6.0
11.2 Install dependencies
cd yolov5-6.0
pip3 install -r requirements.txt
11.3 Add the dataset and its config file
cd yolov5-6.0
vim data/roadsign_voc.yaml
train: ../data/train.txt # train images (relative to 'path') 128 images
val: ../data/valid.txt # val images (relative to 'path') 128 images
test: # test images (optional)
# Classes
nc: 4 # number of classes
names: ['speedlimit', 'crosswalk', 'trafficlight', 'stop'] # class names
11.4 Modify the model configuration
cd yolov5-6.0
vim models/yolov5s.yaml
# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   #[-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [-1, 1, nn.ConvTranspose2d, [256, 256, 2, 2]],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13
   [-1, 1, Conv, [256, 1, 1]],
   #[-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [-1, 1, nn.ConvTranspose2d, [128, 128, 2, 2]],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
11.5 Modify the network modules
cd yolov5-6.0
vim models/common.py
class Conv(nn.Module):
    # Standard convolution
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        # self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
        self.act = nn.ReLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        return self.act(self.conv(x))


class BottleneckCSP(nn.Module):
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = nn.Conv2d(c1, c_, 1, 1, bias=False)
        self.cv3 = nn.Conv2d(c_, c_, 1, 1, bias=False)
        self.cv4 = Conv(2 * c_, c2, 1, 1)
        self.bn = nn.BatchNorm2d(2 * c_)  # applied to cat(cv2, cv3)
        # self.act = nn.LeakyReLU(0.1, inplace=True)
        self.act = nn.ReLU()
        self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])

    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))


class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        # self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2, ceil_mode=True)

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
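As the comment in SPPF notes, it is equivalent to SPP(k=(5, 9, 13)): chaining a stride-1 max-pool of size k a total of n times yields an effective kernel of n*(k-1)+1, so the three cascaded 5x5 pools reproduce the 5/9/13 pyramid (effective_kernel is an illustrative helper, not repo code):

```python
def effective_kernel(k, n):
    """Effective kernel of n chained stride-1 max-pools of size k."""
    return n * (k - 1) + 1

# SPPF reuses one 5x5 pool three times: x, m(x), m(m(x)), m(m(m(x)))
assert [effective_kernel(5, n) for n in (1, 2, 3)] == [5, 9, 13]
```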
11.6 Train the model
cd yolov5-6.0/weights
wget https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5s.pt
python3 train.py --img 640 --batch 16 --epochs 1 --data data/roadsign_voc.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --noautoanchor
References:
Key references:
https://github.com/Hiwyl/yolov5_onnx2caffe
https://github.com/Wulingtian/yolov5_caffe
https://blog.csdn.net/weixin_44743827/article/details/125779320
https://blog.csdn.net/Mrsherlock_/article/details/114172946
https://blog.csdn.net/tangshopping/article/details/110038605
https://blog.csdn.net/racesu/article/details/107045858#t2
Installing Caffe:
https://www.jianshu.com/p/ec5660916fdb
https://www.cnblogs.com/acgoto/p/11570188.html
https://blog.csdn.net/u014106566/article/details/85179450
http://caffe.berkeleyvision.org/installation.html
https://blog.csdn.net/sinat_38439143/article/details/97244296
https://blog.csdn.net/weixin_41998715/article/details/120325053
https://blog.csdn.net/weixin_47182486/article/details/120129373
https://blog.csdn.net/piupiu78/article/details/124457581
https://blog.csdn.net/weixin_30676635/article/details/116648050
https://blog.csdn.net/weixin_39550486/article/details/116865423
https://blog.csdn.net/weixin_34229622/article/details/116865422
https://blog.csdn.net/Unique960215/article/details/82861966
https://qengineering.eu/install-caffe-on-ubuntu-20.04-with-opencv-4.4.html
YOLOv5 to WK conversion:
https://blog.csdn.net/weixin_44743827/article/details/125779320
https://blog.csdn.net/tangshopping/article/details/110038605
https://github.com/Hiwyl/yolov5_onnx2caffe
https://github.com/Wulingtian/yolov5_caffe
https://github.com/Wulingtian/yolov5_onnx2caffe
https://github.com/xxradon/ONNXToCaffe
https://blog.csdn.net/yayalejianyue/article/details/114878503
https://blog.csdn.net/weixin_41012399/article/details/120866027
https://blog.csdn.net/qq_37532213/article/details/123903703
https://blog.csdn.net/qq_37532213/article/details/124061990
https://blog.csdn.net/ynzzxc/article/details/116931806
https://blog.csdn.net/tangshopping/article/details/111150493
https://blog.csdn.net/Yong_Qi2015/article/details/114362223
https://github.com/mahxn0/Yolov5-Hisi3559a-Train
https://github.com/mahxn0/Hisi3559A_Yolov5
https://github.com/mxsurui/NNIE-lite
https://blog.csdn.net/racesu/article/details/107045858#t2
https://blog.csdn.net/weixin_41012399/article/details/120066576?spm=1001.2014.3001.5501
https://www.leheavengame.com/article/6155078a995acf6bc1e13e9e
Installing the NNIE mapper:
https://blog.csdn.net/racesu/article/details/107045858#t2
https://blog.csdn.net/avideointerfaces/article/details/100178343
https://www.cnblogs.com/liujinhong/p/8867391.html