Reproducing yolov5 + deepsort + slowfast
1. Runtime environment
Ubuntu 18.04.1
Cuda 11.5
Python 3.8.15
torch 1.10.1+cu113
torchvision 0.11.2+cu113
2. Install PyTorchVideo
cd /home
git clone https://gitee.com/YFwinston/pytorchvideo.git
cd pytorchvideo
pip install -e .
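You can sanity-check the editable install afterwards with python -c "import pytorchvideo"; if it imports without error, the package is on your path.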
3. Install yolov5-slowfast-deepsort-PytorchVideo
Download yolov5-slowfast-deepsort-PytorchVideo
Using gitee (recommended):
cd /home
git clone https://gitee.com/YFwinston/yolov5-slowfast-deepsort-PytorchVideo.git
Install:
cd /home/yolov5-slowfast-deepsort-PytorchVideo
pip install -r requirements2.txt
Download the weight files
[yolov5_file] (Aliyun Drive, aliyundrive.com)
[slowfast_file] (Aliyun Drive, aliyundrive.com)
I placed ckpt.t7 in /user-data/yolov5_file/
I placed SLOWFAST_8x8_R50_DETECTION.pyth in /user-data/slowfast_file/
I placed yolov5l6.pt in /user-data/yolov5_file/
I placed yolov5-master.zip in /user-data/yolov5_file/
Then copy them to the locations the code expects:
mkdir -p /home/yolov5-slowfast-deepsort-PytorchVideo/deep_sort/deep_sort/deep/checkpoint/
cp /user-data/yolov5_file/ckpt.t7 /home/yolov5-slowfast-deepsort-PytorchVideo/deep_sort/deep_sort/deep/checkpoint/ckpt.t7
mkdir -p /root/.cache/torch/hub/checkpoints/
cp /user-data/slowfast_file/SLOWFAST_8x8_R50_DETECTION.pyth /root/.cache/torch/hub/checkpoints/SLOWFAST_8x8_R50_DETECTION.pyth
cp /user-data/yolov5_file/yolov5l6.pt /home/yolov5-slowfast-deepsort-PytorchVideo/yolov5l6.pt
cp /user-data/yolov5_file/yolov5-master.zip /root/.cache/torch/hub/master.zip
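Copying yolov5-master.zip to /root/.cache/torch/hub/master.zip lets torch.hub pick up a local archive of the yolov5 repo instead of downloading it from GitHub at run time, and the SLOWFAST checkpoint placed under /root/.cache/torch/hub/checkpoints/ is found through the same caching mechanism.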
4. Test
Create a demo directory under /home/yolov5-slowfast-deepsort-PytorchVideo, put the test video 1.mp4 into it, and run:
cd /home/yolov5-slowfast-deepsort-PytorchVideo
mkdir -p demo
python yolo_slowfast.py --input ./demo/1.mp4
Error 1
Error 2
Follow the steps at the linked article to fix it.
Error 3
5. Swapping in your own dataset
5.1 Training the yolov5 model on your own dataset
Set up the dataset directory structure and annotate the data with labelImg.
Finally, start training with something like: python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128
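For reference, here is a minimal sketch of the directory layout and data yaml that yolov5 reads for a custom dataset; the dataset name "mydata", the class names, and the paths below are placeholders for the example, not anything from this repo:

# Sketch of the custom-dataset layout yolov5 expects. "mydata", the class
# names, and the paths are placeholders.
import os

root = "datasets/mydata"
for split in ("train", "val"):
    os.makedirs(os.path.join(root, "images", split), exist_ok=True)  # jpg frames go here
    os.makedirs(os.path.join(root, "labels", split), exist_ok=True)  # labelImg YOLO-format txt files go here

# Minimal data yaml: yolov5 derives the label paths by swapping "images" for "labels".
os.makedirs("data", exist_ok=True)
with open("data/mydata.yaml", "w") as f:
    f.write(
        "train: datasets/mydata/images/train\n"
        "val: datasets/mydata/images/val\n"
        "nc: 4\n"
        "names: ['class0', 'class1', 'class2', 'class3']\n"
    )
# training would then be started with: python train.py --data data/mydata.yaml ...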
5.2 Training the deepsort ReID model on your own dataset
Take a video as an example: extract one frame per second from it, then annotate the resulting frames with labelImg.
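For the one-frame-per-second extraction, a minimal OpenCV sketch (the input and output paths are placeholders) could look like this:

# Sketch: grab one frame per second from a video and save the frames as jpgs.
# video_path and out_dir are placeholders; point them at your own files.
import os
import cv2

video_path = "1.mp4"
out_dir = "frames_1fps"
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 if the FPS cannot be read
step = int(round(fps))                 # one frame every `fps` frames, i.e. once per second

idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.jpg"), frame)
        saved += 1
    idx += 1
cap.release()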
Then use the script below to crop out the annotated regions:
# Crop patches from images according to YOLO-format txt label files
import os
import cv2
from tqdm import tqdm

image_input = 'E:\\pythoncode\\shipinphto\\2'
txt_input = 'E:\\pythoncode\\shipinphto\\label_2\\'
path_output = "E:\\pythoncode\\path_output\\2\\"  # root directory for the cropped patches
class_names_path = 'classes.txt'

img_total = []
txt_total = []

def read_class_name(path):  # read the class names stored under path
    classes_name = []
    with open(path, 'r') as f:
        for i in f.readlines():
            classes_name.append(i.strip())
    return classes_name

classes_name = read_class_name(class_names_path)

file_image = os.listdir(image_input)
for filename in file_image:  # build the list of jpg base names
    first, last = os.path.splitext(filename)
    img_total.append(first)

file_txt = os.listdir(txt_input)
for filename in file_txt:  # build the list of txt base names
    first, last = os.path.splitext(filename)
    txt_total.append(first)

for img_ in tqdm(img_total):
    if img_ not in txt_total:
        continue
    filename_img = img_ + ".jpg"
    path1 = os.path.join(image_input, filename_img)
    img = cv2.imread(path1)
    filename_txt = img_ + '.txt'  # label file that matches this image
    h = img.shape[0]
    w = img.shape[1]
    n = 1
    with open(os.path.join(txt_input, filename_txt), "r+", encoding="utf-8", errors="ignore") as f:
        for line in f:
            aa = line.split(" ")
            # if not int(aa[0]) == 0: continue  # uncomment to crop only one class, e.g. class 0
            x_center = w * float(aa[1])    # aa[1]: normalized box-center x
            y_center = h * float(aa[2])    # aa[2]: normalized box-center y
            width = int(w * float(aa[3]))  # aa[3]: normalized box width
            height = int(h * float(aa[4])) # aa[4]: normalized box height
            lefttopx = int(x_center - width / 2.0)
            lefttopy = int(y_center - height / 2.0)
            roi = img[lefttopy + 1:lefttopy + height + 3, lefttopx + 1:lefttopx + width + 1]  # [top y:bottom y, left x:right x]
            # the (y1:y2, x1:x2) offsets may need tuning, otherwise the crops can be slightly off
            if roi.size == 0:
                continue
            filename_last = img_ + "_" + str(n) + ".jpg"  # file name of the cropped patch
            x = int(aa[0])
            path2 = os.path.join(path_output, classes_name[x])  # one sub-folder per class under path_output
            if not os.path.exists(path2):
                os.makedirs(path2)
            try:
                cv2.imwrite(os.path.join(path2, filename_last), roi)
            except Exception:
                continue
            n = n + 1
After cropping, sort the patches by the identity of the object they contain. For example, if the annotations are apples, each distinct apple gets its own folder, so that every folder holds images of one identity only.
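A small sketch of the subsequent split, assuming the cropped identity folders sit under one root and that the ReID train.py reads an ImageFolder-style data directory with train/ and test/ sub-folders (one folder per identity); the paths and ratio are placeholders:

# Sketch: split the identity-sorted crops into train/ and test/ directories.
# Assumes cropped/<identity>/*.jpg as input and writes data/train/<identity>/
# and data/test/<identity>/, an ImageFolder-style layout.
import os
import random
import shutil

src_root = "cropped"   # one sub-folder per identity (placeholder path)
dst_root = "data"      # output root (placeholder path)
test_ratio = 0.1       # fraction of each identity's images held out for testing

for identity in os.listdir(src_root):
    images = sorted(os.listdir(os.path.join(src_root, identity)))
    random.shuffle(images)
    n_test = max(1, int(len(images) * test_ratio))
    for i, name in enumerate(images):
        split = "test" if i < n_test else "train"
        dst_dir = os.path.join(dst_root, split, identity)
        os.makedirs(dst_dir, exist_ok=True)
        shutil.copy(os.path.join(src_root, identity, name), dst_dir)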
Two places in the code then need to be modified. First, the train.py file:
transform_train = torchvision.transforms.Compose([
    torchvision.transforms.Resize((128, 64)),
    torchvision.transforms.RandomCrop((128, 64), padding=4),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(
        [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
Second, the model.py file (only num_classes needs to be changed to the number of classes in your own dataset; I used four classes for testing):
class Net(nn.Module):
    def __init__(self, num_classes=4, reid=False):
        super(Net, self).__init__()
        # input: 3 x 128 x 64
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            # nn.Conv2d(32, 32, 3, stride=1, padding=1),
            # nn.BatchNorm2d(32),
            # nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2, padding=1),
        )
Finally, run python train.py to start training.
5.3 Training the slowfast model on your own dataset
Note first that training here uses the AVA format, so what follows is mainly the procedure for building an AVA-style dataset; start by looking at how the AVA dataset is laid out.
The dataset construction method described here can serve as a reference; download the dataset itself if you need to inspect the actual contents.
To get the finished dataset running on slowfast, the main work is the code changes below:
1. First, the config file. I modified config/AVA/c2/SLOW_8x8_R50.yaml in the slowfast source tree:
TRAIN:
  ENABLE: True  # note this
  DATASET: ava
  BATCH_SIZE: 2
  EVAL_PERIOD: 1
  CHECKPOINT_PERIOD: 1
  AUTO_RESUME: True
  # CHECKPOINT_FILE_PATH: path to pretrain model
  CHECKPOINT_TYPE: caffe2
DATA:
  NUM_FRAMES: 4
  SAMPLING_RATE: 16
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 256
  INPUT_CHANNEL_NUM: [3]
  PATH_TO_DATA_DIR: '/home/xxx/pythoncode/slowfast/datasets'  # note this
DETECTION:
  ENABLE: True
  ALIGNED: True
AVA:
  BGR: False
  DETECTION_SCORE_THRESH: 0.9
  FRAME_DIR: '/home/xxx/pythoncode/slowfast/datasets/frames'  # note this
  FRAME_LIST_DIR: '/home/xxx/pythoncode/slowfast/datasets/frame_lists'  # note this
  ANNOTATION_DIR: '/home/xxx/pythoncode/slowfast/datasets/annotations'  # note this
  DETECTION_SCORE_THRESH: 0.8
  TRAIN_PREDICT_BOX_LISTS: [
    "person_box_67091280_iou90/ava_detection_train_boxes_and_labels_include_negative_v2.2.csv",  # note this
    "person_box_67091280_iou90/ava_detection_train_boxes_and_labels_include_negative_v2.2.csv",  # note this
  ]
  TEST_PREDICT_BOX_LISTS: ["person_box_67091280_iou90/ava_detection_val_boxes_and_labels.csv"]  # note this
RESNET:
  ZERO_INIT_FINAL_BN: True
  WIDTH_PER_GROUP: 64
  NUM_GROUPS: 1
  DEPTH: 50
  TRANS_FUNC: bottleneck_transform
  STRIDE_1X1: False
  NUM_BLOCK_TEMP_KERNEL: [[3], [4], [6], [3]]
  SPATIAL_DILATIONS: [[1], [1], [1], [2]]
  SPATIAL_STRIDES: [[1], [2], [2], [1]]
NONLOCAL:
  LOCATION: [[[]], [[]], [[]], [[]]]
  GROUP: [[1], [1], [1], [1]]
  INSTANTIATION: softmax
BN:
  USE_PRECISE_STATS: False
  NUM_BATCHES_PRECISE: 200
SOLVER:
  MOMENTUM: 0.9
  WEIGHT_DECAY: 1e-7
  OPTIMIZING_METHOD: sgd
MODEL:
  NUM_CLASSES: 80
  ARCH: slow
  MODEL_NAME: ResNet
  LOSS_FUNC: bce
  DROPOUT_RATE: 0.5
  HEAD_ACT: sigmoid
TEST:
  ENABLE: False  # note this
  DATASET: ava
  BATCH_SIZE: 1
DATA_LOADER:
  NUM_WORKERS: 2
  PIN_MEMORY: True
NUM_GPUS: 1
NUM_SHARDS: 1
RNG_SEED: 0
OUTPUT_DIR: .
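With the config prepared, slowfast training is normally launched from the repo root with something like python tools/run_net.py --cfg path/to/your/SLOW_8x8_R50.yaml (adjust the path to wherever your modified config lives).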
2. Then slowfast/slowfast/datasets/ava_helper.py:
#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import logging
import os
from collections import defaultdict

from slowfast.utils.env import pathmgr

logger = logging.getLogger(__name__)

FPS = 30
AVA_VALID_FRAMES = range(2, 9)  # note this


def load_image_lists(cfg, is_train):
    """
    Loading image paths from corresponding files.
    Args:
        cfg (CfgNode): config.
        is_train (bool): if it is training dataset or not.
    Returns:
        image_paths (list[list]): a list of items. Each item (also a list)
            corresponds to one video and contains the paths of images for
            this video.
        video_idx_to_name (list): a list which stores video names.
    """
    list_filenames = [
        os.path.join(cfg.AVA.FRAME_LIST_DIR, filename)
        for filename in (
            cfg.AVA.TRAIN_LISTS if is_train else cfg.AVA.TEST_LISTS
        )
    ]
    image_paths = defaultdict(list)
    video_name_to_idx = {}
    video_idx_to_name = []
    for list_filename in list_filenames:
        with pathmgr.open(list_filename, "r") as f:
            f.readline()
            for line in f:
                row = line.split(",")  # note this
                # The format of each row should follow:
                # original_vido_id video_id frame_id path labels.
                assert len(row) == 5
                video_name = row[0]
                if video_name not in video_name_to_idx:
                    idx = len(video_name_to_idx)
                    video_name_to_idx[video_name] = idx
                    video_idx_to_name.append(video_name)
                data_key = video_name_to_idx[video_name]
                image_paths[data_key].append(
                    os.path.join(cfg.AVA.FRAME_DIR, row[3])
                )
    image_paths = [image_paths[i] for i in range(len(image_paths))]
    logger.info(
        "Finished loading image paths from: %s" % ", ".join(list_filenames)
    )
    return image_paths, video_idx_to_name
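Because the modified load_image_lists above splits each row on commas and asserts exactly five columns, a custom frame list has to follow the original_vido_id,video_id,frame_id,path,labels layout with comma separators. A rough sketch for generating frame_lists/train.csv from the extracted frames (paths and folder names are placeholders):

# Sketch: build frame_lists/train.csv in the comma-separated, 5-column format
# that the modified load_image_lists above expects.
import os

frames_root = "/home/xxx/pythoncode/slowfast/datasets/frames"   # one sub-folder per video
out_csv = "/home/xxx/pythoncode/slowfast/datasets/frame_lists/train.csv"

os.makedirs(os.path.dirname(out_csv), exist_ok=True)
with open(out_csv, "w") as f:
    f.write("original_vido_id,video_id,frame_id,path,labels\n")  # header line, skipped by f.readline()
    for video_id, video_name in enumerate(sorted(os.listdir(frames_root))):
        frames = sorted(os.listdir(os.path.join(frames_root, video_name)))
        for frame_id, frame_name in enumerate(frames):
            # path is relative to AVA.FRAME_DIR; the labels column stays "" as in the official lists
            f.write(f'{video_name},{video_id},{frame_id},{video_name}/{frame_name},""\n')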
References
Whiffe/yolov5-slowfast-deepsort-PytorchVideo (github.com)
Yolov5 + Deepsort 重新训练自己的数据(保姆级超详细),武大人民泌外I科人工智能团队 (CSDN)
YOLOv5+Deepsort训练自己的数据集实现多目标跟踪,科研段子手 (CSDN)
deepsort训练车辆特征参数 ckpt.t7,王定邦 (CSDN)
【目标跟踪】Yolov5_DeepSort_Pytorch训练自己的数据 (zhihu.com)
自定义ava数据集及训练与测试 完整版 时空动作/行为 视频数据集制作 yolov5, deep sort, VIA MMAction, SlowFast,CV-杨帆 (CSDN)
If you run into problems, feel free to contact me by private message.