Autonomous Driving Environment Perception and Intelligent Early Warning Based on the Aidlux Platform
An autonomous vehicle, also known as a self-driving car, is a vehicle that operates with only driver assistance or with no human control at all.
Levels of driving automation:
Components of an autonomous driving system:
Environment perception system:
Autonomous driving system architecture:
Autonomous driving datasets:
The role of Aidlux:
The YOLOP algorithm:
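For reference, YOLOP ("You Only Look Once for Panoptic driving perception") runs a single shared encoder (a CSPDarknet backbone with an SPP module and an FPN neck) feeding three task-specific decoder heads, so traffic object detection, drivable area segmentation, and lane line detection are all produced in one forward pass.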
Loss function:
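As a reference for this heading, the YOLOP paper defines the training objective as a weighted sum of the three task losses, with the weights γ and α left as tunable hyperparameters:

L_all = γ1·L_det + γ2·L_da-seg + γ3·L_ll-seg

where the detection loss itself combines classification, objectness, and bounding-box terms:

L_det = α1·L_class + α2·L_obj + α3·L_box

L_class and L_obj use focal loss, L_box is a CIoU loss, the drivable-area branch uses cross-entropy, and the lane-line branch adds an IoU term on top of cross-entropy.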
Model training:
Dataset:
Modify the configuration file lib/config/default.py:
Training:
pip install -r requirements.txt
python tools/train.py
_C.GPUS = (0, 1)   # adjust to the number of GPUs you actually have
_C.WORKERS = 0     # set according to the number of CPU cores; directly affects data-loading speed
_C.DATASET.DATAROOT = 'dataset/images'               # the path of images folder
_C.DATASET.LABELROOT = 'dataset/det_annotations'     # the path of det_annotations folder
_C.DATASET.MASKROOT = 'dataset/da_seg_annotations'   # the path of da_seg_annotations folder
_C.DATASET.LANEROOT = 'dataset/ll_seg_annotations'   # the path of ll_seg_annotations folder
_C.DATASET.DATASET = 'BddDataset'
_C.DATASET.TRAIN_SET = 'train'
_C.DATASET.TEST_SET = 'val'
_C.DATASET.DATA_FORMAT = 'jpg'

_C.TRAIN.BEGIN_EPOCH = 0
_C.TRAIN.END_EPOCH = 240

# if training 3 tasks end-to-end, set all parameters as False
# Alternating optimization
_C.TRAIN.SEG_ONLY = False      # Only train two segmentation branches
_C.TRAIN.DET_ONLY = False      # Only train detection branch
_C.TRAIN.ENC_SEG_ONLY = False  # Only train encoder and two segmentation branches
_C.TRAIN.ENC_DET_ONLY = False  # Only train encoder and detection branch

# Single task
_C.TRAIN.DRIVABLE_ONLY = False # Only train da_segmentation task
_C.TRAIN.LANE_ONLY = False     # Only train ll_segmentation task
_C.TRAIN.DET_ONLY = False      # Only train detection task
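For context, default.py builds this configuration with yacs. Below is a minimal sketch of the pattern, repeating only a few of the fields above; the clone_cfg helper is illustrative and not taken from the YOLOP repo:

from yacs.config import CfgNode as CN

_C = CN()
_C.GPUS = (0, 1)
_C.WORKERS = 0

_C.DATASET = CN()
_C.DATASET.DATAROOT = 'dataset/images'
_C.DATASET.DATASET = 'BddDataset'

_C.TRAIN = CN()
_C.TRAIN.BEGIN_EPOCH = 0
_C.TRAIN.END_EPOCH = 240

def clone_cfg():
    # Hand each caller a copy so local tweaks never mutate the shared defaults.
    return _C.clone()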
ONNX is short for Open Neural Network Exchange. Frameworks that currently offer official support for loading ONNX models include Caffe2, PyTorch, MXNet, and others. Run the command:
python export_onnx.py --height 640 --width 640
The converted ONNX model is generated in the weights folder.
Core API of the ONNX conversion:
import argparse

import onnx
import torch

# MCnet (the multi-task network class) and the YOLOP architecture description
# are defined earlier in export_onnx.py.

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--height', type=int, default=640)  # height
    parser.add_argument('--width', type=int, default=640)   # width
    args = parser.parse_args()

    do_simplify = True

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = MCnet(YOLOP)
    checkpoint = torch.load('./weights/End-to-end.pth', map_location=device)
    model.load_state_dict(checkpoint['state_dict'])
    model.eval()

    height = args.height
    width = args.width
    print("Load ./weights/End-to-end.pth done!")
    onnx_path = f'./weights/yolop-{height}-{width}.onnx'
    # onnx_path = f'./weights/yolop-test.onnx'
    inputs = torch.randn(1, 3, height, width)

    print(f"Converting to {onnx_path}")
    torch.onnx.export(model, inputs, onnx_path,
                      verbose=False, opset_version=12,
                      input_names=['images'],
                      output_names=['det_out', 'drive_area_seg', 'lane_line_seg'])
    print('convert', onnx_path, 'to onnx finish!!!')

    # Checks
    model_onnx = onnx.load(onnx_path)              # load onnx model
    onnx.checker.check_model(model_onnx)           # check onnx model
    print(onnx.helper.printable_graph(model_onnx.graph))  # print a readable graph
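As a quick sanity check (not part of the original script; it assumes onnxruntime is installed), the exported model can be run on a random input to confirm that all three output heads are produced:

import numpy as np
import onnxruntime as ort

# Load the exported model on CPU and push one random frame through it.
session = ort.InferenceSession('./weights/yolop-640-640.onnx',
                               providers=['CPUExecutionProvider'])
dummy = np.random.randn(1, 3, 640, 640).astype(np.float32)
det_out, drive_area_seg, lane_line_seg = session.run(
    ['det_out', 'drive_area_seg', 'lane_line_seg'],  # output names from the export
    {'images': dummy})                               # input name from the export
print(det_out.shape, drive_area_seg.shape, lane_line_seg.shape)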
Deployment and inference on the Aidlux platform:
Locate the home directory and upload the YOLOP folder into it. Then open a terminal and install the PyTorch environment.
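One possible set of install commands (the package list is an assumption; the exact packages and versions depend on what the Aidlux image already provides):

pip install torch torchvision onnxruntime opencv-python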
Intelligent early warning:
This covers three tasks: object detection, drivable area detection, and lane line detection.
Run python forewarning.py to start the intelligent early-warning detection.
import cv2
import numpy as np

# resize_unscale, infer, and cv2AddChineseText are defined earlier in forewarning.py.

def main(source, save_path):
    cap = cv2.VideoCapture(source)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # video width
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # video height
    fps = cap.get(cv2.CAP_PROP_FPS)                   # video frame rate
    fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))        # video codec
    # fourcc = cv2.VideoWriter_fourcc(*'avc1')
    # define the output video writer
    writer = cv2.VideoWriter(save_path, fourcc, fps, (width, height))
    # check whether the video was opened successfully
    if not cap.isOpened():
        print("Failed to open the video")
        exit()
    frame_id = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            print("Video inference finished...")
            break
        frame_id += 1
        # if frame_id % 3 != 0:
        #     continue
        canvas, r, dw, dh, new_unpad_w, new_unpad_h = resize_unscale(frame, (640, 640))
        img = canvas.copy().astype(np.float32)  # (640, 640, 3) RGB
        img /= 255.0
        img[:, :, 0] -= 0.485
        img[:, :, 1] -= 0.456
        img[:, :, 2] -= 0.406
        img[:, :, 0] /= 0.229
        img[:, :, 1] /= 0.224
        img[:, :, 2] /= 0.225
        img = img.transpose(2, 0, 1)
        img = np.expand_dims(img, 0)  # (1, 3, 640, 640)
        # inference
        img_det, boxes, color_seg, fps = infer(frame, img, r, dw, dh, new_unpad_w, new_unpad_h)
        if img_det is None:
            continue
        color_mask = np.mean(color_seg, 2)
        img_merge = canvas[dh:dh + new_unpad_h, dw:dw + new_unpad_w, :]
        # merge: resize to original size
        img_merge[color_mask != 0] = \
            img_merge[color_mask != 0] * 0.5 + color_seg[color_mask != 0] * 0.5
        img_merge = img_merge.astype(np.uint8)
        img_merge = cv2.resize(img_merge, (width, height), interpolation=cv2.INTER_LINEAR)
        # overlay: "frame: {frame_id}  fps: {fps}  {n} vehicles ahead..."
        img_merge = cv2AddChineseText(img_merge,
                                      f'帧数:{frame_id} 帧率:{fps} 前方共有 {boxes.shape[0]} 辆车...',
                                      (100, 50), textColor=(0, 0, 255), textSize=30)
        # overlay: "green area ahead is drivable, red marks the detected lane lines..."
        img_merge = cv2AddChineseText(img_merge,
                                      '前方绿色区域为可行驶区域,红色为检出的车道线...',
                                      (100, 100), textColor=(0, 0, 255), textSize=30)
        for i in range(boxes.shape[0]):
            x1, y1, x2, y2, conf, label = boxes[i]
            x1, y1, x2, y2, label = int(x1), int(y1), int(x2), int(y2), int(label)
            img_merge = cv2.rectangle(img_merge, (x1, y1), (x2, y2), (0, 255, 0), 2, 2)
        # uncomment to preview frames
        # cv2.imshow('img_merge', img_merge)
        # cv2.waitKey(0)
        writer.write(img_merge)
    cap.release()     # release the video capture
    writer.release()  # release the video writer
    cv2.destroyAllWindows()
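A hypothetical entry point for the script (the file names below are placeholders):

if __name__ == '__main__':
    # Replace with your own input video and desired output path.
    main('test.mp4', 'result.mp4')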
Summary and Learning Experience
I learned all of this in the Aidlux team's training camp, where the instructor presented the whole project's workflow and details through video lectures. Whether you are new to AI algorithms or an experienced practitioner, the camp has a lot to offer, and the Aidlux engineering practice content is all substance. I ran into quite a few problems along the way, but the instructor and the other trainees were patient and thorough in helping resolve them. As someone just entering this field and new to the Aidlux platform, I found the experience eye-opening. By the end of the process I had learned how to deploy a model on Aidlux, which gave me a real sense of accomplishment. Special thanks to the instructor and the Aidlux team for their work; I wish them continued success in AI algorithm development. Finally, here are the links to the demo videos of this autonomous driving perception and intelligent early-warning project. To get the full code, please follow the Aidlux official WeChat account!
https://www.bilibili.com/video/BV11V411g7os/
https://www.bilibili.com/video/BV1RV4y1h7vA/