Official repository
https://github.com/ultralytics/ultralytics
Tutorial
https://gitcode.net/mirrors/ultralytics/ultralytics?utm_source=csdn_github_accelerator
Pip-install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8.
1 Create the environment

```bash
conda create -n py39-yolov8 python=3.9
```

Activate it:

```bash
activate py39-yolov8
```
2 Install the libraries

```bash
# Clone the repository
git clone https://github.com/ultralytics/ultralytics.git
# Install the dependencies
pip install -r requirements.txt
```
In recent releases, Ultralytics YOLOv8 provides both a full command-line interface (CLI) and a Python SDK for running training, validation, and inference tasks.
To use the yolo command-line interface (CLI), we need to install the ultralytics package:

```bash
pip install ultralytics
```
3 Test

```bash
yolo predict model=yolov8n.pt source='ultralytics/assets/bus.jpg' show=True save=True
```
Python test
https://gitcode.net/mirrors/ultralytics/ultralytics?utm_source=csdn_github_accelerator
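The same prediction can also be run from the Python SDK. A minimal sketch, equivalent to the CLI call above (it assumes the bundled bus.jpg sample is present at the repo-relative path):

```python
from ultralytics import YOLO

# Load a pretrained detection model
model = YOLO('yolov8n.pt')

# Run inference on the bundled sample image, display and save the result
results = model.predict(source='ultralytics/assets/bus.jpg', show=True, save=True)
print(results[0].boxes)  # detected bounding boxes
```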
1 Tracking
https://docs.ultralytics.com/modes/track/
Object tracking is a task that involves identifying the location and class of objects, then assigning a unique ID to each detection in a video stream.
The output of the tracker is the same as detection, with object IDs added.
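Concretely, each tracking result carries the usual boxes plus an id attribute. A minimal sketch of reading them (path/to/video.mp4 is a placeholder; id is None on frames where no track has been assigned):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model.track(source='path/to/video.mp4')

boxes = results[0].boxes
if boxes.id is not None:
    print(boxes.xyxy)      # bounding boxes, as in plain detection
    print(boxes.id.int())  # the additional per-box track IDs
```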
Available trackers
Ultralytics YOLO supports the following tracking algorithms. They can be enabled by passing the relevant YAML configuration file, e.g. tracker=tracker_type.yaml:

- BoT-SORT: enabled with botsort.yaml
- ByteTrack: enabled with bytetrack.yaml

The default tracker is BoT-SORT.
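A tracker can also be selected directly from the CLI; a minimal sketch (the YouTube URL is just the sample stream reused in the examples below):

```bash
yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" tracker="bytetrack.yaml"
```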
Tracking
To run the tracker on a video stream, use a trained Detect, Segment, or Pose model such as YOLOv8n, YOLOv8n-seg, or YOLOv8n-pose.

```python
from ultralytics import YOLO

# Load an official or custom model
model = YOLO('yolov8n.pt')  # Load an official Detect model
#model = YOLO('yolov8n-seg.pt')   # Load an official Segment model
#model = YOLO('yolov8n-pose.pt')  # Load an official Pose model
#model = YOLO('path/to/best.pt')  # Load a custom trained model

# Perform tracking with the model
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True)  # Tracking with default tracker
#results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True, tracker="bytetrack.yaml")  # Tracking with ByteTrack tracker
```

Weights are downloaded automatically from:

```
https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt
```

Missing libraries are also downloaded and installed automatically.
Configuration
Tracking arguments
Tracking configuration shares properties with Predict mode, such as conf, iou, and show. For further configuration, refer to the Predict mode page.
```python
from ultralytics import YOLO

# Configure the tracking parameters and run the tracker
model = YOLO('yolov8n.pt')
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", conf=0.3, iou=0.5, show=True)
```
Tracker selection
Ultralytics also lets you use a modified tracker configuration file. To do so, simply copy a tracker configuration file (for example, custom_tracker.yaml) from ultralytics/cfg/trackers and modify any of its settings (except tracker_type) to suit your needs.

```python
from ultralytics import YOLO

# Load the model and run the tracker with a custom configuration file
model = YOLO('yolov8n.pt')
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", tracker='custom_tracker.yaml')
```
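As a rough sketch of what such a file contains (field names and default values vary between ultralytics versions, so treat these as illustrative rather than authoritative; in practice, copy the shipped botsort.yaml instead of typing it from scratch):

```yaml
# custom_tracker.yaml -- illustrative values modeled on the shipped botsort.yaml
tracker_type: botsort    # must remain botsort or bytetrack
track_high_thresh: 0.5   # threshold for the first association
track_low_thresh: 0.1    # threshold for the second association
new_track_thresh: 0.6    # threshold to start a new track
track_buffer: 30         # frames to keep lost tracks alive
match_thresh: 0.8        # matching threshold for association
```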
Python examples
Persisting tracks loop
Here is a Python script that uses OpenCV (cv2) and YOLOv8 to run object tracking on video frames. It assumes you have already installed the necessary packages (opencv-python and ultralytics).

```python
import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
video_path = "path/to/video.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
Plotting tracks over time
Visualizing object tracks across consecutive frames can provide valuable insight into the movement patterns and behavior of detected objects in a video. With Ultralytics YOLOv8, plotting these tracks is a seamless and efficient process.
In the following example, we demonstrate how to use YOLOv8's tracking capability to plot the movement of detected objects across multiple video frames. The script opens a video file, reads it frame by frame, and uses the YOLO model to identify and track various objects. By retaining the center points of the detected bounding boxes and connecting them, we can draw lines that represent the paths followed by the tracked objects.
```python
from collections import defaultdict

import cv2
import numpy as np
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
#video_path = "path/to/video.mp4"
video_path = "video1.mp4"
#video_path = 1
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

cv2.namedWindow("Car Tracking", 0)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)
        print("Number of results:", len(results))

        # Get the boxes and track IDs
        boxes = results[0].boxes.xywh.cpu()
        cur_carnum = len(boxes)
        print("Current object count:", cur_carnum)
        if len(boxes) == 0 or results[0].boxes.id is None:
            # Nothing to track in this frame: show the raw frame and move on
            cv2.imshow("Car Tracking", frame)
            cv2.waitKey(1)
            continue
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Plot the tracks
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # x, y center point
            if len(track) > 30:  # retain the track for the last 30 frames
                track.pop(0)

            # Draw the tracking lines
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=(0, 0, 255), thickness=3)

        # Display the annotated frame
        cv2.imshow("Car Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
Multithreaded tracking
Multithreaded tracking makes it possible to run object tracking on multiple video streams simultaneously. This is particularly useful when handling several video inputs, such as feeds from multiple surveillance cameras, where concurrent processing can greatly improve efficiency and performance.
In the provided Python script, we use Python's threading module to run multiple instances of the tracker concurrently. Each thread is responsible for running the tracker on one video file, and all threads run simultaneously in the background.
To make sure each thread receives the correct parameters (the video file and the model to use), we define a function run_tracker_in_thread that accepts these parameters and contains the main tracking loop. It reads the video frame by frame, runs the tracker, and displays the results.
Two different models are used in this example: yolov8n.pt and yolov8n-seg.pt, each tracking objects in a different video file. The video files are specified in video_file1 and video_file2.
The daemon=True parameter passed to threading.Thread means the threads are shut down as soon as the main program finishes. We then start the threads with start() and use join() to make the main thread wait until both tracker threads have completed.
Finally, after all threads have finished their tasks, the windows displaying the results are closed with cv2.destroyAllWindows().
```python
import threading

import cv2
from ultralytics import YOLO


def run_tracker_in_thread(filename, model):
    video = cv2.VideoCapture(filename)
    frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    for _ in range(frames):
        ret, frame = video.read()
        if ret:
            results = model.track(source=frame, persist=True)
            res_plotted = results[0].plot()
            # Give each thread its own window so the streams don't overwrite each other
            cv2.imshow(f"Tracking {filename}", res_plotted)
            if cv2.waitKey(1) == ord('q'):
                break


# Load the models
model1 = YOLO('yolov8n.pt')
model2 = YOLO('yolov8n-seg.pt')

# Define the video files for the trackers
video_file1 = 'path/to/video1.mp4'
video_file2 = 'path/to/video2.mp4'

# Create the tracker threads
tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(video_file1, model1), daemon=True)
tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(video_file2, model2), daemon=True)

# Start the tracker threads
tracker_thread1.start()
tracker_thread2.start()

# Wait for the tracker threads to finish
tracker_thread1.join()
tracker_thread2.join()

# Clean up and close windows
cv2.destroyAllWindows()
This example can easily be extended to handle more video files and models by creating more threads and applying the same methodology.
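A minimal sketch of that extension, reusing the run_tracker_in_thread function defined in the previous example; the video paths and model weights below are placeholders:

```python
import threading

import cv2
from ultralytics import YOLO

# Placeholder (source, weights) pairs; substitute your own streams and models
streams = [
    ('path/to/video1.mp4', 'yolov8n.pt'),
    ('path/to/video2.mp4', 'yolov8n-seg.pt'),
    ('path/to/video3.mp4', 'yolov8n-pose.pt'),
]

# One tracker thread per stream, each with its own model instance
threads = [
    threading.Thread(target=run_tracker_in_thread, args=(src, YOLO(weights)), daemon=True)
    for src, weights in streams
]

for t in threads:
    t.start()
for t in threads:
    t.join()

cv2.destroyAllWindows()
```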
More test examples
https://docs.ultralytics.com/modes/track/#multithreaded-tracking
Manual weight download
https://github.com/ultralytics/assets/releases/
Pose estimation
https://docs.ultralytics.com/tasks/pose/
https://docs.ultralytics.com/modes/predict/#inference-arguments

```python
from collections import defaultdict

import cv2
import numpy as np
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n-pose.pt')  # yolov8n yolov8n-pose

# Open the video file (0 = default webcam)
#video_path = "path/to/video.mp4"
video_path = 0
cap = cv2.VideoCapture(video_path)

# Store the track history (unused here; kept over from the tracking example)
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        results = model(frame)
        print(results)

        # Process results list
        for result in results:
            boxes = result.boxes          # Boxes object for bbox outputs
            masks = result.masks          # Masks object for segmentation mask outputs
            keypoints = result.keypoints  # Keypoints object for pose outputs
            probs = result.probs          # Probs object for classification outputs

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Inference", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
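To go beyond plotting, the keypoint coordinates can be read off the results directly. A minimal sketch, assuming the Keypoints attributes .xy (pixel coordinates) that ultralytics exposes for pose models, and using the bundled bus.jpg sample as input:

```python
from ultralytics import YOLO

# Run the pose model once on the bundled sample image
model = YOLO('yolov8n-pose.pt')
results = model('ultralytics/assets/bus.jpg')

for result in results:
    kpts = result.keypoints
    if kpts is not None and kpts.xy.shape[0] > 0:
        print(kpts.xy.shape)  # (num_persons, num_keypoints, 2) pixel coordinates
        print(kpts.xy[0])     # the 17 COCO keypoints of the first detected person
```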