YOLOv8 Source Code Analysis (20)


description: Learn how to use the TritonRemoteModel class for interacting with remote Triton Inference Server models. Detailed guide with code examples and attributes.
keywords: Ultralytics, TritonRemoteModel, Triton Inference Server, model client, inference, remote model, machine learning, AI, Python

Reference for ultralytics/utils/triton.py

!!! Note

    This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/triton.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/triton.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/utils/triton.py) 🛠️. Thank you 🙏!

::: ultralytics.utils.triton.TritonRemoteModel




description: Explore how to use ultralytics.utils.tuner.py for efficient hyperparameter tuning with Ray Tune. Learn implementation details and example usage.
keywords: Ultralytics, tuner, hyperparameter tuning, Ray Tune, YOLO, machine learning, AI, optimization

Reference for ultralytics/utils/tuner.py

!!! Note

    This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/tuner.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/tuner.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/utils/tuner.py) 🛠️. Thank you 🙏!

::: ultralytics.utils.tuner.run_ray_tune




description: Explore the comprehensive reference for ultralytics.utils in the Ultralytics library. Enhance your ML workflow with these utility functions.
keywords: Ultralytics, utils, TQDM, Python, ML, Machine Learning utilities, YOLO, threading, logging, yaml, settings

Reference for ultralytics/utils/__init__.py

!!! Note

    This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/\_\_init\_\_.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/__init__.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/utils/__init__.py) 🛠️. Thank you 🙏!

::: ultralytics.utils.TQDM





::: ultralytics.utils.SimpleClass





::: ultralytics.utils.IterableSimpleNamespace
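Conceptually, this class behaves like a `SimpleNamespace` that can also be iterated over as key-value pairs and queried with a `get` method. A simplified stand-in for illustration (not the actual Ultralytics implementation):

```python
from types import SimpleNamespace


class IterableNamespace(SimpleNamespace):
    """Illustrative sketch: a SimpleNamespace that iterates over (key, value) pairs."""

    def __iter__(self):
        # Iterating yields attribute name/value pairs, so dict(obj) works
        return iter(vars(self).items())

    def get(self, key, default=None):
        # Dict-style lookup with a fallback default
        return getattr(self, key, default)


cfg = IterableNamespace(imgsz=640, epochs=100)
print(dict(cfg))  # {'imgsz': 640, 'epochs': 100}
print(cfg.get("device", "cpu"))  # cpu
```

This pattern is handy for configuration objects: attribute access for readability, plus dict-like iteration for serialization.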





::: ultralytics.utils.ThreadingLocked
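The idea behind a threading-locked decorator is to serialize calls to a function with a shared lock so only one thread executes it at a time. A minimal stand-alone sketch of the concept (not the Ultralytics implementation itself):

```python
import threading
from functools import wraps


def threading_locked(func):
    """Illustrative sketch: serialize calls to func with a per-function lock."""
    lock = threading.Lock()

    @wraps(func)
    def wrapper(*args, **kwargs):
        with lock:  # only one thread may execute func at a time
            return func(*args, **kwargs)

    return wrapper


counter = {"n": 0}


@threading_locked
def increment():
    counter["n"] += 1


threads = [threading.Thread(target=increment) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["n"])  # 8
```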





::: ultralytics.utils.TryExcept
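A try-except helper of this kind wraps a block of code, reports any exception, and lets execution continue. A simplified context-manager sketch of the pattern (illustrative only; the message string and attribute names below are hypothetical):

```python
class TryExceptSketch:
    """Illustrative sketch: a context manager that swallows and reports exceptions."""

    def __init__(self, msg=""):
        self.msg = msg
        self.caught = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, value, traceback):
        if value is not None:
            self.caught = value
            print(f"{self.msg}: {value}")
        return True  # returning True suppresses the exception


with TryExceptSketch("loading config") as te:
    raise FileNotFoundError("settings.yaml missing")
print("execution continues")
```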





::: ultralytics.utils.Retry
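A retry decorator re-invokes a function when it raises, up to a fixed number of attempts, optionally sleeping between tries. A minimal sketch of the idea (parameter names here are illustrative, not the Ultralytics signature):

```python
import time
from functools import wraps


def retry(times=3, delay=0.01):
    """Illustrative sketch: re-invoke func up to `times` times on exception."""

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == times - 1:
                        raise  # out of attempts: propagate the last error
                    time.sleep(delay)

        return wrapper

    return decorator


calls = {"n": 0}


@retry(times=3, delay=0)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"


print(flaky())  # ok
```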





::: ultralytics.utils.SettingsManager





::: ultralytics.utils.plt_settings





::: ultralytics.utils.set_logging





::: ultralytics.utils.emojis





::: ultralytics.utils.yaml_save





::: ultralytics.utils.yaml_load





::: ultralytics.utils.yaml_print





::: ultralytics.utils.read_device_model





::: ultralytics.utils.is_ubuntu





::: ultralytics.utils.is_colab





::: ultralytics.utils.is_kaggle





::: ultralytics.utils.is_jupyter





::: ultralytics.utils.is_docker





::: ultralytics.utils.is_raspberrypi





::: ultralytics.utils.is_jetson





::: ultralytics.utils.is_online





::: ultralytics.utils.is_pip_package





::: ultralytics.utils.is_dir_writeable





::: ultralytics.utils.is_pytest_running





::: ultralytics.utils.is_github_action_running





::: ultralytics.utils.get_git_dir





::: ultralytics.utils.is_git_dir





::: ultralytics.utils.get_git_origin_url





::: ultralytics.utils.get_git_branch





::: ultralytics.utils.get_default_args





::: ultralytics.utils.get_ubuntu_version





::: ultralytics.utils.get_user_config_dir





::: ultralytics.utils.colorstr





::: ultralytics.utils.remove_colorstr
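The `colorstr`/`remove_colorstr` pair wraps terminal text in ANSI escape codes and strips them back out. A simplified illustration of the mechanism (the real helpers support many more colors and styles):

```python
import re


def colorize(color, text):
    """Illustrative sketch: wrap text in an ANSI color code plus a reset code."""
    codes = {"red": "\033[31m", "green": "\033[32m", "bold": "\033[1m"}
    return f"{codes[color]}{text}\033[0m"


ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")


def strip_colors(text):
    """Remove ANSI escape sequences, recovering the plain string (e.g. for log files)."""
    return ANSI_RE.sub("", text)


colored = colorize("green", "success")
print(strip_colors(colored))  # success
```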





::: ultralytics.utils.threaded





::: ultralytics.utils.set_sentry





::: ultralytics.utils.deprecation_warn





::: ultralytics.utils.clean_url





::: ultralytics.utils.url2file




comments: true
description: Explore Ultralytics Solutions using YOLOv8 for object counting, blurring, security, and more. Enhance efficiency and solve real-world problems with cutting-edge AI.
keywords: Ultralytics, YOLOv8, object counting, object blurring, security systems, AI solutions, real-time analysis, computer vision applications

Ultralytics Solutions: Harness YOLOv8 to Solve Real-World Problems

Ultralytics Solutions provide cutting-edge applications of YOLO models, offering real-world solutions like object counting, blurring, and security systems, enhancing efficiency and accuracy in diverse industries. Discover the power of YOLOv8 for practical, impactful implementations.

Ultralytics Solutions Thumbnail

Solutions

Here's our curated list of Ultralytics solutions that can be used to create awesome computer vision projects.

  • Object Counting 🚀 NEW: Learn to perform real-time object counting with YOLOv8. Gain the expertise to accurately count objects in live video streams.
  • Object Cropping 🚀 NEW: Master object cropping with YOLOv8 for precise extraction of objects from images and videos.
  • Object Blurring 🚀 NEW: Apply object blurring using YOLOv8 to protect privacy in image and video processing.
  • Workouts Monitoring 🚀 NEW: Discover how to monitor workouts using YOLOv8. Learn to track and analyze various fitness routines in real time.
  • Objects Counting in Regions 🚀 NEW: Count objects in specific regions using YOLOv8 for accurate detection in varied areas.
  • Security Alarm System 🚀 NEW: Create a security alarm system with YOLOv8 that triggers alerts upon detecting new objects. Customize the system to fit your specific needs.
  • Heatmaps 🚀 NEW: Utilize detection heatmaps to visualize data intensity across a matrix, providing clear insights in computer vision tasks.
  • Instance Segmentation with Object Tracking 🚀 NEW: Implement instance segmentation and object tracking with YOLOv8 to achieve precise object boundaries and continuous monitoring.
  • VisionEye View Objects Mapping 🚀 NEW: Develop systems that mimic human eye focus on specific objects, enhancing the computer's ability to discern and prioritize details.
  • Speed Estimation 🚀 NEW: Estimate object speed using YOLOv8 and object tracking techniques, crucial for applications like autonomous vehicles and traffic monitoring.
  • Distance Calculation 🚀 NEW: Calculate distances between objects using bounding box centroids in YOLOv8, essential for spatial analysis.
  • Queue Management 🚀 NEW: Implement efficient queue management systems to minimize wait times and improve productivity using YOLOv8.
  • Parking Management 🚀 NEW: Organize and direct vehicle flow in parking areas with YOLOv8, optimizing space utilization and user experience.
  • Analytics 📊 NEW: Conduct comprehensive data analysis to discover patterns and make informed decisions, leveraging YOLOv8 for descriptive, predictive, and prescriptive analytics.
  • Live Inference with Streamlit 🚀 NEW: Leverage the power of YOLOv8 for real-time object detection directly through your web browser with a user-friendly Streamlit interface.
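Several of these solutions build on the same geometric primitives. For example, the distance calculation solution works from bounding-box centroids; a minimal pixel-space sketch of that computation (helper names are illustrative, not the Ultralytics API):

```python
import math


def centroid(box):
    """Center (x, y) of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)


def centroid_distance(box_a, box_b):
    """Euclidean distance between two box centroids, in pixels."""
    (ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
    return math.hypot(ax - bx, ay - by)


# Two detections: centroids at (5, 5) and (65, 85) -> distance 100 px
print(centroid_distance((0, 0, 10, 10), (60, 80, 70, 90)))  # 100.0
```

Converting pixel distances to real-world units additionally requires a known scale (e.g. pixels per meter) from camera calibration.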

Contribute to Our Solutions

We welcome contributions from the community! If you've mastered a particular aspect of Ultralytics YOLO that's not yet covered in our solutions, we encourage you to share your expertise. Writing a guide is a great way to give back to the community and help us make our documentation more comprehensive and user-friendly.

To get started, please read our Contributing Guide for guidelines on how to open up a Pull Request (PR) 🛠️. We look forward to your contributions!

Let's work together to make the Ultralytics YOLO ecosystem more robust and versatile 🙏!

FAQ

How can I use Ultralytics YOLO for real-time object counting?

Ultralytics YOLOv8 can be used for real-time object counting by leveraging its advanced object detection capabilities. You can follow our detailed guide on Object Counting to set up YOLOv8 for live video stream analysis. Simply install YOLOv8, load your model, and process video frames to count objects dynamically.
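Conceptually, object counting combines a tracker with a counting line: an object is counted when its tracked centroid crosses the line. A framework-independent sketch of that crossing test (names are illustrative, not the Ultralytics API):

```python
def count_line_crossings(track, line_y):
    """Count how many times a track's centroid crosses a horizontal line.

    `track` is a sequence of (x, y) centroids for one tracked object,
    ordered by frame.
    """
    crossings = 0
    for (_x0, y0), (_x1, y1) in zip(track, track[1:]):
        if (y0 < line_y) != (y1 < line_y):  # side of the line changed => crossed
            crossings += 1
    return crossings


track = [(10, 90), (12, 70), (15, 45), (18, 30)]  # one object moving up past y=50
print(count_line_crossings(track, line_y=50))  # 1
```

In a live pipeline, the tracker supplies one such centroid history per object ID, and the per-object crossing counts are summed.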

What are the benefits of using Ultralytics YOLO for security systems?

Ultralytics YOLOv8 enhances security systems by offering real-time object detection and alert mechanisms. By employing YOLOv8, you can create a security alarm system that triggers alerts when new objects are detected in the surveillance area. Learn how to set up a Security Alarm System with YOLOv8 for robust security monitoring.

How can Ultralytics YOLO improve queue management systems?

Ultralytics YOLOv8 can significantly improve queue management systems by accurately counting and tracking people in queues, thus helping to reduce wait times and optimize service efficiency. Follow our detailed guide on Queue Management to learn how to implement YOLOv8 for effective queue monitoring and analysis.

Can Ultralytics YOLO be used for workout monitoring?

Yes, Ultralytics YOLOv8 can be effectively used for monitoring workouts by tracking and analyzing fitness routines in real-time. This allows for precise evaluation of exercise form and performance. Explore our guide on Workouts Monitoring to learn how to set up an AI-powered workout monitoring system using YOLOv8.

How does Ultralytics YOLO help in creating heatmaps for data visualization?

Ultralytics YOLOv8 can generate heatmaps to visualize data intensity across a given area, highlighting regions of high activity or interest. This feature is particularly useful in understanding patterns and trends in various computer vision tasks. Learn more about creating and using Heatmaps with YOLOv8 for comprehensive data analysis and visualization.
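At its core, a detection heatmap accumulates object positions into a spatial grid, so busier regions accumulate larger counts. A minimal sketch of that accumulation step (illustrative only, not the Ultralytics implementation):

```python
def accumulate_heatmap(centroids, grid_w, grid_h, cell=10):
    """Bin detection centroids into a coarse grid of `cell`-pixel squares."""
    grid = [[0] * grid_w for _ in range(grid_h)]
    for x, y in centroids:
        gx = min(int(x // cell), grid_w - 1)  # clamp to the grid edge
        gy = min(int(y // cell), grid_h - 1)
        grid[gy][gx] += 1
    return grid


centroids = [(5, 5), (7, 4), (95, 95)]  # two detections top-left, one bottom-right
heat = accumulate_heatmap(centroids, grid_w=10, grid_h=10)
print(heat[0][0], heat[9][9])  # 2 1
```

Rendering then maps counts to a color scale (e.g. with a colormap) and overlays the result on the frame.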


comments: true
description: Master image classification using YOLOv8. Learn to train, validate, predict, and export models efficiently.
keywords: YOLOv8, image classification, AI, machine learning, pretrained models, ImageNet, model export, predict, train, validate

Image Classification

Image classification examples

Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of predefined classes.

The output of an image classifier is a single class label and a confidence score. Image classification is useful when you need to know only what class an image belongs to and don't need to know where objects of that class are located or what their exact shape is.
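For intuition, the label-plus-confidence output corresponds to a softmax over the model's class logits followed by an argmax. A small self-contained sketch (logits and class names here are made up for illustration):

```python
import math


def classify(logits, class_names):
    """Turn raw class logits into a (label, confidence) pair via softmax + argmax."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    top1 = max(range(len(probs)), key=probs.__getitem__)
    return class_names[top1], probs[top1]


label, conf = classify([0.5, 2.0, 0.1], ["cat", "dog", "bird"])
print(label, round(conf, 3))  # "dog" with the highest probability
```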



Watch: Explore Ultralytics YOLO Tasks: Image Classification using Ultralytics HUB

!!! Tip "Tip"

    YOLOv8 Classify models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/ImageNet.yaml).

Models

YOLOv8 pretrained Classify models are shown here. Detect, Segment and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.

Models download automatically from the latest Ultralytics release on first use.

| Model       | size<br>(pixels) | acc<br>top1 | acc<br>top5 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) at 640 |
| ----------- | ---------------- | ----------- | ----------- | ------------------------- | ------------------------------ | ------------- | ------------------- |
| YOLOv8n-cls | 224              | 69.0        | 88.3        | 12.9                      | 0.31                           | 2.7           | 4.3                 |
| YOLOv8s-cls | 224              | 73.8        | 91.7        | 23.4                      | 0.35                           | 6.4           | 13.5                |
| YOLOv8m-cls | 224              | 76.8        | 93.5        | 85.4                      | 0.62                           | 17.0          | 42.7                |
| YOLOv8l-cls | 224              | 76.8        | 93.5        | 163.0                     | 0.87                           | 37.5          | 99.7                |
| YOLOv8x-cls | 224              | 79.0        | 94.6        | 232.0                     | 1.01                           | 57.4          | 154.8               |
  • acc values are model accuracies on the ImageNet dataset validation set.
    Reproduce by `yolo val classify data=path/to/ImageNet device=0`
  • Speed averaged over ImageNet val images using an Amazon EC2 P4d instance.
    Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`

Train

Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments see the Configuration page.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-cls.yaml")  # build a new model from YAML
    model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)
    model = YOLO("yolov8n-cls.yaml").load("yolov8n-cls.pt")  # build from YAML and transfer weights

    # Train the model
    results = model.train(data="mnist160", epochs=100, imgsz=64)
    ```

=== "CLI"

    ```bash
    # Build a new model from YAML and start training from scratch
    yolo classify train data=mnist160 model=yolov8n-cls.yaml epochs=100 imgsz=64

    # Start training from a pretrained *.pt model
    yolo classify train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64

    # Build a new model from YAML, transfer pretrained weights to it and start training
    yolo classify train data=mnist160 model=yolov8n-cls.yaml pretrained=yolov8n-cls.pt epochs=100 imgsz=64
    ```

Dataset format

YOLO classification dataset format can be found in detail in the Dataset Guide.

Val

Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the model retains its training data and arguments as model attributes.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-cls.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Validate the model
    metrics = model.val()  # no arguments needed, dataset and settings remembered
    metrics.top1  # top1 accuracy
    metrics.top5  # top5 accuracy
    ```

=== "CLI"

    ```bash
    yolo classify val model=yolov8n-cls.pt  # val official model
    yolo classify val model=path/to/best.pt  # val custom model
    ```

Predict

Use a trained YOLOv8n-cls model to run predictions on images.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-cls.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Predict with the model
    results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
    ```

=== "CLI"

    ```bash
    yolo classify predict model=yolov8n-cls.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
    yolo classify predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
    ```

See full predict mode details in the Predict page.

Export

Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-cls.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom trained model

    # Export the model
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n-cls.pt format=onnx  # export official model
    yolo export model=path/to/best.pt format=onnx  # export custom trained model
    ```

Available YOLOv8-cls export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your model after export completes.

| Format        | `format` Argument | Model                         | Arguments                                                            |
| ------------- | ----------------- | ----------------------------- | -------------------------------------------------------------------- |
| PyTorch       | -                 | `yolov8n-cls.pt`              | -                                                                    |
| TorchScript   | `torchscript`     | `yolov8n-cls.torchscript`     | `imgsz`, `optimize`, `batch`                                         |
| ONNX          | `onnx`            | `yolov8n-cls.onnx`            | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch`             |
| OpenVINO      | `openvino`        | `yolov8n-cls_openvino_model/` | `imgsz`, `half`, `int8`, `batch`                                     |
| TensorRT      | `engine`          | `yolov8n-cls.engine`          | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| CoreML        | `coreml`          | `yolov8n-cls.mlpackage`       | `imgsz`, `half`, `int8`, `nms`, `batch`                              |
| TF SavedModel | `saved_model`     | `yolov8n-cls_saved_model/`    | `imgsz`, `keras`, `int8`, `batch`                                    |
| TF GraphDef   | `pb`              | `yolov8n-cls.pb`              | `imgsz`, `batch`                                                     |
| TF Lite       | `tflite`          | `yolov8n-cls.tflite`          | `imgsz`, `half`, `int8`, `batch`                                     |
| TF Edge TPU   | `edgetpu`         | `yolov8n-cls_edgetpu.tflite`  | `imgsz`                                                              |
| TF.js         | `tfjs`            | `yolov8n-cls_web_model/`      | `imgsz`, `half`, `int8`, `batch`                                     |
| PaddlePaddle  | `paddle`          | `yolov8n-cls_paddle_model/`   | `imgsz`, `batch`                                                     |
| NCNN          | `ncnn`            | `yolov8n-cls_ncnn_model/`     | `imgsz`, `half`, `batch`                                             |

See full export details in the Export page.

FAQ

What is the purpose of YOLOv8 in image classification?

YOLOv8 models, such as yolov8n-cls.pt, are designed for efficient image classification. They assign a single class label to an entire image along with a confidence score. This is particularly useful for applications where knowing the specific class of an image is sufficient, rather than identifying the location or shape of objects within the image.

How do I train a YOLOv8 model for image classification?

To train a YOLOv8 model, you can use either Python or CLI commands. For example, to train a yolov8n-cls model on the MNIST160 dataset for 100 epochs at an image size of 64:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-cls.pt")  # load a pretrained model (recommended for training)

    # Train the model
    results = model.train(data="mnist160", epochs=100, imgsz=64)
    ```

=== "CLI"

    ```bash
    yolo classify train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64
    ```

For more configuration options, visit the Configuration page.

Where can I find pretrained YOLOv8 classification models?

Pretrained YOLOv8 classification models can be found in the Models section. Models like yolov8n-cls.pt, yolov8s-cls.pt, yolov8m-cls.pt, etc., are pretrained on the ImageNet dataset and can be easily downloaded and used for various image classification tasks.

How can I export a trained YOLOv8 model to different formats?

You can export a trained YOLOv8 model to various formats using Python or CLI commands. For instance, to export a model to ONNX format:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-cls.pt")  # load the trained model

    # Export the model to ONNX
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n-cls.pt format=onnx  # export the trained model to ONNX format
    ```

For detailed export options, refer to the Export page.

How do I validate a trained YOLOv8 classification model?

To validate a trained model's accuracy on a dataset like MNIST160, you can use the following Python or CLI commands:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-cls.pt")  # load the trained model

    # Validate the model
    metrics = model.val()  # no arguments needed, uses the dataset and settings from training
    metrics.top1  # top1 accuracy
    metrics.top5  # top5 accuracy
    ```

=== "CLI"

    ```bash
    yolo classify val model=yolov8n-cls.pt  # validate the trained model
    ```

For more information, visit the Validate section.


comments: true
description: Learn about object detection with YOLOv8. Explore pretrained models, training, validation, prediction, and export details for efficient object recognition.
keywords: object detection, YOLOv8, pretrained models, training, validation, prediction, export, machine learning, computer vision

Object Detection

Object detection examples

Object detection is a task that involves identifying the location and class of objects in an image or video stream.

The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a scene and their approximate locations, but don't need to know their exact shapes.
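Detectors typically produce many overlapping candidate boxes, and duplicates are removed with non-maximum suppression (NMS), which compares boxes by intersection-over-union (IoU). A compact, framework-independent sketch of both operations:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0


def nms(boxes, scores, iou_thres=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thres for j in keep):
            keep.append(i)
    return keep


boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate of box 0 is suppressed
```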



Watch: Object Detection with Pre-trained Ultralytics YOLOv8 Model.

!!! Tip "Tip"

    YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).

Models

YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.

Models download automatically from the latest Ultralytics release on first use.

| Model   | size<br>(pixels) | mAP<sup>val</sup><br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| ------- | ---------------- | -------------------------- | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv8n | 640              | 37.3                       | 80.4                      | 0.99                           | 3.2           | 8.7          |
| YOLOv8s | 640              | 44.9                       | 128.4                     | 1.20                           | 11.2          | 28.6         |
| YOLOv8m | 640              | 50.2                       | 234.7                     | 1.83                           | 25.9          | 78.9         |
| YOLOv8l | 640              | 52.9                       | 375.2                     | 2.39                           | 43.7          | 165.2        |
| YOLOv8x | 640              | 53.9                       | 479.1                     | 3.53                           | 68.2          | 257.8        |
  • mAPval values are for single-model single-scale on the COCO val2017 dataset.
    Reproduce by `yolo val detect data=coco.yaml device=0`
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by `yolo val detect data=coco8.yaml batch=1 device=0|cpu`

Train

Train YOLOv8n on the COCO8 dataset for 100 epochs at image size 640. For a full list of available arguments see the Configuration page.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n.yaml")  # build a new model from YAML
    model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)
    model = YOLO("yolov8n.yaml").load("yolov8n.pt")  # build from YAML and transfer weights

    # Train the model
    results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
    ```

=== "CLI"

    ```bash
    # Build a new model from YAML and start training from scratch
    yolo detect train data=coco8.yaml model=yolov8n.yaml epochs=100 imgsz=640

    # Start training from a pretrained *.pt model
    yolo detect train data=coco8.yaml model=yolov8n.pt epochs=100 imgsz=640

    # Build a new model from YAML, transfer pretrained weights to it and start training
    yolo detect train data=coco8.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
    ```

Dataset format

YOLO detection dataset format can be found in detail in the Dataset Guide. To convert your existing dataset from other formats (like COCO) to YOLO format, please use the JSON2YOLO tool by Ultralytics.

Val

Validate trained YOLOv8n model accuracy on the COCO8 dataset. No arguments need to be passed as the model retains its training data and arguments as model attributes.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Validate the model
    metrics = model.val()  # no arguments needed, dataset and settings remembered
    metrics.box.map  # map50-95
    metrics.box.map50  # map50
    metrics.box.map75  # map75
    metrics.box.maps  # a list containing mAP50-95 for each category
    ```

=== "CLI"

    ```bash
    yolo detect val model=yolov8n.pt  # val official model
    yolo detect val model=path/to/best.pt  # val custom model
    ```

Predict

Use a trained YOLOv8n model to run predictions on images.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Predict with the model
    results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
    ```

=== "CLI"

    ```bash
    yolo detect predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
    yolo detect predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
    ```

See full predict mode details in the Predict page.

Export

Export a YOLOv8n model to a different format like ONNX, CoreML, etc.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom trained model

    # Export the model
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n.pt format=onnx  # export official model
    yolo export model=path/to/best.pt format=onnx  # export custom trained model
    ```

Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument, i.e. `format='onnx'` or `format='engine'`. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n.onnx`. Usage examples are shown for your model after export completes.

| Format        | `format` Argument | Model                     | Arguments                                                            |
| ------------- | ----------------- | ------------------------- | -------------------------------------------------------------------- |
| PyTorch       | -                 | `yolov8n.pt`              | -                                                                    |
| TorchScript   | `torchscript`     | `yolov8n.torchscript`     | `imgsz`, `optimize`, `batch`                                         |
| ONNX          | `onnx`            | `yolov8n.onnx`            | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch`             |
| OpenVINO      | `openvino`        | `yolov8n_openvino_model/` | `imgsz`, `half`, `int8`, `batch`                                     |
| TensorRT      | `engine`          | `yolov8n.engine`          | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| CoreML        | `coreml`          | `yolov8n.mlpackage`       | `imgsz`, `half`, `int8`, `nms`, `batch`                              |
| TF SavedModel | `saved_model`     | `yolov8n_saved_model/`    | `imgsz`, `keras`, `int8`, `batch`                                    |
| TF GraphDef   | `pb`              | `yolov8n.pb`              | `imgsz`, `batch`                                                     |
| TF Lite       | `tflite`          | `yolov8n.tflite`          | `imgsz`, `half`, `int8`, `batch`                                     |
| TF Edge TPU   | `edgetpu`         | `yolov8n_edgetpu.tflite`  | `imgsz`                                                              |
| TF.js         | `tfjs`            | `yolov8n_web_model/`      | `imgsz`, `half`, `int8`, `batch`                                     |
| PaddlePaddle  | `paddle`          | `yolov8n_paddle_model/`   | `imgsz`, `batch`                                                     |
| NCNN          | `ncnn`            | `yolov8n_ncnn_model/`     | `imgsz`, `half`, `batch`                                             |

See full export details in the Export page.

FAQ

How do I train a YOLOv8 model on my custom dataset?

Training a YOLOv8 model on a custom dataset involves a few steps:

  1. Prepare the Dataset: Ensure your dataset is in the YOLO format. For guidance, refer to our Dataset Guide.
  2. Load the Model: Use the Ultralytics YOLO library to load a pre-trained model or create a new model from a YAML file.
  3. Train the Model: Execute the train method in Python or the yolo detect train command in CLI.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a pretrained model
    model = YOLO("yolov8n.pt")

    # Train the model on your custom dataset
    model.train(data="my_custom_dataset.yaml", epochs=100, imgsz=640)
    ```

=== "CLI"

    ```bash
    yolo detect train data=my_custom_dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
    ```

For detailed configuration options, visit the Configuration page.

What pretrained models are available in YOLOv8?

Ultralytics YOLOv8 offers a range of pretrained models: Detect, Segment, and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.

For a detailed list and performance metrics, refer to the Models section.

How can I validate the accuracy of my trained YOLOv8 model?

To validate the accuracy of your trained YOLOv8 model, you can use the .val() method in Python or the yolo detect val command in CLI. This will provide metrics like mAP50-95, mAP50, and more.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load the model
    model = YOLO("path/to/best.pt")

    # Validate the model
    metrics = model.val()
    print(metrics.box.map)  # mAP50-95
    ```

=== "CLI"

    ```bash
    yolo detect val model=path/to/best.pt
    ```

For more validation details, visit the Val page.

What formats can I export a YOLOv8 model to?

Ultralytics YOLOv8 allows exporting models to various formats such as ONNX, TensorRT, CoreML, and more to ensure compatibility across different platforms and devices.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load the model
    model = YOLO("yolov8n.pt")

    # Export the model to ONNX format
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n.pt format=onnx
    ```

Check the full list of supported formats and instructions on the Export page.

Why should I use Ultralytics YOLOv8 for object detection?

Ultralytics YOLOv8 is designed to offer state-of-the-art performance for object detection, segmentation, and pose estimation. Here are some key advantages:

  1. Pretrained Models: Utilize models pretrained on popular datasets like COCO and ImageNet for faster development.
  2. High Accuracy: Achieves impressive mAP scores, ensuring reliable object detection.
  3. Speed: Optimized for real-time inference, making it ideal for applications requiring swift processing.
  4. Flexibility: Export models to various formats like ONNX and TensorRT for deployment across multiple platforms.

Explore our Blog for use cases and success stories showcasing YOLOv8 in action.


comments: true
description: Explore Ultralytics YOLOv8 for detection, segmentation, classification, OBB, and pose estimation with high accuracy and speed. Learn how to apply each task.
keywords: Ultralytics YOLOv8, detection, segmentation, classification, oriented object detection, pose estimation, computer vision, AI framework

Ultralytics YOLOv8 Tasks


Ultralytics YOLO supported tasks

YOLOv8 is an AI framework that supports multiple computer vision tasks. The framework can be used to perform detection, segmentation, oriented object detection (OBB), classification, and pose estimation. Each of these tasks has a different objective and use case.



Watch: Explore Ultralytics YOLO Tasks: Object Detection, Segmentation, OBB, Tracking, and Pose Estimation.

Detection

Detection is the primary task supported by YOLOv8. It involves detecting objects in an image or video frame and drawing bounding boxes around them. The detected objects are classified into different categories based on their features. YOLOv8 can detect multiple objects in a single image or video frame with high accuracy and speed.

Detection Examples

Segmentation

Segmentation is a task that involves segmenting an image into different regions based on the content of the image. Each region is assigned a label based on its content. This task is useful in applications such as image segmentation and medical imaging. YOLOv8 uses a variant of the U-Net architecture to perform segmentation.

Segmentation Examples

Classification

Classification is a task that involves classifying an image into different categories. YOLOv8 can be used to classify images based on their content. It uses a variant of the EfficientNet architecture to perform classification.

Classification Examples

Pose

Pose/keypoint detection is a task that involves detecting specific points in an image or video frame. These points are referred to as keypoints and are used to track movement or pose estimation. YOLOv8 can detect keypoints in an image or video frame with high accuracy and speed.

Pose Examples

OBB

Oriented object detection goes a step further than regular object detection by introducing an extra angle to locate objects more accurately in an image. YOLOv8 can detect rotated objects in an image or video frame with high accuracy and speed.

Oriented Detection

Conclusion

YOLOv8 supports multiple tasks, including detection, segmentation, classification, oriented object detection, and keypoint detection. Each of these tasks has different objectives and use cases. By understanding the differences between these tasks, you can choose the appropriate one for your computer vision application.

FAQ

What tasks can Ultralytics YOLOv8 perform?

Ultralytics YOLOv8 is a versatile AI framework capable of performing various computer vision tasks with high accuracy and speed. These tasks include:

  • Detection: Identifying and localizing objects in images or video frames by drawing bounding boxes around them.
  • Segmentation: Segmenting images into different regions based on their content, useful for applications like medical imaging.
  • Classification: Categorizing entire images based on their content, leveraging variants of the EfficientNet architecture.
  • Pose estimation: Detecting specific keypoints in an image or video frame to track movements or poses.
  • Oriented Object Detection (OBB): Detecting rotated objects with an added orientation angle for enhanced accuracy.

How do I use Ultralytics YOLOv8 for object detection?

To use Ultralytics YOLOv8 for object detection, follow these steps:

  1. Prepare your dataset in the appropriate format.
  2. Train the YOLOv8 model using the detection task.
  3. Use the model to make predictions by feeding in new images or video frames.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # Load pre-trained model
    results = model.predict(source="image.jpg")  # Perform object detection
    results[0].show()
    ```

=== "CLI"

    ```bash
    yolo detect model=yolov8n.pt source='image.jpg'
    ```

For more detailed instructions, check out our detection examples.

What are the benefits of using YOLOv8 for segmentation tasks?

Using YOLOv8 for segmentation tasks provides several advantages:

  1. High Accuracy: The segmentation task leverages a variant of the U-Net architecture to achieve precise segmentation.
  2. Speed: YOLOv8 is optimized for real-time applications, offering quick processing even for high-resolution images.
  3. Multiple Applications: It is ideal for medical imaging, autonomous driving, and other applications requiring detailed image segmentation.

Learn more about the benefits and use cases of YOLOv8 for segmentation in the segmentation section.

Can Ultralytics YOLOv8 handle pose estimation and keypoint detection?

Yes, Ultralytics YOLOv8 can effectively perform pose estimation and keypoint detection with high accuracy and speed. This feature is particularly useful for tracking movements in sports analytics, healthcare, and human-computer interaction applications. YOLOv8 detects keypoints in an image or video frame, allowing for precise pose estimation.

For more details and implementation tips, visit our pose estimation examples.

Why should I choose Ultralytics YOLOv8 for oriented object detection (OBB)?

Oriented Object Detection (OBB) with YOLOv8 provides enhanced precision by detecting objects with an additional angle parameter. This feature is beneficial for applications requiring accurate localization of rotated objects, such as aerial imagery analysis and warehouse automation.

  • Increased Precision: The angle component reduces false positives for rotated objects.
  • Versatile Applications: Useful for tasks in geospatial analysis, robotics, etc.

Check out the Oriented Object Detection section for more details and examples.


comments: true
description: Discover how to detect objects with rotation for higher precision using YOLOv8 OBB models. Learn, train, validate, and export OBB models effortlessly.
keywords: Oriented Bounding Boxes, OBB, Object Detection, YOLOv8, Ultralytics, DOTAv1, Model Training, Model Export, AI, Machine Learning

Oriented Bounding Boxes Object Detection

Oriented object detection goes a step further than object detection by introducing an extra angle to locate objects more accurately in an image.

The output of an oriented object detector is a set of rotated bounding boxes that exactly enclose the objects in the image, along with class labels and confidence scores for each box. Oriented object detection is a good choice when you need to identify objects of interest in a scene and their orientation, but don't need to know their exact shape.

!!! Tip "Tip"

YOLOv8 OBB models use the `-obb` suffix, i.e. `yolov8n-obb.pt` and are pretrained on [DOTAv1](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/DOTAv1.yaml).

Watch: Object Detection using Ultralytics YOLOv8 Oriented Bounding Boxes (YOLOv8-OBB)

Watch: Object Detection with YOLOv8-OBB using Ultralytics HUB

Visual Samples

Ships Detection using OBB Vehicle Detection using OBB

Models

YOLOv8 pretrained OBB models are shown here, which are pretrained on the DOTAv1 dataset.

Models download automatically from the latest Ultralytics release on first use.

| Model       | size<br>(pixels) | mAP<sup>test</sup><br>50 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| ----------- | ---------------- | ------------------------ | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv8n-obb | 1024             | 78.0                     | 204.77                    | 3.57                           | 3.1           | 23.3         |
| YOLOv8s-obb | 1024             | 79.5                     | 424.88                    | 4.07                           | 11.4          | 76.3         |
| YOLOv8m-obb | 1024             | 80.5                     | 763.48                    | 7.61                           | 26.4          | 208.6        |
| YOLOv8l-obb | 1024             | 80.7                     | 1278.42                   | 11.83                          | 44.5          | 433.8        |
| YOLOv8x-obb | 1024             | 81.36                    | 1759.10                   | 13.23                          | 69.5          | 676.7        |

- **mAP<sup>test</sup>** values are for single-model multiscale on the DOTAv1 test dataset. Reproduce by `yolo val obb data=DOTAv1.yaml device=0 split=test` and submit merged results to DOTA evaluation.
- **Speed** averaged over DOTAv1 val images using an Amazon EC2 P4d instance. Reproduce by `yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu`

Train

Train YOLOv8n-obb on the dota8.yaml dataset for 100 epochs at image size 640. For a full list of available arguments see the Configuration page.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-obb.yaml")  # build a new model from YAML
    model = YOLO("yolov8n-obb.pt")  # load a pretrained model (recommended for training)
    model = YOLO("yolov8n-obb.yaml").load("yolov8n.pt")  # build from YAML and transfer weights

    # Train the model
    results = model.train(data="dota8.yaml", epochs=100, imgsz=640)
    ```

=== "CLI"

    ```bash
    # Build a new model from YAML and start training from scratch
    yolo obb train data=dota8.yaml model=yolov8n-obb.yaml epochs=100 imgsz=640

    # Start training from a pretrained *.pt model
    yolo obb train data=dota8.yaml model=yolov8n-obb.pt epochs=100 imgsz=640

    # Build a new model from YAML, transfer pretrained weights to it and start training
    yolo obb train data=dota8.yaml model=yolov8n-obb.yaml pretrained=yolov8n-obb.pt epochs=100 imgsz=640
    ```

Dataset format

The OBB dataset format is described in detail in the Dataset Guide.

Val

Validate trained YOLOv8n-obb model accuracy on the DOTA8 dataset. No arguments need to be passed as the model retains its training data and arguments as model attributes.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-obb.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Validate the model
    metrics = model.val(data="dota8.yaml")  # no arguments needed, dataset and settings remembered
    metrics.box.map  # map50-95(B)
    metrics.box.map50  # map50(B)
    metrics.box.map75  # map75(B)
    metrics.box.maps  # a list contains map50-95(B) of each category
    ```

=== "CLI"

    ```bash
    yolo obb val model=yolov8n-obb.pt data=dota8.yaml  # val official model
    yolo obb val model=path/to/best.pt data=path/to/data.yaml  # val custom model
    ```

Predict

Use a trained YOLOv8n-obb model to run predictions on images.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-obb.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Predict with the model
    results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
    ```

=== "CLI"

    ```bash
    yolo obb predict model=yolov8n-obb.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
    yolo obb predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
    ```

See full predict mode details in the Predict page.

Export

Export a YOLOv8n-obb model to a different format like ONNX, CoreML, etc.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-obb.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom trained model

    # Export the model
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n-obb.pt format=onnx  # export official model
    yolo export model=path/to/best.pt format=onnx  # export custom trained model
    ```

Available YOLOv8-obb export formats are in the table below. You can export to any format using the format argument, i.e. format='onnx' or format='engine'. You can predict or validate directly on exported models, i.e. yolo predict model=yolov8n-obb.onnx. Usage examples are shown for your model after export completes.

| Format        | `format` Argument | Model                         | Arguments                                                            |
| ------------- | ----------------- | ----------------------------- | -------------------------------------------------------------------- |
| PyTorch       | -                 | `yolov8n-obb.pt`              | -                                                                    |
| TorchScript   | `torchscript`     | `yolov8n-obb.torchscript`     | `imgsz`, `optimize`, `batch`                                         |
| ONNX          | `onnx`            | `yolov8n-obb.onnx`            | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch`             |
| OpenVINO      | `openvino`        | `yolov8n-obb_openvino_model/` | `imgsz`, `half`, `int8`, `batch`                                     |
| TensorRT      | `engine`          | `yolov8n-obb.engine`          | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| CoreML        | `coreml`          | `yolov8n-obb.mlpackage`       | `imgsz`, `half`, `int8`, `nms`, `batch`                              |
| TF SavedModel | `saved_model`     | `yolov8n-obb_saved_model/`    | `imgsz`, `keras`, `int8`, `batch`                                    |
| TF GraphDef   | `pb`              | `yolov8n-obb.pb`              | `imgsz`, `batch`                                                     |
| TF Lite       | `tflite`          | `yolov8n-obb.tflite`          | `imgsz`, `half`, `int8`, `batch`                                     |
| TF Edge TPU   | `edgetpu`         | `yolov8n-obb_edgetpu.tflite`  | `imgsz`                                                              |
| TF.js         | `tfjs`            | `yolov8n-obb_web_model/`      | `imgsz`, `half`, `int8`, `batch`                                     |
| PaddlePaddle  | `paddle`          | `yolov8n-obb_paddle_model/`   | `imgsz`, `batch`                                                     |
| NCNN          | `ncnn`            | `yolov8n-obb_ncnn_model/`     | `imgsz`, `half`, `batch`                                             |

See full export details in the Export page.

FAQ

What are Oriented Bounding Boxes (OBB) and how do they differ from regular bounding boxes?

Oriented Bounding Boxes (OBB) include an additional angle to enhance object localization accuracy in images. Unlike regular bounding boxes, which are axis-aligned rectangles, OBBs can rotate to fit the orientation of the object better. This is particularly useful for applications requiring precise object placement, such as aerial or satellite imagery (Dataset Guide).
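As an illustrative sketch (plain Python, not part of the Ultralytics API), a rotated box given in center-size-angle form can be expanded to its four corner points, and comparing its area to that of the axis-aligned box enclosing it shows how much background an OBB avoids for an elongated, rotated object:

```py
import math


def obb_corners(cx, cy, w, h, angle_rad):
    """Return the 4 corner points of a rotated box given as (cx, cy, w, h, angle)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]


# A 100x10 object rotated 45 degrees, e.g. a ship in an aerial image
corners = obb_corners(0, 0, 100, 10, math.radians(45))

# Axis-aligned box that encloses the same corners
xs, ys = zip(*corners)
aabb_area = (max(xs) - min(xs)) * (max(ys) - min(ys))

print(f"OBB area:  {100 * 10}")       # 1000
print(f"AABB area: {aabb_area:.0f}")  # 6050 -> mostly background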

How do I train a YOLOv8n-obb model using a custom dataset?

To train a YOLOv8n-obb model with a custom dataset, follow the example below using Python or CLI:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a pretrained model
    model = YOLO("yolov8n-obb.pt")

    # Train the model
    results = model.train(data="path/to/custom_dataset.yaml", epochs=100, imgsz=640)
    ```

=== "CLI"

    ```py
    yolo obb train data=path/to/custom_dataset.yaml model=yolov8n-obb.pt epochs=100 imgsz=640
    ```

For more training arguments, check the Configuration section.

What datasets can I use for training YOLOv8-OBB models?

YOLOv8-OBB models are pretrained on datasets like DOTAv1 but you can use any dataset formatted for OBB. Detailed information on OBB dataset formats can be found in the Dataset Guide.

How can I export a YOLOv8-OBB model to ONNX format?

Exporting a YOLOv8-OBB model to ONNX format is straightforward using either Python or CLI:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-obb.pt")

    # Export the model
    model.export(format="onnx")
    ```

=== "CLI"

    ```py
    yolo export model=yolov8n-obb.pt format=onnx
    ```

For more export formats and details, refer to the Export page.

How do I validate the accuracy of a YOLOv8n-obb model?

To validate a YOLOv8n-obb model, you can use Python or CLI commands as shown below:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-obb.pt")

    # Validate the model
    metrics = model.val(data="dota8.yaml")
    ```

=== "CLI"

    ```py
    yolo obb val model=yolov8n-obb.pt data=dota8.yaml
    ```

See full validation details in the Val section.


comments: true
description: Discover how to use YOLOv8 for pose estimation tasks. Learn about model training, validation, prediction, and exporting in various formats.
keywords: pose estimation, YOLOv8, Ultralytics, keypoints, model training, image recognition, deep learning

Pose Estimation

Pose estimation examples

Pose estimation is a task that involves identifying the location of specific points in an image, usually referred to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive features. The locations of the keypoints are usually represented as a set of 2D [x, y] or 3D [x, y, visible] coordinates.

The output of a pose estimation model is a set of points that represent the keypoints on an object in the image, usually along with the confidence scores for each point. Pose estimation is a good choice when you need to identify specific parts of an object in a scene, and their location in relation to each other.


Watch: Pose Estimation with Ultralytics YOLOv8.

Watch: Pose Estimation with Ultralytics HUB.

!!! Tip "Tip"

YOLOv8 _pose_ models use the `-pose` suffix, i.e. `yolov8n-pose.pt`. These models are trained on the [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml) dataset and are suitable for a variety of pose estimation tasks.

In the default YOLOv8 pose model, there are 17 keypoints, each representing a different part of the human body. Here is the mapping of each index to its respective body joint:

0: Nose
1: Left Eye
2: Right Eye
3: Left Ear
4: Right Ear
5: Left Shoulder
6: Right Shoulder
7: Left Elbow
8: Right Elbow
9: Left Wrist
10: Right Wrist
11: Left Hip
12: Right Hip
13: Left Knee
14: Right Knee
15: Left Ankle
16: Right Ankle
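The index table above can be captured in code. The helper below is an illustrative sketch (not part of the Ultralytics API) that maps a keypoint index to its name and finds the left/right counterpart of a joint:

```py
# COCO keypoint order used by the default YOLOv8 pose models
KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]


def mirror_index(i: int) -> int:
    """Return the index of the left/right counterpart joint (the nose maps to itself)."""
    name = KEYPOINT_NAMES[i]
    if name.startswith("left_"):
        return KEYPOINT_NAMES.index("right_" + name[5:])
    if name.startswith("right_"):
        return KEYPOINT_NAMES.index("left_" + name[6:])
    return i  # nose has no counterpart


print(KEYPOINT_NAMES[9], "<->", KEYPOINT_NAMES[mirror_index(9)])  # left_wrist <-> right_wrist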

Models

YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.

Models download automatically from the latest Ultralytics release on first use.

| Model           | size<br>(pixels) | mAP<sup>pose</sup><br>50-95 | mAP<sup>pose</sup><br>50 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| --------------- | ---------------- | --------------------------- | ------------------------ | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv8n-pose    | 640              | 50.4                        | 80.1                     | 131.8                     | 1.18                           | 3.3           | 9.2          |
| YOLOv8s-pose    | 640              | 60.0                        | 86.2                     | 233.2                     | 1.42                           | 11.6          | 30.2         |
| YOLOv8m-pose    | 640              | 65.0                        | 88.8                     | 456.3                     | 2.00                           | 26.4          | 81.0         |
| YOLOv8l-pose    | 640              | 67.6                        | 90.0                     | 784.5                     | 2.59                           | 44.4          | 168.6        |
| YOLOv8x-pose    | 640              | 69.2                        | 90.2                     | 1607.1                    | 3.73                           | 69.4          | 263.2        |
| YOLOv8x-pose-p6 | 1280             | 71.6                        | 91.2                     | 4088.7                    | 10.04                          | 99.1          | 1066.4       |

- **mAP<sup>val</sup>** values are for single-model single-scale on the COCO Keypoints val2017 dataset. Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an Amazon EC2 P4d instance. Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`

Train

Train a YOLOv8-pose model on the COCO8-pose dataset.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-pose.yaml")  # build a new model from YAML
    model = YOLO("yolov8n-pose.pt")  # load a pretrained model (recommended for training)
    model = YOLO("yolov8n-pose.yaml").load("yolov8n-pose.pt")  # build from YAML and transfer weights

    # Train the model
    results = model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
    ```

=== "CLI"

    ```bash
    # Build a new model from YAML and start training from scratch
    yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml epochs=100 imgsz=640

    # Start training from a pretrained *.pt model
    yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640

    # Build a new model from YAML, transfer pretrained weights to it and start training
    yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml pretrained=yolov8n-pose.pt epochs=100 imgsz=640
    ```

Dataset format

The YOLO pose dataset format is described in detail in the Dataset Guide. To convert your existing dataset from other formats (like COCO) to YOLO format, please use the JSON2YOLO tool from Ultralytics.

Val

Validate trained YOLOv8n-pose model accuracy on the COCO8-pose dataset. No arguments need to be passed as the model retains its training data and arguments as model attributes.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-pose.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Validate the model
    metrics = model.val()  # no arguments needed, dataset and settings remembered
    metrics.box.map  # map50-95
    metrics.box.map50  # map50
    metrics.box.map75  # map75
    metrics.box.maps  # a list contains map50-95 of each category
    ```

=== "CLI"

    ```bash
    yolo pose val model=yolov8n-pose.pt  # val official model
    yolo pose val model=path/to/best.pt  # val custom model
    ```

Predict

Use a trained YOLOv8n-pose model to run predictions on images.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-pose.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Predict with the model
    results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
    ```

=== "CLI"

    ```bash
    yolo pose predict model=yolov8n-pose.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
    yolo pose predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
    ```

See full predict mode details in the Predict page.

Export

Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-pose.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom trained model

    # Export the model
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n-pose.pt format=onnx  # export official model
    yolo export model=path/to/best.pt format=onnx  # export custom trained model
    ```

Available YOLOv8-pose export formats are in the table below. You can export to any format using the format argument, i.e. format='onnx' or format='engine'. You can predict or validate directly on exported models, i.e. yolo predict model=yolov8n-pose.onnx. Usage examples are shown for your model after export completes.

| Format        | `format` Argument | Model                          | Arguments                                                            |
| ------------- | ----------------- | ------------------------------ | -------------------------------------------------------------------- |
| PyTorch       | -                 | `yolov8n-pose.pt`              | -                                                                    |
| TorchScript   | `torchscript`     | `yolov8n-pose.torchscript`     | `imgsz`, `optimize`, `batch`                                         |
| ONNX          | `onnx`            | `yolov8n-pose.onnx`            | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch`             |
| OpenVINO      | `openvino`        | `yolov8n-pose_openvino_model/` | `imgsz`, `half`, `int8`, `batch`                                     |
| TensorRT      | `engine`          | `yolov8n-pose.engine`          | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| CoreML        | `coreml`          | `yolov8n-pose.mlpackage`       | `imgsz`, `half`, `int8`, `nms`, `batch`                              |
| TF SavedModel | `saved_model`     | `yolov8n-pose_saved_model/`    | `imgsz`, `keras`, `int8`, `batch`                                    |
| TF GraphDef   | `pb`              | `yolov8n-pose.pb`              | `imgsz`, `batch`                                                     |
| TF Lite       | `tflite`          | `yolov8n-pose.tflite`          | `imgsz`, `half`, `int8`, `batch`                                     |
| TF Edge TPU   | `edgetpu`         | `yolov8n-pose_edgetpu.tflite`  | `imgsz`                                                              |
| TF.js         | `tfjs`            | `yolov8n-pose_web_model/`      | `imgsz`, `half`, `int8`, `batch`                                     |
| PaddlePaddle  | `paddle`          | `yolov8n-pose_paddle_model/`   | `imgsz`, `batch`                                                     |
| NCNN          | `ncnn`            | `yolov8n-pose_ncnn_model/`     | `imgsz`, `half`, `batch`                                             |

See full export details in the Export page.

FAQ

What is Pose Estimation with Ultralytics YOLOv8 and how does it work?

Pose estimation with Ultralytics YOLOv8 involves identifying specific points, known as keypoints, in an image. These keypoints typically represent joints or other important features of the object. The output includes the [x, y] coordinates and confidence scores for each point. YOLOv8-pose models are specifically designed for this task and use the -pose suffix, such as yolov8n-pose.pt. These models are pre-trained on datasets like COCO keypoints and can be used for various pose estimation tasks. For more information, visit the Pose Estimation Page.

How can I train a YOLOv8-pose model on a custom dataset?

Training a YOLOv8-pose model on a custom dataset involves loading a model, either a new model defined by a YAML file or a pre-trained model. You can then start the training process using your specified dataset and parameters.

```py
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n-pose.yaml")  # build a new model from YAML
model = YOLO("yolov8n-pose.pt")  # load a pretrained model (recommended for training)

# Train the model
results = model.train(data="your-dataset.yaml", epochs=100, imgsz=640)
```
For comprehensive details on training, refer to the Train Section.

How do I validate a trained YOLOv8-pose model?

Validation of a YOLOv8-pose model involves assessing its accuracy using the same dataset parameters retained during training. Here's an example:

```py
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n-pose.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom model

# Validate the model
metrics = model.val()  # no arguments needed, dataset and settings remembered
```
For more information, visit the Val Section.

Can I export a YOLOv8-pose model to other formats, and how?

Yes, you can export a YOLOv8-pose model to various formats like ONNX, CoreML, TensorRT, and more. This can be done using either Python or the Command Line Interface (CLI).

```py
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n-pose.pt")  # load an official model
model = YOLO("path/to/best.pt")  # load a custom trained model

# Export the model
model.export(format="onnx")
```
Refer to the Export Section for more details.

What are the available Ultralytics YOLOv8-pose models and their performance metrics?

Ultralytics YOLOv8 offers various pretrained pose models such as YOLOv8n-pose, YOLOv8s-pose, and YOLOv8m-pose, among others. These models differ in size, accuracy (mAP), and speed. For instance, the YOLOv8n-pose model achieves an mAP<sup>pose</sup>50-95 of 50.4 and an mAP<sup>pose</sup>50 of 80.1. For a complete list and performance details, visit the Models Section.


comments: true
description: Master instance segmentation using YOLOv8. Learn how to detect, segment and outline objects in images with detailed guides and examples.
keywords: instance segmentation, YOLOv8, object detection, image segmentation, machine learning, deep learning, computer vision, COCO dataset, Ultralytics

Instance Segmentation

Instance segmentation examples

Instance segmentation goes a step further than object detection and involves identifying individual objects in an image and segmenting them from the rest of the image.

The output of an instance segmentation model is a set of masks or contours that outline each object in the image, along with class labels and confidence scores for each object. Instance segmentation is useful when you need to know not only where objects are in an image, but also what their exact shape is.



Watch: Run Segmentation with Pre-Trained Ultralytics YOLOv8 Model in Python.

!!! Tip "Tip"

YOLOv8 Segment models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).

Models

YOLOv8 pretrained Segment models are shown here. Detect, Segment and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.

Models download automatically from the latest Ultralytics release on first use.

| Model       | size<br>(pixels) | mAP<sup>box</sup><br>50-95 | mAP<sup>mask</sup><br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| ----------- | ---------------- | -------------------------- | --------------------------- | ------------------------- | ------------------------------ | ------------- | ------------ |
| YOLOv8n-seg | 640              | 36.7                       | 30.5                        | 96.1                      | 1.21                           | 3.4           | 12.6         |
| YOLOv8s-seg | 640              | 44.6                       | 36.8                        | 155.7                     | 1.47                           | 11.8          | 42.6         |
| YOLOv8m-seg | 640              | 49.9                       | 40.8                        | 317.0                     | 2.18                           | 27.3          | 110.2        |
| YOLOv8l-seg | 640              | 52.3                       | 42.6                        | 572.4                     | 2.79                           | 46.0          | 220.5        |
| YOLOv8x-seg | 640              | 53.4                       | 43.4                        | 712.1                     | 4.02                           | 71.8          | 344.1        |

- **mAP<sup>val</sup>** values are for single-model single-scale on the COCO val2017 dataset. Reproduce by `yolo val segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an Amazon EC2 P4d instance. Reproduce by `yolo val segment data=coco8-seg.yaml batch=1 device=0|cpu`

Train

Train YOLOv8n-seg on the COCO8-seg dataset for 100 epochs at image size 640. For a full list of available arguments see the Configuration page.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-seg.yaml")  # build a new model from YAML
    model = YOLO("yolov8n-seg.pt")  # load a pretrained model (recommended for training)
    model = YOLO("yolov8n-seg.yaml").load("yolov8n.pt")  # build from YAML and transfer weights

    # Train the model
    results = model.train(data="coco8-seg.yaml", epochs=100, imgsz=640)
    ```

=== "CLI"

    ```bash
    # Build a new model from YAML and start training from scratch
    yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml epochs=100 imgsz=640

    # Start training from a pretrained *.pt model
    yolo segment train data=coco8-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640

    # Build a new model from YAML, transfer pretrained weights to it and start training
    yolo segment train data=coco8-seg.yaml model=yolov8n-seg.yaml pretrained=yolov8n-seg.pt epochs=100 imgsz=640
    ```

Dataset format

The YOLO segmentation dataset format is described in detail in the Dataset Guide. To convert your existing dataset from other formats (like COCO) to YOLO format, please use the JSON2YOLO tool from Ultralytics.

Val

Validate trained YOLOv8n-seg model accuracy on the COCO8-seg dataset. No arguments need to be passed as the model retains its training data and arguments as model attributes.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-seg.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Validate the model
    metrics = model.val()  # no arguments needed, dataset and settings remembered
    metrics.box.map  # map50-95(B)
    metrics.box.map50  # map50(B)
    metrics.box.map75  # map75(B)
    metrics.box.maps  # a list contains map50-95(B) of each category
    metrics.seg.map  # map50-95(M)
    metrics.seg.map50  # map50(M)
    metrics.seg.map75  # map75(M)
    metrics.seg.maps  # a list contains map50-95(M) of each category
    ```

=== "CLI"

    ```bash
    yolo segment val model=yolov8n-seg.pt  # val official model
    yolo segment val model=path/to/best.pt  # val custom model
    ```

Predict

Use a trained YOLOv8n-seg model to run predictions on images.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-seg.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom model

    # Predict with the model
    results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
    ```

=== "CLI"

    ```bash
    yolo segment predict model=yolov8n-seg.pt source='https://ultralytics.com/images/bus.jpg'  # predict with official model
    yolo segment predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg'  # predict with custom model
    ```

See full predict mode details in the Predict page.

Export

Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolov8n-seg.pt")  # load an official model
    model = YOLO("path/to/best.pt")  # load a custom trained model

    # Export the model
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n-seg.pt format=onnx  # export official model
    yolo export model=path/to/best.pt format=onnx  # export custom trained model
    ```

Available YOLOv8-seg export formats are in the table below. You can export to any format using the format argument, i.e. format='onnx' or format='engine'. You can predict or validate directly on exported models, i.e. yolo predict model=yolov8n-seg.onnx. Usage examples are shown for your model after export completes.

| Format        | `format` Argument | Model                         | Arguments                                                            |
| ------------- | ----------------- | ----------------------------- | -------------------------------------------------------------------- |
| PyTorch       | -                 | `yolov8n-seg.pt`              | -                                                                    |
| TorchScript   | `torchscript`     | `yolov8n-seg.torchscript`     | `imgsz`, `optimize`, `batch`                                         |
| ONNX          | `onnx`            | `yolov8n-seg.onnx`            | `imgsz`, `half`, `dynamic`, `simplify`, `opset`, `batch`             |
| OpenVINO      | `openvino`        | `yolov8n-seg_openvino_model/` | `imgsz`, `half`, `int8`, `batch`                                     |
| TensorRT      | `engine`          | `yolov8n-seg.engine`          | `imgsz`, `half`, `dynamic`, `simplify`, `workspace`, `int8`, `batch` |
| CoreML        | `coreml`          | `yolov8n-seg.mlpackage`       | `imgsz`, `half`, `int8`, `nms`, `batch`                              |
| TF SavedModel | `saved_model`     | `yolov8n-seg_saved_model/`    | `imgsz`, `keras`, `int8`, `batch`                                    |
| TF GraphDef   | `pb`              | `yolov8n-seg.pb`              | `imgsz`, `batch`                                                     |
| TF Lite       | `tflite`          | `yolov8n-seg.tflite`          | `imgsz`, `half`, `int8`, `batch`                                     |
| TF Edge TPU   | `edgetpu`         | `yolov8n-seg_edgetpu.tflite`  | `imgsz`                                                              |
| TF.js         | `tfjs`            | `yolov8n-seg_web_model/`      | `imgsz`, `half`, `int8`, `batch`                                     |
| PaddlePaddle  | `paddle`          | `yolov8n-seg_paddle_model/`   | `imgsz`, `batch`                                                     |
| NCNN          | `ncnn`            | `yolov8n-seg_ncnn_model/`     | `imgsz`, `half`, `batch`                                             |

See full export details in the Export page.

FAQ

How do I train a YOLOv8 segmentation model on a custom dataset?

To train a YOLOv8 segmentation model on a custom dataset, you first need to prepare your dataset in the YOLO segmentation format. You can use tools like JSON2YOLO to convert datasets from other formats. Once your dataset is ready, you can train the model using Python or CLI commands:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a pretrained YOLOv8 segment model
    model = YOLO("yolov8n-seg.pt")

    # Train the model
    results = model.train(data="path/to/your_dataset.yaml", epochs=100, imgsz=640)
    ```

=== "CLI"

    ```bash
    yolo segment train data=path/to/your_dataset.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
    ```

Check the Configuration page for more available arguments.

What is the difference between object detection and instance segmentation in YOLOv8?

Object detection identifies and localizes objects within an image by drawing bounding boxes around them, whereas instance segmentation not only identifies the bounding boxes but also delineates the exact shape of each object. YOLOv8 instance segmentation models provide masks or contours that outline each detected object, which is particularly useful for tasks where knowing the precise shape of objects is important, such as medical imaging or autonomous driving.
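The difference can be made concrete with a small sketch (plain Python, independent of the Ultralytics API): from a polygon mask you can always recover a bounding box, but the box alone overstates the object's extent:

```py
def polygon_area(points):
    """Shoelace formula for the area of a simple polygon."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2


def bounding_box(points):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of a polygon."""
    xs, ys = zip(*points)
    return min(xs), min(ys), max(xs), max(ys)


# An L-shaped outline standing in for a segmentation contour
mask = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]

x0, y0, x1, y1 = bounding_box(mask)
print("mask area:", polygon_area(mask))     # 7.0
print("box  area:", (x1 - x0) * (y1 - y0))  # 16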

Why use YOLOv8 for instance segmentation?

Ultralytics YOLOv8 is a state-of-the-art model recognized for its high accuracy and real-time performance, making it ideal for instance segmentation tasks. YOLOv8 Segment models come pretrained on the COCO dataset, ensuring robust performance across a variety of objects. Additionally, YOLOv8 supports training, validation, prediction, and export functionalities with seamless integration, making it highly versatile for both research and industry applications.

How do I load and validate a pretrained YOLOv8 segmentation model?

Loading and validating a pretrained YOLOv8 segmentation model is straightforward. Here's how you can do it using both Python and CLI:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a pretrained model
    model = YOLO("yolov8n-seg.pt")

    # Validate the model
    metrics = model.val()
    print("Mean Average Precision for boxes:", metrics.box.map)
    print("Mean Average Precision for masks:", metrics.seg.map)
    ```

=== "CLI"

    ```bash
    yolo segment val model=yolov8n-seg.pt
    ```

These steps will provide you with validation metrics like Mean Average Precision (mAP), crucial for assessing model performance.

How can I export a YOLOv8 segmentation model to ONNX format?

Exporting a YOLOv8 segmentation model to ONNX format is simple and can be done using Python or CLI commands:

!!! Example

=== "Python"

    ```py
    from ultralytics import YOLO

    # Load a pretrained model
    model = YOLO("yolov8n-seg.pt")

    # Export the model to ONNX format
    model.export(format="onnx")
    ```

=== "CLI"

    ```bash
    yolo export model=yolov8n-seg.pt format=onnx
    ```

For more details on exporting to various formats, refer to the Export page.


comments: true
description: Explore Ultralytics callbacks for training, validation, exporting, and prediction. Learn how to use and customize them for your ML models.
keywords: Ultralytics, callbacks, training, validation, export, prediction, ML models, YOLOv8, Python, machine learning

Callbacks

The Ultralytics framework supports callbacks as entry points at strategic stages of the train, val, export, and predict modes. Each callback accepts a `Trainer`, `Validator`, or `Predictor` object, depending on the operation type. All properties of these objects can be found in the Reference section of the docs.



Watch: Mastering Ultralytics YOLOv8: Callbacks

Examples

Returning additional information with Prediction

In this example, we want to return the original frame with each result object. Here's how we can do that:

```py
from ultralytics import YOLO


def on_predict_batch_end(predictor):
    """Handle prediction batch end by combining results with corresponding frames; modifies predictor results."""
    _, image, _, _ = predictor.batch

    # Ensure that image is a list
    image = image if isinstance(image, list) else [image]

    # Combine the prediction results with the corresponding frames
    predictor.results = zip(predictor.results, image)


# Create a YOLO model instance
model = YOLO("yolov8n.pt")

# Add the custom callback to the model
model.add_callback("on_predict_batch_end", on_predict_batch_end)

# Iterate through the results and frames
for result, frame in model.predict():  # or model.track()
    pass
```

All callbacks

Here are all supported callbacks. See callbacks source code for additional details.

Trainer Callbacks

| Callback                    | Description                                              |
| --------------------------- | -------------------------------------------------------- |
| `on_pretrain_routine_start` | Triggered at the beginning of the pre-training routine   |
| `on_pretrain_routine_end`   | Triggered at the end of the pre-training routine         |
| `on_train_start`            | Triggered when the training starts                       |
| `on_train_epoch_start`      | Triggered at the start of each training epoch            |
| `on_train_batch_start`      | Triggered at the start of each training batch            |
| `optimizer_step`            | Triggered during the optimizer step                      |
| `on_before_zero_grad`       | Triggered before gradients are zeroed                    |
| `on_train_batch_end`        | Triggered at the end of each training batch              |
| `on_train_epoch_end`        | Triggered at the end of each training epoch              |
| `on_fit_epoch_end`          | Triggered at the end of each fit epoch                   |
| `on_model_save`             | Triggered when the model is saved                        |
| `on_train_end`              | Triggered when the training process ends                 |
| `on_params_update`          | Triggered when model parameters are updated              |
| `teardown`                  | Triggered when the training process is being cleaned up  |
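To see where these hooks fall relative to one another, here is a hypothetical skeleton of a trainer firing its callbacks in order. The event names come from the table above; the loop structure is a deliberate simplification of the real trainer, shown only to illustrate the nesting of epoch- and batch-level events:

```py
events = []


def fire(event):
    """Record an event name in place of invoking real callbacks."""
    events.append(event)


def toy_train(epochs=2, batches=2):
    """Simplified training loop showing when each hook fires."""
    fire("on_pretrain_routine_start")
    fire("on_pretrain_routine_end")
    fire("on_train_start")
    for _ in range(epochs):
        fire("on_train_epoch_start")
        for _ in range(batches):
            fire("on_train_batch_start")
            fire("optimizer_step")
            fire("on_before_zero_grad")
            fire("on_train_batch_end")
        fire("on_train_epoch_end")
        fire("on_fit_epoch_end")
        fire("on_model_save")
    fire("on_train_end")
    fire("teardown")


toy_train()
print(events[0], "...", events[-1])  # run starts with pre-training, ends with teardown
```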

Validator Callbacks

| Callback             | Description                                      |
| -------------------- | ------------------------------------------------ |
| `on_val_start`       | Triggered when the validation starts             |
| `on_val_batch_start` | Triggered at the start of each validation batch  |
| `on_val_batch_end`   | Triggered at the end of each validation batch    |
| `on_val_end`         | Triggered when the validation ends               |

Predictor Callbacks

| Callback                     | Description                                       |
| ---------------------------- | ------------------------------------------------- |
| `on_predict_start`           | Triggered when the prediction process starts      |
| `on_predict_batch_start`     | Triggered at the start of each prediction batch   |
| `on_predict_postprocess_end` | Triggered at the end of prediction postprocessing |
| `on_predict_batch_end`       | Triggered at the end of each prediction batch     |
| `on_predict_end`             | Triggered when the prediction process ends        |

Exporter Callbacks

| Callback          | Description                              |
| ----------------- | ---------------------------------------- |
| `on_export_start` | Triggered when the export process starts |
| `on_export_end`   | Triggered when the export process ends   |

FAQ

What are Ultralytics callbacks and how can I use them?

Ultralytics callbacks are specialized entry points triggered during key stages of model operations like training, validation, exporting, and prediction. These callbacks allow for custom functionality at specific points in the process, enabling enhancements and modifications to the workflow. Each callback accepts a Trainer, Validator, or Predictor object, depending on the operation type. For detailed properties of these objects, refer to the Reference section.

To use a callback, you can define a function and then add it to the model with the add_callback method. Here's an example of how to return additional information during prediction:

```py
from ultralytics import YOLO


def on_predict_batch_end(predictor):
    """Handle prediction batch end by combining results with corresponding frames; modifies predictor results."""
    _, image, _, _ = predictor.batch
    image = image if isinstance(image, list) else [image]
    predictor.results = zip(predictor.results, image)


model = YOLO("yolov8n.pt")
model.add_callback("on_predict_batch_end", on_predict_batch_end)
for result, frame in model.predict():
    pass
```

How can I customize Ultralytics training routine using callbacks?

To customize your Ultralytics training routine using callbacks, you can inject your logic at specific stages of the training process. Ultralytics YOLO provides a variety of training callbacks such as on_train_start, on_train_end, and on_train_batch_end. These allow you to add custom metrics, processing, or logging.

Here's an example of how to log additional metrics at the end of each training epoch:

```py
from ultralytics import YOLO


def on_train_epoch_end(trainer):
    """Custom logic for additional metrics logging at the end of each training epoch."""
    additional_metric = compute_additional_metric(trainer)  # compute_additional_metric is a user-defined helper
    trainer.log({"additional_metric": additional_metric})


model = YOLO("yolov8n.pt")
model.add_callback("on_train_epoch_end", on_train_epoch_end)
model.train(data="coco.yaml", epochs=10)
```

Refer to the Training Guide for more details on how to effectively use training callbacks.
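The `compute_additional_metric` helper in the example above is user-supplied, not part of Ultralytics. Purely as an illustration, such a helper just derives a number from state the trainer exposes; here is a hypothetical version tested against a stand-in trainer object:

```py
class FakeTrainer:
    """Stand-in for the trainer object passed to callbacks (illustration only)."""

    def __init__(self):
        self.epoch = 3
        self.loss_items = [0.5, 0.25, 0.25]  # hypothetical per-component losses


def compute_additional_metric(trainer):
    """Hypothetical metric: total of the current loss components."""
    return sum(trainer.loss_items)


metric = compute_additional_metric(FakeTrainer())
print(metric)  # 1.0
```

In a real callback you would read whatever attributes the actual trainer exposes; consult the Reference section for the available properties.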

Why should I use callbacks during validation in Ultralytics YOLO?

Using callbacks during validation in Ultralytics YOLO can enhance model evaluation by allowing custom processing, logging, or metrics calculation. Callbacks such as on_val_start, on_val_batch_end, and on_val_end provide entry points to inject custom logic, ensuring detailed and comprehensive validation processes.

For instance, you might want to log additional validation metrics or save intermediate results for further analysis. Here's an example of how to log custom metrics at the end of validation:

```py
from ultralytics import YOLO


def on_val_end(validator):
    """Log custom metrics at the end of validation."""
    custom_metric = compute_custom_metric(validator)  # compute_custom_metric is a user-defined helper
    validator.log({"custom_metric": custom_metric})


model = YOLO("yolov8n.pt")
model.add_callback("on_val_end", on_val_end)
model.val(data="coco.yaml")
```

Check out the Validation Guide for further insights on incorporating callbacks into your validation process.

How do I attach a custom callback for the prediction mode in Ultralytics YOLO?

To attach a custom callback for the prediction mode in Ultralytics YOLO, you define a callback function and register it with the prediction process. Common prediction callbacks include on_predict_start, on_predict_batch_end, and on_predict_end. These allow for modification of prediction outputs and integration of additional functionalities like data logging or result transformation.

Here is an example where a custom callback is used to log predictions:

```py
from ultralytics import YOLO


def on_predict_end(predictor):
    """Log predictions at the end of prediction."""
    for result in predictor.results:
        log_prediction(result)  # log_prediction is a user-defined helper


model = YOLO("yolov8n.pt")
model.add_callback("on_predict_end", on_predict_end)
results = model.predict(source="image.jpg")
```

For more comprehensive usage, refer to the Prediction Guide which includes detailed instructions and additional customization options.

What are some practical examples of using callbacks in Ultralytics YOLO?

Ultralytics YOLO supports various practical implementations of callbacks to enhance and customize different phases like training, validation, and prediction. Some practical examples include:

  1. Logging Custom Metrics: Log additional metrics at different stages, such as the end of training or validation epochs.
  2. Data Augmentation: Implement custom data transformations or augmentations during prediction or training batches.
  3. Intermediate Results: Save intermediate results such as predictions or frames for further analysis or visualization.

Example: Combining frames with prediction results during prediction using on_predict_batch_end:

```py
from ultralytics import YOLO


def on_predict_batch_end(predictor):
    """Combine prediction results with frames."""
    _, image, _, _ = predictor.batch
    image = image if isinstance(image, list) else [image]
    predictor.results = zip(predictor.results, image)


model = YOLO("yolov8n.pt")
model.add_callback("on_predict_batch_end", on_predict_batch_end)
for result, frame in model.predict():
    pass
```

Explore the Complete Callback Reference to find more options and examples.
