Use the widget below to experiment with YOLO26. You can detect COCO classes such as people, vehicles, animals, and household items.
YOLO26 is a real-time computer vision model released in late October 2025. It introduces a native end-to-end, NMS-free architecture that removes the need for a separate Non-Maximum Suppression post-processing step, simplifying inference pipelines and reducing latency, particularly on edge and low-power hardware.
YOLO26 incorporates several training and architectural updates, including the MuSGD optimizer and the use of ProgLoss with STAL to improve training stability and small-object performance. The model removes Distribution Focal Loss (DFL), improving compatibility with export targets such as ONNX and TensorRT. YOLO26 supports multiple vision tasks—object detection, segmentation, pose estimation, and oriented bounding box detection—within a unified framework and is available in model sizes ranging from Nano to Extra Large.
YOLO26 is licensed under an AGPL-3.0 license.
You can use Roboflow Inference to deploy a YOLO26 API on your hardware. You can deploy the model on CPU devices (e.g. Raspberry Pi, AI PCs) and GPU devices (e.g. NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
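As a sketch of what a client call to a self-hosted Inference server might look like, the snippet below POSTs a base64-encoded image to a locally running server and parses the JSON predictions. The model ID (`yolo26n-640`), the default port (9001), and the base64 request body format are assumptions for illustration; check the Roboflow Inference documentation for the exact model IDs and request shape your deployment expects.

```python
import base64
import json
import urllib.request

# Assumption: a Roboflow Inference server is running locally on its
# default port (9001); the model ID and API key below are placeholders.
def inference_url(model_id: str, api_key: str,
                  host: str = "http://localhost:9001") -> str:
    """Build the HTTP endpoint for a model served by a local Inference server."""
    return f"{host}/{model_id}?api_key={api_key}"

def detect(image_path: str, model_id: str, api_key: str) -> dict:
    """POST a base64-encoded image and return the JSON predictions."""
    with open(image_path, "rb") as f:
        payload = base64.b64encode(f.read())
    req = urllib.request.Request(
        inference_url(model_id, api_key),
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires a running server and valid credentials):
# predictions = detect("image.jpg", "yolo26n-640", "YOUR_API_KEY")
```

Because YOLO26 is NMS-free end to end, the predictions returned by the server need no further post-processing step on the client side.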
