Use the widget below to experiment with MT-YOLOv6. You can detect COCO classes such as people, vehicles, animals, and household items.
The YOLOv6 repository was published in June 2022 by Meituan, and it claims new state-of-the-art performance on the COCO dataset benchmark. We'll leave it to the community to determine whether the name is the best representation of the architecture.
In any case, it's clear MT-YOLOv6 (hereafter YOLOv6 for brevity) is popular. In a couple of short weeks, the repo has attracted over 2,000 stars and 300 forks.
YOLOv6 claims to set a new state-of-the-art performance on the COCO dataset benchmark. As the authors detail, YOLOv6-s achieves 43.1 mAP on the COCO val2017 dataset (at 520 FPS on a T4 using TensorRT FP16 for batch-size-32 inference).
(For a point of comparison, YOLOv5-s achieves 37.4 mAP@0.5:0.95 on the same COCO benchmark.)
The YOLOv6 repository authors published the evaluation graphic below, showing YOLOv6 outperforming YOLOv5 and YOLOX at similar model sizes.
For further reading, check out this blog.
MT-YOLOv6 is licensed under a GPL-3.0 license.
| Model | Size | mAP (val, 0.5:0.95) | Speed, T4 TRT FP16 b1 (FPS) | Speed, T4 TRT FP16 b32 (FPS) | Params (M) | FLOPs (G) |
|---|---|---|---|---|---|---|
| YOLOv6-N | 640 | 35.9 (300e) / 36.3 (400e) | 802 | 1234 | 4.3 | 11.1 |
| YOLOv6-T | 640 | 40.3 (300e) / 41.1 (400e) | 449 | 659 | 15.0 | 36.7 |
| YOLOv6-S | 640 | 43.5 (300e) / 43.8 (400e) | 358 | 495 | 17.2 | 44.2 |
| YOLOv6-M | 640 | 49.5 | 179 | 233 | 34.3 | 82.2 |
| YOLOv6-L-ReLU | 640 | 51.7 | 113 | 149 | 58.5 | 144.0 |
| YOLOv6-L | 640 | 52.5 | 98 | 121 | 58.5 | 144.0 |
YOLOv6 comes in a variety of models. Its largest, YOLOv6-L, achieves the highest mAP, but it is also the slowest of the group. In contrast, YOLOv6-N is by far the fastest and smallest model, though it sacrifices accuracy as a result.
You can use Roboflow Inference to deploy an MT-YOLOv6 API on your hardware. You can deploy the model on CPU devices (e.g. Raspberry Pi, AI PCs) and GPU devices (e.g. NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
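For illustration, here is a minimal sketch of querying a deployed model from Python using the `inference` package (`pip install inference`). The model ID and API key shown are placeholders, not values from this page; substitute the ones from your own Roboflow project.

```python
# Minimal sketch: run a Roboflow-hosted detection model on a local image.
# Assumes the `inference` package is installed and that "your-project/1"
# and "YOUR_ROBOFLOW_API_KEY" are replaced with your own project's values.
from inference import get_model

# Load the model by its Roboflow model ID (placeholder shown here).
model = get_model(model_id="your-project/1", api_key="YOUR_ROBOFLOW_API_KEY")

# Run inference on an image file and print the predicted bounding boxes.
results = model.infer("example.jpg")
for prediction in results[0].predictions:
    print(prediction.class_name, prediction.confidence,
          prediction.x, prediction.y, prediction.width, prediction.height)
```

The same deployed endpoint can also be called over HTTP, which is useful when the model runs on a separate device (for example, a Jetson on your network) from the application consuming the predictions.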