On February 21st, 2024, Chien-Yao Wang, I-Hau Yeh, and Hong-Yuan Mark Liao released the “YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information” paper, which introduces a new computer vision model architecture: YOLOv9. Later, the source code was made available, allowing anyone to train their own YOLOv9 models.
According to the YOLOv9 research team, the model architecture achieves a higher mAP than existing popular YOLO models such as YOLOv8, YOLOv7, and YOLOv5, when benchmarked against the MS COCO dataset.
YOLOv9's main contributions are its performance and efficiency, its use of Programmable Gradient Information (PGI), and its use of reversible functions. Not only does YOLOv9 beat all previous YOLO models on the COCO dataset, it also uses 41% fewer parameters and 21% less computation. Additionally, YOLOv9's use of reversible functions and PGI helps the model retain more information as data flows through the network, which is a key reason for its accuracy.
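To make the reversible-function idea concrete, the paper (paraphrased here) calls a transformation r reversible when an inverse transformation v exists such that X = v_ζ(r_ψ(X)), where ψ and ζ are the parameters of r and v. Because such a mapping loses no information about the input X, the gradients computed through it stay reliable even in very deep networks.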
YOLOv9 also supports multiple use cases such as object detection and semantic segmentation.
YOLOv9 is licensed under a GPL-3.0 license.
Based on MS COCO benchmarks, YOLOv9 outperforms all previous YOLO models. This is due to its innovative use of PGI, the Generalized Efficient Layer Aggregation Network (GELAN), and reversible functions.
You can use Roboflow Inference to deploy a YOLOv9 API on your own hardware. You can deploy the model on CPU devices (e.g., a Raspberry Pi or AI PCs) and GPU devices (e.g., an NVIDIA Jetson or NVIDIA T4).
Below are instructions on how to deploy your own model API.
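As a minimal sketch of what this can look like, here is a local inference example using the Roboflow Inference Python package (`pip install inference`). The model ID, API key, and image path are placeholders; substitute the values for your own trained model:

```python
# Minimal sketch: run a model locally with Roboflow Inference.
# Assumptions: the `inference` package is installed (`pip install inference`),
# and "your-project/1" / "YOUR_API_KEY" are placeholders for your own
# Roboflow model ID and API key.
from inference import get_model

# Load the model by its Roboflow model ID.
model = get_model(model_id="your-project/1", api_key="YOUR_API_KEY")

# Run inference on a local image file; returns a list of prediction responses.
results = model.infer("image.jpg")
print(results)
```

If you would rather expose the model over HTTP on the device, the Inference CLI (`pip install inference-cli`) can start a local server with `inference server start`, and the same models can then be queried over a REST API.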