You can use YOLOX to detect the 80 COCO classes, such as people, vehicles, animals, and household items.
YOLOX (You Only Look One-level eXtended) is an advanced object detection model that builds upon the YOLO (You Only Look Once) series, introducing several innovations to improve performance and efficiency. Released in July 2021 by Megvii Research, YOLOX adopts an anchor-free approach, making it simpler and more robust than its predecessors.
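The anchor-free idea can be illustrated with a minimal sketch of how a single grid-cell prediction is decoded into a box. This is an illustrative simplification, not the library's actual implementation: each cell on a feature map predicts a center offset and log-scale width/height directly, scaled by the feature map's stride, so no predefined anchor boxes are needed.

```python
import math

def decode_box(pred, grid_x, grid_y, stride):
    """Anchor-free decode for one grid cell (illustrative sketch).

    pred = (tx, ty, tw, th): raw network outputs for this cell.
    The box center is the cell location plus the predicted offset,
    scaled by the stride; width and height are exponentiated raw
    outputs scaled by the stride -- no anchor priors involved.
    """
    tx, ty, tw, th = pred
    cx = (grid_x + tx) * stride
    cy = (grid_y + ty) * stride
    w = math.exp(tw) * stride
    h = math.exp(th) * stride
    return cx, cy, w, h

# Example: cell (10, 8) on the stride-8 feature map.
# Center x = (10 + 0.5) * 8 = 84.0
print(decode_box((0.5, 0.25, 0.0, 0.0), 10, 8, 8))
```

Because the decode depends only on the cell index and stride, the detector avoids the anchor hyperparameters (scales, aspect ratios) that anchor-based YOLO variants must tune per dataset.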
YOLOX excels in real-time object detection, offering a balance of speed and accuracy suitable for various practical applications such as autonomous driving and video surveillance.
YOLOX achieves state-of-the-art performance in terms of speed and accuracy. For instance, YOLOX-L achieves 50.0% AP on the COCO dataset with a speed of 68.9 FPS on a Tesla V100 GPU, making it highly suitable for real-time applications.
YOLOX is licensed under the Apache-2.0 license.
Model | size (px) | mAPval 0.5:0.95 | mAPtest 0.5:0.95 | Speed V100 (ms) | Params (M) | FLOPs (G) | weights
---|---|---|---|---|---|---|---
YOLOX-s | 640 | 40.5 | 40.5 | 9.8 | 9.0 | 26.8 | github |
YOLOX-m | 640 | 46.9 | 47.2 | 12.3 | 25.3 | 73.8 | github |
YOLOX-l | 640 | 49.7 | 50.1 | 14.5 | 54.2 | 155.6 | github |
YOLOX-x | 640 | 51.1 | 51.5 | 17.3 | 99.1 | 281.9 | github |
YOLOX-Darknet53 | 640 | 47.7 | 48.0 | 11.1 | 63.7 | 185.3 | github |
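As a sanity check on the table, the V100 latencies convert directly to throughput via FPS = 1000 / latency_ms. The short sketch below (the model names and latencies are taken from the table above) shows that YOLOX-l's 14.5 ms corresponds to roughly 69 FPS, consistent with the ~68.9 FPS figure quoted earlier.

```python
# Per-image V100 latencies (ms) from the comparison table above.
latencies_ms = {
    "YOLOX-s": 9.8,
    "YOLOX-m": 12.3,
    "YOLOX-l": 14.5,
    "YOLOX-x": 17.3,
    "YOLOX-Darknet53": 11.1,
}

def fps(latency_ms: float) -> float:
    """Throughput in frames per second for a given per-image latency."""
    return 1000.0 / latency_ms

for name, ms in latencies_ms.items():
    print(f"{name}: {fps(ms):.1f} FPS")
```

Note that these figures are single-image latencies; real deployments often batch inputs, which raises throughput further at the cost of latency.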
You can use Roboflow Inference to deploy a YOLOX API on your own hardware. The model can run on CPU devices (e.g., Raspberry Pi, AI PCs) and GPU devices (e.g., NVIDIA Jetson, NVIDIA T4). Below are instructions on how to deploy your own model API.