Use the widget below to experiment with YOLOR. You can detect COCO classes such as people, vehicles, animals, and household items.
You Only Learn One Representation (YOLOR) is a state-of-the-art object detection model. YOLOR pre-trains an implicit knowledge network on all of the tasks present in the COCO dataset, namely object detection, instance segmentation, panoptic segmentation, keypoint detection, stuff segmentation, image captioning, multi-label image classification, and long-tail object recognition. When optimizing for the COCO dataset, YOLOR trains another set of parameters that represent explicit knowledge. For prediction, both implicit and explicit knowledge are used.
This novel approach propels YOLOR to the state-of-the-art for object detection in the speed/accuracy tradeoff landscape.
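To make the implicit/explicit split more concrete, the sketch below shows how a learned, input-independent "implicit knowledge" tensor can be fused with explicit feature maps by addition and multiplication. This is a simplified, hypothetical PyTorch illustration of the idea (module names, shapes, and initialization are assumptions), not the official YOLOR implementation.

```python
import torch
import torch.nn as nn


class ImplicitAdd(nn.Module):
    """Learned implicit knowledge fused with explicit features by addition (illustrative)."""

    def __init__(self, channels: int):
        super().__init__()
        # One learned value per channel, independent of the input image.
        self.implicit = nn.Parameter(torch.zeros(1, channels, 1, 1))
        nn.init.normal_(self.implicit, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast-add the implicit knowledge onto the explicit feature map.
        return x + self.implicit


class ImplicitMul(nn.Module):
    """Learned implicit knowledge fused with explicit features by multiplication (illustrative)."""

    def __init__(self, channels: int):
        super().__init__()
        self.implicit = nn.Parameter(torch.ones(1, channels, 1, 1))
        nn.init.normal_(self.implicit, mean=1.0, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.implicit


# Example: refine a batch of explicit backbone features with implicit knowledge.
features = torch.randn(2, 256, 20, 20)  # hypothetical feature maps from the network
features = ImplicitAdd(256)(ImplicitMul(256)(features))
```

Because the implicit parameters do not depend on the input, they add almost no inference cost while letting a single representation serve multiple tasks.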
Images courtesy of Wong Kin-Yiu.
Train YOLOR on a Custom Dataset: https://blog.roboflow.com/train-yolor-on-a-custom-dataset/
YOLOR Research Paper: https://arxiv.org/abs/2105.04206
YOLOR is licensed under a GPL-3.0 license.
You can use Roboflow Inference to deploy a YOLOR API on your own hardware. You can deploy the model on CPU devices (e.g. Raspberry Pi, AI PCs) and GPU devices (e.g. NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
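As a rough preview of what querying a deployed model can look like, here is a minimal sketch that assumes Roboflow Inference is installed (`pip install inference-sdk`), an inference server is running locally on port 9001, and that the model ID, API key, and image path are placeholders you would replace with your own.

```python
# Minimal sketch, not official documentation: query a locally running
# Roboflow Inference server. All identifiers below are placeholders.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",   # assumed local inference server address
    api_key="YOUR_ROBOFLOW_API_KEY",   # placeholder API key
)

# "your-model/1" is a placeholder for your trained model's ID and version.
result = client.infer("example.jpg", model_id="your-model/1")

# Predictions typically include a class label, a confidence score, and box coordinates.
for prediction in result.get("predictions", []):
    print(prediction["class"], prediction["confidence"])
```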