Use the widget below to experiment with YOLOv4 Tiny. You can detect COCO classes such as people, vehicles, animals, and household items.
YOLOv4-tiny is a compressed version of YOLOv4 designed to train on machines with less computing power. Its model weights are around 16 megabytes, allowing it to train on 350 images in 1 hour on a Tesla P100 GPU. YOLOv4-tiny has an inference time of 3 ms on the Tesla P100, making it one of the fastest object detection models available.
YOLOv4-Tiny makes several changes to the original YOLOv4 network to achieve these speeds. First and foremost, the number of convolutional layers in the CSP backbone is compressed to a total of 29 pretrained convolutional layers. Additionally, the number of YOLO layers is reduced from three to two, and fewer anchor boxes are used for prediction.
YOLOv4-Tiny achieves competitive results relative to YOLOv4 given its reduced size, reaching 40 mAP @ 0.5 on the MS COCO dataset.
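As a quick illustration of COCO-class detection with this model, below is a minimal sketch that runs the pretrained YOLOv4-tiny weights through OpenCV's DNN module. The file names (`yolov4-tiny.cfg`, `yolov4-tiny.weights`, `example.jpg`), the 416x416 input size, and the thresholds are assumptions; substitute your own files and values.

```python
import cv2

# Sketch only: assumes yolov4-tiny.cfg and yolov4-tiny.weights have been
# downloaded from the official Darknet repository into the working directory.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("example.jpg")  # placeholder image path
class_ids, scores, boxes = model.detect(image, confThreshold=0.4, nmsThreshold=0.4)

# Each detection is a COCO class index, a confidence score, and an [x, y, w, h] box.
for class_id, score, box in zip(class_ids, scores, boxes):
    print(class_id, float(score), box)
```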
Training YOLOv4-tiny on Custom Data for Lightning Fast Object Detection: https://blog.roboflow.com/train-yolov4-tiny-on-custom-data-lighting-fast-detection/
YOLOv4 Tiny is licensed under a YOLO license.
You can use Roboflow Inference to deploy a YOLOv4 Tiny API on your hardware. You can deploy the model on CPU devices (e.g., Raspberry Pi, AI PCs) and GPU devices (e.g., NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
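As a rough sketch of what a client-side call can look like once an Inference server is running locally (for example, started with the `inference server start` command), the snippet below uses the `inference_sdk` HTTP client. The server URL, API key, and `model_id` are placeholders you replace with your own values; follow the full instructions for the complete setup.

```python
from inference_sdk import InferenceHTTPClient

# Sketch only: assumes an Inference server is already running on localhost:9001.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder API key
)

# "your-model-id/1" is a placeholder for your own model ID and version.
result = client.infer("example.jpg", model_id="your-model-id/1")
print(result)
```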