Applied to videos, object detection models can yield a range of insights. You can check whether an object is present in a video, measure how long it appears on screen, and record the times at which it appears or disappears.
In this guide, we are going to show how to run inference with Transformers on videos.
We will:
1. Load supervision and an object detection model
2. Create a callback to process a target video
3. Process the target video
Without further ado, let's get started!
In this guide, we'll be using supervision, an open source Python package with a range of utilities for building computer vision projects. You can install supervision using the following command:
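supervision installs from PyPI:

```shell
pip install supervision
```

You will also need `transformers` and `torch` installed to load and run the detection model.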
First, we need to load data into a Python program. We'll also need to load a model for use in inference. Create a new Python file and add the following code:
Replace the model weights file name with the weights for your model.
Next, we need to write a callback that runs inference and applies whatever logic we want to the resulting predictions. In the example below, we run inference with our model and plot all predictions.
You can also apply filters to only show predictions that meet certain criteria. To learn more about filtering detections, refer to the supervision Detections() documentation.
Finally, we need to run our callback on every frame in our video. We can do so using the following code:
supervision provides an extensive range of functionality for working with computer vision models. With supervision, you can:
1. Process and filter detections and segmentation masks from a range of popular models (YOLOv5, Ultralytics YOLOv8, MMDetection, and more).
2. Process and filter classifications.
3. Plot bounding boxes and segmentation masks.
And more! To learn about the full range of functionality in supervision, check out the supervision documentation.