Run YOLOv10 models on multiple streams

Learn how to run computer vision models on multiple streams on the same device using Roboflow Inference, a computer vision inference server.

Overview

When you deploy a model into production, you may want to run the model on multiple concurrent streams with a standard set of logic through which all streams should pass.

With Roboflow Inference, you can run YOLOv10 across multiple streams on the same machine. For example, you could have 10 cameras broadcasting RTSP streams and process all of them in real time on a powerful GPU device.

In this guide, we will show how to use a model with multiple streams. To do this, we will:

1. Install Roboflow Inference
2. Use InferencePipeline to run a model on multiple streams

Without further ado, let's get started!

Install Inference

First, install the Inference pip package:

pip install inference


Once you have installed Inference, you are ready to load and start using your model.

Use InferencePipeline to Process Streams

Let's run a YOLOv10 model on multiple streams. You can do so using the following code:
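
Below is a minimal sketch that uses InferencePipeline with the built-in render_boxes sink, which draws predictions on each frame (exact import paths may vary slightly between Inference versions):

# import the InferencePipeline interface
from inference import InferencePipeline
# import the built-in render_boxes sink, which visualizes predictions
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="microsoft-coco/9",  # the ID of a model hosted on Roboflow
    video_reference=[0, 1, 2],  # device IDs, RTSP URLs, or video file paths
    on_prediction=render_boxes,  # logic to run on each prediction
    api_key="ROBOFLOW_API_KEY",  # your Roboflow API key
)

# start the pipeline and wait for it to finish
pipeline.start()
pipeline.join()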

Above, replace "microsoft-coco/9" with the model ID of a YOLOv10 model hosted on Roboflow.

Replace [0, 1, 2] with the IDs of the video streams or RTSP URLs on which you want to run inference.
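
For instance, a mixed list of one local webcam and two RTSP cameras might look like this (the URLs below are hypothetical placeholders):

video_reference=[
    0,  # local webcam device ID
    "rtsp://user:password@192.168.1.10:554/stream1",  # hypothetical RTSP camera
    "rtsp://user:password@192.168.1.11:554/stream1",  # hypothetical RTSP camera
]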

To upload a model to Roboflow, first install the Roboflow Python package:

pip install roboflow

Then, create a new Python file and paste in the following code:

from roboflow import Roboflow

# authenticate with your Roboflow API key
rf = Roboflow(api_key="API_KEY")
project = rf.workspace().project("PROJECT_ID")

# upload your trained YOLOv10 weights to the specified dataset version
project.version(DATASET_VERSION).deploy(model_type="yolov10", model_path=f"{HOME}/runs/detect/train/")

In the code above, add your API key, your project ID, your dataset version, and the path to the model weights you want to upload. Learn how to retrieve your API key. Your weights will be uploaded to Roboflow, and your model will shortly be accessible over an API and available for use in Inference. To learn more about uploading model weights to Roboflow, check out our full guide to uploading weights to Roboflow.

When you run the script, predictions will be visualized on each video stream. You can replace the render_boxes function with a function that runs custom logic on each stream. For example, you could log the predictions from each stream to a JSON file, send them over MQTT, implement object tracking, and more.
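
For example, a custom sink that logs each stream's predictions to a JSON Lines file might look like the sketch below. It assumes the default dictionary-style predictions payload and the source_id and frame_id attributes on VideoFrame; the predictions.jsonl filename is a placeholder:

import json

from inference.core.interfaces.camera.entities import VideoFrame

def log_predictions(predictions: dict, video_frame: VideoFrame) -> None:
    # write one JSON line per frame, tagged with the stream it came from
    record = {
        "source_id": video_frame.source_id,
        "frame_id": video_frame.frame_id,
        "predictions": predictions.get("predictions", []),
    }
    with open("predictions.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

Pass log_predictions as the on_prediction argument to InferencePipeline.init() in place of render_boxes.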

To learn more about implementing custom callbacks, refer to the Inference documentation on custom callbacks.