When you deploy a model into production, you may want to run it on multiple concurrent streams, applying the same set of logic to every stream.
With Roboflow Inference, you can run YOLO-NAS across multiple streams on the same machine. For example, you could have 10 cameras broadcasting RTSP streams and process all of them in real time on a powerful GPU device.
In this guide, we will show how to run a model on multiple streams. To do this, we will:
1. Install Roboflow Inference
2. Use InferencePipeline to run a model on multiple streams
Without further ado, let's get started!
First, install the Inference pip package:
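```bash
pip install inference
```

If you are deploying on a GPU device, you can install the `inference-gpu` package instead to take advantage of GPU acceleration.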
Once you have installed Inference, you are ready to load and start using your model.
Let's run a YOLO-NAS model on multiple streams. You can do so using the following code:
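The snippet below is a minimal sketch: the model ID, RTSP URLs, and API key are placeholders you should replace with your own values. It assumes a recent version of Inference in which `InferencePipeline` accepts a list of video references for multi-stream processing.

```python
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

# Initialize a pipeline that runs one model across several video sources.
pipeline = InferencePipeline.init(
    model_id="your-model-id/1",  # placeholder: your YOLO-NAS model ID from Roboflow
    video_reference=[
        "rtsp://192.168.1.10:8554/stream",  # placeholder RTSP stream URLs
        "rtsp://192.168.1.11:8554/stream",
    ],
    on_prediction=render_boxes,  # visualize predictions on each frame
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder: your Roboflow API key
)

pipeline.start()  # start consuming frames from all streams
pipeline.join()   # block until the pipeline finishes
```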
When you run the script, your predictions will be visualized on each video stream. You can replace the `render_boxes` function with a function that runs custom logic on each stream. For example, you could log the predictions from each stream to a JSON file, send them over MQTT, implement object tracking, and more. Here is a minimal sketch of what a custom callback could look like:
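This sketch logs each stream's predictions to a JSON Lines file. It assumes the per-frame callback signature used in recent versions of Inference (a predictions dict plus a `VideoFrame`), and that `VideoFrame` exposes a `source_id` identifying which stream the frame came from; check the documentation for your version.

```python
import json

from inference.core.interfaces.camera.entities import VideoFrame


def log_predictions(predictions: dict, video_frame: VideoFrame) -> None:
    # Record which stream the frame came from along with the raw predictions.
    record = {
        "source_id": video_frame.source_id,  # assumed: index of the stream in video_reference
        "frame_id": video_frame.frame_id,
        "predictions": predictions.get("predictions", []),
    }
    # Append one JSON object per frame to a log file.
    with open("predictions.jsonl", "a") as log_file:
        log_file.write(json.dumps(record) + "\n")
```

To use it, pass `log_predictions` as the `on_prediction` argument in place of `render_boxes`.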
To learn more about implementing custom callbacks, refer to the Inference documentation on custom callbacks.