Deploy Object Detection Models on Reolink Cameras

With Roboflow Inference, you can deploy Object Detection models on a wide range of compute hardware devices, from NVIDIA Jetsons to T4 GPUs to AI PCs. You can connect Reolink cameras to your compute hardware and run inference with your model in real time.

In this guide, we are going to walk through how you can deploy computer vision models with Roboflow Inference and Reolink cameras.

Build a Workflow, Read Reolink Frames

Roboflow Workflows is a low-code, web-based computer vision application builder. With Workflows, you can build computer vision applications in an afternoon, then deploy them to your own edge hardware with custom cameras.

Below is an example of a Workflow that identifies common objects in images with the SAHI inference technique, which improves performance on small object detection.
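If you want to experiment with SAHI outside of Workflows, the supervision library implements the same slice-and-merge approach. Below is a minimal sketch, assuming a hosted YOLOv8 checkpoint; the yolov8n-640 model ID and the image path are placeholders:

import cv2
import supervision as sv
from inference import get_model

# Load a hosted object detection model (model ID is an example)
model = get_model(model_id="yolov8n-640", api_key="API_KEY")

def callback(image_slice):
    # Run inference on one slice and convert the result for supervision
    result = model.infer(image_slice)[0]
    return sv.Detections.from_inference(result)

# InferenceSlicer cuts the frame into tiles, runs inference on each tile,
# then merges the detections back into full-image coordinates (SAHI)
slicer = sv.InferenceSlicer(callback=callback)

image = cv2.imread("frame.jpg")  # placeholder path
detections = slicer(image)
print(len(detections), "objects found")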

This Workflow, like any Roboflow Workflow, can be deployed on your own hardware and run inference on frames from Reolink cameras.

The Engine: Roboflow Inference

Roboflow Workflows is powered by Inference, an open source computer vision inference server. With Inference, you can run a wide variety of fine-tuned and foundation models, including:

- YOLOv5, YOLOv8, YOLOv9, and YOLOv10 object detection models
- YOLOv7 and YOLOv8 image segmentation models
- YOLOv8 keypoint detection models
- Florence-2
- SAM-2
- PaliGemma

You can deploy all of these models with frames retrieved from a Reolink camera.
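Any of these models can be loaded by its model ID with the Inference Python SDK. Here is a minimal sketch; the model ID and image path below are placeholders (fine-tuned models use the "project-id/version" form):

import cv2
from inference import get_model

# Load a model by ID; replace with your own project ID and version
model = get_model(model_id="your-project/1", api_key="API_KEY")

frame = cv2.imread("frame.jpg")  # e.g. a frame saved from a Reolink camera
results = model.infer(frame)[0]
print(results)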

YOLO models supported in Inference can run at dozens of frames per second on GPU hardware. Foundation models like PaliGemma and Florence-2 can run at close to real-time speeds.
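Throughput depends on your hardware, model size, and input resolution, so it is worth measuring on your own device. A rough benchmark sketch, using the same placeholder model ID and image path as above:

import time
import cv2
from inference import get_model

model = get_model(model_id="yolov8n-640", api_key="API_KEY")
frame = cv2.imread("frame.jpg")  # placeholder frame

# Warm up once so model load time is not counted
model.infer(frame)

# Time a batch of sequential inferences and report frames per second
n = 50
start = time.perf_counter()
for _ in range(n):
    model.infer(frame)
print(f"{n / (time.perf_counter() - start):.1f} FPS")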

Contact Sales to Get Started

The Roboflow Reolink integration is only available to Enterprise customers.

To learn more about our integrations with popular cameras, contact the Roboflow sales team.  

Get Started

You can deploy the Workflow above on any edge hardware, reading frames from your camera, in a few lines of Python code.

To deploy your system, first install Inference: 


pip install inference
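If you are deploying to a machine with an NVIDIA GPU, install the GPU build instead:

pip install inference-gpu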

Then, create a new Python file and add the following code:


# Import the InferencePipeline object
from inference import InferencePipeline
import cv2

def my_sink(result, video_frame):
    # if result.get("output_image"): # Display an image from the workflow response
    #     cv2.imshow("Workflow Image", result["output_image"].numpy_image)
    #     cv2.waitKey(1)
    print(result) # do something with the predictions of each frame
    

# initialize a pipeline object
pipeline = InferencePipeline.init_with_workflow(
    api_key="API_KEY",
    workspace_name="WORKSPACE_NAME",
    workflow_id="WORKFLOW_ID",
    video_reference=0, # Path to video, device id (int, usually 0 for built in webcams), or RTSP stream url
    max_fps=30,
    on_prediction=my_sink
)
pipeline.start() # start the pipeline
pipeline.join() # wait for the pipeline thread to finish

In the code above, replace API_KEY, WORKSPACE_NAME, and WORKFLOW_ID with your Roboflow API key, workspace name, and Workflow ID.

Learn how to retrieve your API key and your Workflow ID.
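To read frames from a Reolink camera instead of a webcam, set video_reference to the camera's RTSP stream URL. The exact URL format varies by camera model and firmware, so check your camera's documentation; the address below is a hypothetical example:

pipeline = InferencePipeline.init_with_workflow(
    api_key="API_KEY",
    workspace_name="WORKSPACE_NAME",
    workflow_id="WORKFLOW_ID",
    # Example RTSP URL only; substitute your camera's credentials,
    # IP address, and stream path
    video_reference="rtsp://admin:PASSWORD@192.168.1.10:554/h264Preview_01_main",
    max_fps=30,
    on_prediction=my_sink
)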



To learn more about deploying computer vision models with Inference, refer to the Roboflow Inference documentation.

View More Camera Guides