Many computer vision models are deployed using a webcam as an input. The Roboflow Inference Python package enables you to access a webcam and start running inference with a model in a few lines of code.
In this guide, we will show you how to run YOLOv8 Segmentation on frames from a webcam stream.
We will:
1. Install supervision and Inference
2. Load the webcam stream and define an inference callback
3. Test the webcam stream
Without further ado, let's get started!
For this tutorial, we will be using supervision, Inference, and OpenCV. supervision provides a range of utilities you can use in computer vision projects. Inference provides a concise utility through which we can load webcam streams and run models on them. OpenCV is helpful for displaying and annotating video frames when supervision does not provide the tool we need.
Run the following command to install the dependencies we will use in this guide:
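```
pip install inference supervision opencv-python
```

Depending on your environment, you may need to use pip3 or run the command inside a virtual environment.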
Once you have installed supervision and Inference, you are ready to define an inference callback and configure a webcam stream.
The Inference Python package provides an inference.Stream() method through which you can access a webcam and run logic on each frame. inference.Stream() accepts a callback that is run on every frame and works with both webcam and RTSP streams, but for this guide we will focus on webcam streams.
In the code below, we use inference.Stream() to read webcam frames and then run inference on each frame using a YOLOv8 Segmentation model.
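The snippet below is a minimal sketch of such a script, not the only way to write it. It assumes a recent version of supervision that provides sv.MaskAnnotator, sv.LabelAnnotator, and sv.Detections.from_inference(); MODEL_ID/VERSION and API_KEY are placeholders, and the API key can alternatively be supplied through the ROBOFLOW_API_KEY environment variable.

```python
# Minimal sketch: run a YOLOv8 segmentation model on webcam frames with inference.Stream()
import cv2
import inference
import supervision as sv

# supervision annotators used to draw segmentation masks and class labels
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()


def render(predictions, image):
    # Convert the raw model response into a supervision Detections object
    detections = sv.Detections.from_inference(predictions)

    # Draw masks and labels onto the frame
    annotated = mask_annotator.annotate(scene=image, detections=detections)
    annotated = label_annotator.annotate(scene=annotated, detections=detections)

    # Display the annotated frame in a window
    cv2.imshow("Prediction", annotated)
    cv2.waitKey(1)


inference.Stream(
    source="webcam",             # or an RTSP stream URL
    model="MODEL_ID/VERSION",    # replace with your YOLOv8 segmentation model ID
    api_key="API_KEY",           # replace with your Roboflow API key
    output_channel_order="BGR",  # pass frames to render() in BGR order for OpenCV
    use_main_thread=True,        # required so cv2.imshow() can display frames
    on_prediction=render,
)
```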
Above, replace the model ID with the ID of a YOLOv8 segmentation model hosted on Roboflow.
You can use any YOLOv8 segmentation model trained on or uploaded to Roboflow.
To upload a model to Roboflow, first install the Roboflow Python package:
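```
pip install roboflow
```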
Then, create a new Python file and paste in the following code:
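The sketch below shows one way to do this with the roboflow package; WORKSPACE_ID, PROJECT_ID, the version number, and the weights path are placeholders, and the "yolov8-seg" model type assumes you trained a YOLOv8 segmentation model.

```python
# Minimal sketch: upload trained YOLOv8 segmentation weights to Roboflow
import roboflow

rf = roboflow.Roboflow(api_key="API_KEY")  # replace with your Roboflow API key

# Point at the workspace, project, and version the weights were trained against
project = rf.workspace("WORKSPACE_ID").project("PROJECT_ID")
version = project.version(1)

# Upload the weights; model_path is the folder that contains your trained weights
version.deploy(model_type="yolov8-seg", model_path="PATH/TO/TRAINING/RESULTS")
```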
In the code above, add your API key and the path to the model weights you want to upload. Learn how to retrieve your API key. Once the upload completes, your model will shortly be accessible over an API and available for use in Inference. To learn more, check out our full guide to uploading weights to Roboflow.
In the webcam code above, we:
1. Import the required dependencies.
2. Define the model we want to use.
3. Define a callback function called render() that takes in the predictions from the model and a frame, then processes them. We have included example code that shows how to annotate the predictions and display the annotated frames, which you can adapt for your own application.
4. Use inference.Stream() to access a webcam and run our model.
The render() function is run on each frame retrieved from the webcam stream.
In the code above, replace the API_KEY value with your Roboflow API key. Learn how to retrieve your Roboflow API key.
Now that you have configured a model and webcam stream, you are ready to test your webcam.
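If you saved the script above as, for example, app.py (a hypothetical filename), you can start it from the terminal:

```
python app.py
```

A window should open showing your webcam feed with segmentation masks drawn on each frame. You can stop the stream with Ctrl+C in the terminal.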
supervision provides an extensive range of functionalities for working with computer vision models. With supervision, you can:
1. Process and filter detections and segmentation masks from a range of popular models (YOLOv5, Ultralytics YOLOv8, MMDetection, and more).
2. Display predictions (e.g., bounding boxes, segmentation masks).
3. Annotate images (e.g., trace predictions, draw heatmaps).
4. Compute confusion matrices.
And more! To learn about the full range of functionality in supervision, check out the supervision documentation.
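As a brief illustration, here is a minimal sketch of filtering segmentation results by confidence with supervision. The pretrained yolov8n-seg.pt weights, the image.jpg path, and the 0.5 threshold are placeholders, and the example assumes you have the ultralytics package installed.

```python
import supervision as sv
from ultralytics import YOLO

# Run a YOLOv8 segmentation model on a single image (placeholder weights and path)
model = YOLO("yolov8n-seg.pt")
results = model("image.jpg")[0]

# Convert the results to a supervision Detections object, then keep only confident detections
detections = sv.Detections.from_ultralytics(results)
detections = detections[detections.confidence > 0.5]

print(f"{len(detections)} detections above the 0.5 confidence threshold")
```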