
How to Deploy YOLOv10 to AWS EC2

In this guide, we are going to show how to deploy a YOLOv10 model to AWS EC2 using Roboflow Inference. Inference is a high-performance inference server with which you can run a range of vision models, from YOLOv8 to CLIP to CogVLM. It works with YOLOv10 models trained on Roboflow as well as models trained in custom training processes outside of Roboflow.

To deploy a YOLOv10 model to AWS EC2, we will:

1. Train a model on (or upload a model to) Roboflow
2. Set up an AWS EC2 virtual machine
3. Download the Roboflow Inference Server
4. Try out the model on an example image

Let's get started!


Train a Model on or Upload a Model to Roboflow

First, create a Roboflow account and create a new project. When you have created a new project, upload your project data, then generate a new dataset version. With that version ready, you can upload your model weights to Roboflow.

Download the Roboflow Python SDK:

pip install roboflow


Then, use the following script to upload your model weights:

import os

from roboflow import Roboflow

# folder that contains your YOLOv10 training run and weights
home = "/path/to/project/folder"

rf = Roboflow(api_key=os.environ["ROBOFLOW_API_KEY"])
project = rf.workspace().project("PROJECT_ID")

project.version(PROJECT_VERSION).deploy(model_type="yolov10", model_path=f"{home}/runs/detect/train/")


Read the Roboflow model weight upload documentation for more information about uploading model weights.

You will need your project name, version, API key, and model weights. The following documentation shows how to retrieve your API key and project information:

- Retrieve your Roboflow project name and version
- Retrieve your API key

Change the path in the script above to the path where your model weights are stored.

When you have configured the script above, run the code to upload your weights to Roboflow.
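
The script reads your API key from the ROBOFLOW_API_KEY environment variable. A quick check like the minimal sketch below confirms the variable is set before you run the upload:

import os

# Fail early with a clear message if the API key environment variable is missing
if "ROBOFLOW_API_KEY" not in os.environ:
    raise RuntimeError("Set ROBOFLOW_API_KEY to your Roboflow private API key before running this script.")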

Now you are ready to start deploying your model.

Set up an AWS EC2 Virtual Machine

With your trained YOLOv10 model uploaded to Roboflow, you can download your model weights directly into your Inference deployment, whether you use the SDK or the HTTP API with Docker.

Next, we need to create an AWS EC2 instance. EC2 is Amazon’s compute product that you can use to deploy virtual machines. In the AWS console, search for “EC2” in the search bar and navigate to EC2.

On the EC2 homepage, click the “Launch instance” button:

This button will take you to a page where you can configure the machine you want to create.

We recommend choosing the Amazon Linux operating system, which has been optimized for use on AWS. If you plan to run your model on a GPU (required for large models such as CogVLM, and recommended for the fastest YOLOv10 inference), choose a machine image optimized for deep learning, such as the Deep Learning Base GPU AMI. This image comes with tooling out of the box that minimizes GPU setup. If you want to deploy on a CPU device, the standard Amazon Linux operating system is recommended.

Once you have configured the virtual machine, you can deploy the system.
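
If you prefer to script this step rather than use the console, the following sketch launches an instance with boto3. The AMI ID, instance type, region, and key pair name are placeholder assumptions; substitute values from your own account.

# A minimal boto3 sketch for launching an EC2 instance programmatically.
# All identifiers below are placeholders; replace them with your own values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # e.g. a Deep Learning Base GPU AMI in your region
    InstanceType="g4dn.xlarge",       # GPU instance type; pick a CPU type for CPU-only deployments
    KeyName="my-key-pair",            # existing EC2 key pair you will use for SSH
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])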

Next, sign in to your server with SSH. Read the AWS EC2 SSH instructions to learn more.

Download Roboflow Inference

The Roboflow Inference Server allows you to deploy computer vision models to a range of devices, including AWS EC2.

You can run Roboflow Inference in Docker, or via the Python SDK.

For this guide, we will run Inference with Docker and use the Python SDK to interface with our Docker deployment. We will deploy our model on an AWS EC2 instance.

To install Inference and set up an Inference server in Docker, run:


pip install inference
inference server start


Now that you have the Roboflow Inference Server running, you can use your model on AWS EC2.
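
To confirm the server is up, you can send a request to it. The sketch below assumes the default port, 9001, which the Inference server uses unless configured otherwise:

# Quick sanity check that the Inference server is reachable on its default port
import requests

response = requests.get("http://localhost:9001")
print(response.status_code)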


Run Inference on an Image

You can run inference on images with Roboflow Inference.

Create a new Python file and add the following code:

# import client from inference sdk
from inference_sdk import InferenceHTTPClient
# import PIL for loading image
from PIL import Image
# import os for getting api key from environment
import os

# set the project_id, model_version, image_url
project_id = "soccer-players-5fuqs"
model_version = 1
filename = "path/to/local/image.jpg"

# create a client object
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

# load the image
pil_image = Image.open(filename)

# run inference
results = client.infer(pil_image, model_id=f"{project_id}/{model_version}")

print(results)

Substitute the project ID, model version, and image path with the values associated with your Roboflow account and project, then run the script.

This code will return a Python object with results from your model.
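
For an object detection model, the response is a dictionary containing a list of predictions. As a sketch (field names follow the standard Roboflow detection response format), you can iterate over the detections like this:

# Print the class, confidence, and pixel coordinates of each detection
for prediction in results["predictions"]:
    print(
        prediction["class"],
        round(prediction["confidence"], 3),
        prediction["x"],
        prediction["y"],
        prediction["width"],
        prediction["height"],
    )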

You can process these results and plot them on an image with the supervision Python package:


import numpy as np
import supervision as sv

# convert the PIL image to a numpy array for annotation
image = np.array(pil_image)

# convert the inference response into a supervision Detections object
detections = sv.Detections.from_inference(results)

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = box_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)

sv.plot_image(image=annotated_image, size=(16, 16))

Run Inference on a Video

You can run inference on videos with Roboflow Inference and the InferencePipeline feature.

Create a new Python file and add the following code:

# Import the InferencePipeline object
from inference import InferencePipeline
# Import the built in render_boxes sink for visualizing results
from inference.core.interfaces.stream.sinks import render_boxes

# initialize a pipeline object
pipeline = InferencePipeline.init(
    model_id="rock-paper-scissors-sxsw/11", # Roboflow model to use
    video_reference=0, # Path to video, device id (int, usually 0 for built in webcams), or RTSP stream url
    on_prediction=render_boxes, # Function to run after each prediction
)
pipeline.start()
pipeline.join()

Substitute the model name and version with the values associated with your Roboflow account and project, then run the script.

This code will run a model on frames from a webcam stream. To use RTSP, set the video_reference value to an RTSP stream URL. To use video, set the video_reference value to a video file path.

Predictions are annotated using the render_boxes helper function. You can specify any function to process each prediction in the on_prediction parameter.

To learn how to define your own callback function with custom logic, refer to the Define Custom Prediction Logic documentation.
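
As a rough sketch, a custom callback receives the prediction dictionary and the video frame for each processed frame. The example below is a minimal illustration, not the documented pattern: it assumes the frame object exposes a frame_id attribute and uses a hypothetical local video file path, and it simply prints how many objects were detected per frame.

# Import the InferencePipeline object
from inference import InferencePipeline

# A minimal custom sink: print the number of detections in each frame
def print_detection_count(predictions: dict, video_frame) -> None:
    detections = predictions.get("predictions", [])
    print(f"Frame {video_frame.frame_id}: {len(detections)} detections")

pipeline = InferencePipeline.init(
    model_id="rock-paper-scissors-sxsw/11",  # replace with your model ID and version
    video_reference="path/to/video.mp4",     # a local video file instead of a webcam
    on_prediction=print_detection_count,     # our custom callback
)
pipeline.start()
pipeline.join()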

Learn how to deploy models to other devices

Below, you can find our guides on how to deploy YOLOv10 models to other devices.

Documentation

The following resources are useful reference material for working with your model using Roboflow and the Roboflow Inference Server.
