In this guide, we are going to show how to deploy a YOLO-World model to NVIDIA Jetson using the Roboflow Inference Server. This SDK works with YOLO-World models trained both on Roboflow and in custom training processes outside of Roboflow.
To deploy a YOLO-World model to NVIDIA Jetson, we will:
1. Train a model on (or upload a model to) Roboflow
2. Download the Roboflow Inference Server
3. Install the Python SDK to run inference on images
4. Try out the model on an example image
Let's get started!
If you want to upload your own model weights, first create a Roboflow account and create a new project. When you have created a new project, upload your project data, then generate a new dataset version. With that version ready, you can upload your model weights to Roboflow.
Download the Roboflow Python SDK:
pip install roboflow
Then, use the following script to upload your model weights:
import os

from roboflow import Roboflow

home = "/path/to/project/folder"
rf = Roboflow(api_key=os.environ["ROBOFLOW_API_KEY"])
project = rf.workspace().project("PROJECT_ID")
project.version(PROJECT_VERSION).deploy(model_type="yolov5", model_path=f"{home}/yolov5/runs/train/")
You will need your project name, version, API key, and model weights. The following documentation shows how to retrieve your API key and project information:
- Retrieve your Roboflow project name and version
- Retrieve your API key
Change the path in the script above to the path where your model weights are stored.
When you have configured the script above, run the code to upload your weights to Roboflow.
Now you are ready to start deploying your model.
The Roboflow Inference Server allows you to deploy computer vision models to a range of devices, including NVIDIA Jetson.
The Inference Server relies on Docker to run. If you don't already have Docker installed on the device(s) on which you want to run inference, install it by following the official Docker installation instructions.
Once you have Docker installed, run the following command to install Inference on your NVIDIA Jetson, along with the supervision package we will use to work with model predictions:
pip install inference supervision
Then, start the Inference Server:
inference server start
Now that you have the Roboflow Inference Server running, you can use your model on your NVIDIA Jetson.
The Roboflow Inference Server provides an HTTP API with a range of methods you can use to query your model, as well as various popular models (e.g., SAM and CLIP). You can read more about all of the API methods available on the Roboflow Inference Server in the Inference Server documentation.
The Roboflow Python SDK provides convenience methods for interacting with the HTTP API. In this guide, we will use the Python SDK to run inference on a model. You can also query the HTTP API directly.
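As a sketch of what a direct HTTP call can look like, the stdlib-only helper below composes a POST request against a local Inference Server. The `/{project_id}/{model_version}` route shape and the base64-encoded body are assumptions modeled on Roboflow's hosted API convention, and `build_infer_request` is a hypothetical helper name; confirm the exact route against the Inference Server documentation.

```python
import base64
from urllib import parse, request

def build_infer_request(api_url, project_id, model_version, image_bytes, api_key):
    """Compose a POST request for a /{project_id}/{model_version} route.

    The route shape and base64-encoded body mirror Roboflow's hosted API
    convention; treat both as assumptions and check the Inference Server
    documentation before relying on them.
    """
    url = f"{api_url}/{project_id}/{model_version}?" + parse.urlencode(
        {"api_key": api_key}
    )
    return request.Request(
        url,
        data=base64.b64encode(image_bytes),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# Build (but do not send) a request against a local Inference Server.
req = build_infer_request(
    "http://localhost:9001", "logistics", 1, b"<image bytes>", "MY_API_KEY"
)
print(req.get_method(), req.full_url)
# To actually send it, call request.urlopen(req) with the server running.
```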
To install the SDK, run the following command:
pip install inference-sdk
Create a new Python file and add the following code:
from inference_sdk import InferenceHTTPClient

# import os to get the ROBOFLOW_API_KEY from the environment
import os

# cv2 and supervision are used below to annotate and save predictions
import cv2
import supervision as sv
# set the project_id, model_version, image_url
project_id = "logistics"
image_url = "forklift.jpeg"
model_version = 1
# create a client object
client = InferenceHTTPClient(
api_url="http://localhost:9001",
api_key=os.environ["ROBOFLOW_API_KEY"],
)
# run inference on the image
results = client.infer(image_url, model_id=f"{project_id}/{model_version}")
detections = sv.Detections.from_inference(results)
print(detections)
image = cv2.imread("forklift.jpeg")
box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()
annotated_frame = box_annotator.annotate(
scene=image.copy(),
detections=detections,
)
annotated_frame = label_annotator.annotate(
scene=annotated_frame,
detections=detections,
)
cv2.imwrite("forklift_result.jpeg", annotated_frame)
Let's run the code on this image:
The model saves annotated predictions to a file. Here are the annotated predictions:
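If you want to inspect raw predictions rather than the annotated image, detection responses in Roboflow's format use center-based boxes. The sketch below converts them to corner (xyxy) coordinates; the `sample_results` dict is hand-written for illustration, not real model output, and `to_xyxy` is a helper name of our own.

```python
# Sample response shaped like a Roboflow-style detection result
# (hand-written for illustration, not real model output).
sample_results = {
    "predictions": [
        {"x": 320.0, "y": 240.0, "width": 100.0, "height": 80.0,
         "confidence": 0.91, "class": "forklift"},
    ]
}

def to_xyxy(prediction):
    """Convert a center-x/center-y/width/height box to (x1, y1, x2, y2)."""
    half_w = prediction["width"] / 2
    half_h = prediction["height"] / 2
    return (
        prediction["x"] - half_w,
        prediction["y"] - half_h,
        prediction["x"] + half_w,
        prediction["y"] + half_h,
    )

for p in sample_results["predictions"]:
    print(p["class"], p["confidence"], to_xyxy(p))
    # -> forklift 0.91 (270.0, 200.0, 370.0, 280.0)
```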
Below, you can find our guides on how to deploy YOLO-World models to other devices.
The following resources are useful reference material for working with your model using Roboflow and the Roboflow Inference Server.