Deploy Vision AI in Minutes

Integrate custom or foundation models into your toolset and codebase
Python

from inference_sdk import InferenceHTTPClient

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="****"
)

result = CLIENT.infer("your_image.jpg", model_id="license-plate-recognition-rxg4e/4")

ARM CPU
x86 CPU
Luxonis OAK
NVIDIA GPU
NVIDIA TRT
NVIDIA Jetson
Raspberry Pi

Managed Deployment

Run models directly on Roboflow’s infrastructure through an infinitely scalable API.
Load balancing and auto-scaling to provide stability during bursts of usage.
Deploy at scale with bulk discounts and dedicated GPU clusters.
Switch to new model versions and architectures without adjusting your inference code (see the sketch below).
Strict security and privacy standards, including SOC 2 Type II, PCI, and HIPAA compliance.
Read the Hosted API Documentation
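
Because the hosted API addresses models by a model_id string, switching versions only means changing that string. As a minimal sketch (the ROBOFLOW_MODEL_ID environment variable below is a hypothetical name, not part of the SDK), the ID can be read from configuration so the inference code itself never changes:

import os
from inference_sdk import InferenceHTTPClient

# The model version lives in configuration; bumping it means updating
# the (hypothetical) ROBOFLOW_MODEL_ID value, not the inference code.
MODEL_ID = os.environ.get("ROBOFLOW_MODEL_ID", "license-plate-recognition-rxg4e/4")

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="****"
)

result = CLIENT.infer("your_image.jpg", model_id=MODEL_ID)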

Self-hosted Deployment

Roboflow Inference: an open source, scalable way to run models on-device, with or without an internet connection (see the sketch after this list).
Run inference on-device without the headache of managing environments, dependencies, CUDA versions, and more.
HTTP interfaces for foundation models, like CLIP and SAM, which you can use directly in your application or as part of multi-stage inference processes.
Complex inference features including autobatching inference, multi-model containers, multithreading, and DMZ deployments.
UDP inference to keep latency as low as possible.
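
As a minimal sketch of the self-hosted path, the same InferenceHTTPClient shown above can target a locally running Roboflow Inference server instead of the hosted API; the localhost address and port 9001 below assume the server's default configuration.

from inference_sdk import InferenceHTTPClient

# Point the client at a self-hosted Roboflow Inference server
# (http://localhost:9001 is assumed to be the server's default address).
LOCAL_CLIENT = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="****"
)

result = LOCAL_CLIENT.infer("your_image.jpg", model_id="license-plate-recognition-rxg4e/4")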
Google Cloud
RTSP
NVIDIA TRT
AWS EC2
NVIDIA Jetson
Kubernetes
Raspberry Pi

Dedicated SDKs

Natively integrated SDKs for device-optimized model performance.

Web Browser

Use roboflow.js to run models directly in the browser, allowing you to bring your model to the web.

Luxonis OAK

Deploy using high‑resolution cameras with depth vision and on‑chip machine learning.

iOS

Build vision-enabled iOS applications with out-of-the-box native support.

Snap Lens Studio

Deploy with Snap Lens Studio to reach an audience of millions of people using AR-enabled features.
Read the SDK Documentation

Workflows

Simplify Building and Deploying Vision AI Applications
An integrated workflow builder and deployment pipeline ensures that what you configure is what runs in production.
Try Workflows
Combine custom models, open source models, LLM APIs, pre-built logic, and external applications
ARM CPU
x86 CPU
Luxonis OAK
NVIDIA GPU
NVIDIA TRT
NVIDIA Jetson
Raspberry Pi
Python

from inference_sdk import InferenceHTTPClient

# Client for Roboflow's hosted API (api_key masked)
client = InferenceHTTPClient(api_url="https://detect.roboflow.com", api_key="****")

result = client.run_workflow(
    workspace_name="suvjg",
    workflow_id="abc",
    images={
        "image": "YOUR_IMAGE.jpg"
    }
)
Deploy using fully managed infrastructure with an API endpoint, or on-device with an internet connection optional.
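
As a sketch of the on-device option, the same run_workflow call can be pointed at a self-hosted Roboflow Inference server by changing only the client's api_url; the localhost address below is an assumption based on the server's default setup.

from inference_sdk import InferenceHTTPClient

# Execute the same workflow against a locally running Inference server
# (http://localhost:9001 assumed as the default local address).
local_client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="****")

result = local_client.run_workflow(
    workspace_name="suvjg",
    workflow_id="abc",
    images={"image": "YOUR_IMAGE.jpg"}
)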

Model Monitoring

Insights into how your deployed vision models are performing
Monitor which devices are online or offline, track inference volume by device, confidence metrics for each model, and inference time, and review individual prediction results.
Recent Inferences

Time                  | Model  | Camera Location
8/21/2024, 5:15:39 PM | gscs-1 | United States
8/21/2024, 5:15:39 PM | gscs-1 | Canada
Add custom metadata to filter and understand performance in various locations across unique devices.
Manage Active Alerts

Alert Name            | Model    | Trigger
Too many WBC          | bccdxd-4 | More than 10 WBC detections in 1 minute
RBC confidence defect | bccdxd-4 | RBC average confidence below 0.4
Automatically detect model failures or model drift with a real-time dashboard and create custom alerts when model performance shifts.