Use the widget below to experiment with OWLv2. You can detect COCO classes such as people, vehicles, animals, and household items.
OWLv2 is a transformer-based object detection model developed by Google Research. It is the successor to OWL-ViT.
OWLv2 is licensed under an Apache 2.0 license.
You can use Roboflow Inference to deploy an OWLv2 API on your hardware. You can deploy the model on CPU devices (e.g., Raspberry Pi, AI PCs) and GPU devices (e.g., NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
First, install Autodistill and Autodistill OWLv2:
pip install autodistill autodistill-owlv2
Then, run:
from autodistill_owlv2 import OWLv2
from autodistill.detection import CaptionOntology
from autodistill.utils import plot
import cv2
# define an ontology to map class names to our OWLv2 prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = OWLv2(
    ontology=CaptionOntology(
        {
            "person": "person",
            "dog": "dog"
        }
    )
)
# run inference on a single image
results = base_model.predict("dog.jpeg")
plot(
    image=cv2.imread("dog.jpeg"),
    classes=base_model.ontology.classes(),
    detections=results
)