Use the widget below to experiment with YOLOv7 Instance Segmentation. You can detect COCO classes such as people, vehicles, animals, and household items.
YOLOv7 was created by WongKinYiu and AlexeyAB, the creators of YOLOv4 Darknet (the fork endorsed by pjreddie, the original author and maintainer of YOLO, as the canonical continuation of the YOLO lineage).
YOLOv7 Instance Segmentation supports real-time inference, giving it several use cases. The model is also flexible in its export formats, supporting ONNX and TensorRT for seamless integration with hardware devices.
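YOLOv7 models typically take a square input (e.g. 640×640) produced by letterbox resizing, which preserves aspect ratio and pads the remainder. Below is a minimal sketch of that preprocessing step using only NumPy; the helper name and the nearest-neighbor resize are illustrative, not part of the YOLOv7 or Roboflow Inference APIs:

```python
import numpy as np

def letterbox(img, new_size=640, pad_value=114):
    """Resize with unchanged aspect ratio, then pad to a square canvas.

    Returns the padded image, the scale factor applied, and the
    (left, top) padding offsets needed to map predictions back.
    """
    h, w = img.shape[:2]
    scale = min(new_size / h, new_size / w)
    nh, nw = round(h * scale), round(w * scale)
    # Nearest-neighbor resize via index mapping (illustrative only;
    # real pipelines usually use cv2.resize with bilinear interpolation).
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    top = (new_size - nh) // 2
    left = (new_size - nw) // 2
    canvas = np.full((new_size, new_size) + img.shape[2:], pad_value,
                     dtype=img.dtype)
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, scale, (left, top)

# A 480x640 image scales by 1.0 and is padded top and bottom.
img = np.zeros((480, 640, 3), dtype=np.uint8)
padded, scale, (left, top) = letterbox(img)
```

The returned scale and offsets matter because boxes and mask polygons predicted on the padded image must be shifted and rescaled back into the original image's coordinates.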
Learn how to deploy your own YOLOv7 model.
At a test image size of 640, YOLOv7 Instance Segmentation achieves the following accuracy on the MS COCO benchmark:
AP (box): 51.4%
AP50 (box): 69.4%
AP75 (box): 55.8%
AP (mask): 41.5%
AP50 (mask): 65.5%
AP75 (mask): 43.7%
YOLOv7 Instance Segmentation is licensed under the GPL-3.0 license.
You can use Roboflow Inference to deploy a YOLOv7 Instance Segmentation API on your hardware. You can deploy the model on CPU devices (e.g. Raspberry Pi, AI PCs) and GPU devices (e.g. NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
You can run fine-tuned YOLOv7 instance segmentation models with Inference.
First, install Inference:
pip install inference
Retrieve your Roboflow API key and save it in an environment variable called ROBOFLOW_API_KEY:
export ROBOFLOW_API_KEY="your-api-key"
To use your model, run the following code:
import inference
model = inference.load_roboflow_model("model-name/version")
results = model.infer(image="YOUR_IMAGE.jpg")
Above, replace:
- YOUR_IMAGE.jpg with the path to your image.
- model-name/version with the YOLOv7 model ID and version you want to use. Learn how to retrieve your model and version ID.