How to detect small objects with YOLO-World and SAHI


Overview

Slicing Aided Hyper Inference (SAHI) is a technique for improving small object detection performance with computer vision models. SAHI cuts an image into smaller, overlapping slices, runs inference on each slice, then aggregates the predictions back into full-image coordinates.
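To make the idea concrete, here is a minimal, library-free sketch of the two core steps: tiling an image into overlapping slices, and shifting a slice-local box back into full-image coordinates. All sizes and boxes below are made-up illustrative values, not part of the SAHI or supervision APIs.

```python
def slice_offsets(image_wh, slice_wh, overlap_ratio=0.2):
    """Return the (x, y) top-left corner of every slice."""
    img_w, img_h = image_wh
    slice_w, slice_h = slice_wh
    step_x = int(slice_w * (1 - overlap_ratio))
    step_y = int(slice_h * (1 - overlap_ratio))
    offsets = []
    for y in range(0, img_h, step_y):
        for x in range(0, img_w, step_x):
            # Clamp so slices never extend past the image border.
            offsets.append((min(x, img_w - slice_w), min(y, img_h - slice_h)))
    return offsets

def to_image_coords(box_xyxy, offset):
    """Shift a slice-local box (x1, y1, x2, y2) into image coordinates."""
    x1, y1, x2, y2 = box_xyxy
    ox, oy = offset
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

offsets = slice_offsets((1280, 720), (640, 360))
# A box detected at (10, 20, 50, 60) inside the slice whose top-left
# corner is (512, 288) maps back to full-image coordinates:
print(to_image_coords((10, 20, 50, 60), (512, 288)))  # (522, 308, 562, 348)
```

Because slices overlap, the same small object can be detected in more than one slice; the aggregation step is responsible for merging those duplicates.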

In this guide, we are going to walk through how to use SAHI with YOLO-World to improve your ability to detect small objects with a vision model.

To use SAHI with YOLO-World, we will:

  1. Install supervision
  2. Load a model
  3. Run inference using the sv.InferenceSlicer object

Let's get started!


Install supervision

First, install the supervision and inference pip packages:

pip install supervision inference


Once you have installed the dependencies, you are ready to load your model and start running inference.

Load Model

First, we are going to load a model for use in running inference. For this guide, we will use a YOLO-World model. We will then define a callback function that, given an image, runs inference and loads the results into an sv.Detections object.

Let's load our model, then define our callback:


import cv2
import numpy as np
import supervision as sv
from inference.models.yolo_world.yolo_world import YOLOWorld

model = YOLOWorld(model_id="yolo_world/l")

classes = ["person"]
model.set_classes(classes)

def callback(image_slice: np.ndarray) -> sv.Detections:
    # Run inference on a single slice and convert the results
    # into an sv.Detections object.
    results = model.infer(image_slice, text=classes)
    return sv.Detections.from_inference(results)


Run Inference with sv.InferenceSlicer

The sv.InferenceSlicer object takes a callback function that returns an sv.Detections object. The slicer divides a provided image into smaller slices, runs the callback on each slice, then combines the results into a single sv.Detections object. We can then process that Detections object with supervision to accomplish tasks like plotting bounding boxes and filtering predictions.
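Because slices overlap, combining the results involves deduplicating boxes that refer to the same object. InferenceSlicer handles this for you; the sketch below is a library-free, illustrative-only version of the idea, using greedy non-maximum suppression over hypothetical (box, confidence) pairs:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge(boxes_with_conf, iou_threshold=0.5):
    """Greedy NMS: keep a box only if it does not heavily overlap
    an already-kept, higher-confidence box."""
    kept = []
    for box, conf in sorted(boxes_with_conf, key=lambda p: -p[1]):
        if all(iou(box, k) < iou_threshold for k, _ in kept):
            kept.append((box, conf))
    return kept

# The same person detected in two adjacent slices, plus one distinct person:
preds = [((100, 100, 150, 200), 0.9), ((102, 98, 149, 203), 0.7),
         ((400, 120, 440, 210), 0.8)]
print(len(merge(preds)))  # 2
```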


image = cv2.imread("people.jpeg")

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image=image)

print(f"Number of predictions: {len(detections)}")

box_annotator = sv.BoxAnnotator()

annotated_frame = box_annotator.annotate(
	scene=image.copy(),
	detections=detections
)

sv.plot_image(image=annotated_frame, size=(16, 16))

The above code uses SAHI to run inference on each slice of the image, then plots the aggregated detections on the original image and displays the result.
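If the default slice size does not suit your images, the slicer can be tuned. Below is a hedged sketch: the parameter names (slice_wh, overlap_ratio_wh, iou_threshold) reflect the supervision InferenceSlicer API at the time of writing and have changed across releases, so check the supervision documentation for your installed version. The callback here is a placeholder, not a real model.

```python
import numpy as np
import supervision as sv

def callback(image_slice: np.ndarray) -> sv.Detections:
    # Placeholder: swap in your model's inference call here.
    return sv.Detections.empty()

# Parameter names may differ in newer supervision releases.
slicer = sv.InferenceSlicer(
    callback=callback,
    slice_wh=(320, 320),          # width and height of each slice
    overlap_ratio_wh=(0.2, 0.2),  # fractional overlap between neighboring slices
    iou_threshold=0.5,            # IoU threshold used when merging duplicates
)
```

Smaller slices make small objects occupy a larger fraction of each inference input, at the cost of more inference calls per image.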

You can try SAHI on an example image using a model trained on the Microsoft COCO dataset. The model can detect common objects like car and cell phone, useful for visualizing how SAHI impacts model predictions.

Next Steps

supervision provides an extensive range of functionalities for working with computer vision models. With supervision, you can:

1. Process and filter detections and segmentation masks from a range of popular models (YOLOv5, Ultralytics YOLOv8, MMDetection, and more).
2. Process and filter classifications.
3. Compute confusion matrices.

And more! To learn about the full range of functionality in supervision, check out the supervision documentation.