Use the widget below to experiment with Segment Anything 3. You can detect COCO classes such as people, vehicles, animals, and household items.
Segment Anything 3 (SAM-3) is an as-yet-unreleased segmentation model announced by Meta at the LlamaCon 2025 event in April. It would be the latest entry in the Segment Anything series of models, which generate accurate masks around the contours of objects in images.
In the SAM-3 model announcement, Meta described the model as offering "object segmentation and tracking using natural language prompts," a feature that was not available in previous versions of SAM. This means that SAM-3 could allow you to provide an arbitrary text prompt like "ice" or "pothole" and generate segmentation masks for all matching objects:

Meta has a SAM-3 waitlist available on their website that you can use to get notified when the model is released.
You can try out the previous model, SAM 2, in the interactive demo below.
Segment Anything 3 is licensed under a license.
You can use Roboflow Inference to deploy a Segment Anything 3 API on your hardware. You can deploy the model on CPU devices (e.g. Raspberry Pi, AI PCs) and GPU devices (e.g. NVIDIA Jetson, NVIDIA T4). Below are instructions on how to deploy your own model API.
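As a rough illustration of what a self-hosted deployment could look like, the sketch below assembles a request body for a text-prompted segmentation call against a local Roboflow Inference server. Since SAM-3 is unreleased, the payload fields (`image`, `prompt`) and the endpoint are assumptions modeled on how Roboflow Inference serves other models; consult the official Inference documentation for the real interface once the model ships.

```python
import base64
import json

# Hypothetical sketch: SAM-3 is unreleased, so the payload shape below is an
# assumption based on how Roboflow Inference exposes other hosted models.
SERVER_URL = "http://localhost:9001"  # Inference server's default local port

def build_sam3_request(image_bytes: bytes, text_prompt: str) -> dict:
    """Assemble a hypothetical request body for text-prompted segmentation."""
    return {
        "image": {
            "type": "base64",
            "value": base64.b64encode(image_bytes).decode("utf-8"),
        },
        "prompt": text_prompt,  # e.g. "pothole" or "ice"
    }

if __name__ == "__main__":
    request_body = build_sam3_request(b"<image bytes>", "pothole")
    print(json.dumps(request_body, indent=2))
```

Once an actual SAM-3 endpoint is available, a body like this could be sent to the server with an HTTP client such as `requests`; the install and server-start steps would follow the standard Roboflow Inference setup (`pip install inference`).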
