Use the widget below to experiment with FastSAM. You can detect COCO classes such as people, vehicles, animals, and household items.
FastSAM is an image segmentation model trained on 2% of the SA-1B dataset used to train the Segment Anything Model (SAM).
FastSAM overcomes the high computation requirements of SAM by employing a decoupled approach: it divides the segmentation task into two sequential stages, all-instance segmentation and prompt-guided selection.
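To make the two stages concrete, here is a minimal sketch using the FastSAM interface from the ultralytics package: a single call runs all-instance segmentation, and the prompt arguments perform the prompt-guided selection. The weight file name and prompt keywords follow the ultralytics documentation, but treat them as assumptions to verify against the version you install.

from ultralytics import FastSAM

# Load FastSAM weights (file name assumed; ultralytics fetches known weights)
model = FastSAM("FastSAM-s.pt")

# Stage 1 (all-instance segmentation) and stage 2 (prompt-guided selection)
# happen inside one call; the text prompt selects the matching masks
results = model("image.jpeg", texts="a person")

# A point prompt works the same way: (x, y) with label 1 marking foreground
results = model("image.jpeg", points=[[200, 300]], labels=[1])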
Use Cases:
Despite being substantially smaller than its parent model, FastSAM still supports multiple use cases. The model is capable of not only image segmentation, but also zero-shot edge detection, image captioning, visual question answering, and more. Overall, FastSAM is a highly impressive model able to perform a number of different computer vision tasks.
FastSAM is licensed under an AGPL-3.0 license.
Despite its smaller memory footprint, FastSAM still performs well on a number of benchmarks.
You can use Roboflow Inference to deploy a FastSAM API on your hardware. You can deploy the model on CPU devices (e.g., Raspberry Pi, AI PCs) and GPU devices (e.g., NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
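As a rough sketch of what querying a self-hosted Inference server looks like, the snippet below uses the inference_sdk HTTP client against a server running on localhost. The model_id value is a placeholder for illustration, not a confirmed FastSAM identifier; check the Inference documentation for the exact model name.

from inference_sdk import InferenceHTTPClient

# Connect to a locally running Inference server
# (start one with: pip install inference-cli && inference server start)
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY"
)

# "fastsam" is a placeholder model identifier used for illustration only
result = client.infer("image.jpeg", model_id="fastsam")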
To use FastSAM with autodistill, you need to install the following dependency:
pip install autodistill-fastsam
Then, use this code to run inference:
from autodistill_fastsam import FastSAM
from autodistill.detection import CaptionOntology

# define an ontology to map class names to our FastSAM prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = FastSAM(
    ontology=CaptionOntology(
        {
            "person": "person",
            "a forklift": "forklift"
        }
    )
)

results = base_model.predict("image.jpeg")
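The predict call returns the masks and labels selected for each prompt. If your goal is to auto-label a full dataset, the usual autodistill follow-up is to run the base model over a folder of images; the folder path and extension below are assumptions for illustration:

base_model.label("./images", extension=".jpeg")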