Grounded SAM vs. SAM-CLIP

Both Grounded SAM and SAM-CLIP are commonly used in computer vision projects. Below, we compare and contrast Grounded SAM and SAM-CLIP.

Grounded SAM

Grounded SAM combines Grounding DINO with the Segment Anything Model to identify and segment objects in an image given text captions.

SAM-CLIP

SAM-CLIP uses the Segment Anything Model and CLIP to label objects in images.

|                   | Grounded SAM | SAM-CLIP |
|-------------------|--------------|----------|
| Date of Release   | Jan 25, 2024 | Jan 05, 2024 |
| Model Type        | Instance Segmentation | Instance Segmentation |
| Architecture      | Combination of Grounding DINO and Segment Anything | Combination of Segment Anything and CLIP |
| Annotation Format | Instance Segmentation | Instance Segmentation |
| License           | Apache 2.0 | -- |
| GitHub Stars      | 14.0k | 20 |

Compare Grounded SAM and SAM-CLIP with Autodistill

Using Autodistill, you can compare Grounded SAM and SAM-CLIP on your own images in a few lines of code.


To start a comparison, first install the required dependencies:


```
pip install autodistill autodistill-grounded-sam autodistill-sam-clip
```

Next, create a new Python file and add the following code:


```python
from autodistill_grounded_sam import GroundedSAM
from autodistill_sam_clip import SAMCLIP

from autodistill.detection import CaptionOntology
from autodistill.utils import compare

ontology = CaptionOntology(
    {
        "solar panel": "solar panel",
    }
)

models = [
    GroundedSAM(ontology=ontology),
    SAMCLIP(ontology=ontology)
]

images = [
    "/home/user/autodistill/solarpanel1.jpg",
    "/home/user/autodistill/solarpanel2.jpg"
]

compare(
    models=models,
    images=images
)
```

In the `CaptionOntology`, each dictionary key is the prompt sent to the model and each value is the label that will be saved for matching objects.

Above, replace the paths in the `images` list with the images on which you want to compare the models. The image paths must be absolute.
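If your test images live in a local folder, you can build the list of absolute paths programmatically rather than typing them out. This is a minimal sketch using only the standard library; the folder name `images` is an assumption:

```python
import glob
import os

# compare() requires absolute paths, so expand a local folder
# (here hypothetically named "images") into absolute .jpg paths.
images = sorted(os.path.abspath(p) for p in glob.glob("images/*.jpg"))
```

The resulting `images` list can be passed directly to `compare()`.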

Then, run the script.

You should see a side-by-side comparison of each model's predictions on your images.

When you have chosen a model that works best for your use case, you can auto label a folder of images using the following code:


```python
# base_model is the model you selected, e.g. GroundedSAM(ontology=ontology)
base_model.label(
  input_folder="./images",
  output_folder="./dataset",
  extension=".jpg"
)
```
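After labeling, you can sanity-check the output with a quick file count. This is a sketch using only the standard library; the exact folder layout depends on the annotation format autodistill writes:

```python
from pathlib import Path

def count_files(folder: str, pattern: str) -> int:
    # Recursively count files under `folder` matching a glob pattern.
    return len(list(Path(folder).rglob(pattern)))
```

For example, comparing `count_files("./dataset", "*.jpg")` against the number of annotation files gives a rough check that every image was labeled.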


