OWL ViT vs. CoDet

Both OWL ViT and CoDet are commonly used in computer vision projects. Below, we compare and contrast OWL ViT and CoDet.

Models

OWL ViT

OWL-ViT is a transformer-based, open-vocabulary object detection model developed by Google Research.

CoDet

CoDet is an open-vocabulary, zero-shot object detection model.
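
Both models can be prompted with free-text class descriptions. As a minimal sketch of single-image inference through the Autodistill wrappers used later in this guide (the image path is a placeholder, and CoDet exposes the same `predict()` interface through `autodistill_codet.CoDet`):


from autodistill.detection import CaptionOntology
from autodistill_owl_vit import OWLViT

# Map a text prompt to the class name you want in your results.
ontology = CaptionOntology({"solar panel": "solar panel"})

model = OWLViT(ontology=ontology)

# predict() returns a supervision Detections object with boxes, confidences,
# and class IDs for the prompted classes.
detections = model.predict("/home/user/autodistill/solarpanel1.jpg")
print(detections)
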
                      OWL ViT                  CoDet
Model Type            Object Detection         Object Detection
Model Features        --                       --
Architecture          --                       --
Frameworks            --                       --
Annotation Format     Instance Segmentation    Instance Segmentation
GitHub Stars          --                       79
License               --                       Apache 2.0 License
Training Notebook     --                       --

Compare OWL ViT and CoDet with Autodistill

Using Autodistill, you can compare OWL ViT and CoDet on your own images in a few lines of code.

To run an example comparison, first install the required dependencies:


pip install autodistill autodistill-owl-vit autodistill-codet

Next, create a new Python file and add the following code:


from autodistill_owl_vit import OWLViT
from autodistill_codet import CoDet

from autodistill.detection import CaptionOntology
from autodistill.utils import compare

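# Map each text prompt (key) to the class label (value) you want in the results.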
ontology = CaptionOntology(
    {
        "solar panel": "solar panel",
    }
)

models = [
    OWLViT(ontology=ontology),
    CoDet(ontology=ontology)
]

images = [
    "/home/user/autodistill/solarpanel1.jpg",
    "/home/user/autodistill/solarpanel2.jpg"
]

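# compare() runs each model on every image and plots the predictions side by side.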
compare(
    models=models,
    images=images
)

Above, replace the paths in the `images` list with the images you want to use. The image paths must be absolute.
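
If you would rather not hard-code each path, you can build the `images` list from a local folder and resolve every entry to an absolute path. A small sketch, assuming your images are .jpg files in ./images:


from pathlib import Path

# Collect every .jpg in ./images and convert each path to an absolute path,
# since the comparison script above expects absolute paths.
images = [str(path.resolve()) for path in Path("./images").glob("*.jpg")]
print(images)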

Then, run the script.

You should see a side-by-side comparison of each model's predictions on your images.

When you have chosen the model that works best for your use case, you can use it as the base model to auto-label a folder of images:


# Use your chosen model as the base model; OWLViT is shown here as an example.
base_model = OWLViT(ontology=ontology)

# Label every .jpg in ./images and write the annotated dataset to ./dataset.
base_model.label(
  input_folder="./images",
  output_folder="./dataset",
  extension=".jpg"
)