OWLv2 vs. CoDet

Both OWLv2 and CoDet are commonly used in computer vision projects. Below, we compare and contrast OWLv2 and CoDet.

Models

OWLv2

OWLv2 is a transformer-based object detection model developed by Google Research, and is the successor to OWL-ViT.
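
As a rough sketch of how OWLv2 can be run on its own for zero-shot detection, the snippet below uses the Hugging Face Transformers OWLv2 classes; the checkpoint name, image path, text query, and confidence threshold are illustrative assumptions, not values from this comparison.

import torch
from PIL import Image
from transformers import Owlv2Processor, Owlv2ForObjectDetection

# load an OWLv2 checkpoint (checkpoint name is an assumption)
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

image = Image.open("solarpanel1.jpg")
texts = [["solar panel"]]  # one list of text queries per image

inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert model outputs to boxes in pixel coordinates, filtered by confidence
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.1
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(texts[0][int(label)], round(float(score), 2), box.tolist())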

CoDet

CoDet is an open-vocabulary, zero-shot object detection model.
Feature           | OWLv2                 | CoDet
Model Type        | Object Detection      | Object Detection
Model Features    | --                    | --
Architecture      | --                    | --
Frameworks        | --                    | --
Annotation Format | Instance Segmentation | Instance Segmentation
GitHub Stars      | --                    | 79
License           | --                    | Apache 2.0 License
Training Notebook | --                    | --

Compare OWLv2 and CoDet with Autodistill

Using Autodistill, you can compare OWLv2 and CoDet on your own images in a few lines of code.

To start a comparison, first install the required dependencies:


pip install autodistill autodistill-owlv2 autodistill-codet

Next, create a new Python file and add the following code:


from autodistill_owlv2 import OWLv2
from autodistill_codet import CoDet

from autodistill.detection import CaptionOntology
from autodistill.utils import compare

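# map the prompt sent to each model to the label that will be saved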
ontology = CaptionOntology(
    {
        "solar panel": "solar panel",
    }
)

models = [
    OWLv2(ontology=ontology),
    CoDet(ontology=ontology)
]

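# absolute paths to the images on which to compare the models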
images = [
    "/home/user/autodistill/solarpanel1.jpg",
    "/home/user/autodistill/solarpanel2.jpg"
]

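# run each model on each image and plot the predictions for comparison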
compare(
    models=models,
    images=images
)

Above, replace the paths in the `images` list with the images you want to use.

The image paths must be absolute.
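
If your images live in a single folder, a small helper like the one below can build the list of absolute paths for you; the folder name and extension filter are assumptions for illustration.

import os

# collect absolute paths to every .jpg file in a local "images" folder
image_dir = os.path.abspath("images")
images = [
    os.path.join(image_dir, name)
    for name in sorted(os.listdir(image_dir))
    if name.lower().endswith(".jpg")
]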

Then, run the script.

You should see a side-by-side comparison of the predictions from each model.

When you have chosen the model that works best for your use case, you can auto-label a folder of images using the following code (shown here with OWLv2 as the chosen base model):


base_model = OWLv2(ontology=ontology)

base_model.label(
  input_folder="./images",
  output_folder="./dataset",
  extension=".jpg"
)