Grounding DINO vs. LLaVA

Both Grounding DINO and LLaVA-1.5 are commonly used in computer vision projects. Below, we compare and contrast Grounding DINO and LLaVA-1.5.

Models


Grounding DINO

Grounding DINO is a state-of-the-art zero-shot object detection model, developed by IDEA Research.
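
To try Grounding DINO on your own images, one option is to call it through Autodistill. The sketch below assumes the autodistill and autodistill-grounding-dino packages are installed; the class prompts and image path are placeholders.

```python
# Zero-shot object detection with Grounding DINO via Autodistill.
# Assumes: pip install autodistill autodistill-grounding-dino
from autodistill.detection import CaptionOntology
from autodistill_grounding_dino import GroundingDINO

# Map free-text prompts to the class names you want in your results.
base_model = GroundingDINO(
    ontology=CaptionOntology({"person": "person", "car": "car"})  # placeholder classes
)

# "image.jpg" is a placeholder path; predict() returns supervision Detections.
detections = base_model.predict("image.jpg")
print(detections)
```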

LLaVA-1.5

LLaVA-1.5 is an open source multimodal language model that you can use for visual question answering; it has limited support for object detection.
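
To ask LLaVA-1.5 a question about an image, you can use the Hugging Face Transformers integration. The sketch below assumes the llava-hf/llava-1.5-7b-hf checkpoint; the image path and question are placeholders, and a GPU is recommended for reasonable latency.

```python
# Visual question answering with LLaVA-1.5 via Hugging Face Transformers.
# Assumes: pip install transformers torch pillow
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("image.jpg")  # placeholder path
prompt = "USER: <image>\nHow many cars are in this image? ASSISTANT:"  # placeholder question

inputs = processor(images=image, text=prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```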
|                   | Grounding DINO        | LLaVA-1.5             |
|-------------------|-----------------------|-----------------------|
| Model Type        | Object Detection      | Object Detection      |
| Model Features    | --                    | --                    |
| Architecture      | --                    | --                    |
| Annotation Format | Instance Segmentation | Instance Segmentation |
| Framework         | --                    | --                    |
| GitHub Stars      | 4.6k+                 | 16,000                |
| License           | Apache-2.0            | Apache-2.0            |
| Training Notebook | --                    | --                    |

Compare Grounding DINO and LLaVA-1.5 with Autodistill
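
One way to compare the two models on your own data is to auto-label a folder of images with Grounding DINO using Autodistill, then spot-check the labels against LLaVA-1.5's answers on the same images. The sketch below covers the labeling step and assumes the autodistill-grounding-dino package; the folder paths and class prompts are placeholders.

```python
# Auto-label a folder of images with Grounding DINO using Autodistill.
# The resulting dataset can then be reviewed, e.g. against LLaVA-1.5 answers.
# Assumes: pip install autodistill autodistill-grounding-dino
from autodistill.detection import CaptionOntology
from autodistill_grounding_dino import GroundingDINO

base_model = GroundingDINO(
    ontology=CaptionOntology({"car": "car", "person": "person"})  # placeholder classes
)

# "./images" and "./dataset" are placeholder folders.
base_model.label(
    input_folder="./images",
    output_folder="./dataset",
    extension=".jpg",
)
```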


Compare Grounding DINO vs. LLaVA

Provide your own image below to test Grounding DINO and LLaVA-1.5 on the kinds of objects found in the Microsoft COCO dataset.

COCO covers 80 common object classes, including cats, cell phones, and cars.