Both LLaVA and BakLLaVA are commonly used in computer vision projects. Below, we compare and contrast LLaVA and BakLLaVA.
We ran seven tests across five state-of-the-art Large Multimodal Models (LMMs) on November 23rd, 2023. LLaVA and BakLLaVA each passed one of the seven tests. Based on our tests, we assess that both LLaVA and BakLLaVA, while notable models, do not perform as well as other LMMs such as Qwen-VL and CogVLM.
LLaVA is an open-source multimodal language model that you can use for visual question answering; it also has limited support for object detection.
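To illustrate, here is a minimal sketch of visual question answering with LLaVA through the Hugging Face transformers integration. The checkpoint name, prompt template, and image URL are assumptions for illustration, not the exact setup used in our tests.

```python
# Minimal LLaVA visual question answering sketch via Hugging Face transformers.
# Checkpoint name, prompt template, and image URL are illustrative assumptions.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed LLaVA 1.5 checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# Load an image to ask a question about (placeholder URL).
image = Image.open(requests.get("https://example.com/image.jpg", stream=True).raw)

# LLaVA 1.5 prompts place an <image> token where the image features go.
prompt = "USER: <image>\nWhat objects are in this image? ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```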
BakLLaVA is an LMM developed by LAION, Ontocord, and Skunkworks AI. BakLLaVA uses a Mistral 7B base augmented with the LLaVA 1.5 architecture.
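Because BakLLaVA follows the LLaVA 1.5 architecture, the same transformers code path should load it with only the checkpoint swapped; the checkpoint name below is an assumption for illustration.

```python
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumed checkpoint name. BakLLaVA pairs a Mistral 7B base with the
# LLaVA 1.5 architecture, so the same LLaVA model class applies.
model_id = "llava-hf/bakLlava-v1-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)
```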