We ran seven tests across five state-of-the-art Large Multimodal Models (LMMs) on November 23rd, 2023. Qwen-VL and CogVLM each passed five of the seven tests.
Both models have their strengths and weaknesses. We encourage you to experiment with them on your own data to learn more about how they work.
For example, CogVLM passed a document VQA task that involved asking for the price of a menu item, which Qwen-VL was unable to answer. With that said, CogVLM hallucinated an answer to a question about a movie scene, which Qwen-VL answered correctly.
Here are the results:
Based on our tests, Qwen-VL and CogVLM both perform well on a range of multimodal tasks.
Download the raw image results from our analysis.