Large Multimodal Models (LMMs) are commonly used in computer vision projects. Below, we compare and contrast two LMMs: Qwen-VL and CogVLM.
We ran seven tests across five state-of-the-art Large Multimodal Models (LMMs) on November 23rd, 2023. Qwen-VL passed five of the seven tests, and CogVLM also passed five of the seven tests.
Both models have their strengths and weaknesses. We encourage you to experiment with them on your own data to learn more about how they work.
For example, CogVLM passed a document VQA task that involved asking for the price of a menu item, a question Qwen-VL was unable to answer. With that said, CogVLM hallucinated an answer to a question about a movie scene, which Qwen-VL answered correctly.
Here are the results:
Based on our tests, Qwen-VL and CogVLM both perform well on a range of multimodal tasks.
Qwen-VL is an LMM developed by Alibaba Cloud. Qwen-VL accepts images, text, and bounding boxes as inputs. The model can output text and bounding boxes. Qwen-VL naturally supports English, Chinese, and multilingual conversation.
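Because Qwen-VL exchanges bounding boxes as tagged text rather than structured objects, it helps to see how that looks in practice. Below is a minimal sketch assuming the `<img>`, `<ref>`, and `<box>` tag conventions described in the Qwen-VL report (coordinates normalized to a 0–1000 grid); `build_prompt` and `parse_boxes` are hypothetical helpers for illustration, not part of any official SDK.

```python
import re

def build_prompt(image_path: str, question: str) -> str:
    """Wrap an image reference and a question in Qwen-VL's <img> tag format."""
    return f"<img>{image_path}</img>{question}"

def parse_boxes(text: str) -> list:
    """Extract <ref>label</ref><box>(x1,y1),(x2,y2)</box> pairs from a
    Qwen-VL grounding response. Coordinates are on a 0-1000 grid, so they
    must be rescaled to the source image's pixel dimensions before plotting."""
    pattern = r"<ref>(.*?)</ref><box>\((\d+),(\d+)\),\((\d+),(\d+)\)</box>"
    return [
        {"label": label, "box": (int(x1), int(y1), int(x2), int(y2))}
        for label, x1, y1, x2, y2 in re.findall(pattern, text)
    ]

# Hypothetical example: format a question, then parse a grounded response.
prompt = build_prompt("menu.jpg", "What is the price of the pastrami sandwich?")
response = "<ref>pastrami sandwich</ref><box>(120,340),(560,900)</box>"
print(parse_boxes(response))
```

This text-tag convention is what lets Qwen-VL both accept and emit bounding boxes through an ordinary chat interface, without a separate detection head.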