BLIP (Bootstrapping Language-Image Pre-training) is a multimodal model developed by Salesforce Research. BLIP can be used for tasks such as image captioning and visual question answering. BLIP has been superseded by BLIP-2, also maintained by Salesforce Research.
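As a sketch of how visual question answering with BLIP looks in practice, the example below uses the Hugging Face Transformers library and the public `Salesforce/blip-vqa-base` checkpoint (both are assumptions here, not part of the Roboflow deployment described on this page; the weights are downloaded on first run):

```python
# Visual question answering with BLIP via Hugging Face Transformers.
# Assumes the packages `transformers`, `torch`, `Pillow`, and `requests`
# are installed; the checkpoint name is the public blip-vqa-base model.
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# A sample image from the COCO validation set, used here for illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

question = "How many cats are in the picture?"
inputs = processor(image, question, return_tensors="pt")

# Generate an answer token sequence and decode it to text.
out = model.generate(**inputs)
answer = processor.decode(out[0], skip_special_tokens=True)
print(answer)
```

The same processor/model pattern applies to captioning with `BlipForConditionalGeneration`: omit the question and the model generates a caption for the image instead.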
BLIP is licensed under a BSD-3-Clause license.
You can use Roboflow Inference to deploy a BLIP API on your own hardware. You can deploy the model on CPU devices (e.g., Raspberry Pi, AI PCs) and GPU devices (e.g., NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.