You can use BioCLIP to classify images of organisms, such as plants, animals, and fungi.
Paper abstract:
Images of the natural world, collected by a variety of cameras, from drones to individual phones, are increasingly abundant sources of biological information. There is an explosion of computational methods and tools, particularly computer vision, for extracting biologically relevant information from images for science and conservation. Yet most of these are bespoke approaches designed for a specific task and are not easily adaptable or extendable to new questions, contexts, and datasets. A vision model for general organismal biology questions on images is of timely need. To approach this, we curate and release TreeOfLife-10M, the largest and most diverse ML-ready dataset of biology images. We then develop BioCLIP, a foundation model for the tree of life, leveraging the unique properties of biology captured by TreeOfLife-10M, namely the abundance and variety of images of plants, animals, and fungi, together with the availability of rich structured biological knowledge. We rigorously benchmark our approach on diverse fine-grained biology classification tasks, and find that BioCLIP consistently and substantially outperforms existing baselines (by 17% to 20% absolute). Intrinsic evaluation reveals that BioCLIP has learned a hierarchical representation conforming to the tree of life, shedding light on its strong generalizability. Our code, models and data will be made available at this https URL.
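BioCLIP is a CLIP-style model: it embeds an image and a set of candidate text labels into a shared space and ranks the labels by cosine similarity, which is what makes zero-shot classification over taxonomic names possible. As a rough sketch of that mechanism, the snippet below loads the released checkpoint with the open_clip library (the imageomics/bioclip Hub id comes from the BioCLIP release; the label strings and image path are illustrative):

import torch
import open_clip
from PIL import Image

# load the BioCLIP checkpoint from the Hugging Face Hub
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip")

# candidate labels; BioCLIP was trained on taxonomic names,
# so scientific names are a natural choice of prompt
labels = ["Coffea arabica", "Coffea canephora"]
text = tokenizer(labels)
image = preprocess(Image.open("arabica.jpeg")).unsqueeze(0)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # normalize, then score by cosine similarity and softmax into probabilities
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))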
BioCLIP is licensed under an MIT license.
You can use Roboflow Inference to deploy a BioCLIP API on your own hardware. The model runs on CPU devices (e.g. Raspberry Pi, AI PCs) and on GPU devices (e.g. NVIDIA Jetson, NVIDIA T4).
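If you prefer to load the checkpoint directly rather than serve it behind an API, device placement follows the usual PyTorch pattern. A minimal sketch, assuming the same open_clip checkpoint as above:

import torch
import open_clip

# pick a GPU when one is available (e.g. Jetson, T4), otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
model = model.to(device).eval()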
Below are instructions on how to deploy your own model API.
First, install Autodistill and Autodistill BioCLIP:
pip install autodistill autodistill-bioclip
Then, run:
from autodistill_bioclip import BioCLIP
from autodistill.detection import CaptionOntology

# define an ontology to map class names to our BioCLIP prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
classes = ["arabica", "robusta"]

base_model = BioCLIP(
    ontology=CaptionOntology({item: item for item in classes})
)

# run inference on a single image
results = base_model.predict("../arabica.jpeg")

# take the top prediction and map its class id back to a label
top = results.get_top_k(1)
top_class = classes[top[0][0]]
print(f"Predicted class: {top_class}")