Autodistill allows you to use state-of-the-art foundation models that know a lot about a variety of objects to label data for your project. You can then train a new model with your labeled data. This whole process uses around a dozen lines of code.
To learn more about how autodistill works, read our overview guide or watch our YouTube tutorial.
In this guide, we will show you how to use Kosmos-2 as a base model to automatically label your images and then train a target model on the labeled data.
Let's get started!
First, install Autodistill and the required model dependencies:
pip install autodistill autodistill-kosmos2
Before you can label a dataset, you need a dataset with which to work.
Roboflow has a few resources that can help you create a dataset for your project:
You can use any folder of images you have on your local machine with Autodistill, too.
Autodistill has two model types: Base Models and Target Models. A Base Model is a large foundation model that labels your images from text prompts; a Target Model is a smaller, faster model that you train on the data the Base Model labels.
To label your dataset with a Base Model, you need to provide prompt(s) that are relevant to the classes you want to label.
Replace "example" below with the prompt you want to use. Replace "class" with the name of the class you want the prompt results to be saved as in your dataset. Also, replace IMAGE_NAME with the path to an image from your dataset.
The code cell below loads the base model with your prompt on the provided image, then visualizes the results.
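Here is a minimal sketch of that cell. It assumes the autodistill-kosmos2 package exposes a Kosmos2 class from an autodistill_kosmos_2 module and that the supervision library is available for visualization; check the package README if the import path differs.

```python
import cv2
import supervision as sv

from autodistill.detection import CaptionOntology
# Assumption: the autodistill-kosmos2 package exposes a Kosmos2 class
# from the autodistill_kosmos_2 module; adjust the import if it differs.
from autodistill_kosmos_2 import Kosmos2

IMAGE_NAME = "image.jpg"  # replace with the path to an image from your dataset

# Map each prompt to the class name its results should be saved as.
base_model = Kosmos2(ontology=CaptionOntology({"example": "class"}))

# Run the base model on one image and visualize the detections.
detections = base_model.predict(IMAGE_NAME)

image = cv2.imread(IMAGE_NAME)
annotated_image = sv.BoxAnnotator().annotate(scene=image.copy(), detections=detections)
sv.plot_image(annotated_image)
```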
You may need to experiment with a few prompts.
To start labeling your images, run the following code:
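The sketch below uses placeholder folder paths ("./images" and "./dataset"); point them at your own data. The label() call runs the base model over every image in the input folder and writes an annotated dataset to the output folder.

```python
# Label every image in the input folder using the prompts in the ontology.
# The folder paths here are placeholders; replace them with your own.
dataset = base_model.label(
    input_folder="./images",
    extension=".jpg",
    output_folder="./dataset",
)
```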
To train a target model using your newly-labeled dataset, run the following code:
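This guide does not fix a specific target model, so the sketch below assumes YOLOv8 via the autodistill-yolov8 package (installed separately with pip install autodistill-yolov8); swap in whichever target model you are using, and adjust the data.yaml path if your labeled dataset was written elsewhere.

```python
from autodistill_yolov8 import YOLOv8

# Train a YOLOv8 target model on the dataset written by base_model.label().
# "./dataset/data.yaml" assumes the output folder used in the labeling step.
target_model = YOLOv8("yolov8n.pt")
target_model.train("./dataset/data.yaml", epochs=200)
```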
After running this cell, you will have model weights that you can use to run inference on your new model.
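As a quick check, you can load the trained weights and run inference on a test image. This sketch assumes a YOLOv8 target model and the default Ultralytics output path (runs/detect/train/weights/best.pt); adjust both to match your training run.

```python
from ultralytics import YOLO

# Load the best checkpoint from training; the path below is the
# default Ultralytics save location and may differ on your machine.
model = YOLO("runs/detect/train/weights/best.pt")

# Run inference on a test image and print the detected boxes.
results = model.predict("IMAGE_NAME.jpg", conf=0.25)
print(results[0].boxes)
```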
You can deploy your trained model to Roboflow. By deploying your model to Roboflow, you can run inference on our infinitely scalable API. As your inference demands grow, you will continue to see high levels of performance thanks to autoscaling infrastructure that is always on.
Roboflow offers:
To deploy your model to Roboflow, run the following code:
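Here is a sketch of the deployment step, assuming a YOLOv8 target model and placeholder values for the API key, workspace, project, and version; replace them with your own.

```python
import roboflow

# Authenticate and select the project version to attach the weights to.
# All identifiers below are placeholders.
rf = roboflow.Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("YOUR_PROJECT")

# Upload the trained weights; model_path points at the training run folder.
project.version(1).deploy(model_type="yolov8", model_path="runs/detect/train/")
```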