Use the widget below to experiment with OpenAI o3-mini. You can detect COCO classes such as people, vehicles, animals, and household items.
On January 31st, 2025, OpenAI released o3-mini, the latest model in their reasoning model series. OpenAI o3-mini has been "optimized for STEM reasoning" and achieves "clearer answers, with stronger reasoning abilities" than the o1 model series released last year. You can upload one or more images with an o3-mini prompt and ask questions that use your images as context.
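For example, here is a minimal sketch of sending an image to the model through the OpenAI Python SDK. It assumes image inputs are passed using the standard chat completions image content format; the file path and prompt are illustrative.

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local image as base64 (the path is illustrative)
with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What objects do you see in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```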
o3-mini comes in three versions, each corresponding to a different reasoning effort level:

- o3-mini (low)
- o3-mini (medium)
- o3-mini (high)
All of these models are designed for complex reasoning, where the model uses chains of thought to analyze a question before rendering an answer. This can lead to more thoughtful answers than the output offered by non-reasoning models.
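When calling the model through the OpenAI API, you select between these versions with the `reasoning_effort` parameter. Here is a minimal sketch; the prompt is illustrative.

```python
from openai import OpenAI

client = OpenAI()

# The three versions map to the reasoning_effort parameter,
# which accepts "low", "medium", or "high"
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)
print(response.choices[0].message.content)
```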
Each version achieves progressively better scores on the benchmarks OpenAI used to evaluate the model. You can read more about the benchmarks run on the model in OpenAI's o3-mini launch post. Here is an example showing o3-mini (high) achieving better performance than o1-mini on all tested tasks.
Here is how the model performs compared to other multimodal models we have tested:
OpenAI o3-mini is licensed under a proprietary license.
You can use Roboflow Inference to deploy an OpenAI o3-mini API on your hardware. You can deploy the model on CPU devices (e.g., Raspberry Pi, AI PCs) and GPU devices (e.g., NVIDIA Jetson, NVIDIA T4).
Below are instructions on how to deploy your own model API.
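As a starting point, here is a sketch of querying a locally hosted Inference server with the `inference_sdk` Python package. The workspace and workflow identifiers are placeholders for a Workflow that wraps an o3-mini prompt; substitute your own values.

```python
# Sketch: query a locally hosted Roboflow Inference server.
# Assumes you have started the server first, e.g. with:
#   pip install inference-cli && inference server start
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # local Inference server
    api_key="YOUR_ROBOFLOW_API_KEY",
)

# "workspace-name" and "workflow-id" are placeholders for a Workflow
# that wraps an o3-mini prompt block
result = client.run_workflow(
    workspace_name="workspace-name",
    workflow_id="workflow-id",
    images={"image": "image.jpg"},
)
print(result)
```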