In the olden days, most people rolled their own servers to expose their ML models to client applications. In fact, Roboflow Inference's HTTP interface and REST API are built on FastAPI.
In this day and age, it's certainly still possible to start from scratch, but you'll be reinventing the wheel and will run into a lot of footguns others have already solved along the way. It's usually better and faster to use one of the existing ML-focused servers.
Choose FastAPI or Flask if: your main goal is learning the intricacies of building an inference server yourself.
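To make "rolling your own" concrete, here is a minimal sketch of a hand-built inference endpoint. It uses only the Python standard library so it stays self-contained; a real server would more likely use FastAPI as described above, and the `predict` function here is a hypothetical stand-in for an actual model.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(pixels):
    # Hypothetical "model": classify by mean brightness.
    # A real server would load trained weights and run them here.
    mean = sum(pixels) / len(pixels)
    return {"label": "bright" if mean > 127 else "dark", "score": mean / 255}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/infer":
            self.send_error(404)
            return
        # Parse the JSON request body and run the model on it.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["pixels"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    # Start the server on a random free port, send one request, shut down.
    server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    req = urllib.request.Request(
        f"http://127.0.0.1:{server.server_port}/infer",
        data=json.dumps({"pixels": [200, 220, 180]}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
    server.shutdown()
```

Even this toy version hints at the footguns: request validation, batching, concurrency, and model lifecycle management are all left to you, which is exactly the work the ML-focused servers have already done.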
Triton is a powerhouse tool for machine learning experts to deploy ML models at scale. Its primary focus is on extremely optimized pipelines that run efficiently on NVIDIA hardware. It can be tough to use, trading off simplicity and a quick development cycle for raw speed, and is geared towards expert users. It can chain models together, but doing so is a rigid and manual process.
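To give a flavor of that manual process: every model served by Triton needs a `config.pbtxt` file describing its inputs and outputs. The sketch below is illustrative only; the model name, backend, and tensor shapes are assumptions, not taken from any particular deployment.

```protobuf
# config.pbtxt -- hypothetical ONNX image classifier served by Triton
name: "resnet50"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Chaining models means writing an additional "ensemble" configuration that wires one model's output tensors to the next model's inputs by name, which is where the rigidity shows up.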
Inference turns any computer or edge device into a command center for your computer vision projects.