Putting machine learning (ML) models in production is often treated as an operational challenge to be tackled after all the hard work of training and optimizing the model is complete. In contrast, serverless ML starts with a minimal model, including the operational feature pipeline(s) and inference pipeline. By Jim Dowling.
In this article, we show that writing feature pipelines and inference pipelines does not have to be hard, and that if you don't have to configure and build the MLOps infrastructure yourself, getting to a minimal viable production model within a couple of weeks is feasible for most models. You will learn:
- The MVP for Machine Learning
- 3 Programs: Feature, Training, and Inference Pipelines
- When >90% will build complete ML systems
There is no such thing as a single machine learning pipeline: there are feature pipelines, training pipelines, and inference pipelines. If you structure your ML systems this way, you too will be able to quickly build an end-to-end working ML system that can be iteratively improved. Good read!
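As a rough illustration, the three-pipeline structure can be sketched as three independent programs that hand data to one another. The toy fraud data, feature names, and threshold "model" below are illustrative assumptions, not from the article; in practice each pipeline would read from and write to a feature store and model registry.

```python
# Toy raw data standing in for an operational event stream (assumed, not from the article).
RAW_EVENTS = [
    {"user": "a", "amount": 120.0, "is_fraud": 1},
    {"user": "b", "amount": 15.0, "is_fraud": 0},
    {"user": "c", "amount": 95.0, "is_fraud": 1},
    {"user": "d", "amount": 20.0, "is_fraud": 0},
]

def feature_pipeline(raw_events):
    """Turn raw events into (features, label) rows; in a real system,
    this would write to a feature store on a schedule."""
    return [({"amount": e["amount"]}, e["is_fraud"]) for e in raw_events]

def training_pipeline(feature_rows):
    """Train a deliberately trivial model: flag amounts at or above the
    mean amount of known-fraud rows."""
    fraud_amounts = [f["amount"] for f, label in feature_rows if label == 1]
    return {"threshold": sum(fraud_amounts) / len(fraud_amounts)}

def inference_pipeline(model, feature_rows):
    """Score new feature rows with the trained model; in a real system,
    this would read fresh features from the feature store."""
    return [1 if f["amount"] >= model["threshold"] else 0 for f in feature_rows]

# Each stage can run, be scheduled, and be improved independently.
features = feature_pipeline(RAW_EVENTS)
model = training_pipeline(features)
preds = inference_pipeline(model, [{"amount": 200.0}, {"amount": 10.0}])
print(preds)  # → [1, 0]
```

Because the pipelines only communicate through their inputs and outputs, each one can be swapped out or rerun on its own, which is what makes it practical to start minimal and iterate.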
[Read More]