How to deploy a machine learning model with FastAPI, Docker and GitHub Actions


You’ve just trained a model and you’re happy with it: it performs well on your local cross-validation. Now it’s time to put this model into production so that other teams within your organization can consume it and embed it in their applications. A tutorial by Ahmed Besbes.

The tutorial also covers:

  • Introduction to production machine learning and APIs
  • A quick overview of FastAPI features
  • Using FastAPI and spaCy to build an inference API (a minimal sketch of such an endpoint follows this list)
  • Packaging the API with Docker and docker-compose
  • Deploying the API to AWS EC2 and automating the process with a GitHub Actions CI/CD pipeline
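
To give a rough idea of the FastAPI + spaCy part, here is a minimal sketch of an inference endpoint. This is not the article’s code: the pipeline name (`en_core_web_sm`), the `/entities` route, and the request/response schemas are assumptions, and it presumes FastAPI, spaCy, and the model are already installed.

```python
# Minimal sketch of a spaCy NER endpoint served with FastAPI.
# Model name, route, and schemas are assumptions, not the tutorial's exact code.
from fastapi import FastAPI
from pydantic import BaseModel
import spacy

app = FastAPI(title="Inference API")
nlp = spacy.load("en_core_web_sm")  # assumes the pipeline was downloaded beforehand


class TextRequest(BaseModel):
    text: str


class Entity(BaseModel):
    text: str
    label: str


@app.post("/entities", response_model=list[Entity])
def extract_entities(request: TextRequest):
    """Run the spaCy pipeline on the input text and return its named entities."""
    doc = nlp(request.text)
    return [Entity(text=ent.text, label=ent.label_) for ent in doc.ents]
```

Locally, such an app would typically be served with `uvicorn main:app --host 0.0.0.0 --port 8000`; in the tutorial the same idea is packaged with Docker and docker-compose.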

Broadly speaking, putting a model into production means integrating it into an existing IT environment and making it available to other teams to use and consume. The tutorial also includes links to further resources and reading. Good read!
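
Once the API is deployed (in the tutorial, on an AWS EC2 instance), consuming teams only need its HTTP endpoint. As a hypothetical usage example, assuming the sketch above is reachable at a host and port of your choosing:

```python
# Hypothetical consumer of the deployed API; the host, port, and route are assumptions.
import requests

response = requests.post(
    "http://ec2-host.example.com:8000/entities",
    json={"text": "Ahmed Besbes deployed a spaCy model on AWS EC2."},
    timeout=10,
)
response.raise_for_status()
for entity in response.json():
    print(entity["text"], "->", entity["label"])
```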

[Read More]

Tags machine-learning learning big-data devops agile