From the course: MLOps Essentials: Model Deployment and Monitoring
Deployment pipelines
- [Instructor] How do we deploy an ML solution with its model, ML, and non-ML components? Let's discuss the pipeline options in this video. We will start with deploying embedded ML services. As part of model training, an approved model is delivered, which usually resides in a model registry. Code for the ML functions as well as the non-ML functions of the solution sits in the code repo. The build process then integrates the model, the ML functions, and the non-ML functions into a single executable and packages it into a deployable format. The package is then deployed to a test setup and put through sanity and regression testing. After successful testing, it is deployed into production. How does deploying distributed ML services work? As in the case of embedded services, the model and the code sit in their respective stores. However, each artifact is built and tested independently. Each of them ends up being an independently deployable…
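To make the embedded pattern concrete, here is a minimal sketch in Python, assuming an approved model exported from the registry as a joblib file and a FastAPI application that bundles the ML prediction function and a non-ML health-check function into one deployable package. The file name, routes, and field names are illustrative assumptions, not part of the course.

```python
# Sketch of the embedded-ML-service pattern (illustrative only).
# Assumes the build step copies the approved model artifact ("model.joblib")
# next to the application code, so ML and non-ML functions ship together.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Model loaded from the artifact packaged into this same deployable unit.
model = joblib.load("model.joblib")

class Features(BaseModel):
    values: list[float]

@app.post("/predict")  # ML function: wraps the packaged model
def predict(features: Features):
    return {"prediction": model.predict([features.values]).tolist()}

@app.get("/health")  # non-ML function: deployed in the same package
def health():
    return {"status": "ok"}
```

In the distributed variant, the /predict route would instead call a separately deployed model-serving endpoint (for example over HTTP), so the model service and the application code can be built, tested, and released through independent pipelines.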