From the course: Full-Stack Deep Learning with Python
Full-stack deep learning, MLOps, and MLflow
- [Instructor] Hi, and welcome to this course on full-stack deep learning. In this movie, we'll understand what full-stack deep learning is all about, the role of MLOps in full-stack deep learning, and how we can implement MLOps using the MLflow framework. But first, what is full-stack deep learning? Full-stack deep learning refers to the comprehensive understanding and expertise in all components and stages of building and deploying deep learning systems. These components include everything in the deep learning lifecycle: data collection and pre-processing, model development, training, hyperparameter tuning, deployment, and monitoring. A full-stack deep learning practitioner is proficient in both the technical and practical aspects of each of these stages, everything that's involved in getting deep learning systems from prototype to production. As you might imagine, full-stack deep learning is a very vast topic and not really something you would cover in a single course. It's something you would typically cover over the course of a semester at a university.

Here are the steps involved in full-stack deep learning. The first step is planning and project setup, where you figure out what model you want to build. Next comes data collection and labeling, getting the data to train your model. Then we have model training and debugging. And finally, deploying, testing, and maintaining models. Let's quickly discuss what's involved in each of these phases.

Project planning and setup is the initial phase, where we define the project's goals, allocate resources, and choose the necessary tools and frameworks we're going to use in every step. This is where you'll establish the project environment, set up version control, and ensure that you have the infrastructure in place for all of the steps coming up next. Once you've figured out what kind of problem you're going to solve with your deep learning model, you move on to data collection and labeling. This is where you'll gather and pre-process the data required for training your model. You'll clean and format the data, annotate it if needed, label it, and get things ready for model development. Maybe you'll even split your data into training, validation, and test sets.

You'll then pick the right kind of model for your data, whether it's regression, classification, or some other kind of model. You'll design the architecture of your neural network and its layers, train the model on the data, monitor the model's performance, maybe perform hyperparameter tuning to get just the right structure for your model, fine-tune the model, adjust hyperparameters, and essentially get the model ready for deployment. Once your model is deployed, well, that's really just the beginning. This is where you're constantly evaluating your model in production, monitoring it and maintaining it, and possibly retraining it on new data. Machine learning models require constant updates so that they continue to perform well in the real world.

Full-stack deep learning is not just about taking models from planning and project setup through to deployment. It's an iterative process: you may need to go back to previous stages in order to refine and improve your model. As you can see, full-stack deep learning involves the entire lifecycle of a deep learning model. It's a very vast topic, and it's hard to cover everything in a single course.
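As a concrete illustration of the data-splitting step mentioned above, here is a minimal sketch using scikit-learn. The feature matrix, labels, and the 70/15/15 split ratios are hypothetical placeholders, not values from the course; the instructor's own data and splits may differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data: 1,000 samples, 20 features each, and a binary label.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# First carve out 15% of the data as a held-out test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=42)

# Then split the remainder again to create a validation set
# (0.1765 of the remaining 85% is roughly 15% of the original data).
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.1765, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # roughly 700 / 150 / 150
```

The training set is used to fit the model, the validation set to compare hyperparameter choices, and the test set is held back for a final, unbiased evaluation.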
A really integral part of full-stack deep learning is MLOps, and that's the part we're going to focus on in this course. MLOps, short for machine learning operations, is a set of practices and tools that aims to streamline and automate the end-to-end machine learning lifecycle. It bridges the gap between the development of machine learning models and their deployment into production environments. The term MLOps comes from DevOps, because MLOps borrows several concepts from development operations and applies them specifically to the machine learning workflow.

Here are some important components and concepts in MLOps. The first is version control. Just like in software development, MLOps emphasizes the importance of version control for machine learning code, data, and even models. MLOps incorporates continuous integration to automatically build, test, and validate machine learning models and code changes whenever new code is committed or new data is available. MLOps also incorporates continuous delivery, or continuous deployment, automating the deployment of machine learning models into production or staging environments without manual intervention. MLOps includes techniques for packaging machine learning models into containers or other deployment-ready formats, making it easier to deploy and manage models consistently across environments. Ongoing monitoring and logging of deployed models are also very important for detecting performance issues, data drift, and model degradation. Another key concept in MLOps is model versioning: keeping track of different versions of ML models, along with their associated data and configurations, is important for reproducibility and auditing.

There are several tools that come together to enable the MLOps workflow, and an important tool among these is MLflow. MLflow is an open-source platform for managing the machine learning lifecycle, designed to simplify and streamline the end-to-end process of developing, training, and deploying machine learning models. MLflow was developed by Databricks and is now part of the Linux Foundation. MLflow covers two important steps in the full-stack deep learning workflow: model training and hyperparameter tuning, and model deployment and serving. MLflow helps you with model tracking: you can log and track experiments and model runs, and record the parameters, metrics, and artifacts associated with your different model executions. MLflow enables you to package your machine learning code into projects; a project is just a directory containing code, data, and specification files that define dependencies and entry points, making it easier to organize and manage your ML code. MLflow also allows you to package and share machine learning models in a standardized format. It offers a centralized model registry, making model versioning and management easy. And finally, it contains everything that you need to deploy your model locally or on the cloud. In this course, we'll focus on understanding the full-stack deep learning workflow, and we'll get hands-on with MLflow for tracking experiment runs, packaging models, and deploying models on our local machine.
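To make the experiment-tracking idea concrete, here is a minimal sketch of logging a run with the MLflow tracking API. The experiment name, run name, parameter values, and metric values are placeholders for illustration, not values taken from the course.

```python
import mlflow

# Group related runs under a named experiment (name is a placeholder).
mlflow.set_experiment("full-stack-dl-demo")

with mlflow.start_run(run_name="baseline"):
    # Record the hyperparameters used for this run.
    mlflow.log_param("learning_rate", 0.001)
    mlflow.log_param("epochs", 10)

    # ... train your model here ...

    # Record the resulting metrics so runs can be compared later.
    mlflow.log_metric("val_accuracy", 0.93)

    # Attach any files the run produces (plots, configs, model files).
    # mlflow.log_artifact("training_curve.png")
```

Running `mlflow ui` from the same directory then lets you browse and compare the logged runs, parameters, and metrics in the MLflow tracking UI.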