Applying dropout regularization to a deep learning model

From the course: Deep Learning with Python: Optimizing Deep Learning Models

- [Instructor] In this video, you will learn how to apply dropout regularization to a deep learning model in order to reduce overfitting. I'll be writing the code in the 02_07e file. You can follow along by completing the empty code cells in the 02_07b file. Make sure to run the previously written code to import and preprocess the data, as well as to build and train the baseline model. I've already done so. Looking at the training and validation loss curves, we see that the baseline model overfits against the training data: a clear indicator of overfitting is the divergence between the training and validation loss, which is visible in the curves above.

To help minimize overfitting, let's apply dropout regularization to the baseline model. Dropout regularization randomly deactivates a fraction of neurons during training, which forces the network to learn robust features that do not depend too heavily on any specific neuron. To apply dropout to the baseline model, we simply add a dropout layer after each of the hidden layers in our network. Here we specify the dropout rate as 0.5, which means that 50% of the neurons will be zeroed out during each forward pass. To begin, we import Dropout from tensorflow.keras.layers. Then, when we define the structure of our model, we include a Dropout layer between each of the dense layers. Let's go ahead and run our code.

Next, we compile the regularized model. Then we train the regularized model against our data. Note that we specified 15 epochs, so it's going to go through 15 training cycles. The batch size is 128, and the validation split is 0.1. Let's give our model some time to train. We're in the 13th epoch, then the 14th, and now we're in the final epoch, 15. Now that the model is done training, we can plot the training and validation loss to see what impact dropout had on the model. This time we see that the training and validation loss start off a bit divergent but begin to converge as training continues. This indicates that dropout regularization is helping the model generalize better by preventing it from overfitting to the training data. Excellent work. You now know how to use dropout regularization to reduce overfitting in a deep learning model in Python.
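To make the steps concrete, here is a minimal sketch of the workflow described above. The layer sizes, the 784-feature input shape, and the ten-class softmax output are assumptions for illustration (the actual architecture is the baseline model built earlier in the course files), and X_train/y_train stand in for the preprocessed training arrays from the earlier cells; the 0.5 dropout rate, 15 epochs, batch size of 128, and 0.1 validation split are the values stated in the video.

```python
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout

# Hypothetical architecture: the input shape, hidden-layer sizes, and
# ten-class output are placeholders -- the real ones come from the
# baseline model built earlier in the notebook.
model = Sequential([
    Input(shape=(784,)),
    Dense(128, activation="relu"),
    Dropout(0.5),  # zero out 50% of this layer's outputs on each forward pass
    Dense(64, activation="relu"),
    Dropout(0.5),
    Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training settings from the video: 15 epochs, batch size 128, and 10% of
# the training data held out for validation. X_train and y_train are
# assumed to be the preprocessed arrays from the earlier cells.
history = model.fit(X_train, y_train,
                    epochs=15,
                    batch_size=128,
                    validation_split=0.1)

# Plot training vs. validation loss to check whether the curves converge.
plt.plot(history.history["loss"], label="Training loss")
plt.plot(history.history["val_loss"], label="Validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()
```

One note on the design: Keras implements inverted dropout, so at training time each Dropout layer scales the surviving activations by 1/(1 - rate), and no adjustment is needed at inference, when dropout is automatically disabled.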
