From the course: Deep Learning: Getting Started

Gradient descent

- Gradient descent is the process of repeating forward and backward propagation in order to reduce the error and move closer to the desired model. To recollect, one run of forward propagation results in predicting the outcomes based on the weights and biases. We compute the error using a cost function. We then use back propagation to propagate the error back and adjust the weights and biases. This is one pass of learning. We now have to repeat this pass again and again, as the weights and biases get refined and the error gets reduced. This is called gradient descent. In gradient descent, we repeat the learning process of forward propagation, estimating the error, backward propagation, and adjusting the weights and biases. As we do this, the overall error estimated by the cost function will oscillate and move closer to zero. We keep measuring the error and computing deltas that would minimize the error contribution of individual nodes. There are situations where the error will stop reducing, and there are additional hyperparameters to control that. There are also hyperparameters to speed up or slow down the learning process. We will discuss them in the follow-up course, Deep Learning: Model Optimization and Tuning.
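To make that loop concrete, here is a minimal sketch in NumPy of a gradient descent loop for a single linear unit with a mean squared error cost. The synthetic data, learning rate, and number of passes are illustrative assumptions, not values from the course.

```python
import numpy as np

# Minimal sketch of the gradient descent loop: forward propagation,
# error estimation with a cost function, backward propagation, and
# weight/bias adjustment, repeated over many passes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # 100 samples, 3 features (assumed)
y = X @ np.array([1.5, -2.0, 0.5]) + 1.0      # synthetic targets (assumed)

weights = rng.normal(size=3)
bias = 0.0
learning_rate = 0.1                           # hyperparameter controlling step size

for epoch in range(200):                      # one pass = forward + backward propagation
    # Forward propagation: predict outcomes from current weights and bias
    predictions = X @ weights + bias

    # Cost function: mean squared error between predictions and targets
    error = predictions - y
    cost = np.mean(error ** 2)

    # Backward propagation: gradients of the cost w.r.t. weights and bias
    grad_weights = 2 * X.T @ error / len(y)
    grad_bias = 2 * error.mean()

    # Adjust weights and bias in the direction that reduces the error
    weights -= learning_rate * grad_weights
    bias -= learning_rate * grad_bias

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  cost {cost:.4f}")
```

Running this prints a cost that shrinks toward zero over the passes, which mirrors the behavior described above: each repetition of forward and backward propagation refines the weights and biases and reduces the error.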
