From the course: Deep Learning: Getting Started

Measuring accuracy and error

- [Instructor] Accuracy and error are complementary terms used to represent the gap between the predicted values and the actual values of the target variables. As we go through forward propagation, we end up with a set of y-hat values that then need to be compared with the actual values of y to compute the error. For computing error, we use two functions, namely, the loss function and the cost function. A loss function measures the prediction error for a single sample; a cost function measures the error across a set of samples. The cost function provides an averaging effect over all the errors found on the training dataset. The terms loss function and cost function are used almost interchangeably to measure the average error over a set of samples. There are a number of popular cost functions available, and they are implemented in all popular deep learning libraries. The Mean Squared Error, or MSE, measures error for regression problems. It computes the difference between the predicted and the actual values, squares the differences, sums them across all samples, and finally divides by the number of samples. The Root Mean Squared Error, or RMSE, is more popular, as it provides error values on the same scale as the target variables. It is also used for regression problems. For binary classification, we use Binary Cross-Entropy to compute the error. A similar function called Categorical Cross-Entropy exists for multi-class classification problems. If you wish to understand the math behind these cost functions, I would recommend referring to additional documentation. So with the cost function, how do we measure accuracy? We send a set of samples through the ANN for forward propagation and predict the outcomes. We estimate the prediction error between the predicted outcome and the expected outcome using a cost function. We then use backward propagation to adjust the weights and biases in the model based on the error obtained. We will discuss backward propagation in the next video.
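To make the cost functions above concrete, here is a minimal NumPy sketch of MSE, RMSE, and binary cross-entropy. The function names and the small example arrays are illustrative, not from the course.

```python
import numpy as np

def mean_squared_error(y, y_hat):
    """MSE: average of the squared differences between actual and predicted values."""
    return np.mean((y - y_hat) ** 2)

def root_mean_squared_error(y, y_hat):
    """RMSE: square root of MSE, so the error is on the same scale as the target."""
    return np.sqrt(mean_squared_error(y, y_hat))

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """Binary cross-entropy for 0/1 labels and predicted probabilities."""
    y_hat = np.clip(y_hat, eps, 1 - eps)  # clip to avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Regression example: actual vs. predicted target values (made-up numbers)
y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.8, 5.4, 2.1])
print("MSE :", mean_squared_error(y_true, y_pred))
print("RMSE:", root_mean_squared_error(y_true, y_pred))

# Binary classification example: actual labels vs. predicted probabilities
labels = np.array([1, 0, 1, 1])
probs  = np.array([0.9, 0.2, 0.7, 0.6])
print("BCE :", binary_cross_entropy(labels, probs))
```

In practice you rarely implement these by hand; libraries such as Keras provide them directly (for example, tf.keras.losses.MeanSquaredError, tf.keras.losses.BinaryCrossentropy, and tf.keras.losses.CategoricalCrossentropy).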