How to Detect Exploding Gradients in Neural Networks

Exploding gradients are the opposite of the vanishing gradient problem: instead of gradients shrinking toward zero, they become extremely large during training. This makes the model unstable and unable to learn from the training data. In this post, we will understand the problem of exploding gradients in deep artificial neural networks. Let's begin.

Overview

In this post, we will cover:

- What exploding gradients are and what causes them.
- How to tell whether a model has an exploding gradient problem.
- How to fix the exploding gradient problem.

1 - What are Exploding Gradients?

The exploding gradient problem happens when the gradients in a neural network become so large that they destabilize the training process. During backpropagation, the gradient of the loss function w.r.t. the network's parameters (such as weights and biases) becomes extremely large. When the gradient grows too large, it leads to numerical instability and makes it difficult to train the network effectively. Essentially, the updates to the parameters become so large that they cause the network's parameters to "explode," meaning they grow uncontrollably. This results in unpredictable behavior during training, making it hard for the network to converge to a solution and hindering its ability to learn meaningful patterns in the data.

2 - Understanding Exploding Gradients Through Example

Let's take the same example that we looked at for the vanishing gradient problem, and see what exploding gradients would look like:

Figure 1: Neural Network to predict if the person will take insurance or not

For example, if we try to calculate the gradient of loss w.r.t. weight , where d1 = and d2 is…
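Although the derivation above is cut off, the mechanism it points at is the chain rule: the gradient of the loss w.r.t. an early-layer weight is a product of per-layer terms (like d1 and d2 above), so if those terms are consistently larger than 1, the product grows exponentially with depth. Below is a minimal NumPy sketch of that effect; the depth, the weight range, and the identity-activation simplification are illustrative assumptions, not the insurance network from Figure 1:

```python
import numpy as np

# Hypothetical deep chain of layers, each contributing one factor to the
# chain-rule product. With identity activations, the gradient w.r.t. the
# first weight is (roughly) the product of the downstream weights.
np.random.seed(0)
depth = 50
weights = np.random.uniform(1.1, 1.5, size=depth)  # factors slightly > 1

gradient = 1.0
for d, w in enumerate(weights, start=1):
    gradient *= w  # multiply in one chain-rule factor per layer
    if d % 10 == 0:
        print(f"after {d:2d} layers: gradient factor = {gradient:.3e}")
```

Even with factors only slightly above 1, the accumulated gradient factor reaches roughly 10^5 by 50 layers, which is why deeper networks are especially prone to this problem.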

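As the title of the post suggests, the practical question is how to detect this during training. A common check is to inspect gradient norms after a backward pass: sudden jumps to very large values, or NaN/inf, are the classic symptoms. The sketch below uses PyTorch; the toy two-layer model, the random batch, and the 1e3 threshold are placeholders for illustration only:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy model and batch; swap in your own network and data.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
x, y = torch.randn(32, 4), torch.randn(32, 1)

loss = F.mse_loss(model(x), y)
loss.backward()

# Inspect per-parameter gradient norms after backward().
total_sq = 0.0
for name, p in model.named_parameters():
    if p.grad is not None:
        norm = p.grad.norm().item()
        total_sq += norm ** 2
        print(f"{name}: grad norm = {norm:.4f}")

total_norm = math.sqrt(total_sq)
print(f"total grad norm = {total_norm:.4f}")

# A very large or non-finite total norm is the classic symptom.
# The 1e3 threshold is an arbitrary placeholder, not a universal rule.
if not math.isfinite(total_norm) or total_norm > 1e3:
    print("Warning: gradients may be exploding")
```

Logging this total norm once per training step (or per epoch) makes it easy to spot the sharp upward spikes that precede a divergent loss.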