How to Handle Overfitting With Regularization

Overfitting occurs when a model learns the training data too well. In other words, the model attempts to memorize the training dataset, and in doing so it also captures the noise in the data.

Learning data points that are present by random chance and do not represent the true properties of the data makes the model overly flexible.

As a result, the model performs very well on the training data but fails to perform well on unseen data.

Overfitting is like training your dog to lie down when you whistle: after some practice it learns the trick perfectly, but it only lies down for your whistle and not anyone else's, because it was trained on your whistle alone.

The same thing happens to your model when it is not trained correctly or when there is not enough variation in the training data.

The model works perfectly on the training dataset but fails to generalize to unseen data.

How to identify Overfitting?

Some of the signals to look for when checking whether a model is overfitting are:

  • Train and test loss comparison
  • Prediction graph
  • Leveraging cross-validation

Let’s understand these in detail.

Train and Test Loss

A loss is a number indicating how bad the model's prediction was; the worse the prediction, the higher the loss. In general, it is helpful to split the dataset into three parts:

  • Training
  • Validation
  • Testing

The validation set is used to check the model's performance before testing it on a completely new dataset. It is important to calculate the loss during both training and validation.

If the training loss is very low, but the validation loss is very high, then your model is overfitting.

Another way of describing the same situation is in terms of bias and variance: an overfit model has low bias but high variance.
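As a rough illustration, here is a minimal sketch of comparing training and validation loss. It is not from the original article; the synthetic data, the degree-15 polynomial, and the scikit-learn usage are illustrative assumptions.

```python
# Minimal sketch: compare training and validation loss with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)

# Hold out part of the data for validation.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# A deliberately complex model, likely to memorize the training points.
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
model.fit(X_train, y_train)

train_loss = mean_squared_error(y_train, model.predict(X_train))
val_loss = mean_squared_error(y_val, model.predict(X_val))
print(f"train MSE: {train_loss:.3f}  validation MSE: {val_loss:.3f}")
# A validation loss much higher than the training loss signals overfitting.
```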

Prediction Graph

If you plot the data points together with the fitted curve, and the curve looks overly complex, fitting every data point perfectly, then your model is overfitting.
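Continuing the sketch above (matplotlib assumed; purely illustrative), you can plot the fitted curve against the training points and inspect its shape:

```python
# Plot the training points and the overly flexible fitted curve.
# Reuses X_train, y_train, np, and `model` from the previous sketch.
import matplotlib.pyplot as plt

grid = np.linspace(-3, 3, 300).reshape(-1, 1)

plt.scatter(X_train, y_train, label="training points")
plt.plot(grid, model.predict(grid), color="red", label="degree-15 fit")
plt.legend()
plt.title("A curve that chases every point is a sign of overfitting")
plt.show()
```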

Techniques to fix Overfitting

There are many techniques that you can follow to prevent the overfitting of your model.

Train with more data

This is not always possible, but if the model is complex while the amount of training data is comparatively small, it is better to increase the size of the training dataset.

Don’t train with highly complex models

If you are training a very complex model on relatively simple data, the chances of overfitting are very high. Hence, it is better to reduce the model's complexity to prevent overfitting.
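For example, here is a rough sketch (hypothetical models, reusing the data from the earlier sketch) of capping model capacity:

```python
# A shallow tree cannot memorize every training point the way an unconstrained one can.
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

complex_model = DecisionTreeRegressor(random_state=0)               # grows until it memorizes
simpler_model = DecisionTreeRegressor(max_depth=3, random_state=0)  # capacity is capped

for name, m in [("unconstrained", complex_model), ("max_depth=3", simpler_model)]:
    m.fit(X_train, y_train)
    print(name,
          "train MSE:", round(mean_squared_error(y_train, m.predict(X_train)), 3),
          "val MSE:", round(mean_squared_error(y_val, m.predict(X_val)), 3))
```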

Cross-validation

Cross-validation is a powerful method to prevent overfitting. The idea is to divide our training dataset into multiple mini train-test splits; each split is called a fold.

We divide the training set into k folds; the model is iteratively trained on k-1 folds, and the remaining fold is used as a test fold. Based on the model's performance on the test fold, we tune its hyperparameters.

This allows us to tune the hyperparameters using only our original training dataset, without any involvement of the original test dataset.
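As a minimal sketch (scikit-learn assumed; reusing the synthetic X, y and pipeline helpers from the first example), k-fold cross-validation can be used to pick a hyperparameter such as the polynomial degree:

```python
# 5-fold cross-validation to compare candidate hyperparameter values.
from sklearn.model_selection import KFold, cross_val_score

kf = KFold(n_splits=5, shuffle=True, random_state=0)

for degree in (1, 3, 15):
    candidate = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression())
    # scikit-learn reports losses as negated scores, hence the minus sign below.
    scores = cross_val_score(candidate, X, y, cv=kf, scoring="neg_mean_squared_error")
    print(f"degree {degree}: mean CV MSE = {-scores.mean():.3f}")
# Choose the value with the lowest cross-validated error; the test set stays untouched.
```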

You can learn more about cross-validation in our article on the topic.

Remove unnecessary features

If there are too many features, then the model tends to overfit. So, it is better to remove unnecessary features from the dataset before training.
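One simple way to do this (a rough sketch with a hypothetical feature matrix; a univariate filter such as SelectKBest is just one of several options) is shown below:

```python
# Keep only the k most informative features before training.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.RandomState(0)
X_many = rng.normal(size=(100, 20))   # 20 features, most of them pure noise
y_many = 2.0 * X_many[:, 0] - X_many[:, 1] + rng.normal(scale=0.1, size=100)

selector = SelectKBest(score_func=f_regression, k=5)  # keep the 5 best features
X_reduced = selector.fit_transform(X_many, y_many)
print("kept feature indices:", selector.get_support(indices=True))
```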

Regularization

As the word suggests, regularization is the process of regularizing (constraining) the model's parameters, which discourages learning an overly complex model.
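As a preview, here is a minimal sketch (reusing the degree-15 setup from the earlier example; Ridge and its alpha value are illustrative assumptions) of what a regularized version of that model could look like:

```python
# Same polynomial features, but Ridge penalizes large coefficients.
from sklearn.linear_model import Ridge

# alpha controls the penalty strength: larger alpha -> simpler, smoother fit.
regularized_model = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1.0))
regularized_model.fit(X_train, y_train)
print("validation MSE:", round(mean_squared_error(y_val, regularized_model.predict(X_val)), 3))
```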

In the next section, we will try to understand this process more clearly.

What is Regularization?

