Last revised by Andrew Murphy on 16 Apr 2021

Underfitting in statistical and machine learning modeling is the counterpart of overfitting.

It occurs when a model is not complex enough to accurately capture the relationships between a dataset's features and the target variable, i.e. the model struggles to learn the patterns in the data. This can result from insufficient training time, too few input features, or too much regularization.
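A minimal sketch of what this looks like in practice, using NumPy polynomial fitting as a stand-in for a model family (the quadratic dataset and the degree choices here are illustrative assumptions, not from the original text): a straight line fitted to quadratic data cannot capture the feature–target relationship, so its error stays high no matter how well it is fitted.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = x**2 + rng.normal(0, 0.3, size=x.shape)  # quadratic target with noise

# A degree-1 polynomial (a straight line) is not complex enough for a
# quadratic relationship, so it underfits: large error on the data it
# was trained on.
underfit_coeffs = np.polyfit(x, y, deg=1)
underfit_mse = np.mean((y - np.polyval(underfit_coeffs, x)) ** 2)

# A degree-2 polynomial matches the true relationship, so its error
# drops to roughly the noise level.
good_coeffs = np.polyfit(x, y, deg=2)
good_mse = np.mean((y - np.polyval(good_coeffs, x)) ** 2)

print(f"linear model MSE:    {underfit_mse:.3f}")   # high: underfitting
print(f"quadratic model MSE: {good_mse:.3f}")       # near the noise variance
```

The key symptom is that the underfit model's error is high on its own training data, not just on unseen data.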

Underfitting can be identified from the model's learning curves: error remains high on the training set and plateaus early, with validation performance close to, or even better than, training performance.
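The train-versus-validation comparison above can be sketched as follows (again using NumPy polynomial fitting as a hypothetical model family; the split sizes and degrees are illustrative assumptions): for the underfit model, training error is high and validation error sits close to it, whereas a sufficiently complex model drives both down to the noise level.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 300)
y = x**2 + rng.normal(0, 0.3, size=x.shape)

# Simple train/validation split.
x_train, y_train = x[:200], y[:200]
x_val, y_val = x[200:], y[200:]

def mse(coeffs, xs, ys):
    return np.mean((ys - np.polyval(coeffs, xs)) ** 2)

results = {}
for deg in (1, 2):
    coeffs = np.polyfit(x_train, y_train, deg=deg)
    results[deg] = (mse(coeffs, x_train, y_train), mse(coeffs, x_val, y_val))
    print(f"degree {deg}: train MSE {results[deg][0]:.2f}, "
          f"val MSE {results[deg][1]:.2f}")

# Degree 1 underfits: training error is high, and validation error is
# comparable to it. Degree 2 fits: both errors fall to the noise level.
```

Contrast this with overfitting, where training error is low but validation error is much higher.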

Some techniques to reduce the underfitting tendency of a machine learning model are to increase:

  • size of the training dataset
  • size or number of parameters in the model
  • complexity of the model
  • training time
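One of the remedies above, increasing model size/complexity, can be sketched as follows (polynomial degree here is a hypothetical stand-in for the number of parameters, and the dataset is illustrative): as capacity grows, training error on a nonlinear target falls until the model is expressive enough.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 300)
y = np.sin(x) * x + rng.normal(0, 0.2, size=x.shape)  # nonlinear target

# Remedy for underfitting: grow model capacity (polynomial degree)
# until training error stops improving meaningfully.
mses = []
for deg in range(1, 7):
    coeffs = np.polyfit(x, y, deg=deg)
    train_mse = np.mean((y - np.polyval(coeffs, x)) ** 2)
    mses.append(train_mse)
    print(f"degree {deg}: training MSE {train_mse:.3f}")
```

In practice one would stop increasing capacity once validation error stops improving, to avoid crossing over into overfitting.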

Underfitting harms a model's ability to generalize just as much as overfitting does.
