Underfitting in statistical and machine learning modeling is the counterpart of overfitting.
It occurs when a model is not complex enough to accurately capture the relationships between a dataset's features and the target variable, i.e. the model fails to learn the underlying patterns in the data. This can result from insufficient training time, too few input features, or excessive regularization.
Underfitting can be identified from the model's learning curves: error remains high on both the training and validation sets, sometimes with slightly better performance on the validation set than on the training set.
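The learning-curve signature described above can be reproduced with a toy sketch (the data and the constant-output model here are illustrative assumptions, not from the source): a model that is too simple leaves a large, similar error on both the training and validation splits.

```python
# Hypothetical diagnostic sketch: an underfit model shows high error on
# BOTH the training and validation sets (the two curves sit close together).
import random

random.seed(0)

def mse(model, data):
    """Mean squared error of a predictor over (x, y) pairs."""
    return sum((y - model(x)) ** 2 for x, y in data) / len(data)

# Data with a clear quadratic trend plus a little noise.
points = [(x, x * x + random.gauss(0, 0.5))
          for x in [i / 10 for i in range(-20, 21)]]
train, valid = points[::2], points[1::2]

# A constant-output model is too simple to capture the trend: it underfits.
mean_y = sum(y for _, y in train) / len(train)
constant_model = lambda x: mean_y

train_err = mse(constant_model, train)
valid_err = mse(constant_model, valid)
# Both errors are large and of comparable size: the signature of underfitting.
```

In practice the same comparison is read off plotted learning curves; here the two error values simply stand in for the final points of those curves.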
Some techniques to reduce the underfitting tendency of a machine learning model are to increase:
- size of the training dataset
- size or number of parameters in the model
- complexity of the model
- training time
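The effect of the second and third remedies (more parameters, more model complexity) can be sketched with an assumed minimal example: moving from a constant predictor to a least-squares line on linearly trending data drives the training error to (near) zero.

```python
# Sketch under assumed data: raising model capacity from a constant
# predictor to a least-squares line removes the underfit.
def mse(pred, data):
    """Mean squared error of a predictor over (x, y) pairs."""
    return sum((y - pred(x)) ** 2 for x, y in data) / len(data)

data = [(x, 3.0 * x + 1.0) for x in range(10)]  # noise-free linear trend

# Too simple: predict the mean of y regardless of x (underfits).
mean_y = sum(y for _, y in data) / len(data)
err_simple = mse(lambda x: mean_y, data)

# More parameters: ordinary least-squares slope and intercept.
n = len(data)
mx = sum(x for x, _ in data) / n
slope = (sum((x - mx) * (y - mean_y) for x, y in data)
         / sum((x - mx) ** 2 for x, _ in data))
intercept = mean_y - slope * mx
err_line = mse(lambda x: slope * x + intercept, data)
# err_line is (near) zero while err_simple is large:
# the added capacity resolved the underfit.
```

The same principle motivates adding layers or units to a neural network that cannot drive its training error down.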
Underfitting is just as bad for generalization of the model as overfitting.