At the time the article was created Candace Makeda Moore had no recorded disclosures.
At the time the article was last revised Andrew Murphy had no recorded disclosures.
Overfitting is a problem in machine learning in which predictions or classifications are driven by noise and meaningless features rather than genuine signal. It tends to occur when the training data set is of insufficient size, or when it contains parameters or unrelated features that are non-randomly correlated with the feature of interest. For example, an algorithm trained to read chest x-rays may learn to associate the presence or absence of a side marker with the presence or absence of pathology 1.
Strictly speaking, overfitting refers to fitting a polynomial curve to data points with a polynomial more complex than the true underlying model, so that the curve captures noise rather than the trend. In the context of neural networks, classifications that are driven by irrelevant parameters in this way are referred to as examples of overfitting.
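The polynomial case above can be sketched numerically. In this illustration (the data and degrees are invented for demonstration), a degree-9 polynomial fit to ten noisy samples of a linear relationship drives the training error to essentially zero by fitting the noise, while a simple degree-1 fit generalizes better to new points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples from a simple linear relationship y = 2x + noise
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(scale=0.1, size=x.size)

# A degree-1 fit (matching the true model) vs a degree-9 fit,
# which has enough parameters to pass through every noisy point
simple = np.polynomial.Polynomial.fit(x, y, deg=1)
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)

# Training error: the complex model fits the noise almost perfectly
train_err_simple = np.mean((simple(x) - y) ** 2)
train_err_overfit = np.mean((overfit(x) - y) ** 2)

# Generalization error against the noiseless ground truth at new points
x_new = np.linspace(0.05, 0.95, 50)
y_new = 2 * x_new
test_err_simple = np.mean((simple(x_new) - y_new) ** 2)
test_err_overfit = np.mean((overfit(x_new) - y_new) ** 2)
```

The degree-9 curve "wins" on the training points precisely because it has memorized their noise, which is the behavior the term overfitting describes.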
When we analyze a machine learning model, overfitting can be identified by looking at the model's learning curves and observing high performance on the training set with significantly lower performance on the validation set. In essence, this means that the neural network memorizes the training samples instead of learning the underlying patterns, i.e. it struggles to generalize.
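The learning-curve check described above can be expressed as a small helper. The curves and the 0.1 gap threshold below are illustrative assumptions, not values from the article:

```python
def overfitting_gap(train_scores, val_scores, threshold=0.1):
    """Return the epochs at which training accuracy exceeds validation
    accuracy by more than `threshold` -- the classic learning-curve
    signature of overfitting."""
    return [
        epoch
        for epoch, (tr, va) in enumerate(zip(train_scores, val_scores))
        if tr - va > threshold
    ]

# Hypothetical learning curves: training accuracy keeps climbing while
# validation accuracy plateaus and then degrades
train_acc = [0.60, 0.75, 0.85, 0.93, 0.98]
val_acc   = [0.58, 0.72, 0.78, 0.77, 0.74]

flagged = overfitting_gap(train_acc, val_acc)
```

Here the widening train/validation gap in the last two epochs is exactly the "memorizing, not generalizing" pattern described above.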
There are many techniques that can be used to mitigate overfitting, such as:
- early stopping
- regularization (e.g. L1/L2 weight penalties)
- dropout
- data augmentation
- increasing the size of the training set
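Early stopping, the first technique listed, halts training once validation performance stops improving. A minimal sketch, assuming the caller supplies `train_step` and `val_loss` callables (both hypothetical names) and using an invented toy loss curve:

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=100, patience=5):
    """Generic early-stopping loop: stop when validation loss has not
    improved for `patience` consecutive epochs, and report the epoch
    with the best validation loss."""
    best_loss = float("inf")
    best_epoch = 0
    for epoch in range(max_epochs):
        train_step(epoch)          # one epoch of training
        loss = val_loss(epoch)     # evaluate on the held-out validation set
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break                  # no improvement for `patience` epochs
    return best_epoch, best_loss

# Toy validation-loss curve that bottoms out at epoch 4, then rises
# as the model begins to overfit
losses = [1.0, 0.6, 0.4, 0.35, 0.30, 0.33, 0.36, 0.40, 0.45, 0.50, 0.55]
best_epoch, best_loss = train_with_early_stopping(
    lambda e: None,        # no real training in this toy example
    lambda e: losses[e],
    max_epochs=len(losses),
)
```

Training stops a few epochs after the minimum rather than running to completion, and the weights (here, just the epoch index) from the best validation point are the ones kept.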
It should be noted that some degree of overfitting is common even with effective models; it can only be minimized, not eliminated entirely.
1. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLOS Medicine. 2018;15(11):e1002683. doi:10.1371/journal.pmed.1002683