Validation split (machine learning)
At the time the article was created David John Wang had no recorded disclosures.
To ensure that a machine learning model generalizes well to data it has never seen, the original dataset is split into several subsets, typically a training set, a cross-validation set and a test set, so that the best possible predictive model can be obtained.
In machine learning, careful handling of the collected data is critical to producing algorithms that make accurate predictions. A predictive model is created by training it on a set of known examples, the training set.
A credible method is required to test the accuracy of the model after training. Evaluating the model on the same examples used for training is unlikely to give an accurate estimate of its predictive accuracy, because the model is biased towards the training set. The original dataset is therefore split to create a separate test set, which is usually used to select the algorithm with the best performance.
Selecting an algorithm based on the test set can itself introduce bias. Because the algorithm is chosen for its performance on that same test set, its score is not an accurate representation of accuracy on examples it has never seen (a test set is finite and does not cover the full variety of real examples), so the selected algorithm will tend to have an optimistic estimate of its generalization error. Consequently, the original dataset is further split to include a cross-validation set: the cross-validation set is used to select the best-performing algorithm, and the test set is used to estimate that algorithm's generalization error.
- training set: data points used to train the algorithm
- cross-validation set: data points used to select the best algorithm
- test set: data points used to estimate the selected algorithm's generalization error/accuracy
A typical split of the original dataset is 60% training, 20% cross-validation and 20% test sets.
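The workflow above can be sketched in pure Python. This is a minimal illustration, not a production recipe: the dataset, the two constant "candidate models" and the `split_dataset` helper are all hypothetical, chosen only to show a 60/20/20 split, selection on the cross-validation set, and a final error estimate on the test set.

```python
import random

def split_dataset(data, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle and split data into training, cross-validation and test sets
    (60% / 20% / 20% by default, matching the typical split above)."""
    data = list(data)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

# Hypothetical toy dataset of (x, y) pairs
data = [(x, 2 * x + 1) for x in range(100)]
train, val, test = split_dataset(data)

# Two deliberately simple candidate "algorithms": constant predictors
def mean_model(train_set):
    m = sum(y for _, y in train_set) / len(train_set)
    return lambda x: m           # always predicts the training mean

def zero_model(train_set):
    return lambda x: 0.0         # always predicts zero

def mse(model, points):
    """Mean squared error of a model on a set of (x, y) points."""
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

candidates = {"mean": mean_model(train), "zero": zero_model(train)}

# The cross-validation set selects the best-performing candidate...
best_name = min(candidates, key=lambda name: mse(candidates[name], val))

# ...and the held-out test set estimates its generalization error
test_error = mse(candidates[best_name], test)
```

Keeping the test set out of the selection step is the point of the exercise: `test_error` is computed only once, after `best_name` has been fixed, so it is not biased by the choice of algorithm.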