Validation split (machine learning)
To ensure that machine learning models generalize well to new data not previously seen by the model, it is important to split the original data set into several subsets, including a training set, a cross-validation set and a test set, in order to obtain the best possible predictive model.
Training set
When conducting machine learning, careful data collection is critical for producing algorithms that make accurate predictions. A predictive model is created by training an algorithm on a training set of known examples.
Test set
A credible method is required to assess the accuracy of the model after training. Evaluating the model on the same examples used for training is unlikely to give an accurate representation of its predictive accuracy, as the model is biased towards the training set. The original data set is therefore usually split so that part of it is held back as a test set. The test set is often used to select the algorithm with the best performance.
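As a concrete illustration, a minimal sketch of holding out a test set is shown below. It assumes scikit-learn and NumPy are available; the feature matrix X, labels y and logistic regression model are placeholders rather than part of this article.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder data standing in for a real data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold back 20% of the examples as a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression().fit(X_train, y_train)

# Accuracy on the training set tends to be optimistic;
# the held-out test set gives a fairer estimate.
print("training accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))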
Cross-validation set
Selecting an algorithm based on the test set could introduce further bias. Because the algorithm is chosen for its best performance on that same test set, its measured performance is not an accurate representation of how well it generalizes to examples it has never seen (a test set is finite and does not necessarily cover the wide variety of real examples), and the resulting estimate of the generalization error will likely be optimistic. Consequently, the original data set is further split to include a cross-validation set. The cross-validation set is used to select the best-performing algorithm, and the test set is then used to estimate the generalization error of that algorithm.
- training set: data points used to train the algorithm
- cross-validation set: data points used to select the best algorithm
- test set: data points used to test the selected algorithm and estimate its generalization error/accuracy
A typical split of the original data set is 60% training, 20% cross-validation and 20% test set.
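The sketch below illustrates such a 60/20/20 split. Again, scikit-learn is assumed; the X and y arrays are placeholders, and the k-nearest-neighbours models simply stand in for any set of candidate algorithms.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Placeholder data standing in for a real data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# First split: 60% training, 40% held out.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4, random_state=42)
# Second split: divide the held-out 40% equally into a cross-validation set and a test set (20% each).
X_cv, X_test, y_cv, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=42)

# Select the best candidate (here, the number of neighbours k) using the cross-validation set.
results = {}
for k in (1, 3, 5, 7):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    results[k] = (accuracy_score(y_cv, model.predict(X_cv)), model)
best_k = max(results, key=lambda k: results[k][0])
best_model = results[best_k][1]

# Estimate the generalization accuracy of the chosen model on the untouched test set.
print("selected k:", best_k)
print("estimated generalization accuracy:", accuracy_score(y_test, best_model.predict(X_test)))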
Related Radiopaedia articles
Artificial intelligence
- artificial intelligence (AI)
- imaging data sets
- computer-aided diagnosis (CAD)
- natural language processing
- machine learning (overview)
- visualizing and understanding neural networks
- common data preparation/preprocessing steps
- DICOM to bitmap conversion
- dimensionality reduction
- scaling
- centering
- normalization
- principal component analysis
- training, testing and validation datasets
- augmentation
- loss function
- optimization algorithms
- ADAM
- momentum (Nesterov)
- stochastic gradient descent
- mini-batch gradient descent
- regularisation
- linear and quadratic
- batch normalization
- ensembling
- rule-based expert systems
- glossary
- activation function
- anomaly detection
- automation bias
- backpropagation
- batch size
- computer vision
- concept drift
- cost function
- confusion matrix
- convolution
- cross validation
- curse of dimensionality
- dice similarity coefficient
- dimensionality reduction
- epoch
- explainable artificial intelligence/XAI
- feature extraction
- federated learning
- gradient descent
- ground truth
- hyperparameters
- image registration
- imputation
- iteration
- jaccard index
- linear algebra
- noise reduction
- normalization
- R (Programming language)
- Python (Programming language)
- segmentation
- semi-supervised learning
- synthetic and augmented data
- overfitting
- transfer learning