Cost function (machine learning)
A cost function is a mechanism utilised in supervised machine learning that returns the error between a model's predicted outcomes and the actual outcomes. The aim of supervised machine learning is to minimise the overall cost, thus optimising the fit of the model to the system it is attempting to represent.
NB: the loss function is defined as the error for a single sample, whereas the cost function is the average loss across a number of samples in a given dataset.
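The distinction between loss and cost can be made concrete with a short sketch. The example below is illustrative only: it assumes squared error as the per-sample loss and uses made-up predicted and actual values; other losses (e.g. cross-entropy) aggregate into a cost in the same way.

```python
import numpy as np

def loss(y_pred, y_true):
    # Loss: the error for a single sample (here, squared error)
    return (y_pred - y_true) ** 2

def cost(y_pred, y_true):
    # Cost: the average of the per-sample losses across the dataset
    return np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)

# Hypothetical predictions and actual outcomes for four samples
y_pred = [2.5, 0.0, 2.1, 7.8]
y_true = [3.0, -0.5, 2.0, 7.5]

print(loss(y_pred[0], y_true[0]))  # loss for one sample: 0.25
print(cost(y_pred, y_true))        # cost across the dataset: 0.15
```

Training a supervised model then amounts to adjusting its parameters (typically via gradient descent) so that this cost is as small as possible.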
Related articles: Artificial intelligence
- artificial intelligence (AI)
- imaging data sets
- computer-aided diagnosis (CAD)
- natural language processing
- machine learning (overview)
- visualising and understanding neural networks
- common data preparation/preprocessing steps
- DICOM to bitmap conversion
- dimensionality reduction
- scaling
- centring
- normalisation
- principal component analysis
- training, testing and validation datasets
- augmentation
- loss function
- optimisation algorithms
- ADAM
- momentum (Nesterov)
- stochastic gradient descent
- mini-batch gradient descent
- regularisation
- linear and quadratic
- batch normalisation
- ensembling
- rule-based expert systems
- glossary
- activation function
- anomaly detection
- automation bias
- backpropagation
- batch size
- computer vision
- concept drift
- cost function
- confusion matrix
- convolution
- cross validation
- curse of dimensionality
- dice similarity coefficient
- dimensionality reduction
- epoch
- explainable artificial intelligence/XAI
- feature extraction
- federated learning
- gradient descent
- ground truth
- hyperparameters
- image dataset normalisation
- image registration
- imputation
- iteration
- jaccard index
- linear algebra
- noise reduction
- normalisation
- R (Programming language)
- radiomics quality score (RQS)
- Python (Programming language)
- segmentation
- semi-supervised learning
- synthetic and augmented data
- overfitting
- underfitting
- transfer learning