Batch size (machine learning)
Batch size is a term used in machine learning that refers to the number of training examples used in one iteration, i.e. in a single gradient update of the model parameters. The batch size can be one of three options (illustrated in the sketch after this list):
- batch mode: the batch size is equal to the total dataset size, making one iteration equivalent to one epoch
- mini-batch mode: the batch size is greater than one but less than the total dataset size, usually a number that divides evenly into the total dataset size
- stochastic mode: the batch size is equal to one, so the gradient and the neural network parameters are updated after each individual sample
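A minimal sketch in Python may make the relationship between batch size, iterations and epochs concrete. The dataset size, array names and helper function below are hypothetical and chosen only for illustration; they are not part of any specific library.

```python
import numpy as np

n_samples = 1000                       # hypothetical total dataset size
X = np.random.rand(n_samples, 8)       # hypothetical features
y = np.random.rand(n_samples)          # hypothetical targets

def iterations_per_epoch(batch_size):
    # one iteration = one gradient update computed on one batch
    return int(np.ceil(n_samples / batch_size))

print(iterations_per_epoch(n_samples))  # batch mode: 1 iteration per epoch
print(iterations_per_epoch(100))        # mini-batch mode: 10 iterations per epoch
print(iterations_per_epoch(1))          # stochastic mode: 1000 iterations per epoch

# Iterating over mini-batches within one epoch
batch_size = 100
for start in range(0, n_samples, batch_size):
    X_batch = X[start:start + batch_size]
    y_batch = y[start:start + batch_size]
    # ...compute the gradient on this batch and update the parameters here...
```

In the mini-batch case above, each pass through the loop is one iteration, and completing all ten passes constitutes one epoch.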