Mini-batch gradient descent

Last revised by Andrew Murphy on 23 Jul 2019

Mini-batch gradient descent is a technique that combines properties of batch gradient descent and stochastic gradient descent to balance the efficiency and accuracy of the gradient descent algorithm. In each iteration, a subset of the data set (a mini-batch) is used to compute the gradient and update the parameters. This batch could be as small as 2 examples or larger than 200. Compared with batch gradient descent it is significantly faster per update, and compared with stochastic gradient descent the examples within a batch can be vectorised and the computation parallelised, so it can outperform stochastic gradient descent in speed as well.
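The following is a minimal sketch of mini-batch gradient descent applied to a simple linear regression problem, assuming a synthetic data set, a fixed learning rate, and a batch size of 32; the variable names (e.g. batch_size, learning_rate) are illustrative choices rather than part of any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x + 2 with Gaussian noise
X = rng.uniform(-1.0, 1.0, size=(1000, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.1, size=1000)

# Parameters: weight w and bias b
w, b = 0.0, 0.0
learning_rate = 0.1
batch_size = 32          # typically between 2 and a few hundred examples
n_epochs = 20

for epoch in range(n_epochs):
    # Shuffle once per epoch so the batches differ between epochs
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx, 0], y[idx]

        # Predictions and gradients of the mean squared error on this batch
        preds = w * xb + b
        error = preds - yb
        grad_w = 2.0 * np.mean(error * xb)
        grad_b = 2.0 * np.mean(error)

        # One gradient descent step using only this mini-batch
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # should approach 3 and 2
```

Because each update uses a small batch rather than a single example, the per-batch operations above can be expressed as vectorised array computations, which is the source of the speed advantage over stochastic gradient descent mentioned in the text.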
