Optimization algorithms are widely used mathematical procedures that solve problems by maximizing or minimizing a function. They serve a variety of purposes, from patient scheduling to radiology.
In machine learning, optimization algorithms are used to minimize a function known as the loss function (or error function). By minimizing the loss, an optimization algorithm drives the difference between the actual and predicted outputs toward zero, making the model more accurate at its task.
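A minimal sketch of what "minimizing a loss function" means: fit a linear model y = w·x to data by repeatedly stepping the parameter w against the gradient of the mean-squared-error loss. The data, learning rate, and step count below are illustrative assumptions, not taken from the text.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]    # generated by the "true" rule y = 2x

def loss(w):
    # mean squared error between predicted and actual outputs
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w):
    # derivative of the loss with respect to w
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

w = 0.0                       # initial guess
for _ in range(200):
    w -= 0.1 * grad(w)        # gradient descent update

print(round(w, 3))            # converges close to the true value 2.0
```

As the loss shrinks, the predicted outputs w·x approach the actual outputs, which is exactly the "minimal or no difference" the optimizer is after.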
In a neural network, optimization is typically performed via backpropagation. The current error, or loss, is propagated backwards layer by layer, the gradient of the loss with respect to each weight is computed, and an optimization algorithm uses these gradients to update the weights. Put simply, learning means minimizing the loss function.
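The backward pass above can be sketched by hand for a tiny network with one hidden unit (input → sigmoid hidden layer → linear output). The network shape, weights, training pair, and learning rate are all illustrative assumptions chosen to keep the example self-contained.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 0.5, 0.8          # one training pair (assumed)
w1, w2 = 0.3, -0.2            # weights of the two layers
lr = 0.5                      # learning rate

for _ in range(500):
    # forward pass
    h = sigmoid(w1 * x)       # hidden activation
    y = w2 * h                # network output
    loss = 0.5 * (y - target) ** 2

    # backward pass: propagate the error back layer by layer
    dy = y - target                  # dLoss/dy
    dw2 = dy * h                     # dLoss/dw2
    dh = dy * w2                     # dLoss/dh
    dw1 = dh * h * (1 - h) * x       # dLoss/dw1 (via sigmoid derivative)

    # optimization step: update each weight against its gradient
    w2 -= lr * dw2
    w1 -= lr * dw1

print(loss)                   # shrinks toward 0 as training proceeds
```

Each iteration is one round of "propagate the loss backwards, compute gradients, learn": the chain rule supplies the gradients, and the gradient-descent step does the learning.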
Several types of optimization algorithms are used in neural networks, including gradient descent, stochastic gradient descent (SGD), SGD with momentum, RMSProp, and Adam.
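These algorithms differ mainly in their update rule. A hedged sketch comparing two of the simplest, plain gradient descent and gradient descent with momentum, on the one-dimensional loss f(w) = (w − 3)²; the learning rate, momentum coefficient, and iteration count are illustrative choices.

```python
def grad(w):
    return 2 * (w - 3)        # derivative of f(w) = (w - 3)^2

# plain gradient descent: step directly against the gradient
w_sgd = 0.0
for _ in range(300):
    w_sgd -= 0.1 * grad(w_sgd)

# gradient descent with momentum: accumulate a "velocity"
# that smooths and accelerates updates across iterations
w_mom, v = 0.0, 0.0
for _ in range(300):
    v = 0.9 * v - 0.1 * grad(w_mom)
    w_mom += v

print(round(w_sgd, 3), round(w_mom, 3))   # both approach the minimum at 3.0
```

Optimizers like RMSProp and Adam extend this idea further by also adapting the step size per parameter from a running history of gradients.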