At the time the article was created, Edward Chmiel had no recorded disclosures.
At the time the article was last revised, Dimitrios Toumpanakis had no recorded disclosures.
Transfer learning in artificial neural networks is the practice of taking knowledge acquired from training on one particular domain and applying it to learning a separate task.
In recent years, a well-established paradigm has been to pre-train models on large-scale data (e.g. ImageNet) and then fine-tune them on target tasks that often have less training data 3. For example, a neural network that has previously been trained to recognize pictures of animals may more effectively learn to categorize pathology on a chest x-ray. In this example, the initial training of the network on animal image recognition is known as “pre-training”, while training on the subsequent dataset of chest x-rays is known as “fine-tuning”. This approach is most useful when the pre-training dataset is relatively large (e.g. 100,000 animal images) while the fine-tuning dataset is relatively small (e.g. 200 chest x-rays).
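The pre-train then fine-tune sequence can be illustrated with a minimal, self-contained sketch. This is plain NumPy with synthetic data; the two linear “tasks” below merely stand in for the animal-image and chest x-ray datasets, and every name and number is illustrative rather than from any real pipeline. A simple logistic-regression model is first trained on a large source task, and its learned weights are then used as the starting point for a few steps of training on a small, related target task:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.1, steps=200):
    # plain gradient descent on the logistic (cross-entropy) loss
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# "Pre-training": a large synthetic source task (stands in for the
# 100,000 animal images in the example above)
Xa = rng.normal(size=(5000, 2))
ya = (Xa[:, 0] + Xa[:, 1] > 0).astype(float)
w_pre = train(np.zeros(2), Xa, ya)

# "Fine-tuning": a small, related target task (stands in for the
# 200 chest x-rays) -- training continues from the pre-trained weights
Xb = rng.normal(size=(50, 2))
yb = (1.2 * Xb[:, 0] + 0.8 * Xb[:, 1] > 0).astype(float)
w_ft = train(w_pre.copy(), Xb, yb, steps=50)
```

The key step is that `train` is called the second time with `w_pre` rather than fresh random or zero weights, so the small target dataset only has to adjust an already-useful model instead of building one from scratch.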
The most popular dataset used for pre-training is ImageNet 5, a very large dataset containing more than 14 million annotated images 4.
In most image recognition tasks, the initial layers of a neural network are involved in recognizing simple features such as edges and curves. As such, a network that has been pre-trained on an unrelated image recognition task has already learned to see these lower-level features. A network already pre-trained on images of animals does not need to re-learn such features and is, therefore, able to train for the task of recognizing chest x-ray pathology with fewer training examples.
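The idea of reusing early layers unchanged can be sketched as a “frozen feature extractor”: a first layer whose weights are fixed (standing in for the pre-trained edge and curve detectors), with only a small final layer trained on the target data. Again, this is an illustrative NumPy toy, not a real model; the fixed random layer here merely plays the role of pre-trained features:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(p, y):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Frozen "pre-trained" first layer: fixed weights standing in for the
# generic low-level feature detectors learned during pre-training
W1 = rng.normal(size=(2, 32))

def features(X):
    return np.maximum(X @ W1, 0.0)  # ReLU features; W1 is never updated

# Tiny target dataset standing in for the small chest x-ray set
X = rng.normal(size=(40, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)  # nonlinear label rule

H = features(X)    # inputs pass through the frozen layer
w2 = np.zeros(32)  # only this final layer is trained
loss_before = log_loss(sigmoid(H @ w2), y)
for _ in range(500):
    p = sigmoid(H @ w2)
    w2 -= 0.05 * H.T @ (p - y) / len(y)
loss_after = log_loss(sigmoid(H @ w2), y)
```

Because only the 32 weights of the final layer are updated, far fewer labeled target examples are needed than if the whole network were trained from scratch, which is the practical benefit described above.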