Transfer learning

Transfer learning in artificial neural networks refers to taking knowledge acquired from training on one domain and applying it to learning a separate task.

For example, a neural network that has previously been trained to recognize pictures of animals may more effectively learn how to categorize pathology on a chest x-ray. In this example, the initial training of the network on animal image recognition is known as "pre-training", while training on the subsequent data set of chest x-rays is known as "fine-tuning". This technique is most useful when the pre-training data set is relatively large (e.g. 100,000 animal images) while the fine-tuning data set is relatively small (e.g. 200 chest x-rays).
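The two-phase workflow above can be sketched in code. This is a minimal toy illustration with a tiny two-layer NumPy network and synthetic data, not the article's actual imaging example: the network is first "pre-trained" on a large source task, then its hidden-layer weights are reused while a fresh output layer is "fine-tuned" on a small target task.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def train(X, y, W1, W2, lr=0.1, epochs=200):
    """Full-batch gradient descent on a squared-error loss."""
    for _ in range(epochs):
        h = relu(X @ W1)                                  # hidden features
        err = (h @ W2) - y                                # prediction error
        grad_W2 = h.T @ err / len(X)
        grad_W1 = X.T @ ((err @ W2.T) * (h > 0)) / len(X)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1
    return W1, W2

# "Pre-training": a large synthetic source task (stands in for animal images).
Xa = rng.normal(size=(1000, 8))
ya = (Xa[:, :4].sum(axis=1, keepdims=True) > 0).astype(float)
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
W1, W2 = train(Xa, ya, W1, W2)

# "Fine-tuning": a small, related target task (stands in for a few x-rays)
# reuses the pre-trained hidden weights W1 but learns a new output layer.
Xb = rng.normal(size=(50, 8))
yb = (Xb[:, :4].sum(axis=1, keepdims=True) > 0.5).astype(float)
W2_new = rng.normal(scale=0.1, size=(16, 1))
W1, W2_new = train(Xb, yb, W1, W2_new)
```

The key step is that `W1`, learned during pre-training, carries over into fine-tuning instead of being re-initialized at random.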
The initial layers in a neural network for most image recognition tasks are involved in recognizing simple features such as edges and curves. As such, a network that has been pre-trained on an unrelated image recognition task has already learned to see these lower-level features. A network already pre-trained on images of animals does not need to re-learn such features and is therefore able to train for the task of recognizing chest x-ray pathology with fewer training examples.
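Because those lower-level feature detectors are already learned, the pre-trained layers can even be frozen and used as a fixed feature extractor, with only a new output layer trained on the small data set. A minimal NumPy sketch, using random weights as a stand-in for genuinely pre-trained edge/curve detectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for first-layer weights learned during pre-training
# (in practice these would come from the pre-trained network).
W1 = rng.normal(scale=0.1, size=(8, 16))

# Small fine-tuning set (stands in for a few labeled x-rays).
X = rng.normal(size=(40, 8))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

# Freeze W1: compute the hidden features once and never update W1.
H = np.maximum(X @ W1, 0.0)

# Train only a new linear output layer on the frozen features.
W2 = np.zeros((16, 1))
lr = 0.5
for _ in range(500):
    err = H @ W2 - y
    W2 -= lr * H.T @ err / len(X)
```

With far fewer trainable parameters (only `W2`), the small data set goes further, which is the practical payoff described above.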


Article information

rID: 69192
