A convolutional neural network (CNN) is a particular implementation of a neural network used in machine learning that is especially well suited to processing array data such as images, and is therefore frequently used in machine learning applications targeting medical images.
A convolutional neural network typically consists of the following three components, although the architectural implementation varies considerably 5-7:
- input (image, volume or video)
- feature extraction
- classification and output
The most common input is an image, although considerable work has also been performed on so-called 3D convolutional neural networks that can process either volumetric data (3 spatial dimensions) or video (2 spatial dimensions + 1 temporal dimension).
In most implementations, the input needs to be processed to match the particulars of the CNN being used. This may include cropping, reducing the size of the image, identifying a particular region of interest, and normalizing pixel values to a particular range.
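The preprocessing steps above can be sketched in NumPy. This is an illustrative example only, not a production pipeline: the target size, the nearest-neighbour downsampling, and the min-max normalization to [0, 1] are all assumptions chosen for simplicity.

```python
import numpy as np

def preprocess(image, size=64):
    """Centre-crop a 2D image to a square, downsample it, and
    normalize pixel values to the range [0, 1]."""
    h, w = image.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = image[top:top + s, left:left + s]
    # nearest-neighbour downsampling to (size, size)
    idx = np.linspace(0, s - 1, size).astype(int)
    small = crop[np.ix_(idx, idx)]
    # min-max normalization of pixel values to [0, 1]
    return (small - small.min()) / (small.max() - small.min() + 1e-8)
```

In practice the target size and normalization scheme are dictated by the CNN architecture being used (for example, many published networks expect fixed 224 × 224 inputs).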
The feature extraction component of a convolutional neural network is what distinguishes CNNs from other multilayered neural networks. It typically comprises repeating sets of three sequential steps:
- convolution layer
  - a set of learned kernels (filters) is convolved across the input, producing a feature map for each kernel
- pooling layer
  - each feature map is downsized to a smaller matrix by pooling the values of adjacent pixels
- non-linear activation unit
  - the activation of each neuron is computed by applying this non-linear function to the weighted sum of its inputs plus a bias term; this is what gives the neural network the ability to approximate almost any function
  - a popular activation unit is the rectified linear unit (ReLU)
    - the convolution and pooling processes result in some pixels of the matrix having negative values
    - the rectified linear unit sets all negative values to zero
These three steps are then repeated many times, each convolution layer acting upon the pooled and rectified feature maps from the preceding layer. The result is ever-smaller feature maps whose activations depend on increasingly complex features, due to the cumulative effect of the numerous prior convolutions.
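One convolution → pooling → activation block, as described above, can be sketched in plain NumPy. This is a minimal single-channel illustration with an arbitrary example kernel; real CNNs learn many kernels per layer and operate on multi-channel inputs.

```python
import numpy as np

def conv2d(x, kernel):
    """Slide a kernel over a 2D input and sum the element-wise
    products at each position (valid convolution, stride 1)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, s=2):
    """Downsize a feature map by taking the maximum of each
    non-overlapping s-by-s block of adjacent pixels."""
    h2, w2 = x.shape[0] // s, x.shape[1] // s
    return x[:h2 * s, :w2 * s].reshape(h2, s, w2, s).max(axis=(1, 3))

def relu(x):
    """Rectified linear unit: set all negative values to zero."""
    return np.maximum(x, 0)

# one feature-extraction block: convolution, then pooling, then ReLU
image = np.random.default_rng(0).standard_normal((8, 8))
kernel = np.ones((3, 3)) / 9.0          # example averaging kernel
feature_map = relu(max_pool(conv2d(image, kernel)))
```

Stacking such blocks, with the output of one feeding the next, yields the repeated structure described above.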
Classification and output
The final pooled and rectified feature maps are then flattened and used as the input of fully connected layers, identical to those in a fully connected neural network and thus discussed separately.
Most frequently, convolutional neural networks in radiology undergo supervised learning. During training, both the weighting factors of the fully connected classification layers and the convolutional kernels are modified by backpropagation.
References
- 1. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 521 (7553): 436-44. doi:10.1038/nature14539 - Pubmed
- 2. Chen H, Wang XH, Ma DQ, Ma BR. Neural network-based computer-aided diagnosis in distinguishing malignant from benign solitary pulmonary nodules by computed tomography. Chinese Medical Journal. 120 (14): 1211-5. Pubmed
- 3. Cicero M, Bilbily A, Colak E, Dowdell T, Gray B, Perampaladas K, Barfett J. Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs. Investigative Radiology. 52 (5): 281-287. doi:10.1097/RLI.0000000000000341 - Pubmed
- 4. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 284 (2): 574-582. doi:10.1148/radiol.2017162326 - Pubmed
Related Radiopaedia articles
- artificial intelligence (AI)
- imaging data sets
- computer-aided diagnosis (CAD)
- machine learning (overview)
- common data preparation/preprocessing steps
- DICOM to bitmap conversion
- principal component analysis
- training, testing and validation datasets
- mean squared error
- cross entropy
- optimization algorithms
- stochastic gradient descent
- momentum (Nesterov)
- linear and quadratic
- batch normalization
- natural language processing
- rule-based expert systems