Convolutional neural network


A convolutional neural network (CNN) is a particular implementation of a neural network used in deep learning that is specifically designed to process array data such as images, and is thus frequently used in machine learning applications aimed at medical imaging 1.

Architecture

A convolutional neural network typically consists of the following three components, although the architectural implementation varies considerably:

  1. input (image, volume or video)

  2. feature extraction

  3. classification and output 

Input

The most common input is an image, although considerable work has also been performed on so-called 3D convolutional neural networks that can process either volumetric data (3 spatial dimensions) or video (2 spatial dimensions + 1 temporal dimension).

In most implementations, the input needs to be processed to match the particulars of the CNN being used. This may include cropping or resizing the image, identifying a particular region of interest, and normalizing pixel values to a particular range.
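
A minimal sketch of such a preprocessing pipeline is shown below, using PyTorch's torchvision transforms; the crop size, target resolution and normalization statistics are arbitrary placeholders for illustration rather than values recommended by any particular study.

    import torchvision.transforms as T

    # Hypothetical preprocessing: crop to a central region of interest, resize to
    # the fixed resolution the CNN expects, convert to a tensor and normalize
    # pixel values to a standard range.
    preprocess = T.Compose([
        T.CenterCrop(224),                   # crude region-of-interest crop (placeholder size)
        T.Resize((128, 128)),                # reduce the image to the network's input resolution
        T.ToTensor(),                        # scale pixel values to [0, 1]
        T.Normalize(mean=[0.5], std=[0.5]),  # shift/scale to roughly [-1, 1] (placeholder statistics)
    ])

    # `image` would be a PIL image, e.g. a single greyscale slice; the result is
    # a 1 x 128 x 128 tensor ready to be fed to the network.
    # input_tensor = preprocess(image)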

Feature extraction

The feature extraction component of a convolutional neural network is what distinguishes CNNs from other multilayered neural networks. It typically comprises repeating sets of three sequential steps 1:

  • convolution layer

    • the input (image) is convolved with numerous kernels

    • each kernel results in a distinct feature map

  • pooling layer

    • each feature map is downsized to a smaller matrix by pooling the values in adjacent pixels

  • non-linear activation unit

    • the activation of each neuron is then computed by applying this non-linear function to the weighted sum of its inputs plus an additional bias term; this non-linearity is what gives the neural network the ability to approximate almost any function

    • a popular activation unit is the rectified linear unit (ReLU)

      • the convolution and pooling processes can result in some values in the feature maps being negative

      • the rectified linear unit sets all such negative values to zero

These three steps are then repeated many times, each convolution layer acting upon the pooled and rectified feature maps from the preceding layer. The result is progressively smaller feature maps whose activations depend on increasingly complex features, due to the cumulative effect of numerous prior convolutions.
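
A minimal sketch of one such repeating block, in the order described above, is given below using PyTorch; the number of kernels, kernel size and pooling window are arbitrary choices for illustration only.

    import torch
    import torch.nn as nn

    # One feature extraction block: convolution (several kernels, each producing
    # a feature map), pooling (downsizing each feature map) and a rectified
    # linear unit (setting negative values to zero). All sizes are placeholders.
    block = nn.Sequential(
        nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1),  # 16 kernels -> 16 feature maps
        nn.MaxPool2d(kernel_size=2),                                          # halve each spatial dimension
        nn.ReLU(),                                                            # zero out negative values
    )

    x = torch.randn(1, 1, 128, 128)  # one single-channel 128 x 128 image
    print(block(x).shape)            # torch.Size([1, 16, 64, 64])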

Classification and output

The final pooled and rectified feature maps are then used as the input to fully connected layers, which behave just as in a fully connected neural network and are thus discussed separately.
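
Continuing the sketch from the previous section (the feature map size of 16 × 64 × 64 and the two output classes are assumptions for illustration), the classification component might look like the following.

    import torch.nn as nn

    # Hypothetical classification head: flatten the final feature maps and map
    # them to class scores via fully connected layers (the hidden width and the
    # number of output classes are placeholders).
    classifier = nn.Sequential(
        nn.Flatten(),                  # 16 x 64 x 64 feature maps -> vector of length 65536
        nn.Linear(16 * 64 * 64, 128),  # first fully connected layer
        nn.ReLU(),
        nn.Linear(128, 2),             # output scores for 2 classes
    )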

Training

Most frequently, convolutional neural networks in radiology undergo supervised learning. During training, both the weights of the fully connected classification layers and the values of the convolutional kernels are modified by backpropagation 2.
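
A minimal sketch of one supervised training step is given below; the model, optimizer, learning rate and data are placeholders for illustration, not a prescription for any particular application.

    import torch
    import torch.nn as nn

    # Tiny stand-in model: one convolutional block followed by a fully connected layer.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.MaxPool2d(2),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 64 * 64, 2),
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(4, 1, 128, 128)   # stand-in for a labelled training batch
    labels = torch.tensor([0, 1, 0, 1])    # stand-in ground-truth labels

    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions with labels (supervised)
    loss.backward()                        # backpropagation through every layer
    optimizer.step()                       # updates both the convolutional kernels and the fully connected weights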
