Recurrent neural network

Recurrent neural networks (RNNs) are a type of neural network that recognizes patterns in sequential data by maintaining a contextual memory of previous inputs. They have been applied to many kinds of sequential data, including text, speech, video, music, genetic sequences and even clinical events 1.

RNNs can be contrasted with simple feed-forward neural networks. Once trained, a feed-forward network treats every input independently: presenting it with a series of inputs does not change how it responds to any one of them. Recurrent neural networks instead contain 'loops' (self-loops, feedback loops and backward connections) that carry information from earlier inputs forward, so that each new input is interpreted in the context of what came before.
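
As a minimal sketch of this distinction (not taken from the article; the weights, sizes and function names below are purely illustrative), a feed-forward layer maps each input on its own, whereas a vanilla recurrent cell feeds its hidden state back in at every step:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

# Feed-forward layer: each input is mapped independently, no state is kept.
W_ff = rng.normal(size=(hidden_size, input_size))

def feed_forward(x):
    return np.tanh(W_ff @ x)  # output depends only on the current input

# Vanilla recurrent cell: the hidden state h is fed back in, so earlier
# inputs influence how later ones are interpreted (the network's 'memory').
W_xh = rng.normal(size=(hidden_size, input_size))
W_hh = rng.normal(size=(hidden_size, hidden_size))

def rnn_step(x, h):
    return np.tanh(W_xh @ x + W_hh @ h)

sequence = [rng.normal(size=input_size) for _ in range(5)]
h = np.zeros(hidden_size)
for x in sequence:
    h = rnn_step(x, h)  # h now summarizes everything seen so far
```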

Recurrent neural networks in radiology  

The recurrent network's ability to 'remember' previous inputs is known as its 'memory'. This memory is a powerful tool when assessing data that requires context.

Certain types of imaging are sequential in nature, such as ultrasound video. In theory, even individual 2D images can be treated as sequential data if they are divided into a sequence of smaller sub-images. However, at present, RNNs are more widely used in areas of radiology related to language. Their most common use in many radiology practices is in the speech recognition software that radiologists use to dictate and transcribe reports. RNNs have also improved the annotation of disease from electronic health record radiology reports 2 and have shown potential to generate text reports to accompany abnormality detection algorithms 3.
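
The idea of turning a single 2D image into a sequence can be sketched as follows; this is an assumed illustration only, with arbitrary strip size, image size and weights, not a description of any particular radiology system:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(size=(32, 32))      # stand-in for a small 2D image
rows_per_strip = 4

# Slice the image into horizontal strips and flatten each strip into a
# vector, producing a sequence a recurrent cell can process step by step.
strips = [image[i:i + rows_per_strip].ravel()
          for i in range(0, image.shape[0], rows_per_strip)]

input_size, hidden_size = strips[0].size, 16
W_xh = rng.normal(size=(hidden_size, input_size))
W_hh = rng.normal(size=(hidden_size, hidden_size))

h = np.zeros(hidden_size)
for x in strips:
    h = np.tanh(W_xh @ x + W_hh @ h)   # h accumulates context across strips
```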
