Recurrent neural network

Case contributed by Andrew Murphy
Diagnosis not applicable
Diagram

Case Discussion

This is a diagram of a simplified recurrent neural network, with an input layer, hidden layers and an output layer, plus a recurrent loop within the hidden layers. The squares in the hidden layers represent neurons (nodes) and the lines represent weighted connections. The blue lines are weighted connections that could be seen in a variety of neural network architectures; the yellow line, however, represents a recurrent weighted connection feeding back to a neuron in the previous hidden layer. Thus, within the hidden layers, information from a previous step is merged with the current input.
Weighted connections that feed back to previous layers are what allow a recurrent network to take context into account. For example, if you input the letter ‘r’, followed by the letter ‘a’, followed by the letter ‘d’, the network will process each input contextualised by the last; over time, an RNN may conclude that r-a-d, in some cases, means "radiopaedia".
Many recurrent neural networks also allow for weighted connections between neurons in the same hidden layer.
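The hidden-state update described above can be sketched in a few lines of code. This is a minimal illustrative example only, not taken from the case: the weight matrices are random, the hidden size is an arbitrary choice, and the toy alphabet ‘r’, ‘a’, ‘d’ mirrors the example in the discussion. The key line is the one where the previous hidden state is merged with the current input.

```python
import numpy as np

# Minimal sketch of a recurrent hidden-state update (illustrative only;
# the weights, sizes and alphabet are assumptions, not from the case).
rng = np.random.default_rng(0)

vocab = ['r', 'a', 'd']   # toy input alphabet
hidden_size = 4

W_xh = rng.standard_normal((hidden_size, len(vocab)))   # input -> hidden weights
W_hh = rng.standard_normal((hidden_size, hidden_size))  # hidden -> hidden (recurrent) weights

def one_hot(ch):
    """Encode a character as a one-hot vector over the toy alphabet."""
    v = np.zeros(len(vocab))
    v[vocab.index(ch)] = 1.0
    return v

h = np.zeros(hidden_size)  # hidden state starts empty
for ch in 'rad':
    # the current input is merged with the previous hidden state,
    # so each letter is contextualised by the ones that came before it
    h = np.tanh(W_xh @ one_hot(ch) + W_hh @ h)

# h now summarises the whole sequence 'r-a-d' in a single vector
print(h.shape)
```

Each pass through the loop corresponds to one yellow recurrent connection in the diagram: the hidden state computed for the previous letter is fed back in alongside the new input.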
