Simplified neural network

Case contributed by Andrew Murphy
Diagnosis not applicable
Figure: simplified neural network (diagram)

Case Discussion

This is an example of a simplified neural network, with an input layer, two hidden layers and an output layer. There are far more complicated 'deep' neural networks with many more hidden layers.

Each link between the nodes (boxes) of each layer has an associated weight. In radiology, the features of an image are multiplied by these weights and passed through an activation function as they feed forward from one layer to the next, until the output is reached.
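A minimal sketch of such a feed-forward pass, using NumPy, is shown below; the layer sizes, weights and 'image-derived features' are hypothetical and purely illustrative, not taken from the diagram in this case.

```python
# Illustrative feed-forward pass: input layer -> two hidden layers -> output.
# Weights and feature values are randomly generated for demonstration only.
import numpy as np

def relu(x):
    # a common activation function: max(0, x)
    return np.maximum(0, x)

rng = np.random.default_rng(0)

features = rng.random(4)           # e.g. 4 image-derived features (hypothetical)
w1 = rng.random((4, 3))            # input layer -> hidden layer 1
w2 = rng.random((3, 3))            # hidden layer 1 -> hidden layer 2
w3 = rng.random((3, 1))            # hidden layer 2 -> output

h1 = relu(features @ w1)           # multiply by weights, then apply activation
h2 = relu(h1 @ w2)
output = 1 / (1 + np.exp(-(h2 @ w3)))  # sigmoid output, e.g. a 'fracture probability'
print(output)
```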

A key feature of neural networks is their ability to adjust these weights based on the error at the output; this is known as backpropagation.
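As a rough sketch of the idea, the snippet below performs one gradient-descent update on a single weight layer: the prediction error at the output is propagated back to nudge the weights in the direction that reduces that error. The inputs, target and learning rate are hypothetical.

```python
# Illustrative backpropagation step for a single sigmoid unit.
import numpy as np

x = np.array([0.2, 0.8, 0.5])      # input features (hypothetical)
w = np.array([0.1, 0.4, -0.3])     # current weights
target = 1.0                       # desired output
lr = 0.1                           # learning rate

pred = 1 / (1 + np.exp(-x @ w))    # sigmoid prediction
error = pred - target              # error at the output

# gradient of the squared error with respect to the weights (chain rule)
grad = error * pred * (1 - pred) * x
w -= lr * grad                     # adjust the weights to reduce the error
print(pred, w)
```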

In lay terms, one begins to learn how to identify fractures by learning features of an image. With experience, one learns that certain features (e.g. a break in the cortex) are more reliable than others (e.g. overall deformity). Over time, human readers adjust their own 'weights' as they advance.

This diagram is modified, with permission, from a diagram published 2 in the British Journal of Radiology, as per the 'Your rights as an author' section of the 'copyright, licences and permissions' page on the journal website.
