Image normalisation

Last revised by Candace Makeda Moore on 23 Feb 2024

Image normalisation is a process, often used when preparing data sets for artificial intelligence (AI), in which multiple images are brought into a common statistical distribution in terms of size and pixel values; a single image can also be normalised within itself. The process usually includes both spatial and intensity normalisation. In some cases normalisation forms part of the larger process of image dataset harmonisation.

Spatial normalisation means giving all the images the same spatial relationship to what they depict: for example, two hands from the same person that are roughly the same size in reality will be normalised to roughly the same size in the images, even if the original (non-normalised) images differ considerably. Spatial normalisation includes, but is not limited to, scaling; it can also involve linear transformations (e.g. rotations) or even deformations (non-linear transformations), so that images can be compared in a similar position and at a similar size. The process of image registration implies spatial normalisation of the registered images. A simple sketch of scaling-only spatial normalisation is shown below.
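
As an illustration, the sketch below shows a very simple form of spatial normalisation: rescaling a set of 2D greyscale images to one common matrix size with scikit-image. The target shape, the spatially_normalise helper and the placeholder images are assumptions for the example; registration to an atlas, rotation and non-linear deformation are not shown.

```python
# A minimal sketch: rescale a set of 2D greyscale images (NumPy arrays)
# to one common matrix size. Rotation, deformation and atlas registration
# are deliberately omitted; target_shape is an arbitrary example value.
import numpy as np
from skimage.transform import resize

def spatially_normalise(images, target_shape=(256, 256)):
    """Resample every image to the same pixel grid (scaling only)."""
    return [
        resize(img, target_shape, anti_aliasing=True, preserve_range=True)
        for img in images
    ]

# Example usage with two hand radiographs of different matrix sizes
# (random arrays stand in for real images here).
hand_a = np.random.rand(512, 480)
hand_b = np.random.rand(300, 280)
normalised = spatially_normalise([hand_a, hand_b])
assert all(img.shape == (256, 256) for img in normalised)
```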

Normalising the overall pixel values of multiple images into the same statistical distribution is called intensity normalisation. Within a single image, artifacts can create intensity inhomogeneity (particularly in MRI), which can be corrected with scan bias normalisation, a type of intensity normalisation.
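
As an illustration of intensity normalisation, the sketch below applies z-score scaling (zero mean, unit variance) to each image, with min-max scaling to [0, 1] shown as an alternative. The helper names and synthetic images are assumptions for the example; MRI bias field correction is a separate, more involved step not covered here.

```python
# A minimal sketch of intensity normalisation for 2D images stored as
# NumPy arrays: z-score scaling brings every image to mean 0 and
# standard deviation 1, so images acquired with different intensity
# ranges become statistically comparable.
import numpy as np

def zscore_normalise(img):
    """Shift and scale pixel intensities to zero mean, unit variance."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def minmax_normalise(img):
    """Rescale pixel intensities to the range [0, 1]."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

# Example: two images with very different intensity ranges end up
# with the same distribution after z-score normalisation.
scan_a = np.random.randint(0, 4096, size=(256, 256))  # e.g. 12-bit range
scan_b = np.random.randint(0, 256, size=(256, 256))   # e.g. 8-bit export
for scan in (scan_a, scan_b):
    z = zscore_normalise(scan)
    print(round(z.mean(), 3), round(z.std(), 3))       # approximately 0.0, 1.0
```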
