At the time the article was created, Candace Makeda Moore had no recorded disclosures.
Image normalization is a process, often used in preparing datasets for artificial intelligence (AI), in which multiple images are brought into a common statistical distribution in terms of size and pixel values; a single image can also be normalized within itself. The process usually includes both spatial and intensity normalization, and in some cases forms part of the larger process of image dataset harmonization.
Spatial normalization means making all images have the same spatial relationship to what they depict. For example, two hands from the same person that are about the same size in reality, but appear in separate images, will be normalized to approximately the same size even if the original (un-normalized) images differ considerably. Spatial normalization includes, but is not limited to, scaling: it can also involve linear transformations (e.g. rotations) or even deformations, which are non-linear transformations, so that images can be compared in a similar position at similar sizes. The process of image registration implies spatial normalization of the registered images.
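The simplest form of spatial normalization described above, rescaling images to a common size, can be sketched as follows. This is a minimal illustration using nearest-neighbour sampling on NumPy arrays; the function name is illustrative, and real pipelines would typically use a dedicated library with interpolation and full affine or deformable registration.

```python
import numpy as np

def rescale_nearest(img, target_shape):
    """Rescale a 2D image to target_shape by nearest-neighbour sampling.

    For each output pixel, pick the source pixel at the proportionally
    corresponding position. A hypothetical helper for illustration only.
    """
    rows = (np.arange(target_shape[0]) * img.shape[0] / target_shape[0]).astype(int)
    cols = (np.arange(target_shape[1]) * img.shape[1] / target_shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

# Two "hands" captured at different resolutions, normalized to one size:
small = np.arange(16, dtype=float).reshape(4, 4)
big = rescale_nearest(small, (8, 8))   # upsample 4x4 -> 8x8
```

After this step every image in a dataset shares the same pixel grid, which is a prerequisite for stacking images into a single array for AI training.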
Normalizing the overall pixel values of multiple images into the same statistical distribution is called intensity normalization. Within a single image, artifacts can create inhomogeneity (especially in MRI intensities) that can be corrected with scan bias normalization, a type of intensity normalization.
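A common way to bring multiple images into the same intensity distribution is z-score normalization, shifting each image to zero mean and unit standard deviation. This is a minimal sketch assuming NumPy arrays; the function name is illustrative, and other schemes (min-max scaling, histogram matching, bias field correction) are used in practice.

```python
import numpy as np

def zscore_normalize(img, eps=1e-8):
    """Shift an image to zero mean and unit standard deviation.

    eps guards against division by zero for constant images.
    Illustrative helper, not from any specific library.
    """
    return (img - img.mean()) / (img.std() + eps)

# Three synthetic "scans" with very different brightness and contrast:
scans = [np.random.default_rng(s).normal(loc=s * 100.0, scale=s + 1.0, size=(64, 64))
         for s in range(3)]
normalized = [zscore_normalize(s) for s in scans]
```

After normalization, all three images share roughly the same intensity statistics, so a model trained on one scanner's output is less likely to be confused by another's brightness range.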