Normalization (image processing)

{{Short description|Image processing}}
In [[image processing]], '''normalization''' is a process that changes the range of [[pixel]] intensity values. Applications include photographs with poor [[contrast (vision)|contrast]] due to glare, for example. Normalization is sometimes called contrast stretching or [[histogram]] stretching. In more general fields of data processing, such as [[digital signal processing]], it is referred to as [[dynamic range]] expansion.<ref>{{Cite book| page=85|
author=Rafael C. González, Richard Eugene Woods |
title=Digital Image Processing |
publisher=Prentice Hall |
year=2007 |
isbn=978-0-13-168728-8
}}</ref>
 
The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization. Often, the motivation is to achieve consistency in dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue. For example, a newspaper will strive to make all of the images in an issue share a similar range of [[grayscale]].
 
Normalization transforms an n-dimensional [[grayscale]] image
<math>I:\{\mathbb{X}\subseteq\mathbb{R}^n\}\rightarrow\{\text{Min},..,\text{Max}\}</math>
with intensity values in the range <math>(\text{Min},\text{Max})</math>, into a new image
<math>I_N:\{\mathbb{X}\subseteq\mathbb{R}^n\}\rightarrow\{\text{newMin},..,\text{newMax}\}</math>
with intensity values in the range <math>(\text{newMin},\text{newMax})</math>.
 
The [[linear]] normalization of a [[grayscale]] [[digital image]] is performed according to the formula
 
:<math>I_N=(I-\text{Min})\frac{\text{newMax}-\text{newMin}}{\text{Max}-\text{Min}}+\text{newMin}</math>

For example, if the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from the intensity of each pixel, making the range 0 to 130. Then each pixel intensity is multiplied by 255/130, making the range 0 to 255.
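As an illustration, the linear rescaling above might be written as follows for an image stored as a [[NumPy]] array; the function name and array layout are only assumptions for this sketch:

<syntaxhighlight lang="python">
import numpy as np

def normalize_linear(image, new_min=0.0, new_max=255.0):
    """Linearly rescale pixel intensities to the range [new_min, new_max]."""
    old_min, old_max = image.min(), image.max()
    # Shift so the old minimum becomes 0, then scale onto the new range.
    return (image - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

# Worked example from the text: intensities 50..180 stretched to 0..255.
img = np.array([[50, 115, 180]], dtype=float)
print(normalize_linear(img, 0, 255))  # [[  0.  127.5 255. ]]
</syntaxhighlight>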
 
Normalization might also be non-linear; this happens when the relationship between <math>I</math> and <math>I_N</math> is not [[linear]]. An example of non-linear normalization is when the normalization follows a [[sigmoid function]], in which case the normalized image is computed according to the formula
 
:<math>I_N=(\text{newMax}-\text{newMin})\frac{1}{1+e^{-\frac{I-\beta}{\alpha}}}+\text{newMin}</math>

Here, <math>\alpha</math> defines the width of the input intensity range and <math>\beta</math> defines the intensity around which the range is centered.
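A minimal sketch of this sigmoid mapping, again assuming a NumPy array and with illustrative choices of <math>\alpha</math> and <math>\beta</math>, might look like:

<syntaxhighlight lang="python">
import numpy as np

def normalize_sigmoid(image, alpha, beta, new_min=0.0, new_max=255.0):
    """Map intensities through a sigmoid centred at beta, with width set by alpha."""
    s = 1.0 / (1.0 + np.exp(-(image - beta) / alpha))
    return (new_max - new_min) * s + new_min

# Intensities near beta are spread apart; values far from beta are compressed.
img = np.array([40.0, 90.0, 140.0])
print(normalize_sigmoid(img, alpha=25.0, beta=90.0))
</syntaxhighlight>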
Auto-normalization in image processing software typically normalizes to the full dynamic range of the number system specified in the image file format.
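For instance, an auto-normalization step might pick the target range from the integer type requested for the output; the helper below is a sketch under that assumption, not any particular software's interface:

<syntaxhighlight lang="python">
import numpy as np

def auto_normalize(image, dtype=np.uint8):
    """Stretch an image to the full range representable by the target integer type."""
    info = np.iinfo(dtype)
    lo, hi = image.min(), image.max()
    stretched = (image - lo) * (info.max - info.min) / (hi - lo) + info.min
    return stretched.astype(dtype)

img = np.array([[50, 115, 180]], dtype=float)
print(auto_normalize(img))             # full 0..255 range of uint8
print(auto_normalize(img, np.uint16))  # full 0..65535 range of uint16
</syntaxhighlight>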
 
== Example ==
From a visual perspective, normalization is similar to moving and scaling an image. During normalization, an image is squeezed or stretched along each axis until it fits the normalization model. For a 2-D image, scaling can happen along the X axis, the Y axis, or both, depending on newMin and newMax. Normalization can therefore make patterns appear in a desired area at a normalized size.

In a line drawing, normalization may change the length and direction of a line with respect to the desired normalization range. In the example, the original line spans x = [0, 5], y = [1, 6]. After normalization, the line falls into the normalized range x = [0, 1], y = [0, 1].
[[File:Line drawing example.jpg|thumb|line example]]

Another example is a triangle drawing, where the original pattern spans x = [-10, 10], y = [0, 10]. After normalization, the triangle falls into the normalized range x = [0, 1], y = [0, 1].
[[File:Triangle pattern plot example.jpg|thumb|triangle pattern example]]

Another common use of normalization is changing the grayscale of a photograph by fitting the intensity values of its pixels into a normalized range. In the example, an original photo with an intensity range of [0, 255] is normalized to a range of [0, 127]. After normalization, the photo has much lower brightness but keeps its other characteristics.
[[File:Original photo normalization.jpg|thumb|original photo]] [[File:Photo after normalization.jpg|thumb|photo after normalization]]

== Contrast Stretching for Image Enhancement ==
Contrast stretching is one of the most widely used techniques of spatial-domain image enhancement.<ref>{{Cite web |title=Contrast Enhancement Techniques: A Brief and Concise Review |url=https://www.irjet.net/archives/V4/i7/IRJET-V4I7375.pdf}}</ref> The basic intent of the '''contrast enhancement technique''' is to adjust the local contrast in an image so as to bring out its regions or objects clearly. Low-contrast images often result from poor or non-uniform lighting conditions, a limited dynamic range of the [[imaging sensor]], or improper settings of the lens aperture.
[[File:Contrast_Stretching_Transformation_Functions.png|thumb|Contrast Stretching Transformation Functions]]
'''Contrast enhancement''' changes the intensities of the pixels in the input image to obtain an enhanced image. It encompasses a number of techniques addressing local, global, dark, and bright levels of contrast. Contrast can be understood as the amount of color or gray-level differentiation among the different features in an image; contrast enhancement improves image quality by increasing the luminance difference between foreground and background.

A '''Contrast Stretching Transformation''' can be achieved by the following steps (a code sketch follows the list):
[[File:Contrast_Stretching_Transformation_Graph_for_derivation.png|thumb|Contrast Stretching Transformation Graph reference for derivation]]
1. Stretching the dark range of input values into a wider range of output values: this increases the brightness of the darker areas in the image to enhance details and improve visibility.

2. Shifting the mid-range of input values: this adjusts the brightness levels of the mid-tones in the image to improve overall contrast and clarity.

3. Compressing the bright range of input values: this reduces the brightness of the brighter areas in the image to prevent overexposure, resulting in a more balanced and visually appealing image.
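One possible sketch of such a piecewise-linear stretching curve is given below; the breakpoints <code>r1, s1, r2, s2</code> are illustrative parameters rather than prescribed values:

<syntaxhighlight lang="python">
import numpy as np

def contrast_stretch_piecewise(image, r1, s1, r2, s2, max_level=255.0):
    """Piecewise-linear contrast stretching through the breakpoints (r1, s1) and (r2, s2).

    Inputs in [0, r1] are stretched onto [0, s1], the mid-range [r1, r2] is
    shifted onto [s1, s2], and the bright range [r2, max_level] is compressed
    onto [s2, max_level].
    """
    img = np.asarray(image, dtype=float)
    # np.interp evaluates the piecewise-linear curve defined by the breakpoints.
    return np.interp(img, [0.0, r1, r2, max_level], [0.0, s1, s2, max_level])

# Stretch dark values, shift mid-tones upward, compress highlights.
img = np.array([10, 60, 120, 200, 250], dtype=float)
print(contrast_stretch_piecewise(img, r1=70, s1=120, r2=180, s2=230))
</syntaxhighlight>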
 
== Local and Global Contrast Stretching ==
Local Contrast Stretching (LCS) is an image enhancement method that focuses on locally adjusting each pixel's value to improve the visualization of structures within an image, particularly in both the darkest and lightest portions. It operates by utilizing sliding windows, known as [[Kernel (image processing)|kernels]], which traverse the image. The central pixel within each kernel is adjusted using the following formula:
 
<math>I_p(x,y)= 255 \times \frac{I_0(x,y)-\min}{\max-\min}</math>

where:
* ''I<sub>p</sub>''(''x'',''y'') is the color level of the output pixel (''x'',''y'') after the contrast stretching process,
* ''I<sub>0</sub>''(''x'',''y'') is the color level of the input pixel (''x'',''y''),
* ''max'' is the maximum color level in the input image within the selected kernel, and
* ''min'' is the minimum color level in the input image within the selected kernel.<ref>{{Cite web |title=Comparison of Contrast Stretching methods of Image Enhancement Techniques for Acute Leukemia Images |url=https://www.ijert.org/research/comparison-of-contrast-stretching-methods-of-image-enhancement-techniques-for-acute-leukemia-images-IJERTV1IS6319.pdf}}</ref>
 
Local contrast stretching considers each color palette in the image (R, G, and B) separately, providing a set of minimum and maximum values for each palette.

Global contrast stretching, on the other hand, considers all color palette ranges at once to determine the maximum and minimum values for the entire RGB color image. This approach uses the combination of the RGB colors to derive a single maximum and minimum value for contrast stretching across the entire image.
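The distinction might be sketched as follows, here using SciPy's <code>minimum_filter</code> and <code>maximum_filter</code> to obtain per-kernel extrema; the kernel size and helper names are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def global_contrast_stretch(channel):
    """Stretch one channel using the global minimum and maximum of that channel."""
    lo, hi = channel.min(), channel.max()
    return 255.0 * (channel - lo) / (hi - lo)

def local_contrast_stretch(channel, kernel_size=15):
    """Stretch each pixel using the min/max found inside a sliding kernel around it."""
    lo = minimum_filter(channel, size=kernel_size)
    hi = maximum_filter(channel, size=kernel_size)
    # Guard against division by zero in flat regions of the image.
    return 255.0 * (channel - lo) / np.maximum(hi - lo, 1e-6)

# Local stretching treats each RGB channel separately, with its own kernel extrema;
# a global stretch can instead use one min/max taken over the whole RGB image.
rgb = np.random.randint(0, 256, size=(64, 64, 3)).astype(float)
local = np.dstack([local_contrast_stretch(rgb[..., c]) for c in range(3)])
lo, hi = rgb.min(), rgb.max()
global_stretched = 255.0 * (rgb - lo) / (hi - lo)
</syntaxhighlight>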
 
These contrast stretching techniques play a crucial role in enhancing the clarity and visibility of structures within images, particularly in scenarios with low contrast resulting from factors such as non-uniform lighting conditions or limited dynamic range.
 
== See also ==