Image enhancement method with simultaneous noise reduction, non-uniformity equalization, and contrast enhancement
Abstract
A technique is disclosed for enhancing discrete pixel images. Image enhancement is first performed by separating structural regions from non-structural regions and performing separate smoothing and sharpening functions on the two regions. Non-uniform equalization is then performed to reduce differences between high and low intensity values, while maintaining the overall appearance of light and dark regions of the reconstructed image. Contrast enhancement is then performed on the equalized values to bring out details by enhancing local contrast.
20 Claims
1. A method for enhancing and correcting a digital image made up of a pixel array, the method comprising the steps of:
a) acquiring pixel data which defines a digital image of internal features of a physical subject;
b) enhancement filtering the digital image including reducing noise, resulting in a first filtered image;
c) correcting for intensity non-uniformities in the first filtered image, resulting in an enhanced, corrected digital image; and
d) enhancing the local contrast between intensity values, resulting in a final digital image. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18)
2. The method as claimed in claim 1, where the step of enhancement filtering the digital image comprises the steps of:

a) normalizing image data to make the subsequent filtering independent of the intensity range of the data, resulting in a normalized image;
b) smoothing the image data, resulting in a smoothed image;
c) identifying structural features from the smoothed image data, resulting in a structure mask showing the structural regions and non-structural regions;
d) orientation smoothing the structural regions based on the structure mask;
e) homogenization smoothing nonstructural regions in order to blend features of the non-structural regions into an environment surrounding the structural regions;
f) orientation sharpening the structural regions, resulting in a filtered image;
g) re-normalizing the filtered image, resulting in a re-normalized image; and
h) blending texture from the original image data into the data processed in the preceding steps, resulting in a first filtered image.
3. The method as claimed in claim 2, where the step of normalizing the image data comprises the steps of:
a) determining the maximum and minimum intensity values of the original image data;
b) determining a scale factor based on the precision level of the image data and maximum intensity of the image data;
c) scaling the image using the relation I = (I − MIN_ORIGINAL)*scale, where I represents the data intensity, MIN_ORIGINAL represents the minimum intensity value of the original image data, and scale represents the scaling factor as determined in step b) above;
d) saving the scaled image in a memory circuit; and
e) computing the average intensity value of the image data.
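Claims 3 and 4 together amount to a linear rescale of the pixel data. A minimal numpy sketch, where the `precision_level` parameter is an illustrative stand-in for the precision level of step a) of claim 4 (e.g. 4095 for 12-bit data):

```python
import numpy as np

def normalize_image(img, precision_level=4095.0):
    # step a): extremes of the original image data
    min_orig, max_orig = float(img.min()), float(img.max())
    # claim 4: scale = precision level / maximum original intensity
    scale = precision_level / max_orig
    # step c): I = (I - MIN_ORIGINAL) * scale
    normalized = (img - min_orig) * scale
    # step e): average intensity of the normalized data
    return normalized, min_orig, scale, normalized.mean()
```

With `precision_level=100`, an image spanning intensities 10 to 50 maps onto 0 to 80, since the scale factor is 100/50 = 2.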
4. The method as claimed in claim 3, where the step of determining a scaling factor comprises the steps of:
a) determining the precision level of the image data; and
b) dividing the precision level of step a) by the maximum intensity of the original image data to obtain a scaling factor.
5. The method as claimed in claim 2, where the step of smoothing the image data comprises the use of a boxcar smoothing method where the length of the separable kernel is 3.
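The length-3 separable boxcar of claim 5 can be sketched as two one-dimensional averaging passes; replicating edge pixels is an assumption the claim does not specify:

```python
import numpy as np

def boxcar3(img):
    """Separable 3-tap boxcar smoothing (claim 5): average each pixel
    with its horizontal neighbors, then with its vertical neighbors."""
    p = np.pad(img, 1, mode="edge")
    rows = (p[:, :-2] + p[:, 1:-1] + p[:, 2:]) / 3.0   # horizontal pass
    return (rows[:-2] + rows[1:-1] + rows[2:]) / 3.0   # vertical pass
```

A constant image passes through unchanged, while an isolated bright pixel is spread over its 3x3 neighborhood.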
6. The method as claimed in claim 2, where the step of identifying the structural features of the image data and preparing a structure mask comprises the steps of:
a) computing an edge strength threshold;
b) scaling the edge strength threshold by multiplying it by a number selected by a user;
c) creating a binary mask image such that the pixels are set to equal 1 if the corresponding pixel in the gradient image is greater than the scaled edge strength threshold and the pixels are set to equal 0 if the corresponding pixels are less than or equal to the scaled edge strength threshold;
d) eliminating small segments in the binary mask image using a connectivity approach, resulting in a mask image that includes significant high gradient regions but is devoid of small islands of high gradient regions;
e) modifying the mask image resulting from step (d) above by changing from 0 to 1 the value of any pixel in the mask image whose corresponding pixel in the gradient image is above an intermediate threshold, which is some percentage of the scaled edge strength threshold of step (b) above;
f) obtaining an intermediate mask image by changing from 1 to 0 the value of any pixel in the modified mask image when the number of pixels in the neighborhood immediately surrounding that pixel whose values are 1 falls below a threshold number; and
g) obtaining a final mask image by changing from 0 to 1 the value of any pixel in the intermediate mask image when the number of pixels in the neighborhood immediately surrounding that pixel whose values are 1 exceeds a threshold number.
7. The method as claimed in claim 6, where the step of computing an edge strength threshold comprises the steps of:
a) computing the magnitude and direction of the gradient of every pixel in the image;
b) determining an initial gradient threshold using a gradient histogram;
c) for each pixel whose gradient magnitude is greater than the initial gradient threshold, counting the number of pixels in the neighborhood surrounding the selected pixel whose gradient magnitudes are above the initial gradient threshold and whose gradient directions do not differ from the gradient direction of the selected pixel by more than a predetermined angle;
d) labeling each pixel as a relevant edge pixel when the number of neighborhood pixels surrounding the pixel counted according to the criteria in step (c) above exceeds a predetermined number;
e) eliminating isolated small segments of relevant edge pixels using a connectivity approach; and
f) computing an edge strength threshold by determining the gradient on a gradient histogram above which there are the number of gradient counts equal to the number of edge pixels as determined by step (e) above.
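A loose numpy sketch of claim 7's threshold selection follows. The direction-consistency counting and connectivity pruning of steps (c) through (e) are collapsed here into a single `keep_fraction` of "relevant" edge pixels, an illustrative simplification; only the histogram-based selection of step (f) is shown faithfully:

```python
import numpy as np

def edge_strength_threshold(img, keep_fraction=0.2):
    # step a): gradient magnitude at every pixel
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # stand-in for steps (c)-(e): assume a fixed fraction of relevant edge pixels
    n_edge = int(keep_fraction * mag.size)
    # step f): the gradient value above which n_edge gradient counts remain
    return np.sort(mag.ravel())[-n_edge] if n_edge else float(mag.max())
```

For a pure horizontal ramp every pixel has unit gradient magnitude, so any fraction yields a threshold of 1.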
8. The method as claimed in claim 7, where the step of eliminating small isolated segments of relevant edge pixels comprises the steps of:
a) obtaining a binary image by setting to the value 1 each pixel in the binary image when the gradient of its corresponding pixel in the gradient image is above a predetermined threshold;
b) assigning an index label to each pixel in the binary image by scanning in a line-by-line basis and incrementing the label index each time a pixel is labeled;
c) merging connected pixels in the binary image by scanning the binary image from top to bottom and bottom to top for a selected number of iterations and replacing the current index label of each pixel scanned with the lowest index value in the neighborhood of the scanned pixel;
d) obtaining a histogram of the index labels; and
e) setting to 0 the value in the binary image of a pixel when the number of pixels in the histogram obtained in step (d) above for each index label falls below a predetermined threshold.
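The connectivity approach of claim 8 can be sketched with unique per-pixel labels merged by repeated minimum-propagation sweeps; `n_iter` is an illustrative stand-in for the claim's "selected number of iterations":

```python
import numpy as np

def remove_small_segments(mask, min_size, n_iter=10):
    """Sketch of claim 8: label each foreground pixel (step b), merge
    connected labels by min-propagation sweeps (step c), histogram the
    labels (step d), and zero components smaller than min_size (step e)."""
    labels = np.zeros(mask.shape, dtype=float)
    idx = np.arange(1.0, mask.size + 1).reshape(mask.shape)
    fg = mask > 0
    labels[fg] = idx[fg]                      # step b): unique index labels
    for _ in range(n_iter):                   # step c): merge connected pixels
        p = np.pad(labels, 1, constant_values=np.inf)
        p[p == 0] = np.inf                    # background must not propagate
        neigh = np.minimum.reduce([p[1:-1, 1:-1], p[:-2, 1:-1],
                                   p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        labels[fg] = neigh[fg]
    out = mask.copy()
    vals, counts = np.unique(labels[fg], return_counts=True)  # step d)
    out[np.isin(labels, vals[counts < min_size])] = 0         # step e)
    return out
```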
9. The method as claimed in claim 2, where the step of orientation smoothing the structural regions comprises the steps of:
a) determining the dominant orientation of a structural region;
b) making a consistency decision using the dominant orientation and its orthogonal orientation; and
c) smoothing along the dominant direction in that region when a consistency decision is reached.
10. The method as claimed in claim 9, where the step of determining the dominant orientation of a structural region comprises the steps of:
a) scanning the structural region and obtaining a local orientation map by assigning an orientation number to each pixel in the structural region;
b) re-scanning the structural region and counting the number of different orientations in the neighborhood surrounding each pixel in a structural region;
c) labeling as the dominant orientation the orientation getting the maximum number of counts in the selected neighborhood.
11. The method as claimed in claim 9 where the step of making a consistency decision is satisfied if any of the following criteria are true:
a) the orientation getting the maximum counts is greater than a predetermined percentage of the total neighborhood counts and the orthogonal orientation gets the minimum counts;
b) the orientation getting the maximum counts is greater than a predetermined percentage of the total neighborhood counts, which is smaller than the percentage of criterion (a) above, the orthogonal orientation gets the minimum counts, and the ratio of the dominant orientation counts to its orthogonal counts is greater than a predetermined number; or
c) the ratio of the dominant orientation counts to its orthogonal orientation counts is greater than 10.
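The three alternative criteria of claim 11 reduce to a short predicate. The fractions `high_frac`, `low_frac` and the ratio `min_ratio` below are illustrative stand-ins for the claim's "predetermined" values; only the final ratio of 10 is fixed by the claim itself:

```python
def consistency_decision(dom, orth, total, orth_is_min,
                         high_frac=0.67, low_frac=0.44, min_ratio=2.0):
    """Sketch of claim 11: dom / orth / total are the dominant,
    orthogonal, and total neighborhood orientation counts."""
    ratio = dom / max(orth, 1)
    if orth_is_min and dom > high_frac * total:                       # criterion a)
        return True
    if orth_is_min and dom > low_frac * total and ratio > min_ratio:  # criterion b)
        return True
    return ratio > 10                                                 # criterion c)
```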
12. The method as claimed in claim 2, where the step of orientation sharpening the structural regions comprises the steps of:
a) obtaining a maximum directional edge strength image by computing the four edge strengths of each pixel and selecting the highest value;
b) smoothing along the edges of the edge strength image to obtain a smoothed edge strength image; and
c) multiplying the edge strength image by a multiplier and adding it to the image resulting from orientation smoothing.
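A sketch of claim 12's three steps, assuming second differences along the horizontal, vertical, and two diagonal directions as the four directional edge strengths; the kernels, the 3x3 mean used for smoothing, and the multiplier are illustrative choices, not dictated by the claim:

```python
import numpy as np

def orientation_sharpen(smoothed, multiplier=0.5):
    p = np.pad(smoothed, 1, mode="edge")
    c = p[1:-1, 1:-1]
    # step a): four directional edge strengths, keep the maximum
    edges = [np.abs(2*c - p[1:-1, :-2] - p[1:-1, 2:]),   # horizontal
             np.abs(2*c - p[:-2, 1:-1] - p[2:, 1:-1]),   # vertical
             np.abs(2*c - p[:-2, :-2] - p[2:, 2:]),      # diagonal
             np.abs(2*c - p[:-2, 2:] - p[2:, :-2])]      # anti-diagonal
    edge = np.maximum.reduce(edges)
    # step b): smooth the edge-strength image (plain 3x3 mean here)
    q = np.pad(edge, 1, mode="edge")
    h, w = edge.shape
    edge = sum(q[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0
    # step c): add the scaled edge strengths to the orientation-smoothed image
    return smoothed + multiplier * edge
```

A flat region has zero edge strength in every direction, so it is left untouched.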
13. The method claimed in claim 2, where the step of re-normalizing the filtered data comprises the steps of:
a) computing the average pixel intensity in the filtered image;
b) computing a normalization factor by dividing the average intensity of the original image by the average intensity of the filtered image; and
c) computing the normalized image by multiplying the intensity of each pixel of the filtered image by the normalization factor and adding the minimum intensity of the original image to the resulting sum.
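Claim 13 maps directly onto a two-line computation, assuming the average and minimum intensities of the original image were saved during normalization:

```python
import numpy as np

def renormalize(filtered, orig_avg, orig_min):
    """Claim 13's re-normalization: restore the filtered image to the
    original intensity range."""
    factor = orig_avg / filtered.mean()       # steps a)-b)
    return filtered * factor + orig_min       # step c)
```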
14. The method as claimed in claim 2, where the step of blending texture from the original image with the re-normalized image is achieved by using the equation I(x,y) = alpha*(I(x,y) − I1(x,y)) + I1(x,y), where I(x,y) is the filtered image, I1(x,y) is the original image, and alpha is a user-selected parameter such that 0 < alpha < 1.
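Claim 14's blend is a one-line linear interpolation. The claim restricts alpha to 0 < alpha < 1; the endpoint values below merely illustrate the limiting behavior:

```python
def blend_texture(filtered, original, alpha=0.5):
    """Claim 14: I_out = alpha*(I_filtered - I_original) + I_original.
    alpha near 1 keeps the filtered result; alpha near 0 reverts to the
    original, mixing original texture back in proportionally."""
    return alpha * (filtered - original) + original
```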
15. The method as claimed in claim 1, where the step of correcting for intensity non-uniformities in the first filtered image comprises the steps of:
a) reading the intensity of each pixel of the first filtered image;
b) obtaining the average intensity of the first filtered image by summing the intensities of all the pixels with an intensity greater than 0 and dividing by the number of such pixels;
c) multiplying the average intensity by a user-selected value to obtain a threshold value;
d) reducing the image by forming each pixel of the reduced image from the average of the non-overlapping pixel neighborhoods of a predetermined size;
e) obtaining an expanded image by expanding each dimension of the reduced image by an input parameter and mirroring the pixels to avoid discontinuities;
f) obtaining an expanded threshold image according to the following criteria;
i) if the value of a pixel in the expanded image from step (e) above is greater than the threshold value from step (c) above, then the value of the corresponding pixel in the expanded threshold image is set equal to the average intensity from step (b) above;
or ii) if the value of a pixel in the expanded image from step (e) above is less than or equal to the threshold value from step (c) above, then the value of the corresponding pixel in the expanded threshold image is set equal to 0;
g) smoothing the expanded image and the expanded threshold image, resulting in a filtered expanded image and a filtered expanded threshold image;
h) obtaining an equalization function by dividing the maximum of the filtered expanded image and a user-defined constant by the maximum of the filtered expanded threshold image and the same user-defined constant such that the user-defined constant has a value between 0 and 1;
i) performing a bilinear interpolation on the equalization function, resulting in an intermediate image;
j) obtaining a corrected image such that the value of each pixel in the corrected image is equal to g*h/(h*h+N), where g is the first filtered image from step (a) above, h is the intermediate image from step (i) above, and N is a user-defined constant such that 100 < N < 5000;
k) multiplying the corrected image by the average intensity of the first filtered image from step (b) above and dividing the product by the average intensity of the corrected image, resulting in an equalized image;
l) forming an enhanced, corrected image according to the following criteria;
i) the value of a pixel in the enhanced, corrected image is set equal to the value of the corresponding pixel in the first filtered image if any of the following are true;
(1) the value of the corresponding pixel in the equalized image is less than the corresponding pixel in the first filtered image;
(2) the value of the corresponding pixel in the equalized image is less than 1;
(3) the value of the corresponding pixel in the equalized image is greater than the value of the corresponding pixel in the first filtered image, and the value of the corresponding pixel in the first filtered image is less than the threshold value from step (c) above;
or (4) the maximum value of any pixel in the first filtered image multiplied by a user-supplied constant is greater than the value of the corresponding pixel in the first filtered image;
ii) if none of the criteria in the immediately preceding step are true, then the value of the pixel in the enhanced, corrected image is set equal to the value of the corresponding pixel in the equalized image.
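The core of claim 15's equalization is the regularized division of step j). Where h*h dominates N this approaches g/h, dividing out the shading estimate; where h is small, N (100 < N < 5000) keeps the correction bounded instead of blowing up. N = 1500 below is an illustrative choice:

```python
import numpy as np

def equalize_shading(g, h, N=1500.0):
    """Step j) of claim 15: g is the first filtered image, h the
    smoothed low-frequency shading estimate (the intermediate image)."""
    return g * h / (h * h + N)
```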
16. The method as claimed in claim 15, where the step of filtering the expanded image and the expanded threshold image comprises using a separable boxcar filter.
17. The method as claimed in claim 15, where the step of filtering the expanded image and the expanded threshold image comprises the steps of:
a) multiplying the value of each pixel in the expanded image and the expanded threshold image by +1 or −1 depending on the value of an index;
b) obtaining transforms of the expanded image and the expanded threshold image;
c) performing a filtering operation using predetermined Gaussian coefficients;
d) performing an inverse transform on the filtered image; and
e) multiplying the value of each pixel in the transformed image by −1 or +1 depending on the value of an index.
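Claim 17 does not name the transform; the sketch below assumes a 2-D FFT. The alternating +1/−1 sign flip of steps a) and e) shifts the DC component to the center of the spectrum, where the Gaussian coefficients of step c) preserve low frequencies and attenuate high ones. The `sigma` parameter is an illustrative assumption:

```python
import numpy as np

def transform_gaussian_filter(img, sigma=2.0):
    r, c = np.indices(img.shape)
    sign = (-1.0) ** (r + c)                       # steps a)/e): spectrum shift
    F = np.fft.fft2(img * sign)                    # step b): forward transform
    u = r - img.shape[0] / 2.0
    v = c - img.shape[1] / 2.0
    G = np.exp(-(u**2 + v**2) / (2.0 * sigma**2))  # step c): Gaussian coefficients
    out = np.fft.ifft2(F * G).real                 # step d): inverse transform
    return out * sign                              # step e): undo the shift
```

A constant image sits at DC and passes through unchanged, while the highest-frequency checkerboard pattern is almost entirely suppressed.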
18. The method as claimed in claim 1, where the step of enhancing the local contrast between intensity values comprises the steps of:
a) obtaining the value of each pixel in the enhanced, corrected image;
b) summing the values of all of the pixels in the enhanced, corrected image whose value is greater than 0 and dividing the sum by the number of such pixels, resulting in an average value;
c) multiplying the average value by a user-supplied constant, resulting in a threshold value;
d) reducing the image by forming each pixel of the reduced image from the average of the non-overlapping pixel neighborhoods of a predetermined size;
e) obtaining an expanded image by expanding each dimension of the reduced image by an input parameter and mirroring the pixels to avoid discontinuities;
f) obtaining an expanded threshold image according to the following criteria;
i) if the value of a pixel in the expanded image from step (e) above is greater than the threshold value from step (c) above, then the value of the corresponding pixel in the expanded threshold image is set equal to the average intensity from step (b) above;
or ii) if the value of a pixel in the expanded image from step (e) above is less than or equal to the threshold value from step (c) above, then the value of the corresponding pixel in the expanded threshold image is set equal to 0;
iii) smoothing the expanded image and the expanded threshold image using a separable boxcar filter, resulting in a filtered expanded image and a filtered expanded threshold image;
g) obtaining a gain function by dividing the maximum of the filtered expanded image and a user-defined constant by the maximum of the filtered expanded threshold image and the same user-defined constant such that the user-defined constant has a value between 0 and 1;
h) multiplying the gain function by the ratio of the average intensity of the filtered expanded image to the average intensity of the reduced image, resulting in a scaled image;
i) performing a bilinear interpolation operation on the scaled image, resulting in an intermediate image;
j) forming a boosted image by combining the intermediate image with the enhanced, corrected image from step (a) above;
k) mapping the boosted image onto a second boosted image using a non-linear lookup table; and
l) multiplying the second boosted image by the ratio of the average intensity of the enhanced, corrected image to the average intensity of the second boosted image, resulting in a final image.
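The gain computation of steps g) and h) in claim 18 can be sketched as a pixelwise ratio of the two smoothed images, each floored at a constant 0 < c < 1 so the division stays defined. The value c = 0.5 is an illustrative assumption:

```python
import numpy as np

def contrast_gain(filtered_expanded, filtered_threshold, reduced_avg, c=0.5):
    """Steps g)-h) of claim 18: gain image scaled by the ratio of the
    average intensity of the filtered expanded image to that of the
    reduced image."""
    gain = np.maximum(filtered_expanded, c) / np.maximum(filtered_threshold, c)  # step g)
    return gain * (filtered_expanded.mean() / reduced_avg)                       # step h)
```

Where both images are zero the floor c gives a neutral gain of 1 rather than 0/0.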
19. A method for enhancing and correcting a digital image made up of a pixel array, the method comprising the steps of:
a) acquiring pixel data which defines a digital image of internal features of a physical subject;
b) enhancement filtering the digital image, resulting in a first filtered image;
c) correcting for intensity non-uniformities in the first filtered image, resulting in an enhanced, corrected digital image; and
d) enhancing the local contrast between intensity values without suppressing the local contrast of any of the intensity values, resulting in a final digital image. - View Dependent Claims (20)
Specification