Systems and methods for measuring depth in the presence of occlusions using a subset of images
Abstract
Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
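The geometric relationship behind this is standard rectified-stereo geometry (not specific to the patent): pixel disparity is inversely proportional to depth, disparity = baseline × focal length / depth. A minimal sketch, with hypothetical baseline and focal-length values chosen only for illustration:

```python
def disparity_from_depth(depth_m, baseline_m=0.01, focal_px=800.0):
    """Pixel disparity between two rectified cameras for a point at depth_m.

    baseline_m and focal_px are illustrative values, not taken from the patent.
    """
    return baseline_m * focal_px / depth_m

def depth_from_disparity(disparity_px, baseline_m=0.01, focal_px=800.0):
    """Invert the relationship: the depth that produces an observed disparity."""
    return baseline_m * focal_px / disparity_px

# Nearby objects shift more between viewpoints than distant ones:
print(disparity_from_depth(0.5))  # 16.0 px at 0.5 m
print(disparity_from_depth(8.0))  # 1.0 px at 8 m
```

This inverse relationship is why a depth estimator can search over candidate disparities and read off depth once the best-matching disparity is found.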
30 Claims
1. A method of estimating distances to objects within a scene from a set of images captured from different viewpoints using a processor configured by an image processing application, the method comprising:
selecting a reference viewpoint relative to the viewpoints of the set of images captured from different viewpoints;
normalizing the set of images to increase the similarity of corresponding pixels within the set of images;
determining initial depth estimates for pixel locations in an image from the reference viewpoint based upon the disparity at which corresponding pixels in the set of images have the highest degree of similarity;
comparing the similarity of the corresponding pixels in the set of images to detect mismatched pixels;
when an initial depth estimate does not result in the detection of a mismatch between corresponding pixels in the set of images, selecting the initial depth estimate as the depth estimate for the pixel location in the image from the reference viewpoint; and
when an initial depth estimate results in the detection of a mismatch between corresponding pixels in the set of images, updating the depth estimate for the pixel location in the image from the reference viewpoint by:
determining a set of candidate depth estimates using a plurality of competing subsets of the set of images based upon the disparities at which corresponding pixels in each of the plurality of competing subsets of images have the highest degree of similarity; and
selecting the candidate depth of the subset having the corresponding pixels with the highest degree of similarity as the updated depth estimate for the pixel location in the image from the reference viewpoint.
Dependent claims: 2-24.
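The steps of claim 1 can be sketched as follows. This is an illustrative reading, not the patent's implementation: similarity is scored here as the variance of corresponding pixel intensities across views, a mismatch is declared when that variance exceeds a hypothetical threshold, and the competing subsets are caller-supplied groupings of views chosen so that at least one subset is likely to exclude an occluded camera.

```python
import numpy as np

def best_disparity(samples):
    """samples: (n_disparities, n_views) array holding the intensity each view
    reports for a reference pixel at each candidate disparity.  Lower variance
    across views means more similar corresponding pixels."""
    costs = samples.var(axis=1)
    d = int(costs.argmin())
    return d, costs[d]

def estimate_depth(samples, subsets, mismatch_threshold=0.01):
    """Keep the all-view estimate unless its views disagree too much; then
    re-estimate over competing subsets and keep the subset that agrees best."""
    d, cost = best_disparity(samples)
    if cost <= mismatch_threshold:           # no mismatch detected
        return d
    candidates = []
    for s in subsets:                        # e.g. left / right halves of an array
        ds, cs = best_disparity(samples[:, s])
        candidates.append((cs, ds))
    best_cost, best_d = min(candidates)      # subset with most-similar pixels wins
    return best_d

# Toy example: the true disparity index is 2; view 3 is occluded everywhere.
samples = np.array([
    [0.4, 0.5, 0.6, 0.9],
    [0.3, 0.5, 0.7, 0.9],
    [0.2, 0.2, 0.2, 0.9],   # views 0-2 agree here; view 3 sees an occluder
    [0.5, 0.6, 0.4, 0.9],
    [0.7, 0.5, 0.3, 0.9],
])
subsets = [[0, 1, 2], [1, 2, 3]]
print(estimate_depth(samples, subsets))  # 2
```

Because the occluded view inflates the variance at the true disparity, the all-view search lands on the wrong disparity (index 0 here) and trips the mismatch test; the subset that excludes view 3 then recovers the correct estimate.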
25. A method of synthesizing a higher resolution image from a set of lower resolution images captured from different viewpoints, the method comprising:
estimating distances to objects within a scene from a set of images captured from different viewpoints using a processor configured by an image processing application, the method comprising:
selecting a reference viewpoint relative to the viewpoints of the set of images captured from different viewpoints;
normalizing the set of images to increase the similarity of corresponding pixels within the set of images;
determining initial depth estimates for pixel locations in an image from the reference viewpoint based upon the disparity at which corresponding pixels in the set of images have the highest degree of similarity;
comparing the similarity of the corresponding pixels in the set of images to detect mismatched pixels;
when an initial depth estimate does not result in the detection of a mismatch between corresponding pixels in the set of images, selecting the initial depth estimate as the depth estimate for the pixel location in the image from the reference viewpoint; and
when an initial depth estimate results in the detection of a mismatch between corresponding pixels in the set of images, updating the depth estimate for the pixel location in the image from the reference viewpoint by:
determining a set of candidate depth estimates using a plurality of competing subsets of the set of images based upon the disparities at which corresponding pixels in each of the plurality of competing subsets of images have the highest degree of similarity; and
selecting the candidate depth of the subset having the corresponding pixels with the highest degree of similarity as the depth estimate for the pixel location in the image from the reference viewpoint;
determining the visibility of the pixels in the set of images from the reference viewpoint using the processor configured by the image processing application by:
identifying corresponding pixels in the set of images using the depth estimates; and
determining that a pixel in a given image is not visible in the reference viewpoint when the pixel fails a photometric similarity criterion determined based upon a comparison of corresponding pixels; and
fusing pixels from the set of images using the processor configured by the image processing application based upon the depth estimates to create a fused image having a resolution that is greater than the resolutions of the images in the set of images by:
identifying the pixels from the set of images that are visible in an image from the reference viewpoint using the visibility information; and
applying scene dependent geometric shifts to the pixels from the set of images that are visible in an image from the reference viewpoint to shift the pixels into the reference viewpoint, where the scene dependent geometric shifts are determined using the depth estimates; and
fusing the shifted pixels from the set of images to create a fused image from the reference viewpoint having a resolution that is greater than the resolutions of the images in the set of images.
Dependent claims: 26.
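A loose 1-D sketch of claim 25's visibility test and fusion step. Everything concrete here is an assumption for illustration: the photometric criterion is a simple absolute-difference threshold, shifts are integer disparities applied with a wrap-around roll, and the output stays at the input resolution, whereas the claimed fusion places shifted pixels onto a higher-resolution grid.

```python
import numpy as np

def fuse(reference, views, disparities, threshold=0.1):
    """Shift each view into the reference viewpoint by its disparity, drop
    pixels that fail the photometric visibility test, and average the rest
    with the reference pixel."""
    acc = reference.astype(float).copy()
    count = np.ones(len(reference))
    for view, d in zip(views, disparities):
        shifted = np.roll(view, d)                     # scene-dependent geometric shift
        ok = np.abs(shifted - reference) <= threshold  # visibility criterion
        acc[ok] += shifted[ok]
        count[ok] += 1
    return acc / count

ref = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
v1 = np.roll(ref, -1)   # the same scene seen with a disparity of one pixel
v2 = ref.copy()
v2[0] = 0.9             # pixel 0 of this view is occluded
print(fuse(ref, [v1, v2], [1, 0]))
```

The occluded pixel of `v2` fails the similarity test and is excluded from the average, so the fused result matches the reference signal rather than being contaminated by the occluder.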
27. An image processing system, comprising:
-
a processor; memory containing a set of images captured from different viewpoints and an image processing application; wherein the image processing application stored in memory directs the processor to; select a reference viewpoint relative to the viewpoints of the set of images captured from different viewpoints; normalize the set of images to increase the similarity of corresponding pixels within the set of images; determine initial depth estimates for pixel locations in an image from the reference viewpoint based upon the disparity at which corresponding pixels in the set of images have the highest degree of similarity; compare the similarity of the corresponding pixels in the set of images to detect mismatched pixels; when an initial depth estimate does not result in the detection of a mismatch between corresponding pixels in the set of images, selecting the initial depth estimate as the depth estimate for the pixel location in the image from the reference viewpoint; and when an initial depth estimate results in the detection of a mismatch between corresponding pixels in the set of images, updating the depth estimate for the pixel location in the image from the reference viewpoint by; determining a set of candidate depth estimates using a plurality of competing subsets of the set of images based upon the disparities at which corresponding pixels in each of the plurality of competing subsets of images have the highest degree of similarity; and selecting the candidate depth of the subset having the corresponding pixels with the highest degree of similarity as the updated depth estimate for the pixel location in the image from the reference viewpoint. - View Dependent Claims (28, 29, 30)
-
Specification