Systems and Methods for Normalizing Image Data Captured by Camera Arrays
Abstract
Systems and methods are disclosed for implementing array cameras configured to perform super-resolution processing, generating higher resolution super-resolved images from a plurality of captured images, together with lens stack arrays that can be utilized in array cameras. An imaging device in accordance with one embodiment of the invention includes at least one imager array, in which each imager comprises a plurality of light sensing elements and a lens stack including at least one lens surface configured to form an image on the light sensing elements; control circuitry configured to capture the images formed on the light sensing elements of each of the imagers; and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
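The super-resolution idea the abstract describes, one higher resolution image assembled from a plurality of lower resolution captures, can be illustrated with a toy sketch. The interleaving scheme and all names below are illustrative assumptions, not the patented processing pipeline.

```python
import numpy as np

# Toy sketch (not the patented method): four low-resolution captures taken at
# known sub-pixel shifts are interleaved onto a finer grid, the basic idea
# behind super-resolving an image from an imager array.
scene = np.arange(16, dtype=float).reshape(4, 4)       # stand-in high-resolution scene
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]              # sub-pixel shift of each imager
captures = [scene[dy::2, dx::2] for dy, dx in shifts]  # four 2x2 low-res captures

super_resolved = np.empty_like(scene)
for (dy, dx), lowres in zip(shifts, captures):
    super_resolved[dy::2, dx::2] = lowres              # place samples on the fine grid

print(np.array_equal(super_resolved, scene))           # True: the captures tile the scene
```

Real captures would also differ in color response, distortion, and parallax, which is exactly what the normalization claims below address before super-resolution can combine them.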
19 Claims
1. A method for normalizing image data captured by camera arrays, comprising:
obtaining calibration data for an imager array by capturing images using the imager array, where the calibration data indicates mappings between addresses of physical pixels in imagers and logical addresses within an image;
storing the calibration data in a storage device; and
normalizing a set of images with respect to an image captured by a baseline imager within the imager array, based upon the calibration data stored in the storage device, using an address conversion module, where the set of images comprises a plurality of images that:
are captured from different viewpoints; and
include different occlusion sets;
wherein the occlusion set of a first image is the portion of a scene visible in a second image that is occluded in the first image; and
wherein normalizing the set of images with respect to the image captured by the baseline imager comprises:
correcting color differences between the images with respect to the image captured by the baseline imager; and
correcting geometric distortion differences between the captured images with respect to the image captured by the baseline imager.
View Dependent Claims (2, 3, 4, 5, 11, 12, 16, 18)
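The two mechanisms claim 1 recites, an address conversion module driven by calibration mappings and a color/geometry normalization against a baseline imager, can be sketched as follows. The calibration table contents, the additive offset model, the gain model, and every name below are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

# Hypothetical calibration data: a per-imager (row, col) offset mapping
# physical pixel addresses to logical addresses in the output image.
CALIBRATION = {"imager_0": (0, 0), "imager_1": (1, -2)}

def to_logical(imager, physical):
    """Address conversion: physical pixel address -> logical image address."""
    dr, dc = CALIBRATION[imager]
    return (physical[0] + dr, physical[1] + dc)

def normalize_to_baseline(image, baseline, gain=1.0):
    """Crude stand-in for the claimed color correction: scale the image so its
    mean level matches the baseline imager's mean level. Geometric correction
    would resample pixels through `to_logical`."""
    image = image.astype(float) * gain
    return image * (baseline.mean() / image.mean())

baseline = np.full((4, 4), 100.0)
other = np.full((4, 4), 80.0)            # same scene seen by a dimmer imager
normalized = normalize_to_baseline(other, baseline)
print(to_logical("imager_1", (5, 7)))    # (6, 5)
print(float(normalized.mean()))          # 100.0
```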
6. The method of claim 6, wherein obtaining a normalization plane comprises:
capturing images of a scene with flat reflectance and calculating a color ratio surface;
removing the black level offset from the pixel values in the captured images;
low pass filtering the pixel values in the captured images to reduce noise; and
calculating a normalization plane.
View Dependent Claims (8, 10)
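The flat-field steps recited in claim 6 can be sketched directly: subtract the black level, low pass filter, then form the color-ratio surface against the baseline imager. A simple 3x3 box filter stands in for the low pass filter; the black level, filter choice, and all names are assumptions.

```python
import numpy as np

def box_filter(img, k=3):
    """Naive k x k mean filter; edge pixels use replicated padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def normalization_plane(flat_capture, baseline_flat, black_level):
    """Color-ratio surface from flat-reflectance captures: subtract the black
    level, low pass filter to reduce noise, then ratio against the baseline."""
    a = box_filter(flat_capture.astype(float) - black_level)
    b = box_filter(baseline_flat.astype(float) - black_level)
    return b / a

flat = np.full((5, 5), 110.0)   # flat-field capture from the imager being calibrated
base = np.full((5, 5), 210.0)   # flat-field capture from the baseline imager
plane = normalization_plane(flat, base, black_level=10.0)
print(float(plane[0, 0]))       # 2.0
```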
7. The method of claim 7, wherein:
the imager array comprises Green imagers that include Green spectral filters, including a baseline Green imager, Red imagers that include Red spectral filters, and Blue imagers that include Blue spectral filters;
a normalization plane is obtained for each of the Red and Blue imagers; and
a normalization plane is calculated for at least one of the Red imagers by determining:
Norm_R = G(i, j) / (R(i, j) × (G_center / R_center))
where G is the baseline Green imager, R is the Red imager being normalized with respect to the baseline Green imager, (i, j) is the pixel position, and G_center and R_center are the pixel values at the center position.
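The recited relation can be evaluated directly. The array contents and center values below are toy numbers, not calibration data from the patent.

```python
import numpy as np

# Direct evaluation of the claim's relation
#   Norm_R = G(i, j) / (R(i, j) x (G_center / R_center))
G = np.array([[80.0, 90.0], [100.0, 110.0]])   # baseline Green imager (toy values)
R = np.array([[40.0, 45.0], [50.0, 55.0]])     # Red imager being normalized (toy values)

def norm_r(G, R, i, j, g_center, r_center):
    return G[i, j] / (R[i, j] * (g_center / r_center))

# at pixel (0, 0) with center values G_center = 100 and R_center = 50:
print(norm_r(G, R, 0, 0, g_center=100.0, r_center=50.0))   # 1.0
```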
9. The method of claim 9, wherein a 6th order polynomial represented using seven coefficients is fitted to the normalization plane using the space filling curve.
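Claim 9's fit can be sketched by serializing the normalization plane along a scan path and fitting a 6th order polynomial (which indeed has seven coefficients). A boustrophedon ("snake") scan is used below as a stand-in for the space filling curve; the real curve and the plane values are assumptions.

```python
import numpy as np

def snake_scan(plane):
    """Serialize a 2-D plane row by row, reversing every other row."""
    return np.concatenate(
        [row if r % 2 == 0 else row[::-1] for r, row in enumerate(plane)]
    )

# toy normalization plane with a gentle gradient
plane = np.fromfunction(lambda i, j: 1.0 + 0.01 * i + 0.02 * j, (8, 8))
samples = snake_scan(plane)               # 64 samples along the scan path
t = np.linspace(0.0, 1.0, samples.size)   # position along the path
coeffs = np.polyfit(t, samples, deg=6)    # 6th order polynomial fit
print(len(coeffs))                        # 7
```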
13. The method of claim 13, wherein aligning portions of images captured by different imagers to compensate for parallax using an image pixel correlation module further comprises determining appropriate X and Y offsets to be applied to logical pixel address calculations using an address conversion module based upon the detected and measured parallax and the stored calibration data.
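The offset step in claim 13 amounts to folding a measured parallax offset into the logical address calculation alongside the stored calibration offset. The additive model and names below are assumptions, not the patent's address arithmetic.

```python
# Minimal sketch: measured parallax contributes an extra X/Y offset on top of
# the stored calibration offset when computing a logical pixel address.

def logical_address(physical, calibration_offset, parallax_offset):
    px, py = physical
    cx, cy = calibration_offset      # from the stored calibration data
    dx, dy = parallax_offset         # from the detected and measured parallax
    return (px + cx + dx, py + cy + dy)

print(logical_address((10, 20), (1, -2), (3, 4)))   # (14, 22)
```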
14. The method of claim 14, wherein the address conversion module, the parallax confirmation and measurement module, the image pixel correlation module, and the super-resolution module are implemented using a general-purpose computer selectively reconfigured by a computer program stored in the computer.
17. The method of claim 17, wherein the baseline imager includes a Green filter.
19. A method for normalizing image data captured by camera arrays, comprising:
obtaining calibration data for imagers in an imager array by capturing images using the imager array, where the calibration data indicates mappings between addresses of physical pixels in imagers and logical addresses within an image;
storing the calibration data in a storage device; and
normalizing a set of images with respect to an image captured by a baseline imager within the imager array, based upon the calibration data stored in the storage device, using an address conversion module, where the set of images comprises a plurality of images that:
are captured from different viewpoints; and
include different occlusion sets;
wherein the occlusion set of a first image is the portion of a scene visible in a second image that is occluded in the first image;
wherein obtaining calibration data comprises:
obtaining a normalization plane by:
capturing images of a scene with flat reflectance and calculating a color ratio surface;
removing the black level offset from the pixel values in the captured images;
low pass filtering the pixel values in the captured images to reduce noise; and
calculating a normalization plane;
fitting a polynomial to the normalization plane by scanning the normalization plane using a space filling curve; and
storing fitted polynomials as calibration data in the storage device; and
wherein normalizing the set of images with respect to the image captured by the baseline imager comprises:
correcting color differences between images with respect to the image captured by the baseline imager using the fitted polynomials; and
correcting geometric distortion differences between the captured images with respect to the image captured by the baseline imager.
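Claim 19 closes the loop: the polynomial fitted at calibration time is stored and later evaluated to correct color differences. A 1-D sketch of that round trip, with a toy normalization profile and illustrative names, assumptions throughout:

```python
import numpy as np

# Calibration time: fit a polynomial to the normalization profile and store
# only its coefficients as calibration data.
t = np.linspace(0.0, 1.0, 64)
profile = 1.0 + 0.3 * t + 0.2 * t ** 2     # toy normalization profile
coeffs = np.polyfit(t, profile, deg=6)     # stored as calibration data

# Run time: evaluate the stored coefficients and divide them out to correct
# color differences toward the baseline imager.
recovered = np.polyval(coeffs, t)
raw = 100.0 * profile                      # flat scene seen through the imager
corrected = raw / recovered
print(np.allclose(corrected, 100.0))       # True
```

Storing seven coefficients per imager instead of a full per-pixel surface is the space saving that motivates the polynomial fit in the first place.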