Extended color processing on Pelican array cameras
First Claim
1. A method of generating an image of a scene using a camera array including at least one camera that captures an RGB image of a scene and at least one camera that captures near-infrared (IR) spectral wavelengths of the scene, the method comprising:
- obtaining input images captured by a plurality of cameras that includes a camera that captures an RGB image and a camera that captures near-IR wavelengths, where the input images include a first input image that includes image information captured in at least three channels (RGB) of information and a second input image that includes image information captured in at least a near-IR channel of information;
- generating a fused image using a processor configured by software to:
  - measure parallax using the input images captured by the plurality of cameras to produce a depth map;
  - normalize the second input image in the photometric reference space of the first input image;
  - cross-channel normalize the first input image with respect to the second input image by applying gains and offsets to pixels of the first input image; and
  - perform cross-channel fusion using the first input image and the second input image to produce an image.
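The steps recited above can be sketched in code. The following is a minimal illustration only, not the patented implementation: it assumes two already-registered single-plane images (so the parallax/depth-map step is taken as done), uses simple mean/std matching for photometric normalization, a global least-squares gain and offset for cross-channel normalization (the claim applies gains and offsets to pixels, which could also be done per region), and a weighted blend for fusion. All function names are illustrative.

```python
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # RGB -> luminance weights

def photometric_normalize(nir, rgb):
    """Map the near-IR image into the photometric reference space of the
    RGB image by matching its mean and std to the RGB luminance."""
    luma = rgb @ LUMA
    return (nir - nir.mean()) / (nir.std() + 1e-8) * luma.std() + luma.mean()

def cross_channel_normalize(rgb, nir, eps=1e-8):
    """Apply a gain and offset to the RGB pixels so their luminance best
    matches the near-IR image in a least-squares sense."""
    x = (rgb @ LUMA).ravel()
    y = nir.ravel()
    gain = ((x - x.mean()) * (y - y.mean())).sum() / (((x - x.mean()) ** 2).sum() + eps)
    offset = y.mean() - gain * x.mean()
    return gain * rgb + offset

def cross_channel_fuse(rgb, nir, w=0.5):
    """Blend near-IR detail into each RGB channel with weight w."""
    return (1 - w) * rgb + w * nir[..., None]

# Usage on synthetic data
rgb = np.random.rand(16, 16, 3)
nir = np.random.rand(16, 16)
nir_norm = photometric_normalize(nir, rgb)
fused = cross_channel_fuse(cross_channel_normalize(rgb, nir_norm), nir_norm)
```

The near-IR channel typically carries low-noise luminance detail, which is why it is normalized into the RGB image's photometric space before fusion rather than the other way around.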
1 Assignment
0 Petitions
Abstract
Systems and methods for extended color processing on Pelican array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating a high resolution image includes obtaining input images, where a first set of images includes information in a first band of visible wavelengths and a second set of images includes information in a second band of visible wavelengths and non-visible wavelengths, determining an initial estimate by combining the first set of images into a first fused image, combining the second set of images into a second fused image, spatially registering the fused images, denoising the fused images using bilateral filters, normalizing the second fused image in the photometric reference space of the first fused image, combining the fused images, and determining a high resolution image that when mapped through a forward imaging transformation matches the input images within at least one predetermined criterion.
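The abstract's denoising step references bilateral filters. A minimal single-channel version of that classic edge-preserving filter (a generic textbook formulation, not the specific implementation claimed here) can be written as:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each output pixel is a weighted mean of
    its neighbors, with weights falling off with both spatial distance
    (sigma_s) and intensity difference (sigma_r), so edges survive."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

Unlike a plain Gaussian blur, the intensity term suppresses averaging across strong edges, which is why bilateral filtering is a common choice for denoising fused images before further processing.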
19 Claims
1. (Set forth in full above under First Claim.) Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10.
11. An array camera configured to generate an image of a scene using an array camera including at least one camera that captures an RGB image of a scene and at least one camera that captures at least near-infrared (IR) spectral wavelengths of the scene, the array camera comprising:
- an array camera including a plurality of cameras that includes a camera that captures an RGB image and a camera that captures at least near-IR spectral wavelengths; and
- a processor configured by software to:
  - obtain input images captured by the plurality of cameras that includes the camera that captures an RGB image and the camera that captures at least near-IR spectral wavelengths, where the input images include a first input image that includes image information captured in at least three channels (RGB) of information and a second input image that includes image information captured in a near-IR channel of information;
  - generate a fused image by:
    - measuring parallax using the input images captured by the plurality of cameras to produce a depth map;
    - normalizing the second input image in the photometric reference space of the first input image;
    - cross-channel normalizing the first input image with respect to the second input image by applying gains and offsets to pixels of the first input image; and
    - performing cross-channel fusion using the first input image and the second input image to produce an image.

Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19.
Specification