Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
First Claim
1. An array camera, comprising:
a processor; and
a memory connected to the processor and configured to store an image deconvolution application;
wherein the image deconvolution application configures the processor to:
obtain light field image data, where the light field image data comprises an image having a plurality of pixels, a depth map, and metadata describing the motion associated with a capturing device that captured the light field image data;
determine motion data based on the metadata contained in the light field image data;
generate a depth-dependent point spread function based on the image, the depth map, and the motion data, where the depth-dependent point spread function describes the blurriness of points within the image based on the motion of the capturing device and the depth of the pixels described in the depth map;
measure the quality of the image based on the generated depth-dependent point spread function;
when the measured quality of the image is within a quality threshold, incorporate the image into the light field image data; and
when the measured quality of the image is outside the quality threshold:
determine updated motion data based on the measured quality and the depth-dependent point spread function;
generate an updated depth-dependent point spread function based on the updated motion data; and
synthesize a new image based on the updated depth-dependent point spread function.
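The claimed control flow can be illustrated with a minimal Python sketch. Everything here is an assumption for illustration: the helper names, the blur model (nearer points smear more under the same camera motion), the sharpness metric, and the unsharp-mask stand-in for re-synthesis are all hypothetical, since the claim does not specify particular algorithms.

```python
import numpy as np

def depth_dependent_psf(depth_map, motion, size=5):
    """Build one 1-D motion-blur kernel per depth level (hypothetical model:
    nearer points, i.e. smaller depth, smear more for the same camera motion)."""
    kernels = {}
    for d in np.unique(depth_map):
        length = max(1, min(size, int(round(size * motion / float(d)))))
        k = np.zeros(size)
        k[:length] = 1.0 / length  # uniform horizontal smear over `length` taps
        kernels[int(d)] = k
    return kernels

def measure_quality(image, kernels):
    """Stand-in quality metric (assumption): mean gradient magnitude as a
    sharpness proxy; the kernels parameter mirrors the claim's interface."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def deconvolution_loop(image, depth_map, motion, threshold, max_iters=5):
    """Mirror the claimed steps: generate the depth-dependent PSF, measure
    quality, and either accept the image or update the motion data and
    synthesize a new image."""
    for _ in range(max_iters):
        kernels = depth_dependent_psf(depth_map, motion)
        quality = measure_quality(image, kernels)
        if quality >= threshold:      # within threshold: incorporate image
            return image, motion
        motion *= 0.5                 # hypothetical motion-data update
        # hypothetical re-synthesis: unsharp-mask stand-in for deconvolution
        blurred = (image + np.roll(image, 1, axis=1)) / 2.0
        image = np.clip(2.0 * image - blurred, 0.0, 1.0)
    return image, motion
```

The loop accepts the image as soon as the metric clears the threshold, matching the claim's two branches; otherwise it refines the motion estimate and re-synthesizes, up to a fixed iteration budget.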
Abstract
Systems and methods for synthesizing high resolution images using image deconvolution and depth information in accordance with embodiments of the invention are disclosed. In one embodiment, an array camera includes a processor and a memory, wherein an image deconvolution application configures the processor to obtain light field image data, determine motion data based on metadata contained in the light field image data, generate a depth-dependent point spread function based on the synthesized high resolution image, the depth map, and the motion data, measure the quality of the synthesized high resolution image based on the generated depth-dependent point spread function, and when the measured quality of the synthesized high resolution image is within a quality threshold, incorporate the synthesized high resolution image into the light field image data.
20 Claims
1. An array camera, comprising:
a processor; and
a memory connected to the processor and configured to store an image deconvolution application;
wherein the image deconvolution application configures the processor to:
obtain light field image data, where the light field image data comprises an image having a plurality of pixels, a depth map, and metadata describing the motion associated with a capturing device that captured the light field image data;
determine motion data based on the metadata contained in the light field image data;
generate a depth-dependent point spread function based on the image, the depth map, and the motion data, where the depth-dependent point spread function describes the blurriness of points within the image based on the motion of the capturing device and the depth of the pixels described in the depth map;
measure the quality of the image based on the generated depth-dependent point spread function;
when the measured quality of the image is within a quality threshold, incorporate the image into the light field image data; and
when the measured quality of the image is outside the quality threshold:
determine updated motion data based on the measured quality and the depth-dependent point spread function;
generate an updated depth-dependent point spread function based on the updated motion data; and
synthesize a new image based on the updated depth-dependent point spread function.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
16. A method for performing image deconvolution, comprising:
obtain light field image data using an array camera having a plurality of cameras, wherein the light field image data comprises an image having a plurality of pixels, a depth map, and metadata describing the motion associated with the array camera that captured the light field image data;
determine motion data based on the metadata contained in the light field image data;
generate a depth-dependent point spread function based on the image, the depth map, and the motion data, where the depth-dependent point spread function describes a blurriness of points within the image based on the motion of the array camera and the depth of the pixels described in the depth map;
measure the quality of the image based on the generated depth-dependent point spread function;
when the measured quality of the image is within a quality threshold, incorporate the image into the light field image data; and
when the measured quality of the image is outside the quality threshold:
determine updated motion data based on the measured quality and the depth-dependent point spread function;
generate an updated depth-dependent point spread function based on the updated motion data; and
synthesize a new image based on the updated depth-dependent point spread function.
- View Dependent Claims (17, 18, 19, 20)
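The central idea of a depth-dependent point spread function is that a single camera motion blurs pixels differently depending on their scene depth. A minimal Python sketch (the per-depth blur-length model and all helper names are hypothetical assumptions, not the patent's method) applies a different motion-blur kernel to each depth layer:

```python
import numpy as np

def motion_kernel(length):
    """Uniform 1-D horizontal motion-blur kernel."""
    return np.ones(length) / length

def apply_depth_dependent_blur(image, depth_map, base_length=4):
    """Blur each depth layer with its own kernel (hypothetical model:
    blur length shrinks with depth, since distant points move less
    across the sensor for the same camera motion)."""
    out = np.zeros_like(image, dtype=float)
    for d in np.unique(depth_map):
        length = max(1, int(round(base_length / float(d))))
        k = motion_kernel(length)
        # blur every row with this depth's kernel...
        blurred = np.apply_along_axis(
            lambda row: np.convolve(row, k, mode='same'), 1, image)
        # ...but keep only the pixels that lie at this depth
        mask = depth_map == d
        out[mask] = blurred[mask]
    return out
```

Under this toy model, pixels at the largest depths receive a length-1 kernel and pass through unchanged, while near pixels are smeared over several taps, which is the per-pixel variation the claimed point spread function describes.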
Specification