METHOD AND SYSTEM FOR PRODUCING A VIRTUAL OUTPUT IMAGE FROM DATA OBTAINED BY AN ARRAY OF IMAGE CAPTURING DEVICES
Abstract
In a method and system for providing virtual output images from an array of image capturing devices, image data (I1(s,t), I2(s,t)) is taken from the devices (C1, C2). This image data is processed by convolving it with a function, e.g. the path (S), and thereafter deconvolving it, either after or before summation (SUM), with an inverse point spread function (IPSF) or an equivalent filter (HP), to produce all-focus image data (I0(s,t)).
27 Claims
1. Method for producing output image data for a virtual output image (Ī0(s,t)) from input image data of individual input images (I1(s,t), I2(s,t), IK(s,t), IN(s,t)) provided by an array of image capturing devices (C1, C2, CK, CN) for capturing a number of individual images from different viewpoints,
the method comprising adding and inverse filtering of corresponding pixel values of the individual input images based on multiple depth layers in a scene,
characterized in that a virtual camera position (u0,v0) is provided by an input virtual camera position signal (C0) representing the virtual camera position (u0,v0) and the adding and inverse filtering is performed in dependence of the virtual camera position (u0,v0) to produce the image data for the virtual output image (Ī0(s,t)) as seen from the virtual camera position (u0,v0), wherein the array of image capturing devices is an array of cameras around a display screen and the virtual camera position is a point at the display screen, wherein the virtual camera position is determined by measuring the position of eyes on the display screen.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 12, 13, 14, 25, 26, 27)
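The core operation of claim 1 — aligning each camera image with the virtual camera position (u0,v0) for several candidate depth layers, then adding the corresponding pixel values — can be sketched as follows. This is an illustrative model only, not the patented implementation: the integer-pixel shifts, the parallax-proportional-to-baseline/depth assumption, and the function names are all hypothetical.

```python
import numpy as np

def shift_image(img, dy, dx):
    """Translate an image by integer offsets, zero-filling exposed borders."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys, xs = max(dy, 0), max(dx, 0)
    ye, xe = h + min(dy, 0), w + min(dx, 0)
    out[ys:ye, xs:xe] = img[ys - dy:ye - dy, xs - dx:xe - dx]
    return out

def synthesize_view(images, cam_positions, virtual_pos, depths):
    """Shift each camera image so it aligns with the virtual camera
    position at every candidate depth layer, then average the results
    (the 'adding' of claim 1; inverse filtering would follow)."""
    acc = np.zeros(images[0].shape, dtype=float)
    for d in depths:
        for img, (u, v) in zip(images, cam_positions):
            # Hypothetical parallax model: shift proportional to baseline / depth.
            dy = int(round((virtual_pos[0] - u) / d))
            dx = int(round((virtual_pos[1] - v) / d))
            acc += shift_image(img.astype(float), dy, dx)
    return acc / (len(depths) * len(images))
```

The averaged stack is sharp only for scene content at the assumed depths; the subsequent inverse filtering of the claims removes the blur contributed by the other layers.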
d. generating a point spread function (PSF) by integration of the projected paths;
e. deconvolving the integrated image (Ĩ0(s,t)), using the inverse point spread function (IPSF) of the point spread function (PSF), for producing the image data for the virtual output image (Ī0(s,t)) at the virtual camera position (u0,v0) provided by the input virtual camera position signal (C0).
-
5. Method as claimed in claim 1 wherein adding and inverse filtering comprises the method steps of:
a. generating a projected path (S) of the image shift for each image capturing device of the array with respect to the virtual camera position (u0,v0), the projected path (S) being the sum of translating impulse response functions for aligning the image with the virtual camera position at a number of depths;
b. generating a point spread function (PSF) by integration of the projected paths (S);
c. for each image, convolving the respective projected path (S) with the inverse point spread function (IPSF);
d. convolving each image (I1(s,t), I2(s,t)) with the convolution of path and inverse point spread function (S*IPSF), providing a convolved image;
e. adding the convolved images for producing the image data for the virtual output image (Ī0(s,t)) at the virtual camera position (u0,v0) provided by the input virtual camera position signal (C0).
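The projected path (S), the point spread function obtained by integrating the paths, and an inverse filter (IPSF) from the steps above can be sketched as follows. The grid size, the centering of the impulses, and the Wiener-style regularization constant `eps` are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def projected_path(cam_pos, virtual_pos, depths, size):
    """Projected path S: a sum of translated impulses, one per depth
    layer, each placed at the shift that aligns this camera's image
    with the virtual camera position (step a)."""
    S = np.zeros((size, size))
    c = size // 2
    for d in depths:
        dy = int(round((virtual_pos[0] - cam_pos[0]) / d))
        dx = int(round((virtual_pos[1] - cam_pos[1]) / d))
        S[c + dy, c + dx] += 1.0
    return S / len(depths)

def psf_from_paths(paths):
    """PSF: integration (here, the mean) of all projected paths (step b)."""
    return np.sum(paths, axis=0) / len(paths)

def inverse_filter_fft(kernel, eps=1e-3):
    """Regularized inverse (IPSF) of a convolution kernel, computed in
    the Fourier domain; eps avoids division by near-zero frequencies."""
    K = np.fft.fft2(kernel)
    return np.real(np.fft.ifft2(np.conj(K) / (np.abs(K) ** 2 + eps)))
```

With `eps = 0` the inverse filter is exact wherever the PSF spectrum is nonzero; a small positive `eps` trades exactness for noise robustness.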
-
6. Method of claim 1, wherein adding and inverse filtering comprises the method steps of:
a. generating a projected path (S) of the image shift for each image capturing device of the array (C1, C2, CK, CN), the path being determined with respect to the virtual camera position (u0,v0), the projected path (S) being the sum of translating impulse response functions for aligning the image of an image capturing device with the virtual camera position (u0,v0) at a number of depths;
b. convolving the individual camera images (I1, I2, . . . ) with their respective projected path (S);
c. convolving the image of each image capturing device with a highpass filter (HP) perpendicular to the projected path;
d. adding the convolved images for producing the image data for the virtual output image (Ī0(s,t)) at the virtual camera position (u0,v0) provided by the input virtual camera position signal (C0).
-
7. Method as claimed in claim 1 wherein adding and inverse filtering comprises the method steps of:
a. generating a projected path (S) of the image shift for each image capturing device of the array, the path (S) being determined with respect to the virtual camera position (u0,v0), the projected path (S) being the sum of translating impulse response functions for aligning the image of an image capturing device with the virtual camera position (u0,v0) at a number of depths;
b. generating a convolution filter for each image by convolving its respective projected path (S) with a highpass filter (S*HP) perpendicular to it;
c. convolving the individual input images (I1(s,t), I2(s,t)) with their respective filter as generated in b;
d. adding the convolved images for producing the image data for the virtual output image (Ī0(s,t)) at the virtual camera position (u0,v0) provided by the input virtual camera position signal (C0).
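Steps b–d above — building a combined filter from the projected path and a perpendicular high-pass, convolving every image with it, then adding — might look like this in outline. The assumption that the path runs horizontally (so that the perpendicular high-pass is a 1-D vertical Laplacian), and all kernel values, are illustrative, not from the patent.

```python
import numpy as np

def conv2_fft(a, b):
    """Full linear 2-D convolution of two arrays via zero-padded FFTs."""
    h, w = a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1
    return np.fft.irfft2(np.fft.rfft2(a, (h, w)) * np.fft.rfft2(b, (h, w)), (h, w))

# Illustrative kernels: a horizontal box path over five depth-layer
# shifts, and a 1-D Laplacian high-pass perpendicular to it.
S = np.ones((1, 5)) / 5.0
HP = np.array([[-1.0], [2.0], [-1.0]])

def filter_and_add(images):
    """Build the combined filter S*HP once (step b), convolve every
    image with it (step c), and add the results (step d)."""
    SHP = conv2_fft(S, HP)
    return sum(conv2_fft(img, SHP) for img in images)
```

Precomputing S*HP per camera means each input image needs only one convolution, which is the practical advantage of this claim over filtering path and image separately.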
-
8. Method as claimed in claim 1 wherein multiple virtual viewpoints ((u0,v0)B, (u0,v0)C) are generated simultaneously from a set of images captured by an array of image capturing devices.
-
12. Method as claimed in claim 1 wherein two virtual camera positions, one for the left eye and one for the right eye, are provided so as to produce two virtual output images at the two virtual camera positions.
-
13. Computer program comprising program code means for performing a method as claimed in claim 1 when said program is run on a computer.
-
14. Computer program product comprising program code means stored on a computer readable medium for performing a method according to claim 1 when said program is run on a computer.
-
25. System as claimed in claim 1 wherein the array is subdivided into two or more subarrays.
-
26. System as claimed in claim 25, wherein the system is arranged to allocate different virtual camera positions to the subarrays.
-
27. Method as claimed in claim 1, wherein the at least one virtual camera position is automatically and dynamically determined by an eye position detection means.
-
9-11. (canceled)
-
15. System for constructing an output image from input images obtained by an array of image capturing devices, the system comprising an array of image capturing devices (C1, C2, CK, CN) for capturing images from different viewpoints, wherein the system comprises a means for capturing image data (I1(s,t), I2(s,t)) from the image capturing devices of the array, the system comprising means for adding and inverse filtering of corresponding pixel values of the individual input images based on multiple depth layers in a scene,
characterized in that the system comprises a means to provide a selected virtual camera position (u0,v0) by an input virtual camera position signal (C0) representing the virtual camera position (u0,v0), and the means for adding and inverse filtering are arranged to operate as a function of the selected virtual camera position (u0,v0) to produce the image data for the virtual output image (Ī0(s,t)) at the virtual camera position (u0,v0) provided by the input virtual camera position signal (C0), wherein the array of image capturing devices is an array of cameras around a display screen and the virtual camera position is a point at the display screen, wherein the system comprises two interacting sub-systems, each sub-system comprising an array of cameras around a display screen, and the virtual camera position at the display screen of one of the sub-systems is determined by the other sub-system by measuring the position of the eyes of a viewer of that other sub-system.
- View Dependent Claims (16, 17, 18, 19, 20, 21)
-
22-24. (canceled)
Specification