Spatio-temporal light field cameras
First Claim
1. A method comprising:
with a number of participants, each having a respective mobile device with an embedded digital camera to capture 2D image data that represents a one-to-one correspondence in light originating from a point in a viewed scene, each embedded digital camera having:
a micro photo-detector array device to capture 2D image data and mounted in the digital camera to be temporally angularly articulated about two orthogonal axes parallel to a plane of a light detection surface of the micro photo-detector array device and at least through maximum articulation angles with a periodicity selected to enable temporal coverage of the maximum articulation angles within an image frame capture duration;
augmenting the 2D image data of the viewed scene captured by each participant's embedded digital camera with location, orientation of the mobile device as augmented with instantaneous values of the articulation angles at the time of capture, and time of capture of the respective 2D image data of the viewed scene; and
transforming the augmented 2D image data of the viewed scene captured by each participant's embedded digital camera to computationally fuse the augmented 2D image data into a single light field data set that represents a collective light field captured by all participants' embedded digital cameras that captured a partial 2D perspective of the viewed scene.
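The augmenting step above attaches the device pose (location and orientation, corrected by the instantaneous articulation angles) and the time of capture to each 2D frame. A minimal sketch of such an augmented record, with hypothetical field names not taken from the patent:

```python
from dataclasses import dataclass, field
import time

@dataclass
class AugmentedFrame:
    """One participant's 2D capture plus the metadata named in the claim.

    All field names are illustrative, not from the patent text.
    """
    pixels: list        # 2D image data from the embedded camera
    location: tuple     # (lat, lon, alt) of the mobile device
    orientation: tuple  # device orientation, e.g. (yaw, pitch, roll) in degrees
    articulation: tuple # instantaneous detector articulation angles (ax, ay)
    timestamp: float = field(default_factory=time.time)

def augment(pixels, location, orientation, articulation):
    """Attach pose and time-of-capture metadata to raw 2D image data."""
    return AugmentedFrame(pixels, location, orientation, articulation)

frame = augment([[0, 255], [255, 0]], (37.77, -122.42, 12.0),
                (0.0, -5.0, 0.0), (1.5, -0.75))
assert frame.articulation == (1.5, -0.75)
```

Records of this shape are what the later transforming step would fuse into a single light field data set.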
Abstract
Spatio-temporal light field cameras that can be used to capture the light field within their spatio-temporally extended angular extent. Such cameras can be used to record 3D images, 2D images that can be computationally focused, or wide-angle panoramic 2D images with relatively high spatial and directional resolutions. The light field cameras can also be used as 2D/3D switchable cameras with extended angular extent. The spatio-temporal aspects of the novel light field cameras allow them to capture and digitally record the intensity and color from multiple directional views within a wide angle. The inherent volumetric compactness of the light field cameras makes it possible to embed them in small mobile devices to capture either 3D images or computationally focusable 2D images. The inherent versatility of these light field cameras makes them suitable for multiple-perspective light field capture for 3D movie and video recording applications.
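The "spatio-temporal" aspect rests on the claims' requirement that the detector's articulation periodicity cover the maximum articulation angles within one image frame capture duration. As a hedged back-of-the-envelope sketch (the patent does not specify a drive waveform or these function names), the minimum angular sweep rate implied by that requirement is:

```python
def min_sweep_rate(max_angle_deg: float, frame_duration_s: float) -> float:
    """Angular rate (deg/s) needed so one full sweep from -max_angle to
    +max_angle completes within a single image frame capture duration.
    Illustrative linear-sweep assumption only."""
    sweep_range = 2.0 * max_angle_deg  # covers -max_angle .. +max_angle
    return sweep_range / frame_duration_s

# e.g. +/-15 degrees of articulation covered within a 1/30 s video frame
rate = min_sweep_rate(15.0, 1.0 / 30.0)
assert abs(rate - 900.0) < 1e-9  # 30 degrees per 1/30 s = 900 deg/s
```

The point of the calculation is only that the articulation must be fast relative to the frame rate, so that every frame sees the full angular extent.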
134 Citations
9 Claims
1. A method comprising: (independent claim, set forth in full under First Claim above) - View Dependent Claims (2)
3. A method comprising:
with a number of participants, each having a respective mobile device with an embedded spatio-temporal light field camera to capture light field data of a viewed scene, each embedded spatio-temporal light field camera having:
a two dimensional photo-detector array of pixels to capture 2D image data, subdivided into two dimensional groups of pixels with a micro lens array of micro lens elements, each micro lens element of the micro lens array being associated and aligned relative to a respective group of pixels, with each micro lens element optically mapping light that impinges an aperture of the respective micro lens element from each of a discrete set of directions within a light field, as defined by an angular extent of the respective micro lens element, onto a respective pixel in the respective group of pixels, the discrete set of directions defining an angular resolution between adjacent directions and an angular extent of the discrete set of directions;
the two dimensional photo-detector array of pixels and the micro lens array being assembled as a single assembly and mounted in the spatio-temporal light field camera to be temporally angularly articulated about two orthogonal axes parallel to a plane of a light detection surface of the micro photo-detector array device and at least through maximum articulation angles, the temporal angular articulation having a periodicity selected to enable temporal coverage of the maximum articulation angles within an image frame capture duration;
augmenting the light field data of the viewed scene captured by each participant's embedded spatio-temporal light field camera with location, orientation of the mobile device as augmented with instantaneous values of the articulation angles at the time of capture, and time of capture of the respective light field data of the viewed scene; and
transforming the augmented light field data of the viewed scene captured by each participant's embedded spatio-temporal light field camera to computationally fuse the augmented light field data into a single light field data set that represents a collective light field captured by all participants' embedded spatio-temporal light field cameras that captured a partial light field perspective of the viewed scene. - View Dependent Claims (4, 5, 6, 7, 8)
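Claim 3's micro lens geometry maps each discrete incoming direction onto one pixel of the group under that lens, with the angular resolution set by the lens's angular extent divided across the group. A sketch of that mapping under an assumed square group and linear angle-to-pixel geometry (the patent fixes neither the names nor this exact formula):

```python
def direction_to_pixel(angle_x_deg, angle_y_deg, angular_extent_deg, group_size):
    """Map an incoming ray direction (relative to the micro lens axis) to the
    (row, col) pixel inside the lens's group of pixels.

    angular_extent_deg : full angular extent accepted by the micro lens
    group_size         : pixels per side of the square group under one lens
    """
    half = angular_extent_deg / 2.0
    # angular resolution between adjacent directions, per the claim language
    resolution = angular_extent_deg / group_size
    col = int((angle_x_deg + half) / resolution)
    row = int((angle_y_deg + half) / resolution)
    # clamp boundary directions onto the outermost pixels
    col = min(max(col, 0), group_size - 1)
    row = min(max(row, 0), group_size - 1)
    return row, col

# a 4x4 pixel group under a lens with a 40-degree extent: 10 degrees per pixel
assert direction_to_pixel(0.0, 0.0, 40.0, 4) == (2, 2)
assert direction_to_pixel(-20.0, 19.9, 40.0, 4) == (3, 0)
```

With this layout, reading out one group of pixels recovers the directional samples for one spatial position, which is the per-lens light field structure the claim describes.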
9. A method comprising:
with a number of participants, each having a respective mobile device with an embedded digital camera to capture 2D image data that represents a one-to-one correspondence in light originating from a point in a viewed scene, each embedded spatio-temporal light field camera having:
a micro photo-detector array device having a light detection surface defining a two dimensional array of pixels, each pixel in the two dimensional array of pixels being a light detector that is individually addressable to output an electrical signal responsive to an intensity of light coupled into an aperture of the respective pixel, the two dimensional array of pixels being subdivided into two dimensional groups of pixels; and
a micro lens array of micro lens elements, the micro photo-detector array device and the micro lens array being assembled together as a single assembly, each micro lens element of the micro lens array being associated and aligned relative to a respective group of pixels, with each micro lens element optically mapping light that impinges an aperture of the respective micro lens element from each of a discrete set of directions within a light field, as defined by an angular extent of the respective micro lens element, onto a respective pixel in the respective group of pixels, the discrete set of directions defining an angular resolution between adjacent directions and an angular extent of the discrete set of directions;
the micro photo-detector array device and the micro lens array being mounted to be temporally angularly articulated about two orthogonal axes parallel to a plane of a light detection surface of the micro photo-detector array device and at least through a maximum articulation angle, the temporal angular articulation having a periodicity selected to enable temporal coverage of the maximum articulation angle within an image frame capture duration;
augmenting the 2D image data of the viewed scene captured by each participant's embedded digital camera with location, orientation of the mobile device as augmented with instantaneous values of the articulation angles at the time of capture, and time of capture of the respective 2D image data of the viewed scene; and
transforming the augmented 2D image data of the viewed scene captured by each participant's embedded digital camera to computationally fuse the augmented 2D image data into a single light field data set that represents a collective light field captured by all participants' embedded digital cameras that captured a partial 2D perspective of the viewed scene;
wherein the computational fusing comprises exchanging the augmented 2D image data between respective other mobile devices to transfer the exchanged 2D image data from a coordinate of a set of respective embedded cameras that captured it to a set of viewing scene coordinates used as common coordinates.
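Claim 9's fusing step transfers exchanged data from each camera's own coordinates into common viewing-scene coordinates. A simplified 2D sketch of that transfer, rotating by the device yaw plus the instantaneous articulation angle and translating by the device location (a real fusion would use full 3D rotations; all names here are illustrative, not the patent's):

```python
import math

def camera_to_scene(point_cam, device_yaw_deg, articulation_deg, location):
    """Transform a point from one camera's coordinates into common
    viewing-scene coordinates, reduced to the (x, z) plane."""
    theta = math.radians(device_yaw_deg + articulation_deg)
    x, z = point_cam
    xs = x * math.cos(theta) - z * math.sin(theta) + location[0]
    zs = x * math.sin(theta) + z * math.cos(theta) + location[1]
    return xs, zs

def fuse(exchanged):
    """Fuse augmented data exchanged between devices into one data set
    keyed by common scene coordinates."""
    light_field = {}
    for point_cam, yaw, articulation, loc, sample in exchanged:
        light_field[camera_to_scene(point_cam, yaw, articulation, loc)] = sample
    return light_field

# a 90-degree device yaw maps the camera x-axis onto the scene z-axis
xs, zs = camera_to_scene((1.0, 0.0), 90.0, 0.0, (0.0, 0.0))
assert abs(xs) < 1e-9 and abs(zs - 1.0) < 1e-9
```

Because every participant's data ends up keyed to the same scene coordinates, the partial 2D perspectives combine into the single collective light field data set the claim recites.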
Specification