Systems and methods for 2D image and spatial data capture for 3D stereo imaging
First Claim
1. A method of converting two-dimensional (2D) images of a scene having therein at least one object to one or more three-dimensional (3D) images of the scene, comprising:
simultaneously capturing at least first and second 2D images of the scene from corresponding at least first and second cameras having respective camera positions and orientations measured relative to a reference coordinate system;
forming a disparity map from the at least first and second 2D images, wherein the disparity map has a gray scale that corresponds to distance information of the at least one object relative to the reference coordinate system; and
forming from the disparity map a 3D point cloud P(x,y,z) representative of the at least one object, wherein the point cloud is configured to support first and second virtual cameras to create a stereo camera pair arrangeable in substantially arbitrary virtual locations.
Abstract
Systems and methods for 2D image and spatial data capture for 3D stereo imaging are disclosed. The system utilizes a cinematography camera and at least one reference or “witness” camera spaced apart from the cinematography camera by a distance much greater than the interocular separation to capture 2D images over an overlapping volume associated with a scene having one or more objects. The captured image data is post-processed to create a depth map, and a point cloud is created from the depth map. The robustness of the depth map and the point cloud allows dual virtual cameras to be placed substantially arbitrarily in the resulting virtual 3D space, which greatly simplifies the addition of computer-generated graphics, animation, and other special effects in cinematographic post-processing.
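The depth-map-to-point-cloud step described in the abstract can be sketched with the standard stereo relations Z = f·B/d (depth from disparity) and X = (u − cx)·Z/f, Y = (v − cy)·Z/f (back-projection). The function name and the camera parameters (focal length `f`, baseline `B`, principal point `cx`, `cy`) are illustrative assumptions, not values or code from the patent:

```python
# Minimal sketch, assuming a rectified stereo pair and a pinhole
# camera model; parameter names are hypothetical, not from the patent.

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Back-project a disparity map (2D list, pixel units) to 3D points.

    Z = f * B / d          (depth from disparity)
    X = (u - cx) * Z / f   (back-projection)
    Y = (v - cy) * Z / f
    Pixels with zero or negative disparity (no match) are skipped.
    """
    points = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d <= 0:
                continue
            z = f * baseline / d
            points.append(((u - cx) * z / f, (v - cy) * z / f, z))
    return points
```

For example, with `f=100` (pixels), `baseline=0.5` (meters), and `cx = cy = 0.5`, a pixel at (u=1, v=0) with disparity 2 back-projects to depth Z = 100 · 0.5 / 2 = 25 m.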
21 Claims
1. A method of converting two-dimensional (2D) images of a scene having therein at least one object to one or more three-dimensional (3D) images of the scene, comprising:
simultaneously capturing at least first and second 2D images of the scene from corresponding at least first and second cameras having respective camera positions and orientations measured relative to a reference coordinate system;
forming a disparity map from the at least first and second 2D images, wherein the disparity map has a gray scale that corresponds to distance information of the at least one object relative to the reference coordinate system; and
forming from the disparity map a 3D point cloud P(x,y,z) representative of the at least one object, wherein the point cloud is configured to support first and second virtual cameras to create a stereo camera pair arrangeable in substantially arbitrary virtual locations.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
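The final step of claim 1, placing a virtual stereo pair at an arbitrary location in the reconstructed space, can be sketched by pinhole-projecting the point cloud from two virtual viewpoints separated by a chosen interocular distance. The function names and the simple translate-then-project model (cameras looking down +Z) are illustrative assumptions, not the patented implementation:

```python
# A sketch only: assumes virtual cameras look down +Z and differ from
# the reference frame by pure translation; names are hypothetical.

def project(points, cam_pos, f):
    """Pinhole-project 3D points as seen from a camera at cam_pos
    (tx, ty, tz), looking down +Z with focal length f."""
    image = []
    for x, y, z in points:
        zc = z - cam_pos[2]
        if zc <= 0:          # point behind the virtual camera
            continue
        image.append((f * (x - cam_pos[0]) / zc,
                      f * (y - cam_pos[1]) / zc))
    return image

def virtual_stereo_pair(points, center, interocular, f):
    """Render left/right views about a chosen center with a chosen
    interocular separation -- the 'arbitrary virtual location'."""
    cx, cy, cz = center
    half = interocular / 2.0
    left = project(points, (cx - half, cy, cz), f)
    right = project(points, (cx + half, cy, cz), f)
    return left, right
```

Because the stereo pair is synthesized from the cloud rather than fixed at capture time, the interocular separation and camera placement remain adjustable in post-production, which is the flexibility the claim describes.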
15. A method of forming a distance representation of a scene from two-dimensional (2D) images of the scene, comprising:
simultaneously capturing at least first and second 2D images of the scene from corresponding at least first and second cameras having respective camera positions and orientations measured relative to a reference coordinate system;
defining one or more regions of interest in the at least first and second 2D images;
associating differences between pixels in the at least first and second cameras with distances from a reference point; and
assigning different gray-scale intensities to different ones of the distances.
- View Dependent Claims (16, 17, 18, 19, 20, 21)
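The last step of claim 15, assigning different gray-scale intensities to different distances, can be sketched as a linear mapping over a working depth range. The near-bright/far-dark convention, the 8-bit output range, and the clamping behavior are illustrative assumptions; the claim itself only requires that distinct distances map to distinct intensities:

```python
# Sketch under assumed conventions: linear mapping, 8-bit output,
# near objects bright (255), far objects dark (0), out-of-range clamped.

def distances_to_gray(distances, near, far):
    """Map distances in [near, far] to 8-bit gray-scale intensities."""
    span = far - near
    gray = []
    for d in distances:
        t = min(max((d - near) / span, 0.0), 1.0)  # normalize, clamp
        gray.append(round(255 * (1.0 - t)))
    return gray
```

For instance, with a working range of 1 m to 10 m, an object at the near limit encodes as 255 and one at the far limit as 0, reproducing the gray-scale depth-map encoding the disparity map of claim 1 also relies on.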
Specification