Glancing angle exclusion
First Claim
1. A computer-implemented process for creating a synthetic video from images captured from an array of cameras, comprising the process actions of:
(a) capturing images of a scene using the array of cameras arranged in three dimensional (3D) space relative to the scene;
(b) estimating camera data and 3D geometric information that describes objects in the captured scene both spatially and temporally;
(c) generating a set of geometric proxies which describe objects in the scene as a function of time using the extracted camera and 3D geometric data;
(d) determining silhouette boundaries of the geometric proxies in the captured images of the scene;
(e) applying projective texture from the captured images to the geometric proxies while masking the projective texture which exceeds the boundaries of the silhouette by using depth map discontinuities, comprising:
for each pixel of the projective texture within a first number of pixels from a depth map discontinuity, determining if the pixel is on the near side or the far side of the discontinuity,
if the pixel is on the near side, not using the pixel for rendering if the pixel is within a second number of pixels from the discontinuity, and
if the pixel is on the far side, not using the pixel for rendering if the pixel is within a third number of pixels from the discontinuity.
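As a rough illustration of step (e), the sketch below samples projective texture for proxy vertices from one captured image while skipping any texel flagged by an exclusion mask derived from depth map discontinuities (a mask sketch appears after claim 15 below). The function and variable names are illustrative rather than taken from the patent, a 3x4 camera projection matrix is assumed, and visibility testing is omitted for brevity.

import numpy as np

def sample_projective_texture(vertices, P, image, exclude_mask):
    # vertices: (N, 3) world-space proxy points
    # P: 3x4 projection matrix of the camera that captured `image`
    # image: (H, W, 3) captured RGB frame used as projective texture
    # exclude_mask: (H, W) bool, True where the texture must not be used
    homog = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coordinates
    proj = homog @ P.T
    uv = proj[:, :2] / proj[:, 2:3]            # image-plane coordinates (points behind the camera not handled)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    h, w = exclude_mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)            # projections that land in the image
    ok = inside.copy()
    ok[inside] = ~exclude_mask[v[inside], u[inside]]            # drop texels near a depth discontinuity

    colors = np.zeros((len(vertices), 3), dtype=image.dtype)
    colors[ok] = image[v[ok], u[ok]]                            # nearest-neighbor texture sampling
    return colors, ok                                           # per-vertex color and validity flag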
Abstract
The glancing angle exclusion technique described herein selectively limits projective texturing near depth map discontinuities. A depth discontinuity is defined by a jump between a near-depth surface and a far-depth surface. The claimed technique can limit projective texturing on the near and far surfaces to different degrees; for example, it can limit far-depth projective texturing within a certain distance of a depth discontinuity while leaving near-depth projective texturing unrestricted.
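A minimal sketch of that asymmetric rule, assuming that for each candidate texel we already know its distance in pixels to the closest depth discontinuity and whether it lies on the near or the far surface; the band widths are illustrative placeholders, not values from the patent.

def use_texel(dist_to_discontinuity, on_near_side, near_band=1, far_band=4):
    # dist_to_discontinuity: distance in pixels to the closest depth discontinuity
    # on_near_side: True if the texel's depth belongs to the nearer surface
    # A far-side texel is rejected within a wider band than a near-side texel,
    # which is how far-depth texturing can be suppressed near an edge while
    # near-depth texturing is left largely untouched.
    band = near_band if on_near_side else far_band
    return dist_to_discontinuity > band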
20 Claims
1. A computer-implemented process for creating a synthetic video from images captured from an array of cameras, comprising the process actions of:
(a) capturing images of a scene using the array of cameras arranged in three dimensional (3D) space relative to the scene;
(b) estimating camera data and 3D geometric information that describes objects in the captured scene both spatially and temporally;
(c) generating a set of geometric proxies which describe objects in the scene as a function of time using the extracted camera and 3D geometric data;
(d) determining silhouette boundaries of the geometric proxies in the captured images of the scene;
(e) applying projective texture from the captured images to the geometric proxies while masking the projective texture which exceeds the boundaries of the silhouette by using depth map discontinuities, comprising:
for each pixel of the projective texture within a first number of pixels from a depth map discontinuity, determining if the pixel is on the near side or the far side of the discontinuity,
if the pixel is on the near side, not using the pixel for rendering if the pixel is within a second number of pixels from the discontinuity, and
if the pixel is on the far side, not using the pixel for rendering if the pixel is within a third number of pixels from the discontinuity.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
10. A computer-implemented process for generating a 3D spatial video, comprising:
capturing images of a scene using an array of sensors arranged in three dimensional (3D) space relative to the scene, wherein the sensors capture intensity data and depth data of the scene;
synthesizing a three dimensional video frame of the scene, comprising:
creating a geometric proxy of at least one object in the scene using estimated sensor geometry and the intensity data and depth data;
applying projective texturing to the geometric proxy of the at least one object by using one or more depth map discontinuities, comprising examining pixels of texture data within a first prescribed number of pixels from a depth map discontinuity, and not using pixels within a second prescribed number of pixels from a near side of the discontinuity and a third prescribed number of pixels from a far side of the discontinuity for applying the projective texturing.
View Dependent Claims (11, 12, 13, 14)
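Claim 10 builds the geometric proxy from the sensors' intensity and depth data together with estimated sensor geometry. One common way to do that, sketched below under the assumption of a pinhole model with intrinsics K and a camera-to-world pose (R, t), is to back-project each depth pixel into a world-space point cloud; the names are illustrative, and the claim does not prescribe this particular proxy representation.

import numpy as np

def depth_to_proxy_points(depth, K, R, t):
    # depth: (H, W) metric depth map from one sensor
    # K: 3x3 intrinsics; R, t: camera-to-world rotation and translation
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T          # back-projected camera-space rays
    pts_cam = rays * depth.reshape(-1, 1)    # scale each ray by its depth
    return pts_cam @ R.T + t                 # transform into world space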
15. A system for generating a 3D spatial video, comprising:
a computing device;
a computer program comprising program modules executable by the general purpose computing device, wherein the computing device is directed by the program modules of the computer program to:
(a) input captured depth images and corresponding RGB images of a scene that were captured using an array of cameras arranged in three dimensional (3D) space relative to the scene;
(b) generating a set of geometric proxies which describe objects in the scene;
(c) projecting the geometric proxies onto a depth map corresponding to each RGB image;
(d) running an edge filter on the depth map to find edges of the geometric proxies in the depth map;
(e) computing a projective texture mask to use when applying a projective texture to the geometric proxies by locating depth map discontinuities and determining a distance in pixels from the discontinuities as the edges of the mask, wherein in the mask any large depth pixels of the projective texture that are within a variable number of pixels from an edge shared with a small depth pixel are not used when applying the projective textures to the geometric proxies;
(f) applying projective texture to the geometric proxies while avoiding applying projective texture to the boundaries of the geometric proxies by using the projective texture mask.
View Dependent Claims (16, 17, 18, 19, 20)
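The following is a rough sketch of elements (d) and (e), assuming the depth map is already aligned with the RGB image: a Sobel filter marks depth discontinuities, a distance transform measures how far each texel is from the nearest discontinuity, and texels are excluded using a wider band on the far (large-depth) side than on the near (small-depth) side, consistent with the asymmetric rule in the abstract. The jump threshold, band widths, and the near/far classification via a local mean depth are all illustrative choices, not details from the claim.

import numpy as np
from scipy import ndimage

def projective_texture_mask(depth, jump_thresh=0.1, near_band=1, far_band=4):
    # depth: (H, W) depth map corresponding to one RGB image
    # Returns a bool mask, True where the projective texture should NOT be applied.
    gy = ndimage.sobel(depth, axis=0)                 # edge filter on the depth map
    gx = ndimage.sobel(depth, axis=1)
    edges = np.hypot(gx, gy) > jump_thresh            # depth discontinuities

    dist = ndimage.distance_transform_edt(~edges)     # pixels to the nearest discontinuity

    # Crude near/far classification: texels deeper than their local mean depth
    # are treated as belonging to the far surface.
    local_mean = ndimage.uniform_filter(depth, size=2 * far_band + 1)
    far_side = depth > local_mean

    # Exclude far-side texels within far_band pixels of a discontinuity and
    # near-side texels within near_band pixels.
    return np.where(far_side, dist <= far_band, dist <= near_band)

A renderer would then pass this mask to the texture-sampling step, as in the sketch following claim 1 above.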