Interactive viewpoint video employing viewpoints forming an array
2 Assignments
0 Petitions
Abstract
A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
57 Citations
20 Claims
1. A computer-implemented process for generating an interactive viewpoint video, comprising using a computer to perform the following process actions:
- inputting three or more synchronized video streams each depicting a portion of the same scene captured from different viewpoints, wherein said viewpoints form an array;
- inputting calibration data defining geometric and photometric parameters associated with each video stream; and
- for each group of contemporaneous frames from the synchronized video streams,
  - generating a 3D reconstruction of the scene,
  - using the reconstruction to compute a disparity map for each frame in the group of contemporaneous frames, and
  - for each frame in the group of contemporaneous frames,
    - identifying areas of significant depth discontinuities based on its disparity map,
    - identifying background pixel information and foreground pixel information for pixels in said areas of significant depth discontinuities,
    - generating a main layer comprising pixel information associated with areas in the frame that do not exhibit depth discontinuities exceeding a prescribed threshold and the background pixel information from areas having depth discontinuities above the threshold,
    - generating a boundary layer comprising the foreground pixel information associated with areas having depth discontinuities that exceed the threshold, to produce a layered representation for the frame under consideration, and
    - storing the main layer and boundary layer in a computer readable medium that has a physical form.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
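The layer-generation actions of claim 1 can be illustrated with a minimal sketch. NumPy is assumed, and the neighbour-difference discontinuity test, the fixed threshold, and the median-based foreground/background split are illustrative stand-ins for the matting technique, which the claim leaves unspecified:

```python
import numpy as np

def layered_representation(frame, disparity, threshold=4.0):
    """Split one frame into a main layer and a boundary layer.

    Pixels whose disparity differs from a right/down neighbour by more
    than `threshold` are treated as depth discontinuities; the
    foreground side there goes to the boundary layer, and everything
    else (including the background side) stays in the main layer.
    """
    # Local disparity variation approximates the depth gradient.
    dx = np.abs(np.diff(disparity, axis=1, prepend=disparity[:, :1]))
    dy = np.abs(np.diff(disparity, axis=0, prepend=disparity[:1, :]))
    discontinuity = np.maximum(dx, dy) > threshold

    # Illustrative foreground test: within discontinuity areas, call
    # the nearer (larger-disparity) side the foreground.
    foreground = discontinuity & (disparity > np.median(disparity))

    main_layer = np.where(foreground[..., None], 0, frame)
    boundary_layer = np.where(foreground[..., None], frame, 0)
    return main_layer, boundary_layer
```

Because the two layers partition the frame's pixels, summing them reconstructs the original frame, which is what makes the representation usable for compression and later re-rendering.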
8. A system for generating an interactive viewpoint video, comprising:
- a video capture sub-system comprising,
  - three or more video cameras for capturing multiple video streams each depicting a portion of the same scene captured from different viewpoints which form an array,
  - synchronization equipment for synchronizing the video streams to create a sequence of groups of contemporaneously captured video frames each depicting a portion of the same scene;
- one or more general purpose computing devices;
- a first computer program having program modules executable by at least one of said one or more general purpose computing devices, said modules comprising,
  - a camera calibration module for computing geometric and photometric parameters associated with each video stream; and
- a second computer program having program modules executable by at least one of said one or more general purpose computing devices, said modules comprising,
  - a 3D reconstruction module which generates a 3D reconstruction of the scene depicted in each group of contemporaneous frames from the synchronized video streams, and which uses the reconstruction to compute a disparity map for each frame in the group of contemporaneous frames,
  - a matting module which, for each frame in each group of contemporaneous frames, identifies areas of significant depth discontinuities based on the frame's disparity map, and
  - a layered representation module which, for each frame in each group of contemporaneous frames, identifies background pixel information and foreground pixel information for pixels in said areas of significant depth discontinuities, generates a main layer comprising pixel information associated with areas in the frame that do not exhibit depth discontinuities exceeding a prescribed threshold and the background pixel information from pixels in areas having depth discontinuities exceeding the threshold, and generates a boundary layer comprising the foreground pixel information associated with areas having depth discontinuities that exceed the threshold, to produce a layered representation for the frame under consideration.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
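The matting module of claim 8 must flag not just the discontinuity pixels themselves but a surrounding strip in which foreground and background colors mix. A minimal sketch of that step (NumPy assumed; the 4-neighbour disparity test and the box-dilation radius are illustrative choices, not the claimed method):

```python
import numpy as np

def discontinuity_strip(disparity, threshold=4.0, radius=2):
    """Mark a strip of pixels around significant depth discontinuities.

    A pixel is a discontinuity seed if its disparity differs from a
    right/down neighbour by more than `threshold`; the seed mask is
    then grown by `radius` pixels so that mixed foreground/background
    pixels near the edge fall inside the strip.
    """
    dx = np.abs(np.diff(disparity, axis=1, prepend=disparity[:, :1]))
    dy = np.abs(np.diff(disparity, axis=0, prepend=disparity[:1, :]))
    seeds = np.maximum(dx, dy) > threshold

    # Grow the seed mask by OR-ing shifted copies (a simple box
    # dilation via np.roll; note np.roll wraps at image borders, which
    # is acceptable for a sketch but not for production matting).
    strip = seeds.copy()
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            strip |= np.roll(np.roll(seeds, dr, axis=0), dc, axis=1)
    return strip
```

The strip is exactly the region for which the layered representation module then estimates separate foreground and background pixel information.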
15. A computer-implemented process for rendering and displaying an interactive viewpoint video from data comprising layered representations of video frames generated from sequential groups of contemporaneously input video frames each depicting a portion of the same scene and exhibiting different viewpoints which form an array, and comprising calibration data comprising geometric parameters and photometric parameters associated with the capture of each video frame, said process comprising using a computer to perform the following process actions for each frame of the interactive viewpoint video to be rendered:
- identifying a current user-specified viewpoint;
- identifying the frame or frames from a group of contemporaneously captured frames corresponding with a current temporal portion of the video being rendered that are needed to render the scene depicted therein from the identified viewpoint;
- inputting the layered representations of the identified video frame or frames; and
- rendering and displaying the frame of the interactive viewpoint video from the viewpoint currently specified by the user using the inputted layered frame representations;
wherein background pixel information and foreground pixel information have been identified for pixels in areas of each input frame that have significant depth discontinuities, and the layered representation of each input frame comprises,
- a main layer comprising pixel information associated with areas in the frame that do not exhibit depth discontinuities exceeding a prescribed threshold and the background pixel information from areas of depth discontinuities above the threshold, and
- a boundary layer comprising the foreground pixel information associated with areas having depth discontinuities that exceed the threshold.
- View Dependent Claims (16, 17, 18, 19, 20)
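The frame-selection action of claim 15 can be sketched for a one-dimensional camera array. The scalar camera parameterisation and the linear blend weight are illustrative assumptions; the claim does not prescribe how the needed frames are chosen or combined:

```python
def frames_for_viewpoint(camera_positions, viewpoint):
    """Identify the frames needed to render a user-specified viewpoint.

    Cameras are assumed to lie along a 1-D array, each described by a
    scalar position (sorted ascending). A viewpoint between two
    cameras needs both of their layered frames, each with a blend
    weight; a viewpoint coinciding with a camera needs only that
    camera's frame. Returns a list of (camera_index, weight) pairs.
    """
    for i in range(len(camera_positions) - 1):
        left, right = camera_positions[i], camera_positions[i + 1]
        if left <= viewpoint <= right:
            if viewpoint == left:
                return [(i, 1.0)]
            if viewpoint == right:
                return [(i + 1, 1.0)]
            # Weight of the left camera falls off linearly with distance.
            w = (right - viewpoint) / (right - left)
            return [(i, w), (i + 1, 1.0 - w)]
    raise ValueError("viewpoint outside the camera array")
```

Only the layered representations of the returned frames need to be decoded, which is what keeps interactive playback feasible even though many streams were captured.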
Specification