Dynamic POV composite 3D video system
Abstract
Systems and techniques are disclosed for visually rendering a requested scene based on a virtual camera perspective request and a projection of two or more video streams. The video streams can be captured using two-dimensional cameras or three-dimensional depth cameras, and may capture the scene from different perspectives. The projection may be an internal projection that maps out the scene in three dimensions based on the two or more video streams. An object internal or external to the scene may be identified, and the scene may be visually rendered based on a property of the object. For example, a scene may be visually rendered based on where a mobile object is located within the scene.
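The rendering step the abstract describes, projecting a merged three-dimensional representation of the scene into a requested virtual camera perspective, can be sketched in Python. This is a minimal illustration under stated assumptions, not the patented implementation: the function name, the simple pinhole projection, and the per-pixel nearest-point (z-buffer) rule are all choices made for the sketch.

```python
import numpy as np

def render_virtual_view(points, colors, pose, fx, fy, cx, cy, size):
    """Project a merged point cloud into a hypothetical virtual camera.

    points : (N, 3) world-space point positions
    colors : (N, 3) per-point colors
    pose   : (R, t) rotation matrix and translation of the virtual camera
    fx, fy, cx, cy : assumed pinhole intrinsics of the virtual camera
    size   : (height, width) of the output image
    """
    R, t = pose
    cam = points @ R.T + t                 # world -> virtual-camera frame
    h, w = size
    image = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)         # nearest-point depth per pixel
    for p, c in zip(cam, colors):
        if p[2] <= 0:                      # skip points behind the camera
            continue
        u = int(round(fx * p[0] / p[2] + cx))
        v = int(round(fy * p[1] / p[2] + cy))
        if 0 <= u < w and 0 <= v < h and p[2] < zbuf[v, u]:
            zbuf[v, u] = p[2]              # nearest point wins the pixel
            image[v, u] = c
    return image
```

A production system would instead rasterize or splat the projection and fill holes between points; the loop above only shows the geometric idea of re-viewing the same 3-D data from an arbitrary requested position.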
31 Claims
1. A computer-implemented method comprising:
obtaining (i) a first image of a scene as viewed by a first depth camera from a first perspective, and (ii) a second image of the scene as viewed by a second depth camera from a different, second perspective;
generating, based on the first image that corresponds to the scene as viewed by the first depth camera from the first perspective, first data that references (i) a three-dimensional coordinate position associated with each point in the first image, (ii) a brightness characteristic associated with each point in the first image, and (iii) a color characteristic associated with each point in the first image;
generating, based on the second image that corresponds to the scene as viewed by the second depth camera from the second perspective, second data that references (i) a three-dimensional coordinate position associated with each point in the second image, (ii) a brightness characteristic associated with each point in the second image, and (iii) a color characteristic associated with each point in the second image;
generating, based at least on the first data and the second data, a three-dimensional projection of the scene;
determining a different, third perspective of the scene as the scene would be viewed from a particular position in three-dimensional space;
generating, based at least on the generated three-dimensional projection of the scene, a virtual image of the scene as the scene would be viewed from the third perspective; and
providing the virtual image of the scene for output.
View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
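The two "generating" steps of claim 1 amount to turning each depth image into per-point records of three-dimensional position, brightness, and color. A minimal Python sketch, assuming a pinhole camera model for the back-projection and Rec. 601 luma as the brightness characteristic; the function name and intrinsic parameters are illustrative, not from the patent:

```python
import numpy as np

def image_to_point_data(depth, rgb, fx, fy, cx, cy):
    """For every pixel, derive (i) a 3-D coordinate position,
    (ii) a brightness characteristic, and (iii) a color characteristic,
    mirroring the "first data"/"second data" of the claim.
    Assumes pinhole intrinsics fx, fy (focal lengths) and cx, cy
    (principal point), with depth in camera-frame units."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx        # back-project pixel columns
    y = (vs - cy) * depth / fy        # back-project pixel rows
    xyz = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    color = rgb.reshape(-1, 3).astype(float)
    # Rec. 601 luma as a simple per-point brightness characteristic
    brightness = color @ np.array([0.299, 0.587, 0.114])
    return xyz, brightness, color
```

Running this once per depth camera yields the first and second data of the claim; concatenating the resulting point sets (after transforming them into a common world frame) gives the input for the claimed three-dimensional projection of the scene.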
13. A system comprising:
one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
obtaining (i) a first image of a scene as viewed by a first depth camera from a first perspective, and (ii) a second image of the scene as viewed by a second depth camera from a different, second perspective;
generating, based on the first image that corresponds to the scene as viewed by the first depth camera from the first perspective, first data that references (i) a three-dimensional coordinate position associated with each point in the first image, (ii) a brightness characteristic associated with each point in the first image, and (iii) a color characteristic associated with each point in the first image;
generating, based on the second image that corresponds to the scene as viewed by the second depth camera from the second perspective, second data that references (i) a three-dimensional coordinate position associated with each point in the second image, (ii) a brightness characteristic associated with each point in the second image, and (iii) a color characteristic associated with each point in the second image;
generating, based at least on the first data and the second data, a three-dimensional projection of the scene;
determining a different, third perspective of the scene as the scene would be viewed from a particular position in three-dimensional space;
generating, based at least on the generated three-dimensional projection of the scene, a virtual image of the scene as the scene would be viewed from the third perspective; and
providing the virtual image of the scene for output.
View Dependent Claims: 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24
25. A non-transitory computer-readable storage device having instructions stored thereon that, when executed by a computing device, cause the computing device to perform operations comprising:
obtaining (i) a first image of a scene as viewed by a first depth camera from a first perspective, and (ii) a second image of the scene as viewed by a second depth camera from a different, second perspective;
generating, based on the first image that corresponds to the scene as viewed by the first depth camera from the first perspective, first data that references (i) a three-dimensional coordinate position associated with each point in the first image, (ii) a brightness characteristic associated with each point in the first image, and (iii) a color characteristic associated with each point in the first image;
generating, based on the second image that corresponds to the scene as viewed by the second depth camera from the second perspective, second data that references (i) a three-dimensional coordinate position associated with each point in the second image, (ii) a brightness characteristic associated with each point in the second image, and (iii) a color characteristic associated with each point in the second image;
generating, based at least on the first data and the second data, a three-dimensional projection of the scene;
determining a different, third perspective of the scene as the scene would be viewed from a particular position in three-dimensional space;
generating, based at least on the generated three-dimensional projection of the scene, a virtual image of the scene as the scene would be viewed from the third perspective; and
providing the virtual image of the scene for output.
View Dependent Claims: 26, 27, 28, 29, 30, 31
Specification