Methods and systems for rendering virtual reality content based on two-dimensional (“2D”) captured imagery of a three-dimensional (“3D”) scene
Abstract
An exemplary method includes a virtual reality content rendering system receiving two-dimensional (“2D”) color data and depth data captured by a plurality of capture devices disposed at different vantage points in relation to a three-dimensional (“3D”) scene, receiving metadata, generating, for each vantage point associated with each respective capture device included in the plurality of capture devices, and based on the metadata and the depth data, a partial 3D mesh projected into a virtual 3D space to produce a partial representation of the 3D scene in the virtual 3D space, and generating, based on the partial 3D meshes projected into the virtual 3D space, and from an arbitrary viewpoint within the virtual 3D space, an image view of the virtual 3D space. The generating of the image view may comprise accumulating the partial 3D meshes projected into the virtual 3D space.
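The abstract's first generating step, producing a partial 3D mesh for one vantage point from depth data and metadata, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the patent's implementation: it assumes the metadata supplies pinhole camera intrinsics (`fx`, `fy`, `cx`, `cy`) for the capture device, unprojects each depth pixel into camera-space 3D points, and triangulates the pixel grid into the mesh connectivity.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map (e.g. meters) to (H, W, 3) camera-space
    points using pinhole intrinsics taken from the capture metadata."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    xs = (us - cx) * depth / fx
    ys = (vs - cy) * depth / fy
    return np.stack([xs, ys, depth], axis=-1)

def grid_faces(h, w):
    """Connect the H x W vertex grid into triangles (two per pixel cell),
    giving the partial mesh for this one vantage point."""
    idx = np.arange(h * w).reshape(h, w)
    tl = idx[:-1, :-1].ravel()  # top-left corner of each cell
    tr = idx[:-1, 1:].ravel()
    bl = idx[1:, :-1].ravel()
    br = idx[1:, 1:].ravel()
    return np.concatenate([np.stack([tl, bl, tr], axis=1),
                           np.stack([tr, bl, br], axis=1)])
```

Repeating this per capture device, and transforming each point set by that device's pose from the metadata, yields the per-vantage-point partial meshes that the later steps project into the shared virtual 3D space.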
17 Claims
1. A method comprising:

receiving, by a virtual reality content rendering system, two-dimensional (“2D”) color data and depth data captured by a plurality of capture devices disposed at different vantage points in relation to a three-dimensional (“3D”) scene;

receiving, by the virtual reality content rendering system, metadata for the 2D color data and the depth data;

generating, by the virtual reality content rendering system, for each vantage point associated with each respective capture device included in the plurality of capture devices, and based on the metadata and the depth data, a partial 3D mesh projected into a virtual 3D space to produce a partial representation of the 3D scene in the virtual 3D space; and

generating, by the virtual reality content rendering system based on the partial 3D meshes projected into the virtual 3D space, and from an arbitrary viewpoint within the virtual 3D space, an image view of the virtual 3D space, the generating of the image view comprising accumulating the partial 3D meshes projected into the virtual 3D space, wherein the accumulating of the partial 3D meshes projected into the virtual 3D space comprises accumulating color samples for the partial 3D meshes in a frame buffer of a graphics processing unit (“GPU”), and additively blending, based on the 2D color data, the color samples for overlapping sections of the partial 3D meshes in the frame buffer of the GPU to form the image view of the virtual 3D space, wherein the additively blending of the color samples for the overlapping sections of the partial 3D meshes comprises writing each color sample to the frame buffer when that color sample is sampled and selected to be written to the frame buffer, and additively blending each color sample with any previously written color samples in response to each color sample being written to the frame buffer.

Dependent claims: 2, 3, 4, 5, 6, 7
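The accumulation and additive-blending step recited in claim 1 can be sketched as follows. This is a hypothetical CPU emulation, not the patent's GPU implementation: each color sample from the rasterized partial meshes is written into the frame buffer and, upon being written, additively blended (here alongside an accumulated weight) with samples that overlapping meshes already wrote to the same pixel; a final divide recovers the blended color.

```python
import numpy as np

def accumulate(h, w, samples):
    """samples: iterable of (row, col, rgb, weight) color samples produced
    by rasterizing the partial 3D meshes toward the arbitrary viewpoint."""
    color = np.zeros((h, w, 3))
    weight = np.zeros((h, w))
    for r, c, rgb, wgt in samples:
        # Additive blend on write: add to whatever earlier meshes deposited.
        color[r, c] += np.asarray(rgb, dtype=float) * wgt
        weight[r, c] += wgt
    out = np.zeros_like(color)
    covered = weight > 0
    out[covered] = color[covered] / weight[covered, None]  # normalize blend
    return out
```

On an actual GPU this additive write would typically be configured through the blend state (e.g. one-plus-one additive blending into the frame buffer) rather than performed in a loop, but the arithmetic is the same.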
8. A non-transitory computer-readable medium storing instructions that, when executed, direct at least one processor of a computing device to:

receive two-dimensional (“2D”) color data and depth data captured by a plurality of capture devices disposed at different vantage points in relation to a three-dimensional (“3D”) scene,

receive metadata for the 2D color data and the depth data,

generate, for each vantage point associated with each respective capture device included in the plurality of capture devices, and based on the metadata and the depth data, a partial 3D mesh projected into a virtual 3D space to produce a partial representation of the 3D scene in the virtual 3D space, and

generate, based on the partial 3D meshes projected into the virtual 3D space, and from an arbitrary viewpoint within the virtual 3D space, an image view of the virtual 3D space, the generating of the image view comprising accumulating color samples for the partial 3D meshes in a frame buffer of a graphics processing unit (“GPU”), and additively blending, based on the 2D color data, the color samples for overlapping sections of the partial 3D meshes in the frame buffer of the GPU to form the image view of the virtual 3D space, wherein the additively blending of the color samples for the overlapping sections of the partial 3D meshes comprises writing each color sample to the frame buffer when that color sample is sampled and selected to be written to the frame buffer, and additively blending each color sample with any previously written color samples in response to each color sample being written to the frame buffer.

Dependent claims: 9, 10, 11, 12
13. A system comprising:

at least one computer processor; and

a virtual reality rendering facility that directs the at least one computer processor to:

receive two-dimensional (“2D”) color data and depth data captured by a plurality of capture devices disposed at different vantage points in relation to a three-dimensional (“3D”) scene,

receive metadata for the 2D color data and the depth data,

generate, for each vantage point associated with each respective capture device included in the plurality of capture devices, and based on the metadata and the depth data, a partial 3D mesh projected into a virtual 3D space to produce a partial representation of the 3D scene in the virtual 3D space, and

generate, based on the partial 3D meshes projected into the virtual 3D space, and from an arbitrary viewpoint within the virtual 3D space, an image view of the virtual 3D space, the generating of the image view comprising accumulating color samples for the partial 3D meshes in a frame buffer of a graphics processing unit (“GPU”), and additively blending, based on the 2D color data, the color samples for overlapping sections of the partial 3D meshes in the frame buffer of the GPU to form the image view of the virtual 3D space, wherein the additively blending of the color samples for the overlapping sections of the partial 3D meshes comprises writing each color sample to the frame buffer when that color sample is sampled and selected to be written to the frame buffer, and additively blending each color sample with any previously written color samples in response to each color sample being written to the frame buffer.

Dependent claims: 14, 15, 16, 17
Specification