Systems and methods for generating 360 degree mixed reality environments
First Claim
1. A system for visualizing controllable virtual 3D objects within a mixed reality application using real-world video captured from a plurality of cameras, comprising:
a processor;
memory including a mixed reality application; and
wherein the mixed reality application directs the processor to:
obtain a plurality of real-world videos captured by one or more cameras, each real-world video capturing a different portion of a surrounding real-world environment as the one or more cameras move through the environment;
for each real-world video, extract information comprising camera movement coordinates information, path coordinates information, and point cloud coordinates information, including a depth of objects shown in the real-world video, and translate the information into three dimensional (3D) coordinates;
for each real-world video, generate a 3D mixed reality environment comprising a plurality of separate, synched layers that includes (1) the real-world video as a background layer of the 3D mixed reality environment, (2) an occlusion layer that includes one or more transparent 3D objects that replicate real-world objects, including movement and rotation of the real-world objects, within the real-world video, and (3) one or more virtual synthetic objects, wherein the virtual synthetic objects interact with the transparent 3D objects based on 3D space locations of the objects and wherein the occlusion layer is used as a guide for the virtual synthetic objects to appear to move within the same environment as the real-world objects and to hide any virtual synthetic object that appears behind a transparent 3D object based on the depth information extracted from the real-world video; and
combine at least one 3D mixed reality environment generated for a real-world video with a different 3D mixed reality environment generated for a different real-world video to provide a 3D mixed reality environment that replicates a larger portion of the surrounding real-world environment for use by the mixed reality application, wherein frames of the background layer of each real-world video are combined using the camera movement coordinates information of each real-world video.
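The occlusion limitation recited above (hiding any virtual synthetic object that falls behind a transparent 3D occluder, based on depth extracted from the real-world video) can be illustrated with a minimal per-pixel compositing sketch. This is an illustration only, not the claimed implementation: all function and parameter names are hypothetical, and the occluder depth map is assumed to have been rasterized from the extracted point cloud coordinates.

```python
# Illustrative sketch of layered compositing: real-world video as background,
# a transparent occluder depth layer, and a rendered virtual object on top.
import numpy as np

def composite_frame(background, occluder_depth, virtual_rgb, virtual_depth, virtual_mask):
    """Composite one frame of the three synched layers.

    background      (H, W, 3) real-world video frame
    occluder_depth  (H, W)    depth of the transparent 3D occluders (np.inf = empty)
    virtual_rgb     (H, W, 3) rendered color of the virtual synthetic object
    virtual_depth   (H, W)    per-pixel depth of the virtual object
    virtual_mask    (H, W)    True where the virtual object covers the pixel
    """
    out = background.copy()
    # Draw the virtual object only where it lies in front of the occluder,
    # so real-world objects appear to hide virtual objects behind them.
    visible = virtual_mask & (virtual_depth < occluder_depth)
    out[visible] = virtual_rgb[visible]
    return out
```

Because the occluder pixels themselves show the background video, the occlusion layer stays invisible; only its depth influences the result.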
Abstract
Systems and methods for generating a 360 degree mixed virtual reality environment that provides a 360 degree view of an environment in accordance with embodiments of the invention are described. In a number of embodiments, the 360 degree mixed virtual reality environment is obtained by (1) combining one or more real world videos that capture images of an environment with (2) a virtual world environment that includes various synthetic objects that may be placed within the real world videos. Furthermore, the virtual objects embedded within the 360 degree mixed reality environment interact with the real world objects depicted in the real world environment to provide a realistic mixed reality experience.
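The combining step described in the abstract and in the final limitation of claim 1 relies on each clip's camera movement coordinates: once both clips' camera paths are expressed in a single shared world frame, their background layers can be stitched into a wider view of the surrounding environment. The sketch below is a hypothetical reading of that idea, assuming a rigid transform (rotation plus translation) is known between the two clips' coordinate frames; the function name and pose format are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: merge two clips' camera paths into one world frame.
import numpy as np

def combine_camera_paths(path_a, path_b, R_ab, t_ab):
    """Return a single world-frame camera path from two per-clip paths.

    path_a, path_b : (N, 3) camera positions in each clip's local frame
    R_ab, t_ab     : rotation (3, 3) and translation (3,) mapping clip B's
                     frame into clip A's frame (clip A defines the world frame)
    """
    path_b_world = path_b @ R_ab.T + t_ab  # rigid transform of clip B's path
    return np.vstack([path_a, path_b_world])
```

With both paths in one frame, each background frame can be placed at its camera's world position, which is what allows the combined environment to replicate a larger portion of the surroundings.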
18 Claims
1. (Claim 1, recited in full under "First Claim" above.) - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A non-transitory computer-readable medium containing computer-executable instructions that, when executed by a hardware processor, cause the hardware processor to perform a method for rendering a mixed reality environment, the method comprising:
obtaining a plurality of real-world videos captured by one or more cameras, each real-world video capturing a different portion of a surrounding real-world environment as the one or more cameras move through the environment;
for each real-world video, extracting information comprising camera movement coordinates information, path coordinates information, and point cloud coordinates information, including a depth of objects shown in the real-world video, and translating the information into three dimensional (3D) coordinates;
for each real-world video, generating a 3D mixed reality environment comprising a plurality of separate, synched layers that includes (1) the real-world video as a background layer of the 3D mixed reality environment, (2) an occlusion layer that includes one or more transparent 3D objects that replicate real-world objects, including movement and rotation of the real-world objects, within the real-world video, and (3) one or more virtual synthetic objects, wherein the virtual synthetic objects interact with the transparent 3D objects based on 3D space locations of the objects, and wherein the occlusion layer is used as a guide for the virtual synthetic objects to appear to move within the same environment as the real-world objects and to hide any virtual synthetic object that appears behind a transparent 3D object based on the depth information extracted from the real-world video; and
combining at least one 3D mixed reality environment generated for a real-world video with a different 3D mixed reality environment generated for a different real-world video to provide a 3D mixed reality environment that replicates a larger portion of the surrounding real-world environment for use by the mixed reality application, wherein frames of the background layer of each real-world video are combined using the camera movement coordinates information of each real-world video. - View Dependent Claims (10, 11, 12, 13, 14, 15, 16, 17, 18)
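Claim 9 recites the same "separate, synched layers" as claim 1: the background video, the occluder transforms, and the virtual objects must advance together frame by frame so the transparent occluders track the real-world objects they replicate. A minimal sketch of that synchronization, under the assumption that each layer can be represented as a per-frame sequence, might look as follows; the data structure and names are hypothetical.

```python
# Illustrative sketch: lock three layers to a single shared frame index.
from dataclasses import dataclass

@dataclass
class FrameState:
    background: str       # handle to the real-world video frame
    occluder_pose: tuple  # pose of the transparent occluder for this frame
    virtual_pose: tuple   # pose of the virtual synthetic object for this frame

def synched_frames(video_frames, occluder_poses, virtual_poses):
    """Yield per-frame states with all three layers at the same index."""
    for bg, occ, virt in zip(video_frames, occluder_poses, virtual_poses):
        yield FrameState(bg, occ, virt)
```

Keeping the layers separate but indexed together is what lets the occlusion layer act as a per-frame guide for where virtual objects may appear.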
Specification