SYSTEM AND METHOD FOR GENERATING A MIXED REALITY ENVIRONMENT
First Claim
1. A method of rendering a synthetic object onto a user-worn display showing the user's view of a real world scene, the method comprising the steps of:
capturing real world scene information using one or more user-worn sensors;
producing a pose estimation data set and a depth data set based on at least a portion of the captured real world scene information;
receiving the synthetic object generated in accordance with the pose estimation data set and the depth data set; and
rendering the synthetic object onto the user-worn display in accordance with the pose estimation data set and the depth data set, thereby integrating the synthetic object into the user's view of the real world scene.
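The four claimed steps form a capture / estimate / receive / render pipeline. The sketch below is a minimal, illustrative Python rendition of that pipeline, not an implementation from the patent; all names (`PoseEstimate`, `capture_scene`, the dict keys, the averaging and min-depth logic) are hypothetical stand-ins for the claim's "pose estimation data set" and "depth data set".

```python
from dataclasses import dataclass

# Hypothetical carrier for the claim's "pose estimation data set".
@dataclass
class PoseEstimate:
    position: tuple   # (x, y, z) of the user-worn display in world coordinates
    yaw_deg: float    # heading of the display (kept constant here)

def capture_scene(sensors):
    # Step 1: capture real world scene information from user-worn sensors.
    # Each "sensor" is a callable returning a frame of readings.
    return [sensor() for sensor in sensors]

def produce_pose_and_depth(scene_info):
    # Step 2: produce a pose estimation data set and a depth data set.
    # Toy fusion: average reported positions; take per-sample minimum depth.
    positions = [frame["position"] for frame in scene_info]
    avg = tuple(sum(axis) / len(axis) for axis in zip(*positions))
    depth = [min(frame["depth"][i] for frame in scene_info)
             for i in range(len(scene_info[0]["depth"]))]
    return PoseEstimate(avg, 0.0), depth

def receive_synthetic_object(pose, depth):
    # Step 3: receive a synthetic object generated for this pose and depth
    # (stubbed locally; in the patent this comes from a separate module).
    return {"model": "marker", "anchor": pose.position, "distance": 2.0}

def render(display, obj, pose, depth):
    # Step 4: render onto the display, using the depth data to occlude the
    # object when a real surface is closer than the object.
    visible = obj["distance"] < min(depth)
    display.append((obj["model"], visible))
    return visible
```

A frame then runs the steps in claim order: `capture_scene`, `produce_pose_and_depth`, `receive_synthetic_object`, `render`.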
Abstract
A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real world scene or environment, including the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
32 Claims
1. A method of rendering a synthetic object onto a user-worn display showing the user's view of a real world scene, the method comprising the steps of:
capturing real world scene information using one or more user-worn sensors;
producing a pose estimation data set and a depth data set based on at least a portion of the captured real world scene information;
receiving the synthetic object generated in accordance with the pose estimation data set and the depth data set; and
rendering the synthetic object onto the user-worn display in accordance with the pose estimation data set and the depth data set, thereby integrating the synthetic object into the user's view of the real world scene.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
16. A method of rendering a synthetic object onto a first user-worn display showing the first user's view of a real world scene and the synthetic object onto a second user-worn display showing the second user's view of the real world scene, the method comprising the steps of:
capturing a first set of real world scene information using one or more sensors worn by the first user;
capturing a second set of real world scene information using one or more sensors worn by the second user;
producing a first pose estimation data set and a first depth data set based on at least a portion of the first set of real world scene information;
producing a second pose estimation data set and a second depth data set based on at least a portion of the second set of real world scene information;
receiving the synthetic object generated in accordance with the first pose estimation data set and the first depth data set;
receiving the synthetic object generated in accordance with the second pose estimation data set and the second depth data set;
rendering the synthetic object onto the first user-worn display in accordance with the first pose estimation data set and the first depth data set, thereby integrating the synthetic object into the first user's perception of the real world scene; and
rendering the synthetic object onto the second user-worn display in accordance with the second pose estimation data set and the second depth data set, thereby integrating the synthetic object into the second user's perception of the real world scene, wherein the synthetic object appears consistent within the first user's perception of the real world scene and the second user's perception of the real world scene.
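The key property of the two-user method above is that one world-anchored synthetic object is rendered per user from that user's own pose and depth data, yet appears consistent across both users' perceptions. A minimal sketch of that property, with a deliberately trivial view transform (pure translation, no rotation) and all names hypothetical:

```python
# Illustrative only: one shared world anchor, two independent renders.

def world_to_view(pose_position, world_point):
    # Toy view transform: offset the world point by the user's position.
    # A real system would also apply the orientation from the pose data.
    return tuple(w - p for w, p in zip(world_point, pose_position))

def render_for_user(pose_position, min_depth, world_anchor):
    # Render the shared object from this user's own pose and depth data.
    view_point = world_to_view(pose_position, world_anchor)
    distance = sum(c * c for c in view_point) ** 0.5
    visible = distance < min_depth   # occlude behind closer real surfaces
    return {"view_point": view_point, "visible": visible,
            "world_anchor": world_anchor}

# The same world anchor is used for every user, which is what makes the
# object "appear consistent" across the two perceptions.
anchor = (4.0, 0.0, 0.0)
user_a = render_for_user((0.0, 0.0, 0.0), 10.0, anchor)
user_b = render_for_user((8.0, 0.0, 0.0), 10.0, anchor)
```

The two users see the object at different positions in their own views, but both renders are driven by the identical world anchor.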
17. A system for rendering a synthetic object onto a user-worn display showing the user's view of a real world scene, comprising:
a user-worn computer module configured to:
capture real world scene information using at least one user-worn sensor;
produce a pose estimation data set and a depth data set based on at least a portion of the real world scene information;
receive the synthetic object generated in accordance with the pose estimation data set and the depth data set; and
render the synthetic object onto the user-worn display, in accordance with the pose estimation data set and the depth data set, thereby integrating the synthetic object into the user's view of the real world scene; and
a synthetic object computer module configured to:
retrieve the synthetic object from a database in accordance with the pose estimation data set and the depth data set; and
transmit the synthetic object to the user-worn computer module.
- View Dependent Claims (18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31)
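Claim 17 splits the work between two modules: the user-worn module captures and renders, while the synthetic object module retrieves an object from a database keyed on the pose and depth data and transmits it back. The sketch below illustrates that split under stated assumptions; the class names, the coarse indoor/outdoor keying, and the distance-clamping rule are all invented for illustration, not taken from the patent.

```python
# Illustrative two-module split: user-worn module vs. synthetic object module.

class SyntheticObjectModule:
    def __init__(self, database):
        self.database = database   # hypothetical mapping: region -> object

    def retrieve(self, pose_position, min_depth):
        # Retrieve an object "in accordance with" the pose and depth data.
        # Toy rule: key the database on whether the nearest real surface
        # suggests an indoor or outdoor scene, then clamp the object so it
        # sits in front of that surface.
        region = "indoor" if min_depth < 5.0 else "outdoor"
        obj = dict(self.database[region])
        obj["distance"] = min(obj["distance"], min_depth - 0.5)
        return obj

class UserWornModule:
    def __init__(self, server):
        self.server = server   # the synthetic object computer module
        self.display = []      # stand-in for the user-worn display

    def run_frame(self, pose_position, depth):
        # Receive the synthetic object from the server module, then render.
        obj = self.server.retrieve(pose_position, min(depth))
        self.display.append(obj)
        return obj
```

The transport between the modules is elided (a direct call here); the claim only requires that the object be transmitted back to the user-worn computer module.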
32. A system for rendering a synthetic object onto a first user-worn display showing a first user's view of a real world scene and the synthetic object onto a second user-worn display showing a second user's view of the real world scene, comprising:
a first user-worn computer module configured to:
capture a first set of real world scene information using at least one user-worn sensor;
produce a first pose estimation data set and a first depth data set based on at least a portion of the first set of real world scene information;
receive a first embodiment of the synthetic object generated in accordance with the first pose estimation data set and the first depth data set; and
render the first embodiment of the synthetic object onto the first user-worn display, in accordance with the first pose estimation data set and the first depth data set, thereby integrating the synthetic object into the first user's view of the real world scene;
a second user-worn computer module configured to:
capture a second set of real world scene information using at least one user-worn sensor;
produce a second pose estimation data set and a second depth data set based on at least a portion of the second set of real world scene information;
receive a second embodiment of the synthetic object generated in accordance with the second pose estimation data set and the second depth data set; and
render the second embodiment of the synthetic object onto the second user-worn display, in accordance with the second pose estimation data set and the second depth data set, thereby integrating the synthetic object into the second user's view of the real world scene, wherein the synthetic object appears consistent within the first user's perception of the real world scene and the second user's perception of the real world scene; and
a synthetic object computer module configured to:
retrieve the first embodiment of the synthetic object and the second embodiment of the synthetic object from a database in accordance with the respective pose estimation data sets and depth data sets; and
transmit the first embodiment of the synthetic object to the first user-worn computer module and the second embodiment of the synthetic object to the second user-worn computer module.
Specification