LOW-LATENCY FUSING OF VIRTUAL AND REAL CONTENT
2 Assignments
0 Petitions
Abstract
A system that includes a head mounted display device and a processing unit connected to the head mounted display device is used to fuse virtual content into real content. In one embodiment, the processing unit is in communication with a hub computing device. The processing unit and hub may collaboratively determine a map of the mixed reality environment. Further, state data may be extrapolated to predict a field of view for a user in the future at a time when the mixed reality is to be displayed to the user. This extrapolation can remove latency from the system.
329 Citations
20 Claims
1. A system for presenting a mixed reality experience to one or more users, the system comprising:

one or more display devices for the one or more users, each display device including a first set of sensors for sensing data relating to a position of the display device and a display unit for displaying a virtual image to the user of the display device;

one or more processing units, each associated with a display device of the one or more display devices and each receiving sensor data from the associated display device; and

a hub computing system operatively coupled to each of the one or more processing units, the hub computing system including a second set of sensors, the hub computing system and the one or more processing units collaboratively determining a three-dimensional map of the environment in which the system is used, based on data from the first and second sets of sensors.

View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9
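Claim 1 describes the hub and the per-display processing units jointly building one three-dimensional map from two sensor sets. As a rough illustration only (the patent does not prescribe an algorithm), the fusion step might transform the display device's locally sensed points into the hub's shared world frame and deduplicate the union on a voxel grid. The function name, pose convention, and voxel size below are assumptions for the sketch, not part of the claim:

```python
import numpy as np

def merge_point_clouds(hub_points, hmd_points, hmd_pose, voxel_size=0.05):
    """Hypothetical fusion of hub-frame and display-frame sensor points.

    hub_points: (N, 3) points already in the shared world frame.
    hmd_points: (M, 3) points in the display device's local frame.
    hmd_pose:   4x4 transform from the display frame to the world frame.
    Returns the voxel-deduplicated union of both clouds (a crude 3-D map).
    """
    # Bring the display device's points into the shared world frame.
    homog = np.hstack([hmd_points, np.ones((len(hmd_points), 1))])
    hmd_world = (hmd_pose @ homog.T).T[:, :3]

    # Union of both sensor sets, keeping one point per occupied voxel.
    all_points = np.vstack([hub_points, hmd_world])
    keys = np.floor(all_points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return all_points[np.sort(idx)]
```

A real system would register the two sensor sets with a calibrated or estimated transform rather than the given `hmd_pose`; the sketch only shows why both sensor sets contribute to one map.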
10. A system for presenting a mixed reality experience to one or more users, the system comprising:

a first head mounted display device including:

a camera for obtaining image data of an environment in which the first head mounted display device is used,

inertial sensors for providing inertial measurements of the first head mounted display device, and

a display device for displaying virtual images to a user of the first head mounted display device; and

a first processing unit, associated with the first head mounted display device, for determining a three-dimensional map of the environment in which the first head mounted display device is used and a field of view from which the first head mounted display device views the three-dimensional map.

View Dependent Claims: 11, 12, 13
-
14. A method of presenting a mixed reality experience to one or more users, the method comprising:

(a) determining state information for at least two periods of time, the state information relating to a field of view of a user of an environment, the environment including a mixed reality of one or more real world objects and one or more virtual objects;

(b) extrapolating the state information relating to the field of view of the user of the environment for a third period of time, the third period of time being a time in the future when the one or more virtual objects of the mixed reality are to be displayed to the user; and

(c) displaying at least one virtual object of the one or more virtual objects to the user at the third period of time based on the state information relating to the field of view of the user extrapolated in said step (b).

View Dependent Claims: 15, 16, 17, 18, 19, 20
Specification