GAZE-BASED OBJECT PLACEMENT WITHIN A VIRTUAL REALITY ENVIRONMENT
First Claim
1. A method performed by a head mounted display (HMD) device that supports rendering of a virtual reality environment, comprising:
obtaining sensor data describing a real world physical environment adjoining a user of the HMD device;
using the sensor data, reconstructing a geometry of the physical environment;
tracking the user's head and gaze in the physical environment using the reconstructed geometry to determine a field of view and view position;
projecting a gaze ray outward from the view position;
identifying an intersection between the projected gaze ray and the virtual reality environment; and
placing a virtual object at the intersection within the current field of view in response to user input.
Abstract
A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a device user's projected gaze with a location in a virtual reality environment so that virtual objects can be placed into the environment with high precision. Surface reconstruction of the physical environment can be applied using data from the sensor package to determine the user's view position in the virtual world. A gaze ray originating from the view position is projected outward and a cursor or similar indicator is rendered on the HMD display at the ray's closest intersection with the virtual world such as a virtual object, floor/ground, etc. In response to user input, such as a gesture, voice interaction, or control manipulation, a virtual object is placed at the point of intersection between the projected gaze ray and the virtual reality environment.
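The closest-intersection step described in the abstract can be sketched as follows. This is a minimal illustration only: the sphere-based scene representation, the function names, and the specific geometry are assumptions made for the example, not details taken from the patent.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit on a sphere, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t,
    # with direction assumed to be unit length (so the quadratic's a == 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def closest_intersection(view_position, gaze_direction, spheres):
    """Project a gaze ray outward and return its closest intersection point, if any."""
    best_t = None
    for center, radius in spheres:
        t = ray_sphere_hit(view_position, gaze_direction, center, radius)
        if t is not None and (best_t is None or t < best_t):
            best_t = t
    if best_t is None:
        return None
    return tuple(p + best_t * d for p, d in zip(view_position, gaze_direction))

# Gaze from the origin along +z toward a unit sphere centered at z = 5;
# a cursor would be rendered at the returned point.
hit = closest_intersection((0, 0, 0), (0, 0, 1), [((0, 0, 5), 1)])
```

A renderer would draw the cursor at `hit` each frame and leave it hidden when the ray misses every surface.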
20 Claims
1. A method performed by a head mounted display (HMD) device that supports rendering of a virtual reality environment, comprising:
obtaining sensor data describing a real world physical environment adjoining a user of the HMD device;
using the sensor data, reconstructing a geometry of the physical environment;
tracking the user's head and gaze in the physical environment using the reconstructed geometry to determine a field of view and view position;
projecting a gaze ray outward from the view position;
identifying an intersection between the projected gaze ray and the virtual reality environment; and
placing a virtual object at the intersection within the current field of view in response to user input.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A head mounted display (HMD) device operable by a user in a physical environment, comprising:
one or more processors;
a display for rendering a virtual reality environment to the user, a field of view of the rendered virtual reality environment being variable depending at least in part on a pose of the user's head in the physical environment;
a sensor package; and
one or more memory devices storing computer-readable instructions which, when executed by the one or more processors, perform a method comprising the steps of:
generating surface reconstruction data for at least a portion of the physical environment using the sensor package,
dynamically tracking a view position of the user for the virtual reality environment using the surface reconstruction data,
locating an intersection between a ray projected from the view position along the user's gaze direction and a point of the virtual reality environment within a current field of view, and
operating the HMD device to render a cursor at the point of intersection.
- View Dependent Claims (12, 13, 14, 15, 16)
17. One or more computer readable memories storing computer-executable instructions for performing a method for rendering a virtual reality environment within a variable field of view of a head mounted display (HMD) device located in a real world environment, the method comprising the steps of:
using data from a sensor package incorporated into the HMD device to a) dynamically generate a surface reconstruction model of the real world environment and b) generate a gaze ray that is projected from a view position of a user of the HMD device;
determining a field of view of the virtual reality environment using the model;
receiving an input to the HMD device from the user; and
placing a virtual object within the field of view at a point of intersection between the gaze ray and the virtual reality environment in response to the received user input.
- View Dependent Claims (18, 19, 20)
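Claim 17's placement step, placing a virtual object at the gaze ray's intersection in response to user input, can be illustrated with a gaze ray intersecting a horizontal floor plane. The flat-floor model, the function names, and the input-handling shape below are assumptions made for this sketch, not structures described in the claims.

```python
def ray_floor_intersection(view_position, gaze_direction, floor_y=0.0):
    """Intersect a gaze ray with the horizontal plane y == floor_y, or return None."""
    py, dy = view_position[1], gaze_direction[1]
    if dy >= 0:  # ray is parallel to the floor or aimed upward
        return None
    t = (floor_y - py) / dy
    return tuple(p + t * d for p, d in zip(view_position, gaze_direction))

placed_objects = []

def on_user_input(view_position, gaze_direction, obj):
    """On a gesture/voice/control input, place obj at the gaze ray's floor hit."""
    point = ray_floor_intersection(view_position, gaze_direction)
    if point is not None:
        placed_objects.append((obj, point))
    return point

# A user at 1.8 m eye height looks 45 degrees downward and forward (+z);
# the object lands on the floor roughly 1.8 m ahead of the user.
on_user_input((0, 1.8, 0), (0, -0.7071, 0.7071), "virtual chair")
```

In a full system the same intersection test would run against the reconstructed surface model rather than a single idealized plane.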
Specification