Selection of objects in three-dimensional space
First Claim
1. A system comprising:
a display to present a scene, the scene comprising at least one virtual object;
one or more sensors to capture data from a real world environment of a user;
one or more processors communicatively coupled to the one or more sensors and the display; and
memory having computer-executable instructions stored thereupon which, when executed by the one or more processors, cause the computing device to perform operations comprising:
detecting, by the one or more sensors, a starting action;
obtaining, from the one or more sensors, movement data corresponding to a movement of an input object in the real world environment of the user;
detecting, by the one or more sensors, an ending action;
identifying a shape corresponding to the movement of the input object between the starting action and the ending action;
obtaining, from the one or more sensors, gaze tracking data including a location of the eyes of the user in the real world environment;
determining, based at least in part on the shape and the gaze tracking data, a three-dimensional (3D) selection space in the scene by:
identifying a first location and a second location along the shape;
calculating a first vector originating from a location at or near an eye of the user and intersecting the first location along the shape;
calculating a second vector originating from the location at or near the eye of the user and intersecting the second location along the shape; and
extending the first vector and the second vector in a direction substantially parallel to a third vector which extends from the location near or at the eyes of the user to a location at or near the shape;
identifying one or more objects in the scene located in or at least partially in the 3D selection space, the objects including at least one of a physical object in the real world environment or the at least one virtual object; and
performing an operation on the one or more objects.
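The geometric construction in the determining step can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patented implementation; every name and coordinate (`selection_rails`, `eye`, `p1`, `p2`, `shape_center`, `depth`) is a hypothetical stand-in for what the claim describes:

```python
# Illustrative sketch of claim 1's vector construction (assumed, not the
# patent's code): two vectors originate at the eye, pass through two points
# on the traced shape, and are then extended substantially parallel to a
# third (reference) vector running from the eye toward the shape.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def norm(v):
    mag = sum(x * x for x in v) ** 0.5
    return tuple(x / mag for x in v)

def selection_rails(eye, p1, p2, shape_center, depth):
    """Extend the first and second vectors (eye -> p1, eye -> p2) beyond
    the shape, substantially parallel to the third (reference) vector,
    which runs from the eye to a point at or near the shape."""
    reference = norm(sub(shape_center, eye))   # third vector's direction
    far1 = add(p1, scale(reference, depth))    # first vector, extended
    far2 = add(p2, scale(reference, depth))    # second vector, extended
    return far1, far2

# Hypothetical numbers: eye at the origin, two points on the traced shape
# roughly one meter in front of the user.
eye = (0.0, 0.0, 0.0)
p1, p2 = (-0.2, 0.1, 1.0), (0.2, 0.1, 1.0)
far1, far2 = selection_rails(eye, p1, p2, (0.0, 0.1, 1.0), depth=5.0)
```

Sweeping the same construction over many points along the shape, rather than just two, traces out the boundary of the 3D selection space.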
Abstract
A user may select or interact with objects in a scene using gaze tracking and movement tracking. In some examples, the scene may comprise a virtual reality scene or a mixed reality scene. A user may move an input object in an environment and be facing in a direction towards the movement of the input object. A computing device may use sensors to obtain movement data corresponding to the movement of the input object, and gaze tracking data including a location of the eyes of the user. One or more modules of the computing device may use the movement data and gaze tracking data to determine a three-dimensional selection space in the scene. In some examples, objects included in the three-dimensional selection space may be selected or otherwise interacted with.
17 Claims
1. A system (independent claim, recited in full under First Claim above). Dependent claims: 2-7
8. A computer-implemented method comprising:
obtaining, from one or more sensors, movement data corresponding to a movement of an input object in a real world environment;
obtaining, from the one or more sensors, gaze tracking data including a location of eyes of a user in the environment;
presenting, on a display of a computing device associated with the user, a scene comprising at least one virtual object;
calculating, based at least in part on the gaze tracking data and the movement data, multiple vectors, each of the multiple vectors originating at or near an eye of the user and passing through a location along a shape defined by the movement of the input object;
defining a three-dimensional (3D) volume in the scene by extending the multiple vectors in a direction substantially parallel to a reference vector extending from a location at or near the eye of the user to a location at or near the shape, the extending the multiple vectors in the direction substantially parallel to the reference vector to define the 3D volume comprising presenting a virtual representation of the 3D volume on the display of the computing device associated with the user;
identifying one or more objects in the scene included in or at least partially in the 3D volume, the one or more objects including at least one of a physical object in the real world environment or the at least one virtual object; and
performing an operation on the one or more objects included in or at least partially in the 3D volume.
Dependent claims: 9-13
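One way to realize claim 8's "identifying one or more objects ... included in or at least partially in the 3D volume" step is a point-in-extruded-polygon test. The sketch below is an assumed approach, not the patent's implementation: it models the volume as the traced shape extruded along the reference vector (ignoring the converging region between the eye and the shape), and all function names are hypothetical:

```python
# Illustrative membership test for the extruded 3D volume (assumed model):
# project the object center and the shape's points onto the plane
# perpendicular to the reference direction, then run a standard 2D
# ray-casting point-in-polygon check.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    mag = dot(v, v) ** 0.5
    return tuple(x / mag for x in v)

def plane_basis(reference):
    """Two unit vectors spanning the plane perpendicular to `reference`."""
    helper = (1.0, 0.0, 0.0) if abs(reference[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = norm(cross(reference, helper))
    v = cross(reference, u)
    return u, v

def point_in_polygon(pt, polygon):
    """Classic 2D ray-casting test: count edge crossings to the right."""
    x, y = pt
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def object_in_volume(obj_center, shape_points, reference):
    """True if obj_center lies inside the shape extruded along `reference`."""
    u, v = plane_basis(norm(reference))
    project = lambda p: (dot(p, u), dot(p, v))
    return point_in_polygon(project(obj_center),
                            [project(p) for p in shape_points])
```

A production system would also clip the volume to a near and far depth and test object bounds rather than a single center point; those refinements are omitted here for brevity.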
14. A wearable computing device comprising:
one or more sensors to obtain data from an environment, the environment comprising at least one of a virtual reality environment or a mixed reality environment;
one or more processors communicatively coupled to the one or more sensors; and
memory having computer-executable instructions stored thereupon which, when executed by the one or more processors, cause the computing device to perform operations comprising:
obtaining, from the one or more sensors, movement data corresponding to a movement of an input object in the environment;
obtaining, from the one or more sensors, gaze tracking data corresponding to a location of eyes of a user;
analyzing the movement data to calculate a shape defined by the movement of the input object;
calculating vectors originating at a location near or at an eye of the user and intersecting with points along the shape;
extending the vectors in a direction substantially parallel to a reference vector which extends from the location near or at the eye of the user to a location at or near the shape to create a three-dimensional (3D) volume;
identifying one or more objects included in or partially included in the 3D volume; and
selecting the one or more objects included in or partially included in the 3D volume, the one or more objects including at least one of a physical object or a virtual object.
Dependent claims: 15-17
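Claim 14's "analyzing the movement data to calculate a shape" step could be as simple as downsampling the traced path and closing it into an outline. The sketch below is an assumed reading, not the patent's code; the sensor sample format (a list of `(x, y, z)` positions) and the function name are hypothetical:

```python
# Illustrative sketch (assumed sensor format, not the patent's code):
# reduce raw movement samples of the input object to a closed shape
# whose vertices can then feed the vector-calculation step.

def calculate_shape(samples, n_points=16):
    """Downsample a traced path to at most n_points evenly spaced
    vertices and close it by appending the first vertex, yielding a
    selection outline."""
    if len(samples) < 3:
        raise ValueError("need at least 3 movement samples to form a shape")
    step = max(1, len(samples) // n_points)
    shape = samples[::step][:n_points]
    if shape[0] != shape[-1]:
        shape.append(shape[0])  # close the outline
    return shape

# Hypothetical capture: 32 positions recorded between the starting and
# ending actions.
samples = [(float(i), 0.0, 1.0) for i in range(32)]
shape = calculate_shape(samples)
```

Real movement data would also need smoothing and outlier rejection before downsampling; the claim language leaves the analysis method open.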
Specification