Gaze detection in a see-through, near-eye, mixed reality display
First Claim
1. A method for determining gaze in a near-eye mixed reality display device comprising:
determining boundaries of a three dimensional gaze detection coordinate system for each eye based on positions of glints detected on the respective eye, positions on the near-eye display device of illuminators for generating the glints, and a position of at least one sensor for capturing reflected eye data of the respective eye from which the positions of glints are detected;
automatically checking periodically the reflected eye data for any change along a depth axis of the gaze detection coordinate system, the depth axis extending from a respective display optical system of the near-eye, mixed reality display device toward the respective eye;
responsive to detecting any change along at least one depth axis, triggering redetermination of the boundaries of the three dimensional gaze detection coordinate system;
determining a gaze vector for each eye based on reflected eye data including the glints and the boundaries of the three dimensional gaze detection coordinate system;
determining a point of gaze based on the gaze vectors for the eyes in a three-dimensional (3D) user field of view of the near-eye mixed reality display device including real and virtual objects; and
identifying any object at the point of gaze in the 3D user field of view.
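The last two steps of the claim combine the per-eye gaze vectors into a single 3D point of gaze. Since two measured gaze rays rarely intersect exactly, one common approach is to take the midpoint of the shortest segment between them. A minimal sketch of that triangulation, with illustrative names and NumPy assumed (this is one plausible reading of the claim step, not the patent's specified implementation):

```python
import numpy as np

def point_of_gaze(origin_l, dir_l, origin_r, dir_r):
    """Estimate the 3D point of gaze as the midpoint of the shortest
    segment between the left-eye and right-eye gaze rays.

    Each ray is origin + t * direction; names are hypothetical."""
    o1 = np.asarray(origin_l, float)
    o2 = np.asarray(origin_r, float)
    d1 = np.asarray(dir_l, float) / np.linalg.norm(dir_l)
    d2 = np.asarray(dir_r, float) / np.linalg.norm(dir_r)
    # Standard closest-approach solution for two 3D lines.
    b = d1 @ d2
    w = o1 - o2
    denom = 1.0 - b * b
    if denom < 1e-9:              # rays nearly parallel: gaze at infinity
        return None
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    p1 = o1 + t1 * d1             # closest point on the left-eye ray
    p2 = o2 + t2 * d2             # closest point on the right-eye ray
    return (p1 + p2) / 2.0
```

With both rays converging on a real or virtual object, the returned point can then be tested against object positions in the 3D user field of view for the "identifying any object" step.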
Abstract
The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
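Once the cornea center and pupil center are recovered in the gaze detection coordinate system, the gaze vector described in the abstract follows directly as the direction through those two points (the optical axis of the eye). A minimal sketch, assuming both centers are already expressed in the same 3D coordinates; the function name is illustrative, not from the patent:

```python
import numpy as np

def gaze_vector(cornea_center, pupil_center):
    """Gaze (optical-axis) direction as the unit vector from the
    estimated cornea center through the pupil center."""
    v = np.asarray(pupil_center, float) - np.asarray(cornea_center, float)
    return v / np.linalg.norm(v)
```

In practice a small fixed offset between the optical axis and the visual axis is often calibrated per user, but the abstract leaves that to the training data sets.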
Claims (20 claims; 107 citations)
1. A method for determining gaze in a near-eye mixed reality display device comprising:

determining boundaries of a three dimensional gaze detection coordinate system for each eye based on positions of glints detected on the respective eye, positions on the near-eye display device of illuminators for generating the glints, and a position of at least one sensor for capturing reflected eye data of the respective eye from which the positions of glints are detected;

automatically checking periodically the reflected eye data for any change along a depth axis of the gaze detection coordinate system, the depth axis extending from a respective display optical system of the near-eye, mixed reality display device toward the respective eye;

responsive to detecting any change along at least one depth axis, triggering redetermination of the boundaries of the three dimensional gaze detection coordinate system;

determining a gaze vector for each eye based on reflected eye data including the glints and the boundaries of the three dimensional gaze detection coordinate system;

determining a point of gaze based on the gaze vectors for the eyes in a three-dimensional (3D) user field of view of the near-eye mixed reality display device including real and virtual objects; and

identifying any object at the point of gaze in the 3D user field of view.

(Dependent claims: 2, 3)
4. A method for determining gaze in a near-eye mixed reality display device comprising:

determining boundaries of a three dimensional gaze detection coordinate system for each eye based on positions of glints detected on the respective eye, positions on the near-eye display device of illuminators for generating the glints, and a position of at least one sensor for capturing reflected eye data of the respective eye from which the positions of glints are detected;

generating and storing respective training gaze data sets based on the boundaries of the three dimensional gaze detection coordinate system, each training gaze data set including pupil position data and a gaze vector;

based on the reflected eye data of the respective eye, determining current pupil position data for the respective eye;

determining a gaze vector for each eye based on comparison of its current pupil position data with its training gaze data sets;

determining a point of gaze based on the gaze vectors for the eyes in a three-dimensional (3D) user field of view of the near-eye mixed reality display device including real and virtual objects;

identifying any object at the point of gaze in the 3D user field of view;

automatically checking periodically the reflected eye data for any change indicating the training gaze data sets are to be re-calibrated; and

responsive to detecting, along at least one depth axis, any change indicating the training gaze data sets are to be re-calibrated, triggering re-calibration of the training gaze data sets for the eyes.

(Dependent claims: 5, 6, 7, 8, 9)
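The comparison step in claim 4 can be read as a lookup against the stored training samples: each sample pairs a pupil position with a gaze vector, and the current pupil position selects or interpolates among them. A hedged sketch using inverse-distance weighting over the k nearest training samples (the weighting scheme and all names are assumptions for illustration, not taken from the claims):

```python
import numpy as np

def gaze_from_training(current_pupil, training_pupils, training_gazes, k=3):
    """Estimate a gaze vector by comparing the current pupil position
    against stored (pupil position, gaze vector) training samples.

    Hypothetical inverse-distance weighting over the k nearest samples."""
    pupils = np.asarray(training_pupils, float)
    gazes = np.asarray(training_gazes, float)
    # Distance from the current pupil position to each training sample.
    d = np.linalg.norm(pupils - np.asarray(current_pupil, float), axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)            # inverse-distance weights
    g = (w[:, None] * gazes[nearest]).sum(axis=0) / w.sum()
    return g / np.linalg.norm(g)             # renormalize to a unit vector
```

When the current pupil position coincides with a training sample, the weighting collapses to that sample's stored gaze vector, which matches the claim's comparison-based determination.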
10. A mixed reality display system with gaze determination comprising:

a near-eye, mixed reality display device;

at least one image generation unit for generating at least one virtual image for display by the near-eye, mixed reality display device;

a respective arrangement of gaze detection elements positioned on the display device for forming a spatial relationship between the gaze detection elements and each eye, the gaze detection elements including a set of illuminators for generating glints on the respective eye, each illuminator positioned on the see-through display device at a respective predetermined position and generating illumination about a predetermined wavelength, and at least one respective sensor for capturing light reflected from the respective eye and generating data representing the captured reflected light, the at least one respective sensor positioned at a predetermined position in relation to predetermined positions of the set of illuminators on the see-through display device;

a memory for storing software and the data; and

one or more processors communicatively coupled to the at least one respective sensor to receive the data representing the captured reflected light and having access to the memory for storing the data, the one or more processors determining a gaze vector for each respective eye based on the data representing the captured reflected light and a point of gaze based on the gaze vectors in a three-dimensional (3D) user field of view;

wherein the data representing the captured reflected light includes glint intensity data and wherein, for each eye, the one or more processors determining the gaze vector comprises: in a time period, using a first technique that comprises determining a center of a cornea and a center of the pupil based on image data of the respective eye a first number of times, and during the same time period, using a second, different technique based on the glint intensity data, independent of the image data of the respective eye, for a second number of times;

the one or more processors automatically checking periodically the reflected eye data for any change along a depth axis of the gaze detection coordinate system, the depth axis extending from a respective display optical system of the near-eye, mixed reality display device toward the respective eye; and

responsive to detecting any change along at least one depth axis, the one or more processors triggering redetermination of the boundaries of the three dimensional gaze detection coordinate system.

(Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)
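The periodic depth-axis check recited in claims 1 and 10 can be approximated by monitoring how the glint pattern scales in the sensor image: as the eye moves toward or away from the sensor along the depth axis, the spacing between glints changes, while a purely lateral shift leaves the spacing intact. A hedged sketch under those assumptions (the spacing metric and threshold are illustrative choices, not the patent's specified test):

```python
import numpy as np

def depth_change_detected(baseline_glints, current_glints, tol=0.05):
    """Flag a change along the depth axis by comparing current inter-glint
    spacing against a baseline captured when the coordinate system
    boundaries were last determined.

    Glint positions are 2D image coordinates; a relative spacing change
    beyond `tol` (hypothetical 5% threshold) would trigger
    redetermination of the coordinate system boundaries."""
    def mean_spacing(glints):
        g = np.asarray(glints, float)
        c = g.mean(axis=0)                       # glint-pattern centroid
        return np.linalg.norm(g - c, axis=1).mean()
    base = mean_spacing(baseline_glints)
    cur = mean_spacing(current_glints)
    return bool(abs(cur - base) / base > tol)
```

Because the metric is translation-invariant, ordinary eye rotation and lateral frame shift do not fire the check; only a scale change consistent with motion along the depth axis does.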
Specification