GAZE DETECTION IN A SEE-THROUGH, NEAR-EYE, MIXED REALITY DISPLAY
First Claim
1. A mixed reality display system with gaze determination comprising:
a see-through, near-eye, mixed reality display device;
at least one image generation unit for generating at least one virtual image for display by the see-through, near-eye, mixed reality display device;
a respective arrangement of gaze detection elements positioned on the display device for forming a spatial relationship between the gaze detection elements and each eye, the gaze detection elements including
  a set of illuminators for generating glints on the respective eye, each illuminator positioned on the see-through display device at a respective predetermined position and generating illumination about a predetermined wavelength, and
  at least one respective sensor for capturing light reflected from the respective eye and generating data representing the captured reflected light, the at least one respective sensor positioned at a predetermined position in relation to predetermined positions of the set of illuminators on the see-through display device;
a memory for storing software and the data; and
one or more processors communicatively coupled to the at least one respective sensor to receive the data representing the captured reflected light and having access to the memory for storing the data, the one or more processors determining a gaze vector for each respective eye based on the data representing the captured reflected light and a point of gaze based on the gaze vectors in a three-dimensional (3D) user field of view,
wherein the data representing the captured reflected light includes glint intensity data, and wherein for each eye, the one or more processors determining the gaze vector comprises:
  in a time period, using a first technique that comprises determining a center of a cornea and a center of the pupil based on image data of the respective eye a first number of times, and
  during the same time period, using a second, different technique based on the glint intensity data, independent of the image data of the respective eye, for a second number of times.
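The claim's interleaving of a heavier image-based technique with a cheaper glint-intensity technique over one time period can be sketched as follows. This is a minimal Python/NumPy illustration under stated assumptions: the stand-in estimators `gaze_from_image` and `gaze_from_glints`, the sample dictionary layout, and the `image_period` schedule are all hypothetical, not the patented implementation.

```python
import numpy as np

def gaze_from_image(cornea_center, pupil_center):
    """First technique (stand-in): the gaze vector runs from the cornea
    center through the pupil center, both found from eye image data."""
    v = np.asarray(pupil_center, float) - np.asarray(cornea_center, float)
    return v / np.linalg.norm(v)

def gaze_from_glints(intensities, directions):
    """Second technique (stand-in): estimate a direction from glint
    intensity data alone by intensity-weighting each illuminator's
    known direction and renormalizing."""
    w = np.asarray(intensities, float)
    v = (w[:, None] * np.asarray(directions, float)).sum(axis=0)
    return v / np.linalg.norm(v)

def sample_gaze(samples, image_period=4):
    """In one time period, apply the image-based technique every
    `image_period`-th sample (a first number of times) and the
    glint-intensity technique on the remaining samples (a second,
    larger number of times)."""
    vectors = []
    for i, s in enumerate(samples):
        if i % image_period == 0:
            vectors.append(gaze_from_image(s["cornea"], s["pupil"]))
        else:
            vectors.append(gaze_from_glints(s["glints"], s["dirs"]))
    return vectors
```

Because the glint-only path skips image processing, it can run at a higher rate between image-based corrections, which is the apparent point of splitting the time period between the two techniques.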
Abstract
The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
25 Claims
1. A mixed reality display system with gaze determination comprising:
a see-through, near-eye, mixed reality display device;
at least one image generation unit for generating at least one virtual image for display by the see-through, near-eye, mixed reality display device;
a respective arrangement of gaze detection elements positioned on the display device for forming a spatial relationship between the gaze detection elements and each eye, the gaze detection elements including
  a set of illuminators for generating glints on the respective eye, each illuminator positioned on the see-through display device at a respective predetermined position and generating illumination about a predetermined wavelength, and
  at least one respective sensor for capturing light reflected from the respective eye and generating data representing the captured reflected light, the at least one respective sensor positioned at a predetermined position in relation to predetermined positions of the set of illuminators on the see-through display device;
a memory for storing software and the data; and
one or more processors communicatively coupled to the at least one respective sensor to receive the data representing the captured reflected light and having access to the memory for storing the data, the one or more processors determining a gaze vector for each respective eye based on the data representing the captured reflected light and a point of gaze based on the gaze vectors in a three-dimensional (3D) user field of view,
wherein the data representing the captured reflected light includes glint intensity data, and wherein for each eye, the one or more processors determining the gaze vector comprises:
  in a time period, using a first technique that comprises determining a center of a cornea and a center of the pupil based on image data of the respective eye a first number of times, and
  during the same time period, using a second, different technique based on the glint intensity data, independent of the image data of the respective eye, for a second number of times.
- View Dependent Claims (3, 4, 6, 21, 22, 23)
2. (canceled)
5. (canceled)
7. A method for determining gaze in a see-through, near-eye, mixed reality display device comprising:
determining boundaries of a three-dimensional gaze detection coordinate system based on positions of glints detected on a user eye and positions on the near-eye display device of illuminators for generating the glints and of at least one sensor for detecting the glints, further comprising
  determining a position of a center of a cornea of each respective eye with respect to the positions of the illuminators and the at least one sensor based on positions of glints detected on the cornea surface of the respective eye,
  determining a pupil center from image data generated from the at least one sensor, further comprising
    identifying a black pixel area as a black pupil area in a number of image samples,
    averaging the black pupil areas to adjust for headshake,
    performing an ellipse fitting algorithm on the average black pupil area for determining an ellipse representing the pupil, and
    determining the center of the pupil by determining the center of the ellipse representing the pupil, and
  determining a position of a fixed center of eyeball rotation of each eye based on the position of the cornea center and the pupil center;
determining a gaze vector for each eye based on reflected eye data including the glints;
determining a point of gaze based on the gaze vectors for the two eyes in a three-dimensional (3D) user field of view including real and virtual objects; and
identifying any object at the point of gaze in the 3D user field of view.
- View Dependent Claims (8, 9, 11, 12, 13, 24, 25)
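The pupil-locating steps recited in claim 7 (identify a black pupil area in several image samples, average the areas to damp headshake, fit an ellipse, take its center) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the threshold value, the majority-vote cutoff, and the use of a weighted centroid as the ellipse center are illustrative stand-ins for the claimed ellipse fitting algorithm.

```python
import numpy as np

def pupil_center(frames, black_thresh=40):
    """Estimate the pupil center from several grayscale eye-image samples.

    1. Identify the black pixel area (pupil) in each sample by threshold.
    2. Average the binary pupil areas across samples to adjust for headshake.
    3. Keep pixels that are pupil in most samples and take their weighted
       centroid -- a lightweight stand-in for full ellipse fitting, since
       the centroid of an ellipse-shaped region is the ellipse center.
    """
    masks = [(np.asarray(f, float) < black_thresh).astype(float) for f in frames]
    avg = np.mean(masks, axis=0)            # averaged black pupil area
    ys, xs = np.nonzero(avg > 0.5)          # pupil in a majority of samples
    w = avg[ys, xs]
    cx = np.average(xs, weights=w)          # centroid = ellipse center (x)
    cy = np.average(ys, weights=w)          # centroid = ellipse center (y)
    return cx, cy
```

For a true ellipse fit (axes and orientation as well as center), a least-squares fit to the boundary pixels of the averaged area would replace the centroid step; the center it returns coincides with the centroid for a cleanly segmented pupil.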
10. (canceled)
14. (canceled)
15. A mixed reality display system with gaze determination based on glints comprising:
a see-through, near-eye, mixed reality display device;
at least one image generation unit for generating at least one virtual image for display by the see-through, near-eye, mixed reality display device;
a set of infra-red illuminators for producing glints for each eye, each illuminator positioned on the see-through display device at a respective predetermined position and generating infra-red radiation about a predetermined wavelength;
at least one respective sensor for detecting the glints reflected from each eye and generating glint data including intensity data for the glints, the at least one respective sensor positioned, in relation to one or more respective illuminator positions on the see-through display device, at at least one predetermined position to detect infra-red reflected radiation about the predetermined wavelength;
a memory for storing software and data including the glint data; and
one or more processors communicatively coupled to the at least one respective sensor to receive the glint data and having access to the memory for storing the glint data,
the one or more processors determining one or more glint positions relative to an eye part based on the glint intensity data generated by the at least one respective sensor, the respective predetermined position of each illuminator in the set of infra-red illuminators, and the at least one predetermined position of the at least one respective sensor for detecting the glints reflected from each eye;
the one or more processors determining a gaze vector for each eye based on the one or more glint positions relative to an eye part; and
the one or more processors determining a point of gaze based on the gaze vectors in a three-dimensional user field of view.
- View Dependent Claims (16, 19, 20)
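Claim 15 ends by determining a point of gaze from the two eyes' gaze vectors in a three-dimensional field of view. A standard way to triangulate such a point, sketched below, treats each eye as a ray and returns the midpoint of the shortest segment between the two rays, since noisy gaze rays rarely intersect exactly. The eye origins and this particular triangulation method are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def point_of_gaze(o_l, d_l, o_r, d_r):
    """Triangulate the point of gaze from two gaze rays.

    Each eye contributes a ray: origin `o` (e.g. its center of eyeball
    rotation) and direction `d` (its gaze vector). Returns the midpoint
    of the common perpendicular between the two rays.
    """
    o_l, d_l = np.asarray(o_l, float), np.asarray(d_l, float)
    o_r, d_r = np.asarray(o_r, float), np.asarray(d_r, float)
    w = o_l - o_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b                    # ~0 when the rays are parallel
    t = (b * e - c * d) / denom              # parameter on the left ray
    s = (a * e - b * d) / denom              # parameter on the right ray
    p_l = o_l + t * d_l                      # closest point on left ray
    p_r = o_r + s * d_r                      # closest point on right ray
    return (p_l + p_r) / 2.0
```

With eye origins 6 units apart and both rays aimed at a target 60 units ahead, the function recovers the target; in practice the midpoint also gives a depth estimate, which is what lets the system place the point of gaze among real and virtual objects at different distances.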
17. (canceled)
18. (canceled)
Specification