GAZE DETECTION IN A SEE-THROUGH, NEAR-EYE, MIXED REALITY DISPLAY
Abstract
The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
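The abstract's re-calibration check can be illustrated with a minimal sketch. This is not the patented implementation; the function names, boundary representation (a list of 3-D points in millimeters), and the 1 mm slippage tolerance are all illustrative assumptions.

```python
import math

# Assumed tolerance for frame slippage before re-calibration is triggered.
RECALIBRATION_THRESHOLD_MM = 1.0

def needs_recalibration(stored_boundaries, current_boundaries,
                        threshold=RECALIBRATION_THRESHOLD_MM):
    """Compare corresponding boundary points of the gaze detection
    coordinate system; displacement of any point beyond the threshold
    indicates the spatial relationship between the gaze detection
    elements and the eye has changed, so the training data sets
    should be re-calibrated."""
    for p0, p1 in zip(stored_boundaries, current_boundaries):
        if math.dist(p0, p1) > threshold:
            return True
    return False

# Example: the second boundary point has drifted 2 mm along x.
stored  = [(0.0, 0.0, 0.0), (30.0, 0.0, 0.0)]
current = [(0.0, 0.0, 0.0), (32.0, 0.0, 0.0)]
print(needs_recalibration(stored, current))  # True
```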
Citations
Claims (20)
1. A gaze vector determination system for use in a mixed reality display system comprising:
at least one image generation unit for generating at least one virtual image for display by a near-eye, mixed reality display device;
a respective arrangement of gaze detection elements positioned on the display device for forming a spatial relationship between the gaze detection elements and at least one eye, the gaze detection elements including
    a set of illuminators for generating glints on the at least one eye, each illuminator positioned on the display device at a respective predetermined position and generating illumination about a predetermined wavelength, and
    at least one sensor for capturing light reflected from the at least one eye and generating data representing the captured reflected light, the at least one sensor positioned at a predetermined position in relation to predetermined positions of the set of illuminators on the display device;
a memory for storing software and the data; and
one or more processors communicatively coupled to the at least one sensor to receive the data representing the captured reflected light and having access to the memory for storing the data, the one or more processors determining a gaze vector for the at least one eye based on the data representing the captured reflected light, wherein the data representing the captured reflected light includes glint intensity data and wherein, for the at least one eye, the one or more processors determining the gaze vector comprises:
    in a time period, using a first technique that comprises determining at least one of a center of a cornea and a center of a pupil based on image data of the at least one eye a first number of times, and
    during the same time period, using a second, different technique based on the glint intensity data, independent of the image data of the at least one eye, a second number of times.
Dependent claims: 2, 3, 4, 5, 6, 7, 8.
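Claim 1's final limitations describe a hybrid schedule: within one time period, an image-based technique runs a first number of times while a glint-intensity-only technique runs a second number of times. The sketch below is only an illustration of that schedule, not the claimed implementation; the function names, placeholder returns, and counts are assumptions.

```python
def image_based_gaze(image_frame):
    # Placeholder for the first technique: cornea center and/or pupil
    # center determined from image data of the eye.
    return ("image", image_frame)

def glint_intensity_gaze(glint_sample):
    # Placeholder for the second technique: based on glint intensity
    # data only, independent of image data.
    return ("glint", glint_sample)

def run_period(image_frames, glint_samples):
    """Run both techniques over one time period; the glint-only
    technique typically runs many more times because it requires no
    image processing."""
    results = []
    for frame in image_frames:    # first technique, a first number of times
        results.append(image_based_gaze(frame))
    for sample in glint_samples:  # second technique, a second number of times
        results.append(glint_intensity_gaze(sample))
    return results

# e.g. 2 image-based updates and 10 glint-only updates in the same period
out = run_period(range(2), range(10))
print(sum(1 for tag, _ in out if tag == "image"),
      sum(1 for tag, _ in out if tag == "glint"))  # 2 10
```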
9. A method for determining a gaze vector of a user eye based on a gaze detection coordinate system of a near-eye display device comprising:
determining boundaries of the three-dimensional gaze detection coordinate system based on positions of glints detected on the user eye and positions, on the near-eye display device, of illuminators for generating the glints and of at least one sensor for detecting the glints, further comprising
    determining a position of a center of a cornea of the user eye with respect to the positions of the illuminators and the at least one sensor based on positions of glints detected on a cornea surface of the user eye,
    determining a position of the pupil center from image data generated from the at least one sensor, further comprising
        identifying a black pixel area as a black pupil area in a number of image samples,
        averaging the black pupil areas to adjust for headshake,
        performing an ellipse fitting algorithm on the average black pupil area for determining an ellipse representing the pupil, and
        determining the center of the pupil by determining the center of the ellipse representing the pupil, and
    determining a position of a fixed center of eyeball rotation of the user eye based on the positions of the cornea center and the pupil center; and
determining a gaze vector for the user eye based on the positions of the center of the cornea, the pupil center and the fixed center of eyeball rotation.
Dependent claims: 10, 11, 12, 13, 14, 15, 16.
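The pupil-center steps of claim 9 (identify the black pupil area in several samples, average to damp head shake, take the center of the fitted shape) can be sketched as follows. This is a simplified illustration, not the claimed method: the region centroid stands in for a full ellipse-fitting algorithm (for a filled ellipse the centroid coincides with the ellipse center), and the intensity threshold and array shapes are assumptions.

```python
import numpy as np

def black_pupil_mask(gray, threshold=40):
    """Mark pixels dark enough to belong to the black pupil area."""
    return (gray < threshold).astype(float)

def pupil_center(samples, threshold=40):
    # Average the per-sample pupil masks to adjust for head shake.
    mean_mask = np.mean([black_pupil_mask(s, threshold) for s in samples],
                        axis=0)
    region = mean_mask > 0.5
    ys, xs = np.nonzero(region)
    # Centroid of the averaged pupil region; for a filled ellipse this
    # is the same point as the center of the fitted ellipse.
    return xs.mean(), ys.mean()

# Synthetic example: a dark elliptical pupil centered at (30, 20) on a
# bright iris, repeated over three image samples.
yy, xx = np.mgrid[0:40, 0:60]
ellipse = ((xx - 30) / 10.0) ** 2 + ((yy - 20) / 6.0) ** 2 <= 1.0
frame = np.where(ellipse, 10, 200).astype(np.uint8)
cx, cy = pupil_center([frame, frame, frame])
print(round(cx), round(cy))  # 30 20
```

In practice an ellipse fit (e.g. a least-squares conic fit) is preferable to a bare centroid because the pupil is often partially occluded by the eyelid, which biases a centroid but not a fit to the visible boundary.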
17. A gaze vector determination system for use in a mixed reality display system comprising:
at least one image generation unit for generating at least one virtual image for display by a near-eye, mixed reality display device;
a set of infra-red illuminators for producing glints for a user eye, each illuminator having a predetermined position on the display device and generating infra-red radiation about a predetermined wavelength;
at least one sensor for detecting the glints reflected from the user eye and generating glint data including intensity data for the glints, the at least one sensor having a predetermined position in relation to one or more illuminator predetermined positions on the display device to detect infra-red reflected radiation about the predetermined wavelength;
a memory for storing software and data including glint data; and
one or more processors communicatively coupled to the at least one sensor to receive the glint data and having access to the memory for storing the glint data, the one or more processors determining one or more glint positions relative to an eye part based on the glint intensity data, the predetermined position of each illuminator in the set of infra-red illuminators, and the at least one predetermined position of the at least one sensor for detecting the glints reflected from the user eye; and
the one or more processors determining a gaze vector for the user eye based on the one or more glint positions relative to an eye part.
Dependent claims: 18, 19, 20.
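Claim 17 determines glint positions from intensity data alone. A minimal one-dimensional sketch, assuming the sensor is modeled as a line of intensity samples: each glint (a specular reflection of one infra-red illuminator) appears as a bright run of samples, and its position is taken as the intensity-weighted centroid of that run. The function name and threshold are illustrative assumptions; relating the positions to an eye part would additionally use the known illuminator and sensor geometry.

```python
def glint_positions_1d(samples, threshold):
    """Return sub-sample glint positions as intensity-weighted centroids
    of runs of samples at or above the threshold."""
    positions = []
    run = []  # indices of the current above-threshold run
    for i, v in enumerate(list(samples) + [0.0]):  # sentinel flushes last run
        if v >= threshold:
            run.append(i)
        elif run:
            total = sum(samples[j] for j in run)
            positions.append(sum(j * samples[j] for j in run) / total)
            run = []
    return positions

# Two glints: peaks around indices 3 and 8 in the intensity data.
samples = [0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 2.0, 6.0, 2.0, 0.0]
positions = glint_positions_1d(samples, 1.0)
print(positions)  # [3.0, 8.0]
```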
Specification