Eye tracking systems and method for augmented or virtual reality
First Claim
1. A method, comprising:
- rendering one or more virtual objects in a virtual or augmented reality environment with a virtual image generation subsystem of a virtual content display system altering, with a variable focus element, one or more rays or cones of light among multiple planes that correspond to respective virtual depths towards at least one eye of a user, wherein:
the virtual image generation subsystem comprises a graphics processing unit, at least one projector, the variable focus element, a first assembly, and a second assembly that is operatively coupled to the first assembly,
the first assembly and the second assembly receive power from a portable power source separate from the virtual image generation subsystem,
the first assembly includes a first board and a projector driver providing image information and signals to the at least one projector,
the second assembly includes a second board, a microprocessor, the graphics processing unit, and one or more motion sensors or transducers, and
the at least one projector projects image information of the one or more virtual objects to the at least one eye of the user;
representing the at least one eye with an eye model that comprises a first circular shape or circle and a second circular shape or circle, wherein the second circular shape or circle represents a cornea of the at least one eye and is layered on top of the first circular shape or circle;
detecting, with at least an eye tracking device, one or more characteristics pertaining to an interaction between the at least one eye of the user and reflected light from the at least one eye at least by:
capturing, by a first set of sensors or transducers in the first assembly, a first light pattern emitted or reflected from one or more ambient light sources in a real-world environment;
projecting, with at least the projector driver in the first assembly and the at least one projector, a second light pattern generated by a set of light sources in the first assembly to the at least one eye of the user;
in response to the first light pattern, detecting the reflected light from the at least one eye using one or more second sensors or transducers in the first assembly; and
determining, by the microprocessor in the second assembly, the interaction at least by correlating the reflected light with the first light pattern and the second light pattern;
determining, by the microprocessor in the second assembly, an eye pointing vector and a center of rotation of the at least one eye in the eye model using at least the first circular shape or circle and the second circular shape or circle based at least in part upon a characteristic of a cross-section of the at least one eye and a range of movement of the at least one eye;
determining, by the microprocessor in the second assembly, a vectored distance for the eye pointing vector based at least in part upon the one or more characteristics and the center of rotation for the at least one eye in the eye model; and
determining, by the microprocessor in the second assembly, at least one movement or pose for both the at least one eye and another eye of the user at least by using the vectored distance of the at least one eye and further by extrapolating one or more eye movement or pose characteristics with at least one or more parameters pertaining to the interaction and captured by one or more sensors for the at least one eye of the user.
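The geometric steps of the claim (a two-circle eye model, an eye pointing vector from its center of rotation, and a vectored distance along that vector) can be pictured with a short sketch. Nothing below comes from the patent itself: the function names, the geometry (the optical axis taken as the ray from the eyeball's center of rotation through the cornea center), and all numeric values are illustrative assumptions.

```python
import math

def gaze_from_two_sphere_model(eyeball_center, cornea_center):
    """Eye pointing vector and center of rotation from a two-sphere eye model.

    The claim layers a second circle (the cornea) on top of a first circle
    (the eyeball). Here the eye pointing vector is simply the unit vector
    from the eyeball's center of rotation through the cornea center -- an
    illustrative assumption, not the patent's actual construction.
    """
    axis = tuple(c - e for c, e in zip(cornea_center, eyeball_center))
    norm = math.sqrt(sum(a * a for a in axis))
    gaze = tuple(a / norm for a in axis)
    center_of_rotation = tuple(eyeball_center)  # the model rotates about this point
    return gaze, center_of_rotation

def gaze_target(center_of_rotation, gaze, vectored_distance):
    """Point reached by following the eye pointing vector for a given
    vectored distance from the center of rotation."""
    return tuple(c + vectored_distance * g for c, g in zip(center_of_rotation, gaze))

# Cornea center displaced 5.3 units along +z from the eyeball center:
gaze, cor = gaze_from_two_sphere_model((0.0, 0.0, 0.0), (0.0, 0.0, 5.3))
# gaze ≈ (0.0, 0.0, 1.0); a vectored distance of 100 reaches a point ≈ (0, 0, 100)
```

In a real system the cornea center would itself be estimated from the detected glints; this sketch only shows how a pointing vector and a target point follow from the two-sphere geometry once those centers are known.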
3 Assignments
0 Petitions
Abstract
An augmented reality display system comprises passable world model data that comprises a set of map points corresponding to one or more objects of the real world. The augmented reality display system also comprises a processor to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems.
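The abstract's location-based passing of a portion of the passable world model can be pictured as a simple spatial filter: each individual display system receives only the map points near its own location. This is a minimal sketch under assumed names and a Euclidean-radius selection rule; the patent does not specify the actual selection criterion.

```python
import math

def portion_for_device(map_points, device_location, radius):
    """Return the portion of passable-world map points that lie within
    `radius` of an individual AR display system's location.

    The Euclidean-distance cutoff is an illustrative assumption; the
    abstract only says the portion is passed based on the device's location.
    """
    return [p for p in map_points if math.dist(p, device_location) <= radius]

# Three map points for real-world objects; a device near the origin
world_model = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (10.0, 10.0, 10.0)]
portion = portion_for_device(world_model, (0.0, 0.0, 0.0), radius=2.0)
# → [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```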
461 Citations
20 Claims
-
1. A method, comprising: the steps recited in full under "First Claim" above. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
-
11. A virtual content display system for tracking one or more movements or poses of a user's eyes, comprising:
-
a virtual image generation subsystem configured to render one or more virtual objects in a virtual or augmented reality environment by altering, with a variable focus element, one or more rays or cones of light among multiple planes that correspond to respective virtual depths towards at least one eye of a user, wherein:
the virtual image generation subsystem comprises a graphics processing unit, at least one projector, the variable focus element, a first assembly, and a second assembly that is operatively coupled to the first assembly,
the first assembly and the second assembly receive power from a portable power source separate from the virtual image generation subsystem,
the first assembly includes a first board and a projector driver providing image information and signals to the at least one projector,
the second assembly includes a second board, a microprocessor, the graphics processing unit, and one or more motion sensors or transducers, and
the at least one projector projects image information of the one or more virtual objects to the at least one eye of the user;
an eye model that represents the at least one eye with a first circular shape or circle and a second circular shape or circle that models a cornea of the at least one eye and is layered on top of the first circular shape or circle;
an eye tracking device positioned in relation to the virtual image generation subsystem as well as the at least one eye of the user and configured at least to:
detect one or more characteristics pertaining to an interaction between the at least one eye of the user and reflected light from the at least one eye at least by:
capturing, by a first set of sensors or transducers in the first assembly, a first light pattern emitted or reflected from one or more ambient light sources in a real-world environment;
projecting, with at least the projector driver in the first assembly and the at least one projector, a second light pattern generated by a set of light sources in the first assembly to the at least one eye of the user;
in response to the first light pattern, detecting the reflected light from the at least one eye using one or more second sensors or transducers in the first assembly; and
determining, by the microprocessor in the second assembly, the interaction at least by correlating the reflected light with the first light pattern and the second light pattern;
the microprocessor in the second assembly further configured to determine an eye pointing vector and a center of rotation of the at least one eye in the eye model using at least the first circular shape or circle and the second circular shape or circle based at least in part upon a characteristic of a cross-section of the at least one eye and a range of movement of the at least one eye;
the microprocessor in the second assembly further configured to determine a vectored distance for the eye pointing vector based at least in part upon the one or more characteristics and the center of rotation for the at least one eye in the eye model; and
the microprocessor in the second assembly further configured to determine at least one movement or pose for both the at least one eye and another eye of the user at least by using the vectored distance of the at least one eye and further by extrapolating one or more eye movement or pose characteristics with at least one or more parameters pertaining to the interaction and captured by one or more sensors for the at least one eye of the user. - View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20)
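The correlation step in the claims, matching reflected light against both the ambient (first) and projected (second) light patterns, can be sketched with a normalized correlation score. Everything here is an assumption for illustration; the patent does not disclose which correlation measure the microprocessor actually uses.

```python
import numpy as np

def correlate_reflection(reflected, first_pattern, second_pattern):
    """Score how strongly a reflected-light signal matches the ambient
    (first) and projected (second) light patterns. A zero-mean normalized
    dot product stands in for the claim's unspecified correlation."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))  # ~1.0 for matching signals, ~0.0 for unrelated ones
    return {"first": ncc(reflected, first_pattern),
            "second": ncc(reflected, second_pattern)}

rng = np.random.default_rng(0)
ambient = rng.normal(size=512)                      # first light pattern (ambient sources)
projected = rng.normal(size=512)                    # second light pattern (projected to the eye)
reflected = projected + 0.1 * rng.normal(size=512)  # eye mostly reflects the projected pattern
scores = correlate_reflection(reflected, ambient, projected)
# scores["second"] is near 1, scores["first"] near 0
```

A high score against the projected pattern and a low score against the ambient pattern is what lets the system attribute the detected glints to its own light sources rather than to the environment.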
-
Specification