Human-computer interface using high-speed and accurate tracking of user interactions
First Claim
1. An apparatus, comprising:
a display configured to present an interactive environment to a user;
an eye-tracker coupled to the display, the eye-tracker including at least two sensors, the at least two sensors being configured to record a plurality of sets of eye-movement signals from an eye of the user, each set of eye-movement signals being recorded by each sensor from the at least two sensors; and
an interfacing device operatively coupled to the display and the eye-tracker, the interfacing device including:
a memory; and
a processor operatively coupled to the memory and configured to:
receive the eye-movement signals from the at least two sensors in the eye-tracker;
generate and present a stimulus, via the interactive environment and via the display, to the user;
compute, based on each set of eye-movement signals from the plurality of sets of eye-movement signals, a set of gaze vectors, each gaze vector from the set of gaze vectors being associated with each sensor from the at least two sensors, the gaze vector associated with each sensor indicating a gaze angle of the eye of the user;
determine a degree of obliqueness of each gaze vector associated with each sensor from the at least two sensors, the degree of obliqueness being relative to a vertical angle associated with that sensor;
determine, based on the degree of obliqueness of each gaze vector associated with each sensor from the at least two sensors, a weight associated with each sensor from the at least two sensors, to generate a set of weights;
apply the set of weights to the plurality of sets of eye-movement signals to determine a set of calibrated eye-movement signals;
determine, based on the calibrated eye-movement signals, a point of focus of the user;
determine, based on the point of focus of the user, an action intended by the user; and
implement the action intended by the user.
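The calibration loop this claim recites (compute a gaze vector per sensor, score its obliqueness against that sensor's vertical reference, weight each sensor accordingly, and blend the per-sensor signals) might be sketched as follows. This is a minimal illustration: the inverse-obliqueness weighting rule and all function names are assumptions, since the claim does not specify a particular weighting function.

```python
import numpy as np

def obliqueness(gaze_vector, sensor_vertical):
    """Angle (radians) between a sensor's gaze vector and that sensor's
    vertical reference axis -- the claim's 'degree of obliqueness
    relative to a vertical angle associated with that sensor'."""
    g = gaze_vector / np.linalg.norm(gaze_vector)
    v = sensor_vertical / np.linalg.norm(sensor_vertical)
    return np.arccos(np.clip(np.dot(g, v), -1.0, 1.0))

def sensor_weights(gaze_vectors, sensor_verticals):
    """One weight per sensor; here a less oblique view is trusted more.
    Inverse weighting is an assumption, not claim language."""
    obl = np.array([obliqueness(g, v)
                    for g, v in zip(gaze_vectors, sensor_verticals)])
    raw = 1.0 / (1.0 + obl)      # assumed weighting rule
    return raw / raw.sum()       # normalize so weights sum to 1

def calibrate(signals, weights):
    """Weighted combination of per-sensor eye-movement signals
    (one row of samples per sensor)."""
    return np.tensordot(weights, np.asarray(signals), axes=1)
```

A sensor looking straight along its vertical reference (obliqueness 0) receives the largest weight, so its samples dominate the calibrated signal.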
Abstract
Embodiments described herein relate to systems, devices, and methods for use in the implementation of a human-computer interface using high-speed and efficient tracking of user interactions with a User Interface/User Experience that is strategically presented to the user. Embodiments described herein also relate to the implementation of a hardware-agnostic human-computer interface that uses neural, oculomotor, and/or electromyography signals to mediate user manipulation of machines and devices.
32 Claims
1. An apparatus, comprising: the elements recited in full under First Claim above.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A method, comprising:
presenting, to a user and via a display, a stimulus included in an interactive user interface;
receiving, from an eye-tracker, eye-movement signals associated with the user's behavior, the eye-movement signals being recorded independently by at least two sensors positioned on the eye-tracker;
receiving information related to the presented stimulus;
computing, based on the eye-movement signals, a gaze vector associated with each sensor from the at least two sensors, the gaze vector indicating a gaze angle of the eye of the user;
determining a degree of obliqueness of the gaze vector associated with each sensor from the at least two sensors, the degree of obliqueness of the gaze vector being relative to a vertical angle associated with that sensor from the at least two sensors;
defining, based on the degree of obliqueness of the gaze vector associated with each sensor from the at least two sensors, a weight associated with each sensor from the at least two sensors, to generate a set of weights;
applying the set of weights to the eye-movement signals recorded independently by each sensor from the at least two sensors, to compute a set of calibrated eye-movement signals, the determining of the point of focus being based on the set of calibrated eye-movement signals;
determining, based on the calibrated eye-movement signals, a point of focus of the user;
determining, based on the point of focus and the stimulus, an action intended by the user; and
implementing the action via the interactive user interface.
(Dependent claims: 12, 13, 14)
15. An apparatus, comprising:
a display configured to present an interactive environment to a user;
an eye-tracker coupled to the display, the eye-tracker including at least two sensors, the at least two sensors being configured to record a plurality of sets of eye-movement signals from an eye of the user, each set of eye-movement signals being recorded, independently, by each sensor from the at least two sensors; and
an interfacing device operatively coupled to the display and the eye-tracker, the interfacing device including:
a memory; and
a processor operatively coupled to the memory and configured to:
receive the eye-movement signals from the at least two sensors in the eye-tracker;
generate and present a stimulus, via the interactive environment and via the display, to the user;
compute, based on each set of eye-movement signals from the plurality of sets of eye-movement signals, a set of gaze vectors, each gaze vector from the set of gaze vectors being associated with each sensor from the at least two sensors, the gaze vector associated with each sensor indicating a gaze angle of the eye of the user;
determine a degree of obliqueness of each gaze vector associated with each sensor from the at least two sensors, the degree of obliqueness being relative to a vertical angle associated with that sensor;
determine, based on the degree of obliqueness of each gaze vector associated with each sensor from the at least two sensors and an empirically pre-determined weighting function, a weight associated with each sensor from the at least two sensors, to generate a set of weights;
apply the set of weights to the plurality of sets of eye-movement signals to determine a set of calibrated eye-movement signals;
determine, based on the calibrated eye-movement signals, a point of focus of the user;
determine, based on the point of focus of the user, an action intended by the user; and
implement the action intended by the user.
(Dependent claims: 16, 17, 18, 19, 20, 21)
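Claim 15 differs from claim 1 mainly in requiring an "empirically pre-determined weighting function". A minimal sketch, assuming that function is stored as an offline-measured calibration curve and interpolated at run time; the stored numbers and all names below are purely illustrative, not taken from the patent:

```python
import numpy as np

# Hypothetical empirically pre-determined weighting function:
# obliqueness angles (radians) sampled during an offline calibration
# session, paired with the weight measured as appropriate for each.
CAL_ANGLES  = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
CAL_WEIGHTS = np.array([1.0, 0.9, 0.6, 0.3, 0.1])

def empirical_weight(obliqueness_rad):
    """Interpolate the stored calibration curve at the observed
    degree of obliqueness."""
    return float(np.interp(obliqueness_rad, CAL_ANGLES, CAL_WEIGHTS))

def sensor_weights(obliquenesses):
    """Per-sensor weights from the empirical curve, normalized to
    sum to 1 before being applied to the eye-movement signals."""
    raw = np.array([empirical_weight(o) for o in obliquenesses])
    return raw / raw.sum()
```

Storing the curve as sample points and interpolating keeps the run-time step cheap while letting the calibration session determine the curve's shape.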
22. An apparatus, comprising:
a display configured to present an interactive environment to a user;
an eye-tracker coupled to the display, the eye-tracker including at least two sensors, the at least two sensors being configured to record a plurality of sets of eye-movement signals from an eye of the user, each set of eye-movement signals being recorded, independently, by each sensor from the at least two sensors; and
an interfacing device operatively coupled to the display and the eye-tracker, the interfacing device including:
a memory; and
a processor operatively coupled to the memory and configured to:
receive the eye-movement signals from the at least two sensors in the eye-tracker;
generate and present a stimulus, via the interactive environment and via the display, to the user;
identify a set of missing data points in the plurality of sets of eye-movement signals;
receive, from the eye-tracker, information related to the at least two sensors;
generate, based on the information related to the at least two sensors, a kinematics model of a set of simulated eye-movements of a simulated user;
compute, based on the kinematics model, a plurality of sets of simulated eye-movement signals, each set of simulated eye-movement signals being associated with each sensor from the at least two sensors;
compute a set of replacement data points to replace the set of missing data points in the eye-movement signals received from the at least two sensors, based on the plurality of sets of simulated eye-movement signals;
incorporate the set of replacement data points to replace the set of missing data points and to generate calibrated eye-movement signals associated with each sensor from the at least two sensors, the point of focus of the user being determined based on the calibrated eye-movement signals;
determine, based on the calibrated eye-movement signals, a point of focus of the user;
determine, based on the point of focus of the user, an action intended by the user; and
implement the action intended by the user.
(Dependent claims: 23, 24, 25, 26, 27, 28)
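Claim 22 fills gaps in the recorded signals with data points computed from a kinematics model of simulated eye movements. The patent does not specify the model, so the sketch below assumes a minimum-jerk-style saccade profile as the simulated trajectory; the function names and parameters are hypothetical.

```python
import numpy as np

def simulate_eye_movement(t, amplitude=10.0, duration=1.0):
    """Toy kinematics model: gaze angle (degrees) over time for a
    saccade-like movement, using a minimum-jerk position profile.
    Purely illustrative; the patent's model is unspecified."""
    tau = np.clip(np.asarray(t, dtype=float) / duration, 0.0, 1.0)
    return amplitude * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def fill_missing(signal, t):
    """Replace NaN samples (the 'missing data points') with the
    values the kinematics model predicts at those times."""
    sim = simulate_eye_movement(t)
    out = np.asarray(signal, dtype=float).copy()
    missing = np.isnan(out)
    out[missing] = sim[missing]
    return out
```

Marking dropped samples as NaN and overwriting only those indices leaves every genuinely recorded sample untouched, matching the claim's "incorporate the set of replacement data points" step.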
29. A method, comprising:
presenting, to a user and via a display, a stimulus included in an interactive user interface;
receiving, from an eye-tracker, eye-movement signals associated with the user's behavior, the eye-movement signals being recorded independently by at least two sensors positioned on the eye-tracker;
receiving information related to the presented stimulus;
computing, based on the eye-movement signals, a set of gaze vectors of an eye of the user, each gaze vector from the set of gaze vectors being associated with each sensor of the at least four sensors;
resolving the set of gaze vectors along a first axis and a second axis orthogonal to the first axis;
computing a first average gaze vector based on a first weighted average of gaze vectors associated with a first set of sensors, the first set of sensors being grouped along the first axis;
computing a second average gaze vector based on a second weighted average of gaze vectors associated with a second set of sensors, the second set of sensors being grouped along the second axis;
computing a calibrated gaze vector based on the first average gaze vector and the second average gaze vector;
determining, based on the calibrated gaze vector, a point of focus of the user;
determining, based on the point of focus and the stimulus, an action intended by the user; and
implementing the action via the interactive user interface.
(Dependent claims: 30, 31, 32)
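The per-axis averaging in claim 29 might look like the following, assuming two sensors grouped along each axis and a simple mean of the two axis averages as the final combination; the claim does not state how the two averages are combined, and all names here are hypothetical.

```python
import numpy as np

def calibrated_gaze(gaze_vectors, weights, axis1_idx, axis2_idx):
    """Sketch of claim 29: weighted-average the gaze vectors of the
    sensors grouped along each of two orthogonal axes, then combine
    the two axis averages into one calibrated gaze vector."""
    g = np.asarray(gaze_vectors, dtype=float)
    w = np.asarray(weights, dtype=float)

    def weighted_avg(idx):
        idx = np.asarray(idx)
        wi = w[idx] / w[idx].sum()          # renormalize within group
        return np.tensordot(wi, g[idx], axes=1)

    first  = weighted_avg(axis1_idx)        # e.g. the horizontal pair
    second = weighted_avg(axis2_idx)        # e.g. the vertical pair
    v = (first + second) / 2.0              # assumed combination rule
    return v / np.linalg.norm(v)            # unit-length gaze direction
```

With four sensors whose gaze estimates agree, both axis averages coincide and the calibrated vector is simply that shared direction; disagreement between the two groups is averaged out.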
Specification