Determining gaze target based on facial features
Abstract
A computer system can be controlled with non-contact inputs, such as eye-tracking devices. A visual indicator can be presented on a display to indicate the location where a computer function will take place (e.g., a common cursor). The visual indicator can be moved to a gaze target in response to continued detection of an action (e.g., touchpad touch) by a user for a predetermined period of time. A delay between the action and the movement of the visual indicator can allow a user time to “abort” movement of the visual indicator. Additionally, once the visual indicator has moved, the visual indicator can be controlled with additional precision as the user moves the gaze while continuing the action (e.g., continued holding of the touchpad).
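The delay-and-abort behavior the abstract describes can be sketched as a small state check; the function name, parameters, and the 0.3-second delay below are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of the abstract's delayed indicator movement.
# All names and the 0.3 s delay value are illustrative assumptions.

def update_indicator(now, action_start, action_held, gaze_target, indicator_pos,
                     delay=0.3):
    """Return the new position of the visual indicator.

    The indicator jumps to the gaze target only after the user's action
    (e.g., a touchpad touch) has persisted for `delay` seconds; releasing
    the action before then "aborts" the move.
    """
    if not action_held:
        return indicator_pos              # action released: move aborted
    if now - action_start < delay:
        return indicator_pos              # still inside the abort window
    return gaze_target                    # action held long enough: move
```

Once the indicator has moved, continued holding of the action could keep it slaved to the gaze for finer adjustment, matching the abstract's description of continued control.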
17 Claims
1. A computing device, comprising:
- a display device for presenting a display;
- an image sensor configured to capture images of at least a portion of a user of the computing device; and
- at least one processor;
wherein the at least one processor is configured to:
- determine a first facial feature based on at least one captured image, the first facial feature comprising an eye lid position of an eye of the user, and the first facial feature being designated for control of the display in a vertical direction;
- determine a second facial feature based on the at least one captured image, the second facial feature being designated for control of the display in a horizontal direction, the second facial feature comprising a head pose;
- determine an orientation value for the head pose of the user based on the at least one captured image, the orientation value indicating an offset of the head pose from a direct forward line of sight, wherein the orientation value is less than or equal to a threshold orientation value of at least seven degrees;
- in the event the orientation value does not exceed the threshold orientation value, determine a gaze target of the user on the display based on the first facial feature and the second facial feature by at least:
  - determining a vertical component of the gaze target based on an openness of the eye lid position, wherein:
    - a first eye lid position at an upper location on the eye of the user corresponds with the gaze target being adjacent an upper portion of the display; and
    - a second eye lid position at a lower location on the eye of the user corresponds with the gaze target being adjacent a bottom portion of the display; and
  - determining a horizontal component of the gaze target based on the second facial feature; and
- scroll content displayed on the display based on the gaze target being within a predefined scroll zone on the display.

Dependent claims: 2, 3, 4, 5, 10, 11, 12, 13, 14.
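The feature-to-coordinate mapping recited in claim 1 can be sketched as follows. The linear mappings, units, and all names (`gaze_target`, `THRESHOLD_DEG`, the 0-to-1 openness scale) are illustrative assumptions; the claim recites the relationships but not the formulas.

```python
# Illustrative sketch of the claimed gaze-target determination.
# Formulas, units, and names are assumptions, not the patent's implementation.

THRESHOLD_DEG = 7.0   # claim recites a threshold of at least seven degrees

def gaze_target(eyelid_openness, head_yaw_deg, width, height):
    """Map the two facial features to a display coordinate.

    eyelid_openness: 0.0 (lid at a lower location) .. 1.0 (lid at an upper
                     location) -- controls the vertical direction
    head_yaw_deg:    offset of the head pose from a direct forward line of
                     sight -- controls the horizontal direction
    Returns (x, y) with y = 0 at the top of the display, or None when the
    orientation value exceeds the threshold orientation value.
    """
    if abs(head_yaw_deg) > THRESHOLD_DEG:
        return None                               # head pose out of range
    # Vertical: lid at upper location (more open) -> gaze near upper portion.
    y = (1.0 - eyelid_openness) * height
    # Horizontal: yaw in [-T, +T] maps linearly across the display width.
    x = (head_yaw_deg + THRESHOLD_DEG) / (2 * THRESHOLD_DEG) * width
    return (x, y)

def should_scroll(target, scroll_zone):
    """Scroll when the gaze target falls within a predefined scroll zone,
    given here as an (x0, y0, x1, y1) rectangle."""
    if target is None:
        return False
    x, y = target
    x0, y0, x1, y1 = scroll_zone
    return x0 <= x <= x1 and y0 <= y <= y1
```

For example, a fully open lid with the head facing straight ahead would place the gaze target at the horizontal center of the top edge, inside a scroll zone spanning the top of the display.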
6. A method of providing user input to a computer device, the method comprising:
- determining a first facial feature based on at least one captured image, the first facial feature comprising an eye lid position of an eye of a user, and the first facial feature being designated for control of a display in a vertical direction;
- determining a second facial feature based on the at least one captured image, the second facial feature being designated for control of the display in a horizontal direction, the second facial feature comprising a head pose;
- determining an orientation value for the head pose of the user based on the at least one captured image, the orientation value indicating an offset of the head pose from a direct forward line of sight, wherein the orientation value is less than or equal to a threshold orientation value of at least seven degrees;
- in the event the orientation value does not exceed the threshold orientation value, determining a gaze target of the user on the display based on the first facial feature and the second facial feature by at least:
  - determining a vertical component of the gaze target based solely on an openness of the eye lid position, wherein:
    - a first eye lid position at an upper location on the eye of the user corresponds with the gaze target being adjacent an upper portion of the display; and
    - a second eye lid position at a lower location on the eye of the user corresponds with the gaze target being adjacent a bottom portion of the display; and
  - determining a horizontal component of the gaze target based on the second facial feature; and
- scrolling content displayed on the display based on the gaze target being within a predefined scroll zone on the display.

Dependent claims: 7, 8, 15, 16.
9. A non-transitory computer-readable storage medium storing program instructions, wherein the program instructions, when executed by one or more processors, perform at least the steps of:
- determining a first facial feature based on at least one captured image, the first facial feature comprising an eye lid position of an eye of a user, and the first facial feature being designated for control of a display in a vertical direction;
- determining a second facial feature based on the at least one captured image, the second facial feature being designated for control of the display in a horizontal direction, the second facial feature comprising a head pose;
- determining an orientation value for the head pose of the user based on the at least one captured image, the orientation value indicating an offset of the head pose from a direct forward line of sight;
- in the event the orientation value is less than or equal to a threshold orientation value of at least seven degrees, determining a gaze target of the user on a display of a display device based on the first facial feature and the second facial feature by at least:
  - determining a vertical component of the gaze target based solely on an openness of the eye lid position; and
  - determining a horizontal component of the gaze target based on the second facial feature; and
- scrolling content displayed on the display based on the gaze target being within a predefined scroll zone on the display.

Dependent claim: 17.
Specification