HEAD POSE ASSESSMENT METHODS AND SYSTEMS
Abstract
Improvements are provided to effectively assess a user's face and head pose so that a computer or like device can track the user's attention toward a display device or devices. The region of the display or graphical user interface that the user is turned toward can then be automatically selected without requiring further input from the user. A frontal face detector is applied to detect the user's frontal face, and key facial points such as the left/right eye centers, left/right mouth corners, and nose tip are then detected by component detectors. The system then tracks the user's head with an image tracker and determines the yaw, tilt, and roll angles and other pose information of the user's head through a coarse-to-fine process according to the key facial points and/or confidence values output by the pose estimator.
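The coarse stage of the pose estimation described above can be illustrated with simple landmark geometry. This is a minimal sketch under assumptions not stated in the patent: roll is taken as the angle of the line joining the two eye centers, and a yaw proxy is taken as the horizontal offset of the nose tip from the eye midpoint, normalized by interocular distance. The function name and coordinate convention (image x/y pairs) are hypothetical.

```python
import math

def coarse_pose(left_eye, right_eye, nose_tip):
    """Coarse head-pose cues from detected key facial points.

    Roll: angle of the line joining the eye centers.
    Yaw proxy: horizontal offset of the nose tip from the eye midpoint,
    normalized by interocular distance (approximately 0 for a frontal face).
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.atan2(dy, dx)
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    interocular = math.hypot(dx, dy)
    yaw_proxy = (nose_tip[0] - mid_x) / interocular
    return roll, yaw_proxy
```

Such coarse cues would then be refined by the finer, iterative stage described in the claims.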
20 Claims
8. A system comprising:
a display device;
an image capturing device; and
a computing device operatively coupled to the display device and the image capturing device, and including:
a display module configured to output at least one signal suitable for causing the display device to display at least two different selectable regions; and
an iterated pose estimation module configured to determine a first head pose based on a first image and at least a second head pose based on a second image temporally subsequent to the first image, and automatically switch an operative user input focus between the at least two selectable regions based on at least one difference between the first head pose and at least the second head pose,
the iterated pose estimation module being configured to estimate a configuration for a plurality of key facial points associated with at least one of the first image or the second image, and iteratively optimize one or more pose parameters to minimize a distance between a projection of the estimated configuration for the plurality of key facial points and a corresponding actual configuration of the plurality of key facial points.
Dependent claims: 9, 10, 11, 12.
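The iterative optimization recited in claim 8 (adjusting pose parameters to minimize the distance between a projection of estimated key facial points and their observed positions) can be sketched as follows. This is an illustration, not the patented implementation: the 3-D model coordinates are invented, the projection is weak-perspective, and the optimizer is plain numerical gradient descent rather than whatever solver the specification uses.

```python
import numpy as np

# Hypothetical 3-D model of five key facial points (eye centers, mouth
# corners, nose tip) in arbitrary model units; illustrative values only.
MODEL_POINTS = np.array([
    [-30.0,  30.0, -10.0],   # left eye center
    [ 30.0,  30.0, -10.0],   # right eye center
    [-20.0, -30.0, -10.0],   # left mouth corner
    [ 20.0, -30.0, -10.0],   # right mouth corner
    [  0.0,   0.0,  20.0],   # nose tip
])

def rotation(yaw, pitch, roll):
    """Rotation matrix from yaw (Y axis), pitch (X axis), roll (Z axis)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def project(pose):
    """Weak-perspective projection of the model points under a pose."""
    yaw, pitch, roll = pose
    return (rotation(yaw, pitch, roll) @ MODEL_POINTS.T).T[:, :2]

def reprojection_error(pose, observed):
    """Squared distance between projected model points and observed points."""
    return np.sum((project(pose) - observed) ** 2)

def fit_pose(observed, steps=2000, lr=1e-5, eps=1e-6):
    """Iteratively optimize the pose parameters (yaw, pitch, roll) by
    numerical gradient descent on the reprojection error."""
    pose = np.zeros(3)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            d = np.zeros(3)
            d[i] = eps
            grad[i] = (reprojection_error(pose + d, observed)
                       - reprojection_error(pose - d, observed)) / (2 * eps)
        pose -= lr * grad
    return pose
```

In this sketch the pose recovered from landmarks observed under a known rotation converges back toward that rotation as the reprojection error is driven down.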
13. A computer-implemented method operable on a processor, the method comprising:
receiving a first image and at least a second image from an image capturing device, the second image temporally subsequent to the first image;
determining, at the processor, a first head pose based on the first image;
determining, at the processor, at least a second head pose based on the second image; and
switching an operative user input focus between at least two selectable regions of a display device based on at least one difference between the first head pose and at least the second head pose, the switching including storing a present work status associated with the first head pose and restoring a previously stored work status associated with the second head pose.
Dependent claims: 1, 2, 3, 4, 5, 6, 7, 14, 15, 16, 17, 19, 20.
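The focus-switching step of claim 13, including storing the present work status of the region losing focus and restoring the previously stored status of the region gaining focus, can be sketched as a small state machine. Everything here is hypothetical: the two region names, the yaw-difference threshold, and the class and method names are illustrative stand-ins, not the patent's design.

```python
class FocusManager:
    """Sketch of head-pose-driven focus switching between selectable
    regions, with per-region work-status save and restore."""

    def __init__(self, regions, yaw_threshold=0.3):
        self.work_status = {region: None for region in regions}
        self.focus = regions[0]
        self.yaw_threshold = yaw_threshold

    def update(self, prev_yaw, curr_yaw, current_status):
        """Switch focus when the yaw difference between two successive
        head poses exceeds a threshold. Returns the restored work status
        of the newly focused region (None if it has none), or the
        unchanged current status when no switch occurs."""
        delta = curr_yaw - prev_yaw
        if abs(delta) < self.yaw_threshold:
            return current_status                 # difference too small; no switch
        target = "right" if delta > 0 else "left"
        if target == self.focus:
            return current_status                 # already focused there
        self.work_status[self.focus] = current_status   # store present work status
        self.focus = target
        return self.work_status[target]                 # restore stored work status
```

For example, turning the head right far enough moves the focus to the right region, and turning back restores whatever work status was stored for the left region.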
16. The method of claim 13, further comprising classifying each of a plurality of portions of image data associated with the first image based on at least one classifying parameter to determine at least one facial region associated with at least one portion of a face of a user, wherein the face of the user is captured by the first image.
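The classification of image portions recited above can be sketched as a sliding-window scan that scores each portion against a classifying parameter. The scoring function here (window mean intensity against a threshold) is a deliberately simple stand-in for a real face classifier, and the window size, step, and function name are all assumptions for illustration.

```python
import numpy as np

def find_facial_region(image, win=8, step=4, threshold=0.5):
    """Classify fixed-size portions of a grayscale image: score each
    window with a stand-in classifier (mean intensity compared against
    the classifying parameter `threshold`) and return the top-left
    (row, col) of the best-scoring window, or None if no window passes."""
    best, best_score = None, -np.inf
    h, w = image.shape
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            score = image[r:r + win, c:c + win].mean()
            if score > threshold and score > best_score:
                best, best_score = (r, c), score
    return best
```

On a synthetic image with a single bright patch, the scan locates the window covering that patch; a real system would substitute a trained face/component classifier for the intensity score.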
Specification