User interface control using gaze tracking
First Claim
1. A method performed by data processing apparatus, the method comprising:
    receiving a first image of a sequence of images, the first image determined to depict at least a face of a user;
    in response to determining that the first image depicts the face, fitting a shape model to the face, the fitting including:
        identifying, for the face, a shape model that includes one or more facial feature points; and
        fitting the shape model to the face in the first image to generate a fitted shape model by adjusting a location of each of the facial feature points of the shape model to overlap with a corresponding facial feature point of the face in the first image; and
    generating, from the first image and based on the fitted shape model, a template image for each facial feature point of the face in the first image, the template image for each facial feature point of the face depicting a portion of the face at a location of the facial feature point of the face in the first image, the portion of the face for each template image being less than all of the face;
    for each subsequent image in the sequence of images:
        for each facial feature point of the face:
            comparing the template image for the facial feature point of the face to a respective image portion of the subsequent image located at a same location in the subsequent image as a location at which the facial feature point of the face was identified in a previous image; and
        for at least one facial feature point of the face for which the facial feature point's template image does not match the respective image portion, comparing the template image for the at least one facial feature point to one or more additional image portions of the subsequent image until a match is found between the template image for the at least one facial feature point and one of the one or more additional image portions; and
        determining, for the subsequent image, a direction in which the user is looking based on a location of a matching image portion in the subsequent image for each facial feature point of the face.
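The per-frame tracking loop recited above — compare each feature point's template at its previous location, then widen the search if it does not match — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the sum-of-squared-differences similarity measure, and the fixed search radius are all assumptions.

```python
import numpy as np

def match_template(frame, template, prev_xy, search_radius=8):
    """Search a small window around the previous feature location for the
    patch that best matches the template, scored by sum of squared
    differences (SSD). Returns the (x, y) of the best patch's top-left
    corner and its SSD score. Illustrative only."""
    th, tw = template.shape
    h, w = frame.shape
    px, py = prev_xy
    best_xy, best_ssd = prev_xy, np.inf
    # Scan candidate positions in a (2r+1) x (2r+1) window, clamped to
    # the frame bounds, including the previous position itself.
    for y in range(max(0, py - search_radius), min(h - th, py + search_radius) + 1):
        for x in range(max(0, px - search_radius), min(w - tw, px + search_radius) + 1):
            patch = frame[y:y + th, x:x + tw].astype(np.float64)
            ssd = np.sum((patch - template) ** 2)
            if ssd < best_ssd:
                best_ssd, best_xy = ssd, (x, y)
    return best_xy, best_ssd
```

In use, the template for each feature point is cut from the first (fitted) frame, and `prev_xy` is updated with each match so the search window follows the feature across the sequence.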
Abstract
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for identifying a direction in which a user is looking. In one aspect, a method includes receiving an image of a sequence of images. The image can depict a face of a user. A template image for each particular facial feature point can be compared to one or more image portions of the image. The template image for the particular facial feature point can include a portion of a previous image of the sequence of images that depicted the facial feature point. Based on the comparison, a matching image portion of the image that matches the template image for the particular facial feature point is identified. A location of the matching image portion is identified in the image. A direction in which the user is looking is determined based on the identified location for each template image.
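The abstract's final step — determining a gaze direction from the matched feature-point locations — could take many forms. One coarse illustration compares a tracked pupil position against the midpoint of the eye corners; the function, its thresholds, and the three-point input are hypothetical choices for this sketch, not the patent's formulation.

```python
def gaze_direction(eye_outer, eye_inner, pupil, dead_zone=0.15):
    """Coarse horizontal gaze estimate from three tracked feature
    points of one eye, each an (x, y) tuple. The pupil's offset from
    the eye-corner midpoint, normalized by eye width, is compared to a
    dead zone. Illustrative only -- not the patent's method."""
    mid_x = (eye_outer[0] + eye_inner[0]) / 2.0
    width = abs(eye_inner[0] - eye_outer[0])
    offset = (pupil[0] - mid_x) / width  # normalized horizontal offset
    if offset > dead_zone:
        return "right"
    if offset < -dead_zone:
        return "left"
    return "center"

# e.g. gaze_direction((0, 0), (40, 0), (30, 0)) -> "right"
```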
22 Citations
16 Claims
1. A method performed by data processing apparatus, the method comprising:
    receiving a first image of a sequence of images, the first image determined to depict at least a face of a user;
    in response to determining that the first image depicts the face, fitting a shape model to the face, the fitting including:
        identifying, for the face, a shape model that includes one or more facial feature points; and
        fitting the shape model to the face in the first image to generate a fitted shape model by adjusting a location of each of the facial feature points of the shape model to overlap with a corresponding facial feature point of the face in the first image; and
    generating, from the first image and based on the fitted shape model, a template image for each facial feature point of the face in the first image, the template image for each facial feature point of the face depicting a portion of the face at a location of the facial feature point of the face in the first image, the portion of the face for each template image being less than all of the face;
    for each subsequent image in the sequence of images:
        for each facial feature point of the face:
            comparing the template image for the facial feature point of the face to a respective image portion of the subsequent image located at a same location in the subsequent image as a location at which the facial feature point of the face was identified in a previous image; and
        for at least one facial feature point of the face for which the facial feature point's template image does not match the respective image portion, comparing the template image for the at least one facial feature point to one or more additional image portions of the subsequent image until a match is found between the template image for the at least one facial feature point and one of the one or more additional image portions; and
        determining, for the subsequent image, a direction in which the user is looking based on a location of a matching image portion in the subsequent image for each facial feature point of the face.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A system, comprising:
    a data processing apparatus; and
    a memory storage apparatus in data communication with the data processing apparatus, the memory storage apparatus storing instructions executable by the data processing apparatus and that upon such execution cause the data processing apparatus to perform operations comprising:
        receiving a first image of a sequence of images, the first image determined to depict at least a face of a user;
        in response to determining that the first image depicts the face, fitting a shape model to the face, the fitting including:
            identifying, for the face, a shape model that includes one or more facial feature points; and
            fitting the shape model to the face in the first image to generate a fitted shape model by adjusting a location of each of the facial feature points of the shape model to overlap with a corresponding facial feature point of the face in the first image; and
        generating, from the first image and based on the fitted shape model, a template image for each facial feature point of the face in the first image, the template image for each facial feature point of the face depicting a portion of the face at a location of the facial feature point of the face in the first image, the portion of the face for each template image being less than all of the face;
        for each subsequent image in the sequence of images:
            for each facial feature point of the face:
                comparing the template image for the facial feature point of the face to a respective image portion of the subsequent image located at a same location in the subsequent image as a location at which the facial feature point of the face was identified in a previous image; and
            for at least one facial feature point of the face for which the facial feature point's template image does not match the respective image portion, comparing the template image for the at least one facial feature point to one or more additional image portions of the subsequent image until a match is found between the template image for the at least one facial feature point and one of the one or more additional image portions; and
            determining, for the subsequent image, a direction in which the user is looking based on a location of a matching image portion in the subsequent image for each facial feature point of the face.
- View Dependent Claims (10, 11, 12, 13, 14, 15)
16. A non-transitory computer storage medium encoded with a computer program, the program comprising instructions that when executed by a data processing apparatus cause the data processing apparatus to perform operations comprising:
    receiving a first image of a sequence of images, the first image determined to depict at least a face of a user;
    in response to determining that the first image depicts the face, fitting a shape model to the face, the fitting including:
        identifying, for the face, a shape model that includes one or more facial feature points; and
        fitting the shape model to the face in the first image to generate a fitted shape model by adjusting a location of each of the facial feature points of the shape model to overlap with a corresponding facial feature point of the face in the first image; and
    generating, from the first image and based on the fitted shape model, a template image for each facial feature point of the face in the first image, the template image for each facial feature point of the face depicting a portion of the face at a location of the facial feature point of the face in the first image, the portion of the face for each template image being less than all of the face;
    for each subsequent image in the sequence of images:
        for each facial feature point of the face:
            comparing the template image for the facial feature point of the face to a respective image portion of the subsequent image located at a same location in the subsequent image as a location at which the facial feature point of the face was identified in a previous image; and
        for at least one facial feature point of the face for which the facial feature point's template image does not match the respective image portion, comparing the template image for the at least one facial feature point to one or more additional image portions of the subsequent image until a match is found between the template image for the at least one facial feature point and one of the one or more additional image portions; and
        determining, for the subsequent image, a direction in which the user is looking based on a location of a matching image portion in the subsequent image for each facial feature point of the face.
Specification