Face and eye tracking using facial sensors within a head-mounted display
First Claim
1. A head mounted display (HMD) comprising:
a display element configured to display content to a user wearing the HMD;
an optics block configured to direct light from the display element to an exit pupil of the HMD;
a plurality of light sources positioned at discrete locations around the optics block, the plurality of light sources configured to illuminate portions of a face, inside the HMD, of the user;
a facial sensor configured to capture one or more facial data of a portion of the face illuminated by one or more of the plurality of light sources; and
a controller configured to:
receive a plurality of captured facial data from the facial sensor;
identify a plurality of surfaces of the portion of the face based at least in part on the plurality of captured facial data;
retrieve a plurality of landmarks each indicating a position of a different feature of the face, the plurality of landmarks determined based on calibration attributes corresponding to a plurality of different facial expressions performed by the user for a calibration process;
retrieve global calibration attributes, the global calibration attributes generated based on facial expressions performed by a population of users;
responsive to determining that a difference between the calibration attributes and the global calibration attributes is less than a threshold value:
map the plurality of surfaces to one of the plurality of landmarks; and
generate facial animation information describing the portion of the face of the user based at least in part on the plurality of captured facial data and the mapping.
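The controller steps recited in the claim can be sketched as a single pass of a hypothetical pipeline. All concrete details here are assumptions for illustration only: the claim does not specify data types, the distance metric for the calibration comparison, the threshold value, or how surfaces are mapped to landmarks.

```python
# Hypothetical sketch of the claimed controller logic. Names such as
# THRESHOLD and the nearest-landmark mapping are illustrative
# assumptions, not details recited in the claim.
import numpy as np

THRESHOLD = 0.1  # assumed threshold value for the calibration comparison

def controller_step(captured_facial_data, user_attrs, global_attrs, landmarks):
    """One pass of the claimed pipeline: identify surfaces, gate on the
    calibration-attribute difference, map surfaces to landmarks, and
    emit facial animation information."""
    # Identify surfaces of the illuminated face portion (placeholder:
    # treat each captured frame as one estimated surface point).
    surfaces = [np.asarray(frame, dtype=float) for frame in captured_facial_data]

    # Difference between the user's calibration attributes and the
    # population-level (global) calibration attributes.
    diff = np.linalg.norm(np.asarray(user_attrs, dtype=float)
                          - np.asarray(global_attrs, dtype=float))
    if diff >= THRESHOLD:
        return None  # claim only recites behavior when diff < threshold

    # Map each surface to the nearest landmark (one possible reading of
    # "map the plurality of surfaces to one of the plurality of landmarks").
    mapping = {}
    for i, surface in enumerate(surfaces):
        nearest = min(landmarks,
                      key=lambda lm: np.linalg.norm(surface - np.asarray(lm["position"])))
        mapping[i] = nearest["feature"]

    # Facial animation information based on the captured data and mapping.
    return {"surfaces": surfaces, "mapping": mapping}
```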
3 Assignments
0 Petitions
Abstract
A head mounted display (HMD) in a VR system includes sensors for tracking the eyes and face of a user wearing the HMD. The VR system records calibration attributes such as landmarks of the face of the user. Light sources illuminate portions of the user's face covered by the HMD while facial sensors capture facial data. The VR system analyzes the facial data to determine the orientation of planar sections of the illuminated portions of the face. The VR system aggregates the planar sections and maps them to landmarks of the face to generate a facial animation of the user, which can also include eye orientation information. The facial animation is represented as a virtual avatar and presented to the user.
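The abstract's step of determining the orientation of planar sections can be illustrated with a standard least-squares plane fit. This is a sketch under assumptions: the SVD-based method and the 3D point input are common practice, not the technique recited in the specification.

```python
# Hedged sketch: estimate the orientation (unit normal) of a planar
# facial section from 3D sample points, e.g. points recovered from the
# facial sensor. The SVD approach is an illustrative assumption.
import numpy as np

def plane_normal(points):
    """Fit a plane to Nx3 points and return its unit normal."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # direction of least variance across the points, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```

A section's normal obtained this way gives the planar orientation that the system can then aggregate with neighboring sections before mapping to landmarks.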
28 Citations
20 Claims
10. A head mounted display (HMD) comprising:
a display element configured to display content to a user wearing the HMD;
a plurality of light sources positioned at discrete locations outside a line of sight of the user, the plurality of light sources configured to illuminate portions of a face, inside the HMD, of the user;
a facial sensor configured to capture one or more facial data of a portion of the face illuminated by one or more of the plurality of light sources; and
a controller configured to:
receive a plurality of captured facial data from the facial sensor;
identify a plurality of surfaces of the portion of the face based at least in part on the plurality of captured facial data;
retrieve a plurality of landmarks each indicating a position of a different feature of the face, the plurality of landmarks determined based on calibration attributes corresponding to a plurality of different facial expressions performed by the user for a calibration process;
retrieve global calibration attributes, the global calibration attributes generated based on facial expressions performed by a population of users;
responsive to determining that a difference between the calibration attributes and the global calibration attributes is less than a threshold value:
map the plurality of surfaces to one of the plurality of landmarks; and
generate facial animation information describing the portion of the face of the user based at least in part on the plurality of captured facial data and the mapping.
Dependent claims: 11, 12, 13, 14, 15, 16, 17.
18. A head mounted display (HMD) comprising:
a display element configured to display content to a user wearing the HMD;
an optics block configured to direct light from the display element to an exit pupil of the HMD;
a plurality of light sources positioned at discrete locations around the optics block, the plurality of light sources configured to illuminate portions of a face, inside the HMD, of the user;
a facial sensor configured to capture one or more facial data of a portion of the face illuminated by one or more of the plurality of light sources; and
a controller configured to:
receive a plurality of captured facial data from the facial sensor;
identify a plurality of surfaces of the portion of the face based at least in part on the plurality of captured facial data;
retrieve global calibration attributes, the global calibration attributes generated based on facial expressions performed by a population of users;
responsive to determining that a difference between calibration attributes of the user and the global calibration attributes is less than a threshold value:
provide the plurality of surfaces to a virtual reality (VR) console; and
receive, from the VR console, a virtual animation of the portion of the face of the user, the virtual animation generated by mapping the plurality of surfaces to one of a plurality of landmarks each indicating a position of a different feature of the face, the plurality of landmarks determined based on the calibration attributes of the user corresponding to a plurality of different facial expressions performed by the user for a calibration process.
Dependent claims: 19, 20.
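Claim 18 differs from claims 1 and 10 in that the mapping runs on an external VR console rather than on the HMD's controller. The split can be sketched as follows; the `VRConsole` and `HMDController` classes, their message shapes, and the threshold are hypothetical, since the claim defines no transport or data format.

```python
# Hypothetical sketch of the claim-18 division of labor: the HMD gates
# on the calibration comparison, forwards identified surfaces to a VR
# console, and receives a virtual animation back. All names and data
# shapes are illustrative assumptions.
import numpy as np

class VRConsole:
    """Stand-in for the external VR console that performs the mapping."""
    def __init__(self, landmarks):
        self.landmarks = landmarks  # e.g. [{"feature": ..., "position": [...]}]

    def animate(self, surfaces):
        """Map each surface to its nearest landmark and return animation data."""
        frames = []
        for surface in surfaces:
            nearest = min(self.landmarks,
                          key=lambda lm: np.linalg.norm(np.asarray(surface, dtype=float)
                                                        - np.asarray(lm["position"])))
            frames.append({"feature": nearest["feature"], "surface": surface})
        return {"animation": frames}

class HMDController:
    """HMD-side controller: identifies surfaces, then offloads mapping."""
    def __init__(self, console, user_attrs, global_attrs, threshold=0.1):
        self.console = console
        self.user_attrs = np.asarray(user_attrs, dtype=float)
        self.global_attrs = np.asarray(global_attrs, dtype=float)
        self.threshold = threshold

    def process(self, surfaces):
        # Gate on the calibration-attribute difference, as claim 18 recites.
        diff = np.linalg.norm(self.user_attrs - self.global_attrs)
        if diff >= self.threshold:
            return None
        # Provide the surfaces to the console; receive the animation back.
        return self.console.animate(surfaces)
```

Pushing the landmark mapping to the console keeps the HMD-side work limited to capture and surface identification, which is the architectural distinction claim 18 draws from claims 1 and 10.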
Specification