Facial animation using facial sensors within a head-mounted display
First Claim
1. A method comprising:
causing, inside a head mounted display (HMD), a plurality of light sources to emit light by a single light source at a time in a particular sequence to illuminate a portion of a face of a user wearing the HMD, wherein the portion of the face includes the eyes of the user and portions of an eyebrow and a cheek of the user;
capturing a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD, wherein the plurality of facial data describes a plurality of frames of an image including a plurality of pixels, each pixel associated with a different coordinate location of the image, each of the plurality of frames captured when a different single light source of the plurality of light sources illuminated the portion of the face;
for each coordinate location of the image:
identifying a frame of the plurality of frames having the greatest intensity value based on reflected light from the portion of the face at the coordinate location;
identifying a position of a light source of the plurality of light sources illuminating the portion of the face when the frame was captured, the reflected light originating from the light source;
determining a planar section of the portion of the face based on the position of the light source and the coordinate location; and
determining a normal vector to the planar section based on the position of the light source illuminating the portion of the face when the frame of the plurality of frames having the greatest intensity value was captured;
generating a virtual surface describing orientation of the portion of the face by aggregating the normal vectors for the planar sections;
mapping the virtual surface to one or more landmarks of the face; and
generating facial animation information based at least in part on the mapping and the virtual surface, the facial animation information describing a portion of a virtual face corresponding to the portion of the face.
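The per-pixel procedure recited in the claim resembles a simplified photometric-stereo step: for each pixel, pick the frame in which that pixel is brightest, and orient the local planar section toward the light source that produced that frame. A minimal sketch of that step, assuming known light-source positions and approximate 3-D locations for each pixel's point on the face (the function name, array layout, and the direction-to-light simplification are hypothetical, not taken from the patent):

```python
import numpy as np

def estimate_normals(frames, light_positions, pixel_coords):
    """For each pixel, pick the frame with the greatest intensity and
    take the direction toward that frame's light source as the normal
    of the local planar section (a hypothetical simplification)."""
    # frames: (num_lights, H, W) intensity images, one per light source
    # light_positions: (num_lights, 3) positions of the light sources
    # pixel_coords: (H, W, 3) approximate 3-D point on the face for
    #   each pixel (assumed known, e.g. from calibration)
    brightest = np.argmax(frames, axis=0)                 # (H, W) index of max-intensity frame
    to_light = light_positions[brightest] - pixel_coords  # (H, W, 3) vector toward that light
    normals = to_light / np.linalg.norm(to_light, axis=-1, keepdims=True)
    return brightest, normals
```

Per the claim, these normals would then be aggregated into a virtual surface describing the orientation of the imaged portion of the face.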
Abstract
A facial tracking system generates a virtual rendering of a portion of a face of a user wearing a head-mounted display (HMD). The facial tracking system illuminates portions of the face inside the HMD. The facial tracking system captures a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD. A plurality of planar sections of the portion of the face are identified based at least in part on the plurality of facial data. The plurality of planar sections are mapped to one or more landmarks of the face. Facial animation information is generated based at least in part on the mapping, the facial animation information describing a portion of a virtual face corresponding to the portion of the user's face.
20 Claims
1. A method comprising: (independent claim; full text recited above as the First Claim)
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
13. A method comprising:
receiving calibration attributes including one or more landmarks of a face, inside a head mounted display (HMD), of a user wearing the HMD;
capturing a plurality of facial data of a portion of the face using a plurality of light sources and one or more facial sensors located inside the HMD and off a line of sight of the user, wherein the portion of the face includes the eyes of the user and portions of an eyebrow and a cheek of the user, and the plurality of facial data describes a plurality of frames of an image including a plurality of pixels, each pixel associated with a different coordinate location of the image, each of the plurality of frames captured when a different single light source of the plurality of light sources illuminated the portion of the face, the plurality of light sources emitting light by a single light source at a time in a particular sequence to illuminate the portion of the face;
for each coordinate location of the image:
identifying a frame of the plurality of frames having the greatest intensity value based on reflected light from the portion of the face at the coordinate location;
identifying a position of a light source of the plurality of light sources illuminating the portion of the face when the frame was captured, the reflected light originating from the light source;
determining a planar section of the portion of the face based on the position of the light source and the coordinate location; and
determining a normal vector to the planar section based on the position of the light source illuminating the portion of the face when the frame of the plurality of frames having the greatest intensity value was captured;
generating a virtual surface describing orientation of the portion of the face by aggregating the normal vectors for the planar sections;
mapping the virtual surface to the one or more landmarks of the face;
generating facial animation information based at least in part on the mapping and the virtual surface; and
providing the facial animation information to a display of the HMD for presentation to the user.
Dependent claims: 14, 15, 16, 17, 18
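The claims repeatedly recite that the light sources emit "by a single light source at a time in a particular sequence," with one captured frame per source. A minimal sketch of that capture loop, using stand-in `Light` and `FakeSensor` classes in place of real HMD hardware (all class and method names are hypothetical):

```python
class Light:
    """Stand-in for one light source inside the HMD."""
    def __init__(self, ident):
        self.ident = ident
        self.lit = False
    def on(self):  self.lit = True
    def off(self): self.lit = False

class FakeSensor:
    """Stand-in facial sensor: records which single light was lit."""
    def __init__(self, lights):
        self.lights = lights
    def grab(self):
        lit = [l.ident for l in self.lights if l.lit]
        assert len(lit) == 1          # exactly one source per frame
        return {"lit_by": lit[0]}

def capture_sequence(lights, sensor):
    """Fire one light at a time, in order, grabbing one frame each."""
    frames = []
    for light in lights:              # the "particular sequence"
        light.on()
        frames.append(sensor.grab())  # frame lit by this source only
        light.off()
    return frames

lights = [Light(i) for i in range(4)]
frames = capture_sequence(lights, FakeSensor(lights))
print([f["lit_by"] for f in frames])  # prints [0, 1, 2, 3]
```

Each resulting frame can then be attributed to a known light-source position, which is what the per-pixel max-intensity step in the claims relies on.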
19. A method comprising:
causing, inside a head mounted display (HMD), a plurality of light sources to emit light by a single light source at a time in a particular sequence to illuminate a portion of a face of a user wearing the HMD, wherein the portion of the face includes the eyes of the user and portions of an eyebrow and a cheek of the user;
capturing a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD, wherein the plurality of facial data describes a plurality of frames of an image including a plurality of pixels, each pixel associated with a different coordinate location of the image, each of the plurality of frames captured when a different single light source of the plurality of light sources illuminated the portion of the face;
for each coordinate location of the image:
identifying a frame of the plurality of frames having the greatest intensity value based on reflected light from the portion of the face at the coordinate location;
identifying a position of a light source of the plurality of light sources illuminating the portion of the face when the frame was captured, the reflected light originating from the light source;
determining a planar section of the portion of the face based on the position of the light source and the coordinate location; and
determining a normal vector to the planar section based on the position of the light source illuminating the portion of the face when the frame of the plurality of frames having the greatest intensity value was captured;
generating a virtual surface describing orientation of the portion of the face by aggregating the normal vectors for the planar sections;
mapping the virtual surface to one or more landmarks of the face;
providing the mapping and the virtual surface to a virtual reality (VR) console;
receiving, from the VR console, facial animation information based at least in part on the mapping and the virtual surface; and
providing the facial animation information to a display of the HMD for presentation to the user.
Dependent claims: 20
Specification