FACIAL ANIMATION USING FACIAL SENSORS WITHIN A HEAD-MOUNTED DISPLAY
First Claim
1. A method comprising:
illuminating, via a plurality of light sources, portions of a face, inside a head mounted display (HMD), of a user wearing the HMD;
capturing a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD, wherein the plurality of facial data describes frames of an image including a plurality of pixels, each pixel associated with a different coordinate location of the image;
for each coordinate location of the image:
identifying a planar section corresponding to the brightest pixel value at the coordinate location;
identifying a position of one of the plurality of light sources corresponding to the identified planar section;
generating a virtual surface describing orientation of the portion of the face by aggregating the identified planar sections based at least in part on the identified positions;
mapping the virtual surface to one or more landmarks of the face; and
generating facial animation information based at least in part on the mapping and the virtual surface, the facial animation information describing a portion of a virtual face corresponding to the portion of the face.
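The per-pixel procedure recited above (pick the brightest value at each coordinate across frames, identify the light source behind it, and aggregate the resulting planar sections) can be sketched as follows. This is only an illustrative reading of the claim language, not the patent's implementation; the function name, array shapes, and the use of normalized light-source directions as facet orientations are all assumptions.

```python
import numpy as np

def build_virtual_surface(frames, light_positions):
    """Hypothetical sketch of the claimed per-pixel step.

    frames: array of shape (L, H, W) -- one grayscale frame per light source.
    light_positions: array of shape (L, 3) -- 3-D position of each source.

    For each coordinate location, the frame with the brightest pixel value
    is taken to indicate which light source most directly faced the local
    "planar section" of the face; the virtual surface is represented by
    aggregating those per-pixel source directions.
    """
    frames = np.asarray(frames, dtype=float)
    light_positions = np.asarray(light_positions, dtype=float)

    # Index of the brightest frame at every coordinate location.
    brightest = np.argmax(frames, axis=0)               # (H, W)

    # Orient each planar section toward its identified light source.
    directions = light_positions / np.linalg.norm(
        light_positions, axis=1, keepdims=True)         # (L, 3)
    normals = directions[brightest]                     # (H, W, 3)

    # Keep the peak brightness that selected each facet alongside it.
    intensity = np.take_along_axis(
        frames, brightest[None], axis=0)[0]             # (H, W)
    return normals, intensity
```

The per-pixel `argmax` over the frame axis is the "brightest pixel value at the coordinate location" selection; indexing the direction table with that result is one simple way to aggregate identified planar sections "based at least in part on the identified positions."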
Abstract
A facial tracking system generates a virtual rendering of a portion of a face of a user wearing a head-mounted display (HMD). The facial tracking system illuminates portions of the face inside the HMD. The facial tracking system captures a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD. A plurality of planar sections of the portion of the face are identified based at least in part on the plurality of facial data. The plurality of planar sections are mapped to one or more landmarks of the face. Facial animation information is generated based at least in part on the mapping, the facial animation information describing a portion of a virtual face corresponding to the portion of the user's face.
21 Claims
1. A method comprising:
illuminating, via a plurality of light sources, portions of a face, inside a head mounted display (HMD), of a user wearing the HMD;
capturing a plurality of facial data of the portion of the face using one or more facial sensors located inside the HMD, wherein the plurality of facial data describes frames of an image including a plurality of pixels, each pixel associated with a different coordinate location of the image;
for each coordinate location of the image:
identifying a planar section corresponding to the brightest pixel value at the coordinate location;
identifying a position of one of the plurality of light sources corresponding to the identified planar section;
generating a virtual surface describing orientation of the portion of the face by aggregating the identified planar sections based at least in part on the identified positions;
mapping the virtual surface to one or more landmarks of the face; and
generating facial animation information based at least in part on the mapping and the virtual surface, the facial animation information describing a portion of a virtual face corresponding to the portion of the face.
View Dependent Claims (2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 21)
7. (canceled)
13. A method comprising:
receiving calibration attributes including one or more landmarks of a face, inside a head mounted display (HMD), of a user wearing the HMD;
capturing a plurality of facial data of a portion of the face using a plurality of light sources and one or more facial sensors located inside the HMD and off a line of sight of the user, wherein the plurality of facial data describes frames of an image including a plurality of pixels, each pixel associated with a different coordinate location of the image;
for each coordinate location of the image:
identifying a planar section corresponding to the brightest pixel value at the coordinate location;
identifying a position of one of the plurality of light sources corresponding to the identified planar section;
generating a virtual surface describing orientation of the portion of the face by aggregating the identified planar sections based at least in part on the identified positions;
mapping the virtual surface to the one or more landmarks of the face;
generating facial animation information based at least in part on the mapping and the virtual surface; and
providing the facial animation information to a display of the HMD for presentation to the user.
View Dependent Claims (14, 15, 16, 17, 18)
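Claim 13's step of mapping the virtual surface to landmarks received as calibration attributes might be read as summarizing facet orientations near each landmark into a compact per-landmark descriptor. The sketch below illustrates one such reading; the function name, the window-averaging scheme, and the dictionary of landmark pixel locations are all hypothetical, not taken from the patent.

```python
import numpy as np

def animation_info_from_surface(normals, landmarks, radius=2):
    """Hypothetical sketch of the landmark-mapping step.

    normals: (H, W, 3) per-pixel facet orientations (the "virtual surface").
    landmarks: dict mapping a landmark name to its (row, col) pixel location,
        e.g. taken from the received calibration attributes.

    Averages the facet orientations in a small window around each landmark,
    yielding per-landmark descriptors that could serve as one form of
    facial animation information.
    """
    h, w, _ = normals.shape
    info = {}
    for name, (r, c) in landmarks.items():
        # Clamp the averaging window to the image bounds.
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        patch = normals[r0:r1, c0:c1].reshape(-1, 3)
        mean = patch.mean(axis=0)
        norm = np.linalg.norm(mean)
        info[name] = mean / norm if norm > 0 else mean
    return info
```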
19. (canceled)
20. A method comprising:
illuminating, via a plurality of light sources, portions of a face, inside a head mounted display (HMD), of a user wearing the HMD;
capturing a plurality of facial data of a portion of the face using one or more facial sensors located inside the HMD, wherein the plurality of facial data describes frames of an image including a plurality of pixels, each pixel associated with a different coordinate location of the image;
for each coordinate location of the image:
identifying a planar section corresponding to the brightest pixel value at the coordinate location;
identifying a position of one of the plurality of light sources corresponding to the identified planar section;
generating a virtual surface describing orientation of the portion of the face by aggregating the identified planar sections based at least in part on the identified positions;
mapping the virtual surface to one or more landmarks of the face;
providing the mapping and the virtual surface to a virtual reality (VR) console;
receiving, from the VR console, facial animation information based at least in part on the mapping and the virtual surface; and
providing the facial animation information to a display of the HMD for presentation to the user.
Specification