Method for mapping facial animation values to head mesh positions
Abstract
The present invention provides a technique for translating facial animation values to head mesh positions for rendering facial features of an animated avatar. In the method, an animation vector of dimension Na is provided. Na is the number of facial animation values in the animation vector. A mapping algorithm F is applied to the animation vector to generate a target mix vector of dimension M. M is the number of targets associated with the head mesh positions. The head mesh positions are deformed based on the target mix vector.
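In code form, the claimed pipeline is short. The patent leaves the mapping algorithm F, the calibration operation, and the deformation model abstract; the sketch below assumes a linear mapping matrix for F, an element-wise (per-target) scale for the calibration vector c, and deformation by blending per-target displacements onto a neutral mesh, in the style of morph targets. All names (`map_animation_to_mesh`, `F_matrix`, `neutral`, `targets`) are illustrative, not from the patent.

```python
import numpy as np

def map_animation_to_mesh(a, F_matrix, c, neutral, targets):
    """Translate an animation vector into deformed head mesh positions.

    a        : animation vector of dimension Na
    F_matrix : assumed linear form of the mapping algorithm F, shape (M, Na)
    c        : calibration vector of dimension M
    neutral  : neutral head mesh vertex positions, shape (V, 3)
    targets  : per-target vertex displacements from the neutral mesh,
               shape (M, V, 3)
    """
    g = F_matrix @ a        # target mix vector g, dimension M
    gc = c * g              # calibrated target mix vector gc
    # Deform the head mesh: blend the M target displacements, weighted by gc.
    return neutral + np.tensordot(gc, targets, axes=1)
```

With a nonlinear F, only the first line of the function body changes; the calibration and deformation steps are independent of how g is produced.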
78 Citations
14 Claims
1. A method for translating facial animation values to head mesh positions for rendering facial features of an animated avatar, the method comprising:
providing an animation vector a of dimension Na, where Na is a number of facial animation values in the animation vector;
applying a mapping algorithm F to the animation vector to generate a target mix vector g of dimension M, where M is a number of targets associated with the head mesh positions;
applying a calibration vector c to the target mix vector g to generate a calibrated target mix vector gc; and
deforming the head mesh positions based on the calibrated target mix vector gc.
Dependent claims 2, 3, and 4 not shown.
5. A method for translating facial animation values to head mesh positions for rendering facial features of an animated avatar, the method comprising:
providing an animation vector a of dimension Na where Na is a number of facial animation values in the animation vector;
defining groups that associate sets of the animation values with sets of targets;
applying a mapping algorithm F independently to each grouped set of animation values to generate corresponding target mix group-vectors;
combining the target mix group-vectors to generate a target mix vector g of dimension M, where M is a number of targets associated with the head mesh positions; and
deforming the head mesh positions based on the target mix vector g.
Dependent claim 6 not shown.
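The grouping in claim 5 can be sketched the same way: each group ties a subset of the animation values to a subset of the targets, the mapping algorithm F is applied to each group independently, and the resulting target mix group-vectors are combined into the full target mix vector g. The sketch below assumes each group carries its own linear mapping and that groups write disjoint target indices; the patent requires neither, and all names are illustrative.

```python
import numpy as np

def grouped_target_mix(a, groups, M):
    """Combine per-group target mix group-vectors into g of dimension M.

    a      : animation vector of dimension Na
    groups : list of (anim_indices, target_indices, F_group) tuples, where
             F_group maps the group's animation values to its group-vector
    """
    g = np.zeros(M)
    for anim_indices, target_indices, F_group in groups:
        # Apply the mapping algorithm independently to this group's values,
        # then place the group-vector into its slots of g.
        g[target_indices] = F_group @ a[anim_indices]
    return g
```

Grouping keeps each mapping small: a mouth group, for example, never sees eyebrow animation values, so its mapping matrix has only the rows and columns it needs.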
7. A system for translating facial animation values to head mesh positions for rendering facial features of an animated avatar, the system comprising:
means for providing an animation vector a of dimension Na, where Na is a number of facial animation values in the animation vector;
means for applying a mapping algorithm F to the animation vector to generate a target mix vector g of dimension M, where M is a number of targets associated with the head mesh positions;
means for applying a calibration vector c to the target mix vector g to generate a calibrated target mix vector gc; and
means for deforming the head mesh positions based on the calibrated target mix vector gc.
8. An article of manufacture, comprising:
a machine-readable medium having instructions stored thereon that are executable by a processor to translate facial animation values to head mesh positions for rendering facial features of an animated avatar, by:
obtaining an animation vector a of dimension Na, where Na is a number of facial animation values in the animation vector;
applying a mapping algorithm F to the animation vector to generate a target mix vector g of dimension M, where M is a number of targets associated with the head mesh positions;
applying a calibration vector c to the target mix vector g to generate a calibrated target mix vector gc; and
deforming the head mesh positions based on the calibrated target mix vector gc.
Dependent claims 9, 10, and 11 not shown.
12. A system for translating facial animation values to head mesh positions for rendering facial features of an animated avatar, the system comprising:
means for obtaining an animation vector a of dimension Na, where Na is a number of facial animation values in the animation vector;
means for defining groups that associate sets of the animation values with sets of targets;
means for applying a mapping algorithm F independently to each grouped set of animation values to generate corresponding target mix group-vectors;
means for combining the target mix group-vectors to generate a target mix vector g of dimension M, where M is a number of targets associated with the head mesh positions; and
means for deforming the head mesh positions based on the target mix vector g.
13. An article of manufacture, comprising:
a machine-readable medium having instructions stored thereon that are executable by a processor to translate facial animation values to head mesh positions for rendering facial features of an animated avatar, by:
obtaining an animation vector a of dimension Na, where Na is a number of facial animation values in the animation vector;
defining groups that associate sets of the animation values with sets of targets;
applying a mapping algorithm F independently to each grouped set of animation values to generate corresponding target mix group-vectors;
combining the target mix group-vectors to generate a target mix vector g of dimension M, where M is a number of targets associated with the head mesh positions; and
deforming the head mesh positions based on the target mix vector g.
Dependent claim 14 not shown.
Specification