Method for tracking motion of a face
First Claim
1. A method for tracking motion of a face comprising the steps of:
determining the calibration parameter of a camera;
marking salient features of an object with markers for motion tracking;
acquiring a plurality of initial 2-D images of the object;
calculating 3-D locations of the salient features of the object in accordance with the calibration parameter of the camera;
calculating 3-D locations of the global and local markers in the neutral state;
calculating 3-D locations of the local markers in each action state;
receiving a chronologically ordered sequence of 2-D images of the object;
storing or transmitting tracked motion of the object; and
calculating the 3-D locations of the local markers in each action state by estimating the orientation and position of the face in each 2-D image of the action state to conform to the 3-D and 2-D locations of the global markers under a perspective projection model and calculating the 3-D locations of the local markers to conform to the estimated orientation and position of the face and the 2-D locations of the local markers under a perspective projection model.
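The last step fixes the face pose estimated from the global markers and solves for each local marker's 3-D location from its 2-D observations. As an illustration only (not the patented method itself), the following minimal numpy sketch assumes a pinhole camera whose single calibration parameter is a focal length `f`, and at least two 2-D images of the action state with known estimated poses:

```python
import numpy as np

def project(X, R, t, f):
    """Perspective (pinhole) projection of 3-D points X (N x 3) given an
    estimated face orientation R and position t, with focal length f."""
    Xc = X @ R.T + t                   # face frame -> camera frame
    return f * Xc[:, :2] / Xc[:, 2:3]  # divide by depth

def triangulate(obs_2d, poses, f):
    """Solve for one local marker's 3-D location that conforms, in the
    least-squares sense, to its 2-D observations under the estimated poses.
    obs_2d: list of (u, v); poses: list of (R, t), one per action-state image."""
    A, b = [], []
    for (u, v), (R, t) in zip(obs_2d, poses):
        r1, r2, r3 = R  # rows of the rotation matrix
        # u * (r3.X + t_z) = f * (r1.X + t_x), and the analogous v equation
        A.append(u * r3 - f * r1); b.append(f * t[0] - u * t[2])
        A.append(v * r3 - f * r2); b.append(f * t[1] - v * t[2])
    X, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return X
```

Each pair of rows simply encodes the constraint that the marker's perspective projection under the fixed pose must match its observed 2-D location; with two or more views the linear system determines the 3-D point.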
Abstract
A method for tracking the motion of a person's face for the purpose of animating a 3-D face model of the same or another person is disclosed. The 3-D face model carries both the geometry (shape) and the texture (color) characteristics of the person's face. The shape of the face model is represented via a 3-D triangular mesh (geometry mesh), while the texture of the face model is represented via a 2-D composite image (texture image). Both the global motion and the local motion of the person's face are tracked. Global motion of the face involves the rotation and the translation of the face in 3-D. Local motion of the face involves the 3-D motion of the lips, eyebrows, etc., caused by speech and facial expressions. The 2-D positions of salient features of the person's face and/or markers placed on the person's face are automatically tracked in a time-sequence of 2-D images of the face. Global and local motion of the face are separately calculated using the tracked 2-D positions of the salient features or markers. Global motion is represented in a 2-D image by rotation and position vectors, while local motion is represented by an action vector that specifies the amount of facial actions such as smiling-mouth, raised-eyebrows, etc.
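To make the global/local separation concrete, here is an illustrative numpy sketch (an assumption, not text from the patent) that recovers the global rotation and position vectors from marker correspondences via the Kabsch/Procrustes algorithm, then reads off local motion as the displacement left over once the rigid motion is undone. Marker coordinates are assumed already reconstructed in 3-D; fitting the action vector itself (smiling-mouth, raised-eyebrows, etc.) would project these residual displacements onto predefined action displacements and is omitted here:

```python
import numpy as np

def global_pose(neutral, observed):
    """Best-fit rigid motion (R, t) with observed ~= neutral @ R.T + t,
    estimated from the global markers (Kabsch/Procrustes)."""
    cn, co = neutral.mean(axis=0), observed.mean(axis=0)
    H = (neutral - cn).T @ (observed - co)     # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, co - R @ cn

def local_motion(neutral_local, observed_local, R, t):
    """Local (expression-driven) displacement of each local marker in the
    face frame, after undoing the estimated global rotation and translation."""
    return (observed_local - t) @ R - neutral_local
```

For a face undergoing purely rigid motion, `local_motion` returns zeros; lip or eyebrow movement shows up as nonzero residuals expressed in the face's own coordinate frame.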
44 Claims
1. A method for tracking motion of a face comprising the steps of:
determining the calibration parameter of a camera;
marking salient features of an object with markers for motion tracking;
acquiring a plurality of initial 2-D images of the object;
calculating 3-D locations of the salient features of the object in accordance with the calibration parameter of the camera;
calculating 3-D locations of the global and local markers in the neutral state;
calculating 3-D locations of the local markers in each action state;
receiving a chronologically ordered sequence of 2-D images of the object;
storing or transmitting tracked motion of the object; and
calculating the 3-D locations of the local markers in each action state by estimating the orientation and position of the face in each 2-D image of the action state to conform to the 3-D and 2-D locations of the global markers under a perspective projection model and calculating the 3-D locations of the local markers to conform to the estimated orientation and position of the face and the 2-D locations of the local markers under a perspective projection model.

Dependent claims: 2–35.
36. A method for tracking motion of an object in a chronologically ordered sequence of 2-D images of the object comprising the steps of:
selecting global and local salient features of the object for tracking by fixing markers to the object;
calculating 3-D locations of the markers at the global and local salient features for a neutral state of the object, and calculating 3-D locations of the local salient features for action states of the object;
calculating the 3-D locations of the global and local markers in a neutral state;
calculating the 3-D locations of the local markers in each action state by estimating the orientation and position of the face in each 2-D image of the action state to conform to the 3-D and 2-D locations of the global markers under a perspective projection model and calculating the 3-D locations of the local markers to conform to the estimated orientation and position of the face and the 2-D locations of the local markers under a perspective projection model;
predicting 2-D locations of the markers at the global and local salient features in a 2-D image;
detecting 2-D locations of the markers at the global and local salient features in the 2-D image; and
estimating the global and local motion of the object in the 2-D image.

Dependent claims: 37–44.
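The predict-then-detect steps of claim 36 can be illustrated by a minimal sketch (again an assumption-laden illustration, not the claimed method): project the 3-D markers under the pose estimated in the previous frame to predict where each marker should appear, then match predictions to detected candidate locations within a search radius:

```python
import numpy as np

def predict_2d(markers_3d, R_prev, t_prev, f):
    """Predict 2-D marker locations in the next image by projecting the 3-D
    markers under the previous frame's pose (pinhole model, focal length f)."""
    Xc = markers_3d @ R_prev.T + t_prev
    return f * Xc[:, :2] / Xc[:, 2:3]

def detect_nearest(predicted, candidates, radius):
    """Match each predicted 2-D location to the nearest detected candidate
    within a search radius; returns the candidate index, or -1 if none is
    close enough (marker occluded or lost)."""
    matches = []
    for p in predicted:
        d = np.linalg.norm(candidates - p, axis=1)
        i = int(np.argmin(d))
        matches.append(i if d[i] <= radius else -1)
    return matches
```

The matched 2-D/3-D correspondences would then feed the motion-estimation step, closing the per-frame tracking loop.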
Specification