Wavelet-based facial motion capture for avatar animation
Abstract
The present invention is embodied in an apparatus, and related method, for sensing a person's facial movements, features, characteristics, and the like, to generate and animate an avatar image based on facial sensing. The avatar apparatus uses an image processing technique based on model graphs and bunch graphs that efficiently represent image features as jets. The jets are composed of wavelet transforms processed at node or landmark locations on an image corresponding to readily identifiable features. The nodes are acquired and tracked to animate an avatar image in accordance with the person's facial movements. Also, the facial sensing may use jet similarity to determine the person's facial features and characteristics, thus allowing tracking of a person's natural characteristics without any unnatural elements that may interfere with or inhibit them.
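The abstract describes jets as collections of wavelet-transform values computed at landmark pixels. A minimal sketch of computing such a jet, using a bank of complex Gabor wavelets (a common choice for this family of methods; the kernel parameters, function names, and bank size here are illustrative assumptions, not values taken from the patent):

```python
import numpy as np

def gabor_kernel(half, kx, ky, sigma=2.0 * np.pi):
    """One complex Gabor wavelet with wave vector (kx, ky)."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    ksq = kx * kx + ky * ky
    gauss = (ksq / sigma**2) * np.exp(-ksq * (xs**2 + ys**2) / (2 * sigma**2))
    # Subtract the DC term so the wavelet has zero mean.
    wave = np.exp(1j * (kx * xs + ky * ys)) - np.exp(-sigma**2 / 2)
    return gauss * wave

def jet_at(image, x, y, n_scales=5, n_orient=8, half=16):
    """Jet: complex responses of the wavelet bank at pixel (x, y)."""
    patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    components = []
    for s in range(n_scales):
        k_mag = (np.pi / 2) * 2.0 ** (-s / 2.0)  # frequency shrinks with scale
        for o in range(n_orient):
            ang = o * np.pi / n_orient
            kern = gabor_kernel(half, k_mag * np.cos(ang), k_mag * np.sin(ang))
            components.append(np.sum(patch * np.conj(kern)))
    return np.array(components)  # n_scales * n_orient complex values
```

With five scales and eight orientations, each pixel's jet has 40 complex components, matching the claims' "predetermined number of wavelet component values."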
12 Claims
1. A method for generating a head model specific to a person for use in animating a display image having the person's features, the method comprising the steps for:
providing a frontal facial image of the person, the frontal facial image having an array of pixels;
providing a profile facial image of the person, the profile facial image having an array of pixels;
transforming the frontal facial image using wavelet transformations to generate a transformed frontal image having an array of pixels, each pixel of the transformed frontal image being associated with a respective pixel of the frontal facial image and being represented by a frontal transform jet associated with a predetermined number of wavelet component values;
transforming the profile facial image using wavelet transformations to generate a transformed profile image having an array of pixels, each pixel of the transformed profile image being associated with a respective pixel of the profile facial image and being represented by a profile transform jet associated with a predetermined number of wavelet component values;
locating predetermined facial features in the transformed frontal image based on a comparison between the frontal transform jets of the transformed frontal facial image and frontal graph jets at sensing nodes of a frontal graph;
locating predetermined facial features in the transformed profile image based on a comparison between the profile transform jets of the transformed profile facial image and profile graph jets at sensing nodes of a profile graph;
producing a three-dimensional model of the person's head using the frontal facial image and the profile facial image and based on the positions of the predetermined features located in the transformed frontal facial image and the predetermined features located in the transformed profile facial image;
displaying the head model as a display image.

(Dependent claims 2, 3, 4, 5, and 6 not shown.)
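Claim 1's locating step compares the image's transform jets against graph jets stored at the sensing nodes of a model graph. One way to realize that comparison is a normalized similarity over jet magnitudes, scanning candidate pixels for the best match (a sketch; the function names and the specific similarity form are assumptions, not the patent's definition):

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product of jet magnitudes (phase-insensitive)."""
    a1, a2 = np.abs(j1), np.abs(j2)
    return float(np.dot(a1, a2) / (np.linalg.norm(a1) * np.linalg.norm(a2)))

def locate_feature(transform_jets, graph_jet, search_positions):
    """Pick the pixel whose transform jet best matches a node's graph jet.

    transform_jets: mapping from pixel position to that pixel's jet.
    """
    best_pos, best_sim = None, -1.0
    for pos in search_positions:
        s = jet_similarity(transform_jets[pos], graph_jet)
        if s > best_sim:
            best_pos, best_sim = pos, s
    return best_pos, best_sim
```

Because the similarity uses magnitudes only, it is insensitive to the wavelets' phase, which makes the search robust to small landmark displacements.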
7. A method for providing a display avatar having a specific person's features, the method comprising the steps for:
providing a frontal facial image of the person, the frontal facial image having an array of pixels;
providing a profile facial image of the person, the profile facial image having an array of pixels;
transforming the frontal facial image using wavelet transformations to generate a transformed frontal image having an array of pixels, each pixel of the transformed frontal image being associated with a respective pixel of the frontal facial image and being represented by a frontal transform jet associated with a predetermined number of wavelet component values;
transforming the profile facial image using wavelet transformations to generate a transformed profile image having an array of pixels, each pixel of the transformed profile image being associated with a respective pixel of the profile facial image and being represented by a profile transform jet associated with a predetermined number of wavelet component values;
locating predetermined facial features in the transformed frontal facial image based on a comparison between the frontal transform jets of the transformed frontal facial image and frontal graph jets at sensing nodes of a frontal graph;
locating predetermined facial features in the transformed profile facial image based on a comparison between the profile transform jets of the transformed profile facial image and profile graph jets at sensing nodes of a profile graph;
producing the display avatar using the frontal facial image and the profile facial image and based on the positions of the predetermined features located in the transformed frontal facial image and the predetermined features located in the transformed profile facial image;
displaying the display avatar.

(Dependent claims 8, 9, 10, 11, and 12 not shown.)
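Both independent claims produce the final model from feature positions found in the two views. Under an orthographic assumption, the frontal view supplies a landmark's x and y coordinates and the profile view supplies its depth, so matched landmarks can be lifted to 3-D (a sketch; the landmark-name keys and the averaging of the shared y coordinate are assumptions for illustration):

```python
def features_to_3d(frontal_pts, profile_pts):
    """Merge frontal (x, y) and profile (z, y) landmark positions into 3-D points.

    Assumes roughly orthographic views: the frontal image gives x and y,
    the profile image gives depth z; y appears in both views and is averaged.
    """
    points = {}
    for name in frontal_pts.keys() & profile_pts.keys():
        fx, fy = frontal_pts[name]
        pz, py = profile_pts[name]
        points[name] = (fx, (fy + py) / 2.0, pz)
    return points
```

Landmarks visible in only one view are skipped here; a fuller implementation would fit them to a generic head mesh before texturing and display.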
Specification