
Method for driving virtual facial expressions by automatically detecting facial expressions of a face image

  • US 7,751,599 B2
  • Filed: 08/09/2006
  • Issued: 07/06/2010
  • Est. Priority Date: 08/09/2006
  • Status: Active Grant
First Claim

1. A method for driving virtual facial expressions by automatically detecting facial expressions of a face image, which is applied to a digital image capturing device, comprising:

  • learning and correcting a plurality of front face image samples to obtain an average position of each key point of a face, eyes, a nose and a mouth of each of said front face image samples on a standard image for imitating a virtual front face, wherein said standard image is a greyscale image having a fixed size, centered on a front face without inclination;

    using a Gabor wavelet algorithm to sample a series of Gabor Jets from said key points of said front face image samples to form a Gabor Jet Bunch;

    automatically detecting positions of a face, eyes, a nose and a mouth in a target image captured by said digital image capturing device, and converting said positions onto said standard image;

    performing a fitting or regression calculation for said positions of eyes, nose and mouth of said target image and said average positions corresponding thereto to obtain initial positions of key points of eyes, nose and mouth of said target image on said standard image;

    calculating exact positions of said key points of said target image on said standard image by using a point within a neighborhood of each of said initial positions as a selecting point, comparing a Gabor Jet of each of said initial positions with each of said Gabor Jets in said Gabor Jet Bunch, selecting said Gabor Jet in said Gabor Jet Bunch having a highest similarity with said selecting point as said exact position of said key point on said standard image corresponding to said key point of said target image, inversely aligning said exact positions of said key points on said standard image onto said target image, and labeling said exact positions as exact positions of said key points on said target image; and

    automatically tracking positions of key points of another target image captured later by said digital image capturing device, and correcting said exact position of said key point on said standard image through obtaining a motion parameter (dx, dy) between said key points corresponding to said target image and said other target image by using an optical flow technique, calculating an error ε(dx, dy) of said motion parameter (dx, dy) according to the following formula, wherein I(x, y) represents gray scales of said target image, J(x, y) represents gray scales of said other target image, and x, y represent coordinates of each of said key points of said target image or said other target image:

    ε(d) = ε(dx, dy) = Σx Σy (I(x, y) − J(x + dx, y + dy))²

    to find dx, dy that minimize said error ε(dx, dy) and obtain an estimated position of said key point of said other target image based on said position of said key point of said target image corresponding thereto, and performing a fitting or regression calculation for said estimated positions and said average positions to calculate estimated positions of key points of said other target image on said standard image.
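
Implementation Notes

The claim's Gabor Jet Bunch step maps onto a standard Gabor filter bank. Below is a minimal sketch, assuming NumPy/OpenCV; the bank parameters (5 scales × 8 orientations, even/odd phase pairs) are illustrative, and gabor_bank, gabor_jet, and jet_bunch are hypothetical helper names, not taken from the patent.

```python
import cv2
import numpy as np

def gabor_bank(n_scales=5, n_orient=8, ksize=21):
    """Bank of even/odd Gabor kernel pairs (a quadrature approximation)."""
    bank = []
    for s in range(n_scales):
        lambd = 4.0 * 2.0 ** (s / 2.0)              # wavelength grows with scale
        for o in range(n_orient):
            theta = np.pi * o / n_orient            # orientation
            even = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5, 0)
            odd = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5, np.pi / 2)
            bank.append((even, odd))
    return bank

def gabor_jet(gray, point, bank, ksize=21):
    """Jet = vector of Gabor response magnitudes sampled at one key point.

    Assumes the point lies at least ksize // 2 pixels from the image border."""
    x, y = point
    r = ksize // 2
    patch = gray[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    jet = [np.hypot((patch * even).sum(), (patch * odd).sum())
           for even, odd in bank]
    return np.asarray(jet)

def jet_bunch(aligned_faces, point, bank):
    """A Gabor Jet Bunch: the jets of one key point across all face samples."""
    return [gabor_jet(img, point, bank) for img in aligned_faces]
```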
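
The claim does not name a detector for the face, eyes, nose, and mouth; OpenCV's stock Haar cascades are one plausible stand-in, sketched here for the face and eyes (nose and mouth cascades, where available, follow the same pattern).

```python
import cv2

def detect_face_and_eyes(gray):
    """Return the dominant face box and eye centers, or None if no face."""
    face_cc = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cc = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    faces = face_cc.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face
    roi = gray[y:y + h, x:x + w]
    eyes = [(x + ex + ew // 2, y + ey + eh // 2)         # centers, image coords
            for ex, ey, ew, eh in eye_cc.detectMultiScale(roi, 1.1, 5)]
    return (x, y, w, h), eyes
```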
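
The "fitting or regression calculation" is likewise not pinned down by the claim. One common choice is a least-squares 2-D similarity transform from the detected landmark positions to their learned average positions on the standard image; fit_similarity and to_standard below are hypothetical names for that assumption.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform: dst ~= M @ src + t.

    Solves for a, b, tx, ty in x' = a*x - b*y + tx, y' = b*x + a*y + ty."""
    A, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, -y, 1, 0]); rhs.append(xp)
        A.append([y, x, 0, 1]);  rhs.append(yp)
    (a, b, tx, ty), *_ = np.linalg.lstsq(
        np.asarray(A, float), np.asarray(rhs, float), rcond=None)
    return np.array([[a, -b], [b, a]]), np.array([tx, ty])

def to_standard(M, t, pts):
    """Map target-image points into standard-image coordinates."""
    return np.asarray(pts, float) @ M.T + t

# After fitting on the detected eye/nose/mouth positions against their learned
# averages, the initial key-point positions on the standard image are simply
# the average positions, now consistent with the transformed target.
```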
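
Refinement of each initial position against the Gabor Jet Bunch can then be sketched as a small neighborhood scan, reusing gabor_jet and the bunch from the first sketch; the claim's "highest similarity" is assumed here to be the normalized dot product commonly used with Gabor jets, and the search radius is illustrative.

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product (cosine) between two jets of magnitudes."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

def refine_keypoint(gray, init, bunch, bank, radius=3):
    """Scan a (2r+1)^2 neighborhood of the initial position and keep the
    candidate point whose jet best matches any jet in the bunch."""
    (x0, y0), best, best_sim = init, init, -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            jet = gabor_jet(gray, (x0 + dx, y0 + dy), bank)  # from sketch above
            sim = max(jet_similarity(jet, b) for b in bunch)
            if sim > best_sim:
                best_sim, best = sim, (x0 + dx, y0 + dy)
    return best, best_sim
```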
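
Finally, the tracking step's error ε(dx, dy) can be minimized directly by exhaustive search over a small displacement range, which mirrors the claim's formula; practical trackers (e.g. pyramidal Lucas-Kanade via cv2.calcOpticalFlowPyrLK) minimize the same objective using image gradients. Window and search sizes below are illustrative.

```python
import numpy as np

def track_keypoint(I, J, point, win=7, search=5):
    """(dx, dy) minimizing e(dx, dy) = sum over a window of
    (I(x,y) - J(x+dx, y+dy))^2, found by exhaustive search.

    Assumes the point sits far enough from the borders of both frames."""
    x, y = point
    r = win // 2
    patch = I[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = J[y + dy - r:y + dy + r + 1,
                     x + dx - r:x + dx + r + 1].astype(np.float64)
            err = float(((patch - cand) ** 2).sum())     # e(dx, dy)
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best, best_err

# The tracked positions then go back through the fitting step to produce the
# estimated key-point positions of the new frame on the standard image.
```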
