Face feature analysis for automatic lipreading and character animation
Abstract
A face feature analysis method begins by generating multiple face feature candidates, e.g., eye and nose positions, using an isolated-frame face analysis. A nostril tracking window is then defined around a nose candidate, and tests based on the percentages of skin color area pixels and nostril area pixels are applied to the pixels therein to determine whether the nose candidate represents an actual nose. Once actual nostrils are identified, the size, separation and contiguity of the actual nostrils are determined by projecting the nostril pixels within the nostril tracking window. A mouth window is defined around the mouth region, and mouth detail analysis is then applied to the pixels within the mouth window to identify inner mouth and teeth pixels and therefrom generate an inner mouth contour. The nostril position and inner mouth contour are used to generate a synthetic model head. A direct comparison is made between the generated inner mouth contour and that of a synthetic model head, and the synthetic model head is adjusted accordingly. Vector quantization algorithms may be used to develop a codebook of face model parameters to improve processing efficiency. The face feature analysis is robust to noise, illumination variations, head tilt, scale variations and nostril shape.
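The projection step described in the abstract (determining nostril size, separation and contiguity by projecting the nostril pixels within the tracking window) can be sketched as below. The grayscale representation and the darkness threshold are illustrative assumptions; the patent does not publish numeric values.

```python
import numpy as np

def project_nostrils(window, dark_threshold=40):
    """Locate nostrils in a grayscale tracking window by projection.

    Pixels darker than `dark_threshold` (an assumed value) are treated as
    nostril pixels.  Summing the binary nostril map down each column gives
    a horizontal projection; contiguous runs of non-zero columns give
    nostril size and contiguity, and the gap between runs gives nostril
    separation.
    """
    nostril_mask = window < dark_threshold
    column_counts = nostril_mask.sum(axis=0)        # project onto the x-axis
    nonzero = np.flatnonzero(column_counts)
    if nonzero.size == 0:
        return []                                   # no nostril pixels found
    # Split the non-zero columns wherever a gap breaks contiguity.
    runs = np.split(nonzero, np.where(np.diff(nonzero) > 1)[0] + 1)
    return [(int(run[0]), int(run[-1])) for run in runs]
```

Two dark blobs in the window yield two column runs: the run widths give nostril size and the gap between them the nostril separation.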
17 Claims
1. A face feature analysis method for distinguishing features of a face of a subject in variable lighting conditions, comprising the steps of:

a) generating a plurality of eye, nose and mouth candidates within a current image frame which includes the face of the subject;

b) defining a nostril tracking window within the current image frame around one of the nose candidates, wherein said nostril tracking window is defined so as to exclude the eye and mouth candidates and to assure inclusion of the nostrils of the subject in a next frame after a fastest possible head movement between said current and next frames; and

c) analyzing each pixel within said nostril tracking window to determine whether the nose candidate enclosed within said nostril tracking window represents actual nostrils by performing a skin color area test and a nostril area test.
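The window sizing in step b) can be sketched as follows. The per-frame motion bound and the coordinate convention are illustrative assumptions, not values from the patent.

```python
def nostril_tracking_window(nose_xy, frame_shape, max_motion_px=25):
    """Return (left, top, right, bottom) of a nostril tracking window
    centred on a nose candidate.

    `max_motion_px` is an assumed bound on how far the nostrils can travel
    between consecutive frames under the fastest possible head movement;
    padding the window by that amount guarantees the nostrils remain
    inside it in the next frame.  The window is clipped to the frame.
    """
    x, y = nose_xy
    height, width = frame_shape
    return (max(0, x - max_motion_px), max(0, y - max_motion_px),
            min(width - 1, x + max_motion_px),
            min(height - 1, y + max_motion_px))
```

In a full implementation the pad would also be chosen small enough that the window excludes the eye and mouth candidates, per the claim.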
4. A face feature analysis method for distinguishing features of a face of a subject in variable lighting conditions, comprising the steps of:

a) generating a plurality of eye, nose and mouth candidates within a current image frame which includes the face of the subject;

b) defining a nostril tracking window within the current image frame around one of the nose candidates, wherein said nostril tracking window is defined so as to exclude the eye and mouth candidates and to assure inclusion of the nostrils of the subject in a next frame after a fastest possible head movement between said current and next frames; and

c) analyzing each pixel within said nostril tracking window to determine whether the nose candidate enclosed within said nostril tracking window represents actual nostrils by:

d) comparing an intensity of each pixel within said nostril tracking window to a range of pixel intensities corresponding to a range of possible skin colors of the face of the subject and classifying the pixel as a skin color area pixel if the intensity is within the range of possible skin colors;

e) identifying a percentage area of skin color area pixels within the nostril tracking window; and

f) comparing said percentage area of skin color area pixels to an acceptable range of percentage area of skin color area pixels, wherein if said percentage area of skin color area pixels is within the acceptable range then actual nostrils have been detected within said nostril tracking window, and if said percentage area of skin color area pixels is outside the acceptable range then the actual nostrils are not enclosed within said nostril tracking window, and said step c) is repeated to define a nostril tracking window around another nose candidate, and steps d) through f) are repeated after said repeated step c).
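Steps d) through f) above amount to a thresholded percentage-area check, which can be sketched as below. The skin intensity interval and the acceptable fraction are illustrative assumptions; the patent does not publish numeric values.

```python
import numpy as np

def skin_color_area_test(window, skin_range=(90, 200), accept=(0.55, 0.95)):
    """Sketch of the skin color area test of claim 4, steps d)-f).

    `skin_range` is an assumed grayscale intensity interval taken to be
    skin colored, and `accept` an assumed acceptable fraction of
    skin-color pixels.  Returns True if the window plausibly encloses
    actual nostrils.
    """
    low, high = skin_range
    skin_mask = (window >= low) & (window <= high)   # step d): classify pixels
    skin_fraction = skin_mask.mean()                 # step e): percentage area
    return accept[0] <= skin_fraction <= accept[1]   # step f): range check
```

A window that is nearly all skin (no dark nostril holes) or mostly background falls outside the acceptable range, so the search moves on to another nose candidate, as the claim describes.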
16. A face feature analysis method for distinguishing features of a face of a subject in variable lighting conditions, comprising the steps of:

a) generating an actual inner lip contour including an actual upper and an actual lower inner lip contour;

b) generating a synthetic inner lip contour including a synthetic upper and a synthetic lower inner lip contour based on said actual upper and lower inner lip contours;

c) directly comparing said actual inner lip contour with said synthetic inner lip contour by determining a distance measure representing a difference between a vertical distance between said actual upper and lower inner lip contours and a vertical distance between said synthetic upper and lower inner lip contours; and

d) adjusting said synthetic inner lip contour and repeating step c) until a best match is achieved.
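The compare-and-adjust loop of steps c) and d) can be sketched as below, with each contour pair reduced to its vertical mouth-opening distance (lower minus upper inner lip position). The step size, tolerance and iteration cap are illustrative assumptions.

```python
def match_mouth_opening(actual_opening, synth_opening,
                        step=0.5, tol=0.25, max_iters=200):
    """Sketch of claim 16, steps c) and d).

    `actual_opening` and `synth_opening` are the vertical distances between
    the upper and lower inner lip contours of the actual and synthetic
    mouths.  The synthetic opening is nudged toward the actual one until
    the distance measure falls within tolerance (the "best match").
    """
    for _ in range(max_iters):
        distance = actual_opening - synth_opening          # step c): distance measure
        if abs(distance) <= tol:
            break                                          # best match achieved
        synth_opening += step if distance > 0 else -step   # step d): adjust
    return synth_opening
```

A real implementation would adjust the full synthetic contour (and hence the synthetic model head), but the stopping criterion is the same one-dimensional distance measure named in the claim.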
Specification