Method and apparatus for capturing facial expressions
First Claim
1. A facial image capturing method, comprising:
receiving a plurality of captured images including human faces;
obtaining regional features of the human faces from the captured images, and generating a target feature vector by a feature-point-positioning procedure on the captured images, wherein a plurality of deviation-determination values are obtained therefrom and one of the captured images is selected for generating the target feature vector, wherein the target feature vector is generated according to a displacement analysis of the captured images, and wherein the displacement is generated according to a difference between coordinates of a left eye in the image currently processed and a left eye in the image previously processed as well as a difference between coordinates of a right eye in the image currently processed and a right eye in the image previously processed;
comparing the target feature vector with a plurality of previously stored feature vectors, and generating a parameter value accordingly, wherein when the parameter value is higher than a threshold, one of the captured images is selected as a target image, and the target feature vector corresponding to the target image is stored as a new feature vector for comparison with the next target feature vector; and
recognizing the target image to obtain a facial expression state and classifying the target image according to the facial expression state.
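The displacement analysis recited in the claim can be sketched as follows. Representing each eye position as an (x, y) tuple, measuring each per-eye difference with the Euclidean metric, and summing the two differences are all assumptions; the claim only recites differences between left-eye and right-eye coordinates in the current and previous images.

```python
import math

def eye_displacement(curr, prev):
    """Displacement between consecutive frames, from the left- and
    right-eye coordinates of the currently and previously processed
    images. `curr` and `prev` map "left_eye"/"right_eye" to (x, y)
    tuples; the Euclidean metric and the sum are assumptions."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    left = dist(curr["left_eye"], prev["left_eye"])
    right = dist(curr["right_eye"], prev["right_eye"])
    return left + right  # how the two differences combine is not fixed by the claim

curr = {"left_eye": (100, 120), "right_eye": (160, 118)}
prev = {"left_eye": (97, 116), "right_eye": (160, 121)}
print(eye_displacement(curr, prev))  # 5.0 (left) + 3.0 (right) = 8.0
```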
Abstract
A method and an apparatus for capturing facial expressions are provided, in which different facial expressions of a user are captured through a face recognition technique. In the method, a plurality of sequentially captured images containing human faces is received. Regional features of the human faces in the images are respectively captured to generate a target feature vector. The target feature vector is compared with a plurality of previously stored feature vectors to generate a parameter value. When the parameter value is higher than a threshold, one of the images is selected as a target image. Moreover, facial expression recognition and classification procedures can further be performed. For example, the target image is recognized to obtain a facial expression state, and the image is classified according to the facial expression state.
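The selection step summarized in the abstract can be sketched as a loop over incoming feature vectors. Interpreting the parameter value as the minimum Euclidean distance to any previously stored vector is an assumption; the abstract only says the comparison generates a parameter value that is tested against a threshold.

```python
import math

def select_targets(feature_vectors, threshold, stored):
    """For each incoming target feature vector, compare it with the
    previously stored vectors; when the resulting parameter value is
    higher than the threshold, keep the frame index as a target image
    and store the vector for comparison with later vectors.

    Taking the parameter value to be the minimum Euclidean distance
    to a stored vector is an assumption made for this sketch."""
    targets = []
    for idx, vec in enumerate(feature_vectors):
        param = min(
            (math.dist(vec, s) for s in stored),
            default=float("inf"),  # nothing stored yet: always capture
        )
        if param > threshold:
            targets.append(idx)
            stored.append(vec)  # becomes a "previously stored" vector
    return targets

frames = [(0.0, 0.0), (0.1, 0.0), (2.0, 2.0), (2.1, 2.0)]
print(select_targets(frames, threshold=1.0, stored=[]))  # [0, 2]
```

Under this interpretation, near-duplicate frames (indices 1 and 3) fall below the threshold and are skipped, while frames showing a sufficiently changed expression are captured.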
9 Claims
1. A facial image capturing method (set forth in full as the First Claim above; claims 2, 3, and 4 depend on claim 1).
5. A facial image capturing apparatus, comprising:
an image capturing unit, for capturing a plurality of images comprising human faces;

a feature-point-positioning unit, for receiving the captured images and generating a target feature vector according to regional features of the human faces in the images, wherein a feature-point-positioning procedure is performed by the feature-point-positioning unit on the captured images, a plurality of deviation-determination values are obtained therefrom, and one of the captured images is selected for generating the target feature vector, wherein the target feature vector is generated according to a displacement analysis of the captured images, and wherein the displacement is generated according to a difference between coordinates of a left eye in the image currently processed and a left eye in the image previously processed as well as a difference between coordinates of a right eye in the image currently processed and a right eye in the image previously processed;

an analysis unit, for receiving the target feature vector and comparing the target feature vector with a plurality of previously stored feature vectors to generate a parameter value, wherein when the parameter value is higher than a threshold, the analysis unit selects one of the images as a target image and adds the target feature vector corresponding to the target image into the feature vectors; and

a specific-expression-classification unit, wherein the specific-expression-classification unit recognizes the target image to obtain a facial expression state and classifies the target image according to the facial expression state.

(Claims 6, 7, 8, and 9 depend on claim 5.)
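The units recited in the apparatus claim can be sketched as a small pipeline of callables. Every signature, helper name, the distance-based analysis rule, and the 1.0 threshold below are illustrative assumptions; the claim only names the units and their roles.

```python
from typing import Callable, Iterable, List, Tuple

Vector = Tuple[float, ...]

def capture_pipeline(
    images: Iterable[str],
    positioning_unit: Callable[[str], Vector],
    analysis_unit: Callable[[Vector], bool],
    classification_unit: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """One callable per claim element: the feature-point-positioning
    unit maps an image to a target feature vector, the analysis unit
    decides whether that image is a target image, and the
    specific-expression-classification unit labels each target image
    with a facial expression state."""
    targets = []
    for img in images:
        vec = positioning_unit(img)
        if analysis_unit(vec):
            targets.append((img, classification_unit(img)))
    return targets

# Toy stand-ins for the units (illustrative only).
vecs = {"frame0": (0.0,), "frame1": (5.0,), "frame2": (5.1,)}
stored: List[Vector] = []

def analysis(vec: Vector) -> bool:
    # Parameter value = distance to the nearest stored vector (assumed).
    d = min((abs(vec[0] - s[0]) for s in stored), default=float("inf"))
    if d > 1.0:  # threshold of 1.0 is arbitrary
        stored.append(vec)
        return True
    return False

print(capture_pipeline(vecs, vecs.__getitem__, analysis, lambda img: "smile"))
# [('frame0', 'smile'), ('frame1', 'smile')]
```

Splitting the apparatus into independent callables mirrors the claim's unit-by-unit structure: each unit can be swapped out (e.g. a different classifier) without touching the others.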
Specification