Method and apparatus for recognizing an expression using an expression-gesture dictionary
Abstract
An apparatus for recognizing an expression using an expression-gesture dictionary includes: a learning image acquisitor to obtain data from a learning expression, perform a normalization based on the data, track a change of a dense motion from a reference frame, and generate expression learning data; an expression-gesture dictionary and an expression-gesture dictionary learner to represent and store a numerical value for expression recognition for each expression using a local support map in an image coordinate space for a motion flow with respect to a set of changes of the dense motion; an expression classifier learner to learn an expression classification for each expression based on a weight of data on the expression-gesture dictionary; a recognition image acquisitor to obtain data from a recognition target and generate recognition data; and an expression recognizer to analyze an expression weight on the recognition data and recognize an expression by the expression classifier learner.
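The "change of a dense motion from a reference frame" that both acquisitors compute can be illustrated with a minimal normal-flow approximation of the optical-flow constraint (I_x·u + I_y·v + I_t = 0). This is a dependency-free stand-in for whatever dense tracker the patent contemplates; the function name and approach below are assumptions, not the claimed method:

```python
import numpy as np

def dense_motion_change(neutral, frame, eps=1e-6):
    """Approximate the dense motion of `frame` relative to a neutral
    reference frame via the normal-flow solution of the optical-flow
    constraint I_x*u + I_y*v + I_t = 0 (illustrative stand-in only)."""
    neutral = neutral.astype(float)
    # Spatial gradients of the reference frame (axis 0 = y, axis 1 = x).
    Iy, Ix = np.gradient(neutral)
    # Temporal difference between the current frame and the neutral frame.
    It = frame.astype(float) - neutral
    mag2 = Ix ** 2 + Iy ** 2 + eps
    # Normal flow: motion component along the local intensity gradient.
    u = -It * Ix / mag2
    v = -It * Iy / mag2
    return np.stack([u, v], axis=-1)  # per-pixel (u, v) motion field
```

On a linear intensity ramp shifted by one pixel, the recovered horizontal motion is close to one pixel, which is the behaviour a set of per-frame motion fields ("changes of the dense motion") would be built from.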
7 Claims
1. An apparatus for recognizing an expression using an expression-gesture dictionary, the apparatus comprising:
a learning image acquisitor configured to obtain position data of a face and eyes from a learning expression, perform a first normalization based on the obtained position data, track a change of a dense motion of the learning expression from a reference frame of a neutral expression, and generate expression learning data;
an expression-gesture dictionary and an expression-gesture dictionary learner configured to represent and store a numerical value for expression recognition for each expression according to a dictionary learning method satisfying a given limiting condition using a local support map in an image coordinate space for a motion flow with respect to a set of changes of the dense motion of the learning expression, after initializing the expression-gesture dictionary;
an expression classifier learner configured to learn an expression classification for each expression based on a weight of data on the expression-gesture dictionary;
a recognition image acquisitor configured to obtain position data of a face and eyes from a recognition target, perform a second normalization based on the obtained position data, track a change of a dense motion of the recognition target from the reference frame of the neutral expression, and generate recognition data; and
an expression recognizer configured to analyze an expression weight on data to be recognized, determine a closest classification by the expression classifier learner, and recognize an expression,
wherein the first normalization removes a peripheral region that does not vary with expression from the facial region by giving an offset as a predetermined ratio based on the positions of the two detected eyes after aligning the center of the two eyes to be a reference point, and sets a position coordinate of a feature portion. - View Dependent Claims (2, 3, 4)
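The first-normalization step of claim 1 (align the midpoint of the two detected eyes to a reference point, then crop away the peripheral region using offsets given as a predetermined ratio) can be sketched as follows. The function name, ratio values, and output size are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def normalize_face(image, left_eye, right_eye, out_size=64,
                   x_ratio=0.6, y_ratio=0.8):
    """Crop and rescale a face image so that the midpoint of the two
    detected eyes acts as the reference point; crop offsets are given
    as ratios of the inter-eye distance (ratios are illustrative)."""
    left_eye = np.asarray(left_eye, dtype=float)    # (x, y)
    right_eye = np.asarray(right_eye, dtype=float)
    center = (left_eye + right_eye) / 2.0           # reference point
    d = np.linalg.norm(right_eye - left_eye)        # inter-eye distance

    # Offsets around the reference point as ratios of the eye distance;
    # this removes peripheral regions that do not vary with expression.
    x0 = int(round(center[0] - x_ratio * d))
    x1 = int(round(center[0] + x_ratio * d))
    y0 = int(round(center[1] - 0.5 * y_ratio * d))
    y1 = int(round(center[1] + 1.5 * y_ratio * d))
    crop = image[max(y0, 0):y1, max(x0, 0):x1]

    # Nearest-neighbour rescale to a fixed size (keeps the sketch
    # dependency-free; a real pipeline would interpolate).
    ys = np.linspace(0, crop.shape[0] - 1, out_size).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_size).astype(int)
    return crop[np.ix_(ys, xs)]
```

Because every learning and recognition frame is mapped to the same eye-anchored coordinate frame, the dense-motion fields computed later are spatially comparable across subjects, which is what lets a single dictionary with fixed local support maps apply to all of them.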
5. A method for recognizing an expression using an expression-gesture dictionary, comprising:
obtaining a learning image, including obtaining position data of a face and eyes from a learning expression, performing a first normalization based on the obtained position data, tracking a change of a dense motion from a reference frame of a neutral expression, and generating expression learning data;
learning an expression-gesture dictionary, including representing and storing a numerical value for expression recognition for each expression according to a dictionary learning method satisfying a given limiting condition using a local support map in an image coordinate space for a motion flow with respect to a set of changes of the dense motion for the learning expression, after initializing the expression-gesture dictionary;
learning an expression classifier, including learning an expression classification for each expression based on a weight of data on the expression-gesture dictionary;
obtaining a recognition image, including obtaining position data of a face and eyes from a recognition target, performing a second normalization based on the obtained position data, tracking a change of a dense motion from the reference frame of the neutral expression, and generating recognition data; and
recognizing an expression, including analyzing an expression weight on data to be recognized, determining a closest classification by the expression classifier, and recognizing an expression,
wherein the first normalization removes a peripheral region that does not vary with expression from the facial region by giving an offset as a predetermined ratio based on the positions of the two detected eyes after aligning the center of the two eyes to be a reference point, and sets a position coordinate of a feature portion. - View Dependent Claims (6, 7)
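The dictionary-learning and classification steps of claim 5 — atoms confined to local support maps in the image coordinate space, then classification on the resulting weights — can be sketched as alternating least squares with a hard support mask and a nearest-class-mean classifier. The mask layout, update rule, and classifier below are simplified assumptions standing in for the patent's "given limiting condition":

```python
import numpy as np

rng = np.random.default_rng(0)

def make_support_masks(h, w, n_atoms, block=4):
    """Binary local support maps: each atom is confined to one spatial
    block of the image coordinate space (a simplified limiting condition)."""
    masks = np.zeros((n_atoms, h, w))
    for k in range(n_atoms):
        y = (k * block) % h
        x = ((k * block) // h * block) % w
        masks[k, y:y + block, x:x + block] = 1.0
    return masks.reshape(n_atoms, -1)

def learn_dictionary(flows, masks, n_iter=20):
    """Alternating least-squares dictionary learning in which every atom
    is forced to zero outside its local support map after each update."""
    X = flows.reshape(len(flows), -1)               # samples x pixels
    D = rng.standard_normal(masks.shape) * masks    # init on the support
    for _ in range(n_iter):
        W = X @ np.linalg.pinv(D)                   # per-sample weights
        D = np.linalg.pinv(W) @ X                   # atom update
        D *= masks                                  # enforce local support
        D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-9
    return D

def classify(weights, class_means):
    """Nearest class-mean on dictionary weights (stand-in for the
    learned expression classifier)."""
    dists = np.linalg.norm(weights - class_means, axis=1)
    return int(np.argmin(dists))
```

At recognition time the same projection (`X @ pinv(D)`) yields the "expression weight on data to be recognized", and the weight vector is assigned to the closest learned class.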