ROBOT AND METHOD FOR RECOGNIZING HUMAN FACES AND GESTURES THEREOF
First Claim
1. A method for recognizing human faces and gestures that are suitable for recognizing movement of a specific user to operate a robot, the method comprising:
processing a plurality of face regions within an image sequence captured by the robot through a first classifier, so as to locate a current position of the specific user according to the face regions, the image sequence comprising a plurality of images;
tracking changes of the current position of the specific user and moving the robot based on the current position of the specific user, such that the specific user constantly appears in the image sequence continuously captured by the robot;
analyzing the image sequence to extract a gesture feature of the specific user;
processing the gesture feature through a second classifier to recognize an operating instruction corresponding to the gesture feature; and
controlling the robot to execute an action based on the operating instruction.
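The patent does not disclose source code, so the claimed method can only be illustrated abstractly. Below is a minimal, hypothetical Python sketch of the five claimed steps, with trivial rule-based stand-ins for the first classifier (face detection) and second classifier (gesture recognition); all function names, the frame dictionary format, and the gesture-to-instruction mapping are assumptions for illustration, not the patent's implementation.

```python
from dataclasses import dataclass

def first_classifier(image):
    """Stand-in for the first classifier: return face regions as (x, y, w, h)."""
    return [(r["x"], r["y"], r["w"], r["h"]) for r in image.get("faces", [])]

def second_classifier(gesture_feature):
    """Stand-in for the second classifier: map a gesture feature to an instruction."""
    return {"wave": "come_here", "point_left": "turn_left"}.get(gesture_feature, "idle")

@dataclass
class Robot:
    position: int = 0

    def move_toward(self, x):
        # Step the robot toward the user's horizontal position so the user
        # keeps appearing in the continuously captured image sequence.
        self.position += 1 if x > self.position else -1 if x < self.position else 0

    def execute(self, instruction):
        return f"executing {instruction}"

def recognize_and_act(robot, image_sequence):
    """Sketch of the claimed method over an image sequence of several images."""
    actions = []
    for image in image_sequence:
        faces = first_classifier(image)        # process face regions (first classifier)
        if not faces:
            continue
        x, _, _, _ = faces[0]                  # current position of the specific user
        robot.move_toward(x)                   # track changes and move the robot
        gesture = image.get("gesture")         # extract a gesture feature
        if gesture:
            instruction = second_classifier(gesture)   # recognize the instruction
            actions.append(robot.execute(instruction)) # execute the action
    return actions
```

In practice the first classifier would be a trained face detector and the second a trained gesture classifier; the stubs above only exercise the control flow of the claim.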
Abstract
A robot and a method for recognizing human faces and gestures are provided, and the method is applicable to a robot. In the method, a plurality of face regions within an image sequence captured by the robot are processed by a first classifier, so as to locate a current position of a specific user from the face regions. Changes of the current position of the specific user are tracked to move the robot accordingly. While the current position of the specific user is tracked, a gesture feature of the specific user is extracted by analyzing the image sequence. An operating instruction corresponding to the gesture feature is recognized by processing the gesture feature through a second classifier, and the robot is controlled to execute a relevant action according to the operating instruction.
16 Claims
1. A method for recognizing human faces and gestures that are suitable for recognizing movement of a specific user to operate a robot, the method comprising:
processing a plurality of face regions within an image sequence captured by the robot through a first classifier, so as to locate a current position of the specific user according to the face regions, the image sequence comprising a plurality of images;
tracking changes of the current position of the specific user and moving the robot based on the current position of the specific user, such that the specific user constantly appears in the image sequence continuously captured by the robot;
analyzing the image sequence to extract a gesture feature of the specific user;
processing the gesture feature through a second classifier to recognize an operating instruction corresponding to the gesture feature; and
controlling the robot to execute an action based on the operating instruction.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A robot comprising:
an image extraction apparatus;
a marching apparatus; and
a processing module coupled to the image extraction apparatus and the marching apparatus,
wherein the processing module processes a plurality of face regions within an image sequence captured by the image extraction apparatus through a first classifier, locates a current position of a specific user from the face regions, tracks changes of the current position of the specific user, and controls the marching apparatus to move the robot based on the current position of the specific user so as to ensure that the specific user constantly appears in the image sequence continuously captured by the image extraction apparatus, the image sequence comprising a plurality of images, and
wherein the processing module analyzes the image sequence to extract a gesture feature of the specific user, processes the gesture feature through a second classifier to recognize an operating instruction corresponding to the gesture feature, and controls the robot to execute an action according to the operating instruction.
- View Dependent Claims (10, 11, 12, 13, 14, 15, 16)
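Claim 9 recites the same method as an apparatus: a processing module coupled to an image extraction apparatus (camera) and a marching apparatus (drive unit). As a rough structural illustration only, the sketch below mirrors that coupling with hypothetical stub classes; the class names, the `capture`/`move` interfaces, and the frame format are assumptions, not the patent's disclosed design.

```python
class ImageExtractionApparatus:
    """Hypothetical stand-in for the camera: serves pre-recorded frames."""
    def __init__(self, frames):
        self._frames = iter(frames)

    def capture(self):
        return next(self._frames, None)

class MarchingApparatus:
    """Hypothetical stand-in for the drive unit: records move commands."""
    def __init__(self):
        self.moves = []

    def move(self, direction):
        self.moves.append(direction)

class ProcessingModule:
    """Coupled to both apparatuses, mirroring the structure of claim 9."""
    def __init__(self, camera, drive, first_classifier, second_classifier):
        self.camera = camera
        self.drive = drive
        self.first_classifier = first_classifier
        self.second_classifier = second_classifier
        self.executed = []

    def step(self):
        frame = self.camera.capture()
        if frame is None:
            return False                       # image sequence exhausted
        faces = self.first_classifier(frame)   # locate the specific user
        if faces:
            x = faces[0][0]
            # Steer toward the face so the user stays in the captured sequence.
            self.drive.move("left" if x < 0 else "right" if x > 0 else "stay")
            gesture = frame.get("gesture")     # extract a gesture feature
            if gesture:
                self.executed.append(self.second_classifier(gesture))
        return True
```

A real embodiment would replace the stub classifiers with trained models and the drive stub with motor control; the sketch only shows how the processing module mediates between the two claimed apparatuses.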
Specification