SYNCHRONIZED GESTURE AND SPEECH PRODUCTION FOR HUMANOID ROBOTS
First Claim
1. A computer-implemented method of generating a gesture in a robot, comprising:
receiving a text of a speech including one or more speech elements;
analyzing the speech text by a plurality of pattern modules to identify one or more candidate gestures associated with the speech elements in the speech, each of the plurality of pattern modules configured to apply a different set of rules to identify one or more corresponding candidate gestures;
selecting a gesture from the one or more candidate gestures identified by the plurality of pattern modules; and
generating the selected gesture by controlling actuators in the robot.
Abstract
A system or method for generating gestures in a robot during generation of a speech output by the robot, by analyzing a speech text and selecting appropriate gestures from a plurality of candidate gestures. The speech text is analyzed and tagged with information relevant to generating the gestures. Based on the speech text, the tagged information, and other relevant information, a gesture identifier is selected. A gesture template corresponding to the gesture identifier is retrieved and then processed by adding relevant parameters to generate a gesture descriptor representing a gesture to be performed by the robot. A gesture motion is planned based on the gesture descriptor and an analysis of the timing associated with the speech. Actuator signals for controlling actuators, such as those in the arms and hands, are generated based on the planned gesture motion.
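The pipeline in the abstract (tag the speech text, select a gesture identifier, fill a template with parameters to form a gesture descriptor, then plan a motion) can be sketched as follows. This is an illustrative sketch only; all names (`tag_speech`, `GESTURE_TEMPLATES`, `plan_motion`, the tag vocabulary, and the parameter values) are assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class GestureDescriptor:
    """Gesture identifier plus the parameters filled in from a template."""
    gesture_id: str
    parameters: dict = field(default_factory=dict)

# Hypothetical gesture templates keyed by gesture identifier.
GESTURE_TEMPLATES = {
    "beat": {"joints": ["right_shoulder", "right_elbow"], "amplitude": 0.2},
    "point": {"joints": ["right_arm", "right_index"], "amplitude": 0.5},
}

def tag_speech(text: str) -> list:
    """Tag each word with coarse information relevant to gesture selection."""
    deictic = {"this", "that", "here", "there"}
    return [(w, "deictic" if w.lower().strip(".,!?") in deictic else "other")
            for w in text.split()]

def select_gesture_id(tags) -> str:
    """Select a gesture identifier from the tagged speech text."""
    return "point" if any(t == "deictic" for _, t in tags) else "beat"

def build_descriptor(gesture_id: str, speech_duration_s: float) -> GestureDescriptor:
    """Retrieve the template and add parameters to form a gesture descriptor."""
    template = dict(GESTURE_TEMPLATES[gesture_id])
    template["duration_s"] = speech_duration_s
    return GestureDescriptor(gesture_id, template)

def plan_motion(desc: GestureDescriptor) -> list:
    """Plan per-joint motion targets (a stand-in for real trajectory planning)."""
    return [{"joint": j, "target": desc.parameters["amplitude"],
             "duration_s": desc.parameters["duration_s"]}
            for j in desc.parameters["joints"]]

# End-to-end: a deictic word in the speech text yields a pointing gesture
# whose planned motion spans the speech duration.
tags = tag_speech("Look over there at the red door.")
desc = build_descriptor(select_gesture_id(tags), speech_duration_s=1.8)
signals = plan_motion(desc)
```

In this sketch the speech duration stands in for the timing analysis described in the abstract; a real system would align individual gesture strokes with word-level timing from the speech synthesizer.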
20 Claims
1. A computer-implemented method of generating a gesture in a robot, comprising:
receiving a text of a speech including one or more speech elements;
analyzing the speech text by a plurality of pattern modules to identify one or more candidate gestures associated with the speech elements in the speech, each of the plurality of pattern modules configured to apply a different set of rules to identify one or more corresponding candidate gestures;
selecting a gesture from the one or more candidate gestures identified by the plurality of pattern modules; and
generating the selected gesture by controlling actuators in the robot.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 19)
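Claim 1's plurality of pattern modules, each applying a different set of rules to the same speech text to propose candidate gestures, can be sketched as a set of rule functions whose candidates are pooled before one gesture is selected. The module names, rules, and scores below are illustrative assumptions, not the patent's own.

```python
import re

# Each "pattern module" applies its own rule set to the speech text and
# returns zero or more (gesture, score) candidates. Rules and scores here
# are hypothetical.

def deictic_module(text: str) -> list:
    """Rule set 1: deictic words suggest a pointing gesture."""
    return [("point", 0.9)] if re.search(r"\b(this|that|here|there)\b", text, re.I) else []

def emphasis_module(text: str) -> list:
    """Rule set 2: exclamations suggest a beat gesture."""
    return [("beat", 0.6)] if "!" in text else []

def greeting_module(text: str) -> list:
    """Rule set 3: greeting words suggest a wave."""
    return [("wave", 0.8)] if re.search(r"\b(hello|hi|goodbye)\b", text, re.I) else []

PATTERN_MODULES = [deictic_module, emphasis_module, greeting_module]

def select_gesture(text: str):
    """Pool candidates from all modules and pick the highest-scoring one."""
    candidates = [c for module in PATTERN_MODULES for c in module(text)]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])[0]
```

For example, `select_gesture("Hello! Look over there.")` collects candidates from all three modules and keeps the highest-scoring one; a highest-score rule is just one plausible selection policy for the claim's "selecting a gesture" step.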
11. A robot configured to generate a gesture, comprising:
a gesture generator configured to:
receive a text of a speech including one or more speech elements;
analyze the speech text by a plurality of pattern modules to identify one or more candidate gestures associated with the speech elements in the speech, each of the plurality of pattern modules configured to apply a different set of rules to identify one or more corresponding candidate gestures; and
select a gesture from the one or more candidate gestures identified by the plurality of pattern modules;
a motion generator configured to generate control signals based on the selected gesture; and
at least one actuator configured to cause relative movements of parts of the robot according to the control signals.
(Dependent claims: 12, 13, 14, 15, 16, 17, 18)
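The three components recited in claim 11 (a gesture generator that selects a gesture from the speech text, a motion generator that turns it into control signals, and an actuator that moves the robot's parts) can be sketched as a minimal class-per-component design. The class names, joint names, and signal values are assumptions for illustration.

```python
class GestureGenerator:
    """Analyzes the speech text and selects a gesture (trivial rule here)."""
    def select(self, text: str) -> str:
        return "point" if "there" in text.lower() else "beat"

class MotionGenerator:
    """Maps the selected gesture to (joint, target) control signals."""
    def control_signals(self, gesture: str) -> list:
        table = {
            "point": [("right_shoulder", 0.5), ("right_elbow", 0.3)],
            "beat": [("right_elbow", 0.1)],
        }
        return table[gesture]

class Actuator:
    """Applies control signals by moving the corresponding robot parts."""
    def __init__(self):
        self.positions = {}
    def apply(self, joint: str, target: float) -> None:
        self.positions[joint] = target

# Wire the components together as the claim does:
gesture = GestureGenerator().select("Look over there.")
actuator = Actuator()
for joint, target in MotionGenerator().control_signals(gesture):
    actuator.apply(joint, target)
```

The separation mirrors the claim: gesture selection is decoupled from motion generation, so either component can be replaced (e.g., a learned selector or a physics-based trajectory planner) without touching the other.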
20. A non-transitory computer readable storage medium for recognizing verbal commands, the computer readable storage medium structured to store instructions that, when executed, cause a processor to:
receive a text of a speech including one or more speech elements;
analyze the speech text by a plurality of pattern modules to identify one or more candidate gestures associated with the speech elements in the speech, each of the plurality of pattern modules configured to apply a different set of rules to identify one or more corresponding candidate gestures;
select a gesture from the one or more candidate gestures identified by the plurality of pattern modules; and
generate the selected gesture by controlling actuators in a robot.
Specification