Method and apparatus for translating hand gestures
2 Assignments
0 Petitions
Abstract
A sign language recognition apparatus and method are provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand, arm and shoulder to measure dynamic and static gestures. The sensors are connected to a microprocessor that searches a library of gestures and generates output signals, which are then used to produce a synthesized voice or written text. Accelerometers on each finger and thumb, together with two accelerometers on the back of the hand, detect motion and orientation of the hand. Also provided are a sensor on the back of the hand or wrist to detect forearm rotation, an angle sensor to detect flexing of the elbow, two sensors on the upper arm to detect arm elevation and rotation, and a sensor on the upper arm to detect arm twist. The sensors transmit their data to the microprocessor, which determines the shape, position and orientation of the hand relative to the body of the user.
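The abstract's use of hand-mounted accelerometers to detect orientation can be sketched as follows: when the hand is quasi-static, gravity dominates the accelerometer reading, so pitch and roll follow from simple trigonometry on the measured gravity vector. The function name and axis convention below are illustrative assumptions, not taken from the patent.

```python
import math

def hand_orientation(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Estimate pitch and roll (in degrees) of a hand-mounted accelerometer
    from a quasi-static reading (ax, ay, az) in units of g.

    Assumes gravity dominates the measurement, i.e. the hand is not
    accelerating rapidly; the axis convention is an illustrative choice.
    """
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Hand tilted so the x-axis points straight down: 90 degrees of pitch.
print(hand_orientation(-1.0, 0.0, 0.0))  # → (90.0, 0.0)
```

A real implementation would fuse these static estimates with the dynamic (motion) component of the accelerometer signal, which a gravity-only calculation cannot separate on its own.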
162 Citations
9 Claims
1. A sign language recognition apparatus comprising an input assembly for detecting sign language, a computer connected to said input assembly and generating an output signal for producing a visual or audible output corresponding to said sign language, said input assembly comprising:

a glove to be worn by a user, said glove having sensors for detecting dynamic hand movements of each finger and thumb;

an elbow sensor for detecting and measuring flexing and positioning of the forearm about the elbow; and

a shoulder sensor for detecting movement and position of the arm with respect to the shoulder, wherein said input assembly further comprises a frame having a first section for coupling to the upper arm of the user and a second section for coupling to the forearm of the user, said first and second sections being coupled together by a hinge, said elbow sensor being positioned on said frame for measuring flexing and positioning of the forearm. (Dependent claims: 2, 3, 4, 5, 6)

7. A method for translating a sign into a phoneme, comprising:
determining an initial and final pose of the sign, and a movement of the sign occurring between the initial and final pose, each pose of the sign comprising a posture part and a hand location part;

matching a determined initial posture of the sign against the initial postures of all known signs, and defining a first list of candidate signs as those signs whose posture matches the determined initial posture or, if there is only one match, returning a first most likely sign corresponding to the match;

matching a captured initial hand location of the sign against the hand locations of the first list of candidate signs, and defining a second list of candidate signs as those signs whose hand locations match the determined initial hand location or, if there is only one match, returning a second most likely sign corresponding to the match;

matching a captured movement of the sign against the movements of the second list of candidate signs, and defining a third list of candidate signs as those signs whose movements match the determined movement or, if there is only one match, returning a third most likely sign corresponding to the match;

matching a determined final posture of the sign against the postures of the third list of candidate signs, and defining a fourth list of candidate signs as those signs whose final posture matches the determined final posture or, if there is only one match, returning a fourth most likely sign corresponding to the match;

matching a determined final hand location of the sign against the hand locations of the fourth list of candidate signs, and returning a fifth most likely sign corresponding to the match; and

converting the first, second, third, fourth or fifth sign into a stream of ASCII characters to be displayed as text and/or sent to a voice synthesizer to be reproduced as speech. (Dependent claims: 8, 9)
Specification