METHOD AND APPARATUS FOR TRANSLATING HAND GESTURES
Abstract
A sign language recognition apparatus and method is provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand, arm and shoulder to measure dynamic and static gestures. The sensors are connected to a microprocessor to search a library of gestures and generate output signals that can then be used to produce a synthesized voice or written text. The apparatus includes sensors such as accelerometers on the fingers and thumb and two accelerometers on the back of the hand to detect motion and orientation of the hand. Sensors are also provided on the back of the hand or wrist to detect forearm rotation, an angle sensor to detect flexing of the elbow, two sensors on the upper arm to detect arm elevation and rotation, and a sensor on the upper arm to detect arm twist. The sensors transmit the data to the microprocessor to determine the shape, position and orientation of the hand relative to the body of the user.
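The sensor arrangement described in the abstract can be pictured as a single snapshot of readings that the microprocessor reduces to hand shape, position, and orientation. A minimal sketch follows; every field and function name is an assumption chosen for illustration, not a term from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorSnapshot:
    # Hypothetical fields mirroring the sensors listed in the abstract.
    finger_accel: tuple      # accelerometers on the fingers and thumb
    hand_accel: tuple        # two accelerometers on the back of the hand
    forearm_rotation: float  # sensor on the back of the hand or wrist
    elbow_angle: float       # angle sensor detecting elbow flex
    arm_elevation: float     # upper-arm sensor (elevation)
    arm_rotation: float      # upper-arm sensor (rotation)
    arm_twist: float         # upper-arm sensor (twist)

def hand_pose_features(s: SensorSnapshot) -> tuple:
    # The microprocessor combines the readings to estimate the shape,
    # position and orientation of the hand relative to the body;
    # simplified here to a flat feature tuple used for library search.
    return s.finger_accel + s.hand_accel + (
        s.forearm_rotation, s.elbow_angle,
        s.arm_elevation, s.arm_rotation, s.arm_twist)
```

A gesture library search would then compare such feature tuples against stored gesture templates.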
22 Claims
1. A gesture recognition apparatus for use by a user, comprising:

an input assembly detecting movement and position of each finger with respect to the user's hand, and movement and position of the user's hand with respect to the user's body, said input assembly generating at least one value corresponding to one or more phonemes; and

a word storage device for storing words, each word associated with a stored value corresponding to a sequence of one or more phonemes, receiving said at least one value from said input assembly, matching said at least one received value with the stored value, and producing an output value corresponding to the word associated with the matched stored value.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10
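The word storage device in claim 1 amounts to a lookup from phoneme-sequence values to stored words. A minimal sketch, assuming a dictionary keyed by phoneme tuples; the phoneme symbols and the `WORD_STORE` contents are illustrative, not drawn from the patent.

```python
# Hypothetical word store: each stored value is a sequence of phonemes
# associated with a word (symbols here are illustrative).
WORD_STORE = {
    ("HH", "EH", "L", "OW"): "hello",
    ("Y", "EH", "S"): "yes",
}

def match_word(phoneme_values):
    """Match the received phoneme values against the stored values and
    return the word associated with the matched stored value, or None
    when no stored value matches."""
    return WORD_STORE.get(tuple(phoneme_values))
```

In the apparatus, the returned word would drive the synthesized-voice or written-text output.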
11. A gesture recognition apparatus comprising:

an input assembly comprising a glove having hand sensors for detecting hand movement, an elbow sensor for detecting forearm movement, an arm sensor for detecting arm orientation, and a shoulder sensor for detecting arm rotation, said input assembly further comprising a frame having an upper arm section and a forearm section, said upper arm section and said forearm section coupled together by a hinge; and

a computer connected to said input assembly and generating an output signal for producing a visual or audible output corresponding to the detected hand movement, forearm movement, or arm rotation.

Dependent claims: 12, 13, 14, 15, 16, 21, 22
17. A method for translating a user's gesture of a signed language composed of an initial pose, movement, hand location, and a final pose, the method comprising:

determining the initial pose, the hand location and the final pose of the gesture, and a movement of the gesture, the movement occurring between the initial pose and the final pose;

matching the determined initial pose with one or more initial poses of all known gestures, and defining a first list of candidate gestures as those whose pose matches the determined initial pose or, if there is only one match, returning a first most likely gesture corresponding to the match;

matching the determined hand location with one or more hand locations of the first list of candidate gestures, and defining a second list of candidate gestures as those whose hand locations match the determined hand location or, if there is only one match, returning a second most likely gesture corresponding to the match;

matching the determined movement with one or more movements of the second list of candidate gestures, and defining a third list of candidate gestures as those whose movements match the determined movement or, if there is only one match, returning a third most likely gesture corresponding to the match;

matching the determined final pose with one or more poses of the third list of candidate gestures, and defining a fourth list of candidate gestures as those whose final pose matches the determined final pose or, if there is only one match, returning a fourth most likely gesture corresponding to the match;

matching the determined final hand location of the gesture with a hand location of the fourth list of candidate gestures, and returning a fifth most likely gesture corresponding to the match; and

converting the first, second, third, fourth or fifth gesture into a stream of ASCII characters to be displayed as text and/or sent to a voice synthesizer to be reproduced as speech.

Dependent claims: 18, 19, 20
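The matching steps of claim 17 form a cascade: each stage narrows the candidate list by one attribute and returns early as soon as a single gesture remains. A minimal sketch, assuming gestures are stored as dictionaries; the attribute names, the `gloss` field, and the gesture records are illustrative assumptions, not terms from the patent.

```python
def translate_gesture(observed, known_gestures):
    """Cascaded matching per claim 17: filter candidates by initial pose,
    hand location, movement, final pose, then final hand location,
    returning the most likely gesture as soon as one candidate remains."""
    stages = ("initial_pose", "hand_location", "movement",
              "final_pose", "final_hand_location")
    candidates = list(known_gestures)
    for attr in stages:
        candidates = [g for g in candidates if g[attr] == observed[attr]]
        if len(candidates) == 1:
            # Only one match at this stage: return it as the most
            # likely gesture without evaluating the remaining stages.
            return candidates[0]["gloss"]
    return candidates[0]["gloss"] if candidates else None
```

The returned gloss would then be converted to a character stream for text display or passed to a voice synthesizer.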
Specification