Machine based sign language interpreter
Abstract
A computer implemented method for performing sign language translation based on movements of a user is provided. A capture device detects motions defining gestures and detected gestures are matched to signs. Successive signs are detected and compared to a grammar library to determine whether the signs assigned to gestures make sense relative to each other and to a grammar context. Each sign may be compared to previous and successive signs to determine whether the signs make sense relative to each other. The signs may further be compared to user demographic information and a contextual database to verify the accuracy of the translation. An output of the match between the movements and the sign is provided.
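The abstract's check of successive signs against a grammar library can be sketched as follows. This is a minimal illustration only: the bigram pair table, the boost/penalty factors, and all sign names are assumptions, not the patent's implementation.

```python
# Minimal sketch: each detected gesture yields candidate signs with base
# probability weights; a bigram "grammar library" of plausible sign pairs
# rescales each candidate against the previously chosen sign.
# GRAMMAR_PAIRS and the 1.2 / 0.8 factors are illustrative assumptions.

GRAMMAR_PAIRS = {("I", "WANT"), ("WANT", "DRINK"), ("YOU", "GO")}

def grammar_factor(prev_sign, sign):
    """Boost a sign that forms a plausible pair with its predecessor."""
    return 1.2 if (prev_sign, sign) in GRAMMAR_PAIRS else 0.8

def translate(gesture_candidates):
    """gesture_candidates: one {sign: base_weight} dict per detected gesture."""
    output = []
    for candidates in gesture_candidates:
        prev = output[-1] if output else None
        best = max(
            candidates,
            key=lambda s: candidates[s] * (grammar_factor(prev, s) if prev else 1.0),
        )
        output.append(best)
    return output
```

For example, a second gesture ambiguous between WANT and WASH resolves to WANT after an initial I, because (I, WANT) is in the pair table even though WASH has the higher base weight.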
19 Claims
1. A computer implemented method for interpreting sign language, comprising:

capturing a scene using a capture device, the scene including a human target;
tracking movements of the human target in the scene;
detecting one or more gestures of the human target in the scene;
comparing the one or more gestures to a library of sign language signs;
determining a match between the one or more gestures and one or more signs by adjusting a probability weight of each sign based on acquired individual profiles of user motion for users, applying known tendencies to motion and gesture detection, and grammatical information, and comparing each probability weight against other signs likely to be assigned to a detected gesture; and
displaying an output comprising a written language display of a visual translation of the one or more signs on a display device.

Dependent claims: 2–9.
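The determining step of claim 1 can be read as a per-user multiplier applied to each candidate sign's probability weight before the weights are compared against one another. The sketch below is a hypothetical reading; the profile format and tendency factors are assumptions for illustration, not the claimed implementation.

```python
# Hypothetical sketch of claim 1's determining step: an acquired individual
# profile (known tendencies of this user's motion) scales each candidate
# sign's probability weight, and the highest adjusted weight is the match.

def adjust_weights(candidates, profile):
    """candidates: {sign: base_weight}; profile: {sign: tendency_factor}."""
    return {sign: w * profile.get(sign, 1.0) for sign, w in candidates.items()}

def best_sign(candidates, profile):
    """Compare each adjusted weight against the other candidate signs."""
    adjusted = adjust_weights(candidates, profile)
    return max(adjusted, key=adjusted.get)
```

With an empty profile a 0.72-weight GOODBYE beats a 0.70-weight HELLO; a profile factor of 1.1 for HELLO reverses the choice.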
10. A computer storage device including instructions for programming a processing device to perform a series of steps, comprising:

capturing a scene using a capture device, the scene including a human target;
tracking movements of a human target in the scene;
detecting a plurality of gestures of the human target in the scene;
assigning a first sign to a detected gesture by assigning a probability weight indicating strength of the match between the detected gesture and the first sign;
assigning a second sign to an adjacent detected gesture by assigning a probability weight indicating the strength of the match between the adjacent detected gesture and the second sign;
acquiring individual profiles of user motion for human targets, applying known tendencies to motion and gesture detection for each human target;
comparing the first sign and the second sign to verify accuracy of the second sign, including applying profile information of a detected human target to adjust a probability weight assigned for each sign; and
generating an output to a display device indicating a written language visual translation of the first and second signs on the display device.

Dependent claims: 11–16.
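Claim 10's verification step, in which the second sign is checked against the already-assigned first sign using profile information, can be sketched as below. The pair-plausibility table and profile format are illustrative assumptions.

```python
# Sketch of claim 10's comparing step: the second gesture's candidate signs
# are re-weighted by the user's profile (known motion tendencies) and by how
# plausibly each candidate follows the first assigned sign. The pair_boost
# table and factor values are hypothetical, not from the patent.

def verify_second_sign(first_sign, second_candidates, pair_boost, profile):
    """Return the best second sign given the first sign already assigned."""
    adjusted = {}
    for sign, weight in second_candidates.items():
        weight *= profile.get(sign, 1.0)                    # user tendency
        weight *= pair_boost.get((first_sign, sign), 1.0)   # adjacency check
        adjusted[sign] = weight
    return max(adjusted, key=adjusted.get)
```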
17. An image capture system including:

a capture device including an RGB sensor and a depth sensor;
a host device, the host device providing a video output, the host device including a processor and instructions for programming the processor to perform a method comprising:
tracking movements of a human target in a scene acquired by the capture device;
detecting one or more gestures of the human target in the scene;
comparing the one or more gestures to a library of sign language signs and assigning a first sign to a first detected gesture, assigning a second sign to a second detected gesture, and assigning a third sign after the second sign to a third detected gesture, each said assigning including assigning a probability weight indicating strength of a match between the gesture and each said sign;
comparing the first sign and the third sign to the second sign to determine the accuracy of the second sign relative to the first and third signs, said comparing including assigning a probability weight to each of the first and third signs indicating a strength of the match between the gesture and the sign based on a comparison to the second sign;
acquiring individual profiles of user motion for users, applying known tendencies to motion and gesture detection for a user to increase accuracy of gesture detection and sign language translation, each said comparing step using information from a human target profile in assigning a probability weight; and
generating an output comprising a written language display of a meaning of the first, second and third signs based on the comparing steps.

Dependent claims: 18–19.
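Claim 17 widens the window to three signs: the middle sign is checked against both its predecessor and its successor. A minimal sketch, assuming a hypothetical pair-plausibility table:

```python
# Sketch of claim 17's three-sign comparison: the middle (second) sign's
# candidates are rescored by agreement with both the preceding first sign
# and the following third sign. pair_score is an assumed plausibility
# table, not part of the patent's disclosure.

def rescore_middle_sign(first_sign, middle_candidates, third_sign, pair_score):
    """Pick the middle sign most consistent with both neighboring signs."""
    adjusted = {}
    for sign, weight in middle_candidates.items():
        weight *= pair_score.get((first_sign, sign), 1.0)
        weight *= pair_score.get((sign, third_sign), 1.0)
        adjusted[sign] = weight
    return max(adjusted, key=adjusted.get)
```

A candidate that fits both neighbors can overtake one with a higher base weight that fits neither, which is the accuracy check the claim describes.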
Specification