MACHINE BASED SIGN LANGUAGE INTERPRETER
First Claim
1. A computer implemented method for interpreting sign language, comprising:
capturing a scene using a capture device, the scene including a human target;
tracking movements of the human target in the scene;
detecting one or more gestures of the human target in the scene;
comparing the one or more gestures to a library of sign language signs;
determining a match between the one or more gestures and one or more signs; and
generating an output indicating a visual translation of the one or more signs.
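As an illustration only (not part of the patent text), the capture-track-detect-match-output pipeline of claim 1 can be sketched as follows; the `Gesture` class, `SIGN_LIBRARY` contents, and feature keys are all hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    """A detected movement of the human target, reduced to a feature key."""
    feature_key: str

# Hypothetical library of sign language signs, keyed by gesture features.
SIGN_LIBRARY = {
    "flat-hand-chin-forward": "THANK-YOU",
    "index-circle-chest": "PLEASE",
}

def interpret(gestures):
    """Compare each detected gesture to the library, determine matches,
    and generate an output indicating a visual translation (here, text)."""
    signs = []
    for g in gestures:
        sign = SIGN_LIBRARY.get(g.feature_key)
        if sign is not None:  # a match was determined
            signs.append(sign)
    return " ".join(signs)
```

For example, `interpret([Gesture("flat-hand-chin-forward")])` yields the translation `"THANK-YOU"`.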
Abstract
A computer implemented method for performing sign language translation based on movements of a user is provided. A capture device detects motions defining gestures and detected gestures are matched to signs. Successive signs are detected and compared to a grammar library to determine whether the signs assigned to gestures make sense relative to each other and to a grammar context. Each sign may be compared to previous and successive signs to determine whether the signs make sense relative to each other. The signs may further be compared to user demographic information and a contextual database to verify the accuracy of the translation. An output of the match between the movements and the sign is provided.
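The grammar-context check the abstract describes, in which each sign is compared to its previous and successive signs, might look like the following minimal sketch; the `GRAMMAR_BIGRAMS` set and sign glosses are invented for illustration and do not come from the patent:

```python
# Toy grammar library: pairs of sign glosses that make sense in sequence.
GRAMMAR_BIGRAMS = {
    ("I", "GO"), ("GO", "STORE"), ("I", "WANT"), ("WANT", "EAT"),
}

def makes_sense(prev_sign, sign, next_sign):
    """Return True if `sign` makes sense relative to its neighbours
    under the (toy) grammar library; None means no neighbour to check."""
    ok = True
    if prev_sign is not None:
        ok = ok and (prev_sign, sign) in GRAMMAR_BIGRAMS
    if next_sign is not None:
        ok = ok and (sign, next_sign) in GRAMMAR_BIGRAMS
    return ok
```

A real system would extend this with the demographic and contextual-database comparisons the abstract mentions; this sketch shows only the neighbour check.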
20 Claims
1. A computer implemented method for interpreting sign language, comprising:
capturing a scene using a capture device, the scene including a human target;
tracking movements of the human target in the scene;
detecting one or more gestures of the human target in the scene;
comparing the one or more gestures to a library of sign language signs;
determining a match between the one or more gestures and one or more signs; and
generating an output indicating a visual translation of the one or more signs.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
10. A computer readable medium including instructions for programming a processing device to perform a series of steps, comprising:
capturing a scene using a capture device, the scene including a human target;
tracking movements of the human target in the scene;
detecting a plurality of gestures of the human target in the scene;
assigning a first sign to a detected gesture;
assigning a second sign to an adjacent detected gesture;
comparing the first and the second sign to verify the accuracy of the second sign; and
generating an output indicating a visual translation of the first and second signs.
Dependent claims: 11, 12, 13, 14, 15, 16.
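The adjacent-sign verification recited in claim 10, where the first sign is used to verify the accuracy of the second, could be sketched as below; the `BIGRAMS` set, candidate lists, and fallback policy are assumptions for illustration, not the patented method:

```python
# Toy set of sign pairs known to occur together.
BIGRAMS = {("MY", "NAME"), ("NAME", "WHAT")}

def verify_second(first_sign, second_candidates):
    """Given the first sign and ranked candidate signs for the adjacent
    gesture, return the first candidate that forms a known pair with
    the first sign; otherwise keep the top-ranked candidate."""
    for cand in second_candidates:
        if (first_sign, cand) in BIGRAMS:
            return cand
    return second_candidates[0]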
17. An image capture system including:
a capture device including an RGB sensor and a depth sensor;
a host device, the host device providing a video output, the host device including a processor and instructions for programming the processor to perform a method comprising:
tracking movements of a human target in a scene acquired by the capture device;
detecting one or more gestures of the human target in the scene;
comparing the one or more gestures to a library of sign language signs and assigning a first sign to a first detected gesture, assigning a second sign to a second detected gesture, and assigning a third sign after the second sign to a third detected gesture;
comparing the first sign and the third sign to the second sign to determine the accuracy of the second sign relative to the first and third signs; and
generating an output reflecting a meaning of the first, second and third signs.
Dependent claims: 18, 19, 20.
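Claim 17's three-sign check, in which the first and third signs are compared to the second to judge its accuracy, might be sketched like this; the `TRIGRAMS` set and candidate-resolution policy are illustrative assumptions only:

```python
# Toy set of three-sign sequences known to make sense together.
TRIGRAMS = {("I", "WANT", "COFFEE"), ("YOU", "LIKE", "TEA")}

def resolve_second(first, second_candidates, third):
    """Use the signs before and after a gesture to pick the candidate
    middle sign they support; fall back to the top-ranked candidate."""
    for cand in second_candidates:
        if (first, cand, third) in TRIGRAMS:
            return cand
    return second_candidates[0]  # no trigram support: keep best guess
```

Here the outer signs disambiguate the middle one: `resolve_second("I", ["WALK", "WANT"], "COFFEE")` prefers `"WANT"` because `("I", "WANT", "COFFEE")` is a known sequence.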
Specification