Systems and methods for sign language recognition
First Claim
1. A wearable system for sign language recognition, the wearable system comprising:
a head-mounted display configured to present virtual content to a user;
an audio sensor configured to detect audio from an environment of the user;
an outward-facing imaging system configured to image the environment of the user; and
a hardware processor in communication with the head-mounted display and the imaging system, and programmed to:
receive a plurality of images captured by the outward-facing imaging system;
detect at least one set of hands in the plurality of images with an object recognizer;
determine a relative size of the at least one set of hands;
identify a source of the at least one set of hands among a plurality of persons based on the relative size of the at least one set of hands;
detect at least one gesture by the at least one set of hands in the plurality of images with the object recognizer;
determine that the source of the at least one gesture belongs to a person other than the user;
recognize a meaning of the at least one gesture in a sign language in response to the determination;
identify a target language based on contextual information associated with the user, wherein the contextual information comprises:
speech of the user obtained from analysis of audio data obtained by the audio sensor;
translate the at least one gesture into the target language based on the recognized meaning and further based on determining that the source of the at least one gesture belongs to the person other than the user;
generate virtual content based at least partly on a translation of the gesture into the target language; and
cause the head-mounted display to render the virtual content to the user.
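The processing steps recited in claim 1 can be read as a pipeline: detect hands, use the relative size of each set of hands to decide whether they belong to the wearer or to another person, and translate only the other person's gestures. The sketch below illustrates that pipeline under stated assumptions; the names (HandDetection, classify_source, process_frames), the 0.5 size threshold, and the toy sign dictionary are all hypothetical, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HandDetection:
    """One set of hands found by the object recognizer in an image."""
    bbox_area: float        # apparent (pixel) area of the hand region
    gesture: Optional[str]  # recognized gesture label, if any

def classify_source(detection: HandDetection, user_hand_area: float) -> str:
    # Assumed heuristic: the wearer's own hands sit closest to the
    # outward-facing camera, so they appear largest; markedly smaller
    # hand regions likely belong to another person in the scene.
    return "user" if detection.bbox_area >= 0.5 * user_hand_area else "other"

def process_frames(detections: List[HandDetection], user_hand_area: float,
                   sign_dictionary: dict, target_language: str, translate) -> List[str]:
    """Produce virtual-content strings for gestures signed by other persons."""
    content = []
    for det in detections:
        if det.gesture is None:
            continue
        # Translate only gestures whose source is a person other than the user.
        if classify_source(det, user_hand_area) != "other":
            continue
        meaning = sign_dictionary.get(det.gesture)  # meaning in the sign language
        if meaning is not None:
            content.append(translate(meaning, target_language))
    return content

# Toy data: a one-entry sign dictionary and a one-entry translation table.
signs = {"flat_hand_wave": "hello"}
to_target = lambda text, lang: {"es": {"hello": "hola"}}[lang][text]
frames = [HandDetection(bbox_area=900.0, gesture="flat_hand_wave"),   # distant signer
          HandDetection(bbox_area=5000.0, gesture="flat_hand_wave")]  # the user's own hands
```

Note how the same gesture is translated or ignored depending solely on the inferred source, which mirrors the claim's requirement that translation be conditioned on the gesture belonging to a person other than the user.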
Abstract
A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
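The abstract's second feature, recognizing environmental text, changing its content or display characteristics, and rendering the result over the original, can be illustrated with a minimal sketch. The RecognizedText fields, the word-substitution table, and the magnification factor below are assumptions for illustration, not the patent's method.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RecognizedText:
    content: str    # text recognized in the environment
    font_size: int  # display size of the virtual replacement
    bbox: tuple     # image region of the original text

# Toy word substitutions standing in for whatever content changes the system makes.
SIMPLIFICATIONS = {"utilize": "use", "approximately": "about"}

def modify_text(recognized: RecognizedText, magnify: float = 1.5) -> RecognizedText:
    """Change the content and display characteristics of recognized text; the
    result would be rendered over the original bbox so as to occlude it."""
    words = [SIMPLIFICATIONS.get(w, w) for w in recognized.content.split()]
    return replace(recognized,
                   content=" ".join(words),
                   font_size=int(recognized.font_size * magnify))
```

Keeping the bounding box unchanged while altering content and size reflects the occlusion idea: the virtual text is placed exactly where the original text was detected.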
20 Claims
1. A wearable system for sign language recognition, the wearable system comprising:
(Claim 1 as recited above.) Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.
14. A method for sign language recognition, the method comprising:
under control of a wearable system comprising a head-mounted display configured to present virtual content to a user of the wearable system:
receiving an image captured by an imaging system of the wearable system;
analyzing the image to detect a gesture of a person;
determining a relative size of a set of hands associated with the gesture of the person;
identifying a source of the gesture of the person based on the relative size;
determining that the source of the gesture of the person belongs to a person other than the user;
detecting a presence of a communication in a sign language based at least partly on the detected gesture;
recognizing a meaning of the gesture in the sign language if the gesture of the person is not the gesture of the user;
identifying a target language into which the gesture will be translated, wherein identifying the target language comprises:
analyzing audio data obtained by the wearable system to detect speech of the user;
translating the gesture into the target language based on the recognized meaning and further based on determining that the source of the gesture of the person belongs to a person other than the user;
generating virtual content based at least partly on a translation of the gesture into the target language; and
causing the head-mounted display to render the virtual content to the user.
Dependent claims: 15, 16, 17, 18, 19, 20.
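Claim 14 identifies the target language by analyzing audio data to detect the user's speech. A minimal sketch of that step, assuming an upstream speech recognizer has already tagged each detected utterance with its language (the function name and the majority-vote policy are illustrative assumptions):

```python
from collections import Counter
from typing import List

def identify_target_language(utterance_languages: List[str],
                             default: str = "en") -> str:
    """Pick the language the user speaks most often as the translation target."""
    if not utterance_languages:
        return default  # no user speech detected yet; fall back to a default
    # most_common(1) yields a single (language, count) pair.
    (language, _count), = Counter(utterance_languages).most_common(1)
    return language
```

A majority vote over recent utterances is just one plausible way to derive a target language from speech-based contextual information; the claim itself does not fix a particular policy.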
Specification