SENSORY EYEWEAR
First Claim
1. A wearable system for sign language recognition, the wearable system comprising:
a head-mounted display configured to present virtual content to a user;
an imaging system configured to image an environment of the user; and
a hardware processor in communication with the head-mounted display and the imaging system, and programmed to:
receive an image captured by the imaging system;
detect a gesture in the image with an object recognizer;
recognize a meaning of the gesture in a sign language;
identify a target language based on contextual information associated with the user;
translate the gesture into the target language based on the recognized meaning;
generate virtual content based at least partly on a translation of the gesture into the target language; and
cause the head-mounted display to render the virtual content to the user.
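The processing steps recited in claim 1 (after gesture detection) can be sketched as a simple pipeline. This is a minimal illustration, not the patent's implementation; the `Gesture` type, the lookup tables standing in for the recognizer and translator, and `identify_target_language` are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    """A detected hand gesture and the sign language it belongs to."""
    label: str          # e.g. "thank_you"
    sign_language: str  # e.g. "ASL"

# Hypothetical lookup tables standing in for the claim's meaning
# recognizer and translator.
MEANINGS = {("ASL", "thank_you"): "thank you"}
TRANSLATIONS = {("thank you", "es"): "gracias"}

def identify_target_language(context: dict) -> str:
    # The claim bases the target language on contextual information
    # associated with the user (e.g. a stored language preference).
    return context.get("preferred_language", "en")

def process_frame(gesture: Gesture, context: dict) -> str:
    """Claim 1 steps after detection: recognize the gesture's meaning,
    identify a target language, translate, and build the virtual
    content (here, a caption string) for the head-mounted display."""
    meaning = MEANINGS[(gesture.sign_language, gesture.label)]
    target = identify_target_language(context)
    translated = meaning if target == "en" else TRANSLATIONS[(meaning, target)]
    return f"[{gesture.sign_language}] {translated}"

print(process_frame(Gesture("thank_you", "ASL"), {"preferred_language": "es"}))
# → [ASL] gracias
```

A user with no stored preference would fall back to the default language, yielding `[ASL] thank you` for the same gesture.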
3 Assignments
0 Petitions
Abstract
A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
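The abstract's second feature, recognizing environmental text, modifying it, and rendering the result over the original, can be modeled in miniature. The `TextRegion` type and `modify_text` helper below are hypothetical illustrations, not the patent's API.

```python
from dataclasses import dataclass

@dataclass
class TextRegion:
    """Recognized text and its bounding box in the camera image."""
    x: int
    y: int
    w: int
    h: int
    content: str

def modify_text(region: TextRegion, replacement=None, scale=1.0) -> TextRegion:
    """Return a region with changed content and/or display size,
    anchored at the original box so the rendered overlay occludes
    the original text, as described in the abstract."""
    return TextRegion(
        region.x, region.y,
        int(region.w * scale), int(region.h * scale),
        replacement if replacement is not None else region.content,
    )

sign = TextRegion(10, 20, 200, 40, "SALIDA")
overlay = modify_text(sign, replacement="EXIT", scale=1.5)
print(overlay.content, overlay.w)  # → EXIT 300
```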
91 Citations
20 Claims
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.
14. A method for sign language recognition, the method comprising:
receiving an image captured by an imaging system;
analyzing the image to detect a gesture of a user;
detecting a presence of a communication in a sign language based at least partly on the detected gesture;
recognizing a meaning of the gesture in the sign language;
identifying a target language into which the gesture will be translated;
translating the gesture into the target language based on the recognized meaning;
generating virtual content based at least partly on a translation of the gesture into the target language; and
causing a head-mounted display to render the virtual content to the user.
Dependent claims: 15, 16, 17, 18, 19, 20.
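Claim 14 adds a step absent from claim 1: detecting the presence of sign-language communication before attempting recognition. One way to sketch that gate (purely illustrative; the thresholding scheme and parameter names are assumptions, not the patent's method) is to require a run of consecutive frames with a confidently detected gesture:

```python
def is_sign_communication(scores, threshold=0.6, min_frames=3):
    """Presence-detection gate for claim 14's detecting step: treat
    the stream as sign-language communication only when at least
    `min_frames` consecutive frames contain a gesture whose detector
    confidence meets `threshold`; isolated hand motions are ignored."""
    run = 0
    for score in scores:
        run = run + 1 if score >= threshold else 0
        if run >= min_frames:
            return True
    return False

print(is_sign_communication([0.2, 0.7, 0.8, 0.9]))  # → True
print(is_sign_communication([0.7, 0.1, 0.8, 0.2]))  # → False
```

The second call returns False because confident detections never occur on enough consecutive frames, which is the distinction between deliberate signing and incidental gestures that the detecting step draws.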
Specification