INTELLIGENT TRANSLATIONS IN PERSONAL SEE THROUGH DISPLAY
First Claim
1. A method for presenting a translation of a real world expression to a wearer of a see-through head mounted display apparatus, comprising:
- determining a gaze of a wearer looking through the see-through display of the apparatus;
- determining a three dimensional location of one or more objects in the field of view of the wearer through the see-through display, the determining of the three dimensional location being performed using one or more sensors of the apparatus;
- receiving a selection of data for translation in the field of view of the wearer by reference to the gaze of the wearer at one of the objects;
- analyzing the data for translation to provide input data;
- translating the input data into a translated form for the wearer; and
- rendering the translation in an audio or visual format in the see-through head mounted display.
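The claimed method is, in effect, a pipeline: select an object by gaze, take its associated expression as input data, translate it, and render the result. A minimal sketch of that pipeline follows; all types, names, and the distance-based gaze heuristic are illustrative assumptions, since the claim does not specify any concrete data structures or selection algorithm.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the apparatus' sensor output; the patent
# does not define a concrete object representation.
@dataclass
class WorldObject:
    label: str
    position: tuple  # (x, y, z) in the display's coordinate frame
    text: str        # real world expression attached to the object (e.g. a sign)

def select_by_gaze(gaze_point, objects):
    """Pick the object nearest the wearer's gaze point (crude illustrative heuristic)."""
    def distance_sq(obj):
        gx, gy, gz = gaze_point
        ox, oy, oz = obj.position
        return (gx - ox) ** 2 + (gy - oy) ** 2 + (gz - oz) ** 2
    return min(objects, key=distance_sq)

def translate(text, lexicon):
    """Stand-in linguistic translation: word-by-word lexicon lookup."""
    return " ".join(lexicon.get(word, word) for word in text.split())

def present_translation(gaze_point, objects, lexicon):
    # 1. receive a selection of data for translation by reference to the gaze
    target = select_by_gaze(gaze_point, objects)
    # 2. analyze/translate the input data
    rendered = translate(target.text, lexicon)
    # 3. "render" the translation (returned here; the apparatus would draw
    #    it in the see-through display or play it as audio)
    return target.label, rendered
```

For example, with a gaze resting on a sign reading "hola mundo" and a two-entry Spanish-to-English lexicon, the pipeline returns the sign and the string "hello world".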
2 Assignments
0 Petitions
Abstract
A see-through, near-eye, mixed reality display apparatus for providing translations of real world data for a user. A wearer's location and orientation with the apparatus are determined, and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature, and is selected by reference to the gaze of the wearer. The input data is translated for the user using user profile information bearing on the accuracy of a translation, and by determining from the input data whether a linguistic translation, knowledge addition translation, or context translation is useful.
20 Claims
1. A method for presenting a translation of a real world expression to a wearer of a see-through head mounted display apparatus, comprising:
- determining a gaze of a wearer looking through the see-through display of the apparatus;
- determining a three dimensional location of one or more objects in the field of view of the wearer through the see-through display, the determining of the three dimensional location being performed using one or more sensors of the apparatus;
- receiving a selection of data for translation in the field of view of the wearer by reference to the gaze of the wearer at one of the objects;
- analyzing the data for translation to provide input data;
- translating the input data into a translated form for the wearer; and
- rendering the translation in an audio or visual format in the see-through head mounted display.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
10. A see-through head mounted display apparatus presenting translations of input data to a wearer, comprising:
- a see-through, near-eye, augmented reality display that is worn by a wearer, the apparatus including one or more input sensors;
- one or more processing devices in communication with the apparatus, the one or more processing devices automatically determining that the wearer is at a location;
- the one or more processing devices accessing a wearer profile for the wearer and identifying one or more characteristics of the wearer that match characteristics affecting translation of input data;
- the one or more processing devices determining input data from real world objects and persons in the field of view of the wearer and providing a translation of the input data by performing a contextual data, knowledge augmentation, or linguistic translation on the input data;
- the translation being based on known characteristics of the wearer used to improve the translation, the translation rendered by the one or more processing devices in the see-through, near-eye display apparatus.
- View Dependent Claims (11, 12, 13, 14, 15, 16)
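Claim 10's profile step can be read as matching wearer characteristics against the set of characteristics known to affect each translation type. A minimal sketch of that matching follows; the profile fields and the characteristic-to-translation-type mapping are assumptions for illustration, as the claim names neither.

```python
# Hypothetical wearer profile; the claim only requires characteristics
# "affecting translation of input data", so these fields are assumed.
WEARER_PROFILE = {
    "native_language": "en",
    "known_languages": {"en", "fr"},
    "expertise": {"cooking"},
}

# Illustrative mapping of which characteristics bear on each of the three
# claimed translation types (contextual, knowledge augmentation, linguistic).
AFFECTING = {
    "linguistic": {"native_language", "known_languages"},
    "knowledge_augmentation": {"expertise"},
    "contextual": {"native_language"},
}

def matching_characteristics(profile, translation_type):
    """Identify the profile characteristics that affect a given translation type."""
    relevant = AFFECTING.get(translation_type, set())
    return {key: value for key, value in profile.items() if key in relevant}
```

A linguistic translation would then consult the wearer's native and known languages, while a knowledge augmentation translation would consult the wearer's areas of expertise.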
17. A method for presenting a translation of information at a location to a wearer of a see-through head mounted display apparatus, the apparatus including one or more sensors and a see-through display, comprising:
- receiving from the wearer a selection of input data for translation at the location of the wearer by reference to the gaze of the wearer, including determining a gaze of the wearer looking through the see-through display;
- determining three dimensional locations of objects within a field of view of the wearer at the location;
- translating the input data into a translated form for the wearer, the translating including retrieving user profile information bearing on the accuracy of a translation;
- determining from the input data whether a linguistic translation, knowledge addition translation, or context translation is useful;
- performing one or more of a linguistic translation, knowledge addition translation, or context translation on the input data; and
- rendering the translation in an audio or visual format in the see-through head mounted display.
- View Dependent Claims (18, 19, 20)
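Claim 17's determining step decides, from the input data itself, which of the three translation types is useful. A minimal sketch of one such decision rule follows; the language check, topic-overlap heuristic, and context fallback are all assumptions for illustration, since the claim does not specify how usefulness is determined.

```python
def useful_translations(input_text, wearer_languages, detected_language, known_topics):
    """Decide which of the three claimed translation types apply (illustrative rules)."""
    kinds = []
    # A linguistic translation is useful when the input is not in a
    # language the wearer reads (assumed rule).
    if detected_language not in wearer_languages:
        kinds.append("linguistic")
    # A knowledge addition translation is useful when the input mentions
    # no topic the wearer already knows (assumed heuristic).
    words = set(input_text.lower().split())
    if not words & known_topics:
        kinds.append("knowledge_addition")
    # Fall back to a context translation so something is always rendered.
    if not kinds:
        kinds.append("context")
    return kinds
```

For example, Spanish text shown to an English-only wearer triggers a linguistic translation, while English text on an unfamiliar topic triggers a knowledge addition translation.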
Specification