HEAD-MOUNTED INTEGRATED INTERFACE
First Claim
1. A user-wearable apparatus comprising:
at least one display disposed in front of an eye of a user and configured to present a computer-generated image to the user;
at least two cameras, each camera being associated with a respective field-of-view, wherein the respective fields-of-view overlap to allow a gesture of the user to be captured;
a gesture input module coupled to the at least two cameras and configured to receive visual data from the at least two cameras and identify the user gesture within the visual data, wherein the identified gesture is used to affect the computer-generated image presented to the user.
Abstract
A head-mounted integrated interface (HMII) is presented that may include a wearable head-mounted display unit supporting two compact high-resolution screens for outputting right-eye and left-eye images in support of stereoscopic viewing, wireless communication circuits, three-dimensional positioning and motion sensors, and a processing system capable of independent software processing and/or processing streamed output from a remote server. The HMII may also include a graphics processing unit capable of also functioning as a general parallel processing system, and cameras positioned to track hand gestures. The HMII may function as an independent computing system or as an interface to remote computer systems, external GPU clusters, or subscription computational services. The HMII is also capable of linking and streaming to a remote display such as a large-screen monitor.
20 Claims
1. A user-wearable apparatus comprising:
at least one display disposed in front of an eye of a user and configured to present a computer-generated image to the user;
at least two cameras, each camera being associated with a respective field-of-view, wherein the respective fields-of-view overlap to allow a gesture of the user to be captured;
a gesture input module coupled to the at least two cameras and configured to receive visual data from the at least two cameras and identify the user gesture within the visual data, wherein the identified gesture is used to affect the computer-generated image presented to the user.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
10. A method for providing an image to a user, the method comprising:
capturing visual data using at least two cameras disposed on a user-wearable apparatus, each camera being associated with a respective field-of-view, wherein the respective fields-of-view overlap to allow a gesture of the user to be captured;
identifying, using the user-wearable apparatus, the user gesture within the visual data; and
generating the image displayed to the user based on the identified gesture, wherein the image is presented on a display integrated into the user-wearable apparatus.
Dependent claims: 11, 12, 13, 14, 15, 16.
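The method of claim 10 amounts to a three-step pipeline: fuse visual data from the two overlapping fields-of-view, locate the hand within the fused data, and classify the gesture from its motion. The sketch below is purely illustrative; the function names, the averaging fusion, and the swipe heuristic are assumptions, not taken from the patent.

```python
import numpy as np

def identify_gesture(frames_cam1, frames_cam2, threshold=30):
    """Illustrative sketch of the claim-10 steps: fuse two overlapping
    camera views, track the bright (hand) region per frame, and classify
    a horizontal swipe from the motion of its centroid."""
    centroids = []
    for f1, f2 in zip(frames_cam1, frames_cam2):
        # Fuse the overlapping fields-of-view by averaging the two views.
        fused = (f1.astype(np.int32) + f2.astype(np.int32)) // 2
        ys, xs = np.nonzero(fused > threshold)  # pixels belonging to the hand
        if xs.size:
            centroids.append(xs.mean())
    if len(centroids) < 2:
        return None
    dx = centroids[-1] - centroids[0]  # net horizontal motion in pixels
    if dx > 5:
        return "swipe_right"
    if dx < -5:
        return "swipe_left"
    return "hold"

# Usage: a synthetic bright hand-blob moving left to right across the frames.
frames = []
for x in (10, 30, 50):
    f = np.zeros((64, 64), dtype=np.uint8)
    f[:, x:x + 4] = 255
    frames.append(f)
print(identify_gesture(frames, frames))  # → swipe_right
```

A real implementation would segment the hand with a trained model rather than a brightness threshold, but the structure (capture, identify, act on the identified gesture) mirrors the claim.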
17. A system comprising:
an apparatus configured to be wearable on a head of a user, the apparatus comprising:
at least one display disposed in front of an eye of the user and configured to present a computer-generated image to the user, and
a wireless network adapter; and
a remote computing system configured to communicate with the wireless network adapter, the remote computing system comprising a graphics processing unit (GPU) cluster configured to generate at least a portion of the computer-generated image presented on the display of the apparatus.
Dependent claims: 18, 19, 20.
Specification