EYE-WEARABLE DEVICE USER INTERFACE AND AUGMENTED REALITY METHOD
Abstract
A software controlled user interface and method for a head-mountable device equipped with at least one display or connectivity to at least one touchpad. The method can scale from displaying a small number to a large number of different eye gaze target symbols at any given time, yet still transmit a large array of different symbols to outside devices. At least part of the method may be implemented by way of a virtual window onto the surface of a virtual cylinder, with touchpad or eye gaze sensitive symbols that can be rotated by touchpad touch or eye gaze, thus bringing various groups of symbols into view to be selected. Specific examples of the use of this interface and method on a head-mountable device are disclosed, along with various operation examples, including transmitting data such as text, functionality in virtual or augmented reality, control of robotic devices, and control of remote vehicles.
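The "virtual window onto the surface of a virtual cylinder" described in the abstract can be illustrated with a minimal Python sketch. This is not the patent's implementation; the class and names are hypothetical, and the "cylinder" is modeled simply as a ring of symbols over which a fixed-size window slides as the user swipes or gazes:

```python
# Illustrative sketch only: a ring ("virtual cylinder") of symbol groups,
# viewed through a fixed-size window that rotation brings into view.

class SymbolCylinder:
    def __init__(self, symbols, window_size):
        self.symbols = symbols          # full array of selectable symbols
        self.window_size = window_size  # how many symbols are visible at once
        self.offset = 0                 # current rotation of the cylinder

    def rotate(self, steps):
        """Rotate the cylinder by `steps` symbols (e.g. per touchpad swipe)."""
        self.offset = (self.offset + steps) % len(self.symbols)

    def visible(self):
        """Return the symbols currently inside the virtual window."""
        return [self.symbols[(self.offset + i) % len(self.symbols)]
                for i in range(self.window_size)]

cyl = SymbolCylinder(list("ABCDEFGHIJ"), window_size=4)
cyl.rotate(3)
print(cyl.visible())  # ['D', 'E', 'F', 'G']
```

The modular arithmetic is what makes the display "scale": only `window_size` targets are shown at once, yet rotation reaches every symbol in the ring.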
23 Claims
1. A method of controlling a head-mountable device for transmitting and receiving information from a human user with at least one eye, said device having a virtual display and connectivity to at least one user input device, wherein said at least one user input device comprises any of a touchpad or at least one eye position tracking sensor;
said device comprising a frame configured to attach to the head of a user;
said method comprising:
using at least one processor to display a plurality of visual targets on said virtual display, said plurality of visual targets each comprising a visual element area embedded within a visual element position zone with an area that is equal to or larger than said visual element, each visual element area and visual element position zone mapping to a touch sensitive touch element on a touchpad position zone;
using said at least one user input device to register that a virtual key corresponding to said at least one of said plurality of visual targets has been selected by said user, thereby creating a selected virtual key;
using at least one user input device to register that said selected virtual key has been accepted by said user, thereby creating virtual key activation;
using said virtual key activation to control either transmitting or receiving said information.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
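The two-step select-then-accept flow recited in claim 1 can be sketched as a small state machine. This is an illustration only, not the patented implementation; the class and method names are hypothetical:

```python
# Hypothetical sketch of claim 1's two-step flow: a first input selects a
# virtual key; a second input accepts it, producing a "virtual key
# activation" that then drives transmitting or receiving information.

class VirtualKeypad:
    def __init__(self, keys):
        self.keys = keys        # the set of valid virtual keys
        self.selected = None    # the "selected virtual key", if any

    def select(self, key):
        """First user input: mark a virtual key as selected."""
        if key in self.keys:
            self.selected = key
        return self.selected

    def accept(self):
        """Second user input: accept the selection -> virtual key activation."""
        if self.selected is None:
            return None
        activated, self.selected = self.selected, None
        return activated

pad = VirtualKeypad({"A", "B", "C"})
pad.select("B")
print(pad.accept())  # B
```

Splitting selection from acceptance is what lets a noisy input channel (gaze or touch) hover over candidates without committing; only the explicit accept step triggers an activation.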
12. A method of controlling a head-mountable device for transmitting and receiving information from a human user with at least one eye, said device having a virtual display and connectivity to at least one user input device, wherein said at least one user input device comprises both a touchpad and at least one eye position tracking sensor;
said device comprising a frame configured to attach to the head of a user;
said method comprising:
using at least one processor to display a plurality of visual targets on said virtual display, said plurality of visual targets each comprising a visual element area embedded within a visual element position zone with an area that is equal to or larger than said visual element, each visual element area and visual element position zone mapping to a touch sensitive touch element on a touchpad position zone;
further using said at least one eye position tracking sensor to determine when said at least one eye is on average gazing in a direction that is within said visual element position zone of at least one of said visual elements, thereby selecting at least one of said visual elements;
further signaling said selection, prior to using said at least one user input device to register that a virtual key corresponding to said at least one of said plurality of visual targets has been activated by said user;
using said at least one user input device to register that a virtual key corresponding to said at least one of said plurality of visual targets has been selected by said user, thereby creating a selected virtual key;
using at least one user input device to register that said selected virtual key has been accepted by said user, thereby creating virtual key activation; and
using said virtual key activation to control either transmitting or receiving said information.
(Dependent claims: 13, 14, 15, 16, 17)
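Claim 12's gaze step, determining when the eye is "on average gazing" within a visual element position zone, amounts to averaging recent gaze samples and hit-testing the average against the zones. The following is a minimal sketch under that reading, not the patent's implementation; zone names and the rectangular zone shape are assumptions:

```python
# Hypothetical sketch of claim 12's gaze selection: average recent gaze
# samples and select the element whose position zone contains the average.

def mean(points):
    """Componentwise mean of a list of (x, y) points."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def gaze_select(samples, zones):
    """samples: recent (x, y) gaze points; zones: name -> (x0, y0, x1, y1)."""
    gx, gy = mean(samples)
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return name          # element whose zone holds the average gaze
    return None                  # average gaze fell outside every zone

zones = {"K1": (0, 0, 50, 50), "K2": (50, 0, 100, 50)}
print(gaze_select([(60, 10), (70, 20), (65, 15)], zones))  # K2
```

Averaging over a window of samples is one plausible way to tolerate the jitter of raw eye-tracking data before signaling the selection back to the user.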
18. A head-mountable device for transmitting and receiving information from a human user with at least one eye, said device having a virtual display, at least one processor, and connectivity to at least one touchpad;
said device further comprising a frame configured to attach to the head of a user;
wherein said connectivity to at least one touchpad is achieved by either a touchpad that is physically attached to said frame, thereby forming a head mounted touchpad, or wherein connectivity to at least one touchpad comprises a wireless or wired connection to at least one touchpad not physically attached to said frame;
said device further comprising:
at least one processor configured to display a plurality of visual targets on said virtual display, said plurality of visual targets each comprising a visual element area embedded within a visual element position zone with an area that is equal to or larger than said visual element, each visual element area and visual element position zone mapping to a touch sensitive touch element on a touchpad position zone;
said at least one processor configured to accept input from at least one touchpad to determine when at least one user touch is on average touching an area that selects a visual element position zone of at least one of said visual elements;
said at least one processor further configured to accept input from said at least one touchpad to register that a virtual key corresponding to said at least one of said plurality of visual targets has been activated by said user;
said at least one processor further configured to use said virtual key activation to control either transmitting or receiving said information.
(Dependent claims: 19, 20, 21, 22, 23)
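The key geometric idea in claim 18, a visual element area embedded in a position zone of equal or larger area, means a touch need only land in the zone, not on the glyph itself, to select the element. A minimal illustrative sketch, with circular zones and hypothetical names (not the patented implementation):

```python
# Hypothetical sketch of claim 18's touch mapping: each visual element
# (glyph) sits inside a larger "position zone"; the averaged touch point
# is tested against the zone, not the glyph, when selecting.

def make_target(cx, cy, glyph_r, zone_r):
    assert zone_r >= glyph_r     # zone area >= element area, per the claim
    return {"center": (cx, cy), "glyph_r": glyph_r, "zone_r": zone_r}

def touch_select(touches, targets):
    """Average the touch samples and select the target whose zone is hit."""
    tx = sum(x for x, _ in touches) / len(touches)
    ty = sum(y for _, y in touches) / len(touches)
    for name, t in targets.items():
        cx, cy = t["center"]
        if (tx - cx) ** 2 + (ty - cy) ** 2 <= t["zone_r"] ** 2:
            return name
    return None

targets = {"play": make_target(20, 20, 5, 12)}
print(touch_select([(28, 24), (26, 22)], targets))  # play
```

In this example the averaged touch at (27, 23) misses the 5-unit glyph but falls inside the 12-unit zone, so "play" is still selected, which is precisely the tolerance the oversized position zone provides.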
Specification