TWO-HAND INTERACTION WITH NATURAL USER INTERFACE
First Claim
1. On a computing device, a method comprising:
detecting via image data received by the computing device a context-setting input performed by a first hand of a user;
sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual coordinate system being positioned based upon a position of the first hand of the user;
detecting via image data received by the computing device an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input; and
sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
Abstract
Two-handed interactions with a natural user interface are disclosed. For example, one embodiment provides a method comprising detecting via image data received by the computing device a context-setting input performed by a first hand of a user, and sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual coordinate system being positioned based upon a position of the first hand of the user. The method further includes detecting via image data received by the computing device an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input, and sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
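The claimed flow can be illustrated with a minimal sketch. This is not the patent's implementation; the `Hand` type, gesture labels, and `process_frame` helper are hypothetical stand-ins for whatever hand-tracking pipeline supplies per-frame observations. It shows only the claimed relationship: the first hand's context-setting input anchors a virtual interaction coordinate system, and the second hand's action input is interpreted within that system.

```python
from dataclasses import dataclass

@dataclass
class Hand:
    """Hypothetical per-frame hand observation derived from image data."""
    position: tuple  # (x, y, z) in camera space
    gesture: str     # e.g. "context_pose", "point", "none"

def process_frame(first: Hand, second: Hand):
    """Sketch of the claimed two-hand flow: the first hand sets context and
    anchors a virtual interaction coordinate system; the second hand's action
    input is expressed within that anchored system."""
    if first.gesture != "context_pose":
        return None  # no context-setting input -> no UI, no response
    # Virtual interaction coordinate system anchored at the first hand.
    origin = first.position
    # Express the second hand's action input in the anchored system.
    local = tuple(s - o for s, o in zip(second.position, origin))
    # The response depends on both the context-setting input and the
    # interaction between the action input and the coordinate system.
    return {"ui_origin": origin, "action_local": local, "action": second.gesture}
```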
20 Claims
1. On a computing device, a method comprising:
detecting via image data received by the computing device a context-setting input performed by a first hand of a user;
sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual coordinate system being positioned based upon a position of the first hand of the user;
detecting via image data received by the computing device an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input; and
sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
Dependent Claims: 2, 3, 4, 5, 6, 7, 8
9. A near-eye display system, comprising:
a see-through near-eye display;
one or more image sensors;
a logic machine; and
a storage machine including instructions executable by the logic machine to:
receive image information from the one or more image sensors;
detect a context-setting input performed by a first hand of a user based on the image information received;
in response to detecting the context-setting input, define a virtual interaction coordinate system comprising a spatial region positioned based on a position of the first hand;
send to the see-through near-eye display an augmented reality user interface comprising one or more user interface elements positioned based upon the position of the first hand;
detect an action input performed by a second hand of the user based on the image information received, the action input performed while the first hand of the user is performing the context-setting input; and
send to the see-through near-eye display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
Dependent Claims: 10, 11, 12, 13
14. A near-eye display system, comprising:
a see-through near-eye display;
one or more image sensors;
a logic machine; and
a storage machine including instructions executable by the logic machine to:
detect a context-setting input triggering a cursor control mode, the context-setting input performed by a first hand of a user;
in response to detecting the context-setting input, define a virtual interaction coordinate system comprising a spatial region positioned based on a position of the first hand;
send to the see-through near-eye display an augmented reality user interface comprising one or more elements selectable by a cursor, the augmented reality user interface positioned based on the position of the first hand;
detect a cursor-initiating input comprising a movement of a second hand of the user across a plane within the virtual interaction coordinate system; and
in response to detecting the cursor-initiating input, send to the see-through near-eye display the cursor.
Dependent Claims: 15, 16, 17, 18, 19, 20
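Claim 14's cursor-initiating input is a movement of the second hand across a plane within the virtual interaction coordinate system. One way to sketch that detection, under the assumption that the plane is fixed at a constant depth in hand-anchored coordinates (the `plane_z` value and `crossed_plane` helper are illustrative, not taken from the patent):

```python
def crossed_plane(prev_z: float, curr_z: float, plane_z: float = 0.0) -> bool:
    """Hypothetical check for a cursor-initiating input: returns True when
    the second hand's depth coordinate moves from one side of the plane
    z = plane_z (in the hand-anchored coordinate system) to the other."""
    # A sign change of the signed distance to the plane means a crossing.
    return (prev_z - plane_z) * (curr_z - plane_z) < 0
```

On a crossing, the system would then send the cursor to the see-through near-eye display, as recited in the claim.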
Specification