Two-hand interaction with natural user interface
First Claim
1. On a computing device, a method comprising:
detecting via image data received by the computing device a first posture of a context-setting input performed by a first hand of a user to trigger execution of a drawing program;
in response to detecting the first posture of the context-setting input, sending to a display a holographic stereoscopic user interface for the drawing program positioned based on a virtual interaction coordinate system, the virtual interaction coordinate system comprising a reference plane positioned based upon a position of the first hand of the user;
detecting via image data received by the computing device a change in the first posture to a second posture of the context-setting input performed by the first hand, and in response, triggering a sub-context for the drawing program to detect drawing input;
detecting via image data received by the computing device a drawing input performed by a second hand of the user relative to the holographic stereoscopic user interface and relative to the reference plane within the virtual interaction coordinate system, the drawing input performed while the first hand of the user is performing the second posture of the context-setting input; and
displaying a drawing in the holographic stereoscopic user interface based on the drawing input relative to the reference plane within the virtual interaction coordinate system.
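The posture transitions recited in claim 1 (first posture triggers the drawing program; a change to a second posture opens a drawing sub-context) can be sketched as a small input state machine. The following is a minimal illustration, not from the patent; the posture labels "open" and "pinch" are hypothetical stand-ins for whatever an image-based posture classifier reports for the first hand:

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()     # no context-setting input detected
    CONTEXT = auto()  # first posture: drawing program triggered
    DRAWING = auto()  # second posture: sub-context accepting drawing input

def step(mode: Mode, first_hand_posture: str) -> Mode:
    """Advance the interaction mode from the first hand's posture."""
    if mode is Mode.IDLE and first_hand_posture == "open":
        return Mode.CONTEXT   # first posture triggers the drawing program
    if mode is Mode.CONTEXT and first_hand_posture == "pinch":
        return Mode.DRAWING   # change to second posture opens the sub-context
    if mode is Mode.DRAWING and first_hand_posture == "open":
        return Mode.CONTEXT   # reverting to first posture ends drawing input
    return mode

mode = Mode.IDLE
for posture in ["open", "pinch", "pinch", "open"]:
    mode = step(mode, posture)
print(mode)  # Mode.CONTEXT
```

While in `Mode.DRAWING`, the system would route the second hand's motion to the drawing program; in the other modes that motion is ignored.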
Abstract
Two-handed interactions with a natural user interface are disclosed. For example, one embodiment provides a method comprising detecting via image data received by a computing device a context-setting input performed by a first hand of a user, and sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual interaction coordinate system being positioned based upon a position of the first hand of the user. The method further includes detecting via image data received by the computing device an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input, and sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
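The virtual interaction coordinate system anchors a reference plane to the first hand, and the second hand's input is interpreted relative to that plane. One concrete way to realize this (a sketch under assumed conventions, not the patent's implementation) is to project the second hand's tracked point onto the plane defined by the first hand's position and a plane normal; all vectors below are hypothetical (x, y, z) tuples:

```python
def project_to_plane(point, plane_origin, plane_normal):
    """Project a 3-D point onto the reference plane.

    plane_origin: first-hand position anchoring the virtual interaction
    coordinate system; plane_normal: unit normal of the reference plane.
    """
    # Signed distance from the point to the plane along the normal.
    d = sum((p - o) * n for p, o, n in zip(point, plane_origin, plane_normal))
    # Subtract the out-of-plane component to land on the plane.
    return tuple(p - d * n for p, n in zip(point, plane_normal))

# Second-hand fingertip at (1, 2, 5), plane through the origin facing +z:
print(project_to_plane((1, 2, 5), (0, 0, 0), (0, 0, 1)))  # (1, 2, 0)
```

The projected point gives the drawing position on the plane; because the plane is defined from the first hand, moving that hand repositions the whole interaction space.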
19 Claims
1. On a computing device, a method comprising:

detecting via image data received by the computing device a first posture of a context-setting input performed by a first hand of a user to trigger execution of a drawing program;
in response to detecting the first posture of the context-setting input, sending to a display a holographic stereoscopic user interface for the drawing program positioned based on a virtual interaction coordinate system, the virtual interaction coordinate system comprising a reference plane positioned based upon a position of the first hand of the user;
detecting via image data received by the computing device a change in the first posture to a second posture of the context-setting input performed by the first hand, and in response, triggering a sub-context for the drawing program to detect drawing input;
detecting via image data received by the computing device a drawing input performed by a second hand of the user relative to the holographic stereoscopic user interface and relative to the reference plane within the virtual interaction coordinate system, the drawing input performed while the first hand of the user is performing the second posture of the context-setting input; and
displaying a drawing in the holographic stereoscopic user interface based on the drawing input relative to the reference plane within the virtual interaction coordinate system.

Dependent claims: 2–8.

9. A near-eye display system, comprising:
a see-through near-eye display;
one or more image sensors;
a logic machine; and
a storage machine including instructions executable by the logic machine to:
receive image information from the one or more image sensors;
detect a first posture of a context-setting input performed by a first hand of a user based on the image information received to trigger execution of a drawing program;
in response to detecting the first posture of the context-setting input, define a virtual interaction coordinate system comprising a spatial region including a reference plane positioned based on a position of the first hand;
send to the see-through near-eye display an augmented reality user interface comprising one or more user interface elements positioned based upon the position of the first hand;
detect a second posture of the context-setting input performed by the first hand based on the image information received, and in response, trigger a sub-context for the drawing program to detect drawing input;
detect a drawing input performed by a second hand of the user based on the image information received, the drawing input performed at a location relative to the reference plane within the virtual interaction coordinate system defined by a location of the first hand, and while the first hand of the user is performing the second posture of the context-setting input; and
display via the see-through near-eye display a drawing based on the drawing input relative to the reference plane within the virtual interaction coordinate system.

Dependent claims: 10–13.

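Claim 9 ties drawing input to a location relative to the reference plane. A system could gate drawing on the second hand's distance to that plane, so that only motion close to the plane counts as a stroke. A minimal sketch, with a hypothetical `threshold` (in arbitrary scene units) and hypothetical tuple vectors:

```python
def signed_distance(point, plane_origin, plane_normal):
    """Signed distance from a 3-D point to the first-hand reference plane
    (plane_normal is assumed to be a unit vector)."""
    return sum((p - o) * n for p, o, n in zip(point, plane_origin, plane_normal))

def is_drawing_contact(fingertip, plane_origin, plane_normal, threshold=0.02):
    """True when the second hand's fingertip is within `threshold` of the
    reference plane -- one plausible way to decide that its movement
    should be detected as drawing input."""
    return abs(signed_distance(fingertip, plane_origin, plane_normal)) <= threshold

# Fingertip hovering 0.01 units above a plane through the origin facing +z:
print(is_drawing_contact((0.0, 0.0, 0.01), (0, 0, 0), (0, 0, 1)))  # True
```

Combining this gate with the posture sub-context means drawing occurs only while the first hand holds the second posture and the second hand is near the plane.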
14. A near-eye display system, comprising:
a see-through near-eye display;
one or more image sensors;
a logic machine; and
a storage machine including instructions executable by the logic machine to:
detect a first posture of a context-setting input triggering a drawing program, the first posture of the context-setting input performed by a first hand of a user;
in response to detecting the first posture of the context-setting input, define a virtual interaction coordinate system comprising a spatial region including a reference plane positioned based on a position of the first hand;
send to the see-through near-eye display a holographic stereoscopic augmented reality user interface positioned based on the position of the first hand;
detect a second posture of the context-setting input performed by the first hand, and in response, trigger a sub-context for the drawing program to detect drawing input;
detect a drawing input comprising a movement of a second hand of the user relative to the reference plane within the virtual interaction coordinate system while the first hand is performing the second posture of the context-setting input;
in response to detecting the drawing input, display via the see-through near-eye display a drawing based on movement of the second hand; and
detect a change in the second posture to the first posture of the context-setting input performed by the first hand, and in response, trigger an end of the drawing input by the second hand.

Dependent claims: 15–19.

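Claim 14 adds the closing transition: the drawing input ends when the first hand reverts from the second posture to the first. The stroke lifecycle this implies can be sketched as a recorder that accumulates second-hand positions while the gating posture is held and finalizes the stroke when it is released. Posture labels and point tuples below are illustrative placeholders, not from the patent:

```python
class StrokeRecorder:
    """Collects second-hand positions into strokes, gated by the first
    hand's posture: points accumulate while the first hand holds the
    second posture ("pinch") and the stroke ends when it reverts."""

    def __init__(self):
        self.strokes = []     # finished strokes (lists of points)
        self._current = None  # stroke in progress, if any

    def update(self, first_hand_posture, second_hand_point):
        if first_hand_posture == "pinch":       # second posture: drawing active
            if self._current is None:
                self._current = []              # start a new stroke
            self._current.append(second_hand_point)
        elif self._current is not None:         # reverted to first posture
            self.strokes.append(self._current)  # trigger end of drawing input
            self._current = None

rec = StrokeRecorder()
for posture, point in [("open", (9, 9)), ("pinch", (0, 0)),
                       ("pinch", (1, 0)), ("open", (2, 0))]:
    rec.update(posture, point)
print(rec.strokes)  # [[(0, 0), (1, 0)]]
```

Note that the point seen while the posture reverts is discarded, so releasing the posture cleanly terminates the stroke rather than extending it.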