DEPTH-BASED USER INTERFACE GESTURE CONTROL
Abstract
Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
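The primary/secondary virtual touch plane behavior described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the patent's implementation: the 0.2 m plane boundary, the function names, and the command strings are all assumptions made for the example.

```python
def classify_touch_plane(depth_m: float, boundary_m: float = 0.2) -> str:
    """Assign a gesture to the primary or secondary virtual touch plane
    based on its distance from the display surface (illustrative boundary)."""
    return "primary" if depth_m <= boundary_m else "secondary"

def handle_gesture(gesture: str, depth_m: float) -> str:
    """Execute a plane-dependent user interface command."""
    if classify_touch_plane(depth_m) == "secondary":
        # Secondary-plane gestures enable alternate behavior, e.g.
        # magnifying or selecting user interface elements.
        return f"magnify:{gesture}"
    return f"activate:{gesture}"

print(handle_gesture("tap", 0.10))  # activate:tap
print(handle_gesture("tap", 0.35))  # magnify:tap
```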
45 Claims
1-25. (canceled)
26. A computing device for depth-based gesture control, the computing device comprising:
a display to define a surface normal;
a depth sensor to generate depth sensor data indicative of a depth relative to the display of an input gesture performed by a user of the computing device in front of the display;
a gesture recognition module to recognize the input gesture;
a depth recognition module to:
    receive the depth sensor data from the depth sensor;
    determine the depth of the input gesture as a function of the depth sensor data; and
    assign a depth plane to the input gesture as a function of the depth of the input gesture, wherein each depth plane is positioned parallel to the display and intersects the surface normal; and
a user command module to execute a user interface command based on the input gesture and the assigned depth plane.

Dependent claims: 27, 28, 29, 30, 31, 32, 33, 34, 35
36. A method for depth-based gesture control, the method comprising:
recognizing, on a computing device, an input gesture performed by a user of the computing device in front of a display of the computing device;
receiving, on the computing device, depth sensor data indicative of a depth relative to the display of the input gesture from a depth sensor of the computing device;
determining, on the computing device, the depth of the input gesture as a function of the depth sensor data;
assigning, on the computing device, a depth plane to the input gesture as a function of the depth of the input gesture, wherein each depth plane is parallel to the display and intersects a surface normal of the display; and
executing, on the computing device, a user interface command based on the input gesture and the assigned depth plane.

Dependent claims: 37, 38, 39
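The steps of the method claim can be sketched as a short pipeline: estimate a gesture's depth from raw depth-sensor samples, assign it to one of several depth planes along the display's surface normal, and execute a command keyed by (gesture, plane). The plane boundaries, plane names, median-based depth estimate, and command table below are illustrative assumptions, not taken from the claims.

```python
from statistics import median
from bisect import bisect_right

# Depth-plane boundaries along the display's surface normal, in meters
# (two boundaries yield three planes); values are illustrative.
PLANE_BOUNDARIES = [0.15, 0.30]
PLANE_NAMES = ["near", "middle", "far"]

def determine_depth(samples: list[float]) -> float:
    """Determine the gesture's depth as a function of depth sensor data."""
    return median(samples)

def assign_depth_plane(depth_m: float) -> str:
    """Assign a depth plane as a function of the gesture's depth."""
    return PLANE_NAMES[bisect_right(PLANE_BOUNDARIES, depth_m)]

# Hypothetical command table keyed by (gesture, depth plane).
COMMANDS = {
    ("swipe", "near"): "scroll",
    ("swipe", "far"): "switch-app",
}

def execute_command(gesture: str, samples: list[float]) -> str:
    """Execute a user interface command based on gesture and depth plane."""
    plane = assign_depth_plane(determine_depth(samples))
    return COMMANDS.get((gesture, plane), "ignore")

print(execute_command("swipe", [0.05, 0.06, 0.05]))  # scroll
print(execute_command("swipe", [0.40, 0.42, 0.41]))  # switch-app
```

Keying the dispatch on the (gesture, plane) pair mirrors how the claim ties the executed command to both the recognized gesture and its assigned depth plane.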
40. One or more machine readable storage media comprising a plurality of instructions that in response to being executed cause a computing device to:
recognize an input gesture performed by a user of the computing device in front of a display of the computing device;
receive depth sensor data indicative of a depth relative to the display of the input gesture from a depth sensor of the computing device;
determine the depth of the input gesture as a function of the depth sensor data;
assign a depth plane to the input gesture as a function of the depth of the input gesture, wherein each depth plane is parallel to the display and intersects a surface normal of the display; and
execute a user interface command based on the input gesture and the assigned depth plane.

Dependent claims: 41, 42, 43, 44, 45
Specification