Depth-based user interface gesture control
Abstract
Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth relative to the display of the input gesture based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
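The abstract's central step, assigning a depth plane to a gesture as a function of its measured depth, can be sketched as follows. This is a minimal illustration, not the patented implementation: the plane names and the concrete depth intervals are assumptions, since the claims define planes only as parallel to the display and intersecting its surface normal.

```python
from typing import Optional

# Hypothetical depth-plane boundaries in meters from the display surface.
# The patent fixes no concrete distances; these intervals are illustrative.
DEPTH_PLANES = [
    ("primary_touch_plane", 0.00, 0.15),    # nearest the display
    ("secondary_touch_plane", 0.15, 0.40),
    ("far_plane", 0.40, 1.00),
]

def assign_depth_plane(depth_m: float) -> Optional[str]:
    """Assign a depth plane to a gesture as a function of its measured depth.

    Each plane is modeled as a half-open [near, far) interval along the
    display's surface normal.
    """
    for name, near, far in DEPTH_PLANES:
        if near <= depth_m < far:
            return name
    return None  # gesture lies outside every recognized plane

print(assign_depth_plane(0.10))  # primary_touch_plane
print(assign_depth_plane(0.25))  # secondary_touch_plane
print(assign_depth_plane(2.00))  # None
```

A real depth sensor would report a depth image rather than a single scalar; reducing the gesture to one representative depth value (e.g., the hand centroid) is a further assumption of this sketch.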
18 Claims
1. A computing device for depth-based gesture control, the computing device comprising:
a display to define a surface normal;
a depth sensor to:
generate depth sensor data indicative of a depth relative to the display of an input gesture performed by a user of the computing device in front of the display;
generate second depth sensor data indicative of a second depth relative to the display of a second input gesture performed by a second user of the computing device in front of the display; and
generate third depth sensor data indicative of a third depth relative to the display of a third input gesture performed by the second user of the computing device in front of the display;
a gesture recognition module to recognize the input gesture, the second input gesture, and the third input gesture;
a depth recognition module to:
receive the depth sensor data, the second depth sensor data, and the third depth sensor data from the depth sensor;
determine the depth of the input gesture as a function of the depth sensor data;
determine the second depth of the second input gesture as a function of the second depth sensor data;
determine the third depth of the third input gesture as a function of the third depth sensor data;
assign a depth plane to the input gesture as a function of the depth of the input gesture;
assign a second depth plane different from the depth plane to the second input gesture as a function of the second depth of the second input gesture; and
assign a third depth plane different from the second depth plane to the third input gesture as a function of the third depth of the third input gesture;
wherein each depth plane is positioned parallel to the display and intersects the surface normal; and
a user command module to:
designate the second depth plane as an accessible depth plane for the second user;
execute a user interface command based on the input gesture and the assigned depth plane;
execute a second user interface command based on the second input gesture and the assigned second depth plane;
determine whether the third depth is associated with the accessible depth plane for the second user; and
reject the third input gesture in response to a determination that the third depth is not associated with the accessible depth plane for the second user,
wherein to execute the second user interface command comprises to:
determine whether the second assigned depth plane comprises a secondary virtual touch plane of the computing device; and
execute a secondary user interface command in response to a determination that the assigned second depth plane comprises the secondary virtual touch plane, wherein to execute the secondary user interface command comprises to display a contextual command menu on the display.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
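The user command module's per-user gating in claim 1, designating an accessible depth plane for the second user and rejecting that user's gestures on any other plane, can be sketched as below. The class name, method names, and string-based command results are hypothetical; the claim specifies only the designate/execute/reject behavior.

```python
from typing import Dict, Optional

class UserCommandModule:
    """Illustrative sketch of the claimed user command module (hypothetical API).

    An 'accessible depth plane' may be designated per user; a gesture by that
    user whose depth resolves to a different plane is rejected.
    """

    def __init__(self) -> None:
        self.accessible: Dict[str, str] = {}  # user id -> designated plane name

    def designate_accessible_plane(self, user: str, plane: str) -> None:
        """Designate the given depth plane as accessible for the user."""
        self.accessible[user] = plane

    def handle_gesture(self, user: str, gesture: str, plane: str) -> str:
        """Execute a UI command, or reject the gesture if it falls outside
        the user's designated accessible plane."""
        designated: Optional[str] = self.accessible.get(user)
        if designated is not None and plane != designated:
            return "rejected"                 # the third-gesture case in the claim
        return f"execute:{gesture}@{plane}"   # normal command dispatch

ucm = UserCommandModule()
ucm.designate_accessible_plane("user2", "secondary_touch_plane")
print(ucm.handle_gesture("user1", "swipe", "primary_touch_plane"))
print(ucm.handle_gesture("user2", "tap", "secondary_touch_plane"))
print(ucm.handle_gesture("user2", "tap", "primary_touch_plane"))  # rejected
```

Users with no designated plane pass through unrestricted here, which is one reasonable reading of the claim: the restriction is stated only for the second user.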
12. A method for depth-based gesture control, the method comprising:
recognizing, on a computing device, an input gesture performed by a user of the computing device in front of a display of the computing device;
recognizing, on a computing device, a second input gesture performed by a second user of the computing device in front of the display of the computing device;
recognizing, on a computing device, a third input gesture performed by the second user of the computing device in front of the display of the computing device;
receiving, on the computing device, depth sensor data indicative of a depth relative to the display of the input gesture from a depth sensor of the computing device;
receiving, on the computing device, second depth sensor data indicative of a second depth relative to the display of the second input gesture from the depth sensor;
receiving, on the computing device, third depth sensor data indicative of a third depth relative to the display of the third input gesture from the depth sensor;
determining, on the computing device, the depth of the input gesture as a function of the depth sensor data;
determining, on the computing device, the second depth of the second input gesture as a function of the second depth sensor data;
determining, on the computing device, the third depth of the third input gesture as a function of the third depth sensor data;
assigning, on the computing device, a depth plane to the input gesture as a function of the depth of the input gesture;
assigning, on the computing device, a second depth plane different from the depth plane to the second input gesture as a function of the second depth of the second input gesture;
assigning, on the computing device, a third depth plane different from the second depth plane to the third input gesture as a function of the third depth of the third input gesture;
wherein each depth plane is parallel to the display and intersects a surface normal of the display;
designating, on the computing device, the second depth plane as an accessible depth plane for the second user;
executing, on the computing device, a user interface command based on the input gesture and the assigned depth plane;
executing, on the computing device, a second user interface command based on the input gesture and the assigned second depth plane;
determining, on the computing device, whether the third depth is associated with the accessible depth plane for the second user; and
rejecting, on the computing device, the third input gesture in response to a determination that the third depth is not associated with the accessible depth plane for the second user,
wherein executing the second user interface command comprises:
determining whether the assigned second depth plane comprises a secondary virtual touch plane of the computing device; and
executing a secondary user interface command in response to determining the assigned second depth plane comprises the secondary virtual touch plane, wherein executing the secondary user interface command comprises displaying a contextual command menu.
- View Dependent Claims (13, 14)
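The "wherein" clause shared by the independent claims, checking whether the assigned plane comprises the secondary virtual touch plane and, if so, displaying a contextual command menu, reduces to a simple dispatch. The plane name and command strings below are illustrative placeholders, not terms from the patent.

```python
def execute_ui_command(gesture: str, assigned_plane: str,
                       secondary_plane: str = "secondary_touch_plane") -> str:
    """Execute a UI command based on an input gesture and its assigned plane.

    Per the claim, when the assigned plane comprises the secondary virtual
    touch plane, a secondary command runs instead: here, displaying a
    contextual command menu on the display.
    """
    if assigned_plane == secondary_plane:
        return "display:contextual_command_menu"  # secondary UI command
    return f"primary_command:{gesture}"           # ordinary dispatch

print(execute_ui_command("tap", "primary_touch_plane"))
print(execute_ui_command("tap", "secondary_touch_plane"))
```

The abstract suggests other secondary commands (magnifying or selecting elements); the contextual menu is simply the variant the independent claims recite.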
15. One or more non-transitory, machine readable storage media comprising a plurality of instructions that in response to being executed cause a computing device to:
recognize an input gesture performed by a user of the computing device in front of a display of the computing device;
recognize a second input gesture performed by a second user of the computing device in front of a display of the computing device;
recognize a third input gesture performed by the second user of the computing device in front of the display of the computing device;
receive depth sensor data indicative of a depth relative to the display of the input gesture from a depth sensor of the computing device;
receive second depth sensor data indicative of a second depth relative to the display of the second input gesture from the depth sensor;
receive third depth sensor data indicative of a third depth relative to the display of the third input gesture from the depth sensor;
determine the depth of the input gesture as a function of the depth sensor data;
determine the second depth of the second input gesture as a function of the second depth sensor data;
determine the third depth of the third input gesture as a function of the third depth sensor data;
assign a depth plane to the input gesture as a function of the depth of the input gesture;
assign a second depth plane different from the depth plane to the second input gesture as a function of the second depth of the second input gesture;
assign a third depth plane different from the second depth plane to the third input gesture as a function of the third depth of the third input gesture;
wherein each depth plane is parallel to the display and intersects a surface normal of the display;
designate the second depth plane as an accessible depth plane for the second user;
execute a user interface command based on the input gesture and the assigned depth plane;
execute, on the computing device, a second user interface command based on the input gesture and the assigned second depth plane;
determine whether the third depth is associated with the accessible depth plane for the second user; and
reject the third input gesture in response to a determination that the third depth is not associated with the accessible depth plane for the second user,
wherein to execute the second user interface command comprises to:
determine whether the assigned second depth plane comprises a secondary virtual touch plane of the computing device; and
execute a secondary user interface command in response to a determination that the assigned depth plane comprises the secondary virtual touch plane, wherein to execute the secondary user interface command comprises to display a contextual command menu on the display.
- View Dependent Claims (16, 17, 18)
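Putting the pieces together, the three-gesture scenario the independent claims walk through (a first user's gesture executes normally, the second user's gesture on the secondary virtual touch plane opens a contextual menu, and the second user's out-of-plane gesture is rejected) can be replayed end to end. All depth values, plane boundaries, and result strings are illustrative assumptions.

```python
from typing import Optional

# Hypothetical plane intervals in meters from the display (illustrative).
PLANES = {"primary": (0.00, 0.15), "secondary": (0.15, 0.40)}
SECONDARY = "secondary"

def plane_for(depth: float) -> Optional[str]:
    """Map a measured gesture depth to a named depth plane."""
    for name, (near, far) in PLANES.items():
        if near <= depth < far:
            return name
    return None

# Per-user accessible plane, as designated for the second user in the claims.
accessible = {"user2": "secondary"}

def process(user: str, gesture: str, depth: float) -> str:
    """Assign a plane, enforce the user's accessible plane, and dispatch."""
    plane = plane_for(depth)
    if user in accessible and plane != accessible[user]:
        return "rejected"                 # third-gesture case
    if plane == SECONDARY:
        return "contextual_menu"          # secondary UI command
    return f"command:{gesture}"           # primary UI command

events = [("user1", "swipe", 0.10),   # first user, primary plane
          ("user2", "tap", 0.25),     # second user, secondary plane
          ("user2", "tap", 0.05)]     # second user, outside accessible plane
print([process(u, g, d) for u, g, d in events])
# ['command:swipe', 'contextual_menu', 'rejected']
```

This collapses the claimed gesture recognition, depth recognition, and user command modules into three functions; the module boundaries matter for claim structure but not for the control flow being illustrated.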
Specification