Gesture based human computer interaction
First Claim
1. One or more non-transitory computer-readable media having instructions stored thereon that, when executed by a computing device, provide the computing device with a three-dimensional (3-D) interaction module configured to:
analyze sensor data associated with individual finger positions of a user of the computing device, only within a 3-D interaction region, sensed by a plurality of sensors of the computing device that include a plurality of light sources and a camera, wherein the plurality of light sources and the camera are complementarily disposed for the camera to capture the individual finger positions in three dimensions, and wherein the 3-D interaction region is a region in which light emitted from the plurality of light sources intersects a viewing angle of the camera and within a viewing area of a display such that the individual finger positions within the viewing angle of the camera and outside of the 3-D interaction region are not to be included as a gesture of the user;
detect a gesture of the user, based at least in part on a result of the analysis, wherein the gesture comprises at least a first individual finger of a hand hovering in a same position for a predetermined period of time and at least one movement using the first individual finger, using a second individual finger of the hand, or using the first individual finger and the second individual finger;
determine a location of the first individual finger or the second individual finger within the 3-D interaction region;
correlate the location of the first individual finger or the second individual finger, within the 3-D interaction region, with a location of a cursor rendered on a display of the computing device, wherein the at least one movement comprises holding the first individual finger of the hand, extending beyond remaining fingers of the hand, in a fixed position within the 3-D interaction region for a first predetermined period of time to cause the correlated location of the cursor to freeze for a second predetermined period of time, and the at least one movement further comprises a forward or downward movement of a tip of the first individual finger within the second predetermined period of time; and
execute a user control action corresponding to the detected gesture, to control the computing device, and wherein to execute further comprises execution of a user control action proximate to the correlated location of the cursor rendered on the display.
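The click gesture claimed above can be read as a small state machine: hold an extended fingertip fixed for a first period to freeze the cursor, then move the tip forward or downward within a second period to trigger the control action at the frozen cursor location. Below is a minimal sketch of that detection logic, assuming a stream of timestamped fingertip samples; the threshold constants (HOLD_TIME, FREEZE_TIME, HOLD_RADIUS, CLICK_DELTA) are illustrative placeholders, since the claim only recites "predetermined" periods.

```python
from dataclasses import dataclass

@dataclass
class FingerSample:
    t: float   # timestamp in seconds
    x: float   # horizontal position within the 3-D region (mm)
    y: float   # vertical position (mm)
    z: float   # depth toward the display (mm)

# Illustrative thresholds -- the claim only says "predetermined periods".
HOLD_TIME = 0.5     # first predetermined period: hold to freeze the cursor
FREEZE_TIME = 1.0   # second predetermined period: window for the click motion
HOLD_RADIUS = 5.0   # max fingertip drift still counted as a "fixed position"
CLICK_DELTA = 10.0  # forward/downward tip travel that completes the click

def detect_click(samples):
    """Return the sample index where a freeze-then-tap click completes, or None."""
    for i, anchor in enumerate(samples):
        # 1. Hold phase: every sample within HOLD_TIME stays near the anchor.
        hold = [s for s in samples[i:] if s.t - anchor.t <= HOLD_TIME]
        if not hold or hold[-1].t - anchor.t < HOLD_TIME * 0.9:
            continue  # not enough data to cover the hold period
        if any((s.x - anchor.x) ** 2 + (s.y - anchor.y) ** 2
               + (s.z - anchor.z) ** 2 > HOLD_RADIUS ** 2 for s in hold):
            continue  # fingertip drifted; not a hold
        freeze_start = hold[-1].t  # cursor is frozen from here
        # 2. Click phase: forward (+z) or downward (-y) tip travel in the window.
        for j in range(i + len(hold), len(samples)):
            s = samples[j]
            if s.t - freeze_start > FREEZE_TIME:
                break  # freeze window expired without a click
            if s.z - anchor.z > CLICK_DELTA or anchor.y - s.y > CLICK_DELTA:
                return j
    return None
```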
Abstract
Apparatus, computer-readable storage medium, and method associated with human computer interaction. In embodiments, a computing device may include a plurality of sensors, including a plurality of light sources and a camera, to create a three-dimensional (3-D) interaction region within which to track individual finger positions of a user of the computing device. The light sources and the camera may be complementarily disposed for the camera to capture the finger or hand positions. The computing device may further include a 3-D interaction module configured to analyze the individual finger positions and movements within the 3-D interaction region, as captured by the camera, to detect a gesture based on a result of the analysis, and to execute a user control action corresponding to the gesture detected. Other embodiments may be described and/or claimed.
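The 3-D interaction region described in the abstract is the intersection of three volumes: the camera's viewing angle, the light emitted by the sources, and the display's viewing area; a fingertip the camera can see but that lies outside that intersection is not treated as gesture input. A toy membership test follows, with each volume modeled by an assumed simplified shape (a cone for the camera, axis-aligned boxes for the light sources, a centered rectangle for the display); all dimensions and default values are illustrative, not from the patent.

```python
import math

def in_camera_view(p, fov_half_angle=0.6):
    """Point lies inside the camera's viewing cone (camera at origin, looking +z)."""
    x, y, z = p
    return z > 0 and math.atan2(math.hypot(x, y), z) <= fov_half_angle

def in_light_volume(p, light_boxes):
    """Point is lit by at least one source, each modeled as an axis-aligned box."""
    return any(all(lo <= c <= hi for c, (lo, hi) in zip(p, box))
               for box in light_boxes)

def in_display_area(p, width=300.0, height=200.0):
    """Point falls within the display's viewing area (display centered on the axis)."""
    x, y, _ = p
    return abs(x) <= width / 2 and abs(y) <= height / 2

def in_interaction_region(p, light_boxes):
    # The region is the intersection of all three volumes; a fingertip the
    # camera can see but that lies outside the region is ignored as input.
    return (in_camera_view(p) and in_light_volume(p, light_boxes)
            and in_display_area(p))
```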
Citations
21 Claims
1. One or more non-transitory computer-readable media having instructions stored thereon that, when executed by a computing device, provide the computing device with a three-dimensional (3-D) interaction module configured to:
analyze sensor data associated with individual finger positions of a user of the computing device, only within a 3-D interaction region, sensed by a plurality of sensors of the computing device that include a plurality of light sources and a camera, wherein the plurality of light sources and the camera are complementarily disposed for the camera to capture the individual finger positions in three dimensions, and wherein the 3-D interaction region is a region in which light emitted from the plurality of light sources intersects a viewing angle of the camera and within a viewing area of a display such that the individual finger positions within the viewing angle of the camera and outside of the 3-D interaction region are not to be included as a gesture of the user;
detect a gesture of the user, based at least in part on a result of the analysis, wherein the gesture comprises at least a first individual finger of a hand hovering in a same position for a predetermined period of time and at least one movement using the first individual finger, using a second individual finger of the hand, or using the first individual finger and the second individual finger;
determine a location of the first individual finger or the second individual finger within the 3-D interaction region;
correlate the location of the first individual finger or the second individual finger, within the 3-D interaction region, with a location of a cursor rendered on a display of the computing device, wherein the at least one movement comprises holding the first individual finger of the hand, extending beyond remaining fingers of the hand, in a fixed position within the 3-D interaction region for a first predetermined period of time to cause the correlated location of the cursor to freeze for a second predetermined period of time, and the at least one movement further comprises a forward or downward movement of a tip of the first individual finger within the second predetermined period of time; and
execute a user control action corresponding to the detected gesture, to control the computing device, and wherein to execute further comprises execution of a user control action proximate to the correlated location of the cursor rendered on the display. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
10. A computing device for human computer interaction comprising:
a plurality of sensors, including a plurality of light sources and a camera, to create a three-dimensional (3-D) interaction region in which light emitted from the plurality of light sources intersects a viewing angle of the camera and is within a viewing area of a display, within which to track individual finger positions of a user of the computing device, wherein the plurality of light sources and the camera are complementarily disposed to enable the camera to capture the individual finger positions in three dimensions;
a 3-D interaction module coupled with the plurality of sensors, the 3-D interaction module to analyze the individual finger positions, in relation to one another, only within the 3-D interaction region, as captured by the camera, such that the individual finger positions within the viewing angle of the camera and outside of the 3-D interaction region are not to be included as a gesture of the user, to detect a gesture based on a result of the analysis, and to execute a user control action corresponding to the gesture detected; and
a display coupled with the 3-D interaction module, wherein the 3-D interaction module is to further determine a location of the individual fingers within the 3-D interaction region, and to correlate the location of the individual fingers, within the 3-D interaction region, with a location of a cursor rendered on the display, wherein the detected gesture comprises an individual finger of the individual fingers to be extended beyond remaining fingers of the individual fingers in a fixed position within the 3-D interaction region for a first predetermined period of time to cause the correlated location of the cursor to freeze for a second predetermined period of time, and the gesture detected further comprises a forward or downward movement of a tip of the individual finger within the second predetermined period of time, and wherein the 3-D interaction module is to execute a user control action at the correlated location of the cursor rendered on the display. - View Dependent Claims (11, 12, 13, 14, 15, 16, 17, 18, 19)
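Claim 10's correlation of a finger location inside the interaction region with a cursor location on the display amounts to a coordinate mapping. Here is a minimal sketch, assuming a simple linear map from hypothetical region bounds to display pixels; the function name, bounds, and resolution are illustrative, not recited in the claim.

```python
def region_to_cursor(finger_xy, region_bounds, display_px):
    """Linearly map a fingertip (x, y) in the interaction region to cursor pixels.

    region_bounds is ((x_min, x_max), (y_min, y_max)) in region coordinates;
    display_px is (width, height) in pixels.
    """
    (rx0, rx1), (ry0, ry1) = region_bounds
    w, h = display_px
    cx = (finger_xy[0] - rx0) / (rx1 - rx0) * (w - 1)
    cy = (finger_xy[1] - ry0) / (ry1 - ry0) * (h - 1)
    # Clamp so the correlated cursor never leaves the display.
    return (min(max(round(cx), 0), w - 1),
            min(max(round(cy), 0), h - 1))
```

Freezing the cursor, as claimed, would then simply mean holding the last returned pixel pair constant for the second predetermined period instead of recomputing it each frame.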
20. A computer-implemented method for human computer interaction comprising:
analyzing, by a three-dimensional (3-D) interaction module of a computing device, sensor data associated with individual finger positions of a user of the computing device, only within a 3-D interaction region, sensed by a plurality of sensors of the computing device that include a plurality of light sources and a camera, wherein the plurality of light sources and the camera are complementarily disposed for the camera to capture the individual finger positions in three dimensions, and wherein the 3-D interaction region is a region in which light emitted from the plurality of light sources intersects a viewing angle of the camera and is within a viewing area of a display such that the individual finger positions within the viewing angle of the camera and outside of the 3-D interaction region are not to be included as a gesture of the user;
detecting, by the 3-D interaction module, a gesture of the user, based at least in part on a result of the analysis, wherein the gesture comprises at least an individual finger of a hand hovering in a same position for a predetermined period of time and at least one of a pecking motion in a z-direction of the 3-D interaction region or a tapping motion in a y-direction of the 3-D interaction region; and
executing, by the 3-D interaction module, a user control action corresponding to the gesture detected, to control the computing device. - View Dependent Claims (21)
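Claim 20 distinguishes a pecking motion along the z-direction (toward the display) from a tapping motion along the y-direction (downward). A hedged sketch of that two-way classification, using a hypothetical travel threshold:

```python
def classify_motion(start, end, threshold=10.0):
    """Classify a fingertip movement as a peck (z-direction) or tap (y-direction).

    start and end are (x, y, z) tip positions; below-threshold motions return None.
    The threshold value is illustrative, not taken from the patent.
    """
    dy = start[1] - end[1]  # downward travel (y decreases)
    dz = end[2] - start[2]  # forward travel toward the display (z increases)
    if dz >= threshold and dz >= dy:
        return "peck"
    if dy >= threshold:
        return "tap"
    return None
```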
Specification