Gestures And Gesture Recognition For Manipulating A User-Interface
Abstract
Symbolic gestures and associated recognition technology are provided for controlling a system user-interface, such as that provided by the operating system of a general computing system or multimedia console. The symbolic gesture movements in mid-air are performed by a user with or without the aid of an input device. A capture device is provided to generate depth images for three-dimensional representation of a capture area including a human target. The human target is tracked using skeletal mapping to capture the mid-air motion of the user. The skeletal mapping data is used to identify movements corresponding to pre-defined gestures using gesture filters that set forth parameters for determining when a target's movement indicates a viable gesture. When a gesture is detected, one or more pre-defined user-interface control actions are performed.
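The pipeline the abstract describes — skeletal mapping data checked against gesture filters whose parameters decide when movement counts as a viable gesture, which then triggers a user-interface action — can be sketched as below. This is a minimal illustration, not the patent's implementation: the joint names, the `GestureFilter` shape, and the 0.4 m threshold are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

# Hypothetical skeletal-mapping records (names are illustrative).
Joint = Tuple[float, float, float]       # (x, y, z) in capture-area space
SkeletalFrame = Dict[str, Joint]         # joint name -> tracked position

@dataclass
class GestureFilter:
    """Parameters deciding when tracked movement indicates a viable gesture."""
    name: str
    predicate: Callable[[List[SkeletalFrame]], bool]

def right_hand_swipe(frames: List[SkeletalFrame]) -> bool:
    """True if the right hand travels far enough along x across the frames."""
    xs = [f["hand_right"][0] for f in frames if "hand_right" in f]
    return len(xs) >= 2 and (xs[-1] - xs[0]) > 0.4   # threshold is illustrative

def recognize(frames: List[SkeletalFrame],
              filters: List[GestureFilter]) -> List[str]:
    """Return the names of all gesture filters satisfied by the movement."""
    return [f.name for f in filters if f.predicate(frames)]

# Two frames: the right hand travels 0.5 along x, satisfying the filter.
frames = [{"hand_right": (0.0, 1.0, 2.0)}, {"hand_right": (0.5, 1.0, 2.0)}]
filters = [GestureFilter("horizontal_fling", right_hand_swipe)]
print(recognize(frames, filters))  # ['horizontal_fling']
```

A real recognizer would evaluate filters over a sliding window of depth-derived skeletal frames rather than a complete list, but the filter-predicate structure is the same.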
20 Claims
1. A method of operating a user-interface using mid-air motion of a human target, comprising:
- receiving a plurality of images from a capture device, the plurality of images including the human target;
- tracking movement of the human target from the plurality of images using skeletal mapping of the human target;
- determining from the skeletal mapping whether the movement of the human target satisfies one or more filters for a first mid-air gesture, the one or more filters specifying that the first mid-air gesture be performed by a particular hand or by both hands; and
- if the movement of the human target satisfies the one or more filters, performing at least one user-interface action corresponding to the mid-air gesture.

(Dependent claims: 2-12)
13. A system for tracking user movement to control a user-interface, comprising:
- an operating system providing the user-interface;
- a tracking system in communication with an image capture device to receive depth information of a capture area including a human target and to create a skeletal model mapping movement of the human target over time;
- a gestures library storing a plurality of gesture filters, each gesture filter containing information for at least one gesture, wherein one or more of the plurality of gesture filters specify that a corresponding gesture be performed by a particular hand or both hands; and
- a gesture recognition engine in communication with the gestures library for receiving the skeletal model and determining whether the movement of the human target satisfies one or more of the plurality of gesture filters, the gesture recognition engine providing an indication to the operating system when one or more of the plurality of gesture filters are satisfied by the movement of the human target.

(Dependent claims: 14-16)
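Claim 13's component split — a gestures library holding per-gesture filters (including which hand must perform the gesture) and a recognition engine that checks the skeletal model and signals the operating system — might be organized as in this sketch. All class and method names here are assumptions for illustration, as is modeling a two-handed press as both hands moving toward the sensor (z decreasing).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class GestureFilter:
    name: str
    hand: str                                  # "left", "right", or "both"
    predicate: Callable[[List[dict]], bool]

class GesturesLibrary:
    """Stores a filter per gesture, including the required hand(s)."""
    def __init__(self) -> None:
        self._filters: Dict[str, GestureFilter] = {}
    def add(self, f: GestureFilter) -> None:
        self._filters[f.name] = f
    def all(self) -> List[GestureFilter]:
        return list(self._filters.values())

class GestureRecognitionEngine:
    """Checks skeletal-model movement against the library; notifies the OS."""
    def __init__(self, library: GesturesLibrary,
                 notify_os: Callable[[str], None]) -> None:
        self.library = library
        self.notify_os = notify_os
    def process(self, skeletal_frames: List[dict]) -> None:
        for f in self.library.all():
            if f.predicate(skeletal_frames):
                self.notify_os(f.name)   # indication to the operating system

def two_handed_press(frames: List[dict]) -> bool:
    """Both hands move toward the sensor (z shrinks) by an illustrative 0.2."""
    if len(frames) < 2:
        return False
    first, last = frames[0], frames[-1]
    return all(last[j][2] < first[j][2] - 0.2
               for j in ("hand_left", "hand_right"))

events: List[str] = []
lib = GesturesLibrary()
lib.add(GestureFilter("two_handed_press", "both", two_handed_press))
engine = GestureRecognitionEngine(lib, events.append)
engine.process([
    {"hand_left": (-0.2, 1.0, 2.0), "hand_right": (0.2, 1.0, 2.0)},
    {"hand_left": (-0.2, 1.0, 1.6), "hand_right": (0.2, 1.0, 1.6)},
])
print(events)  # ['two_handed_press']
```

Keeping the library separate from the engine, as the claim does, lets new gestures be registered without touching the recognition loop.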
17. One or more processor readable storage devices having processor readable code embodied on the one or more processor readable storage devices, the processor readable code for programming one or more processors to perform a method comprising:
- providing at least one gesture filter corresponding to each of a plurality of mid-air gestures for controlling an operating system user-interface, the plurality of mid-air gestures including at least two of a horizontal fling gesture, a vertical fling gesture, a one-handed press gesture, a back gesture, a two-handed press gesture and a two-handed compression gesture;
- tracking movement of a human target from a plurality of depth images using skeletal mapping of the human target in a known three-dimensional coordinate system;
- determining from the skeletal mapping whether the movement of the human target satisfies the at least one gesture filter for each of the plurality of mid-air gestures; and
- controlling the operating system user-interface in response to determining that the movement of the human target satisfies one or more of the gesture filters.

(Dependent claims: 18-20)
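Claim 17 distinguishes gestures such as a horizontal versus a vertical fling. One plausible way to separate them — by comparing net hand displacement along each axis of the three-dimensional coordinate system — is sketched below; the joint name, axis convention, and 0.3 threshold are illustrative assumptions, not parameters from the patent.

```python
from typing import Dict, List, Optional, Tuple

Joint = Tuple[float, float, float]   # (x, y, z), y up, as an assumed convention

def classify_fling(frames: List[Dict[str, Joint]],
                   hand: str = "hand_right",
                   threshold: float = 0.3) -> Optional[str]:
    """Label a movement as a horizontal or vertical fling, or neither.

    Compares net displacement along x (horizontal) and y (vertical);
    the dominant axis wins if it exceeds the threshold.
    """
    path = [f[hand] for f in frames if hand in f]
    if len(path) < 2:
        return None
    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    if abs(dx) >= threshold and abs(dx) > abs(dy):
        return "horizontal_fling"
    if abs(dy) >= threshold:
        return "vertical_fling"
    return None

horizontal = [{"hand_right": (0.0, 1.0, 2.0)}, {"hand_right": (0.5, 1.05, 2.0)}]
vertical   = [{"hand_right": (0.0, 1.0, 2.0)}, {"hand_right": (0.05, 1.5, 2.0)}]
print(classify_fling(horizontal), classify_fling(vertical))
# horizontal_fling vertical_fling
```

A production filter would also gate on velocity and hand state, but axis-dominant displacement is enough to show how one filter per gesture can coexist over the same skeletal data.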