Dynamic Hand Gesture Recognition Using Depth Data
First Claim
1. In a computing environment, a method performed at least in part on at least one processor comprising:
- sensing depth data for a plurality of frames that include hand movement;
- for each frame of at least some of the frames, processing the depth data for that frame, including detecting a hand represented in the depth data, and extracting feature values corresponding to the hand; and
- recognizing the hand movement as a hand gesture based upon the feature values.
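The claim does not name the feature values, so the following is only an illustrative sketch of the per-frame step: detect hand pixels in a depth frame (here, naively, any pixel closer than a fixed threshold) and extract a few simple feature values (centroid, mean depth, bounding extent). The threshold, the millimeter units, and the specific features are all assumptions, not taken from the patent.

```python
# Hypothetical per-frame processing: detect the hand in a depth frame and
# extract simple feature values. Depth values are assumed to be millimeters,
# with 0 meaning "no reading" (a common depth-sensor convention).

def extract_hand_features(depth_frame, hand_max_depth=800):
    """depth_frame: list of rows of depth values; returns a feature dict or None."""
    hand_pixels = [
        (r, c, d)
        for r, row in enumerate(depth_frame)
        for c, d in enumerate(row)
        if 0 < d < hand_max_depth  # assume the hand is the near surface
    ]
    if not hand_pixels:
        return None  # no hand detected in this frame
    rows = [p[0] for p in hand_pixels]
    cols = [p[1] for p in hand_pixels]
    depths = [p[2] for p in hand_pixels]
    n = len(hand_pixels)
    return {
        "centroid": (sum(rows) / n, sum(cols) / n),
        "mean_depth": sum(depths) / n,
        "extent": (max(rows) - min(rows) + 1, max(cols) - min(cols) + 1),
    }

# A tiny synthetic 3x4 depth frame: a near cluster (~600 mm) plus background.
frame = [
    [0, 600, 610, 0],
    [0, 605, 615, 2000],
    [0, 0, 620, 2000],
]
print(extract_hand_features(frame))
# → {'centroid': (0.8, 1.6), 'mean_depth': 610.0, 'extent': (3, 2)}
```

Running this per frame yields one feature dictionary per frame; concatenating such values over time is what the recognition step would consume.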
Abstract
The subject disclosure is directed towards a technology by which dynamic hand gestures are recognized by processing depth data, including in real-time. In an offline stage, a classifier is trained from feature values extracted from frames of depth data that are associated with intended hand gestures. In an online stage, a feature extractor extracts feature values from sensed depth data that corresponds to an unknown hand gesture. These feature values are input to the classifier as a feature vector to receive a recognition result of the unknown hand gesture. The technology may be used in real time, and may be robust to variations in lighting, hand orientation, and the user's gesturing speed and style.
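The abstract's two-stage pipeline (offline training, online recognition of a feature vector) can be sketched minimally. The abstract does not fix a classifier type, so this sketch substitutes a simple nearest-centroid model and plain numeric lists as feature vectors; both choices, and the gesture names, are illustrative assumptions.

```python
# Offline stage: train a (stand-in) nearest-centroid classifier from labeled
# feature vectors. Online stage: recognize an unknown vector as the gesture
# whose centroid is closest.

def train(labeled_vectors):
    """Offline: compute one centroid per gesture label."""
    sums, counts = {}, {}
    for label, vec in labeled_vectors:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def recognize(centroids, vec):
    """Online: label of the nearest centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], vec))

# Hypothetical 2-D feature vectors for two gestures.
training = [
    ("wave",  [1.0, 0.1]), ("wave",  [0.9, 0.2]),
    ("punch", [0.1, 1.0]), ("punch", [0.2, 0.9]),
]
model = train(training)
print(recognize(model, [0.85, 0.15]))  # near the "wave" centroid
```

A real system would extract far richer per-frame features and use a trained discriminative classifier, but the offline/online split shown here mirrors the structure the abstract describes.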
20 Claims
1. In a computing environment, a method performed at least in part on at least one processor comprising:
- sensing depth data for a plurality of frames that include hand movement;
- for each frame of at least some of the frames, processing the depth data for that frame, including detecting a hand represented in the depth data, and extracting feature values corresponding to the hand; and
- recognizing the hand movement as a hand gesture based upon the feature values.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
10. A system comprising:
- a classifier trained from feature values extracted from frames of depth data that are associated with intended hand gestures; and
- a feature extractor configured to extract feature values from sensed depth data that corresponds to an unknown hand gesture, at least some of the feature values extracted from the sensed depth data represented as a feature data input to the classifier to receive a recognition result of the unknown hand gesture.
17. One or more computer-readable media having computer-executable instructions, which when executed perform steps, comprising:
- processing sensed depth data for a plurality of frames that include hand movement, including segmenting the depth data to isolate a hand represented in the frames of depth data, and extracting feature values corresponding to the hand; and
- recognizing the hand movement as a hand gesture based upon the feature values.

Dependent claims: 18, 19, 20.
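Claim 17 recites "segmenting the depth data to isolate a hand" without specifying how. One common approach, sketched here purely as an assumption, is to treat the hand as the surface closest to the sensor and keep only pixels within a fixed depth band of the minimum valid reading; the band width and the closest-object assumption are illustrative, not taken from the patent.

```python
# Hypothetical depth-band segmentation: keep pixels within band_mm of the
# nearest valid depth reading, producing a boolean hand mask per frame.
# Depth values are assumed to be millimeters, with 0 meaning "no reading".

def segment_hand(depth_frame, band_mm=150):
    valid = [d for row in depth_frame for d in row if d > 0]
    if not valid:
        return [[False] * len(row) for row in depth_frame]
    near = min(valid)  # assume the hand is the closest surface
    return [[(0 < d <= near + band_mm) for d in row] for row in depth_frame]

# Near cluster (~520 mm) is kept; arm/background (900+, 2000 mm) is cut off.
frame = [
    [0,   520, 530],
    [900, 525, 2000],
    [0,   0,   910],
]
mask = segment_hand(frame)
print(mask)
# → [[False, True, True], [False, True, False], [False, False, False]]
```

Feature extraction would then run only over the `True` pixels of the mask, which is one way the "segmenting … and extracting" steps of the claim could compose.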
Specification