Gesture detection based on joint skipping
Abstract
A system is disclosed for detecting or confirming gestures performed by a user by identifying a vector formed by non-adjacent joints and identifying the angle the vector forms with a reference point. Thus, the system skips one or more intermediate joints between an end joint and a proximal joint closer to the body core of a user. Skipping one or more intermediate joints results in a more reliable indication of the position or movement performed by the user, and consequently a more reliable indication of a given gesture.
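The core technique in the abstract can be sketched in a few lines: form a vector between two non-adjacent joints (skipping the intermediate joint) and measure the angle it makes with a reference direction. The joint names, coordinates, and the choice of the camera-space x-axis as the reference are illustrative assumptions, not values from the patent.

```python
import math

# Hypothetical 3-D joint positions from a depth-camera skeleton frame.
# The elbow (intermediate joint) is deliberately skipped: the vector runs
# from the shoulder (proximal joint) straight to the wrist (end joint).
joints = {
    "shoulder": (0.0, 1.4, 2.0),
    "elbow":    (0.2, 1.1, 2.0),   # skipped intermediate joint
    "wrist":    (0.5, 1.7, 2.0),
}

def joint_vector(a, b):
    """Vector from joint a to joint b."""
    return tuple(bi - ai for ai, bi in zip(a, b))

def angle_between(u, v):
    """Angle in degrees between two vectors."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Non-adjacent joint position vector: shoulder -> wrist, skipping the elbow.
v = joint_vector(joints["shoulder"], joints["wrist"])

# Assumed reference direction: horizontal x-axis in camera space.
reference = (1.0, 0.0, 0.0)
print(round(angle_between(v, reference), 1))  # → 31.0
```

Because the elbow is skipped, small tracking jitter in that intermediate joint has no effect on the measured angle, which is the reliability benefit the abstract describes.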
20 Claims
1. In a system comprising a computing environment coupled to a capture device for capturing user position, a method of generating pose information for use in determining whether a user has performed a given gesture, the method comprising:
(a) detecting via the capture device a location of a first skeletal joint of the user position;
(b) detecting via the capture device a location of a second skeletal joint of the user position that is not a joint adjacent to the first skeletal joint; and
(c) generating the pose information, directly including the location of the first skeletal joint relative to the location of the second skeletal joint, for use in determining whether a user has performed a given gesture;
wherein joint adjacency is determined along a user's body.
Dependent claims: 2–8.
9. A method performed by a computing environment of detecting whether a user has performed a given gesture, the method comprising:
(a) detecting a location of a first skeletal joint of the user position;
(b) detecting a location of a second skeletal joint of the user position that is not a joint adjacent to the first skeletal joint;
(c) generating a non-adjacent joint position vector having as end points the locations of the first and second skeletal joints detected in steps (a) and (b); and
(d) using the non-adjacent joint position vector generated in said step (c) to determine whether or not the user has performed a given gesture;
wherein joint adjacency is determined along a user's body.
Dependent claims: 10–16.
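Steps (a) through (d) of claim 9 can be sketched as a single gesture-detection function. The joint labels, adjacency table, and the elevation-angle threshold used to define the gesture are all hypothetical choices made for illustration.

```python
import math

# Assumed adjacency along the body; shoulder and wrist are non-adjacent.
ADJACENCY = {("shoulder", "elbow"), ("elbow", "wrist")}

def is_adjacent(j1, j2):
    return (j1, j2) in ADJACENCY or (j2, j1) in ADJACENCY

def detect_gesture(frame, end_joint="wrist", proximal_joint="shoulder",
                   min_angle=25.0, max_angle=35.0):
    """Steps (a)-(d): locate two non-adjacent joints, form the joint
    position vector, and test its elevation angle against a predefined
    gesture range (an assumed way of realizing step (d))."""
    assert not is_adjacent(end_joint, proximal_joint)   # skip intermediate joints
    ax, ay, az = frame[proximal_joint]                  # step (a)
    bx, by, bz = frame[end_joint]                       # step (b)
    vx, vy, vz = bx - ax, by - ay, bz - az              # step (c): position vector
    elevation = math.degrees(math.atan2(vy, math.hypot(vx, vz)))
    return min_angle <= elevation <= max_angle          # step (d)

frame = {"shoulder": (0.0, 1.4, 2.0), "elbow": (0.2, 1.1, 2.0),
         "wrist": (0.5, 1.7, 2.0)}
print(detect_gesture(frame))  # arm raised ~31° above horizontal → True
```

A gesture recognizer would typically evaluate such a predicate per frame, or over a window of frames for gestures defined by movement rather than a static pose.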
17. A system for detecting gestures performed by a user in real world space, the system comprising:
a capture device for capturing a depth image of a user within the field of view;
a computing environment for receiving the depth image from the capture device and determining a location of a plurality of joints, the plurality of joints including an end joint, at least one intermediate joint proximal of the end joint, and a core body joint proximal of the at least one intermediate joint; and
a processor within one of the capture device and computing environment for generating a non-adjacent joint position vector having end points at an end joint and a joint that is not adjacent to the end joint from one of the at least one intermediate joints and core body joint, the non-adjacent joint position vector used to determine whether the user has performed a predefined gesture;
wherein joint adjacency is determined along a user's body.
Dependent claims: 18–20.
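Claim 17 orders the joints along a chain from a core body joint through intermediate joints to an end joint. Under that assumption, the candidate non-adjacent pairs from which the processor could build its vector are simply those that skip at least one joint in the chain. The chain below is a hypothetical example, not one recited in the patent.

```python
# Hypothetical joint chain, ordered from the core body joint out to the end joint.
chain = ["spine", "shoulder", "elbow", "wrist"]  # core -> intermediate(s) -> end

def non_adjacent_pairs(chain):
    """All (inner, outer) joint pairs that skip at least one intermediate joint."""
    return [(chain[i], chain[j])
            for i in range(len(chain))
            for j in range(i + 2, len(chain))]   # j > i + 1 skips a joint

pairs = non_adjacent_pairs(chain)
print(pairs)
# → [('spine', 'elbow'), ('spine', 'wrist'), ('shoulder', 'wrist')]
```

Any of these pairs yields a vector unaffected by jitter in the skipped joints; the shoulder-to-wrist pair corresponds to the arm example used throughout this sketch.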
Specification