Predictive information for free space gesture control and communication
Abstract
The technology disclosed relates to simplifying updating of a predictive model using clustering observed points. In particular, it relates to observing a set of points in 3D sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments. It also relates to distinguishing between alternative motions between two observed locations of a control object in a 3D sensory space by accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a 3D sensory space.
Claims (20)
1. A computer implemented method of simplifying updating of a predictive model using clustering observed points, the method including:
- receiving observation information capturing a set of points in a three-dimensional (3D) sensory space;
- determining surface normal directions from the points;
- clustering the points by their surface normal directions and adjacency;
- accessing a predictive model of a hand;
- matching the clusters of the points to segments of the predictive model; and
- using the matched clusters to refine positions of matched segments.

Dependent claims: 2-11.
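The normal-estimation and clustering steps of claim 1 could be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes k-nearest-neighbor PCA for the surface normals and greedy region growing for the adjacency-and-normal clustering, with hypothetical thresholds (`k`, `angle_thresh`, `dist_thresh`) chosen for illustration only.

```python
import numpy as np

def surface_normals(points, k=8):
    """Estimate a surface normal at each 3D point from its k nearest
    neighbors via PCA: the eigenvector of the local covariance with the
    smallest eigenvalue approximates the surface normal direction."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)       # eigenvalues in ascending order
        normals[i] = v[:, 0]             # smallest-eigenvalue eigenvector
    return normals

def cluster_points(points, normals, angle_thresh=0.9, dist_thresh=0.05):
    """Greedy region growing: two points join the same cluster when they
    are adjacent (within dist_thresh) and their normals are nearly
    parallel (absolute dot product above angle_thresh)."""
    labels = -np.ones(len(points), dtype=int)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        stack = [seed]
        while stack:
            i = stack.pop()
            close = np.linalg.norm(points - points[i], axis=1) < dist_thresh
            aligned = np.abs(normals @ normals[i]) > angle_thresh
            for j in np.flatnonzero(close & aligned):
                if labels[j] == -1:
                    labels[j] = cluster
                    stack.append(j)
        cluster += 1
    return labels
```

Applied to two small planar patches of points separated in space, this yields one cluster per patch, since the patches share normal directions but fail the adjacency test across the gap.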
12. A computer implemented method of distinguishing between alternative motions between two observed locations of a control object in a three-dimensional (3D) sensory space, the method including:
- accessing a first position and a second position of a segment of a control object, wherein motion between the first position and the second position was at least partially occluded from observation in a three-dimensional (3D) sensory space;
- receiving two or more alternative interpretations of movement from the first position to the second position;
- estimating entropy or extent of motion involved in the alternative interpretations;
- selecting an alternative interpretation with lower entropy or extent of motion than other interpretations; and
- applying the alternative interpretation selected to predicting further positioning of the segment, and of any other segments of the control object from additional observations in the three-dimensional (3D) sensory space.

Dependent claims: 13-14.
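The selection step of claim 12 can be illustrated with a small sketch. It assumes (purely for illustration) that "extent of motion" is proxied by total path length over each candidate path, and that candidates are given as position sequences; the claim's entropy alternative and the function names here are not from the source.

```python
import numpy as np

def path_extent(path):
    """Total displacement along a candidate path: the sum of segment
    lengths between successive 3D positions."""
    path = np.asarray(path, dtype=float)
    return float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())

def select_interpretation(candidates):
    """Pick the candidate path with the smallest extent of motion,
    standing in for the claim's 'lower entropy or extent of motion'
    selection criterion."""
    return min(candidates, key=path_extent)
```

For example, given a direct path from the first to the second position and a detour that swings wide during the occluded interval, the direct path has the smaller extent and would be selected for predicting further positioning.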
15. A system enabling simplifying updating of a predictive model using clustering observed points, comprising:
- at least one camera oriented towards a field of view;
- a gesture database comprising a series of electronically stored records, each of the records relating a predictive model of a hand; and
- an image analyzer coupled to the camera and the database and configured to:
- observe a set of points in a three-dimensional (3D) sensory space using at least one image captured by the camera;
- determine surface normal directions from the points;
- cluster the points by their surface normal directions and adjacency;
- access a particular predictive model of the hand;
- match the clusters of the points to segments of the predictive model; and
- use the matched clusters to refine positions of matched segments.

Dependent claims: 16-19.
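The matching and refinement steps that the image analyzer performs could look like the following sketch. It assumes a deliberately simple scheme that the source does not specify: each observed cluster is summarized by its centroid, matched to the nearest model segment position, and the matched segment is pulled part of the way toward the observation; the `step` blending factor is hypothetical.

```python
import numpy as np

def match_and_refine(cluster_centroids, segment_positions, step=0.5):
    """Greedily match each observed cluster centroid to the nearest
    segment of the predictive model, then refine the matched segment's
    position by moving it a fraction `step` toward its cluster."""
    refined = np.array(segment_positions, dtype=float)
    for c in np.asarray(cluster_centroids, dtype=float):
        j = np.argmin(np.linalg.norm(refined - c, axis=1))  # nearest segment
        refined[j] += step * (c - refined[j])               # partial update
    return refined
```

A real tracker would use a more robust assignment (e.g. one-to-one matching) and weight the update by observation confidence, but the sketch shows the claimed flow: match clusters to segments, then use the matches to refine segment positions.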
20. A system to distinguish between alternative motions between two observed locations of a control object in a three-dimensional (3D) sensory space, comprising:
- at least one camera oriented towards a field of view;
- a gesture database comprising a series of electronically stored records, each of the records relating a predictive model of a hand; and
- an image analyzer coupled to the camera and the database and configured to:
- access a first position and a second position of a segment of a control object, wherein motion between the first position and the second position was at least partially occluded from observation in a three-dimensional (3D) sensory space;
- receive two or more alternative interpretations of movement from the first position to the second position;
- estimate entropy or extent of motion involved in the alternative interpretations;
- select an alternative interpretation with lower entropy or extent of motion than other interpretations; and
- apply the alternative interpretation selected to predicting further positioning of the segment, and of any other segments of the control object from additional observations in the three-dimensional (3D) sensory space.
Specification