Enhanced detection of circular engagement gesture
Abstract
Enhanced detection of a circular engagement gesture is described, in which a shape is defined within motion data and the motion data is sampled at points aligned with the defined shape. Whether a moving object is performing a gesture correlating to the defined shape is determined based on a pattern exhibited by the sampled motion data. An application is controlled upon determining that the moving object is performing the gesture.
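The motion data referenced throughout the claims is organized as a motion history map: a per-pixel record of when motion was last observed. A minimal sketch of how such a map could be maintained from successive frames is shown below (using NumPy; the function name, threshold, and toy frames are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def update_motion_history(history, prev_frame, frame, t, threshold=30):
    """Record timestamp t at every pixel whose intensity changed
    by more than `threshold` between the two frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    history[diff > threshold] = t
    return history

# Toy example: a 4x4 "image" in which a single pixel changes at t=1.0.
history = np.zeros((4, 4))
prev = np.zeros((4, 4), dtype=np.uint8)
cur = prev.copy()
cur[2, 3] = 255
history = update_motion_history(history, prev, cur, t=1.0)
```

Because each cell stores a time rather than a binary motion flag, sampling the map later recovers *when* the object passed through each sampled location, which is what the time-comparison step in the claims relies on.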
31 Claims
1. A computer-implemented method, comprising:
    receiving motion data, wherein the motion data comprises multiple images captured of an object over a period of time;
    generating a motion history map from the received motion data;
    defining a plurality of points within the motion history map, wherein:
        each point of the plurality of points corresponds to a point in time,
        the plurality of points are positioned within the motion history map and aligned with a shape stored prior to when the multiple images are captured, the shape inscribed within the boundaries of the motion data, and
        a quantity of the plurality of points is based, at least in part, on a size of the shape;
    sampling the motion history map at the plurality of points;
    determining that the object is performing a gesture corresponding to the shape based on the sampled motion history map by, for each point of the plurality of points, comparing a time associated with the sampled motion history map at that point with an expected time for that point; and
    controlling an application at least partially based on determining that the object is performing the gesture.
Dependent claims: 2-13.
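The steps of claim 1 can be sketched as follows, for the circular gesture named in the title (a hypothetical implementation, not the patent's own code; all names, the point-density constant, and the tolerance are illustrative assumptions). Sample points are placed on a circle inscribed in the motion data, the number of points scales with the circle's size, and the gesture is recognized when the timestamp sampled at each point agrees with the expected time for that point:

```python
import numpy as np

def circle_points(center, radius, points_per_pixel=0.5):
    """Define points aligned with an inscribed circle; the quantity of
    points is based on the size of the shape, as in the claim."""
    n = max(8, int(2 * np.pi * radius * points_per_pixel))
    angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    xs = (center[0] + radius * np.cos(angles)).round().astype(int)
    ys = (center[1] + radius * np.sin(angles)).round().astype(int)
    return list(zip(xs, ys)), angles

def matches_circular_gesture(history, center, radius, period, t_now, tol=0.2):
    """For each point, compare the time sampled from the motion history
    map with the expected time for that point on the circle."""
    pts, angles = circle_points(center, radius)
    # Expected: the object swept the circle once over `period` seconds,
    # finishing at t_now, so earlier angles carry older timestamps.
    expected = t_now - period * (1.0 - angles / (2 * np.pi))
    sampled = np.array([history[y, x] for x, y in pts])
    return bool(np.all(np.abs(sampled - expected) <= tol * period))

# Synthetic motion history map in which an object traced the circle
# exactly on schedule over one second.
H = np.zeros((64, 64))
center, radius, period, t_now = (32, 32), 20, 1.0, 1.0
pts, angles = circle_points(center, radius)
for (x, y), a in zip(pts, angles):
    H[y, x] = t_now - period * (1.0 - a / (2 * np.pi))
ok = matches_circular_gesture(H, center, radius, period, t_now)  # True
```

An empty (motionless) history map fails the same check, since the sampled times stay at zero while the expected times climb toward t_now.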
14. A system comprising:
    a processor; and
    a memory communicatively coupled with and readable by the processor and having stored therein processor-readable instructions which, when executed by the processor, cause the processor to:
        receive motion data, wherein the motion data comprises multiple images captured of an object over a period of time;
        generate a motion history map from the received motion data;
        define a plurality of points within the motion history map, wherein:
            each point of the plurality of points corresponds to a point in time,
            the plurality of points are positioned within the motion history map and aligned with a shape stored prior to when the multiple images are captured, the shape inscribed within the boundaries of the motion data, and
            a quantity of the plurality of points is based, at least in part, on a size of the shape;
        sample the motion history map at the plurality of points;
        determine that the object is performing a gesture correlated to the shape based on the sampled motion history map by, for each point of the plurality of points, comparing a time associated with the sampled motion history map at that point with an expected time for that point; and
        control an application at least partially based on determining that the object is performing the gesture.
Dependent claims: 15-26.
27. An apparatus comprising:
    means for receiving motion data, wherein the motion data comprises multiple images captured of an object over a period of time;
    means for generating a motion history map from the received motion data;
    means for defining a plurality of points within the motion history map, wherein:
        each point of the plurality of points corresponds to a point in time,
        the plurality of points are positioned within the motion history map and aligned with a shape stored prior to when the multiple images are captured, the shape inscribed within the boundaries of the motion data, and
        a quantity of the plurality of points is based, at least in part, on a size of the shape;
    means for sampling the motion history map at the plurality of points;
    means for determining that the object is performing a gesture corresponding to the shape based on the sampled motion history map by, for each point of the plurality of points, comparing a time associated with the sampled motion history map at that point with an expected time for that point; and
    means for controlling an application at least partially based on determining that the object is performing the gesture.
Dependent claims: 28-30.
31. A non-transitory computer-readable storage medium encoded with processor-readable instructions that, when executed, cause a processing device to perform operations comprising:
    receiving motion data, wherein the motion data comprises multiple images captured of an object over a period of time;
    generating a motion history map from the received motion data;
    defining a plurality of points within the motion history map, wherein:
        each point of the plurality of points corresponds to a point in time,
        the plurality of points are positioned within the motion history map and aligned with a shape stored prior to when the multiple images are captured, the shape inscribed within the boundaries of the motion data, and
        a quantity of the plurality of points is based, at least in part, on a size of the shape;
    sampling the motion history map at the plurality of points;
    determining that the object is performing a gesture corresponding to the shape based on the sampled motion history map by, for each point of the plurality of points, comparing a time associated with the sampled motion history map at that point with an expected time for that point; and
    controlling an application at least partially based on determining that the object is performing the gesture.
Specification