Gesture recognition method and interactive input system employing same
First Claim
1. A gesture recognition method comprising:
capturing images using imaging sensors having fields of view aimed generally across or at an input surface from different vantages;
processing the captured images to detect a pair of hands brought into contact with said input surface and for each detected hand calculating a bounding box, the calculated bounding box surrounding either a cluster of proximate touch points resulting from multiple fingers of the hand being in contact with said input surface or a single large touch region exceeding a threshold size resulting from a palm region of the hand being in contact with said input surface;
creating an observation for each bounding box in each captured image, each observation in each captured image defined by the area formed between two straight lines, one line of which extends from the focal point of the imaging sensor that captured the image and crosses the right edge of the bounding box and the other line of which extends from the focal point of the imaging sensor that captured the image and crosses the left edge of the bounding box;
in response to relative movement of the hands over the input surface, recognizing a gesture based on corresponding relative movement of the created observations;
executing a command associated with the recognized gesture; and
updating an image displayed on said input surface in accordance with the executed command.
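The "observation" in the claim is the planar region between two rays that extend from a sensor's focal point and cross the left and right edges of a hand's bounding box. A minimal geometric sketch of that idea, not the patented implementation — the corner-based angle computation, coordinate conventions, and the assumption that the box lies within a half-plane of the sensor (so the angles do not wrap around atan2's branch cut) are all illustrative:

```python
import math

def observation_angles(focal_point, bbox):
    """Angular interval (radians) subtended at a sensor's focal point by an
    axis-aligned bounding box (min_x, min_y, max_x, max_y) on the surface.

    The bounding rays toward the box's extreme corners approximate the two
    lines crossing the box's left and right edges in the claim language.
    """
    fx, fy = focal_point
    min_x, min_y, max_x, max_y = bbox
    corners = [(min_x, min_y), (min_x, max_y), (max_x, min_y), (max_x, max_y)]
    angles = [math.atan2(cy - fy, cx - fx) for cx, cy in corners]
    return min(angles), max(angles)
```

For a sensor at the origin looking at a box spanning y = -1 to 1 at x = 1, this yields the interval from -45 to +45 degrees; tracking how such intervals move and overlap across frames and sensors is what the later claim steps build on.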
Abstract
A gesture recognition method comprises capturing images, processing the images to identify at least two clusters of touch points associated with at least two pointers, recognizing a gesture based on motion of the clusters, and updating a display in accordance with the recognized gesture.
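The abstract's first processing step — grouping touch points into per-hand clusters and boxing each cluster — can be sketched as follows. This is a toy illustration, not the patented implementation; the greedy proximity grouping, the `max_gap` threshold, and the point format are all assumptions:

```python
def cluster_touch_points(points, max_gap=60.0):
    """Greedily group (x, y) touch points: a point joins the first cluster
    containing a point within max_gap of it, else starts a new cluster."""
    clusters = []
    for p in points:
        placed = False
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= max_gap ** 2
                   for q in c):
                c.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters

def bounding_box(cluster):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a cluster."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    return (min(xs), min(ys), max(xs), max(ys))
```

Two fingertips near the origin and two near (200, 200) would thus produce two clusters, each with its own bounding box, corresponding to the two hands in the claims.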
14 Claims
1. A gesture recognition method comprising:
capturing images using imaging sensors having fields of view aimed generally across or at an input surface from different vantages;
processing the captured images to detect a pair of hands brought into contact with said input surface and for each detected hand calculating a bounding box, the calculated bounding box surrounding either a cluster of proximate touch points resulting from multiple fingers of the hand being in contact with said input surface or a single large touch region exceeding a threshold size resulting from a palm region of the hand being in contact with said input surface;
creating an observation for each bounding box in each captured image, each observation in each captured image defined by the area formed between two straight lines, one line of which extends from the focal point of the imaging sensor that captured the image and crosses the right edge of the bounding box and the other line of which extends from the focal point of the imaging sensor that captured the image and crosses the left edge of the bounding box;
in response to relative movement of the hands over the input surface, recognizing a gesture based on corresponding relative movement of the created observations;
executing a command associated with the recognized gesture; and
updating an image displayed on said input surface in accordance with the executed command.
View Dependent Claims (3, 4, 5, 6, 7)
2. An interactive input system comprising:
an input surface;
at least two imaging sensors having fields of view aimed generally across or at said input surface from different vantages; and
processing structure communicating with said at least two imaging sensors, said processing structure being configured to:
analyze images captured by said at least two imaging sensors to detect multiple hands brought into contact with said input surface,
for each detected hand, calculate a bounding box, the bounding box surrounding either a cluster of proximate touch points resulting from multiple fingers of the hand being in contact with said input surface or a single large touch region exceeding a threshold size resulting from a palm region of the hand being in contact with said input surface,
create an observation for each bounding box in each captured image, wherein each observation in each captured image is defined by the area formed between two straight lines, one line of which extends from the focal point of the imaging sensor that captured the image and crosses the right edge of the bounding box and the other line of which extends from the focal point of the imaging sensor that captured the image and crosses the left edge of the bounding box,
in response to relative movement of the hands over the input surface, recognize a gesture based on corresponding relative movement of the created observations,
execute a command associated with said recognized gesture, and
update an image displayed on said input surface in accordance with the executed command.
View Dependent Claims (8, 9, 10, 11, 12, 13, 14)
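Both independent claims recognize a gesture from the relative movement of the created observations across frames. A toy sketch of one such decision, assuming the two hands' separation has already been reduced to a single scalar per frame (for example, the gap between the angular midpoints of their observations); the labels, threshold, and this reduction are illustrative assumptions, not the patent's method:

```python
def classify_gesture(sep_before, sep_after, threshold=0.05):
    """Toy classifier comparing hand separation across two frames:
    growing separation -> "spread", shrinking -> "pinch", else "steady"."""
    delta = sep_after - sep_before
    if delta > threshold:
        return "spread"   # hands moving apart, e.g. mapped to zoom in
    if delta < -threshold:
        return "pinch"    # hands moving together, e.g. mapped to zoom out
    return "steady"       # below threshold; no scaling gesture recognized
```

A real system would track observations per sensor over many frames and map each recognized gesture to a command that updates the displayed image, as the final claim steps describe.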
Specification