Sign based human-machine interaction
Abstract
Communication is an important issue in man-to-robot interaction. Signs can be used to interact with machines by providing user instructions or commands. Embodiments of the present invention include human detection, human body parts detection, hand shape analysis, trajectory analysis, orientation determination, gesture matching, and the like. Many types of shapes and gestures are recognized in a non-intrusive manner based on computer vision. This sign-understanding technology makes a number of applications feasible, including remote control of home devices, mouse-less (and touch-less) operation of computer consoles, gaming, and man-robot communication to give instructions, among others. Active sensing hardware captures a stream of depth images at video rate, which is subsequently analyzed for information extraction.
20 Claims
1. A computer based method for man-machine interaction comprising the steps of:
receiving a digital image of a target, the digital image comprising depth data associated with image pixels, the depth data corresponding to the distance to the camera;
determining a location of a hand of the target within the pixels of the image based on the depth data associated with the pixels; and
matching a candidate shape of the hand of the target captured in the digital image with a stored shape image from a plurality of stored shape images by comparing one or more of the plurality of stored shape images with the candidate shape. (Dependent claims: 2, 3, 4)
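Claim 1's two depth-based steps — locating the hand from per-pixel depth data and matching its shape against stored shape images — can be sketched as follows. The depth-window rule (hand assumed to be the surface nearest the camera) and the Jaccard-overlap score are illustrative assumptions; the claim does not specify either technique.

```python
def locate_hand(depth_image, hand_range=(0.4, 0.8)):
    """Return the set of (row, col) pixels whose depth falls inside hand_range.

    Assumption (not from the claim): the hand is the object closest to the
    camera, so a simple depth window in metres isolates its pixels."""
    return {(r, c)
            for r, row in enumerate(depth_image)
            for c, depth in enumerate(row)
            if hand_range[0] <= depth <= hand_range[1]}


def match_shape(candidate, stored_shapes):
    """Pick the stored shape mask with the highest Jaccard overlap
    (intersection over union) with the candidate pixel set."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a or b else 0.0
    return max(stored_shapes, key=lambda name: jaccard(candidate, stored_shapes[name]))


# A 3x4 depth frame: background at ~2 m, hand pixels at ~0.5 m.
depth = [[2.0, 2.0, 0.5, 2.0],
         [2.0, 0.5, 0.5, 2.0],
         [2.0, 2.0, 2.0, 2.0]]
hand = locate_hand(depth)                 # {(0, 2), (1, 1), (1, 2)}
shapes = {"open": {(0, 2), (1, 1), (1, 2)},
          "fist": {(1, 1)}}
best = match_shape(hand, shapes)          # "open"
```

In a real system the stored shapes would be templates from a shape database and the comparison would tolerate translation and scale; the set-overlap score here only illustrates the matching step.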
5. A computer based method for man-machine interaction comprising the steps of:
receiving a digital image of a target;
pre-processing the digital image to determine a location of at least a hand and at least one of a head or a torso of the target;
matching a shape of the hand with one of a plurality of stored hand shape images to provide a matched shape associated with the digital image;
creating a candidate image data object including information associated with the location of the hand with respect to one of a head or a torso of the target and information indicating the matched shape associated with the digital image; and
matching a gesture captured by one or more digital images with a template gesture by comparing the candidate image data object with one or more stored gesture profiles for template gestures, the gesture profiles including hand to body part location information and hand shape information for comparing with the candidate image data object. (Dependent claims: 6, 7, 8, 9, 10)
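The candidate image data object and gesture-profile comparison of claim 5 can be sketched as below. The field names, the Euclidean-distance rule, and the `max_dist` threshold are assumptions for illustration; the claim leaves the comparison method open.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CandidateImageData:
    hand_offset: tuple   # hand location relative to the head centroid, (dx, dy)
    hand_shape: str      # label produced by the hand-shape matching step


def match_gesture(candidate, gesture_profiles, max_dist=1.5):
    """Return the name of the first gesture profile whose hand shape equals the
    candidate's and whose hand-to-head offset lies within max_dist, else None."""
    for name, profile in gesture_profiles.items():
        dx = candidate.hand_offset[0] - profile.hand_offset[0]
        dy = candidate.hand_offset[1] - profile.hand_offset[1]
        close = (dx * dx + dy * dy) ** 0.5 <= max_dist
        if close and candidate.hand_shape == profile.hand_shape:
            return name
    return None


# Hypothetical gesture profiles: hand-to-head offset plus expected hand shape.
profiles = {
    "wave":  CandidateImageData(hand_offset=(3.0, -2.0), hand_shape="open_palm"),
    "point": CandidateImageData(hand_offset=(4.0, 0.0),  hand_shape="index_finger"),
}
observed = CandidateImageData(hand_offset=(3.5, -2.0), hand_shape="open_palm")
match_gesture(observed, profiles)   # "wave"
```

Storing the hand location relative to the head or torso, rather than in absolute pixel coordinates, is what lets the same profile match regardless of where the person stands in the frame.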
11. The method of claim 10, wherein the hand shape information further comprises one or more attributes from the group consisting of number of fingers, finger identifiers, finger shapes, and finger orientations.
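The per-finger attributes listed in claim 11 can be carried on the hand-shape record, for example as below. The attribute names and the 15-degree orientation tolerance are illustrative assumptions, not taken from the claim.

```python
from dataclasses import dataclass


@dataclass
class HandShapeInfo:
    finger_count: int
    finger_ids: tuple      # e.g. ("index", "middle")
    orientations: dict     # finger identifier -> angle in degrees


def shapes_agree(a, b, angle_tol=15.0):
    """True when both hands show the same fingers and each finger's
    orientation differs by no more than angle_tol degrees."""
    if a.finger_count != b.finger_count or a.finger_ids != b.finger_ids:
        return False
    return all(abs(a.orientations[f] - b.orientations[f]) <= angle_tol
               for f in a.finger_ids)


two_up = HandShapeInfo(2, ("index", "middle"), {"index": 90.0, "middle": 85.0})
tilted = HandShapeInfo(2, ("index", "middle"), {"index": 80.0, "middle": 95.0})
shapes_agree(two_up, tilted)    # True: both fingers within 15 degrees
```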
12. A computer based method for human-machine interaction comprising:
matching a hand shape captured in one or more digital images with a plurality of hand shape patterns to determine a candidate gesture hand shape;
determining a trajectory curve of the hand from the one or more digital images;
matching the trajectory curve of the hand with a plurality of trajectory curve templates to determine a candidate gesture motion; and
determining a gesture corresponding to the candidate gesture hand shape and the candidate gesture motion by comparing the candidate gesture hand shape and the candidate gesture motion with a plurality of gesture profiles having associated gesture hand shapes and gesture motions. (Dependent claims: 13)
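Claim 12's trajectory-curve matching can be sketched as nearest-template matching after resampling both curves to a common number of points. The linear resampling scheme and the mean point-distance score are assumptions; the claim does not fix a comparison method.

```python
def resample(curve, n):
    """Linearly resample a polyline (list of (x, y) points) to n points."""
    if len(curve) == 1:
        return curve * n
    out = []
    step = (len(curve) - 1) / (n - 1)
    for i in range(n):
        t = i * step
        j = min(int(t), len(curve) - 2)   # segment index
        f = t - j                          # fraction along the segment
        (x0, y0), (x1, y1) = curve[j], curve[j + 1]
        out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return out


def match_trajectory(candidate, templates, n=16):
    """Return the template name whose resampled curve is nearest on average
    (mean Euclidean distance between corresponding points)."""
    cand = resample(candidate, n)
    def mean_dist(name):
        tmpl = resample(templates[name], n)
        return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in zip(cand, tmpl)) / n
    return min(templates, key=mean_dist)


templates = {"swipe_right": [(0.0, 0.0), (10.0, 0.0)],
             "swipe_up":    [(0.0, 0.0), (0.0, 10.0)]}
observed = [(0.0, 0.2), (5.0, 0.5), (10.0, 0.3)]
match_trajectory(observed, templates)   # "swipe_right"
```

Resampling makes the score insensitive to how many frames the gesture spanned; a production matcher would likely also normalize for scale and use a warping distance such as DTW.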
14. A system for human-machine interaction comprising:
an image data pre-processing module for receiving frames of image data comprising human image information, the image data pre-processing module configured to determine locations of one or more body parts of the human in the frames;
a shape matching module coupled to the image data pre-processing module for receiving the determined locations and configured to match a shape of at least one of the body parts by comparing information associated with the image data with stored shape profiles in a shape database; and
a gesture matching module coupled to the image data pre-processing module and the shape matching module for receiving shape and location information, the gesture matching module configured to match a gesture of the at least one body part by comparing gesture attributes of the at least one body part with gesture profiles stored in a gesture database, the gesture attributes comprising values associated with body part locations and shape information. (Dependent claims: 15, 16, 17, 18)
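A minimal sketch of claim 14's three coupled modules, wired in sequence. The dict-backed "databases" and the matching rules (centroid body-part location, pixel-count shape lookup, shape-keyed gesture lookup) are toy stand-ins; the claim specifies the module structure, not these rules.

```python
class ImagePreProcessor:
    """Determines body-part locations in a frame; here the 'hand' is simply
    the centroid of nonzero pixels, a stand-in for real body-part detection."""
    def locate(self, frame):
        pts = [(r, c) for r, row in enumerate(frame)
                      for c, v in enumerate(row) if v]
        n = len(pts)
        return {"hand": (sum(r for r, _ in pts) / n,
                         sum(c for _, c in pts) / n)}


class ShapeMatcher:
    """Matches a shape against a shape database (toy rule: active-pixel count)."""
    def __init__(self, shape_db):
        self.shape_db = shape_db
    def match(self, frame):
        return self.shape_db.get(sum(v != 0 for row in frame for v in row), "unknown")


class GestureMatcher:
    """Matches gesture attributes (shape plus location) against a gesture database."""
    def __init__(self, gesture_db):
        self.gesture_db = gesture_db
    def match(self, shape, location):
        return self.gesture_db.get(shape, "no_gesture")


frame = [[0, 1, 1],
         [0, 1, 0]]
location = ImagePreProcessor().locate(frame)["hand"]
shape = ShapeMatcher({3: "fist"}).match(frame)                 # "fist"
gesture = GestureMatcher({"fist": "grab"}).match(shape, location)  # "grab"
```

Keeping the three stages as separate coupled modules, as the claim does, lets the shape and gesture databases be swapped or extended without touching the pre-processing code.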
19. A computer readable medium for human-machine interaction comprising:
program instructions for receiving a digital image of a target;
program instructions for pre-processing the digital image to determine a location of at least a hand and at least one of a head or a torso of the target;
program instructions for matching a shape of the hand with one of a plurality of stored hand shape images to provide a matched shape associated with the digital image;
program instructions for creating a candidate image data object including information associated with the location of the hand with respect to one of a head or a torso of the target and information indicating the matched shape associated with the digital image; and
program instructions for matching a gesture captured by one or more digital images with a template gesture by comparing the candidate image data object with one or more stored gesture profiles for template gestures, the gesture profiles including hand to body part location information and hand shape information for comparing with the candidate image data object.
20. A computer based system for human-machine interaction comprising:
means for receiving a digital image of a target;
means for pre-processing the digital image to determine a location of at least a hand and at least one of a head or a torso of the target;
means for matching a shape of the hand with one of a plurality of stored hand shape images to provide a matched shape associated with the digital image;
means for creating a candidate image data object including information associated with the location of the hand with respect to one of a head or a torso of the target and information indicating the matched shape associated with the digital image; and
means for matching a gesture captured by one or more digital images with a template gesture by comparing the candidate image data object with one or more stored gesture profiles for the template gestures, the gesture profiles including hand to body part location information and hand shape information for comparing with the candidate image data object.
Specification