Pose-robust recognition
First Claim
1. A method comprising:
under control of one or more processors configured with executable instructions, determining a first pose category of at least a portion of a first image from among a plurality of pose categories;
transforming at least the portion of the first image to a second pose category corresponding to a second image to obtain a transformed image, wherein at least a portion of the transformed image is divided into a plurality of cells;
using multiple descriptors to extract features for each cell of the portion of the transformed image;
rescaling the results of the multiple descriptors determined for each cell;
concatenating the rescaled results to obtain a combined feature vector for each cell of the plurality of cells;
compressing the combined feature vectors to obtain a compact final descriptor representative of at least the portion of the transformed image; and
comparing the transformed image with the second image to determine, at least in part, whether the first image is a match with the second image.
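The steps of claim 1 can be read as a feature pipeline: divide the (pose-transformed) image into cells, run several descriptors per cell, rescale and concatenate them, then compress the result into one compact descriptor for matching. Below is a minimal Python/NumPy sketch under stated assumptions: the intensity-histogram and gradient-statistic descriptors, the L2 rescaling, the SVD-based compression, and the cosine threshold are all illustrative stand-ins — the claim names none of them.

```python
import numpy as np

def cell_descriptors(cell):
    """Two stand-in descriptors per cell (the claim only says 'multiple descriptors')."""
    hist, _ = np.histogram(cell, bins=8, range=(0.0, 1.0))   # intensity histogram
    gy, gx = np.gradient(cell)                               # simple gradient statistics
    grad = np.array([gx.mean(), gy.mean(), np.abs(gx).mean(), np.abs(gy).mean()])
    return [hist.astype(float), grad]

def rescale(v):
    """L2-normalize one descriptor so differently scaled descriptors are comparable."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def combined_vectors(img, grid=(4, 4)):
    """Divide the image into cells; per cell, rescale and concatenate the descriptors."""
    h, w = img.shape
    ch, cw = h // grid[0], w // grid[1]
    vecs = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            vecs.append(np.concatenate([rescale(d) for d in cell_descriptors(cell)]))
    return np.stack(vecs)                      # one combined feature vector per cell

def compact_descriptor(vecs, k=4):
    """Compress the per-cell vectors (PCA via SVD here, as one possible choice)."""
    centered = vecs - vecs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[:k].T).ravel()       # compact final descriptor

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img_a = rng.random((32, 32))                   # stand-in for the transformed first image
img_b = rng.random((32, 32))                   # stand-in for the second image
da = compact_descriptor(combined_vectors(img_a))
db = compact_descriptor(combined_vectors(img_b))
match = cosine_similarity(da, db) > 0.9        # threshold is illustrative
```

With a 4×4 grid and these two descriptors, each cell yields a 12-dimensional combined vector (8 histogram bins plus 4 gradient statistics), and compression keeps 4 components per cell.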
Abstract
Some implementations provide techniques and arrangements to address intrapersonal variations encountered during facial recognition. For example, some implementations transform at least a portion of an image from a first intrapersonal condition to a second intrapersonal condition to enable more accurate comparison with another image. Some implementations may determine a pose category of an input image and may modify at least a portion of the input image to a different pose category of another image for comparing the input image with the other image. Further, some implementations provide for compression of data representing at least a portion of the input image to decrease the dimensionality of the data.
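The abstract's pose-category step can be illustrated with a nearest-template classifier — a hypothetical choice, since the abstract does not say how categories are determined. The `templates` dictionary and the mirror-flip "transform" between left and right categories are assumptions for the sketch; a real system would use a learned or geometric warp.

```python
import numpy as np

def pose_category(img, templates):
    """Assign the image to the pose category whose mean template it is closest to
    (one simple way to choose among a plurality of pose categories)."""
    return min(templates, key=lambda name: np.linalg.norm(img - templates[name]))

def transform_pose(img, src, dst):
    """Toy transform between categories: mirror left<->right poses, else identity."""
    if {src, dst} == {"left", "right"}:
        return img[:, ::-1]
    return img

# hypothetical mean-image templates, one per pose category
templates = {
    "frontal": np.full((8, 8), 0.5),
    "left":    np.tril(np.ones((8, 8))),
    "right":   np.triu(np.ones((8, 8))),
}
probe = np.tril(np.ones((8, 8)))               # resembles the "left" template
cat = pose_category(probe, templates)
aligned = transform_pose(probe, cat, "right")  # move probe toward the gallery pose
```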
Claims (20 claims; 54 citations)
1. A method comprising: (independent claim; full text reproduced above under "First Claim")
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
12. Computer-readable storage media maintaining instructions executable by one or more processors to perform operations comprising:
receiving an image;
dividing a face in the image into a plurality of facial components;
using multiple descriptors to extract features from a facial component of the plurality of facial components;
rescaling the features extracted using the multiple descriptors;
concatenating the results of the rescaling to obtain a combined feature vector representative of the facial component; and
compressing results of the concatenating to obtain a compact final descriptor representative of the facial component.
Dependent claims: 13, 14, 15, 16
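Claim 12 applies the same extract–rescale–concatenate–compress pipeline per facial component rather than per grid cell. A sketch under stated assumptions: the component bounding boxes, the two descriptors, and the seeded random projection (a stand-in for a learned compression such as PCA) are all hypothetical.

```python
import numpy as np

# hypothetical component boxes (row_slice, col_slice) within an aligned face crop
COMPONENTS = {
    "left_eye":  (slice(8, 16),  slice(4, 20)),
    "right_eye": (slice(8, 16),  slice(28, 44)),
    "mouth":     (slice(32, 44), slice(12, 36)),
}

def descriptors(patch):
    """Two stand-in descriptors; the claim only says 'multiple descriptors'."""
    hist, _ = np.histogram(patch, bins=8, range=(0.0, 1.0))
    gy, gx = np.gradient(patch)
    return [hist.astype(float), np.array([np.abs(gx).mean(), np.abs(gy).mean()])]

def component_descriptor(face, box, k=5, seed=0):
    patch = face[box]
    parts = []
    for d in descriptors(patch):                    # extract with multiple descriptors
        n = np.linalg.norm(d)
        parts.append(d / n if n > 0 else d)         # rescale each descriptor
    combined = np.concatenate(parts)                # combined feature vector
    rng = np.random.default_rng(seed)               # fixed projection: stand-in for a
    proj = rng.standard_normal((combined.size, k))  # learned compression (e.g. PCA)
    return combined @ proj / np.sqrt(combined.size)  # compact final descriptor

face = np.random.default_rng(1).random((48, 48))
finals = {name: component_descriptor(face, box) for name, box in COMPONENTS.items()}
```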
17. A computing device comprising:
one or more processors in operable communication with computer-readable media;
a feature extraction module maintained on the computer-readable media and executed on the one or more processors to use multiple descriptors to extract features from a facial component of an input image;
a compression module maintained on the computer-readable media and executed on the one or more processors to compress the features extracted using the multiple descriptors to obtain a final descriptor representative of the facial component; and
a feature combination module maintained on the computer-readable media and executed on the one or more processors to rescale the features extracted using the multiple descriptors with an associated code number and concatenate the features extracted using the multiple descriptors to obtain a combined feature vector representative of at least a portion of the facial component.
Dependent claims: 18, 19, 20
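The three modules of claim 17 can be sketched as plain Python classes, assuming NumPy arrays as the data interchange. The modeling of the "associated code number" as a per-descriptor scalar weight, the particular descriptors, and the fixed-projection compression are assumptions, not the patent's specified implementation.

```python
import numpy as np

class FeatureExtractionModule:
    """Extracts features from a facial component using multiple descriptors."""
    def extract(self, component):
        hist, _ = np.histogram(component, bins=8, range=(0.0, 1.0))
        gy, gx = np.gradient(component)
        return [hist.astype(float),
                np.array([np.abs(gx).mean(), np.abs(gy).mean()])]

class FeatureCombinationModule:
    """Rescales each descriptor by its associated code number (modeled here as a
    per-descriptor weight, an assumption) and concatenates the results."""
    def __init__(self, code_numbers):
        self.code_numbers = code_numbers
    def combine(self, descriptors):
        scaled = []
        for d, c in zip(descriptors, self.code_numbers):
            n = np.linalg.norm(d)
            scaled.append(c * (d / n if n > 0 else d))
        return np.concatenate(scaled)

class CompressionModule:
    """Compresses the combined vector with a fixed projection (a stand-in for a
    learned compression such as PCA)."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
    def compress(self, vec):
        return vec @ self.proj

extractor = FeatureExtractionModule()
combiner = FeatureCombinationModule(code_numbers=[1.0, 2.0])
component = np.random.default_rng(2).random((16, 16))
combined = combiner.combine(extractor.extract(component))   # 8 bins + 2 stats = 10 dims
final = CompressionModule(in_dim=combined.size, out_dim=4).compress(combined)
```

Separating extraction, combination, and compression into distinct modules mirrors the claim's structure and lets each stage be swapped independently (e.g. replacing the projection with a trained PCA basis).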
Specification