Pose-aligned networks for deep attribute modeling
Abstract
Technology is disclosed for inferring human attributes from images of people. The attributes can include, for example, gender, age, hair, and/or clothing. The technology uses part-based models, e.g., Poselets, to locate multiple normalized part patches from an image. The normalized part patches are fed into trained convolutional neural networks to generate feature data. Each convolutional neural network applies multiple stages of convolution operations to one part patch to generate a set of fully connected feature data. The feature data for all part patches are concatenated and then fed into multiple trained classifiers (e.g., linear support vector machines) to predict attributes of the image.
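The pipeline described in the abstract can be sketched end to end as follows. This is a minimal illustration, not the patented implementation: the patch resolution (56x56), the 64-dimensional feature size, the use of a fixed random projection as a stand-in for each trained part CNN, and the random classifier weights are all assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for three trained part CNNs: each maps a 56x56x3 part patch to a
# 64-dim feature vector via a fixed random projection (illustrative only).
projections = [rng.standard_normal((64, 56 * 56 * 3)) for _ in range(3)]

def cnn_features(net_index, patch):
    # One "network" per part patch, as in the claimed method.
    return projections[net_index] @ patch.ravel()

# Three normalized part patches (e.g., head, torso, legs).
patches = [rng.random((56, 56, 3)) for _ in range(3)]

# One feature set per patch, concatenated across all patches.
features = np.concatenate([cnn_features(i, p) for i, p in enumerate(patches)])

# Linear decision function (SVM-style) for one attribute, e.g., "gender";
# the weights here are random placeholders for a trained classifier.
w = rng.standard_normal(features.shape[0])
score = float(w @ features)
present = score > 0  # attribute predicted present if the score is positive
```

The concatenation step is what lets a single attribute classifier see evidence from every detected body part at once.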
33 Citations
12 Claims
1. A method, performed by a computing device having one or more processing units, for recognizing human attributes from digital images, comprising:

locating, by the one or more processing units, at least two part patches from a digital image, wherein each of the two part patches comprises at least a portion of the digital image corresponding to a recognized human body portion or pose, wherein said locating comprises:

scanning the digital image using multiple windows having various sizes, and

comparing scanned portions of the digital image confined by the windows with multiple training patches from a database, wherein the training patches are annotated with keypoints of body parts and the database contains the training patches that form a cluster in a 3D configuration space corresponding to a recognized human body portion or pose;

providing each of the part patches as an input to one of multiple convolutional neural networks;

for at least two selected convolutional neural networks among the multiple convolutional neural networks, applying multiple stages of convolution operations to a part patch associated with the selected convolutional neural networks to generate a set of feature data as an output of the selected convolutional neural networks;

concatenating the sets of feature data from the at least two convolutional neural networks to generate a set of concatenated feature data;

feeding the set of concatenated feature data into a classification engine for predicting a human attribute; and

determining, based on a result provided by the classification engine, whether a human attribute exists in the digital image.
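The scan-and-compare step of the locating limitation can be sketched as below. The window sizes, stride, sum-of-squared-differences similarity, and the naive strided downsample to the training-patch resolution are all assumptions for illustration; the claim does not recite a particular metric.

```python
import numpy as np

rng = np.random.default_rng(1)

image = rng.random((96, 96))                                  # grayscale stand-in
training_patches = [rng.random((16, 16)) for _ in range(5)]   # one poselet cluster

def best_match(image, window_sizes=(16, 32), stride=8):
    """Scan with multiple window sizes; score each window against the
    cluster's training patches by sum of squared differences after a
    strided downsample to 16x16. Returns the best-scoring window."""
    best = None
    for size in window_sizes:
        step = size // 16                   # downsample factor to 16x16
        for y in range(0, image.shape[0] - size + 1, stride):
            for x in range(0, image.shape[1] - size + 1, stride):
                win = image[y:y + size:step, x:x + size:step]
                d = min(np.sum((win - tp) ** 2) for tp in training_patches)
                if best is None or d < best[0]:
                    best = (d, y, x, size)
    return best

dist, y, x, size = best_match(image)
part_patch = image[y:y + size, x:x + size]   # the located part patch
```

A real detector would also apply a score threshold so that images with no matching body part yield no patch.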
2. The method of claim 1, wherein one of the convolution operations uses multiple filters having dimensions of more than one.
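A convolution with a filter having dimensions of more than one, as recited in claim 2, can be illustrated with a plain 2-D valid convolution; the 5x5 input and the particular 2x2 filter are example values chosen here.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as is conventional in
    CNNs) of a 2-D input x with a 2-D filter k."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(25.0).reshape(5, 5)
k = np.array([[1.0, 0.0], [0.0, -1.0]])  # a 2x2 filter (dimensions > 1)
y = conv2d(x, k)                         # each output is x[i,j] - x[i+1,j+1]
```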
3. The method of claim 1, wherein the filters are capable of detecting spatially local correlations present in the part patches.
4. The method of claim 1, further comprising:
for the at least two selected convolutional neural networks among the multiple convolutional neural networks, applying a normalization operation to the part patch after one of the multiple stages of convolution operations has been applied to the part patch.
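The claim does not specify which normalization operation follows a convolution stage; AlexNet-style local response normalization, shown below, is one common choice and is used here purely as an example, with illustrative constants.

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Local response normalization across channels; a has shape
    (channels, height, width). Each channel is divided by a term
    computed from the squared activations of its neighbours."""
    c = a.shape[0]
    out = np.empty_like(a)
    for i in range(c):
        lo, hi = max(0, i - n // 2), min(c, i + n // 2 + 1)
        denom = (k + alpha * np.sum(a[lo:hi] ** 2, axis=0)) ** beta
        out[i] = a[i] / denom
    return out

feature_maps = np.ones((8, 4, 4))          # output of one convolution stage
normed = local_response_norm(feature_maps)
```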
5. The method of claim 1, further comprising:
for the at least two selected convolutional neural networks among the multiple convolutional neural networks, applying a max-pooling operation to the part patch after one of the multiple stages of convolution operations has been applied to the part patch.
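The max-pooling operation of claim 5 can be sketched as follows; the 2x2 window, stride of 2, and the example feature map are assumptions, not values recited in the claim.

```python
import numpy as np

def max_pool(x, size=2, stride=2):
    """Max-pooling over a 2-D feature map: each output cell holds the
    maximum of one size x size window."""
    h = (x.shape[0] - size) // stride + 1
    w = (x.shape[1] - size) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out

fm = np.array([[1.0, 2.0, 5.0, 1.0],
               [3.0, 4.0, 1.0, 2.0],
               [0.0, 1.0, 9.0, 8.0],
               [2.0, 1.0, 7.0, 6.0]])
pooled = max_pool(fm)   # -> [[4., 5.], [2., 9.]]
```

Pooling after a convolution stage gives the part networks a degree of tolerance to small translations within each patch.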
6. The method of claim 1, further comprising:
resizing the part patches to a common resolution, wherein the common resolution is a resolution required for inputs of the convolutional neural networks.
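Resizing differently sized part patches to the networks' common input resolution can be sketched with a nearest-neighbour resize; the 56x56 target resolution and the resampling method are assumptions made here (a real system would likely use bilinear interpolation).

```python
import numpy as np

def resize_nearest(patch, out_h, out_w):
    """Nearest-neighbour resize of a 2-D patch to out_h x out_w."""
    rows = np.arange(out_h) * patch.shape[0] // out_h
    cols = np.arange(out_w) * patch.shape[1] // out_w
    return patch[np.ix_(rows, cols)]

rng = np.random.default_rng(2)
patches = [rng.random((h, h)) for h in (20, 50, 64)]   # patches of varying size
resized = [resize_nearest(p, 56, 56) for p in patches]  # common CNN input size
```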
7. The method of claim 1, further comprising:
breaking down the part patches into three layers based on the red, green and blue channels of the part patches.
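Breaking a patch into three layers by colour channel, per claim 7, amounts to splitting the RGB planes so they can serve as the three input planes of a convolutional network; the patch size below is illustrative.

```python
import numpy as np

patch = np.random.default_rng(3).random((56, 56, 3))  # H x W x RGB part patch

# Break the patch into three layers, one per colour channel.
red, green, blue = (patch[:, :, c] for c in range(3))
layers = np.stack([red, green, blue])   # channels-first: shape (3, 56, 56)
```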
8. The method of claim 1, further comprising:
presenting, through an output interface of the computing device, a signal indicating whether the human attribute exists in the digital image.
9. The method of claim 1, further comprising:

locating a whole-body portion from the digital image, wherein the whole-body portion covers an entire human body depicted in the digital image;

feeding the whole-body portion into a deep neural network to generate a set of whole-body feature data; and

incorporating the set of whole-body feature data into the set of concatenated feature data.
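Incorporating whole-body feature data into the concatenated set, as in claim 9, is a further concatenation; the 64-dimensional vectors and the three-part configuration below are example values, with random arrays standing in for network outputs.

```python
import numpy as np

rng = np.random.default_rng(4)

part_features = [rng.standard_normal(64) for _ in range(3)]  # per-part CNN outputs
whole_body_features = rng.standard_normal(64)                # whole-body net output

# Append the whole-body feature vector to the concatenated part features.
concatenated = np.concatenate(part_features + [whole_body_features])
```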
10. The method of claim 1, wherein the result provided by the classification engine comprises a prediction score indicating the likelihood of the human attribute existing in the digital image.
11. The method of claim 1, wherein the human attribute comprises gender, age, race, hair or clothing.
12. The method of claim 1, wherein the classification engine comprises a linear support vector machine that is trained using training data associated with the human attribute.
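Training a linear support vector machine on attribute-labelled feature data, as in claim 12, can be sketched with primal hinge-loss subgradient descent. The toy separable data, dimensions, learning rate, and regularization constant are assumptions; a production system would use an established solver.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy training data for one attribute (e.g., "has long hair"): concatenated
# feature vectors with labels in {-1, +1}.
X = np.vstack([rng.normal(+1.0, 0.5, (20, 8)),
               rng.normal(-1.0, 0.5, (20, 8))])
y = np.array([1.0] * 20 + [-1.0] * 20)

def train_linear_svm(X, y, lam=0.01, lr=0.01, epochs=200):
    """Minimize the regularized hinge loss of a linear SVM by
    subgradient descent on the primal objective."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        active = y * (X @ w) < 1                 # samples violating the margin
        grad = lam * w
        if active.any():
            grad = grad - (y[active, None] * X[active]).mean(axis=0)
        w -= lr * grad
    return w

w = train_linear_svm(X, y)
scores = X @ w                                   # per-image prediction scores
accuracy = np.mean(np.sign(scores) == y)
```

One such classifier would be trained per attribute, each consuming the same concatenated feature data.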
Specification