Generation of normalized 2D imagery and ID systems via 2D to 3D lifting of multifeatured objects
Abstract
A method of generating a normalized image of a target head from at least one source 2D image of the head. The method involves estimating a 3D shape of the target head and projecting the estimated 3D target head shape lit by normalized lighting into an image plane corresponding to a normalized pose. The estimation of the 3D shape of the target involves searching a library of 3D avatar models, and may include matching unlabeled feature points in the source image to feature points in the models, and the use of a head's plane of symmetry. Normalizing source imagery before providing it as input to traditional 2D identification systems enhances such systems' accuracy and allows systems to operate effectively with oblique poses and non-standard source lighting conditions.
99 Citations
31 Claims
1. A method of estimating a 3D shape of a target head from at least one source 2D image of the head, the method comprising:
providing a library of candidate 3D avatar models; and
searching among the candidate 3D avatar models to locate a best-fit 3D avatar, said searching involving for each 3D avatar model among the library of 3D avatar models computing a measure of fit between a 2D projection of that 3D avatar model and the at least one source 2D image, the measure of fit being based on at least one of (i) a correspondence between feature points in a 3D avatar and feature points in the at least one source 2D image, wherein at least one of the feature points in the at least one source 2D image is unlabeled, and (ii) a correspondence between feature points in a 3D avatar and their reflections in an avatar plane of symmetry, and feature points in the at least one source 2D image, wherein the best-fit 3D avatar is the 3D avatar model among the library of 3D avatar models that yields a best measure of fit and wherein the estimate of the 3D shape of the target head is derived from the best-fit 3D avatar. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
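The search of claim 1 can be sketched as follows. This is an illustrative simplification, not the patent's implementation: the orthographic camera model, the nearest-point matching of unlabeled feature points, and all function names are assumptions.

```python
import numpy as np

def project(points_3d, pose, scale=1.0):
    """Orthographic projection of 3D avatar feature points into the image
    plane after a rotation `pose` (3x3). The camera model is an assumed
    simplification; the claim leaves it unspecified."""
    rotated = points_3d @ pose.T
    return scale * rotated[:, :2]

def fit_measure(avatar_pts_3d, image_pts_2d, pose):
    """Measure of fit between a 2D projection of an avatar and unlabeled
    source feature points: each image point is matched to its nearest
    projected avatar point, so no labels are needed."""
    proj = project(avatar_pts_3d, pose)
    dists = np.linalg.norm(image_pts_2d[:, None, :] - proj[None, :, :], axis=2)
    return dists.min(axis=1).sum()

def best_fit_avatar(library, image_pts_2d, pose):
    """Return the index of the avatar in the library whose 2D projection
    yields the best (lowest) measure of fit."""
    scores = [fit_measure(a, image_pts_2d, pose) for a in library]
    return int(np.argmin(scores))
```

The symmetry correspondence of clause (ii) would add the reflections of the avatar points in the plane of symmetry to `avatar_pts_3d` before projection.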
9. A method of estimating a 3D shape of a target head from at least one source 2D image of the head, the method comprising:
providing a library of candidate 3D avatar models; and
searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit 3D avatar, said searching involving, for each 3D avatar model among the library of 3D avatar models and each of its deformations, computing a measure of fit between a 2D projection of that deformed 3D avatar model and the at least one source 2D image, the measure of fit being based on at least one of (i) a correspondence between feature points in a deformed 3D avatar and feature points in the at least one source 2D image, wherein at least one of the feature points in the at least one source 2D image is unlabeled, and (ii) a correspondence between feature points in a deformed 3D avatar and their reflections in an avatar plane of symmetry, and feature points in the at least one source 2D image, wherein the best-fit deformed 3D avatar is the deformed 3D avatar model that yields a best measure of fit and wherein the estimate of the 3D shape of the target head is derived from the best-fit deformed 3D avatar. - View Dependent Claims (10, 11, 12, 13)
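Claim 9 extends the search over deformations of each avatar. One common way to parameterize deformations is a linear shape model (base shape plus weighted deformation modes); that model, the grid search, and the function names below are assumptions for illustration only.

```python
import numpy as np

def deform(avatar, modes, coeffs):
    """Apply a linear deformation: base shape (N x 3) plus a weighted sum
    of deformation modes (K x N x 3). The linear model is an assumption;
    the claim covers deformations generally."""
    return avatar + np.tensordot(coeffs, modes, axes=1)

def search_deformations(avatar, modes, target_pts, coeff_grid):
    """Grid-search deformation coefficients for the best fit to labeled
    3D target points (labeled correspondence kept for brevity)."""
    best_err, best_c = np.inf, None
    for c in coeff_grid:
        deformed = deform(avatar, modes, np.asarray(c, dtype=float))
        err = np.linalg.norm(deformed - target_pts)
        if err < best_err:
            best_err, best_c = err, np.asarray(c, dtype=float)
    return best_c, best_err
```

In practice the coefficients would be found by continuous optimization rather than a grid, and the fit would be evaluated through the 2D projection as in claim 1.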
14. A method of generating a geometrically normalized 3D representation of a target head from at least one source 2D projection of the head, the method comprising:
providing a library of candidate 3D avatar models; and
searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit 3D avatar, said searching involving, for each 3D avatar model among the library of 3D avatar models and each of its deformations, computing a measure of fit between a 2D projection of that deformed 3D avatar model and the at least one source 2D image, the deformations corresponding to permanent and non-permanent features of the target head, wherein the best-fit deformed 3D avatar is the deformed 3D avatar model that yields a best measure of fit; and
generating a geometrically normalized 3D representation of the target head from the best-fit deformed 3D avatar by removing deformations corresponding to non-permanent features of the target head. - View Dependent Claims (15, 16, 17, 18, 19, 20, 21, 22)
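The normalization step of claim 14 keeps deformations corresponding to permanent features (e.g. bone structure) and removes non-permanent ones (e.g. expression). Under the same assumed linear shape model, that amounts to zeroing the non-permanent coefficients; the mode labels here are hypothetical.

```python
import numpy as np

def normalize_geometry(base, modes, coeffs, permanent_mask):
    """Rebuild the head keeping only deformation modes flagged as
    permanent, zeroing non-permanent ones. `permanent_mask` is a boolean
    array over the K modes; which modes count as permanent is an
    assumption of this sketch."""
    kept = np.where(permanent_mask, coeffs, 0.0)
    return base + np.tensordot(kept, modes, axes=1)
```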
23. A method of estimating a 3D shape of a target head from source 3D feature points of the head, the method comprising:
providing a library of candidate 3D avatar models;
searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit deformed avatar, the best-fit deformed avatar having a best measure of fit to the source 3D feature points, the measure of fit being based on a correspondence between feature points in a deformed 3D avatar and the source 3D feature points, wherein the estimate of the 3D shape of the target head is derived from the best-fit deformed avatar. - View Dependent Claims (24, 25, 26, 27)
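When the source feature points are themselves 3D, as in claim 23, and the deformations are modeled linearly, the best-fit coefficients have a closed-form least-squares solution. The linear model and the labeled correspondence are assumptions of this sketch.

```python
import numpy as np

def fit_coeffs_3d(base, modes, source_pts):
    """Least-squares deformation coefficients matching labeled source 3D
    feature points under an assumed linear shape model: base (N x 3),
    modes (K x N x 3). Returns the coefficients and the residual norm,
    which serves as the measure of fit."""
    A = modes.reshape(modes.shape[0], -1).T          # (3N, K) design matrix
    b = (source_pts - base).ravel()                  # (3N,) target offsets
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.linalg.norm(A @ coeffs - b)
    return coeffs, residual
```

Running this fit for every avatar in the library and keeping the one with the smallest residual would locate the best-fit deformed avatar of the claim.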
28. A method of estimating a 3D shape of a target head from at least one source 2D image of the head, the method comprising:
providing a library of candidate 3D avatar models; and
searching among the candidate 3D avatar models and among deformations of the candidate 3D avatar models to locate a best-fit deformed avatar, the best-fit deformed avatar having a 2D projection with a best measure of fit to the at least one source 2D image, the measure of fit being based on a correspondence between dense imagery of a projected 3D avatar and dense imagery of the at least one source 2D image, wherein at least a portion of the dense imagery of the projected avatar is generated using a mirror symmetry of the candidate avatars, wherein the estimate of the 3D shape of the target head is derived from the best-fit deformed avatar.
29. A method of positively identifying at least one source image of a target head with a member of a database of candidate facial images, the method comprising:
providing a library of 3D avatar models;
searching among the 3D avatar models and among deformations of the candidate 3D avatar models to locate a source best-fit deformed avatar, the source best-fit deformed avatar having a 2D projection with a best first measure of fit to the at least one source image;
for each member of the database of candidate facial images, searching among the library of 3D avatar models and their deformations to locate a candidate best-fit deformed avatar having a 2D projection with a best second measure of fit to the member of the database of candidate facial images;
positively identifying the target head with a member of the database of candidate facial images if a third measure of fit between the source best-fit deformed avatar and the member candidate best-fit deformed avatar exceeds a predetermined threshold. - View Dependent Claims (30, 31)
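The identification logic of claim 29 compares fitted avatars rather than raw images: a third measure of fit between the source's best-fit avatar and each candidate's best-fit avatar is tested against a threshold. The inverse-distance similarity below is an illustrative stand-in for that third measure, not the patent's.

```python
import numpy as np

def avatar_similarity(avatar_a, avatar_b):
    """Third measure of fit between two fitted avatars (N x 3 vertex
    arrays): an inverse shape distance, chosen here for illustration."""
    return 1.0 / (1.0 + np.linalg.norm(avatar_a - avatar_b))

def identify(source_avatar, candidate_avatars, threshold):
    """Positively identify the source with the first candidate whose
    fitted avatar exceeds the similarity threshold; return its index,
    or None if no candidate qualifies."""
    for i, candidate in enumerate(candidate_avatars):
        if avatar_similarity(source_avatar, candidate) > threshold:
            return i
    return None
```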
Specification