3D face reconstruction from 2D images
First Claim
1. A method for automatically reconstructing a 3D face from a plurality of 2D images of a human face, comprising:
deriving initial camera position estimates using prior face knowledge of a generic face;
selecting image pairs representing corresponding images from the plurality of 2D images that define similar poses and extracting sparse feature points representing perimeters of a face or locations of facial features for each of said image pairs;
refining said initial camera position estimates and said sparse feature points;
extracting dense 3D point clouds from said image pairs by performing dense feature detection and matching across said image pairs using a purely data-driven approach that does not use prior face knowledge or generic faces, wherein said dense feature matching identifies additional features beyond the sparse feature points;
merging said dense 3D point clouds into a single 3D cloud;
removing outliers from said single 3D cloud to form a cleaned 3D point cloud without using prior face knowledge or generic faces;
fitting a connected surface to the cleaned 3D point cloud without using prior face knowledge or generic faces; and
texture mapping surface detail and color information of a subject's face onto said connected surface.
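The outlier-removal step above is, per the claim, purely data-driven. As a minimal sketch of one such data-driven filter — a statistical k-nearest-neighbour distance test, used here as a simpler stand-in for the tensor voting mentioned in the abstract; the function name, neighbourhood size, and threshold are illustrative assumptions, not the patent's method:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal for an (N, 3) point cloud.

    Keeps points whose mean distance to their k nearest neighbours
    is within std_ratio standard deviations of the cloud-wide mean.
    """
    # Pairwise distance matrix (fine for small clouds; O(N^2) memory).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    knn_mean = d[:, 1:k + 1].mean(axis=1)  # column 0 is the zero self-distance
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]

rng = np.random.default_rng(0)
cloud = rng.normal(scale=0.05, size=(200, 3))    # dense face-like cluster
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])    # one gross reconstruction outlier
cleaned = remove_outliers(cloud)                 # the stray point is filtered out
```

Note that nothing in the filter assumes a face: the threshold is derived entirely from the statistics of the cloud itself, matching the claim's "without using prior face knowledge or generic faces" limitation.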
Abstract
A 3D face reconstruction technique using 2D images, such as photographs of a face, is described. Prior face knowledge or a generic face is used to extract sparse 3D information from the images and to identify image pairs. Bundle adjustment is carried out to determine more accurate 3D camera positions, image pairs are rectified, and dense 3D face information is extracted without using the prior face knowledge. Outliers are removed, e.g., by using tensor voting. A 3D surface is extracted from the dense 3D information and surface detail is extracted from the images.
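The dense 3D extraction described in the abstract rests on triangulating matched points from image pairs once camera positions are known. As a minimal sketch of that underlying geometry (standard direct linear transform triangulation, not the patent's specific algorithm; the camera matrices and test point are made-up values):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from a matched pixel
    pair (x1, x2) observed by cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity pose, and a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

In the patent's pipeline, the same triangulation idea would be applied densely across rectified image pairs, with the camera matrices refined beforehand by bundle adjustment.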
87 Citations
15 Claims
1. A method for automatically reconstructing a 3D face from a plurality of 2D images of a human face, as set forth in full above. Dependent claims: 2-13.
14. A system for automatically reconstructing a 3D face from a plurality of 2D images of a human face, comprising:
means for deriving initial camera position estimates using prior face knowledge of a generic face;

means for selecting image pairs representing corresponding images from the plurality of 2D images that define similar poses and extracting sparse feature points representing perimeters of a face or locations of facial features for each of said image pairs;

means for refining said initial camera position estimates and said sparse feature points;

means for extracting dense 3D point clouds from said image pairs by performing dense feature detection and matching across said image pairs using a purely data-driven approach that does not use prior face knowledge or generic faces, wherein said dense feature matching identifies additional features beyond the sparse feature points;

means for merging said dense 3D point clouds into a single 3D cloud;

means for removing outliers from said single 3D cloud to form a cleaned 3D point cloud without using prior face knowledge or generic faces;

means for fitting a connected surface to the cleaned 3D point cloud without using prior face knowledge or generic faces; and

means for texture mapping surface detail and color information of a subject's face onto said connected surface.

Dependent claims: 15.
Specification