Method for generating a personalized 3-D face model
Abstract
A method for generating a 3-D model of a person's face is disclosed. The 3-D face model carries both the geometry (shape) and the texture (color) characteristics of the person's face. The shape of the face model is represented via a 3-D triangular mesh (geometry mesh), while the texture of the face model is represented via a 2-D composite image (texture image). The geometry mesh is obtained by deforming a predefined standard 3-D triangular mesh based on the dimensions and relative positions of the person's facial features, such as eyes, nose, ears, lips, and chin. The texture image is obtained by compositing a set of 2-D images of the person's face taken from particular directions, such as front, right, and left, and modifying them along region boundaries to achieve seamless stitching of color on the 3-D face model. The directional images are taken while the mouth is closed and the eyes are open. To capture the color information of the facial regions that are not visible in the directional images, i.e., the inside of the mouth and the outside of the eyelids, additional 2-D images are also taken and included in the texture image.
47 Claims
1. A method for controlling a first mesh representative of fine features of an object with a second mesh representative of coarse features of the object comprising the steps of:
attaching the fine mesh with a first set of nodes to the coarse mesh with a second set of nodes; and
deforming the fine mesh using the coarse mesh.
2. The method of claim 1 further comprising the steps of:
providing one or more images of the object;
displaying the coarse and fine meshes over the image; and
moving the nodes of the coarse mesh to conform to the coarse features of the images of the object in order to generate a 3-D model of the object.
3. The method of claim 2 wherein the object is a face and the steps further comprise:
selecting a 3-D fine geometry mesh for the face; and
using the coarse mesh, adapting the 3-D fine geometry mesh in accordance with relative 3-D locations of facial features.
4. The method of claim 1 wherein the step of attaching comprises:
designing a 3-D triangular shape mesh that has fewer nodes than the 3-D geometry mesh;
calculating normal vectors at the nodes of the shape mesh;
defining a normal vector at every point on triangles of the shape mesh as a weighted average of the normal vectors at the nodes of the shape mesh; and
finding a triangle of the shape mesh and a point on the triangle for every node of the geometry mesh so that a line passing through the node and the point is parallel to the normal vector at the point.
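The attachment described in claims 4 and 32 can be implemented numerically. Below is a minimal numpy sketch, offered as one plausible reading: for a given geometry-mesh node, it searches for barycentric coordinates on a shape-mesh triangle whose interpolated normal points through the node. The fixed-point iteration and all names are our assumptions, not the patent's prescribed computation.

```python
import numpy as np

def vertex_normals(V, F):
    """Area-weighted vertex normals of a triangle mesh (V: nx3, F: mx3)."""
    N = np.zeros_like(V)
    fn = np.cross(V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]])
    for i in range(3):
        np.add.at(N, F[:, i], fn)            # accumulate unnormalized face normals
    return N / np.linalg.norm(N, axis=1, keepdims=True)

def attach_node(q, tri_V, tri_N, iters=10):
    """Find barycentric coords b on one shape-mesh triangle such that the
    line through geometry node q and the point p(b) is parallel to the
    interpolated normal n(b).  Returns (b, t) with q = p(b) + t * n(b)."""
    b = np.full(3, 1.0 / 3.0)                # start at the centroid
    for _ in range(iters):
        n = b @ tri_N
        n /= np.linalg.norm(n)               # interpolated unit normal
        t = (q - b @ tri_V) @ n              # signed distance along n(b)
        foot = q - t * n                     # project q back toward the triangle
        # planar barycentric coordinates of the foot point
        T = np.column_stack((tri_V[1] - tri_V[0], tri_V[2] - tri_V[0]))
        uv, *_ = np.linalg.lstsq(T, foot - tri_V[0], rcond=None)
        b = np.array([1.0 - uv.sum(), uv[0], uv[1]])
    return b, t
```

In practice attach_node would be run over every triangle of the shape mesh for each geometry node, keeping the triangle whose barycentric coordinates fall inside [0, 1]; the pair (b, t) then serves as the attachment record, with t playing the role of the surface distance coefficient of claims 9 and 37.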
5. The method of claim 1 wherein the step of deforming comprises changing the 3-D geometry mesh model in accordance with the calculated orientation and position of the face in each 2-D image to provide for local modifications.
6. The method of claim 1 wherein the step of deforming further comprises generating local modifications to one or more prominent facial features.
7. The method of claim 6 further comprising the steps of selecting a fine geometry mesh model generally corresponding to the face and comprising a plurality of fine triangular patches with a node at each corner of the patch, and overlying the fine geometry mesh with a coarse shape mesh comprising substantially fewer and larger triangular patches.
8. The method of claim 7 wherein the triangles of the coarse shape mesh control the position of the nodes of the fine geometry mesh that are in the proximity of the triangles of the coarse mesh.
9. The method of claim 8 wherein the nodes of the coarse shape mesh are selectively moveable by the user and the nodes of the fine geometry mesh that are attached to the triangles of the shape mesh affected by the movement of the nodes of the shape mesh are re-positioned as a result of the following steps:
calculating surface normals of the shape mesh at attachment points of all nodes of the geometry mesh controlled by the affected triangles of the shape mesh; and
obtaining the positions of the nodes of the geometry mesh by adding to their attachment points a surface vector defined as the surface distance coefficient times the surface normal of the shape mesh at the respective attachment point.
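In symbols, the repositioning of claims 9 and 37 reads as follows (our notation, not the patent's): with a_i the attachment point of geometry node i on the moved shape mesh, d_i its stored surface distance coefficient, and n-hat the interpolated unit normal there,

```latex
p_i = a_i + d_i \, \hat{n}(a_i)
```

so that moving a few shape-mesh nodes re-poses every attached geometry node at a fixed offset along the local surface normal.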
10. The method of claim 7 wherein the coarse mesh encloses the periphery of the face and encloses prominent facial features.
11. The method of claim 10 wherein the prominent facial features include one or more of the group consisting of eyes, nose, mouth, chin, cheeks, ears, hair, eyebrows, neck, and forehead.
12. A method for generating a personalized 3-D face model comprising the steps of:
determining a calibration parameter of a camera;
acquiring a plurality of 2-D images of a person's face;
marking 2-D locations of one or more facial features of the face in each of the acquired images;
calculating 3-D locations of facial features in accordance with the calibration parameter of the camera;
estimating orientation and position of the face in each 2-D image;
selecting a 3-D geometry mesh for the face;
adapting the 3-D geometry mesh in accordance with the relative 3-D locations of the facial features;
attaching a 3-D shape mesh to the 3-D geometry mesh;
deforming the 3-D geometry mesh using the 3-D shape mesh to conform to the 2-D images of the face;
selecting shade images from the 2-D images of the face;
blending the shade images in accordance with the calculated orientation and position of the face in each shade image to obtain a texture image; and
painting the deformed 3-D geometry mesh with the texture image to provide a 3-D model of the face.
13. The method of claim 12 wherein the step of determining the calibration parameter of the camera comprises the steps of:
imaging a perspective view of a planar target having a plurality of point-like markings of fixed position on the target;
acquiring 2-D locations of the point-like markings; and
calculating the focal length of the camera in pixel units.
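Claims 13 and 14 amount to single-plane camera calibration. Under the usual assumptions (zero skew, square pixels, and the principal point at the image center, none of which the claim states explicitly), the focal length in pixels can be recovered from the homography between the four marked corners of the square target and their imaged positions. A minimal numpy sketch:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 H with dst ~ H @ src (points are nx2)."""
    rows = []
    for (X, Y), (u, v) in zip(src, dst):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 3)

def focal_length_pixels(corners_px, image_size, side=1.0):
    """Estimate f (pixels) from the 4 imaged corners of a square target.

    corners_px: 4x2 pixel coordinates of the corners, in the same order as
    the model corners below.  The view must be tilted (a true perspective
    view, as the claim requires); a fronto-parallel view is degenerate.
    Assumes the principal point sits at the image center."""
    w, h = image_size
    model = np.array([[0, 0], [side, 0], [side, side], [0, side]], float)
    centered = np.asarray(corners_px, float) - np.array([w / 2.0, h / 2.0])
    H = homography(model, centered)
    h1, h2 = H[:, 0], H[:, 1]
    # With K = diag(f, f, 1), r1 = K^-1 h1 and r2 = K^-1 h2 must satisfy
    # r1 . r2 = 0 and |r1| = |r2|; both give equations linear in f^2.
    a = np.array([h1[2] * h2[2],
                  h1[2] ** 2 - h2[2] ** 2])
    b = np.array([h1[0] * h2[0] + h1[1] * h2[1],
                  h1[0] ** 2 + h1[1] ** 2 - (h2[0] ** 2 + h2[1] ** 2)])
    f2 = -(a @ b) / (a @ a)      # least-squares over the two constraints
    return np.sqrt(f2)
```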
14. The method of claim 13 wherein the point-like markings are arranged at corners of a square target.
15. The method of claim 12 wherein the acquired 2-D images include at least two views of the face in a neutral state at different orientations.
16. The method of claim 15 wherein the two views are orthogonal.
17. The method of claim 12 wherein the acquired 2-D images comprise front, forehead, chin, angled-right, angled-right-tilted-up, angled-right-tilted-down, angled-left, angled-left-tilted-up, angled-left-tilted-down, full-right-profile, full-right-profile-tilted-up, full-right-profile-tilted-down, full-left-profile, full-left-profile-tilted-up, and full-left-profile-tilted-down views of the face in the neutral state.
18. The method of claim 12 wherein the acquired 2-D images comprise front, forehead, chin, full-right-profile, and full-left-profile views of the face in the neutral state.
19. The method of claim 12 wherein the acquired 2-D images include a plurality of views of the face in at least one action state.
20. The method of claim 19 wherein the action states of the face comprise smiling lips, kissing lips, yawning lips, raised eyebrows, and squeezed eyebrows.
21. The method of claim 19 wherein the acquired 2-D images of the face in an action state include at least two views at different orientations.
22. The method of claim 21 wherein the two views are front and angled-right.
23. The method of claim 12 wherein the step of marking further comprises marking pupils, eyebrows, ears, nose, eye corners and mouth.
24. The method of claim 23 wherein the ears are marked at the top and bottom ends.
25. The method of claim 23 wherein the eyebrows are marked at their proximate ends.
26. The method of claim 23 wherein the pupils are marked at their centers.
27. The method of claim 23 wherein the nose is marked at the base corners.
28. The method of claim 23 wherein the mouth is marked at opposite ends and at its center.
29. The method of claim 12 wherein the step of calculating the 3-D locations of facial features further comprises:
calculating the 3-D locations of the facial features to conform to their 2-D locations in the acquired 2-D images of the face in the neutral state under an orthographic projection model;
calculating relative distances of the face to the camera in the 2-D images to conform to the 2-D locations of the facial features and their calculated 3-D locations under a perspective projection model;
modifying the 2-D locations of the facial features to conform to the calculated relative distances and the 3-D locations under a perspective projection model;
recalculating the 3-D locations of the facial features to conform to their modified 2-D locations under an orthographic projection model;
repeating the steps of calculating the relative distances, modifying the 2-D locations, and recalculating the 3-D locations to satisfy a convergence requirement; and
translating and rotating the 3-D locations so that they correspond to a frontal-looking face.
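The "modifying the 2-D locations" step in claims 29 and 30 is the classical perspective-to-orthographic correction. With the face at depth t_z from the camera, a feature at model depth z relative to the face's reference point, and a pinhole camera of focal length f, a perspective image coordinate u_p relates to its scaled-orthographic counterpart u_o by (our notation):

```latex
u_p = \frac{f\,x}{t_z + z}, \qquad
u_o = \frac{f\,x}{t_z}
\quad\Longrightarrow\quad
u_o = u_p\left(1 + \frac{z}{t_z}\right)
```

Each pass of the loop re-estimates z and t_z, corrects the 2-D locations by the factor (1 + z/t_z), and re-solves the orthographic problem until the corrections stop changing.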
30. The method of claim 12 wherein the step of estimating the orientation and position of the face in each 2-D image further comprises:
calculating the position and orientation of the face in a 2-D image so that the 3-D locations of the facial features conform to their 2-D locations in the 2-D image under an orthographic projection model;
calculating a relative distance of the face to the camera in the 2-D image to conform to the 2-D locations of the facial features and their calculated 3-D locations under a perspective projection model;
modifying the 2-D locations of the facial features to conform to the calculated relative distance and the 3-D locations under a perspective projection model;
recalculating the position and orientation of the face so that the 3-D locations of the facial features conform to their modified 2-D locations under an orthographic projection model; and
repeating the steps of calculating the relative distance, modifying the 2-D locations, and recalculating the position and orientation to satisfy a convergence requirement.
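Claim 30's loop matches the structure of the POSIT algorithm of DeMenthon and Davis: solve an orthographic pose, apply the perspective correction above to the 2-D points, and repeat. The numpy sketch below shows that style of iteration as one plausible reading; it is not necessarily the patent's exact computation.

```python
import numpy as np

def posit_style_pose(P, p, f, iters=20):
    """Estimate rotation R and translation T of an object from n >= 4
    non-coplanar model points P (nx3, with P[0] the reference point) and
    their pixel coordinates p (nx2, relative to the principal point).
    f is the focal length in pixels."""
    A = P[1:] - P[0]                       # model vectors from the reference point
    Ainv = np.linalg.pinv(A)
    eps = np.zeros(len(A))                 # perspective corrections z_i / t_z
    for _ in range(iters):
        # 2-D locations corrected to their orthographic equivalents
        x = p[1:, 0] * (1 + eps) - p[0, 0]
        y = p[1:, 1] * (1 + eps) - p[0, 1]
        I, J = Ainv @ x, Ainv @ y          # scaled first two rows of the pose
        s = 0.5 * (np.linalg.norm(I) + np.linalg.norm(J))
        r1, r2 = I / np.linalg.norm(I), J / np.linalg.norm(J)
        r3 = np.cross(r1, r2)
        tz = f / s                         # distance of the reference point
        eps = (A @ r3) / tz                # refresh the corrections
    R = np.vstack((r1, r2, r3))
    T = np.array([p[0, 0] * tz / f, p[0, 1] * tz / f, tz])
    return R, T
```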
31. The method of claim 12 wherein the step of adapting comprises changing the 3-D geometry mesh in accordance with the calculated 3-D locations of facial features to provide for global adaptations.
32. The method of claim 12 wherein the step of attaching comprises:
designing a 3-D triangular shape mesh that has fewer nodes than the 3-D geometry mesh;
calculating normal vectors at the nodes of the shape mesh;
defining a normal vector at every point on triangles of the shape mesh as a weighted average of the normal vectors at the nodes of the shape mesh; and
finding a triangle of the shape mesh and a point on the triangle for every node of the geometry mesh so that a line passing through the node and the point is parallel to the normal vector at the point.
33. The method of claim 12 wherein the step of deforming comprises changing the 3-D geometry mesh model in accordance with the calculated orientation and position of the face in each 2-D image to provide for local modifications.
34. The method of claim 12 wherein the step of deforming further comprises generating local modifications to the 3-D geometry mesh to conform the 3-D geometry mesh to one or more prominent facial features.
35. The method of claim 34 comprising the steps of selecting a fine geometry mesh model generally corresponding to the face and comprising a plurality of fine triangular patches with a node at each corner of the patch, and overlying the fine geometry mesh with a coarse shape mesh comprising substantially fewer and larger triangular patches.
36. The method of claim 35 wherein the triangles of the coarse shape mesh control the position of the nodes of the fine geometry mesh that are in the proximity of the triangles.
37. The method of claim 36 wherein the nodes of the coarse shape mesh are selectively moveable by the user and the nodes of the fine geometry mesh that are attached to the triangles of the shape mesh affected by the movement of the nodes of the shape mesh are re-positioned as a result of the following steps:
calculating surface normals of the shape mesh at attachment points of all nodes of the geometry mesh controlled by the affected triangles of the shape mesh; and
obtaining the positions of the nodes of the geometry mesh by adding to their attachment points a surface vector defined as the surface distance coefficient times the surface normal of the shape mesh at the respective attachment point.
38. The method of claim 35 wherein the coarse mesh encloses the periphery of the face and encloses prominent facial features.
39. The method of claim 38 wherein the prominent facial features include one or more of the group consisting of eyes, nose, mouth, chin, cheeks, ears, hair, eyebrows, neck, and forehead.
40. The method of claim 12 wherein the step of selecting the shade images comprises selecting one or more of the images from the group consisting of the front, forehead, chin, full-right-profile, and full-left-profile images.
41. The method of claim 40 wherein the step of selecting the shade images comprises selecting the front image and selecting one or more of the images from the group consisting of the forehead, chin, full-right-profile, and full-left-profile images.
42. The method of claim 41 wherein the step of selecting the shade images further comprises selecting full-right-profile and full-left-profile images of the 2-D images for texture data.
43. The method of claim 12 wherein the step of selecting the shade images comprises selecting the front image only.
44. A method for blending 2-D images of an object to paint a 3-D geometry mesh of the object comprising the steps of:
selecting at least two 2-D images;
defining a border on the geometry mesh where the two images meet;
projecting the border onto the two images with the orientation and position of the object in each image; and
interpolating color in each image on both sides of the projected border to gradually transition color from one side of the border in one image to the opposite side of the border in the other image. (Dependent claims: 45, 46, 47.)
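A minimal sketch of the interpolation step of claim 44, for two already-aligned texture images and a per-pixel signed distance to the projected border; computing that distance field, the band width, and the function name are assumptions outside the claim:

```python
import numpy as np

def blend_across_border(img_a, img_b, signed_dist, width):
    """Cross-fade img_a into img_b over a band of +/- width pixels around
    the projected border.  signed_dist is negative on img_a's side of the
    border and positive on img_b's side; images are HxWx3 float arrays."""
    t = np.clip(signed_dist / (2.0 * width) + 0.5, 0.0, 1.0)  # 0 -> a, 1 -> b
    return (1.0 - t)[..., None] * img_a + t[..., None] * img_b
```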
Specification