Method for facial animation
First Claim
1. A method of animating a digital character based on facial expressions of a user, comprising:
obtaining a first series of two-dimensional (2D) images of a face of a user;
obtaining a first series of three-dimensional (3D) depth maps of the face of the user;
determining a set of blendshape weights associated with a generic expression model based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, the generic expression model representative of a generic person;
identifying expression parameters for a user-specific expression model based on at least some of the set of blendshape weights, the user-specific expression model representative of the face of the user;
tracking the face of the user by decoupling rigid motion of the user from non-rigid motion of the user based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, wherein the rigid motion represents a movement of the 3D depth map of the face of the user and the non-rigid motion represents a change in expression of the face of the user;
determining animation parameters for a digital character based on the expression parameters, the rigid and non-rigid motions of the user, and an animation prior, the animation prior including a collection of animation parameters of the digital character, the animation prior indicative of a pre-defined animation of the generic expression model; and
animating, based on the animation parameters, the digital character to mimic the face of the user.
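The blendshape-weight step of the claim can be illustrated with a small numerical sketch. Nothing below is taken from the patent itself: the least-squares formulation, the function name `fit_blendshape_weights`, and the clamping of weights to [0, 1] are common conventions in blendshape tracking, assumed here for illustration.

```python
import numpy as np

def fit_blendshape_weights(depth_points, neutral, blendshapes):
    """Solve for weights w such that
    neutral + sum_k w_k * (blendshapes[k] - neutral) ~= depth_points.

    depth_points: (N, 3) observed 3D points from the depth map
    neutral:      (N, 3) neutral face of the generic expression model
    blendshapes:  (K, N, 3) expression targets of the generic model
    """
    # Stack each blendshape's offset from the neutral face into a
    # (3N, K) basis matrix; the residual to explain is (3N,).
    K = blendshapes.shape[0]
    basis = (blendshapes - neutral[None]).reshape(K, -1).T  # (3N, K)
    residual = (depth_points - neutral).reshape(-1)         # (3N,)
    # Plain least squares; a real tracker would typically add
    # non-negativity and temporal-smoothness constraints.
    w, *_ = np.linalg.lstsq(basis, residual, rcond=None)
    return np.clip(w, 0.0, 1.0)  # keep weights in the usual [0, 1] range
```

Given exact synthetic data the fit recovers the generating weights; with noisy depth maps it returns the least-squares best approximation.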
Abstract
A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
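The decoupling of rigid head motion from non-rigid expression change, recited in the claims, is commonly performed by estimating a best-fit rotation and translation between corresponding point sets and treating the residual as the non-rigid part. The sketch below uses the Kabsch algorithm for the rigid estimate; the function name and the assumption of known point-to-point correspondences are illustrative, not drawn from the patent.

```python
import numpy as np

def decouple_rigid_motion(model_points, observed_points):
    """Estimate the rigid pose (R, t) aligning the tracked face model to
    the observed depth points via the Kabsch algorithm; the residual
    left after removing that pose is treated as non-rigid expression
    change. Both inputs are (N, 3) arrays in correspondence.
    """
    mu_m = model_points.mean(axis=0)
    mu_o = observed_points.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_points - mu_m).T @ (observed_points - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    # Residual after undoing the rigid motion: the non-rigid part.
    nonrigid = observed_points - (model_points @ R.T + t)
    return R, t, nonrigid
```

For a purely rigid head movement the residual is (numerically) zero; any expression change shows up in `nonrigid` and can then be fed to the expression fit.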
18 Claims
1. A method of animating a digital character based on facial expressions of a user, comprising:
obtaining a first series of two-dimensional (2D) images of a face of a user;
obtaining a first series of three-dimensional (3D) depth maps of the face of the user;
determining a set of blendshape weights associated with a generic expression model based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, the generic expression model representative of a generic person;
identifying expression parameters for a user-specific expression model based on at least some of the set of blendshape weights, the user-specific expression model representative of the face of the user;
tracking the face of the user by decoupling rigid motion of the user from non-rigid motion of the user based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, wherein the rigid motion represents a movement of the 3D depth map of the face of the user and the non-rigid motion represents a change in expression of the face of the user;
determining animation parameters for a digital character based on the expression parameters, the rigid and non-rigid motions of the user, and an animation prior, the animation prior including a collection of animation parameters of the digital character, the animation prior indicative of a pre-defined animation of the generic expression model; and
animating, based on the animation parameters, the digital character to mimic the face of the user. (Dependent claims: 2, 3, 4, 5, 6)
7. A system for animating a digital character according to facial expressions of a user, comprising:
memory containing program code;
a display coupled to the memory; and
one or more processors coupled to the memory and the display, the one or more processors configured to execute the program code, the program code configured to cause the one or more processors to:
obtain a first series of two-dimensional (2D) images of a face of a user;
obtain a first series of three-dimensional (3D) depth maps of the face of the user;
determine a set of blendshape weights associated with a generic expression model based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, the generic expression model representative of a generic person;
identify expression parameters for a user-specific expression model based on at least some of the set of blendshape weights, the user-specific expression model representative of the face of the user;
track the face of the user by decoupling rigid motion of the user from non-rigid motion of the user based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, wherein the rigid motion represents a movement of the 3D depth map of the face of the user and the non-rigid motion represents a change in expression of the face of the user;
determine animation parameters for a digital character based on the expression parameters, the rigid and non-rigid motions of the user, and an animation prior, the animation prior including a collection of animation parameters of the digital character, the animation prior indicative of a pre-defined animation of the generic expression model; and
animate, based on the animation parameters, the digital character to mimic the face of the user on the display. (Dependent claims: 8, 9, 10, 11, 12)
13. A non-transitory program storage device containing instructions that, when executed by a computer system, cause the computer system to:
obtain a first series of two-dimensional (2D) images of a face of a user;
obtain a first series of three-dimensional (3D) depth maps of the face of the user;
determine a set of blendshape weights associated with a generic expression model based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, the generic expression model representative of a generic person;
identify expression parameters for a user-specific expression model based on at least some of the set of blendshape weights, the user-specific expression model representative of the face of the user;
track the face of the user by decoupling rigid motion of the user from non-rigid motion of the user based on at least some of the first series of 2D images and at least some of the first series of 3D depth maps, wherein the rigid motion represents a movement of the 3D depth map of the face of the user and the non-rigid motion represents a change in expression of the face of the user;
determine animation parameters for a digital character based on the expression parameters, the rigid and non-rigid motions of the user, and an animation prior, the animation prior including a collection of animation parameters of the digital character, the animation prior indicative of a pre-defined animation of the generic expression model; and
animate, based on the animation parameters, the digital character to mimic the face of the user. (Dependent claims: 14, 15, 16, 17, 18)
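The claims describe the animation prior as a stored collection of animation parameters representing pre-defined animations. One simple reading of how such a prior could constrain retargeting is a nearest-neighbor blend: find the prior frames whose expression parameters are closest to the live ones and blend their paired animation parameters. The pairing of expression and animation parameters per prior frame, the inverse-distance weighting, and the function name below are all assumptions for illustration, not the patent's method.

```python
import numpy as np

def retarget_with_prior(expr_params, prior_expr, prior_anim, k=3):
    """Map user expression parameters to character animation parameters
    using an animation prior stored as paired frames.

    expr_params: (D,) live expression parameters
    prior_expr:  (M, D) expression parameters of the prior frames
    prior_anim:  (M, P) animation parameters paired with each frame
    """
    # Distance from the live expression to every prior frame.
    d = np.linalg.norm(prior_expr - expr_params[None], axis=1)
    idx = np.argsort(d)[:k]
    # Inverse-distance weights (epsilon avoids division by zero on an
    # exact match), normalized to sum to one.
    w = 1.0 / (d[idx] + 1e-8)
    w /= w.sum()
    return w @ prior_anim[idx]
```

When the live expression exactly matches a prior frame, the blend collapses to that frame's animation parameters; otherwise nearby pre-defined poses are interpolated, which keeps the character within the space of plausible animations.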
Specification