Animation Retargeting
1 Assignment
0 Petitions
Abstract
Systems and methods are described that create a mapping from a space of a source object (e.g., source facial expressions) to a space of a target object (e.g., target facial expressions). In certain implementations, the mapping is learned based on a training set composed of corresponding shapes (e.g., facial expressions) in each space. The user can create the training set by selecting expressions from, for example, captured source performance data, and by sculpting corresponding target expressions. Additional target shapes (e.g., target facial expressions) can be interpolated and extrapolated from the shapes in the training set to generate corresponding shapes for potential source shapes (e.g., facial expressions).
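As a rough illustration of the abstract's idea of a training-set-based mapping, the sketch below blends the targets of nearby training pairs to map a new source shape. The inverse-distance weighting and all function names are assumptions chosen for illustration, not the patent's actual method:

```python
import numpy as np

def map_source_to_target(source_shape, train_sources, train_targets, eps=1e-8):
    """Map a new source shape to a target shape by inverse-distance
    blending of the training pairs' target shapes (hypothetical sketch)."""
    # Distance from the new source shape to each training source shape.
    dists = np.array([np.linalg.norm(source_shape - s) for s in train_sources])
    # A shape that matches a training example returns its sculpted target.
    if np.any(dists < eps):
        return np.asarray(train_targets[int(np.argmin(dists))], dtype=float)
    weights = 1.0 / dists
    weights /= weights.sum()
    # Blend the corresponding target shapes with the same weights.
    return np.tensordot(weights, np.asarray(train_targets, dtype=float), axes=1)
```

Blending in target space with weights computed in source space is one simple way to realize "interpolated ... from the shapes in the training set"; a production system would likely use a richer interpolant (e.g., radial basis functions).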
47 Claims
1. A method for animation mapping comprising:
mapping a transform of a source shape to a target shape, wherein the mapping is based on a training set of previous associations between the source and target shapes;
applying the mapped transform to the target shape for output of an initial mapping; and
modifying the training set to generate a refined mapping of the transform applied to the target shape.
Dependent claims: 2-28.
29. A method comprising:
outputting an initial mapping of a transform of a source shape to a target shape; and
iteratively receiving feedback from a user and adjusting the mapping of the transform based on the feedback.
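One iteration of the feedback loop recited in claim 29 can be sketched as follows. The nearest-neighbor mapping and all names here are hypothetical stand-ins, chosen only to make the refinement step concrete:

```python
import numpy as np

def nearest_neighbor_map(train_pairs, source_shape):
    """Map a source shape to the target of its nearest training source.
    A deliberately simple stand-in for the learned mapping."""
    best = min(train_pairs, key=lambda p: np.linalg.norm(p[0] - source_shape))
    return best[1]

def refine_with_feedback(train_pairs, source_shape, corrected_target):
    """Output an initial mapping, take the user's corrected target as
    feedback, and fold the correction back into the training set so
    subsequent mappings reflect it (hypothetical sketch of claim 29)."""
    initial = nearest_neighbor_map(train_pairs, source_shape)
    train_pairs = train_pairs + [(np.asarray(source_shape, dtype=float),
                                  np.asarray(corrected_target, dtype=float))]
    refined = nearest_neighbor_map(train_pairs, source_shape)
    return initial, refined, train_pairs
```

Because the corrected pair is appended to the training set, re-mapping the same source shape now returns the user's correction exactly; repeating the loop over many shapes progressively refines the mapping.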
30. A computer-implemented method for mapping a transform of a source shape to a target object, the method comprising:
associating first and second positions of a source object with corresponding first and second positions of a target object, wherein the positions are at least partially defined by a mesh comprising vertices; and
generating, based on the associations, a mapping between a third position of the source object and a third position of the target object, wherein the mapping comprises an affine transformation based on a transform of selected vertices of the source object relative to local vertices within a predetermined distance from vertices selected for transformation.
Dependent claims: 31-41.
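The neighborhood-relative affine transformation recited in claims 30, 42, and 43 can be approximated by a least-squares fit over a vertex neighborhood's positions before and after deformation. This is an illustrative sketch under assumed conventions (homogeneous-coordinate least squares), not the patented implementation:

```python
import numpy as np

def fit_affine(neigh_before, neigh_after):
    """Least-squares affine transform (A, t) such that
    neigh_after ~= neigh_before @ A.T + t, fitted over the vertices of a
    local neighborhood (hypothetical sketch; names are illustrative)."""
    before = np.asarray(neigh_before, dtype=float)
    after = np.asarray(neigh_after, dtype=float)
    # Homogeneous coordinates: append a 1 to each vertex position so the
    # translation is estimated jointly with the linear part.
    X = np.hstack([before, np.ones((before.shape[0], 1))])   # (n, d+1)
    M, *_ = np.linalg.lstsq(X, after, rcond=None)            # (d+1, d)
    A, t = M[:-1].T, M[-1]
    return A, t

def apply_affine(A, t, points):
    """Apply the fitted affine transform to vertex positions."""
    return np.asarray(points, dtype=float) @ A.T + t
```

Fitting against only the vertices within a predetermined distance of the point being transformed, as the claims describe, keeps the transform local: each region of the mesh gets its own affine map rather than one global deformation.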
42. A method of generating animation, the method comprising:
generating a mapping between a source mesh and a target mesh based on previous mappings, wherein generating the mapping comprises applying an affine transformation to transforms of one or more vertices of the source mesh, wherein the transforms are relative to a neighborhood of vertices a predetermined distance from the one or more vertices being transformed.
43. A method of generating animation, the method comprising:
selecting a point on a target object to reflect a transform of a corresponding point on a source object;
identifying a first neighborhood of geometric shapes that are a predetermined distance from the point on the target object and associating the first neighborhood with a corresponding second neighborhood of geometric shapes on the source object; and
determining an affine mapping for a transform of the point on the target object relative to the first neighborhood based on the transform of the corresponding point on the source object relative to the second neighborhood.
Dependent claims: 44-45.
46. A system comprising:
a mapping function generator for mapping a transform of a source shape to a target shape, wherein the mapping is based on a training set of previous associations between the source and target shapes; and
an interface for receiving modifications, based on user input received in response to an output of the mapping, to the training set for generating a refined mapping of the transform applied to the target shape.
47. A method for animation mapping comprising:
outputting an initial mapping between a first transform of a source shape and a second transform of a target shape, wherein the initial mapping is based on a training set of previous associations between the source and target shapes;
modifying the training set to generate a refined mapping between the first and second transforms; and
outputting the refined mapping.
Specification