Method for automatically animating lip synchronization and facial expression of animated characters
Abstract
A method for controlling and automatically animating lip synchronization and facial expressions of three-dimensional animated characters using weighted morph targets and time aligned phonetic transcriptions of recorded text. The method utilizes a set of rules that determine the system's output comprising a stream of morph weight sets when a sequence of timed phonemes and/or other timed data is encountered. Other data, such as timed emotional state data or emotemes such as “surprise,” “disgust,” “embarrassment,” “timid smile,” or the like, may be input to affect the output stream of morph weight sets, or create additional streams.
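The abstract's core idea of weighted morph targets can be illustrated with a small sketch: a character pose is the neutral geometry plus the weighted sum of each target's offset from neutral, with one "morph weight set" supplying the weights. All names and vertex data below are illustrative assumptions, not taken from the patent.

```python
# Sketch: blending morph targets with a morph weight set.
# Each vertex position is the neutral position plus the weighted
# sum of each target's offset (delta) from neutral.

def blend_morph_targets(neutral, targets, weights):
    """neutral: list of (x, y, z); targets: {name: list of (x, y, z)};
    weights: {name: float in [0, 1]} -- one morph weight set."""
    result = []
    for i, (nx, ny, nz) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, w in weights.items():
            tx, ty, tz = targets[name][i]
            dx += w * (tx - nx)
            dy += w * (ty - ny)
            dz += w * (tz - nz)
        result.append((nx + dx, ny + dy, nz + dz))
    return result

# One vertex, two hypothetical targets: full "open_jaw" moves it down,
# half "smile" moves it sideways.
neutral = [(0.0, 0.0, 0.0)]
targets = {"open_jaw": [(0.0, -1.0, 0.0)], "smile": [(1.0, 0.0, 0.0)]}
print(blend_morph_targets(neutral, targets, {"open_jaw": 1.0, "smile": 0.5}))
# -> [(0.5, -1.0, 0.0)]
```

Animating the face then reduces to animating the weight dictionary over time, which is exactly what the claimed stream of morph weight sets provides.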
26 Claims
1. A method for automatically animating lip synchronization and facial expression of three-dimensional characters comprising:
obtaining a first set of rules that define output morph weight set stream as a function of phoneme sequence and time of said phoneme sequence;
obtaining a timed data file of phonemes having a plurality of sub-sequences;
generating an intermediate stream of output morph weight sets and a plurality of transition parameters between two adjacent morph weight sets by evaluating said plurality of sub-sequences against said first set of rules;
generating a final stream of output morph weight sets at a desired frame rate from said intermediate stream of output morph weight sets and said plurality of transition parameters; and
applying said final stream of output morph weight sets to a sequence of animated characters to produce lip synchronization and facial expression control of said animated characters.
Dependent claims: 2-13.
checking each sub-sequence of said plurality of sub-sequences for compliance with said rule's criteria; and
applying said rule's function upon said compliance.
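The steps of claim 1 suggest a simple pipeline: evaluate rules over the timed phoneme sub-sequences to produce an intermediate keyed stream of morph weight sets with transition parameters, then resample that stream at the desired frame rate. A minimal sketch under invented assumptions (the rule table, phoneme names, and hold-style resampling are illustrative, not the patent's method):

```python
# Sketch of the claimed pipeline: timed phonemes -> rules ->
# intermediate keyed stream -> final per-frame stream.

def evaluate_rules(timed_phonemes, rules):
    """Produce intermediate keys; rules maps a phoneme to
    (morph_weight_set, transition_duration)."""
    keys = []
    for t, phoneme in timed_phonemes:
        weights, duration = rules[phoneme]
        keys.append({"start": t, "end": t + duration, "weights": weights})
    return keys

def resample(keys, frame_rate, length):
    """Emit one morph weight set per frame by holding the most
    recent key (nearest-previous sampling; a real system would
    interpolate using the transition parameters)."""
    frames = []
    for f in range(int(length * frame_rate)):
        t = f / frame_rate
        current = {}
        for k in keys:
            if k["start"] <= t:
                current = k["weights"]
        frames.append(current)
    return frames

# Invented rule table: each phoneme maps to one weight set + duration.
rules = {"m": ({"lips_closed": 1.0}, 0.1), "aa": ({"open_jaw": 0.8}, 0.1)}
stream = resample(evaluate_rules([(0.0, "m"), (0.25, "aa")], rules), 12, 0.5)
print(len(stream), stream[0], stream[-1])
# -> 6 {'lips_closed': 1.0} {'open_jaw': 0.8}
```

Half a second at 12 frames per second yields six morph weight sets, ready to be applied to the character as in the final step of the claim.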
4. The method of claim 1 wherein said first set of rules comprises a default set of rules and an optional secondary set of rules, said secondary set of rules having priority over said default set of rules.
5. The method of claim 4 wherein said default set of rules is adequate to create said intermediate stream of output morph weight sets and said plurality of transition parameters between two adjacent morph weight sets for all sub-sequences of phonemes in said timed data file.
6. The method of claim 4 wherein said secondary set of rules is used in special cases to substitute alternate output morph weight sets and/or transition parameters between two adjacent morph weight sets.
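Claims 4 through 6 describe a default rule set plus an optional secondary set that takes priority in special cases. One way to sketch that precedence is to check the secondary rules first and fall back to the defaults, which by claim 5 always cover every sub-sequence. The rule criteria, phoneme names, and weight values here are assumptions for illustration:

```python
# Sketch: secondary rules are checked first; default rules
# guarantee that every sub-sequence matches something.

def apply_rules(sub_sequence, default_rules, secondary_rules):
    """Each rule is a (criteria, function) pair; the first matching
    rule's function produces the output morph weight set."""
    for criteria, function in secondary_rules + default_rules:
        if criteria(sub_sequence):
            return function(sub_sequence)
    raise ValueError("default rules should cover every sub-sequence")

# Default: any sub-sequence gets a neutral half-open mouth.
default_rules = [(lambda s: True, lambda s: {"open_jaw": 0.4})]
# Secondary (special case): bilabial consonants close the lips instead.
secondary_rules = [(lambda s: s[0] in ("m", "b", "p"),
                   lambda s: {"lips_closed": 1.0})]

print(apply_rules(("m", "aa"), default_rules, secondary_rules))
# -> {'lips_closed': 1.0}
print(apply_rules(("aa",), default_rules, secondary_rules))
# -> {'open_jaw': 0.4}
```

Because the default rule's criteria always match, the sketch mirrors claim 5's requirement that the defaults alone are adequate for all sub-sequences.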
7. The method of claim 1 wherein said timed data is time aligned phonetic transcription data.
8. The method of claim 7 wherein said timed data further comprises time aligned data.
9. The method of claim 7 wherein said timed data further comprises time aligned emotional transcription data.
10. The method of claim 1 wherein each of said plurality of transition parameters comprises a transition start time and a transition end time; and said intermediate stream of output morph weight sets having entries at said transition start time and said transition end time.
11. The method of claim 10 wherein said generating a final stream of output morph weight sets comprises:
obtaining the output morph weight set at a desired time by interpolating between said intermediate stream of morph weight sets at said transition start time and said transition end time, said desired time representing a frame of said final stream of output.
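Claims 10 and 11 read on keyframe interpolation: the intermediate stream carries an entry at each transition's start and end time, and each frame of the final stream is obtained by interpolating between those entries at the desired time. A linear-interpolation sketch (the claims do not limit the transition curve to linear; the weight names and times are invented):

```python
# Sketch: interpolate between the morph weight set entries at a
# transition's start time and end time to get one output frame.

def interpolate_weights(start_time, start_ws, end_time, end_ws, desired_time):
    """desired_time represents one frame of the final output stream."""
    if end_time == start_time:
        return dict(end_ws)
    alpha = (desired_time - start_time) / (end_time - start_time)
    alpha = max(0.0, min(1.0, alpha))  # clamp outside the transition
    names = set(start_ws) | set(end_ws)
    return {n: (1 - alpha) * start_ws.get(n, 0.0) + alpha * end_ws.get(n, 0.0)
            for n in names}

# Halfway through a 0.2 s transition from closed lips to an open jaw.
mid = interpolate_weights(0.0, {"lips_closed": 1.0}, 0.2, {"open_jaw": 0.8}, 0.1)
print(mid == {"lips_closed": 0.5, "open_jaw": 0.4})
# -> True
```

Claim 12's post-processing step would then apply a second set of rules to each interpolated morph weight set, for example clamping or smoothing, before it reaches the character.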
12. The method of claim 11, further comprising:
applying a second set of rules to said output morph weight set for post processing.
13. The method of claim 1 wherein said first set of rules comprises:
correspondence rules between a plurality of visual phoneme groups and a plurality of morph weight sets; and
morph weight set transition rules specifying durational data for generating transitionary curves between morph weight sets.
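Claim 13 splits the first rule set into two tables: correspondence rules mapping visual phoneme groups (visemes) to morph weight sets, and transition rules carrying durational data for the curves between them. A table-driven sketch; the groupings, durations, and weight values are invented for illustration:

```python
# Sketch of claim 13's two rule tables.
# Correspondence rules: visual phoneme group (viseme) -> morph weight set.
CORRESPONDENCE = {
    ("m", "b", "p"): {"lips_closed": 1.0},
    ("aa", "ah"):    {"open_jaw": 0.8},
    ("f", "v"):      {"lower_lip_tuck": 0.9},
}

# Transition rules: durational data for the curve between two groups,
# in seconds (hypothetical group labels).
TRANSITIONS = {("bilabial", "open"): 0.08}

def weights_for_phoneme(phoneme):
    """Look up the morph weight set for the group containing phoneme."""
    for group, weights in CORRESPONDENCE.items():
        if phoneme in group:
            return weights
    return {}  # no group matched; a default rule would cover this case

print(weights_for_phoneme("b"))
# -> {'lips_closed': 1.0}
```

Grouping phonemes into visemes keeps the correspondence table small: many phonemes share one mouth shape, so only the transitions between shapes need durational tuning.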
14. An apparatus for automatically animating lip synchronization and facial expression of three-dimensional characters comprising:
a computer system;
a first set of rules in said computer system, said first set of rules defining output morph weight set stream as a function of phoneme sequence and time of said phoneme sequence;
a timed data file readable by said computer system, said timed data file having phonemes with a plurality of sub-sequences;
means, in said computer system, for generating an intermediate stream of output morph weight sets and a plurality of transition parameters between two adjacent morph weight sets by evaluating said plurality of sub-sequences against said first set of rules;
means, in said computer system, for generating a final stream of output morph weight sets at a desired frame rate from said intermediate stream of output morph weight sets and said plurality of transition parameters; and
means, in said computer system, for applying said final stream of output morph weight sets to a sequence of animated characters to produce lip synchronization and facial expression control of said animated characters.
Dependent claims: 15-26.
checking each sub-sequence of said plurality of sub-sequences for compliance with said rule's criteria; and
applying said rule's function upon said compliance.
17. The apparatus of claim 14 wherein said first set of rules comprises a default set of rules and an optional secondary set of rules, said secondary set of rules having priority over said default set of rules.
18. The apparatus of claim 17 wherein said default set of rules is adequate to create said intermediate stream of output morph weight sets and said plurality of transition parameters between two adjacent morph weight sets for all sub-sequences of phonemes in said timed data file.
19. The apparatus of claim 17 wherein said secondary set of rules is used in special cases to substitute alternate output morph weight sets and/or transition parameters between two adjacent morph weight sets.
20. The apparatus of claim 14 wherein said timed data is time aligned phonetic transcription data.
21. The apparatus of claim 20 wherein said timed data further comprises time aligned data.
22. The apparatus of claim 20 wherein said timed data further comprises time aligned emotional transcription data.
23. The apparatus of claim 14 wherein each of said plurality of transition parameters comprises a transition start time and a transition end time; and said intermediate stream of output morph weight sets having entries at said transition start time and said transition end time.
24. The apparatus of claim 23 wherein said generating a final stream of output morph weight sets comprises:
obtaining the output morph weight set at a desired time by interpolating between said intermediate stream of morph weight sets at said transition start time and said transition end time, said desired time representing a frame of said final stream of output.
25. The apparatus of claim 24, further comprising:
means for applying a second set of rules to said output morph weight set for post processing.
26. The apparatus of claim 14 wherein said first set of rules comprises:
correspondence rules between a plurality of visual phoneme groups and a plurality of morph weight sets; and
morph weight set transition rules specifying durational data for generating transitionary curves between morph weight sets.
Specification