Computing system for expressive three-dimensional facial animation
First Claim
1. A computing device, comprising:
a processor;
memory storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform acts comprising:
receiving an audio sequence reflective of words uttered by a speaker;
based upon the audio sequence, generating a first set of coefficients that are indicative of lips of the speaker as the speaker utters the words, wherein the first set of coefficients are generated based upon latent content variables that have been generated via a computer-implemented model, wherein the computer-implemented model has been trained without utilization of motion capture data, wherein the latent content variables are generated by the computer-implemented model without utilization of motion capture techniques;
based upon the audio sequence, generating a second set of coefficients that are indicative of facial features of the speaker other than the lips of the speaker as the speaker utters the words, wherein the second set of coefficients are generated based upon latent style variables that have been generated via the computer-implemented model, wherein the latent style variables comprise latent identity variables that are based upon identity factors of a plurality of speakers as the plurality of speakers speak and latent emotional variables that are based upon emotions of the plurality of speakers as the plurality of speakers speak, wherein the latent style variables are generated by the computer-implemented model without utilization of the motion capture techniques;
generating a third set of coefficients based upon the first set of coefficients and the second set of coefficients; and
causing a visual representation of a face to be animated on a display based upon the third set of coefficients such that movement of lips of the visual representation reflects the words uttered by the speaker while the visual representation is animated, wherein facial features of the visual representation of the face other than the lips are synced to the lips of the visual representation, and further wherein the visual representation of the face reflects an identity of the speaker and an emotion of the speaker as the speaker utters the words while the visual representation of the face is animated.
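Read as a pipeline, claim 1 describes two coefficient streams derived from the same audio: a content stream for the lips and a style stream for the rest of the face, merged into a third set that drives the rendered face. The sketch below illustrates one plausible shape of that merge step in Python with NumPy; the function names, the blendshape rig, and the random stand-ins for the trained model branches are all assumptions, not the patented implementation.

```python
# Hypothetical sketch of the claimed two-stream coefficient pipeline.
# Names and shapes are illustrative assumptions, not the patented design.
import numpy as np

NUM_BLENDSHAPES = 51   # assumed size of the face rig's blendshape basis

def encode_content(audio_frames: np.ndarray) -> np.ndarray:
    """Stand-in for the model branch that maps audio to latent content
    variables and then to lip-related coefficients (the first set)."""
    rng = np.random.default_rng(0)            # placeholder for a trained model
    return rng.standard_normal((len(audio_frames), NUM_BLENDSHAPES))

def encode_style(audio_frames: np.ndarray) -> np.ndarray:
    """Stand-in for the branch that maps audio to latent style variables
    (identity + emotion) and then to non-lip coefficients (the second set)."""
    rng = np.random.default_rng(1)
    return rng.standard_normal((len(audio_frames), NUM_BLENDSHAPES))

def blend_coefficients(content, style, lip_mask):
    """Third set: lip coefficients come from the content stream, the
    remaining facial features from the style stream."""
    return np.where(lip_mask, content, style)

audio_frames = np.zeros((120, 80))            # e.g. 120 frames of audio features
lip_mask = np.zeros(NUM_BLENDSHAPES, dtype=bool)
lip_mask[:20] = True                          # assume the first 20 shapes are lip shapes
third_set = blend_coefficients(encode_content(audio_frames),
                               encode_style(audio_frames), lip_mask)
print(third_set.shape)                        # (120, 51): per-frame rig coefficients
```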
Abstract
A computer-implemented technique for animating a visual representation of a face based on spoken words of a speaker is described herein. A computing device receives an audio sequence comprising content features reflective of spoken words uttered by a speaker. The computing device generates latent content variables and latent style variables based upon the audio sequence. The latent content variables are used to synchronize movement of lips on the visual representation to the spoken words uttered by the speaker. The latent style variables are derived from an expected appearance of facial features of the speaker as the speaker utters the spoken words and are used to synchronize movement of full facial features of the visual representation to the spoken words uttered by the speaker. The computing device causes the visual representation of the face to be animated on a display based upon the latent content variables and the latent style variables.
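The abstract's split between latent content variables and latent style variables, with the style side further divided into identity and emotion, resembles a two-encoder disentanglement. Below is a minimal sketch of such a model, assuming PyTorch; the layer choices, dimensions, and the even identity/emotion split of the style code are illustrative guesses, since the patent text here does not specify a network.

```python
# Minimal sketch of a content/style split over audio features, assuming
# PyTorch; the GRU/linear choices and sizes are illustrative only.
import torch
import torch.nn as nn

class TwoStreamFaceModel(nn.Module):
    def __init__(self, feat_dim=80, latent_dim=64, n_coeffs=51):
        super().__init__()
        # Content branch: per-frame, tracks what is being said (lip shapes).
        self.content_enc = nn.GRU(feat_dim, latent_dim, batch_first=True)
        # Style branch: pooled over time, tracks identity and emotion.
        self.style_enc = nn.GRU(feat_dim, latent_dim, batch_first=True)
        self.to_lip = nn.Linear(latent_dim, n_coeffs)
        self.to_face = nn.Linear(latent_dim, n_coeffs)

    def forward(self, audio):                  # audio: (batch, frames, feat_dim)
        content, _ = self.content_enc(audio)   # latent content variables, per frame
        style_seq, _ = self.style_enc(audio)
        style = style_seq.mean(dim=1, keepdim=True)   # one style code per clip
        # Treat the two halves of the style code as the (assumed) latent
        # identity and latent emotion variables.
        identity, emotion = style.chunk(2, dim=-1)
        style_code = torch.cat([identity, emotion], dim=-1)
        style_code = style_code.expand(-1, audio.size(1), -1)
        first_set = self.to_lip(content)        # lip coefficients
        second_set = self.to_face(style_code)   # non-lip facial coefficients
        return first_set, second_set

model = TwoStreamFaceModel()
lips, face = model(torch.randn(2, 120, 80))    # each: (2, 120, 51)
```

Pooling the style branch over time while leaving the content branch per-frame is one simple way to make the style code carry slowly varying factors (identity, emotion) while the content code carries the frame-rate articulation the claims tie to the lips.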
Claims (20)
1. A computing device, comprising: (set forth in full above as the First Claim). Dependent claims: 2–11.
12. A method executed by a processor of a computing device, the method comprising:
receiving an audio sequence reflective of words uttered by a speaker;
based upon the audio sequence, generating a first set of coefficients that are indicative of lips of the speaker as the speaker utters the words, wherein the first set of coefficients are generated based upon latent content variables that have been generated via a computer-implemented model, wherein the computer-implemented model has been trained without utilization of motion capture data, wherein the latent content variables are generated by the computer-implemented model without utilization of motion capture techniques;
based upon the audio sequence, generating a second set of coefficients that are indicative of facial features of the speaker other than the lips of the speaker as the speaker utters the words, wherein the second set of coefficients are generated based upon latent style variables that have been generated via the computer-implemented model, wherein the latent style variables comprise latent identity variables that are based upon identity factors of a plurality of speakers as the plurality of speakers speak and latent emotional variables that are based upon emotions of the plurality of speakers as the plurality of speakers speak, wherein the latent style variables are generated by the computer-implemented model without utilization of the motion capture techniques;
generating a third set of coefficients based upon the first set of coefficients and the second set of coefficients; and
causing a visual representation of a face to be animated on a display based upon the third set of coefficients such that movement of lips of the visual representation reflects the words uttered by the speaker while the visual representation is animated, wherein facial features of the visual representation of the face other than the lips are synced to the lips of the visual representation, and further wherein the visual representation of the face reflects an identity of the speaker and an emotion of the speaker as the speaker utters the words while the visual representation of the face is animated.
Dependent claims: 13–16.
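The claims leave open how the received audio sequence is represented before it reaches the model. A common front end for speech-driven animation is a log-mel spectrogram, sketched below with librosa; the feature type, frame sizes, and file path are assumptions, not something the claims require.

```python
# Hypothetical audio front end for the "receiving an audio sequence" step,
# assuming librosa; the claims do not specify a feature representation.
import librosa
import numpy as np

def audio_to_frames(path: str, sr: int = 16000, n_mels: int = 80) -> np.ndarray:
    """Load speech and return a (frames, n_mels) log-mel feature sequence."""
    y, sr = librosa.load(path, sr=sr)                  # mono waveform
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels,
                                         n_fft=400, hop_length=160)  # 25 ms / 10 ms windows
    return librosa.power_to_db(mel).T.astype(np.float32)

frames = audio_to_frames("speech.wav")  # "speech.wav" is a placeholder path
print(frames.shape)                     # (num_frames, 80)
```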
17. A computer-readable storage medium comprising instructions that, when executed by one or more processors of a computing device, perform acts comprising:
receiving an audio sequence reflective of words uttered by a speaker by way of a microphone;
based upon the audio sequence, generating a first set of coefficients that are indicative of lips of the speaker as the speaker utters the words, wherein the first set of coefficients are generated based upon latent content variables that have been generated via a computer-implemented model, wherein the computer-implemented model has been trained without utilization of motion capture data, wherein the latent content variables are generated by the computer-implemented model without utilization of motion capture techniques;
based upon the audio sequence, generating a second set of coefficients that are indicative of facial features of the speaker other than the lips of the speaker as the speaker utters the words, wherein the second set of coefficients are generated based upon latent style variables that have been generated via the computer-implemented model, wherein the latent style variables comprise latent identity variables that are based upon identity factors of a plurality of speakers as the plurality of speakers speak and latent emotional variables that are based upon emotions of the plurality of speakers as the plurality of speakers speak, wherein the latent style variables are generated by the computer-implemented model without utilization of the motion capture techniques;
generating a third set of coefficients based upon the first set of coefficients and the second set of coefficients;
causing a visual representation of a face to be animated on a display of a second computing device that is in network communication with the computing device based upon the third set of coefficients such that movement of lips of the visual representation reflects the words uttered by the speaker while the visual representation is animated, wherein facial features of the visual representation of the face other than the lips are synced to the lips of the visual representation, and further wherein the visual representation of the face reflects an identity of the speaker and an emotion of the speaker as the speaker utters the words while the visual representation of the face is animated; and
causing the audio sequence to be played on a speaker of the second computing device concurrently with causing the visual representation of the face to be animated such that movements of the visual representation of the face are synchronized with the words of the audio sequence.
Dependent claims: 18–20.
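Claim 17 adds two elements over claims 1 and 12: rendering on a second, networked computing device, and playing the audio there concurrently so the face stays on the words. One simple way to hold that synchronization is to index animation frames off the audio playback clock rather than a free-running timer. The sketch below assumes a fixed coefficient frame rate; get_audio_playback_time and render_face are hypothetical stand-ins for the second device's audio and rendering APIs.

```python
# Sketch of audio-driven frame scheduling for the second device, assuming
# a fixed 30 fps coefficient stream; get_audio_playback_time() and
# render_face() are hypothetical stand-ins for device audio/render APIs.
import time

FPS = 30  # assumed rate of the third coefficient set

def play_synchronized(coeff_frames, get_audio_playback_time, render_face):
    """Drive the face from the audio clock so lips stay on the words."""
    last = -1
    while True:
        t = get_audio_playback_time()          # seconds into the audio sequence
        idx = int(t * FPS)
        if idx >= len(coeff_frames):
            break                              # audio (and animation) finished
        if idx != last:                        # render each frame exactly once
            render_face(coeff_frames[idx])
            last = idx
        time.sleep(1.0 / (FPS * 4))            # poll faster than the frame rate
```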
Specification