Avatar image animation using translation vectors
Abstract
Techniques are described for avatar image animation using translation vectors. An avatar image is obtained for representation on a first computing device. An autoencoder is trained, on a second computing device comprising an artificial neural network, to generate synthetic emotive faces. A plurality of translation vectors is identified, corresponding to a plurality of emotion metrics, based on the training; a bottleneck layer within the autoencoder is used to identify the translation vectors. A subset of the plurality of translation vectors is applied to the avatar image, wherein the subset represents an emotion metric input. The emotion metric input is obtained from facial analysis of an individual. An animated avatar image is generated for the first computing device, based on the applying, wherein the animated avatar image is reflective of the emotion metric input and the avatar image includes vocalizations.
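The pipeline the abstract describes can be read as latent-space arithmetic: encode the avatar to a bottleneck code, shift that code by an emotion translation vector, and decode the shifted code into an emotive face. The sketch below illustrates that reading only; it is not the patented implementation. The network sizes, the Autoencoder class, and the animate_avatar helper are assumptions chosen for clarity, written in PyTorch.

```python
# Hypothetical sketch of the described pipeline. All names and sizes here
# are illustrative assumptions, not the patent's code.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Convolutional autoencoder; the bottleneck holds the latent face code.
    Assumes 3x64x64 input images."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),                   # bottleneck layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def animate_avatar(model, avatar_image, translation_vector, intensity=1.0):
    """Apply an emotion translation vector in the bottleneck space and decode."""
    with torch.no_grad():
        z_neutral = model.encoder(avatar_image)          # bottleneck code of the avatar
        z_emotive = z_neutral + intensity * translation_vector
        return model.decoder(z_emotive)                  # animated (emotive) avatar image
```

Under these assumptions, animate_avatar(model, avatar, vectors["happy"]) would yield a happy rendering of the avatar, and scaling intensity between 0 and 1 interpolates from the neutral face toward the full expression.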
Claims (23)
1. A computer-implemented method for image generation comprising:
obtaining an avatar image for representation on a first computing device;
training an autoencoder, on a second computing device comprising an artificial neural network, to generate synthetic emotive faces;
identifying, using a bottleneck layer within the autoencoder, a plurality of translation vectors corresponding to a plurality of emotion metrics, based on the training;
generating a first set of bottleneck layer parameters, from the bottleneck layer, learned for a neutral face;
applying a subset of the plurality of translation vectors to the avatar image, wherein the subset represents an emotion metric input; and
generating an animated avatar image for the first computing device, based on the applying, wherein the animated avatar image is reflective of the emotion metric input.
Dependent claims: 2-21.
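One plausible reading of the identifying and generating steps in claim 1 is a mean-difference construction: encode labeled training faces, take the mean bottleneck code of the neutral faces as the "first set of bottleneck layer parameters," and take each emotion's mean code minus that neutral mean as the emotion's translation vector. The sketch below illustrates this reading; the function name, label scheme, and averaging strategy are assumptions for illustration, not claim language.

```python
# Hedged sketch of one way to derive per-emotion translation vectors
# from bottleneck activations. Dataset fields and names are assumed.
import torch

def identify_translation_vectors(model, faces, emotion_labels, neutral_label="neutral"):
    """Return {emotion: translation vector} from bottleneck codes.

    faces: (N, 3, H, W) tensor of training faces
    emotion_labels: list of N label strings, one per face
    """
    with torch.no_grad():
        z = model.encoder(faces)  # bottleneck codes for every training face
    # Bottleneck parameters learned for the neutral face: mean neutral code.
    neutral_idx = [i for i, lbl in enumerate(emotion_labels) if lbl == neutral_label]
    z_neutral = z[neutral_idx].mean(dim=0)
    vectors = {}
    for emotion in set(emotion_labels) - {neutral_label}:
        idx = [i for i, lbl in enumerate(emotion_labels) if lbl == emotion]
        # Translation vector = mean emotive code minus mean neutral code.
        vectors[emotion] = z[idx].mean(dim=0) - z_neutral
    return vectors
```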
22. A computer program product embodied in a non-transitory computer readable medium for image generation, the computer program product comprising code which causes one or more processors to perform operations of:
obtaining an avatar image for representation on a first computing device;
training an autoencoder, on a second computing device comprising an artificial neural network, to generate synthetic emotive faces;
identifying, using a bottleneck layer within the autoencoder, a plurality of translation vectors corresponding to a plurality of emotion metrics, based on the training;
generating a first set of bottleneck layer parameters, from the bottleneck layer, learned for a neutral face;
applying a subset of the plurality of translation vectors to the avatar image, wherein the subset represents an emotion metric input; and
generating an animated avatar image for the first computing device, based on the applying, wherein the animated avatar image is reflective of the emotion metric input.
23. A computer system for image generation comprising:
a memory which stores instructions;
one or more processors attached to the memory, wherein the one or more processors, when executing the instructions which are stored, are configured to:
obtain an avatar image for representation on a first computing device;
train an autoencoder, on a second computing device comprising an artificial neural network, to generate synthetic emotive faces;
identify, using a bottleneck layer within the autoencoder, a plurality of translation vectors corresponding to a plurality of emotion metrics, based on the training;
generate a first set of bottleneck layer parameters, from the bottleneck layer, learned for a neutral face;
apply a subset of the plurality of translation vectors to the avatar image, wherein the subset represents an emotion metric input; and
generate an animated avatar image for the first computing device, based on the applying, wherein the animated avatar image is reflective of the emotion metric input.
Specification