Systems and methods for creating animations using human faces
First Claim
1. A computer-implemented method for generating animation messages using video images of human faces, the method comprising:
receiving a sequence of video frames;
detecting a human face within the sequence of video frames;
tracking, within the sequence of video frames, facial expression changes in the detected human face;
identifying, for each of the tracked facial expression changes, a facial expression from a training set of a plurality of different facial expressions and a plurality of different user faces that most closely corresponds to the tracked facial expression change;
mapping, by at least one processor, each of the identified facial expressions to three-dimensional motion data such that the identified facial expressions correspond with the tracked facial expression changes from the sequence of video frames; and
applying the three-dimensional motion data to a three-dimensional character model to animate a face of the three-dimensional character model.
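The claimed steps amount to a per-frame nearest-neighbour lookup followed by a motion-data mapping. A minimal sketch of that idea is below; the training set, descriptors, and blendshape weights are illustrative assumptions, not data from the patent:

```python
# Hypothetical training set: each entry pairs an expression descriptor
# (e.g. normalised facial-landmark offsets, gathered from several
# different user faces) with 3D motion data for the character rig
# (here, a pair of blendshape weights). All values are made up.
TRAINING_SET = [
    ("neutral", (0.0, 0.0, 0.0, 0.0), (0.0, 0.0)),
    ("smile",   (0.9, 0.1, 0.0, 0.0), (1.0, 0.0)),
    ("frown",   (0.0, 0.0, 0.8, 0.2), (0.0, 1.0)),
]

def identify_expression(tracked_change):
    """Identify the training-set expression that most closely corresponds
    to one tracked facial expression change (nearest neighbour in
    descriptor space)."""
    def distance(entry):
        _, descriptor, _ = entry
        return sum((a - b) ** 2 for a, b in zip(descriptor, tracked_change))
    return min(TRAINING_SET, key=distance)

def animate_face(tracked_changes):
    """Map each identified expression to its 3D motion data, yielding a
    per-frame motion timeline to apply to the character model."""
    timeline = []
    for change in tracked_changes:
        label, _, motion = identify_expression(change)
        timeline.append((label, motion))
    return timeline

# One tracked change per video frame: a near-neutral frame, then a smile.
timeline = animate_face([(0.0, 0.0, 0.0, 0.1), (0.85, 0.15, 0.0, 0.0)])
```

A real implementation would derive the descriptors from tracked facial landmarks rather than hand-written tuples, but the identify-then-map structure matches the claim's recited order of steps.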
Abstract
Systems and methods in accordance with embodiments of the invention enable collaborative creation, transmission, sharing, non-linear exploration, and modification of animated video messages. One embodiment includes a video camera, a processor, a network interface, and storage containing an animated message application and a 3D character model. In addition, the animated message application configures the processor to: capture a video sequence using the video camera; detect a human face within a sequence of video frames; track changes in human facial expression of a human face detected within a sequence of video frames; map tracked changes in human facial expression to motion data, where the motion data is generated to animate the 3D character model; apply motion data to animate the 3D character model; render an animation of the 3D character model into a file as encoded video; and transmit the encoded video to a remote device via the network interface.
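The abstract describes an application-level pipeline: capture, track, animate, render to encoded video, then transmit. A structural sketch of that flow, with every function a stand-in stub (none of these names come from the patent):

```python
import io

def capture_video():
    """Stand-in for capturing a video sequence from the device camera."""
    return ["frame0", "frame1"]

def face_motion(frames):
    """Stand-in for the detect/track/map stages: per frame, produce the
    motion data (e.g. blendshape weights) that animates the 3D model."""
    return [{"frame": f, "blendshape_weights": [0.0]} for f in frames]

def render_to_encoded_video(motion, model="character_model"):
    """A real implementation would drive a 3D renderer and a video
    encoder; here we just serialise the animated frames into a buffer."""
    buf = io.BytesIO()
    for m in motion:
        buf.write(repr((model, m)).encode())
    return buf.getvalue()

def send_animated_message(transmit):
    """Run the whole pipeline and hand the encoded video to a transport
    callback (standing in for the network interface)."""
    frames = capture_video()
    motion = face_motion(frames)
    encoded = render_to_encoded_video(motion)
    transmit(encoded)
    return encoded

sent = []
send_animated_message(sent.append)
```

The point of the sketch is the ordering of stages the abstract recites, not any particular renderer or codec.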
20 Claims
1. A computer-implemented method for generating animation messages using video images of human faces, the method comprising:
receiving a sequence of video frames;
detecting a human face within the sequence of video frames;
tracking, within the sequence of video frames, facial expression changes in the detected human face;
identifying, for each of the tracked facial expression changes, a facial expression from a training set of a plurality of different facial expressions and a plurality of different user faces that most closely corresponds to the tracked facial expression change;
mapping, by at least one processor, each of the identified facial expressions to three-dimensional motion data such that the identified facial expressions correspond with the tracked facial expression changes from the sequence of video frames; and
applying the three-dimensional motion data to a three-dimensional character model to animate a face of the three-dimensional character model.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
13. A computer-implemented method for generating animations using video images of human faces, the method comprising:
capturing a sequence of video frames;
detecting a facial expression within the sequence of video frames;
matching, by at least one processor, the detected facial expression to a stored facial expression from a training set of a plurality of different facial expressions and a plurality of different user faces that most closely corresponds to the detected facial expression;
obtaining three-dimensional motion data associated with the stored facial expression that matches the detected facial expression; and
applying the three-dimensional motion data to a three-dimensional character model to animate a face of the three-dimensional character model such that the animation replicates the detected facial expression on the three-dimensional character model.
Dependent claims: 14, 15, 16, 17
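Claim 13 differs from claim 1 in that a single detected expression is matched against stored expressions whose 3D motion data is precomputed, then retrieved. A minimal sketch of that match-then-retrieve step; the store, descriptors, and clip names are hypothetical:

```python
# Hypothetical store: expression descriptors (drawn from many different
# user faces) paired with precomputed 3D motion clips for the character.
EXPRESSION_STORE = {
    "smile":    {"descriptor": (1.0, 0.0), "motion_clip": "smile_clip"},
    "surprise": {"descriptor": (0.0, 1.0), "motion_clip": "surprise_clip"},
}

def match_expression(detected):
    """Match the detected expression descriptor to the closest stored
    facial expression (squared Euclidean distance)."""
    def distance(name):
        stored = EXPRESSION_STORE[name]["descriptor"]
        return sum((a - b) ** 2 for a, b in zip(stored, detected))
    return min(EXPRESSION_STORE, key=distance)

def motion_for(detected):
    """Obtain the 3D motion data associated with the matched stored
    expression, ready to apply to the character model."""
    return EXPRESSION_STORE[match_expression(detected)]["motion_clip"]
```

Because the motion data is keyed to stored expressions rather than recomputed per tracked change, applying the retrieved clip replicates the detected expression on the character, as the claim requires.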
18. A system for generating animations using video images of human faces, the system comprising:
at least one processor; and
at least one non-transitory computer readable storage medium storing instructions that, when executed by the at least one processor, cause the system to:
receive a sequence of video frames;
detect a human face within the sequence of video frames;
track, within the sequence of video frames, facial expression changes in the detected human face;
identify, for each of the tracked facial expression changes, a facial expression from a training set of a plurality of different facial expressions and a plurality of different user faces that most closely corresponds to the tracked facial expression change;
map each identified facial expression to three-dimensional motion data such that the identified facial expressions correspond with the tracked facial expression changes from the sequence of video frames; and
apply the three-dimensional motion data to a three-dimensional character model to animate a face of the three-dimensional character model.
Dependent claims: 19, 20
Specification