Method and system for aligning natural and synthetic video to speech synthesis
Abstract
According to MPEG-4's TTS architecture, facial animation can be driven by two streams simultaneously: text, and Facial Animation Parameters. In this architecture, text input is sent to a Text-To-Speech converter at a decoder that drives the mouth shapes of the face. Facial Animation Parameters are sent from an encoder to the face over the communication channel. The present invention includes codes (known as bookmarks) in the text string transmitted to the Text-to-Speech converter; these bookmarks are placed between words as well as inside them. According to the present invention, the bookmarks carry an encoder time stamp. Due to the nature of text-to-speech conversion, the encoder time stamp does not relate to real-world time and should be interpreted as a counter. In addition, the Facial Animation Parameter stream carries the same encoder time stamp found in the bookmark of the text. The system of the present invention reads the bookmark and provides the encoder time stamp, as well as a real-time time stamp, to the facial animation system. Finally, the facial animation system associates the correct facial animation parameter with the real-time time stamp, using the encoder time stamp of the bookmark as a reference.
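The mechanism in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the patent's actual encoding: the bookmark syntax `<FAP n>`, the function names, and the dictionary-based FAP stream are all assumptions made for the example. It shows bookmarks carrying an encoder time stamp (ETS, a counter) being placed in the text stream, and a decoder pairing each ETS with a real-time stamp and the matching Facial Animation Parameter.

```python
import re

# Hypothetical bookmark syntax "<FAP ets>"; the ETS is a counter,
# not real-world time (per the abstract).
BOOKMARK = re.compile(r"<FAP (\d+)>")

def encode_text_stream(items):
    """Build a text stream with embedded bookmarks.
    items: plain words, or ("FAP", ets) tuples marking mimic positions."""
    parts = []
    for item in items:
        if isinstance(item, tuple):
            parts.append(f"<FAP {item[1]}>")  # place the bookmark inline
        else:
            parts.append(item)
    return " ".join(parts)

def decode_text_stream(text, fap_stream, clock):
    """Scan the text for bookmarks; pair each ETS with a real-time stamp
    (from `clock`) and the FAP that carries the same ETS."""
    schedule = []
    for match in BOOKMARK.finditer(text):
        ets = int(match.group(1))
        fap = fap_stream[ets]  # the FAP stream carries the same ETS
        schedule.append((clock(), ets, fap))
    return schedule
```

For example, `encode_text_stream(["Hello", ("FAP", 0), "world"])` yields the string `"Hello <FAP 0> world"`, and decoding it against a FAP stream keyed by ETS produces a schedule of (real-time stamp, ETS, parameter) triples for the facial animation system.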
5 Claims
1. A method for encoding a facial animation including at least one facial mimic and speech in the form of a text stream, comprising the steps of:
a) assigning a predetermined code to at least one facial mimic;
b) placing the predetermined code within a text stream, wherein the predetermined code points to a stream of facial mimics thereby indicating a synchronization relationship between the text stream and the facial mimic stream; and
c) encoding the text stream.
2. A data stream comprising facial animation data including at least one facial mimic and speech information, the data stream comprising:
a) a text stream containing speech information;
b) a facial mimic stream separate from the text stream and containing at least one facial mimic; and
c) means placed within the text stream for pointing to the facial mimic stream to indicate a synchronization relationship between the text stream and the facial mimic stream.
3. A method for encoding a facial animation, comprising steps of:
a) generating a facial mimic stream containing at least one facial mimic;
b) generating a text stream containing speech information and being separate from the facial mimic stream;
c) placing within the text stream a means for pointing to the facial mimic stream to thereby indicate a synchronization relationship between the text stream and the facial mimic stream; and
d) encoding the text stream.
4. A method for decoding a facial animation including speech and at least one facial mimic, comprising the steps of:
-
a) monitoring a text stream for a predetermined code corresponding to a facial mimic, wherein the predetermined code points to a stream of facial mimics established during an encoding process of the text stream, thereby indicating a synchronization relationship between the text stream and the facial mimic stream; and
b) sending a signal to a visual decoder to start a particular facial mimic upon detecting the presence of the predetermined code.
Specification