Coarticulation Method for Audio-Visual Text-to-Speech Synthesis
First Claim
1. A method of synchronizing synthesized speech and animation, the method causing a computing device to perform steps comprising:
- associating a received stimulus with a phoneme having corresponding mouth parameters;
- selecting a parameter set corresponding to the mouth parameters from an animation library, the parameter set representing frame segments; and
- generating, via a noise-producing entity, speech associated with the stimulus that is synchronized with the frame segments.
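The claimed steps can be sketched as a small pipeline: map a stimulus (here, text) to phonemes, look up each phoneme's mouth parameters, pull the matching frame segments from an animation library, and pair the result with synthesized audio. This is a minimal illustrative sketch; the dictionaries, parameter names, and the `tts(...)` stand-in are all assumptions, not the patented implementation.

```python
# Toy data standing in for a pronunciation dictionary, a table of mouth
# (viseme) parameters, and an animation library of frame segments.
# All entries are illustrative assumptions.
PHONEME_MAP = {"hi": ["HH", "AY"]}
MOUTH_PARAMS = {"HH": {"jaw_open": 0.2, "lip_round": 0.0},
                "AY": {"jaw_open": 0.8, "lip_round": 0.1}}
ANIMATION_LIBRARY = {"HH": ["frame_hh_0", "frame_hh_1"],
                     "AY": ["frame_ay_0", "frame_ay_1", "frame_ay_2"]}

def synthesize_av(stimulus):
    """Return (audio, frame_segments), with one segment per phoneme so the
    speech and animation stay synchronized."""
    phonemes = PHONEME_MAP[stimulus]          # associate stimulus with phonemes
    segments = []
    for ph in phonemes:
        params = MOUTH_PARAMS[ph]             # corresponding mouth parameters
        frames = ANIMATION_LIBRARY[ph]        # parameter set -> frame segments
        segments.append({"phoneme": ph, "params": params, "frames": frames})
    audio = f"tts({stimulus})"                # stand-in for a real TTS back end
    return audio, segments
```

Keeping one segment record per phoneme makes the audio/animation alignment explicit: each segment carries the mouth parameters and frames that play while that phoneme is voiced.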
Abstract
A method for generating animated sequences of talking heads in text-to-speech applications wherein a processor samples a plurality of frames comprising image samples. The processor reads first data comprising one or more parameters associated with noise-producing-orifice images of sequences of at least three concatenated phonemes which correspond to an input stimulus. The processor reads, based on the first data, second data comprising images of a noise-producing entity. The processor generates an animated sequence of the noise-producing entity.
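The abstract's "sequences of at least three concatenated phonemes" is the coarticulation device: each mouth image is indexed by a phoneme together with its left and right neighbors, not by the phoneme alone. A minimal sketch of that windowing, with the function name and phoneme labels as illustrative assumptions:

```python
def triphone_windows(phonemes):
    """Return overlapping triples of concatenated phonemes, so each mouth
    image can be selected with left and right coarticulation context."""
    return [tuple(phonemes[i:i + 3]) for i in range(len(phonemes) - 2)]
```

For example, the phoneme string S-IY-L-AH yields the triples (S, IY, L) and (IY, L, AH), so the mouth shape chosen for IY reflects both the preceding S and the following L.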
20 Claims
1. A method of synchronizing synthesized speech and animation, the method causing a computing device to perform steps comprising:
- associating a received stimulus with a phoneme having corresponding mouth parameters;
- selecting a parameter set corresponding to the mouth parameters from an animation library, the parameter set representing frame segments; and
- generating, via a noise-producing entity, speech associated with the stimulus that is synchronized with the frame segments.

Dependent claims: 2, 3, 4, 5, 6, 7, 16, 17, 18, 19, 20.
8. A system for synchronizing synthesized speech and animation, the system comprising:
- a processor;
- a module controlling the processor to associate a received stimulus with a phoneme having corresponding mouth parameters;
- a module controlling the processor to select a parameter set corresponding to the mouth parameters from an animation library, the parameter set representing frame segments; and
- a module controlling the processor to generate, via a noise-producing entity, speech associated with the stimulus that is synchronized with the frame segments.

Dependent claims: 9, 10, 11, 12, 13, 14.
15. A method of synchronizing synthesized speech and animation, the method causing a computing device to perform steps comprising:
- associating a received stimulus with a phoneme having corresponding mouth parameters;
- selecting a parameter set corresponding to the mouth parameters from an animation library, the parameter set representing frame segments; and
- generating, via a noise-producing entity, speech associated with the stimulus that is synchronized with the frame segments.