System and method for unified normalization in text-to-speech and automatic speech recognition
Abstract
A system, method, and computer-readable storage device use a single set of normalization protocols and a single language lexicon (or dictionary) for both text-to-speech (TTS) and automatic speech recognition (ASR). The system receives input (either text to be converted to speech or ASR training text), then normalizes the input. Using the normalized input and a dictionary configured for both automatic speech recognition and text-to-speech processing, the system produces output that is either phonemes corresponding to the input or text corresponding to the input for training the ASR system. When the output is phonemes corresponding to the input, the system generates speech by performing prosody generation and unit selection synthesis using the phonemes. When the output is text corresponding to the input, the system trains both an acoustic model and a language model for use in future speech recognition.
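The pipeline the abstract describes can be sketched as a single normalization front end that branches into a TTS path or an ASR path. This is an illustrative sketch only: the function names (`normalize`, `process`) and the toy expansion table are assumptions, not anything recited in the patent.

```python
def normalize(text):
    """Apply one shared set of normalization protocols for both ASR and TTS.
    A minimal stand-in: lowercase the text and expand a few abbreviations."""
    expansions = {"dr.": "doctor", "st.": "street", "&": "and"}
    tokens = text.lower().split()
    return " ".join(expansions.get(t, t) for t in tokens)

def process(text, mode, dictionary):
    """Route normalized text to the TTS or ASR branch of the pipeline.

    dictionary maps a normalized word to its phoneme list and is shared by
    both processes, mirroring the single lexicon of the abstract.
    """
    normalized = normalize(text)
    if mode == "tts":
        # Look up phonemes in the shared dictionary; the result would feed
        # prosody generation and unit selection synthesis.
        return [dictionary.get(word, []) for word in normalized.split()]
    # mode == "asr": the normalized text would train the acoustic model
    # and the language model.
    return normalized
```

Because both branches pass through the same `normalize` call, a string such as "Dr. Smith" is expanded identically whether it is being synthesized or used as ASR training text, which is the point of the unified protocols.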
27 Citations
17 Claims
1. A method comprising:
receiving first text for training an automatic speech recognition process;
receiving second text to be converted to speech via a text-to-speech process;
normalizing, via a processor, the first text using a single set of normalization protocols that apply to both the automatic speech recognition process and the text-to-speech process, to yield first normalized text;
normalizing, via the processor, the second text using the single set of normalization protocols that apply to both the automatic speech recognition process and the text-to-speech process, to yield second normalized text, wherein normalized text comprises the first normalized text and the second normalized text;
generating, via the processor and using the normalized text and a dictionary configured for both the automatic speech recognition process and the text-to-speech process, output comprising one of output text corresponding to the first text and phonemes corresponding to the second text, wherein the dictionary comprises a rate of use of the automatic speech recognition process, a rate of use of the text-to-speech process, accent rules, language preference rules and location-based rules configured for both the automatic speech recognition process and the text-to-speech process;
when the output comprises the phonemes corresponding to the second text, generating speech by performing prosody generation and unit selection synthesis using the phonemes and the dictionary;
when the output comprises the output text corresponding to the first text, training both an acoustic model and a language model using the output text and the dictionary; and
performing speech recognition using the acoustic model and the language model.
View Dependent Claims (2, 3, 4, 5, 6)
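Claim 1 recites a dictionary that carries, for each of the two processes, usage rates plus accent, language-preference, and location-based rules. One possible data layout for a single entry is sketched below; the class name, field names, and sample values are hypothetical, chosen only to mirror the elements the claim enumerates.

```python
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    """One entry in a dictionary shared by the ASR and TTS processes."""
    word: str
    phonemes: list                 # pronunciation used by both processes
    asr_use_rate: float = 0.0      # rate of use of the ASR process
    tts_use_rate: float = 0.0      # rate of use of the TTS process
    accent_rules: dict = field(default_factory=dict)
    language_preference_rules: dict = field(default_factory=dict)
    location_rules: dict = field(default_factory=dict)

# Example entry with an accent-specific alternate pronunciation.
entry = DictionaryEntry(
    word="tomato",
    phonemes=["T", "AH0", "M", "EY1", "T", "OW2"],
    asr_use_rate=0.6,
    tts_use_rate=0.4,
    accent_rules={"en-GB": ["T", "AH0", "M", "AA1", "T", "OW2"]},
)
```

Keeping all of these rule sets on one record is what lets a single lookup serve both the phoneme-generation path and the model-training path.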
7. A system comprising:
a processor; and
a computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations comprising:
receiving first text for training an automatic speech recognition process;
receiving second text to be converted to speech via a text-to-speech process;
normalizing the first text using a single set of normalization protocols that apply to both the automatic speech recognition process and the text-to-speech process, to yield first normalized text;
normalizing the second text using the single set of normalization protocols that apply to both the automatic speech recognition process and the text-to-speech process, to yield second normalized text, wherein normalized text comprises the first normalized text and the second normalized text;
generating, using the normalized text and a dictionary configured for both the automatic speech recognition process and the text-to-speech process, output comprising one of output text corresponding to the first text and phonemes corresponding to the second text, wherein the dictionary comprises a rate of use of the automatic speech recognition process, a rate of use of the text-to-speech process, accent rules, language preference rules and location-based rules configured for both the automatic speech recognition process and the text-to-speech process;
when the output comprises the phonemes corresponding to the second text, generating speech by performing prosody generation and unit selection synthesis using the phonemes and the dictionary;
when the output comprises the output text corresponding to the first text, training both an acoustic model and a language model using the output text and the dictionary; and
performing speech recognition using the acoustic model and the language model.
View Dependent Claims (8, 9, 10, 11, 12)
13. A computer-readable storage device having instructions stored which, when executed by a computing device, cause the computing device to perform operations comprising:
receiving first text for training an automatic speech recognition process;
receiving second text to be converted to speech via a text-to-speech process;
normalizing the first text using a single set of normalization protocols that apply to both the automatic speech recognition process and the text-to-speech process, to yield first normalized text;
normalizing the second text using the single set of normalization protocols that apply to both the automatic speech recognition process and the text-to-speech process, to yield second normalized text, wherein normalized text comprises the first normalized text and the second normalized text;
generating, using the normalized text and a dictionary configured for both the automatic speech recognition process and the text-to-speech process, output comprising one of output text corresponding to the first text and phonemes corresponding to the second text, wherein the dictionary comprises a rate of use of the automatic speech recognition process, a rate of use of the text-to-speech process, accent rules, language preference rules and location-based rules configured for both the automatic speech recognition process and the text-to-speech process;
when the output comprises the phonemes corresponding to the second text, generating speech by performing prosody generation and unit selection synthesis using the phonemes and the dictionary;
when the output comprises the output text corresponding to the first text, training both an acoustic model and a language model using the output text and the dictionary; and
performing speech recognition using the acoustic model and the language model.
View Dependent Claims (14, 15, 16, 17)
Specification