Meta-data inputs to front end processing for automatic speech recognition
Abstract
A computer-implemented method is described for front end speech processing for automatic speech recognition. A sequence of speech features which characterize an unknown speech input provided on an audio input channel and associated meta-data which characterize the audio input channel are received. The speech features are transformed with a computer process that uses a trained mapping function controlled by the meta-data, and automatic speech recognition is performed of the transformed speech features.
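For illustration, the pipeline the abstract describes can be sketched as follows. All names, conditions, and parameter values here are invented for illustration; the trained mapping functions are stand-ins (simple affine transforms), not the patented implementation:

```python
import numpy as np

# Sketch: a trained, meta-data-keyed mapping is applied to incoming speech
# features before recognition. Keys and values are illustrative placeholders.
# One mapping per (channel, codec, app-type) condition; each is y = W @ x + b.
TRAINED_MAPPINGS = {
    ("bluetooth", "amr", "dictation"): (np.eye(3) * 0.9, np.full(3, 0.1)),
    ("handset", "opus", "search"): (np.eye(3) * 1.1, np.zeros(3)),
}

def transform_features(features, meta):
    """Transform each feature vector with the mapping selected by meta-data."""
    key = (meta["channel"], meta["codec"], meta["app_type"])
    W, b = TRAINED_MAPPINGS[key]
    return np.array([W @ x + b for x in features])

def recognize(transformed):
    # Stand-in showing where automatic speech recognition would run.
    return f"decoded {len(transformed)} frames"

features = np.ones((4, 3))  # 4 frames of 3-dim features
meta = {"channel": "handset", "codec": "opus", "app_type": "search"}
out = transform_features(features, meta)
print(recognize(out))  # decoded 4 frames
```

The point of the sketch is only the control flow: the meta-data selects which trained transform is applied, and recognition runs on the transformed features.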
20 Claims
1. A method comprising:
- receiving, by a computing device, a sequence of speech features that characterize an unknown speech input provided on an audio input channel controlled by an application executing on the computing device;
- receiving meta-data that characterizes the audio input channel, an audio codec applied when generating the sequence of speech features, and a type of the application;
- transforming the sequence of speech features using one or more trained mapping functions including a feature-space maximum mutual information (fMMI) mapping function, the one or more trained mapping functions controlled by the meta-data that characterizes the audio input channel, the audio codec applied when generating the sequence of speech features, and the type of the application, the fMMI mapping function using neural network based posterior estimates that use the meta-data as input; and
- performing automatic speech recognition of the transformed speech features.

Dependent claims: 2-12.
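The fMMI element of claim 1 can be loosely sketched as a posterior-weighted feature offset, with the meta-data entering the posterior network's input. This is a toy approximation of the general shape of such a transform, with random placeholder parameters, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLASSES = 3, 4

# Toy one-layer "network" producing posterior estimates via softmax; its
# input is the feature vector concatenated with a 2-dim meta-data encoding.
W_nn = rng.standard_normal((N_CLASSES, DIM + 2))

def posteriors(x, meta_vec):
    z = W_nn @ np.concatenate([x, meta_vec])
    e = np.exp(z - z.max())
    return e / e.sum()

# "Trained" fMMI offsets, one column per posterior class (placeholders).
M = rng.standard_normal((DIM, N_CLASSES)) * 0.01

def fmmi_transform(x, meta_vec):
    p = posteriors(x, meta_vec)  # meta-data is part of the network input
    return x + M @ p             # posterior-weighted offset added to features

x = np.ones(DIM)
meta_vec = np.array([1.0, 0.0])  # e.g. encoded codec / channel flags
y = fmmi_transform(x, meta_vec)
print(y.shape)  # (3,)
```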
13. One or more non-transitory computer-readable media storing executable instructions that, when executed by a processor, cause a system to:
- receive a sequence of speech features that characterize an unknown speech input provided on an audio input channel controlled by an application executing on the system;
- receive meta-data that characterizes the audio input channel, an audio codec applied when generating the sequence of speech features, and a type of the application;
- transform the sequence of speech features using one or more trained mapping functions including a feature-space maximum mutual information (fMMI) mapping function, the one or more trained mapping functions controlled by the meta-data that characterizes the audio input channel, the audio codec applied when generating the sequence of speech features, and the type of the application, the fMMI mapping function using neural network based posterior estimates that use the meta-data as input, wherein transforming the sequence of speech features comprises reducing a dimensionality of the sequence of speech features; and
- perform automatic speech recognition of the transformed speech features.

Dependent claims: 14-16.
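Claim 13 adds that the transform reduces the dimensionality of the feature sequence. A minimal sketch of that element: a trained projection matrix maps each feature vector to a lower dimension. The 39-to-13 shapes and the random matrix are illustrative placeholders for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((13, 39))  # stands in for a trained projection

def reduce_dim(features):
    """Project a (frames x 39) feature sequence down to (frames x 13)."""
    return features @ P.T

seq = rng.standard_normal((100, 39))
reduced = reduce_dim(seq)
print(reduced.shape)  # (100, 13)
```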
17. A system comprising:
- at least one processor; and
- one or more non-transitory computer-readable media storing executable instructions that, when executed by the at least one processor, cause the system to:
- receive a sequence of speech features that characterize an unknown speech input provided on an audio input channel controlled by an application executing on the system;
- receive meta-data that characterizes the audio input channel, an audio codec applied when generating the sequence of speech features, a microphone type of the audio input channel, and a type of the application;
- transform the sequence of speech features using one or more trained mapping functions including a feature-space maximum mutual information (fMMI) mapping function, the one or more trained mapping functions controlled by the meta-data that characterizes the audio input channel, the audio codec applied when generating the sequence of speech features, the microphone type of the audio input channel, and the type of the application, the fMMI mapping function using neural network based posterior estimates that use the meta-data as input; and
- perform automatic speech recognition of the transformed speech features.

Dependent claims: 18-20.
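Claim 17 additionally conditions the transform on the microphone type of the audio input channel. That extra condition can be sketched as one more field in the meta-data key used to select a trained mapping (all condition names and mapping labels below are invented for illustration):

```python
# Meta-data-keyed selection, extended with microphone type per claim 17.
TRAINED = {
    ("bluetooth", "amr", "near_field", "dictation"): "mapping_A",
    ("bluetooth", "amr", "far_field", "dictation"): "mapping_B",
}

def select_mapping(meta):
    key = (meta["channel"], meta["codec"], meta["mic_type"], meta["app_type"])
    return TRAINED[key]

meta = {"channel": "bluetooth", "codec": "amr",
        "mic_type": "far_field", "app_type": "dictation"}
print(select_mapping(meta))  # mapping_B
```

The same channel, codec, and application type can thus resolve to different trained transforms depending on the microphone.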
Specification