SPEECH RECOGNITION USING NON-PARAMETRIC MODELS
First Claim
1. A method performed by data processing apparatus, the method comprising:
accessing, by the data processing apparatus, speech data that represents utterances of a particular phonetic unit occurring in a particular phonetic context that comprises one or more additional phonetic units, the speech data comprising values for multiple dimensions for each of the utterances;
determining, by the data processing apparatus, boundaries for a set of quantiles for each of the multiple dimensions based on the speech data that represents utterances of the particular phonetic unit occurring in the particular phonetic context;
generating, by the data processing apparatus and for each of the quantiles, a model that models the distribution of values within the quantile;
generating, by the data processing apparatus, a multidimensional probability function that indicates, for input speech data representing speech occurring in the particular phonetic context, a probability that the input speech data will have values that correspond to a given set of the quantiles for the multiple dimensions;
storing, by the data processing apparatus, data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function as a portion of an acoustic model corresponding to the particular phonetic unit occurring in the particular phonetic context; and
using, by the data processing apparatus, the stored data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function to perform speech recognition for an utterance.
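The training steps recited in claim 1 can be sketched in code. The following is a hypothetical illustration only, not the patent's method: it assumes equal-probability quantiles, simple Gaussian within-quantile models, and an empirical joint table as the multidimensional probability function; the function and variable names are invented for the sketch.

```python
import numpy as np

def train_quantile_model(speech_data, num_quantiles=4):
    """Hypothetical sketch: speech_data has shape (num_utterances,
    num_dimensions), holding feature values for utterances of one
    phonetic unit in one phonetic context."""
    num_dims = speech_data.shape[1]

    # Determine quantile boundaries for each dimension.
    probs = np.linspace(0, 1, num_quantiles + 1)
    boundaries = [np.quantile(speech_data[:, d], probs) for d in range(num_dims)]

    # For each quantile of each dimension, model the distribution of
    # values within the quantile (here, a Gaussian mean/std pair).
    models = []
    for d in range(num_dims):
        dim_models = []
        for q in range(num_quantiles):
            lo, hi = boundaries[d][q], boundaries[d][q + 1]
            vals = speech_data[(speech_data[:, d] >= lo) &
                               (speech_data[:, d] <= hi), d]
            dim_models.append((vals.mean(), vals.std() + 1e-6))
        models.append(dim_models)

    # Build a multidimensional probability function: the empirical
    # probability that an utterance falls in a given tuple of quantiles.
    indices = np.stack(
        [np.clip(np.searchsorted(boundaries[d][1:-1], speech_data[:, d],
                                 side="right"), 0, num_quantiles - 1)
         for d in range(num_dims)], axis=1)
    counts = {}
    for row in map(tuple, indices):
        counts[row] = counts.get(row, 0) + 1
    joint_prob = {k: v / len(speech_data) for k, v in counts.items()}

    # Store boundaries, per-quantile models, and the joint probability
    # table as the acoustic-model portion for this unit and context.
    return {"boundaries": boundaries, "models": models, "joint": joint_prob}

rng = np.random.default_rng(0)
model = train_quantile_model(rng.normal(size=(200, 3)))
print(round(sum(model["joint"].values()), 6))  # → 1.0
```

The non-parametric character claimed above comes from the quantile boundaries and the joint table being derived directly from the training data, rather than from a fixed parametric family over the whole feature space.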
Abstract
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for using non-parametric models in speech recognition. In some implementations, speech data is accessed. The speech data represents utterances of a particular phonetic unit occurring in a particular phonetic context, and the speech data includes values for multiple dimensions. Boundaries are determined for a set of quantiles for each of the multiple dimensions. Models for the distribution of values within the quantiles are generated. A multidimensional probability function is generated. Data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function are stored.
23 Claims
1. A method performed by data processing apparatus, the method comprising:
accessing, by the data processing apparatus, speech data that represents utterances of a particular phonetic unit occurring in a particular phonetic context that comprises one or more additional phonetic units, the speech data comprising values for multiple dimensions for each of the utterances;
determining, by the data processing apparatus, boundaries for a set of quantiles for each of the multiple dimensions based on the speech data that represents utterances of the particular phonetic unit occurring in the particular phonetic context;
generating, by the data processing apparatus and for each of the quantiles, a model that models the distribution of values within the quantile;
generating, by the data processing apparatus, a multidimensional probability function that indicates, for input speech data representing speech occurring in the particular phonetic context, a probability that the input speech data will have values that correspond to a given set of the quantiles for the multiple dimensions;
storing, by the data processing apparatus, data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function as a portion of an acoustic model corresponding to the particular phonetic unit occurring in the particular phonetic context; and
using, by the data processing apparatus, the stored data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function to perform speech recognition for an utterance.
Dependent claims: 2, 3, 4, 5, 6, 7, 21, 22
8. A system comprising:
one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
receiving, from a client device over a computer network, audio data that describes an utterance of a user;
accessing stored data of an acoustic model that was generated by:
accessing speech data that represents utterances of a particular phonetic unit occurring in a particular phonetic context that comprises one or more additional phonetic units, the speech data comprising values for multiple dimensions for each of the utterances;
determining boundaries for a set of quantiles for each of the multiple dimensions based on the speech data that represents utterances of the particular phonetic unit occurring in the particular phonetic context;
generating, for each of the quantiles, a model that models the distribution of values within the quantile;
generating a multidimensional probability function that indicates, for input speech data representing speech occurring in the particular phonetic context, a probability that the input speech data will have values that correspond to a given set of the quantiles for the multiple dimensions;
storing data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function as a portion of an acoustic model corresponding to the particular phonetic unit occurring in the particular phonetic context;
using the stored data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function to determine a transcription for the utterance; and
providing, to the client device and over the computer network, the transcription for the utterance.
Dependent claims: 9, 10, 11, 12, 13, 14
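The recognition-time use of the stored data in claim 8 can also be sketched. This is a hypothetical illustration, not the patent's implementation: the `stored` dict, its keys, and `score_frame` are invented names, and the sketch assumes the stored multidimensional probability function is a lookup table keyed by tuples of quantile indices.

```python
import numpy as np

# Hypothetical stored acoustic-model portion: quantile boundaries for
# two feature dimensions (4 quantiles each) and a joint probability
# table over tuples of quantile indices.
stored = {
    "boundaries": [np.array([-3.0, -0.7, 0.0, 0.7, 3.0]),
                   np.array([-3.0, -0.7, 0.0, 0.7, 3.0])],
    "joint": {(1, 2): 0.25, (2, 1): 0.25, (0, 0): 0.5},
}

def score_frame(model, frame):
    """Map each dimension of an input feature vector to its quantile
    index using the stored boundaries, then look up the probability
    stored for that tuple of quantiles."""
    idx = tuple(
        int(np.searchsorted(model["boundaries"][d][1:-1], frame[d],
                            side="right"))
        for d in range(len(frame)))
    return model["joint"].get(idx, 1e-9)  # floor for unseen combinations

print(score_frame(stored, [-0.3, 0.5]))  # → 0.25 (quantiles (1, 2))
```

In a full recognizer, scores like this for each phonetic unit in context would feed a decoder that selects the transcription returned to the client device.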
15. A non-transitory computer storage medium storing a computer program, the program comprising instructions that when executed by one or more computers cause the one or more computers to perform operations comprising:
receiving, from a client device over a computer network, audio data that describes an utterance of a user;
accessing stored data of an acoustic model that was generated by:
accessing speech data that represents utterances of a particular phonetic unit occurring in a particular phonetic context that comprises one or more additional phonetic units, the speech data comprising values for multiple dimensions for each of the utterances;
determining boundaries for a set of quantiles for each of the multiple dimensions based on the speech data that represents utterances of the particular phonetic unit occurring in the particular phonetic context;
generating, for each of the quantiles, a model that models the distribution of values within the quantile;
generating a multidimensional probability function that indicates, for input speech data representing speech occurring in the particular phonetic context, a probability that the input speech data will have values that correspond to a given set of the quantiles for the multiple dimensions;
storing data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function as a portion of an acoustic model corresponding to the particular phonetic unit occurring in the particular phonetic context;
using the stored data indicating the boundaries of the quantiles, the models for the distribution of values in the quantiles, and the multidimensional probability function to determine a transcription for the utterance; and
providing, to the client device and over the computer network, the transcription for the utterance.
Dependent claims: 19, 20
16.-18. (canceled)
23. The method of claim 23, wherein using the stored data to determine the transcription for the utterance comprises using the portions of the acoustic model generated by the different processing nodes to determine a transcription for the utterance.
Specification