Method and apparatus for speech recognition using neural networks with speaker adaptation
First Claim
1. A computer implemented method for speech recognition, the method implemented by one or more processors and comprising:
- receiving, by a deep neural network at a hidden layer, input speech data at a first set of nodes of the hidden layer of the deep neural network and corresponding speaker data at a second set of nodes of the hidden layer of the deep neural network, the second set of nodes serving as an extra input to the deep neural network; and
- generating, by the deep neural network, a prediction of a phoneme corresponding to the input speech data based on the corresponding speaker data, wherein the generating comprises multiplying the input speech data received at the first set of nodes with a first matrix of weighting coefficients and multiplying the speaker data received at the second set of nodes with a second matrix of weighting coefficients, multiplying the speaker data with the second matrix of weighting coefficients removing speaker variability from the input speech data.
Abstract
In a speech recognition system, deep neural networks (DNNs) are employed in phoneme recognition. While DNNs typically provide better phoneme recognition performance than other techniques, such as Gaussian mixture models (GMM), adapting a DNN to a particular speaker is a real challenge. According to at least one example embodiment, speech data and corresponding speaker data are both applied as input to a DNN. In response, the DNN generates a prediction of a phoneme based on the input speech data and the corresponding speaker data. The speaker data may be generated from the corresponding speech data.
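The hidden layer described in the abstract and claims can be illustrated with a short sketch: the hidden activation combines the speech features (through a first weight matrix) and the speaker data, such as an i-vector, (through a second weight matrix), so the speaker term can offset speaker-dependent variability. All dimensions, variable names, and the sigmoid nonlinearity below are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

SPEECH_DIM = 40    # e.g. filterbank features for one frame (assumed)
SPEAKER_DIM = 100  # e.g. i-vector length (assumed)
HIDDEN_DIM = 256   # hidden-layer width (assumed)

# First matrix of weighting coefficients, applied to the speech input nodes.
W_speech = rng.standard_normal((HIDDEN_DIM, SPEECH_DIM)) * 0.01
# Second matrix of weighting coefficients, applied to the speaker input nodes.
W_speaker = rng.standard_normal((HIDDEN_DIM, SPEAKER_DIM)) * 0.01
b = np.zeros(HIDDEN_DIM)

def hidden_layer(speech, speaker):
    """Hidden activation: sigmoid(W_speech @ x + W_speaker @ s + b).

    The speech and speaker inputs each have their own weight matrix;
    their contributions are summed before the nonlinearity.
    """
    z = W_speech @ speech + W_speaker @ speaker + b
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid nonlinearity (assumed)

x = rng.standard_normal(SPEECH_DIM)   # one frame of speech features
s = rng.standard_normal(SPEAKER_DIM)  # speaker data for that utterance

h = hidden_layer(x, s)
print(h.shape)  # (256,)
```

In a full network, `h` would feed subsequent hidden layers and finally a softmax output layer over phoneme classes; the speaker branch acts as the "extra input" recited in claim 1.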
20 Claims
1. A computer implemented method for speech recognition, the method implemented by one or more processors and comprising:
- receiving, by a deep neural network at a hidden layer, input speech data at a first set of nodes of the hidden layer of the deep neural network and corresponding speaker data at a second set of nodes of the hidden layer of the deep neural network, the second set of nodes serving as an extra input to the deep neural network; and
- generating, by the deep neural network, a prediction of a phoneme corresponding to the input speech data based on the corresponding speaker data, wherein the generating comprises multiplying the input speech data received at the first set of nodes with a first matrix of weighting coefficients and multiplying the speaker data received at the second set of nodes with a second matrix of weighting coefficients, multiplying the speaker data with the second matrix of weighting coefficients removing speaker variability from the input speech data.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10
11. An apparatus for speech recognition comprising:
- at least one processor; and
- at least one memory with computer code instructions stored thereon, the at least one processor and the at least one memory with computer code instructions being configured to cause the apparatus to:
- receive, by a deep neural network at a hidden layer, input speech data at a first set of nodes of the hidden layer of the deep neural network and corresponding speaker data at a second set of nodes of the hidden layer of the deep neural network, the second set of nodes serving as an extra input to the deep neural network; and
- generate, at an output layer of the deep neural network, a prediction of a phoneme corresponding to the input speech data based on the corresponding speaker data, wherein the generating comprises multiplying the input speech data received at the first set of nodes with a first matrix of weighting coefficients and multiplying the speaker data received at the second set of nodes with a second matrix of weighting coefficients, multiplying the speaker data with the second matrix of weighting coefficients removing speaker variability from the input speech data.

Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19
20. A non-transitory computer-readable medium with computer code instructions stored thereon, the computer code instructions being configured, when executed by a processor, to cause an apparatus to:
- receive, by a deep neural network at a hidden layer, input speech data at a first set of nodes of the hidden layer of the deep neural network and corresponding speaker data at a second set of nodes of the hidden layer of the deep neural network, the second set of nodes serving as an extra input to the deep neural network; and
- generate, by the deep neural network, a prediction of a phoneme corresponding to the input speech data based on the corresponding speaker data, wherein the generating comprises multiplying the input speech data received at the first set of nodes with a first matrix of weighting coefficients and multiplying the speaker data received at the second set of nodes with a second matrix of weighting coefficients, multiplying the speaker data with the second matrix of weighting coefficients removing speaker variability from the input speech data.
Specification