Anchored speech detection and speech recognition
First Claim
1. A computer implemented method for identifying speech from a desired speaker for automatic speech recognition (ASR), the method comprising:
receiving audio data corresponding to speech, the audio data comprising a plurality of audio frames;
processing the plurality of audio frames to determine a first plurality of feature vectors corresponding to a first portion of the audio data and a second plurality of feature vectors corresponding to a second portion of the audio data;
determining that the first plurality of feature vectors corresponds to a wakeword;
processing the first plurality of feature vectors with a recurrent neural network encoder to determine a reference feature vector corresponding to speech from a desired speaker;
processing the second plurality of feature vectors, and the reference feature vector, using a neural-network classifier to determine a first score corresponding to a first feature vector in the second plurality, the first score corresponding to a likelihood that the first feature vector corresponds to audio spoken by the desired speaker;
determining that the first score is above a threshold;
creating an indication that the first feature vector corresponds to speech from the desired speaker;
determining a first weight corresponding to the first feature vector based on the first feature vector corresponding to speech from the desired speaker; and
performing ASR using the first weight and the first feature vector.
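The claim's processing chain (wakeword frames → recurrent encoder → reference vector → per-frame classifier score → threshold → per-frame ASR weight) can be sketched as follows. This is a minimal toy illustration, not the patent's trained models: the GRU-like recurrence, random linear classifier, feature dimensions, and the 0.5 threshold are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy dimensions: 40-dim frame features, 8-dim reference embedding.
FEAT_DIM, REF_DIM = 40, 8
rng = np.random.default_rng(0)

# Stand-in for the recurrent neural network encoder: a simple tanh recurrence
# over the wakeword frames, keeping only the final hidden state as the
# fixed-size reference feature vector.
W_in = rng.standard_normal((REF_DIM, FEAT_DIM)) * 0.1
W_rec = rng.standard_normal((REF_DIM, REF_DIM)) * 0.1

def encode_reference(wakeword_frames):
    h = np.zeros(REF_DIM)
    for x in wakeword_frames:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# Stand-in for the trained neural-network classifier: scores one frame against
# the reference vector, returning a likelihood-like value in (0, 1).
W_cls = rng.standard_normal(FEAT_DIM + REF_DIM) * 0.1

def score_frame(frame, ref_vec):
    logit = W_cls @ np.concatenate([frame, ref_vec])
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

THRESHOLD = 0.5  # illustrative; the patent only requires "a threshold"

def frame_weights(frames, ref_vec):
    """Per-frame weights for ASR: 1.0 where the score exceeds the threshold
    (frame labeled as the desired speaker), else 0.0."""
    return np.array([1.0 if score_frame(f, ref_vec) > THRESHOLD else 0.0
                     for f in frames])

wakeword = rng.standard_normal((20, FEAT_DIM))  # first portion: wakeword frames
command = rng.standard_normal((50, FEAT_DIM))   # second portion: command frames

ref = encode_reference(wakeword)
weights = frame_weights(command, ref)
# An ASR front end could then use `weights` to down-weight frames scored 0.0.
```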
Abstract
A system configured to process speech commands may classify incoming audio as desired speech, undesired speech, or non-speech. Desired speech is speech that is from a same speaker as reference speech. The reference speech may be obtained from a configuration session or from a first portion of input speech that includes a wakeword. The reference speech may be encoded using a recurrent neural network (RNN) encoder to create a reference feature vector. The reference feature vector and incoming audio data may be processed by a trained neural network classifier to label the incoming audio data (for example, frame-by-frame) as to whether each frame is spoken by the same speaker as the reference speech. The labels may be passed to an automatic speech recognition (ASR) component which may allow the ASR component to focus its processing on the desired speech.
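The abstract's frame-by-frame, three-way labeling (desired speech, undesired speech, non-speech) can be sketched as below. The small dimensions and random linear weights are placeholder assumptions standing in for the trained neural-network classifier; only the control flow (concatenate frame with reference vector, score, pick a class per frame) follows the abstract.

```python
import math
import random

# Toy three-way frame labeler over the abstract's classes.
LABELS = ["desired_speech", "undesired_speech", "non_speech"]
FEAT_DIM, REF_DIM = 4, 4  # illustrative, far smaller than real acoustic features

random.seed(1)
# Placeholder linear "classifier" over the concatenated frame + reference input.
W = [[random.gauss(0, 0.5) for _ in range(FEAT_DIM + REF_DIM)] for _ in range(3)]

def label_frame(frame, ref_vec):
    """Score one frame against the reference vector and pick a class label."""
    x = list(frame) + list(ref_vec)                     # frame ++ reference
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]            # numerically stable softmax
    probs = [e / sum(exps) for e in exps]
    return LABELS[probs.index(max(probs))]

ref_vec = [random.gauss(0, 1) for _ in range(REF_DIM)]
frames = [[random.gauss(0, 1) for _ in range(FEAT_DIM)] for _ in range(10)]
labels = [label_frame(f, ref_vec) for f in frames]      # one label per frame
```

The per-frame labels are what would be passed downstream so the ASR component can focus its processing on frames labeled `desired_speech`.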
26 Citations
17 Claims
1. A computer implemented method for identifying speech from a desired speaker for automatic speech recognition (ASR), the method comprising:
receiving audio data corresponding to speech, the audio data comprising a plurality of audio frames;
processing the plurality of audio frames to determine a first plurality of feature vectors corresponding to a first portion of the audio data and a second plurality of feature vectors corresponding to a second portion of the audio data;
determining that the first plurality of feature vectors corresponds to a wakeword;
processing the first plurality of feature vectors with a recurrent neural network encoder to determine a reference feature vector corresponding to speech from a desired speaker;
processing the second plurality of feature vectors, and the reference feature vector, using a neural-network classifier to determine a first score corresponding to a first feature vector in the second plurality, the first score corresponding to a likelihood that the first feature vector corresponds to audio spoken by the desired speaker;
determining that the first score is above a threshold;
creating an indication that the first feature vector corresponds to speech from the desired speaker;
determining a first weight corresponding to the first feature vector based on the first feature vector corresponding to speech from the desired speaker; and
performing ASR using the first weight and the first feature vector.
- View Dependent Claims (2, 3)
4. A computer implemented method comprising:
receiving first audio data as part of a first interaction with a device, the first audio data corresponding to first speech from a first speaker;
determining, using the first audio data and a recurrent neural network, a reference feature vector;
receiving second audio data as part of a second interaction with the device;
determining, using the reference feature vector and a trained model, that a first portion of the second audio data corresponds to second speech from a second speaker;
determining, using the reference feature vector and the trained model, that a second portion of the second audio data corresponds to third speech from the first speaker; and
based at least in part on determining that the first portion of the second audio data corresponds to the second speaker, executing a command corresponding to the second portion of the second audio data.
- View Dependent Claims (6, 7, 8, 9, 16)
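Claim 4's two-interaction flow can be sketched in a few lines: the first interaction enrolls a reference vector for the first speaker, and portions of the second interaction are accepted or rejected against it. The mean-pooling "encoder" and Euclidean-distance test are illustrative stand-ins for the claim's recurrent neural network and trained model.

```python
# Toy sketch of claim 4: enroll a reference from the first interaction,
# then filter portions of the second interaction by speaker.

def encode(frames):
    """Placeholder encoder: mean of the frames as the reference vector."""
    dim = len(frames[0])
    return [sum(f[i] for f in frames) / len(frames) for i in range(dim)]

def same_speaker(ref, vec, max_dist=0.5):
    """Placeholder trained model: Euclidean distance against the reference."""
    dist = sum((a - b) ** 2 for a, b in zip(ref, vec)) ** 0.5
    return dist <= max_dist

# First interaction: enroll the first speaker (toy 2-dim frame vectors).
first_interaction = [[0.9, 0.1], [1.1, -0.1], [1.0, 0.0]]
ref = encode(first_interaction)  # reference feature vector for the first speaker

# Second interaction: one portion from another speaker, one from the first.
second_speaker_portion = [-0.8, 0.6]
first_speaker_portion = [1.05, 0.02]

rejected = not same_speaker(ref, second_speaker_portion)  # different speaker
executed = []
if same_speaker(ref, first_speaker_portion):
    executed.append("command from first speaker")
```

Note the claim's key asymmetry: the portion attributed to the second speaker is used only as a trigger for deciding which portion to act on; only the first speaker's portion is executed as a command.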
5. A computer implemented method comprising:
receiving audio data as part of an interaction with a device;
determining that a first portion of the audio data represents a wakeword spoken by a first speaker;
based at least in part on determining that the first portion of the audio data represents the wakeword, processing the first portion of the audio data to determine a reference feature vector;
determining, using the reference feature vector and a trained model, that a second portion of the audio data represents speech of a second speaker;
determining, using the reference feature vector and the trained model, that a third portion of the audio data represents speech of the first speaker; and
based at least in part on determining that the second portion of the audio data represents the speech of the second speaker, executing a command corresponding to the third portion of the audio data.
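Claim 5 differs from claim 4 in that the reference comes from the wakeword portion of the same interaction. A minimal control-flow sketch, with cosine similarity as an assumed stand-in for the trained model and hand-picked toy embeddings:

```python
import math

# Toy illustration of claim 5: the wakeword portion fixes the reference
# speaker, and only portions attributed to that speaker are executed.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_command_portions(wakeword_vec, portions, threshold=0.8):
    """Return the portions whose embedding matches the wakeword speaker."""
    return [text for vec, text in portions
            if cosine(wakeword_vec, vec) >= threshold]

# Hypothetical embeddings: speaker A spoke the wakeword; speaker B interjects.
wakeword_vec = [1.0, 0.1, 0.0]
portions = [
    ([0.0, 1.0, 0.2], "turn off the lights"),  # speaker B: ignored
    ([0.9, 0.15, 0.05], "play some music"),    # speaker A: executed
]
commands = select_command_portions(wakeword_vec, portions)
```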
10. A computing system comprising:
at least one processor; and
at least one memory including instructions that, when executed by the at least one processor, cause the computing system to:
receive first audio data as part of a first interaction with a device, the first audio data corresponding to first speech from a first speaker;
determine, using the first audio data and a recurrent neural network, a reference feature vector;
receive second audio data as part of a second interaction with the device;
determine, using the reference feature vector and a trained model, that a first portion of the second audio data corresponds to second speech from a second speaker;
determine, using the reference feature vector and the trained model, that a second portion of the second audio data corresponds to third speech from the first speaker; and
based at least in part on determining that the first portion of the second audio data corresponds to the second speaker, execute a command corresponding to the second portion of the second audio data.
- View Dependent Claims (12, 13, 14, 15, 17)
11. A computing system comprising:
at least one processor; and
at least one memory including instructions that, when executed by the at least one processor, cause the system to:
receive audio data as part of an interaction with a device;
determine that a first portion of the audio data represents a wakeword spoken by a first speaker;
based at least in part on determining that the first portion of the audio data represents the wakeword, process the first portion of the audio data to determine a reference feature vector;
determine, using the reference feature vector and a trained model, that a second portion of the audio data represents speech of a second speaker;
determine, using the reference feature vector and the trained model, that a third portion of the audio data represents speech of the first speaker; and
based at least in part on determining that the second portion of the audio data represents the speech of the second speaker, execute a command corresponding to the third portion of the audio data.
Specification