Model training for automatic speech recognition from imperfect transcription data
First Claim
1. A computer-implemented method, comprising:
a. aligning an utterance from a set of training data with a corresponding original transcription from the set of training data to produce a time-aligned transcription with time alignment information for each word in the utterance, wherein the set of training data includes transcription errors;
b. decoding the same utterance with an incremental acoustic model and an incremental language model to produce a decoded transcription with time alignment information for each word;
c. aligning the time-aligned and decoded transcriptions according to time alignment information;
d. selecting all segments from the utterance having at least Q contiguous matching aligned words, where Q is a positive integer, by:
including a silence in a selected segment comprising the Q matching aligned words when the selected segment is preceded or followed by a silence; and
when there is no silence preceding or succeeding the selected segment:
selecting the selected segment according to the original transcription with time alignment information; and
inserting part of a silence segment from the beginning of the utterance into the beginning of the selected segment, and appending a part of a silence segment from the beginning of the utterance to the end of the selected segment;
e. training the incremental acoustic model with the selected segments; and
f. evaluating the accuracy of the incremental acoustic model built from the training data including transcription errors compared to the accuracy of an acoustic model built from a similar amount of training data having no transcription errors.
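The segment-selection step (d) can be illustrated with a minimal Python sketch. It assumes the time-aligned and decoded transcriptions are already index-aligned lists of word records; the `Word` class and all names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

def select_segments(reference, decoded, q, tol=0.05):
    """Return (start, end) spans covering every run of at least q words
    whose text matches and whose start times agree within `tol` seconds."""
    n = min(len(reference), len(decoded))
    matches = [
        reference[i].text == decoded[i].text
        and abs(reference[i].start - decoded[i].start) <= tol
        for i in range(n)
    ]
    segments, i = [], 0
    while i < n:
        if matches[i]:
            j = i
            while j < n and matches[j]:
                j += 1
            if j - i >= q:  # at least q contiguous matching aligned words
                segments.append((reference[i].start, reference[j - 1].end))
            i = j
        else:
            i += 1
    return segments
```

For example, with a decoded transcription that misrecognizes only the last of four words, `q=3` selects the span covering the first three words, while `q=4` selects nothing.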
Abstract
Techniques and systems for training an acoustic model are described. In an embodiment, a technique for training an acoustic model includes dividing a corpus of training data that includes transcription errors into N parts, and on each part, decoding an utterance with an incremental acoustic model and an incremental language model to produce a decoded transcription. The technique may further include inserting silence between a pair of words into the decoded transcription and aligning an original transcription corresponding to the utterance with the decoded transcription according to time for each part. The technique may further include selecting a segment from the utterance having at least Q contiguous matching aligned words, and training the incremental acoustic model with the selected segment. The trained incremental acoustic model may then be used on a subsequent part of the training data. Other embodiments are described and claimed.
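The incremental scheme the abstract describes — divide the corpus into N parts, and use the model trained through part k to decode part k+1 — can be sketched as a driver loop. This is a hypothetical outline, not the patent's implementation: `decode`, `select_segments`, and `train` stand in for the decoding, segment-selection, and training steps and are injected as callables.

```python
def incremental_training(corpus, n_parts, q, train, decode, select_segments):
    """Hypothetical driver for the incremental training scheme.

    `corpus` is a sequence of (utterance, original_transcription) pairs;
    the model trained on each part decodes the next part."""
    size = (len(corpus) + n_parts - 1) // n_parts
    parts = [corpus[i:i + size] for i in range(0, len(corpus), size)]
    model = None  # in practice, start from a bootstrap acoustic model
    for part in parts:
        selected = []
        for utterance, original in part:
            decoded = decode(model, utterance)
            selected += select_segments(original, decoded, q)
        model = train(model, selected)  # carry the model to the next part
    return model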
15 Claims
4. A computer-readable hardware medium storing computer-executable program instructions that when executed cause a computing system to:
compute a frame posterior for each word in an utterance from a corpus comprising audio data and a corresponding transcription that contains transcription errors, wherein the instructions to compute the frame posterior include instructions that when executed cause the computing system to:
decode the audio data using an existing acoustic model to generate a lattice,
merge the decoded lattice with the transcription,
label each word in the merged lattice as one of correct or incorrect by examining the percentage to which the word overlaps in duration with the transcription,
compute a posterior probability for each word in the merged lattice, and
compute the frame posterior q(t) of time t by summing the posterior probabilities of all the correct words passing time t for a time interval;
train an acoustic model with confidence-based maximum likelihood estimation (MLE) training using the frame posterior by estimating acoustic model parameters using the transcription, the audio data, and the frame posterior;
estimate the acoustic model parameters with confidence-based discriminative training using the frame posterior;
evaluate the accuracy of the acoustic model built from the corpus including the corresponding transcription that contains transcription errors compared to the accuracy of an acoustic model built from a similar amount of training data having no transcription errors; and
generate a finalized acoustic model.
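The frame-posterior computation in claim 4 can be sketched directly from its definition. The sketch below assumes the lattice has already been merged with the transcription, with each word reduced to a `(start, end, posterior, is_correct)` tuple; this representation and the function names are illustrative, not from the patent.

```python
def overlap_fraction(word_start, word_end, ref_start, ref_end):
    """Fraction of a lattice word's duration that overlaps the reference
    word's span; comparing this against a threshold is one way to label
    the word correct or incorrect."""
    overlap = max(0.0, min(word_end, ref_end) - max(word_start, ref_start))
    return overlap / (word_end - word_start)

def frame_posterior(lattice_words, t):
    """q(t): the sum of the posterior probabilities of all words labeled
    correct whose time span covers time t.

    `lattice_words` is a list of (start, end, posterior, is_correct)."""
    return sum(p for start, end, p, correct in lattice_words
               if correct and start <= t <= end)
```

For instance, at a time covered by two overlapping correct words with posteriors 0.6 and 0.3, q(t) = 0.9; an incorrect word covering the same time contributes nothing.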
12. A system, comprising:
a processing unit;
an alignment component, executing on the processing unit, operative to align an utterance from a corpus of training data including transcription errors with a corresponding original transcription from the corpus of training data to produce a time-aligned transcription with time alignment information for each word in the utterance;
a decoding component, executing on the processing unit, operative to decode the utterance from the corpus of training data using an incremental acoustic model and an incremental language model to produce a decoded transcription, wherein the alignment component is operative to align the time-aligned transcription with the decoded transcription;
a segment selecting component, executing on the processing unit, operative to select a segment from the utterance having at least Q contiguous matching aligned words, where Q is a positive integer, by:
including a silence in a selected segment comprising the Q matching aligned words when the selected segment is preceded or followed by a silence; and
when there is no silence preceding or succeeding the selected segment:
selecting the selected segment according to the original transcription with time alignment information; and
inserting part of a silence segment from the beginning of the utterance into the beginning of the selected segment, and appending a part of a silence segment from the beginning of the utterance to the end of the selected segment; and
a training component, executing on the processing unit, to train the incremental acoustic model with the selected segment and to generate a final acoustic model.
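The silence handling that claims 1(d) and 12 share — absorb an adjacent silence into the selected segment when one exists, otherwise graft part of the utterance-initial silence onto both ends — can be sketched as follows. The padded result is represented as an ordered list of (start, end) audio spans to concatenate; the function name and the default pad length are illustrative assumptions, not from the patent.

```python
def pad_with_silence(seg_start, seg_end, silences, lead_silence, pad=0.1):
    """Pad a selected segment with silence.

    `silences` lists the (start, end) silence spans detected in the
    utterance; `lead_silence` is the span of the utterance-initial silence.
    Returns the spans to concatenate, in playback order."""
    before = [s for s in silences if abs(s[1] - seg_start) < 1e-6]
    after = [s for s in silences if abs(s[0] - seg_end) < 1e-6]
    if before or after:
        # a silence borders the segment: include it in the selection
        start = before[0][0] if before else seg_start
        end = after[0][1] if after else seg_end
        return [(start, end)]
    # no bordering silence: take part of the utterance-initial silence
    # and graft it onto both the beginning and the end of the segment
    sil = (lead_silence[0], min(lead_silence[0] + pad, lead_silence[1]))
    return [sil, (seg_start, seg_end), sil]
```

Padding segment boundaries with silence in this way gives the trainer well-delimited word sequences even when the segment was cut out of continuous speech.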
Specification