Method and apparatus for speaker identification using mixture discriminant analysis to develop speaker models
Abstract
A speaker identification system is provided that constructs speaker models using a discriminant analysis technique in which the data in each class is modeled by Gaussian mixtures. The speaker identification method and apparatus determines the identity of a speaker, as one of a small group, based on a sentence-length password utterance. A speaker's utterance is received, and a sequence of feature vectors, forming a first set, is computed based on the received utterance. The first set of feature vectors is then transformed into a second set of feature vectors using transformations specific to a particular segmentation unit, and likelihood scores of the second set of feature vectors are computed using speaker models trained using mixture discriminant analysis. The likelihood scores are then combined to determine an utterance score, and the speaker's identity is validated based on the utterance score. The speaker identification method and apparatus also includes training and enrollment phases. In the enrollment phase, the speaker's password utterance is received multiple times. A transcription of the password utterance as a sequence of phones is obtained, and the phone string is stored in a database containing phone strings of the other speakers in the group. In the training phase, the first set of feature vectors is extracted from each password utterance, and the phone boundaries for each phone in the password transcription are obtained using a speaker-independent phone recognizer. A mixture model is developed for each phone of a given speaker's password. Then, using the feature vectors from the password utterances of all of the speakers in the group, transformation parameters and transformed models are generated for each phone and speaker, using mixture discriminant analysis.
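The abstract's scoring pipeline (phone-specific transforms, per-frame mixture likelihoods, score combination) can be sketched as follows. All names (`transform`, `gmm_loglik`, `score_utterance`) and the diagonal-covariance mixture form are illustrative assumptions, not taken from the patent:

```python
import math

def gmm_loglik(x, gmm):
    """Log-likelihood of one feature vector under a diagonal-covariance
    Gaussian mixture; gmm is a list of (weight, means, variances)."""
    total = 0.0
    for w, mu, var in gmm:
        expo = sum((xi - mi) ** 2 / (2.0 * vi) for xi, mi, vi in zip(x, mu, var))
        norm = sum(0.5 * math.log(2.0 * math.pi * vi) for vi in var)
        total += w * math.exp(-expo - norm)
    return math.log(total)

def transform(x, A, b):
    """Phone-specific affine transform producing the 'second set of
    feature vectors' named in the claims."""
    return [sum(a * xi for a, xi in zip(row, x)) + bi for row, bi in zip(A, b)]

def score_utterance(frames, phone_ids, transforms, speaker_models):
    """Average per-frame log-likelihoods into one utterance score per
    speaker (claim 9 averages the likelihood scores)."""
    scores = {}
    for spk, models in speaker_models.items():
        lls = []
        for x, ph in zip(frames, phone_ids):
            A, b = transforms[ph]      # transformation specific to the phone
            y = transform(x, A, b)     # second set of feature vectors
            lls.append(gmm_loglik(y, models[ph]))
        scores[spk] = sum(lls) / len(lls)
    return scores
```

In this sketch the transforms are shared across speakers while the transformed mixture models are speaker-specific, mirroring the training-phase claims.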
22 Claims
1. A method of determining an utterance score for identifying a speaker from a group of speakers based on a first set of feature vectors of an utterance from the speaker, comprising:
transforming the first set of feature vectors into a second set of feature vectors using transformations specific to a segmentation unit;
computing likelihood scores of the second set of feature vectors using speaker models trained by a mixture discriminant analysis using a collection of first sets of feature vectors from all the speakers in the group; and
combining the likelihood scores to determine an utterance score.

Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
extracting the collection of first sets of feature vectors and obtaining phone segments from a password utterance for each of the speakers in the group;
developing a mixture model for each of the phone segments of the password utterance for each of the speakers in the group;
estimating posterior mixture probabilities using the mixture models and the collection of first sets of feature vectors;
performing the mixture discriminant analysis using the posterior mixture probabilities and the collection of first sets of feature vectors; and
outputting transformation parameters specific to phone segments and common to all the speakers in the group and transformed models for each speaker in the group.
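The posterior-probability step in the training claims above can be sketched as computing per-component responsibilities under each phone's Gaussian mixture. The names `diag_gauss` and `posterior_mixture_probs` are illustrative, and the diagonal-covariance form is an assumption:

```python
import math

def diag_gauss(x, mu, var):
    """Density of a diagonal-covariance Gaussian at feature vector x
    (a simplifying assumption; the claims do not fix the covariance form)."""
    expo = sum((xi - mi) ** 2 / (2.0 * vi) for xi, mi, vi in zip(x, mu, var))
    norm = math.prod(math.sqrt(2.0 * math.pi * vi) for vi in var)
    return math.exp(-expo) / norm

def posterior_mixture_probs(x, gmm):
    """Posterior probability of each mixture component given x; these
    responsibilities feed the mixture discriminant analysis step."""
    joint = [w * diag_gauss(x, mu, var) for w, mu, var in gmm]
    total = sum(joint)
    return [j / total for j in joint]
```

The responsibilities sum to one per feature vector; mixture discriminant analysis then uses them, together with the pooled feature vectors from all speakers, to estimate the shared transformation parameters.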
6. The method of claim 1, further comprising an enrollment phase, the enrollment phase comprising:
receiving a password utterance multiple times for one of the speakers in the group;
converting the password utterance into a phone string; and
storing the phone string in a database containing phone strings of the other speakers in the group.
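The enrollment steps above amount to recording each speaker's password transcriptions alongside those of the other group members. A minimal sketch, where the in-memory dict standing in for the database and the function names are assumptions:

```python
# Hypothetical in-memory stand-in for the phone-string database of claim 6.
phone_db = {}

def enroll(speaker, phone_strings):
    """Store the phone transcriptions obtained from a speaker's repeated
    password utterances next to those of the other speakers in the group."""
    phone_db.setdefault(speaker, []).extend(phone_strings)

def group_phone_strings():
    """All enrolled phone strings, keyed by speaker."""
    return dict(phone_db)
```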
7. The method of claim 6, wherein the password utterance is known.
8. The method of claim 6, wherein the password utterance is not known.
9. The method of claim 1, wherein the utterance score is determined by averaging the likelihood scores.
10. The method of claim 1, wherein the utterance score is based on threshold scores generated from the likelihood scores.
11. An apparatus for determining an utterance score for identifying a speaker from a group of speakers based on a first set of feature vectors of an utterance from the speaker, comprising:
a speaker independent phone recognizer that transforms the first set of feature vectors into a second set of feature vectors using transformations specific to a particular segmentation unit;
a likelihood estimator that computes likelihood scores of the second set of feature vectors using speaker models trained by mixture discriminant analysis using a collection of first sets of feature vectors from all the speakers in the group; and
a score combiner that combines the likelihood scores to determine an utterance score.

Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20)
the speaker independent phone recognizer extracts the collection of first sets of feature vectors and obtains phone segments from a password utterance for each of the speakers in the group;
a Gaussian mixture model trainer develops a mixture model for each one of the phone segments of the password utterance for each speaker in the group;
a posterior probability estimator estimates posterior mixture probabilities using the mixture models and the collection of first sets of feature vectors; and
a mixture discriminant analysis unit performs a mixture discriminant analysis using the posterior mixture probabilities and the collection of first sets of feature vectors and outputs transformation parameters specific to phone segments and common to all the speakers in the group and outputs transformed models for each speaker in the group.
16. The apparatus of claim 11, wherein:
the speaker independent phone recognizer receives the password utterance for one of the speakers in the group multiple times, converts the password utterance into a phone string, and stores the phone string in a database containing phone strings of the other speakers in the group.
17. The apparatus of claim 16, wherein the password utterance for the speakers in the group is known.
18. The apparatus of claim 16, wherein the password utterance for the speakers in the group is not known.
19. The apparatus of claim 11, wherein the score combiner determines the utterance score by averaging the likelihood scores.
20. The apparatus of claim 11, further comprising a threshold unit, wherein:
the score combiner determines the utterance score based on threshold scores generated from the likelihood scores by the threshold unit.
21. A method for identifying a speaker of a group from an utterance having features represented by a first set of feature vectors, comprising:
transforming the first set of feature vectors into a second set of feature vectors;
comparing the second set of feature vectors to speaker models to generate likelihood scores;
combining the likelihood scores to determine an utterance score; and
comparing the utterance score to a speaker specific threshold, wherein the speaker models are trained by mixture discriminant analysis using a collection of first sets of feature vectors from all the speakers in the group.

Dependent Claims (22)
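The decision step of claim 21 compares the combined utterance score against a speaker-specific threshold. A sketch of that comparison, with illustrative names:

```python
def identify(scores, thresholds):
    """Pick the best-scoring speaker in the group, and accept the
    identification only if that speaker's utterance score clears the
    speaker-specific threshold; otherwise reject."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= thresholds[best] else None
```

Per-speaker thresholds let the system trade off false acceptances against false rejections separately for each enrolled speaker.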
Specification