Method and device for voiceprint recognition
First Claim
1. A method, comprising:
at a device having one or more processors and memory:
establishing a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data;
obtaining a plurality of high-level voiceprint features by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, and the tuning producing a second-level DNN model specifying the plurality of high-level voiceprint features;
based on the second-level DNN model, registering a first high-level voiceprint feature sequence for a user based on a registration speech sample received from the user; and
performing speaker verification for the user based on the first high-level voiceprint feature sequence registered for the user, the speaker verification comprising:
receiving, from the user, a test speech sample;
obtaining a second high-level voiceprint feature sequence based on the test speech sample using the first-level DNN model and the second-level DNN model in sequence;
determining a distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence registered for the user; and
in accordance with a determination that the distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence is less than a preset threshold, automatically, without user intervention, verifying the identity of the user.
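The verification steps recited above (comparing a registered feature sequence against a test feature sequence and accepting when the distance falls below a preset threshold) can be sketched as follows. This is a minimal illustration, not the patented method: the claim does not fix the distance measure, so cosine distance over frame-averaged feature sequences is assumed here, and the function names are hypothetical.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two utterance-level voiceprint vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(registered_seq: np.ndarray,
                   test_seq: np.ndarray,
                   threshold: float = 0.3) -> bool:
    """Accept the speaker when the distance between the registered and
    test high-level feature sequences is less than the preset threshold.

    Each sequence is a (frames x features) array; frame averaging is an
    assumed way to collapse a sequence to a single comparison vector.
    """
    registered_vec = registered_seq.mean(axis=0)
    test_vec = test_seq.mean(axis=0)
    return cosine_distance(registered_vec, test_vec) < threshold
```

In a deployed system the preset threshold would be chosen on held-out data to trade false accepts against false rejects.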
Abstract
A method and device for voiceprint recognition, include: establishing a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data; obtaining a plurality of high-level voiceprint features by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, and the tuning producing a second-level DNN model specifying the plurality of high-level voiceprint features; based on the second-level DNN model, registering a respective high-level voiceprint feature sequence for a user based on a registration speech sample received from the user; and performing speaker verification for the user based on the respective high-level voiceprint feature sequence registered for the user.
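The two-stage training the abstract describes (an unsupervised first-level model learned from unlabeled speech, then tuned with speaker-labeled speech into a second-level model whose activations serve as high-level voiceprint features) can be sketched roughly as below. The abstract does not specify the network architecture or the unsupervised objective, so a single-hidden-layer tied-weight autoencoder stands in for the first-level DNN and a softmax speaker classifier head stands in for the supervised tuning; all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_autoencoder(X: np.ndarray, hidden: int,
                         epochs: int = 200, lr: float = 0.01) -> np.ndarray:
    """First-level model: learn basic voiceprint features from unlabeled
    frames by minimizing reconstruction error (tied encoder/decoder weights)."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    for _ in range(epochs):
        H = np.tanh(X @ W)        # encode frames
        R = H @ W.T               # decode (tied weights)
        err = R - X
        dH = (err @ W) * (1.0 - H ** 2)       # backprop through encoder
        grad = X.T @ dH + err.T @ H           # encoder + decoder paths
        W -= lr * grad / n
    return W

def finetune_with_labels(X: np.ndarray, y: np.ndarray, W: np.ndarray,
                         n_speakers: int, epochs: int = 200,
                         lr: float = 0.5) -> np.ndarray:
    """Second-level model: tune a softmax speaker classifier on top of the
    pretrained features using labeled speech. (For brevity only the head
    is trained here; the patent tunes the first-level model itself.)"""
    H = np.tanh(X @ W)
    V = rng.normal(0.0, 0.1, (W.shape[1], n_speakers))
    Y = np.eye(n_speakers)[y]
    for _ in range(epochs):
        logits = H @ V
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        V -= lr * H.T @ (P - Y) / len(X)      # cross-entropy gradient step
    return V

def high_level_features(X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Pass frames through the (tuned) model to obtain the high-level
    voiceprint feature sequence used for registration and verification."""
    return np.tanh(X @ W)
```

Registration then amounts to storing `high_level_features(...)` for a user's registration sample, and verification compares it against the features of a test sample.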
17 Claims
1. A method, comprising:
at a device having one or more processors and memory:
establishing a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data;
obtaining a plurality of high-level voiceprint features by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, and the tuning producing a second-level DNN model specifying the plurality of high-level voiceprint features;
based on the second-level DNN model, registering a first high-level voiceprint feature sequence for a user based on a registration speech sample received from the user; and
performing speaker verification for the user based on the first high-level voiceprint feature sequence registered for the user, the speaker verification comprising:
receiving, from the user, a test speech sample;
obtaining a second high-level voiceprint feature sequence based on the test speech sample using the first-level DNN model and the second-level DNN model in sequence;
determining a distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence registered for the user; and
in accordance with a determination that the distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence is less than a preset threshold, automatically, without user intervention, verifying the identity of the user.
(Dependent claims: 2, 3, 4, 5, 6)
7. A voiceprint recognition system, comprising:
one or more processors; and
memory storing instructions that, when executed by the one or more processors, cause the processors to perform operations comprising:
establishing a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data;
obtaining a plurality of high-level voiceprint features by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, and the tuning producing a second-level DNN model specifying the plurality of high-level voiceprint features;
based on the second-level DNN model, registering a first high-level voiceprint feature sequence for a user based on a registration speech sample received from the user; and
performing speaker verification for the user based on the first high-level voiceprint feature sequence registered for the user, the speaker verification comprising:
receiving, from the user, a test speech sample;
obtaining a second high-level voiceprint feature sequence based on the test speech sample using the first-level DNN model and the second-level DNN model in sequence;
determining a distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence registered for the user; and
in accordance with a determination that the distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence is less than a preset threshold, automatically, without user intervention, verifying the identity of the user.
(Dependent claims: 8, 9, 10, 11, 12)
13. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the processors to perform operations comprising:
establishing a first-level Deep Neural Network (DNN) model based on unlabeled speech data, the unlabeled speech data containing no speaker labels and the first-level DNN model specifying a plurality of basic voiceprint features for the unlabeled speech data;
obtaining a plurality of high-level voiceprint features by tuning the first-level DNN model based on labeled speech data, the labeled speech data containing speech samples with respective speaker labels, and the tuning producing a second-level DNN model specifying the plurality of high-level voiceprint features;
based on the second-level DNN model, registering a first high-level voiceprint feature sequence for a user based on a registration speech sample received from the user; and
performing speaker verification for the user based on the first high-level voiceprint feature sequence registered for the user, the speaker verification comprising:
receiving, from the user, a test speech sample;
obtaining a second high-level voiceprint feature sequence based on the test speech sample using the first-level DNN model and the second-level DNN model in sequence;
determining a distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence registered for the user; and
in accordance with a determination that the distance between the second high-level voiceprint feature sequence and the first high-level voiceprint feature sequence is less than a preset threshold, automatically, without user intervention, verifying the identity of the user.
(Dependent claims: 14, 15, 16, 17)
Specification