Systems and methods for improving the accuracy of a transcription using auxiliary data such as personal data
First Claim
1. A method of generating a personalized transcription from an audio recording, wherein the method is performed by a mobile device in communication with a server, wherein computational resources of the server are greater than computational resources of the mobile device, the method comprising:
maintaining a personal vocabulary of words on the mobile device associated with a user of the mobile device, wherein the personal vocabulary is based on personal data associated with the user;
receiving, from the server, a first transcription of an audio recording, wherein the first transcription is generated by a server automatic speech recognition (ASR) engine at the server and using an ASR vocabulary associated with a population of users, wherein the first transcription includes a first word list and confidence scores associated with a plurality of words in the first word list, and wherein the first transcription includes both words that the server ASR engine identified as most likely spoken as well as alternatives to those words;
receiving, from the server, audio data corresponding to at least a portion of the audio recording;
generating a second transcription, wherein the second transcription is of the received audio data, wherein the second transcription comprises a second word list and confidence scores associated with a plurality of words in the second word list, and wherein the second transcription is generated by a mobile device ASR engine located on the mobile device using the maintained personal vocabulary and an acoustic model associated with the user of the mobile device;
re-scoring the first transcription, the re-scoring comprising:
comparing the first transcription with the second transcription, and modifying a confidence score associated with an alternative word in the first word list when the mobile device ASR engine indicates a higher confidence score for the alternative word than the confidence score attributed by the server ASR engine to the alternative word; and
generating a final transcription based on the re-scored first transcription, the final transcription including a combination of most likely spoken words identified by the server ASR engine as well as the re-scored alternative words identified by the mobile device ASR engine.
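The re-scoring step of claim 1 can be illustrated with a short sketch. This is a hypothetical reading of the claim, not the patent's implementation: the data shapes (a map from word position to a ranked list of `(word, confidence)` alternatives) and the function name are assumptions chosen for clarity.

```python
# Illustrative sketch of claim 1's re-scoring: a server alternative is
# promoted when the on-device engine (with the personal vocabulary) is
# more confident in it than either the server's score for that alternative
# or the server's best guess. Data shapes are assumptions, not the patent's.

def rescore(server_words, device_words):
    """Merge two ASR hypotheses into a final word list.

    Each argument maps a word position to a list of (word, confidence)
    alternatives, the first entry being that engine's best guess.
    """
    final = []
    for pos, server_alts in server_words.items():
        best_word, best_conf = server_alts[0]
        device_alts = dict(device_words.get(pos, []))
        # Modify the confidence of a server alternative when the mobile
        # device ASR engine scores it higher than the server did.
        for alt_word, alt_conf in server_alts[1:]:
            device_conf = device_alts.get(alt_word, 0.0)
            if device_conf > alt_conf and device_conf > best_conf:
                best_word, best_conf = alt_word, device_conf
        final.append(best_word)
    return final

# Example: the server prefers "aunt" but listed "Anant" as an alternative;
# the device engine, which knows the user's contacts, rates "Anant" higher.
server = {0: [("call", 0.9)], 1: [("aunt", 0.6), ("Anant", 0.3)]}
device = {1: [("Anant", 0.8)]}
print(rescore(server, device))  # -> ['call', 'Anant']
```

The key design point the claim describes is that the device never re-transcribes from scratch into the final output; it only re-scores alternatives the server already proposed.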
Abstract
A method is described for improving the accuracy of a transcription generated by an automatic speech recognition (ASR) engine. A personal vocabulary is maintained that includes replacement words. The replacement words in the personal vocabulary are obtained from personal data associated with a user. A transcription of an audio recording is received. The transcription is generated by an ASR engine using an ASR vocabulary and includes a transcribed word that represents a spoken word in the audio recording. Data associated with the transcribed word is received. A replacement word from the personal vocabulary is identified, which is used to re-score the transcription and replace the transcribed word.
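The abstract's "personal vocabulary ... obtained from personal data" can be sketched in a few lines. This is a minimal illustration under assumed inputs (a contact list with a `"name"` field); the patent does not specify this data format.

```python
# Illustrative sketch only: deriving a personal vocabulary from personal
# data, here a user's contact list. The field names are assumptions.

def build_personal_vocabulary(contacts):
    """Collect the unique name tokens from a user's contacts."""
    vocab = set()
    for contact in contacts:
        for token in contact["name"].split():
            vocab.add(token.lower())
    return vocab

contacts = [{"name": "Anant Rao"}, {"name": "Lee Rao"}]
print(build_personal_vocabulary(contacts))  # -> {'anant', 'rao', 'lee'} (set order may vary)
```

In practice such a vocabulary might also draw on the user's messages, calendar entries, or installed app names, which the specification's "personal data" phrasing leaves open.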
24 Claims
1. A method of generating a personalized transcription from an audio recording, wherein the method is performed by a mobile device in communication with a server, wherein computational resources of the server are greater than computational resources of the mobile device, the method comprising:
maintaining a personal vocabulary of words on the mobile device associated with a user of the mobile device, wherein the personal vocabulary is based on personal data associated with the user;
receiving, from the server, a first transcription of an audio recording, wherein the first transcription is generated by a server automatic speech recognition (ASR) engine at the server and using an ASR vocabulary associated with a population of users, wherein the first transcription includes a first word list and confidence scores associated with a plurality of words in the first word list, and wherein the first transcription includes both words that the server ASR engine identified as most likely spoken as well as alternatives to those words;
receiving, from the server, audio data corresponding to at least a portion of the audio recording;
generating a second transcription, wherein the second transcription is of the received audio data, wherein the second transcription comprises a second word list and confidence scores associated with a plurality of words in the second word list, and wherein the second transcription is generated by a mobile device ASR engine located on the mobile device using the maintained personal vocabulary and an acoustic model associated with the user of the mobile device;
re-scoring the first transcription, the re-scoring comprising:
comparing the first transcription with the second transcription, and modifying a confidence score associated with an alternative word in the first word list when the mobile device ASR engine indicates a higher confidence score for the alternative word than the confidence score attributed by the server ASR engine to the alternative word; and
generating a final transcription based on the re-scored first transcription, the final transcription including a combination of most likely spoken words identified by the server ASR engine as well as the re-scored alternative words identified by the mobile device ASR engine.
- View Dependent Claims (2, 3)
4. A non-transitory computer-readable medium encoded with instructions that, when executed by a processor, perform a method in a computing system of generating a personalized transcription from an audio recording, wherein the method is performed by a mobile device in communication with a server, wherein computational resources of the server are greater than computational resources of the mobile device, the method comprising:
maintaining a personal vocabulary of words on the mobile device associated with a user, wherein the personal vocabulary is based on personal data associated with the user;
receiving, from the server, a first transcription of an audio recording, wherein the first transcription is generated by a server automatic speech recognition (ASR) engine at the server and using an ASR vocabulary associated with a population of users, wherein the first transcription includes a first word list and confidence scores associated with a plurality of words in the first word list, and wherein the first transcription includes both words that the server ASR engine identified as most likely spoken as well as alternatives to those words;
receiving, from the server, audio data corresponding to at least a portion of the audio recording;
generating a second transcription, wherein the second transcription is of the received audio data, wherein the second transcription comprises a second word list and confidence scores associated with a plurality of words in the second word list, and wherein the second transcription is generated by a mobile device ASR engine located on the mobile device using the maintained personal vocabulary and an acoustic model associated with the user of the mobile device;
re-scoring the first transcription, the re-scoring comprising:
comparing the first transcription with the second transcription, and modifying a confidence score associated with an alternative word in the first word list when the mobile device ASR engine indicates a higher confidence score for the alternative word than the confidence score attributed by the server ASR engine to the alternative word; and
generating a final transcription based on the re-scored first transcription, the final transcription including a combination of most likely spoken words identified by the server ASR engine as well as the re-scored alternative words identified by the mobile device ASR engine.
- View Dependent Claims (5, 6)
7. A method of replacing a word in a transcription of an audio recording, wherein the method is performed by a mobile device in communication with a server, wherein computational resources of the server are greater than computational resources of the mobile device, the method comprising:
maintaining a personal vocabulary of words on the mobile device associated with a user of the mobile device, wherein the personal vocabulary is based on personal data associated with the user and includes an acoustic model associated with the user of the mobile device;
receiving, from the server, a first transcription of an audio recording, wherein the first transcription is generated by a server automatic speech recognition (ASR) engine at the server using an ASR vocabulary associated with a population of users that does not include the personal vocabulary of the user of the mobile device, wherein the first transcription includes confidence scores associated with certain words in the transcription;
receiving, from the server, audio data corresponding to the first transcription;
identifying, at the mobile device, a replaceable word from the first transcription;
generating a second transcription of a portion of the received audio data corresponding to the replaceable word, wherein the second transcription includes phonetic data, and wherein the second transcription is generated by a mobile device ASR engine on the mobile device using the maintained personal vocabulary and an acoustic model associated with the user of the mobile device;
identifying a replacement word for the replaceable word, wherein the replacement word is identified based on a comparison between the phonetic data of the second transcription and the personal vocabulary, and wherein the replacement word is from the personal vocabulary;
identifying, at the mobile device, a non-replaceable word from the first transcription partially based on the maintained personal vocabulary;
producing a modified confidence score associated with the portion of the received first transcription based at least in part on the comparison; and
generating a final transcription using the modified confidence score and the non-replaceable word, wherein the replacement word appears in the final transcription in place of at least one word from the first transcription, and wherein the non-replaceable word appears in the final transcription.
- View Dependent Claims (8, 9, 10, 11, 12, 13, 14, 15)
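Claim 7's replacement step, comparing phonetic data from the on-device second transcription against the personal vocabulary, can be sketched as follows. Everything here is an assumption for illustration: the crude consonant-skeleton key stands in for a real grapheme-to-phoneme model, and the function names are invented.

```python
# Hedged sketch of claim 7's phonetic comparison. A production system would
# use a proper phonetic representation (e.g. phoneme lattices from the ASR
# engine); this consonant-skeleton key is a deliberately simple stand-in.

def phonetic_key(word):
    """Rough phonetic skeleton: lowercase, keep the first letter, drop later vowels."""
    word = word.lower()
    return word[0] + "".join(c for c in word[1:] if c not in "aeiou")

def find_replacement(replaceable_word, personal_vocabulary):
    """Return a personal-vocabulary word whose phonetic key matches, if any."""
    target = phonetic_key(replaceable_word)
    for candidate in personal_vocabulary:
        if phonetic_key(candidate) == target:
            return candidate
    return None  # no match: the word stays as transcribed

# Example: the server heard "Karin", but the user's contacts say "Karen".
print(find_replacement("Karin", {"Karen", "Bob"}))  # -> Karen
```

Note how this mirrors the claim's split between replaceable and non-replaceable words: a word with no phonetic match in the personal vocabulary is left untouched in the final transcription.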
16. A non-transitory computer-readable medium encoded with instructions that, when executed by a processor, perform a method in a computing system of replacing a word in a transcription of an audio recording, wherein the method is performed by a mobile device in communication with a server, wherein computational resources of the server are greater than computational resources of the mobile device, the method comprising:
maintaining a personal vocabulary of words on the mobile device associated with a user of the mobile device, wherein the personal vocabulary is based on personal data associated with the user and includes an acoustic model associated with the user of the mobile device;
receiving, from the server, a first transcription of an audio recording, wherein the first transcription is generated by a server automatic speech recognition (ASR) engine at the server using an ASR vocabulary associated with a population of users, and wherein the first transcription includes confidence scores associated with certain words in the transcription;
receiving, from the server, audio data corresponding to the first transcription;
identifying, at the mobile device, a replaceable word from the first transcription;
generating a second transcription of a portion of the received audio data corresponding to the replaceable word, wherein the second transcription includes phonetic data, and wherein the second transcription is generated by a mobile device ASR engine on the mobile device using the maintained personal vocabulary;
identifying a replacement word for the replaceable word, wherein the replacement word is identified based on a comparison between the phonetic data of the second transcription and the personal vocabulary, and wherein the replacement word is from the personal vocabulary;
identifying, at the mobile device, a non-replaceable word from the first transcription partially based on the maintained personal vocabulary and the acoustic model associated with the user of the mobile device;
producing a modified confidence score associated with the portion of the received first transcription based at least in part on the comparison; and
generating a final transcription using the modified confidence score and the non-replaceable word, wherein the replacement word appears in the final transcription in place of at least one word from the first transcription, and wherein the non-replaceable word appears in the final transcription.
- View Dependent Claims (17, 18, 19, 20, 21, 22, 23, 24)
Specification