Enhanced voice conferencing with history, language translation and identification
First Claim
1. A method for ability enhancement, the method comprising:
- by a computer system, receiving data representing speech signals from a voice conference amongst multiple speakers, wherein the multiple speakers are remotely located from one another, wherein each of the multiple speakers uses a separate conferencing device to participate in the voice conference;
determining speaker-related information associated with the multiple speakers, based on the data representing speech signals from the voice conference;
recording conference history information based on the speaker-related information, by recording indications of topics discussed during the voice conference by:
performing speech recognition to convert the data representing speech signals into text;
analyzing the text to identify frequently used terms or phrases; and
determining the topics discussed during the voice conference based on the frequently used terms or phrases;
audibly notifying a user to view the conference history information on a display device, wherein the user is notified in a manner that is not audible to at least some of the multiple speakers; and
presenting, on the display device, at least some of the conference history information to the user;
translating an utterance of one of the multiple speakers in a first language into a message in a second language, based on the speaker-related information, wherein the speaker-related information is determined by automatically determining the second and the first language, comprising the steps of:
concurrently or simultaneously applying multiple speech recognizers and using GPS information indicating the speakers' locations; and
recording the message in the second language as part of the conference history information.
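The topic-recording steps recited above (speech recognition to text, frequent-term analysis, topic determination) can be illustrated with a minimal, hypothetical sketch. The recognized transcript is assumed to already be available as plain text; the stopword list, sample transcript, and function name `extract_topics` are illustrative assumptions and do not appear in the claims:

```python
from collections import Counter

# Small illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "we",
             "is", "it", "for", "on", "that", "this", "so", "should"}

def extract_topics(transcript: str, top_n: int = 3) -> list[str]:
    """Identify frequently used terms in recognized speech as topic indications."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [term for term, _ in counts.most_common(top_n)]

# Stand-in for speech-recognizer output.
transcript = ("the budget review is due friday and the budget numbers "
              "look good so we should finalize the budget and the roadmap")
print(extract_topics(transcript))  # "budget" ranks first as the most frequent term
```

The most frequent non-stopword terms serve as the recorded "indications of topics discussed" in the conference history.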
3 Assignments
0 Petitions
Abstract
Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. Some embodiments of the AEFS enhance voice conferencing by recording, translating and presenting voice conference history information based on speaker-related information, wherein the translation is based on language identification using multiple speech recognizers and GPS information. The AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS records conference history information (e.g., a transcript) based on the determined speaker-related information. The AEFS then informs a user of the conference history information, such as by presenting a transcript of the voice conference and/or related information items on a display of a conferencing device associated with the user.
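The abstract describes language identification that combines multiple speech recognizers with GPS information. A hypothetical sketch of that combination follows; the confidence scores stand in for the outputs of language-specific recognizers applied concurrently to the same utterance, and the coordinate-to-language mapping and its weights are invented for illustration:

```python
# Stand-in confidence scores, as if several language-specific speech
# recognizers had been applied concurrently to the same utterance.
recognizer_scores = {"en": 0.55, "fr": 0.52, "de": 0.20}

def location_prior(lat: float, lon: float) -> dict[str, float]:
    """Map GPS coordinates to assumed language weights for that region."""
    if 41.0 < lat < 51.5 and -5.5 < lon < 9.5:   # roughly France
        return {"en": 0.2, "fr": 0.7, "de": 0.1}
    return {"en": 0.6, "fr": 0.2, "de": 0.2}     # default prior

def identify_language(scores: dict[str, float], lat: float, lon: float) -> str:
    """Combine recognizer confidence with the GPS-derived prior."""
    prior = location_prior(lat, lon)
    combined = {lang: scores[lang] * prior.get(lang, 0.0) for lang in scores}
    return max(combined, key=combined.get)

print(identify_language(recognizer_scores, 48.85, 2.35))  # Paris coordinates -> "fr"
```

Here the GPS prior breaks the near-tie between the English and French recognizers, which is the role the claims assign to the speakers' location information.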
43 Claims
1. A method for ability enhancement, the method comprising:
- by a computer system,
receiving data representing speech signals from a voice conference amongst multiple speakers, wherein the multiple speakers are remotely located from one another, wherein each of the multiple speakers uses a separate conferencing device to participate in the voice conference;
determining speaker-related information associated with the multiple speakers, based on the data representing speech signals from the voice conference;
recording conference history information based on the speaker-related information, by recording indications of topics discussed during the voice conference by:
performing speech recognition to convert the data representing speech signals into text;
analyzing the text to identify frequently used terms or phrases; and
determining the topics discussed during the voice conference based on the frequently used terms or phrases;
audibly notifying a user to view the conference history information on a display device, wherein the user is notified in a manner that is not audible to at least some of the multiple speakers; and
presenting, on the display device, at least some of the conference history information to the user;
translating an utterance of one of the multiple speakers in a first language into a message in a second language, based on the speaker-related information, wherein the speaker-related information is determined by automatically determining the second and the first language, comprising the steps of:
concurrently or simultaneously applying multiple speech recognizers and using GPS information indicating the speakers' locations; and
recording the message in the second language as part of the conference history information.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41)
42. A non-transitory computer-readable medium having contents that are configured, when executed, to cause a computing system to perform a method for ability enhancement, the method comprising:
- by the computer system,
receiving data representing speech signals from a voice conference amongst multiple speakers, wherein the multiple speakers are remotely located from one another, wherein each of the multiple speakers uses a separate conferencing device to participate in the voice conference;
determining speaker-related information associated with the multiple speakers, based on the data representing speech signals from the voice conference;
recording conference history information based on the speaker-related information, by recording indications of topics discussed during the voice conference by:
performing speech recognition to convert the data representing speech signals into text;
analyzing the text to identify frequently used terms or phrases; and
determining the topics discussed during the voice conference based on the frequently used terms or phrases;
audibly notifying a user to view the conference history information on a display device, wherein the user is notified in a manner that is not audible to at least some of the multiple speakers; and
presenting, on the display device, at least some of the conference history information to the user;
translating an utterance of one of the multiple speakers in a first language into a message in a second language, based on the speaker-related information, wherein the speaker-related information is determined by automatically determining the second and the first language, comprising the steps of:
concurrently or simultaneously applying multiple speech recognizers and using GPS information indicating the speakers' locations; and
recording the message in the second language as part of the conference history information.
43. A computing system for ability enhancement, the computing system comprising:
a processor; a memory; and a module that is stored in the memory and that is configured, when executed by the processor, to perform a method comprising:
by the computer system,
receiving data representing speech signals from a voice conference amongst multiple speakers, wherein the multiple speakers are remotely located from one another, wherein each of the multiple speakers uses a separate conferencing device to participate in the voice conference;
determining speaker-related information associated with the multiple speakers, based on the data representing speech signals from the voice conference;
recording conference history information based on the speaker-related information, by recording indications of topics discussed during the voice conference by:
performing speech recognition to convert the data representing speech signals into text;
analyzing the text to identify frequently used terms or phrases; and
determining the topics discussed during the voice conference based on the frequently used terms or phrases;
audibly notifying a user to view the conference history information on a display device, wherein the user is notified in a manner that is not audible to at least some of the multiple speakers; and
presenting, on the display device, at least some of the conference history information to the user;
translating an utterance of one of the multiple speakers in a first language into a message in a second language, based on the speaker-related information, wherein the speaker-related information is determined by automatically determining the second and the first language, comprising the steps of:
concurrently or simultaneously applying multiple speech recognizers and using GPS information indicating the speakers' locations; and
recording the message in the second language as part of the conference history information.
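Several claim steps record utterances and translated messages "as part of the conference history information." A minimal sketch of such a history record follows; the class name, entry fields, and sample data are illustrative assumptions, not structures defined in the patent:

```python
from dataclasses import dataclass, field

@dataclass
class ConferenceHistory:
    """Conference history holding transcript entries and translated messages."""
    entries: list = field(default_factory=list)

    def record_utterance(self, speaker: str, text: str, language: str) -> None:
        self.entries.append({"speaker": speaker, "text": text, "lang": language})

    def record_translation(self, speaker: str, translated: str,
                           target_language: str) -> None:
        # The translated message is recorded as part of the history itself,
        # alongside the original-language entries.
        self.entries.append({"speaker": speaker, "text": translated,
                             "lang": target_language, "translated": True})

history = ConferenceHistory()
history.record_utterance("Alice", "Bonjour à tous", "fr")
history.record_translation("Alice", "Hello everyone", "en")
print(len(history.entries))  # 2
```

Keeping translations in the same entry list lets the later "presenting ... conference history information" step display originals and translations uniformly.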
Specification