Customer sentiment analysis using recorded conversation
Abstract
A system is configured to receive voice emotion information, related to an audio recording, indicating that a vocal utterance of a speaker is spoken with negative or positive emotion. The system is configured to associate the voice emotion information with attribute information related to the audio recording, and aggregate the associated voice emotion and attribute information with other associated voice emotion and attribute information to form aggregated information. The system is configured to generate a report based on the aggregated information and one or more report parameters, and provide the report.
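The associate → aggregate → report flow the abstract describes can be sketched as follows. This is a minimal illustration only; the record types, field names, and grouping scheme are hypothetical, since the patent does not specify an implementation.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record types; the patent does not name any data structures.
@dataclass
class VoiceEmotion:
    utterance_id: str
    emotion: str        # e.g. "negative" or "positive"
    words: list[str]    # words or phrases spoken with that emotion

@dataclass
class Attributes:
    recording_id: str
    speaker: str

def associate(emotion: VoiceEmotion, attrs: Attributes) -> dict:
    """Associate voice emotion information with attribute information."""
    return {"speaker": attrs.speaker, "emotion": emotion.emotion,
            "words": emotion.words}

def aggregate(records: list[dict]) -> Counter:
    """Fold associated records into per-(speaker, emotion) word counts."""
    counts: Counter = Counter()
    for r in records:
        counts[(r["speaker"], r["emotion"])] += len(r["words"])
    return counts

def report(agg: Counter, emotion: str) -> dict:
    """Generate a report filtered by one report parameter: the emotion."""
    return {spk: n for (spk, emo), n in agg.items() if emo == emotion}
```

A report parameter here is simply the emotion to filter on; the claims allow arbitrary report parameters.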
20 Claims
1. A system comprising:
one or more server devices to:
   receive voice emotion information related to an audio recording,
      the audio recording containing a first vocal utterance of a first speaker and a second vocal utterance of a second speaker, and
      the voice emotion information indicating that the first vocal utterance is spoken with a particular emotion;
   associate a first word or phrase, within the first vocal utterance, with the first speaker and a second word or phrase, within the second vocal utterance, with the second speaker;
   associate the voice emotion information with attribute information related to the audio recording,
      the voice emotion information including information regarding at least one of the first word or phrase or the second word or phrase;
   aggregate the associated voice emotion and attribute information with other associated voice emotion and attribute information to form aggregated information;
   generate a report based on the aggregated information and one or more report parameters; and
   provide the report.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
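The word-to-speaker association step of claim 1 could look like the sketch below, assuming the utterances arrive already labeled by speaker (the input format is an assumption, not something the claim recites):

```python
def associate_words_with_speakers(utterances):
    """Map each word or phrase to the speaker of the utterance containing it.

    `utterances` is a list of (speaker, text) pairs, as a speaker-
    diarization step might produce; this shape is hypothetical.
    """
    word_to_speaker = {}
    for speaker, text in utterances:
        for word in text.split():
            word_to_speaker.setdefault(word, speaker)
    return word_to_speaker
```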
8. A non-transitory computer-readable medium storing instructions, the instructions comprising:
a plurality of instructions that, when executed by one or more processors, cause the one or more processors to:
   receive voice emotion information related to an audio recording,
      the audio recording containing a vocal utterance of a speaker, and
      the voice emotion information indicating that the vocal utterance relates to a particular emotion;
   receive attribute information related to the audio recording;
   associate the voice emotion information with the attribute information;
   aggregate the associated voice emotion and attribute information with other associated voice emotion and attribute information to form aggregated information;
   generate a report based on the aggregated information and one or more report parameters,
      the report including a count of words or phrases, within the aggregated information, associated with the particular emotion and relating to at least one of:
         the speaker,
         a location associated with the speaker,
         a product associated with the speaker, or
         a subject associated with the vocal utterance; and
   provide the report.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
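Claim 8's report, a count of emotion-associated words grouped by speaker, location, product, or subject, could be sketched as follows (the flat record layout and field names are illustrative assumptions):

```python
from collections import Counter

def count_by(records, emotion, group_field):
    """Count words associated with `emotion`, grouped by one report
    parameter: "speaker", "location", "product", or "subject".

    Each record is a dict carrying those attribute fields plus an
    "emotion" label and the "words" tied to it (hypothetical layout).
    """
    counts = Counter()
    for rec in records:
        if rec["emotion"] == emotion:
            counts[rec[group_field]] += len(rec["words"])
    return dict(counts)
```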
15. A method comprising:
receiving, by one or more processors, voice emotion information related to an audio recording,
   the audio recording containing a first vocal utterance of a first speaker and a second vocal utterance of a second speaker, and
   the voice emotion information indicating that the first vocal utterance is spoken with a particular emotion;
associating, by one or more processors, a first word or phrase, within the first vocal utterance, with the first speaker and a second word or phrase, within the second vocal utterance, with the second speaker;
associating, by one or more processors, the voice emotion information with attribute information, related to the first speaker, within a data structure,
   the voice emotion information including information regarding at least one of the first word or phrase or the second word or phrase;
aggregating, by one or more processors and within the data structure, the associated voice emotion and attribute information with other associated voice emotion and attribute information to form aggregated information;
receiving, by one or more processors, one or more report parameters;
generating, by one or more processors, a report based on the aggregated information and the one or more report parameters; and
outputting, by one or more processors, the report for display.
- View Dependent Claims (16, 17, 18, 19, 20)
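Claim 15 recites associating and aggregating "within a data structure." A plain dict keyed by speaker is one possible reading; the sketch below is a guess at that step, not the claimed implementation:

```python
def aggregate_into(store, speaker_attrs, emotion_info):
    """Associate emotion info with speaker attributes inside one data
    structure (a dict keyed by speaker), folding new entries into any
    prior entries for the same speaker. All field names are hypothetical."""
    key = speaker_attrs["speaker"]
    entry = store.setdefault(key, {"attributes": speaker_attrs, "emotions": []})
    entry["emotions"].append(emotion_info)
    return store
```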
Specification