Systems and methods for enhancing responsiveness to utterances having detectable emotion
First Claim
1. A method comprising:
receiving, via a user device, a natural utterance;
converting the natural utterance into text;
extracting a textual emotion feature from the text;
extracting a non-textual emotion feature from the natural utterance, the textual emotion feature and the non-textual emotion feature corresponding, respectively, to conflicting first and second commands;
selecting either the non-textual emotion feature or the textual emotion feature, thereby providing a selected emotion feature;
mapping the selected emotion feature to an emotion; and
responding to the natural utterance by providing a verbalized acknowledgement of the natural utterance adapted to the emotion and performing an action corresponding to the first or the second command associated with the selected emotion feature.
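The claim above recites a pipeline: transcribe the utterance, extract a textual and a non-textual emotion feature, resolve the conflict by selecting one feature, map it to an emotion, then acknowledge and act. A minimal sketch of that flow is below; every feature value, the selection rule, and the feature-to-emotion mapping are hypothetical placeholders for illustration, not the patented implementation.

```python
def extract_textual_feature(text):
    """Hypothetical: flag a lexical emotion cue in the transcript."""
    return "negative_words" if "stop" in text.lower() else "neutral_words"

def extract_non_textual_feature(pitch_hz):
    """Hypothetical: flag a prosodic cue taken from the raw audio."""
    return "raised_pitch" if pitch_hz > 220 else "calm_pitch"

def select_feature(textual, non_textual):
    """Hypothetical selection rule: when the two features imply
    conflicting commands, prefer the non-textual (prosodic) cue."""
    return non_textual

# Hypothetical mapping from a selected emotion feature to an emotion.
EMOTION_MAP = {
    "raised_pitch": "frustrated",
    "calm_pitch": "calm",
    "negative_words": "annoyed",
    "neutral_words": "neutral",
}

def respond(utterance_text, pitch_hz):
    textual = extract_textual_feature(utterance_text)
    non_textual = extract_non_textual_feature(pitch_hz)
    selected = select_feature(textual, non_textual)
    emotion = EMOTION_MAP[selected]
    # Verbalized acknowledgement adapted to the detected emotion.
    ack = f"You sound {emotion}; doing that now."
    return ack, selected

ack, feature = respond("Stop the music", pitch_hz=260)
```

Here the textual feature (`negative_words`) and the prosodic feature (`raised_pitch`) stand in for the "conflicting first and second commands" of the claim; the rule that resolves the conflict is the part the claim leaves open to implementation.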
Abstract
Methods, systems, and related products that provide emotion-sensitive responses to a user's commands and other utterances received at an utterance-based user interface. Acknowledgements of a user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance that have been mapped from one or more emotion features extracted from the utterance. In some examples, extraction of a user's changing emotion during a sequence of interactions is used to generate a response to a user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
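The abstract's final example, running emotion processing and command processing asynchronously, can be sketched as two concurrent tasks whose results are joined when both finish. The functions and timings below are hypothetical illustrations, not the patented implementation.

```python
import asyncio

async def process_command(text):
    await asyncio.sleep(0.01)  # stand-in for command parsing latency
    return f"action:{text.split()[0].lower()}"

async def process_emotion(text):
    await asyncio.sleep(0.02)  # stand-in for emotion-feature extraction
    return "frustrated" if "!" in text else "neutral"

async def respond(utterance):
    # Both paths run concurrently; the response is assembled once the
    # command and the detected emotion are both available.
    command, emotion = await asyncio.gather(
        process_command(utterance), process_emotion(utterance)
    )
    return f"[{emotion}] {command}"

result = asyncio.run(respond("Skip this track!"))
```

The point of the concurrency is that a slow emotion model need not delay command execution, and vice versa; the join happens only at response assembly.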
25 Claims
1. (Recited above as the First Claim.) - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
13. A non-transitory computer readable medium comprising:
an emotion processor having one or more sequences of emotion processor instructions that, when executed by one or more processors, cause the one or more processors to generate an output adapted to an emotion detected in a natural utterance; and
one or more sequences of instructions which, when executed by one or more processors, cause the one or more processors to:
receive the natural utterance;
convert the natural utterance into text;
extract a textual emotion feature from the text;
extract a non-textual emotion feature from the natural utterance, the textual emotion feature and the non-textual emotion feature corresponding, respectively, to conflicting first and second commands;
select either the non-textual emotion feature or the textual emotion feature, thereby providing a selected emotion feature;
map the selected emotion feature to an emotion; and
respond to the natural utterance by providing a verbalized acknowledgement adapted to the emotion and performing an action corresponding to the first or the second command associated with the selected emotion feature.
14. A system comprising:
one or more processors configured to execute one or more sequences of computer-readable instructions that cause the one or more processors to:
convert a received natural utterance into text;
extract a textual emotion feature from the text;
extract a non-textual emotion feature from the natural utterance, the textual emotion feature and the non-textual emotion feature corresponding, respectively, to conflicting first and second commands;
select either the non-textual emotion feature or the textual emotion feature, thereby providing a selected emotion feature;
map the selected emotion feature to an emotion; and
respond to the natural utterance by providing a verbalized acknowledgement adapted to the emotion and performing an action corresponding to the first or the second command associated with the selected emotion feature.
- View Dependent Claims (15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25)
Specification