CUSTOMIZABLE METHOD AND SYSTEM FOR EMOTIONAL RECOGNITION
Abstract
An automated emotional recognition system is adapted to determine emotional states of a speaker based on the analysis of a speech signal. The emotional recognition system includes at least one server function and at least one client function in communication with the at least one server function for receiving assistance in determining the emotional states of the speaker. The at least one client function includes an emotional features calculator adapted to receive the speech signal and to extract therefrom a set of speech features indicative of the emotional state of the speaker. The emotional state recognition system further includes at least one emotional state decider adapted to determine the emotional state of the speaker exploiting the set of speech features based on a decision model. The server function includes at least a decision model trainer adapted to update the selected decision model according to the speech signal. The decision model to be used by the emotional state decider for determining the emotional state of the speaker is selectable based on a context of use of the recognition system.
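The architecture described in the abstract (client-side feature extraction and emotional state decision, server-side model training, and context-based selection of the decision model) can be illustrated with a minimal sketch. All class names, the toy features, and the nearest-centroid "decision model" below are illustrative assumptions; the patent does not prescribe any particular algorithm, feature set, or API.

```python
# Illustrative sketch only: names, features, and the nearest-centroid model
# are assumptions, not the patent's prescribed implementation.
import math

class EmotionalFeaturesCalculator:
    """Client function: extracts speech features indicative of emotional state."""
    def extract(self, speech_signal):
        # Toy features: mean absolute amplitude and zero-crossing rate.
        n = len(speech_signal)
        mean_amp = sum(abs(s) for s in speech_signal) / n
        zcr = sum(1 for a, b in zip(speech_signal, speech_signal[1:])
                  if a * b < 0) / n
        return (mean_amp, zcr)

class EmotionalStateDecider:
    """Client function: maps a feature vector to an emotional state via a model."""
    def decide(self, features, model):
        # Nearest-centroid decision over the model's per-state centroids.
        return min(model, key=lambda state: math.dist(features, model[state]))

class DecisionModelTrainer:
    """Server function: updates the selected decision model from new features."""
    def update(self, model, state, features, rate=0.1):
        centroid = model[state]
        model[state] = tuple(c + rate * (f - c)
                             for c, f in zip(centroid, features))
        return model

# The decision model is selectable based on the context of use.
models = {
    "call-center": {"neutral": (0.2, 0.05), "angry": (0.8, 0.3)},
    "car":         {"neutral": (0.3, 0.10), "angry": (0.9, 0.4)},
}

signal = [0.5, -0.6, 0.7, -0.8, 0.9, -0.7, 0.8, -0.9]
features = EmotionalFeaturesCalculator().extract(signal)
model = models["call-center"]                 # context-based model selection
state = EmotionalStateDecider().decide(features, model)
DecisionModelTrainer().update(model, state, features)
print(state)
```

The split mirrors the claims: the calculator and decider reside in the client function, while the trainer resides in the server function, which the client communicates with to keep the selected model updated.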
62 Claims
1-37. (canceled)
38. An automated emotional recognition system for determining emotional states of a speaker based on analysis of a speech signal, comprising:
at least one server function;

at least one client function in communication with the at least one server function for receiving assistance in determining the emotional states of the speaker, wherein the at least one client function comprises an emotional features calculator capable of being adapted to receive the speech signal and to extract therefrom a set of speech features indicative of an emotional state of the speaker; and

at least one emotional state decider capable of being adapted to determine the emotional state of the speaker exploiting the set of speech features based on a decision model,

the server function comprising at least a decision model trainer capable of being adapted to update the decision model according to the speech features, and

the decision model to be used by the emotional state decider for determining the emotional state of the speaker being selectable based on a context of use of the recognition system.

Dependent claims: 39-47.
48. A method for the automatic emotional recognition of a speaker capable of being adapted to determine emotional states of the speaker based on analysis of a speech signal, comprising:
having a client function extract from the speech signal a set of speech features indicative of an emotional state;

using a decision model for performing emotional state decision operations on at least one emotional state decider, thereby determining the emotional state from the set of speech features; and

having a server function, in communication relationship with the client function, at least updating the decision model according to the speech features,

the decision model to be used by the emotional state decider for determining the emotional state of the speaker being selectable based on a context of use of the recognition method.

Dependent claims: 49-62.
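The three steps of the method of claim 48 can be sketched as a simple client-server exchange: the client extracts features, a decider applies the context-selected decision model, and the server updates that model from the features. The message format, the energy feature, and the threshold model below are hypothetical; the claim fixes neither a protocol nor a decision algorithm.

```python
# Illustrative sketch of claim 48's steps; all names and formats are assumed.
import json

def client_extract(speech_signal):
    """Step 1: the client function extracts speech features from the signal."""
    energy = sum(s * s for s in speech_signal) / len(speech_signal)
    return {"energy": round(energy, 4)}

def decide(features, model):
    """Step 2: an emotional state decider applies the selected decision model."""
    return "aroused" if features["energy"] > model["energy_threshold"] else "calm"

def server_update(model, features, rate=0.05):
    """Step 3: the server function updates the decision model from the features."""
    model["energy_threshold"] += rate * (features["energy"] - model["energy_threshold"])
    return model

# The decision model is selected by context of use before any decision is made.
models = {"hands-free": {"energy_threshold": 0.25},
          "desktop":    {"energy_threshold": 0.40}}
model = models["hands-free"]

features = client_extract([0.1, 0.6, -0.7, 0.5, -0.4])
payload = json.dumps(features)           # features travel to the server function
state = decide(features, model)
model = server_update(model, json.loads(payload))
print(state, model["energy_threshold"])
```

Serializing the features (rather than the raw signal) reflects the claim's wording that the server updates the model "according to the speech features" received from the client.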
Specification