Method for detecting emotions from speech using speaker identification
First Claim
1. Method for detecting emotions from speech input of at least one speaker, wherein a process of speaker identification is carried out on a given speech input (SI) so as to obtain speaker identification and/or classification data (SID) and wherein a process of recognizing an emotional state or a change thereof for said speaker from said speech input (SI) is adapted and/or configured according to said speaker identification and/or classification data (SID), in particular so as to reduce an error rate of the process of recognizing said emotional state.
Abstract
To reduce the error rate when classifying emotions from an acoustic speech input (SI) alone, it is suggested to include a process of speaker identification that obtains speaker identification data (SID), on the basis of which the process of recognizing an emotional state is adapted and/or configured. In particular, speaker-specific feature extractors (FE) and/or emotion classifiers (EC) are selected based on said speaker identification data (SID).
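The adaptation scheme described in the abstract can be sketched as follows: a speaker-identification step produces speaker identification data (SID), which then selects a speaker-specific feature extractor (FE) and emotion classifier (EC). This is a minimal illustrative sketch only; all function names, the toy features, and the decision rules are hypothetical and stand in for the trained components a real system would use.

```python
# Hypothetical sketch: speaker identification (SID) selects speaker-specific
# feature extractors (FE) and emotion classifiers (EC), as in the abstract.
# All names, features, and thresholds here are illustrative placeholders.
from typing import Callable, Dict, List, Tuple

# Per-speaker components, keyed by speaker identification data (SID).
FEATURE_EXTRACTORS: Dict[str, Callable[[List[float]], List[float]]] = {
    "speaker_a": lambda samples: [sum(samples) / len(samples)],  # toy mean feature
    "default":   lambda samples: [max(samples) - min(samples)],  # toy range feature
}

EMOTION_CLASSIFIERS: Dict[str, Callable[[List[float]], str]] = {
    "speaker_a": lambda feats: "excited" if feats[0] > 0.3 else "neutral",
    "default":   lambda feats: "neutral",
}

def identify_speaker(speech_input: List[float]) -> str:
    """Stand-in for the speaker-identification process (returns SID)."""
    # A real system would match voice characteristics against enrolled
    # speaker models; here we always recognize the enrolled "speaker_a".
    return "speaker_a"

def detect_emotion(speech_input: List[float]) -> Tuple[str, str]:
    """Adapt the emotion-recognition process to the identified speaker."""
    sid = identify_speaker(speech_input)
    # Fall back to speaker-independent components for unknown speakers.
    fe = FEATURE_EXTRACTORS.get(sid, FEATURE_EXTRACTORS["default"])
    ec = EMOTION_CLASSIFIERS.get(sid, EMOTION_CLASSIFIERS["default"])
    return sid, ec(fe(speech_input))

print(detect_emotion([0.2, 0.4, 0.6]))  # -> ('speaker_a', 'excited')
```

The point of the dictionary lookup is the claimed error-rate reduction: by routing the input through components tuned to the identified speaker rather than a single speaker-independent model, the recognizer can account for individual variation in how emotions manifest in speech.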
14 Claims
Specification