MULTI-MODAL SENSOR BASED EMOTION RECOGNITION AND EMOTIONAL INTERFACE
Abstract
Features, including one or more acoustic features, visual features, linguistic features, and physical features, may be extracted from signals obtained by one or more sensors with a processor. The acoustic, visual, linguistic, and physical features may be analyzed with one or more machine learning algorithms, and an emotional state of a user may be extracted from analysis of the features. It is emphasized that this abstract is provided to comply with the rules requiring an abstract that will allow a searcher or other reader to quickly ascertain the subject matter of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
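The abstract describes a generic pipeline: per-modality features are fused and passed to a machine learning algorithm that outputs an emotional state. The patent does not specify a particular algorithm or feature set, so the sketch below is only a minimal illustration of that pipeline, assuming pre-extracted fixed-length feature vectors per modality, scikit-learn's RandomForestClassifier as a stand-in for the claimed machine learning algorithms, and a hypothetical four-way emotion label set.

```python
# Minimal sketch of the claimed pipeline: per-modality features are
# concatenated (early fusion) and fed to a generic classifier.
# All names, dimensions, and labels are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # hypothetical label set

def fuse_features(acoustic, visual, linguistic, physical):
    """Concatenate per-modality feature vectors into one input vector."""
    return np.concatenate([acoustic, visual, linguistic, physical])

# Train on hypothetical labeled data (features already extracted from
# sensor signals by upstream, modality-specific front ends).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))           # 200 samples, 40 fused features
y = rng.integers(0, len(EMOTIONS), 200)  # stand-in emotion labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# Inference: extract an emotional state from one fused feature vector.
sample = fuse_features(rng.normal(size=10), rng.normal(size=10),
                       rng.normal(size=10), rng.normal(size=10))
print(EMOTIONS[clf.predict(sample.reshape(1, -1))[0]])
```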
28 Claims
1. A method for determining an emotional state of a user, comprising:

extracting features including one or more acoustic features, visual features, linguistic features, and physical features from signals obtained by one or more sensors with a processor;

analyzing the features including the acoustic features, visual features, linguistic features, and physical features with one or more machine learning algorithms implemented on a processor; and

extracting an emotional state of the user from analysis of the features including analysis of the acoustic features, visual features, linguistic features, and physical features with the one or more machine learning algorithms.

(Dependent claims 2-16 not shown.)
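Claim 1's first step, extracting features from sensor signals with a processor, is left open as to which features are computed. As one hedged illustration, the sketch below computes two classic acoustic descriptors (short-time energy and zero-crossing rate) from a raw microphone signal; the frame length, the feature choices, and the summary statistics are all assumptions, not taken from the patent.

```python
# Illustrative sketch of the "extracting features" step for a single
# modality: simple acoustic descriptors from a raw microphone signal.
import numpy as np

def acoustic_features(signal, frame_len=400):
    """Frame the signal, compute per-frame energy and zero-crossing
    rate, then summarize each with its mean and standard deviation."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

# Example: a synthetic 1-second, 16 kHz "sensor signal".
sig = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
print(acoustic_features(sig))
```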
17. An apparatus for determining an emotional state of a user, comprising:

a processor; and

instructions executable by the processor, wherein the instructions are configured, when executed, to extract one or more acoustic features, visual features, linguistic features, and physical features of the user from signals obtained by one or more sensors, analyze the acoustic features, visual features, linguistic features, and physical features with one or more machine learning algorithms; and extract an emotional state of the user from analysis of the acoustic features, visual features, linguistic features, and physical features with the one or more machine learning algorithms.

(Dependent claims 18-27 not shown.)
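Claim 17 recites the same steps as an apparatus: a processor plus instructions that extract, analyze, and output. One plausible software rendering is a class whose methods mirror the claimed instructions. Everything below (the EmotionRecognizer name, the mean/variance front end, LogisticRegression as the machine learning algorithm, the label set) is an illustrative assumption, not the patent's design.

```python
# Hedged sketch of the claimed apparatus as a software object: the
# "instructions executable by the processor" become methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

class EmotionRecognizer:
    def __init__(self, model, labels):
        self.model = model    # stand-in for the machine learning algorithm
        self.labels = labels  # hypothetical emotional-state label set

    def extract(self, sensor_signals):
        """Map raw per-sensor signals to a fixed-length feature vector
        (mean and variance per channel, as a placeholder front end)."""
        return np.concatenate(
            [[np.mean(s), np.var(s)] for s in sensor_signals])

    def analyze(self, sensor_signals):
        """Run the extracted features through the model and return the
        extracted emotional state."""
        features = self.extract(sensor_signals).reshape(1, -1)
        return self.labels[self.model.predict(features)[0]]

# Hypothetical usage with a pre-trained model and four sensor streams.
rng = np.random.default_rng(1)
model = LogisticRegression(max_iter=1000).fit(
    rng.normal(size=(50, 8)), rng.integers(0, 2, 50))
rec = EmotionRecognizer(model, ["calm", "stressed"])
print(rec.analyze([rng.normal(size=100) for _ in range(4)]))
```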
28. A non-transitory computer-readable medium having computer-executable instructions embodied therein, wherein the instructions are configured, when executed, to implement a method for determining an emotional state of a user, the method comprising:

extracting one or more acoustic features, visual features, linguistic features, and physical features from signals obtained by one or more sensors;

analyzing the acoustic features, visual features, linguistic features, and physical features with one or more machine learning algorithms; and

extracting an emotional state of the user from analysis of the acoustic features, visual features, linguistic features, and physical features with the one or more machine learning algorithms.
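Claim 28 restates the method in computer-readable-medium form. Where the abstract's sketch used early fusion (one classifier over concatenated features), the phrase "one or more machine learning algorithms" also reads on late fusion: a separate classifier per modality whose outputs are combined. The sketch below shows that alternative reading under assumed modalities, dimensions, and labels, using soft voting over per-modality SVMs; none of these choices come from the patent.

```python
# Late-fusion reading of "one or more machine learning algorithms":
# one classifier per modality, combined by averaging class probabilities.
import numpy as np
from sklearn.svm import SVC

MODALITIES = ["acoustic", "visual", "linguistic", "physical"]
rng = np.random.default_rng(2)

# One hypothetical classifier per modality, trained independently.
models = {}
for m in MODALITIES:
    X = rng.normal(size=(100, 6))  # per-modality training features
    y = rng.integers(0, 3, 100)    # stand-in emotion labels 0..2
    models[m] = SVC(probability=True).fit(X, y)

def predict_emotion(features_by_modality):
    """Average per-modality class probabilities and pick the argmax."""
    probs = [models[m].predict_proba(f.reshape(1, -1))[0]
             for m, f in features_by_modality.items()]
    return int(np.argmax(np.mean(probs, axis=0)))

sample = {m: rng.normal(size=6) for m in MODALITIES}
print(predict_emotion(sample))
```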