Method and apparatus for tailoring the output of an intelligent automated assistant to a user
First Claim
1. A method for conducting an interaction with a user, the method comprising:
- collecting data about the user using at least one audio sensor positioned in a vicinity of the user;
- extracting feature data from the collected data using a plurality of feature extractors, wherein the feature data includes at least one feature of the collected data;
- combining the feature data from the plurality of feature extractors to produce combined features;
- modeling ones of the combined features as joint features;
- classifying at least one of the joint features by at least one classifier using at least one model that defines an affective state of the user in accordance with the collected data; and
- tailoring an output to be delivered to the user in accordance with the affective state.
Abstract
The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences.
18 Claims
1. A method for conducting an interaction with a user, the method comprising:

- collecting data about the user using at least one audio sensor positioned in a vicinity of the user;
- extracting feature data from the collected data using a plurality of feature extractors, wherein the feature data includes at least one feature of the collected data;
- combining the feature data from the plurality of feature extractors to produce combined features;
- modeling ones of the combined features as joint features;
- classifying at least one of the joint features by at least one classifier using at least one model that defines an affective state of the user in accordance with the collected data; and
- tailoring an output to be delivered to the user in accordance with the affective state.

Dependent claims: 2, 3, 4, 5, 16.
6. A non-transitory computer readable medium containing an executable program for conducting an interaction with a user, where the program performs steps comprising:

- collecting data about the user using at least one audio sensor positioned in a vicinity of the user;
- extracting feature data from the collected data using a plurality of feature extractors, wherein the feature data includes at least one feature of the collected data;
- combining the feature data from the plurality of feature extractors to produce combined features;
- modeling ones of the combined features as joint features;
- classifying at least one of the joint features by at least one classifier using at least one model that defines an affective state of the user in accordance with the collected data; and
- tailoring an output to be delivered to the user in accordance with the affective state.

Dependent claims: 12, 14, 15, 17.
7. A system for conducting an interaction with a user, the system comprising:

- at least one audio sensor positioned in a vicinity of the user for collecting data about the user;
- a plurality of feature extractors for receiving the collected data and extracting feature data including at least one feature from the collected data;
- a feature combination module for combining the feature data received from the plurality of feature extractors to produce combined features and modeling ones of the combined features as joint features;
- at least one classifier for classifying at least one of the joint features using at least one model that defines an affective state of the user in accordance with the collected data; and
- an output selection module for tailoring an output to be delivered to the user in accordance with the affective state.

Dependent claims: 8, 9, 10, 11, 13, 18.
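The claimed pipeline (collect audio data, extract features with a plurality of extractors, combine them into joint features, classify an affective state, tailor the output) can be illustrated with a minimal sketch. This is not the patented implementation: the two extractors, the threshold classifier, and the response strings are all hypothetical stand-ins for the claimed models and modules.

```python
# Illustrative sketch only: a toy version of the claimed pipeline.
# Every feature, threshold, and response below is a hypothetical example.
from statistics import mean

def energy_extractor(samples):
    # Feature extractor 1: average absolute amplitude ("energy").
    return {"energy": mean(abs(s) for s in samples)}

def rate_extractor(samples):
    # Feature extractor 2: zero-crossing rate, a rough proxy for speech rate.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return {"zero_crossing_rate": crossings / max(len(samples) - 1, 1)}

def combine_features(feature_dicts):
    # Combine the per-extractor feature data into one joint feature vector.
    joint = {}
    for d in feature_dicts:
        joint.update(d)
    return joint

def classify_affect(joint):
    # Toy classifier standing in for the claimed model: high energy plus a
    # fast zero-crossing rate is read as an agitated affective state.
    if joint["energy"] > 0.5 and joint["zero_crossing_rate"] > 0.3:
        return "agitated"
    return "calm"

def tailor_output(affective_state):
    # Output selection: choose a response style in accordance with the state.
    if affective_state == "agitated":
        return "I understand this is frustrating. Let me help right away."
    return "Sure, here is the information you asked for."

samples = [0.9, -0.8, 0.7, -0.9, 0.8, -0.7]           # collected audio data
features = [energy_extractor(samples), rate_extractor(samples)]
joint = combine_features(features)                     # joint features
state = classify_affect(joint)                         # affective state
print(state, "->", tailor_output(state))               # agitated -> ...
```

In a real system, each extractor, the combination module, and the classifier would correspond to the separate components recited in claim 7, with trained models replacing the fixed thresholds shown here.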
Specification