Method and system for automation of response selection and composition in dialog systems
Abstract
A dialog system includes a processor. The system can further include a dialog manager configured to receive input from a user using the processor. The system can further include a user category classification and detection module configured to identify categories for the user from the received input. The system can further include a user mood detection and tracking module configured to identify a mood of the user. The system can further include a user physical and mind state and energy level detection module configured to identify a mental status of the user. The system can further include a user acquaintance module configured to identify an acquaintance status of the user. The system can further include a user personality detection and tracking module configured to identify a personality status of the user. The system can further include a conversational context detection and management module and a response generation module.
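The module composition described in the abstract can be sketched as a manager that routes each utterance through a set of detection modules, each annotating a shared user profile. This is a minimal illustrative sketch only; all class names, field names, and the stub mood heuristic below are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the module pipeline from the abstract.
# All names and heuristics are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    categories: list = field(default_factory=list)  # e.g. age group, gender
    mood: str = "neutral"
    mental_status: str = "alert"
    acquaintance_status: str = "new"       # new vs. returning user
    personality_status: str = "unknown"

class DialogManager:
    """Routes received user input through the detection modules."""
    def __init__(self, modules):
        self.modules = modules             # ordered list of (name, fn) pairs

    def receive(self, speech_input: str) -> UserProfile:
        profile = UserProfile()
        for name, detect in self.modules:
            detect(speech_input, profile)  # each module annotates the profile
        return profile

# Example: a stub mood module that flags exclamatory speech as excited.
def mood_module(speech, profile):
    profile.mood = "excited" if speech.endswith("!") else "neutral"

dm = DialogManager([("mood", mood_module)])
print(dm.receive("Hello there!").mood)     # → excited
```

Each real module (category classification, acquaintance tracking, and so on) would slot into the same `(name, fn)` interface, which is why the claims can recite them as independent modules sharing one processor.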
10 Claims
1. A system comprising:

a processor;
a dialog manager configured to receive input from a user, the received input being the user's speech;
a user category classification and detection module configured to use the processor to identify categories for the user with reference to the received input;
a user mood detection and tracking module configured to use the processor to identify a mood of the user with reference to the received input;
a user physical and mind state and energy level detection module configured to use the processor to identify at least one of a physical status and a mental status of the user with reference to the received input;
a user acquaintance module configured to use the processor to identify an acquaintance status of the user with reference to the received input;
a user personality detection and tracking module configured to use the processor to identify a personality status of the user with reference to the received input;
a conversational context detection and management module configured to use the processor to identify a conversational context of the received input; and
a response generation module configured to use the processor to build a knowledge base and generate a response for the user with reference to the received input and to vocalize the response for the user, the categories for the user, the mood of the user, the mental status of the user, the acquaintance status of the user, the personality status of the user, and the conversational context of the received input,
wherein the user category classification and detection module is further configured to associate at least one of an age, a gender, a profession and a relationship with the user with reference to at least one of a voice characteristic of the user, at least one image of the user, and at least one video of the user, and
wherein the response generation module is configured to assign a voice type for vocalizing the response and to select at least one voice characteristic for the voice type based at least in part on the at least one of the age, the gender, the profession, and the relationship associated with the user.
2. The system of claim 1, further comprising:
a database, the knowledge base stored in the database.
3. The system of claim 1, wherein the knowledge base includes at least one of a rich word dictionary and a rich expression dictionary.
4. The system of claim 1, wherein the response generation module is further configured to generate the response by:
selecting words with reference to the received input, the categories for the user, the mood of the user, the mental status of the user, the acquaintance status of the user, the personality status of the user, and the conversational context of the received input;
selecting a tone based on the received input, the categories for the user, the mood of the user, the mental status of the user, the acquaintance status of the user, the personality status of the user, and the conversational context of the received input; and
selecting a volume level based on the received input, the categories for the user, the mood of the user, the mental status of the user, the acquaintance status of the user, the personality status of the user, and the conversational context of the received input.
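Claim 4 recites selecting words, tone, and volume from the same set of user signals. The following is an illustrative sketch only, assuming hypothetical signal values and mappings that the patent does not specify:

```python
# Illustrative sketch of claim 4's three selections (words, tone, volume)
# driven by the same user signals. All mappings are invented assumptions.
def generate_response(received_input, mood, mental_status, acquaintance_status):
    # Word selection: returning users get a familiar greeting.
    words = "Welcome back!" if acquaintance_status == "returning" else "Hello."
    # Tone selection: mirror the detected mood.
    tone = {"happy": "upbeat", "sad": "gentle"}.get(mood, "neutral")
    # Volume selection: lower the volume for a tired user.
    volume = "low" if mental_status == "tired" else "medium"
    return {"words": words, "tone": tone, "volume": volume}

print(generate_response("hi", "sad", "tired", "returning"))
# → {'words': 'Welcome back!', 'tone': 'gentle', 'volume': 'low'}
```

The point of the sketch is that the three selections are independent functions of one shared signal set, which is how the claim distinguishes word choice from delivery (tone and volume).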
10. A method of operating a dialog system configured to interact with a user using spoken dialog, the method comprising:
receiving voice input into the system via a voice detection system, the voice input indicative of a spoken statement spoken by a user;
using a user category classification and detection module to: determine at least one voice characteristic of the voice input; and identify at least one of an age, a gender, and an identity of the user with reference to the at least one voice characteristic;
using a user mood detection and tracking module to associate at least one mood with the user with reference to the at least one voice characteristic;
using a physical and mind state and energy level detection module to associate at least one of a physical state and a mental state with the user with reference to the at least one voice characteristic;
using a user acquaintance module to associate an acquaintance status with the user with reference to the at least one voice characteristic and to a history of prior interaction of the user with the system;
using a user personality detection and tracking module to associate a personality status with the user with reference to the at least one voice characteristic and to the history;
using a conversational context detection and management module to associate a conversational context with the voice input with reference to the at least one voice characteristic;
generating a response to the spoken statement and assigning a voice type to the response, the voice type having at least one voice characteristic that is selected with reference to the category, the mood, the at least one of physical and mental state, the personality, and the conversational context associated with the user; and
using the system to vocalize the response with the assigned voice type having the corresponding voice characteristics,
wherein the user category classification and detection module is further configured to associate at least one of an age, a gender, a profession and a relationship with the user with reference to at least one of a voice characteristic of the user, at least one image of the user, and at least one video of the user, and
wherein the response generation module is configured to assign a voice type for vocalizing the response and to select at least one voice characteristic for the voice type based at least in part on at least one of the age, the gender, the profession, and the relationship associated with the user.
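Claim 10 ends with the voice-type assignment: the response's voice characteristics are chosen from attributes such as the user's age and gender. A minimal sketch of that selection step, with entirely hypothetical characteristic values and thresholds:

```python
# Sketch of claim 10's voice-type assignment: derive voice characteristics
# (pitch, rate) from the identified age and gender. All values and cutoffs
# below are illustrative assumptions, not taken from the patent.
def assign_voice_type(age: int, gender: str):
    pitch = "high" if age < 12 else ("medium" if gender == "female" else "low")
    rate = "slow" if age >= 70 else "normal"   # slower delivery for older users
    return {"pitch": pitch, "rate": rate}

print(assign_voice_type(age=8, gender="female"))   # → {'pitch': 'high', 'rate': 'normal'}
print(assign_voice_type(age=75, gender="male"))    # → {'pitch': 'low', 'rate': 'slow'}
```

In a real text-to-speech pipeline these characteristics would map onto synthesizer parameters (for example, SSML prosody attributes), but the claim only requires that the selection depend on the detected user attributes.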
Specification