Application of personality models and interaction with synthetic characters in a computing system
Abstract
An apparatus includes a video input unit and an audio input unit. The apparatus also includes a multi-sensor fusion/recognition unit coupled to the video input unit and the audio input unit, and a processor coupled to the multi-sensor fusion/recognition unit. The multi-sensor fusion/recognition unit decodes a combined video and audio stream containing a set of user inputs.
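The fusion step described in the abstract can be illustrated with a minimal sketch. This is not the patent's implementation; the event type, field names, and the 0.6 confidence threshold are all assumptions. The idea shown is simply pooling per-modality candidate cues and keeping the single most confident one:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UserEvent:
    kind: str          # "facial_expression", "gesture", or "verbal_command"
    label: str         # e.g. "smile", "wave", "open calendar"
    confidence: float  # recognizer confidence in [0, 1]

def fuse(candidates: List[UserEvent], threshold: float = 0.6) -> Optional[UserEvent]:
    """Keep only cues whose confidence clears the threshold and return
    the single strongest one, emulating the unit capturing one of the
    possible user inputs from a combined video/audio stream."""
    viable = [c for c in candidates if c.confidence >= threshold]
    return max(viable, key=lambda c: c.confidence, default=None)
```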
419 Citations
15 Claims
1. An apparatus, comprising:
a multiple-sensor recognition unit to receive and process image and audio data to capture one of a facial expression, a gesture and a recognized verbal command input from a user;
an agent to provide responses to the user based on the captured one of a facial expression, a gesture and a recognized verbal command;
a personality database containing a set of personality models, each model containing a set of parameters comprised of personality traits, wherein each personality trait is assigned a score and the set of parameters represents a range of scores to create a personality profile to influence the behavior of responses provided by the agent; and
an event/response database to store previously captured facial expressions, gestures, and recognized verbal commands and previously given agent responses, from which the agent learns, wherein responses by the agent are based on the information stored in the event/response database, on current user inputs, and are subjected to noise to make the agent respond in a humanly unpredictable manner.

2. The apparatus of claim 1, further comprising:
a speaker output unit; and
a video output unit, wherein audio outputs for the speaker output unit and video outputs for the video output unit are based on the responses from the agent.
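The personality scoring and noise injection recited in claim 1 can be sketched as follows. This is a minimal illustration under assumed trait names and a uniform noise model, not the claimed implementation: each trait carries a score, the profile biases response selection, and injected noise keeps the agent from being fully deterministic.

```python
import random

# Assumed trait names and scores forming one personality profile.
PROFILE = {"extraversion": 0.8, "agreeableness": 0.6, "humor": 0.9}

def score_response(response_traits, profile, noise=0.1, rng=random):
    """Average the profile scores of the traits a candidate response
    expresses, then perturb the result with uniform noise so the
    agent does not respond fully predictably."""
    base = sum(profile.get(t, 0.0) for t in response_traits) / max(len(response_traits), 1)
    return base + rng.uniform(-noise, noise)

def choose_response(candidates, profile, rng=random):
    """candidates maps response text to the traits it expresses,
    e.g. drawn from an event/response history the agent learns from."""
    return max(candidates, key=lambda r: score_response(candidates[r], profile, rng=rng))
```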
3. The apparatus of claim 1, further comprising:
a knowledge database to store information that allows the agent to respond to factual and subjective questions from a user; and
a link to an internet, wherein answers to factual questions not contained in the knowledge database are to be searched in the internet, retrieved and stored in the knowledge database to allow the agent to respond to factual questions not stored in the knowledge database.
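The lookup-then-search flow of claim 3 could be sketched like this. Here `web_search` is a hypothetical stand-in for an internet query and the knowledge database is modeled as a plain dict; the point shown is the fallback-and-cache behavior:

```python
def answer_factual(question, knowledge_db, web_search):
    """Answer from the local knowledge database when possible;
    otherwise search the internet, cache the result, and answer."""
    if question in knowledge_db:
        return knowledge_db[question]
    answer = web_search(question)        # hypothetical internet lookup
    if answer is not None:
        knowledge_db[question] = answer  # store so the next ask is local
    return answer
```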
4. The apparatus of claim 1, further comprising:
a voice/gesture unit, wherein the voice/gesture unit is to receive information from the agent and information from the personality database to formulate voice inflections and gestures in the responses provided by the agent.
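A voice/gesture unit as in claim 4 might combine the agent's text with personality parameters to set delivery. The parameter names and the linear mapping below are illustrative assumptions, not the patent's method:

```python
def formulate_delivery(text, profile):
    """Map assumed personality parameters onto prosody and gesture
    amplitude for the agent's spoken response."""
    pitch_variation = 0.5 + 0.5 * profile.get("extraversion", 0.5)
    gesture_amplitude = profile.get("expressiveness", 0.5)
    return {"text": text,
            "pitch_variation": round(pitch_variation, 2),
            "gesture_amplitude": gesture_amplitude}
```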
5. An apparatus, comprising:
a multi-sensor recognition unit to receive and process image and audio data to simultaneously capture two of a facial expression, a gesture and a recognized verbal command input from a user;
an agent to provide responses to the user based on the captured two of a facial expression, a gesture and a recognized verbal command;
a personality database containing a set of personality models, each model containing a set of parameters comprised of personality traits, wherein each personality trait is assigned a score and the set of parameters represents a range of scores to create a personality profile to influence the behavior of responses provided by the agent; and
an event/response database to store previously captured facial expressions, gestures, and recognized verbal commands and previously given agent responses, from which the agent learns, wherein responses by the agent are based on the information stored in the event/response database, on current user inputs, and are subjected to noise to make the agent respond in a humanly unpredictable manner.

6. The apparatus of claim 5, further comprising:
a speaker output unit; and
a video output unit, wherein audio outputs for the speaker output unit and video outputs for the video output unit are based on the responses from the agent.
7. The apparatus of claim 6, further comprising:
a knowledge database to store information to allow the agent to respond to factual and subjective questions from a user; and
a link to an internet, wherein answers to factual questions not contained in the knowledge database are to be searched in the internet, retrieved and stored in the knowledge database to allow the agent to respond to factual questions not stored in the knowledge database.
8. The apparatus of claim 6, further comprising:
a voice/gesture unit, wherein the voice/gesture unit is to receive information from the agent and information from the personality database to formulate voice inflections and gestures in the responses provided by the agent.
9. A method, comprising:
receiving and processing image and audio data to capture one or more of a facial expression, a gesture and a recognized verbal command input from a user;
providing responses to the user from an automated agent, based on the captured one or more of a facial expression, a gesture and a recognized verbal command;
influencing a behavior of the responses provided by the agent using personality models, each model containing a set of parameters comprised of personality traits, wherein each personality trait is assigned a score and the set of parameters represents a range of scores to create a personality profile to influence the behavior of responses provided by the agent; and
the agent learning from previously captured facial expressions, gestures, and recognized verbal commands and previously given agent responses, wherein the responses by the agent are based on past user inputs, current user inputs, and are subjected to noise to make the agent respond in a humanly unpredictable manner.

10. The method of claim 9, further comprising:
creating audio and visual outputs based on the responses from the agent.
11. The method of claim 10, further comprising:
the agent responding to factual and subjective questions from a user; and
searching, retrieving and storing information from an internet to assist in responding to the factual questions.
12. The method of claim 10, further comprising:
receiving information to formulate voice inflections and gestures in the responses provided by the agent.
13. An article of manufacture, comprising:
a machine-accessible medium including data that, when accessed by a machine, cause the machine to perform operations including:
receiving and processing image and audio data to capture one or more of a facial expression, a gesture and a recognized verbal command input from a user;
providing responses to the user from an agent based on the captured one or more of a facial expression, a gesture and a recognized verbal command;
influencing a behavior of the responses provided by the agent using personality models, each model containing a set of parameters comprised of personality traits, wherein each personality trait is assigned a score and the set of parameters represents a range of scores to create a personality profile to influence the behavior of responses provided by the agent;
writing to memory previously captured facial expressions, gestures, and recognized verbal commands and previously given agent responses;
providing past user input information, current user input information to the agent from which the agent learns; and
relaying noise to the agent to make the agent respond in a humanly unpredictable manner.

14. The article of manufacture of claim 13, wherein the machine-accessible medium further includes data that cause the machine to perform operations comprising:
searching for information on an internet to answer factual and subjective questions from the user.
15. The article of manufacture of claim 13, wherein the machine-accessible medium further includes data that cause the machine to perform operations comprising:
providing information to allow the agent to make responses with voice inflections and gestures.
Specification