Information Processing Device Capable of Displaying a Character Representing a User, and Information Processing Method Thereof
Abstract
The basic image specifying unit specifies the basic image of a character representing a user of the information processing device. The facial expression parameter generating unit converts the degree of the facial expression of the user to a numerical value. The model control unit determines an output model of the character for respective points of time. The moving image parameter generating unit generates a moving image parameter for generating animated moving image frames of the character for respective points of time. The command specifying unit specifies a command corresponding to the pattern of the facial expression of the user. The playback unit outputs an image based on the moving image parameter and the voice data received from the information processing device of another user. The command executing unit executes a command based on the identification information of the command.
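The command specifying and executing units described in the abstract can be illustrated with a minimal sketch. This is not the patented implementation; every pattern name and command identifier below is a hypothetical example.

```python
# Illustrative sketch: map a detected facial expression pattern to a
# command identifier, then execute it. All patterns and command names
# here are assumptions, not part of the patent.

COMMAND_TABLE = {
    ("wink", "wink"): "take_screenshot",  # e.g. two winks in a row
    ("smile",): "send_emoji",
}

COMMANDS = {
    "take_screenshot": lambda: "screenshot saved",
    "send_emoji": lambda: "emoji sent",
}

def specify_command(expression_pattern):
    """Return the command id matching the observed pattern, if any."""
    return COMMAND_TABLE.get(tuple(expression_pattern))

def execute_command(command_id):
    """Run the handler registered for the command id, if one exists."""
    handler = COMMANDS.get(command_id)
    return handler() if handler else None

print(execute_command(specify_command(["smile"])))  # -> emoji sent
```

A lookup table keyed by the expression pattern keeps the pattern-to-command mapping data-driven, which matches the abstract's separation of a specifying unit from an executing unit.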
Claims
1. An information processing device comprising:

an image data storage operative to store data on models of a character representing a user, the models including a plurality of facial expression models providing different facial expressions;

a facial expression parameter generating unit operative to calculate a degree of facial expression for each facial expression type as a facial expression parameter by sequentially analyzing input moving image data acquired by capturing an image of the user, by deriving a numerical value representing the shape of a portion of the face for each input image frame, and by comparing the numerical value with a criteria value defined in advance;

a model control unit operative to first determine a weight for each of the plurality of facial expression models stored in the image data storage by using the facial expression parameter calculated by the facial expression parameter generating unit and a volume level obtained from voice data of the user acquired at the same time as the capturing of the image, to synthesize the plurality of facial expression models according to the weights, and to determine an output model of the character for points of time corresponding to each of the input image frames;

a moving image parameter generating unit operative to generate a moving image parameter for generating animated moving image frames of the character including the output model determined by the model control unit for respective points of time; and

an output unit operative to synchronize the moving image parameter generated by the moving image parameter generating unit with the voice data and to output them sequentially.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11.
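The facial expression parameter of claim 1 — a per-frame shape measurement compared against a criteria value defined in advance — can be sketched as follows. The measurement semantics and criteria values are assumptions for illustration only.

```python
# Illustrative sketch of the claimed facial expression parameter:
# a numerical value derived from the shape of a portion of the face
# (e.g. mouth-corner lift) is compared with preset criteria values
# to yield a degree of expression. The criteria values are assumptions.

def expression_parameter(measured, neutral, full):
    """Map a per-frame shape measurement to a degree in [0, 1].

    measured: value derived from the current input image frame
    neutral:  criteria value defined in advance for the neutral face
    full:     criteria value for the fully formed expression
    """
    if full == neutral:
        return 0.0
    degree = (measured - neutral) / (full - neutral)
    return max(0.0, min(1.0, degree))  # clamp to the valid range

# Example: a "smile" measurement of 7.0 between neutral (5.0) and full (10.0)
print(expression_parameter(7.0, 5.0, 10.0))  # -> 0.4
```

One such parameter would be computed per facial expression type per input frame, giving the model control unit one degree value per expression model.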
12. An information processing method comprising:
calculating a degree of facial expression for a plurality of facial expression types as a facial expression parameter by sequentially analyzing input moving image data acquired by capturing an image of a user, by deriving a numerical value representing the shape of a portion of the face for a plurality of input image frames, and by comparing the numerical value with a criteria value defined in advance;

determining a weight for each of a plurality of models of a character representing the user, the models including a plurality of facial expression models providing different facial expressions stored in a memory, by using the calculated facial expression parameter and a volume level obtained from voice data of the user acquired at the same time as the capturing of the image;

reading data of the plurality of facial expression models from the memory;

synthesizing the plurality of facial expression models while weighting them with the weights, and determining an output model of the character for points of time corresponding to each of the plurality of input image frames;

generating a moving image parameter for generating animated moving image frames of the character including the output model for respective points of time; and

synchronizing the moving image parameter with the voice data, and outputting them sequentially.
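The weight-determination and synthesis steps of claim 12 amount to a weighted blend of expression models, with the voice volume driving one of the weights. A minimal sketch, under assumed vertex-list models and an assumed volume-to-mouth rule:

```python
# Illustrative sketch: blend facial expression models ("blend shapes")
# using weights derived from the expression parameters and the volume
# level of simultaneously captured voice data. The model data and the
# mouth-open-from-volume rule are assumptions, not the patented method.

def determine_weights(params, volume_level, volume_max=100.0):
    """params: {expression_type: degree in [0, 1]} from image analysis.
    The mouth-open weight is driven by the simultaneous voice volume."""
    weights = dict(params)
    weights["mouth_open"] = min(1.0, volume_level / volume_max)
    return weights

def synthesize(neutral, expression_models, weights):
    """Weighted sum of per-vertex offsets applied over the neutral model."""
    out = list(neutral)
    for name, model in expression_models.items():
        w = weights.get(name, 0.0)
        for i, (v, n) in enumerate(zip(model, neutral)):
            out[i] += w * (v - n)
    return out

neutral = [0.0, 0.0, 0.0]
models = {"smile": [1.0, 0.0, 0.0], "mouth_open": [0.0, 2.0, 0.0]}
w = determine_weights({"smile": 0.5}, volume_level=50.0)
print(synthesize(neutral, models, w))  # -> [0.5, 1.0, 0.0]
```

Deriving the mouth shape from the volume level rather than from image analysis is one plausible reading of why the claim combines the two signals: the voice track gives a cheap, well-synchronized cue for lip motion.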
13. A computer program embedded on a non-transitory computer-readable recording medium, comprising:
a module configured to calculate a degree of facial expression for a plurality of facial expression types as a facial expression parameter by sequentially analyzing input moving image data acquired by capturing an image of a user, by deriving a numerical value representing the shape of a portion of the face for a plurality of input image frames, and by comparing the numerical value with a criteria value defined in advance;

a module configured to determine a weight for a plurality of models of a character representing the user, the models including a plurality of facial expression models providing different facial expressions stored in a memory, by using the calculated facial expression parameter and a volume level obtained from voice data of the user acquired at the same time as the capturing of the image;

a module configured to read data of the plurality of facial expression models from the memory;

a module configured to synthesize the plurality of facial expression models while weighting them with the weights, and to determine an output model of the character for points of time corresponding to each of the plurality of input image frames;

a module configured to generate a moving image parameter for generating animated moving image frames of the character including the output model for respective points of time; and

a module configured to synchronize the moving image parameter with the voice data, and to output them sequentially.
14. A computer readable medium encoded with a program comprising:
a module configured to calculate a degree of facial expression for a plurality of facial expression types as a facial expression parameter by sequentially analyzing input moving image data acquired by capturing an image of a user, by deriving a numerical value representing the shape of a portion of the face for a plurality of input image frames, and by comparing the numerical value with a criteria value defined in advance;

a module configured to determine a weight for a plurality of models of a character representing the user, the models including a plurality of facial expression models providing different facial expressions stored in a memory, by using the calculated facial expression parameter and a volume level obtained from voice data of the user acquired at the same time as the capturing of the image;

a module configured to read data of the plurality of facial expression models from the memory;

a module configured to synthesize the plurality of facial expression models while weighting them with the weights, and to determine an output model of the character for points of time corresponding to each of the plurality of input image frames;

a module configured to generate a moving image parameter for generating animated moving image frames of the character including the output model for respective points of time; and

a module configured to synchronize the moving image parameter with the voice data, and to output them sequentially.
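The final synchronization module shared by claims 1, 12, 13, and 14 pairs each frame's moving image parameter with the voice data covering the same time span before sequential output. A minimal sketch, with an assumed frame rate and assumed timestamped voice chunks:

```python
# Illustrative sketch: synchronize per-frame moving image parameters
# with timestamped voice data and emit them in order. The frame rate
# and the data representations are assumptions for illustration.

def synchronized_stream(frame_params, voice_chunks, fps=30):
    """frame_params: per-frame animation parameters, in frame order.
    voice_chunks: (timestamp_sec, chunk) pairs, sorted by timestamp.
    Returns (time, parameter, voice_chunks_for_that_frame) tuples."""
    out = []
    v = 0
    for i, param in enumerate(frame_params):
        t = i / fps
        chunk = []
        # attach every voice chunk due at or before this frame's time
        while v < len(voice_chunks) and voice_chunks[v][0] <= t:
            chunk.append(voice_chunks[v][1])
            v += 1
        out.append((t, param, chunk))
    return out

frames = ["p0", "p1", "p2"]
voice = [(0.0, "a"), (0.05, "b")]
for t, p, c in synchronized_stream(frames, voice):
    print(t, p, c)
```

Emitting parameters rather than rendered frames, as the claims do, lets the receiving device render the character at whatever quality it can afford while keeping the voice track in lockstep.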
Specification