Method and device for automatically playing expression on virtual image
First Claim
1. A method for automatically playing an expression on a virtual image, comprising steps of:
A: capturing, by a machine, a video image related to a game player in a game, and determining, by the machine, whether the captured video image contains a facial image of the game player, and if so, executing a step B; otherwise, returning to the step A;
B: extracting, by the machine, facial features from the facial image, and obtaining, by the machine, a motion track of each of the extracted facial features from motion directions of the extracted facial features that are sequentially acquired N times, wherein N is greater than or equal to 2; and
C: determining, by the machine, a corresponding facial expression according to the motion track of each facial feature obtained in step B, and automatically playing, by the machine, the determined facial expression on a virtual image of the game player;
and further comprising:
obtaining, by the machine, a set of correspondence relations between facial expressions and common facial motion tracks through pre-training;
wherein the step C comprises:
C1: searching the set of correspondence relations for a correspondence relation whose common facial motion tracks are related to the motion tracks of the extracted facial features, and if such a correspondence relation is found, executing a step C2;
C2: automatically playing the facial expression of the found correspondence relation on the virtual image of the game player;
wherein, in the step C1, the common facial motion tracks of the correspondence relation being related to the motion tracks of the extracted facial features comprises: a match degree between the motion track of each of the extracted facial features and a respective one of the common facial motion tracks being greater than or equal to a match value corresponding to that extracted facial feature.
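The A/B/C loop of the claim can be sketched in ordinary code. Everything below is an illustrative assumption: the dict-based frame format, the stubbed face detector, and the pre-filled direction tracks stand in for the real capture, feature-extraction, and rendering machinery the claim leaves unspecified.

```python
# Hypothetical sketch of the claimed A/B/C loop. All helpers
# (detect_face, extract_features, play_expression) are stand-ins,
# not from any real library; detection and rendering are stubbed.

N = 3  # number of sequential direction samples per feature (N >= 2)

# Pre-trained correspondence relations: common motion track -> expression.
CORRESPONDENCE = {
    ("up", "up", "up"): "surprised",
    ("down", "down", "down"): "frowning",
}

def detect_face(frame):
    """Step A: return the facial image if the frame contains one."""
    return frame.get("face")  # stub: frames are dicts in this sketch

def extract_features(face):
    """Step B (part 1): extract named facial features."""
    return face["features"]  # e.g. {"brow": ["up", "up", "up"]}

def play_expression(expression, avatar):
    """Step C (part 2): 'render' the expression on the virtual image."""
    avatar.append(expression)

def run(frames, avatar):
    for frame in frames:
        face = detect_face(frame)          # step A
        if face is None:
            continue                       # no face: return to step A
        for feature, directions in extract_features(face).items():  # step B
            track = tuple(directions[:N])  # motion track from N samples
            expression = CORRESPONDENCE.get(track)   # step C1
            if expression is not None:
                play_expression(expression, avatar)  # step C2

avatar = []
run([{"face": None}, {"face": {"features": {"brow": ["up", "up", "up"]}}}],
    avatar)
# avatar now holds the expression matched from the brow's motion track
```

The first frame exercises the "otherwise, return to step A" branch; the second carries a face whose track matches a pre-trained relation.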
Abstract
Provided are a method and device for automatically playing an expression on a virtual image. The method includes the steps of: A, capturing a video image related to a game player in a game, and determining whether the captured video image contains a facial image of the game player, and if so, executing a step B; otherwise, returning to the step A; B, extracting facial features from the facial image, and obtaining a motion track of each of the facial features from motion directions of the extracted facial feature that are sequentially acquired N times, where N is greater than or equal to 2; and C, determining a corresponding facial expression according to the motion track of the facial feature obtained in step B, and automatically playing the determined facial expression on a virtual image of the game player.
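The abstract's notion of a motion track built from sequentially acquired direction samples can be illustrated as follows. The position format and the dominant-axis direction rule are assumptions for illustration only; the patent does not define how a direction is computed from successive samples.

```python
# Illustrative sketch (not from the patent text): deriving a motion
# track for one facial feature from sequentially sampled positions.

def direction(p0, p1):
    """Classify the dominant motion direction between two samples."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    if abs(dx) >= abs(dy):
        return "right" if dx >= 0 else "left"
    return "down" if dy >= 0 else "up"   # image y grows downward

def motion_track(positions):
    """N positions yield N-1 directions; the claim requires N >= 2 samples."""
    assert len(positions) >= 2
    return [direction(a, b) for a, b in zip(positions, positions[1:])]

# Three samples of a mouth-corner feature moving upward on screen:
track = motion_track([(10, 20), (10, 17), (11, 13)])
# → ["up", "up"]
```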
Citations
7 Claims
1. A method for automatically playing an expression on a virtual image, comprising steps of:
A: capturing, by a machine, a video image related to a game player in a game, and determining, by the machine, whether the captured video image contains a facial image of the game player, and if so, executing a step B; otherwise, returning to the step A;
B: extracting, by the machine, facial features from the facial image, and obtaining, by the machine, a motion track of each of the extracted facial features from motion directions of the extracted facial features that are sequentially acquired N times, wherein N is greater than or equal to 2; and
C: determining, by the machine, a corresponding facial expression according to the motion track of each facial feature obtained in step B, and automatically playing, by the machine, the determined facial expression on a virtual image of the game player;
and further comprising:
obtaining, by the machine, a set of correspondence relations between facial expressions and common facial motion tracks through pre-training;
wherein the step C comprises:
C1: searching the set of correspondence relations for a correspondence relation whose common facial motion tracks are related to the motion tracks of the extracted facial features, and if such a correspondence relation is found, executing a step C2;
C2: automatically playing the facial expression of the found correspondence relation on the virtual image of the game player;
wherein, in the step C1, the common facial motion tracks of the correspondence relation being related to the motion tracks of the extracted facial features comprises: a match degree between the motion track of each of the extracted facial features and a respective one of the common facial motion tracks being greater than or equal to a match value corresponding to that extracted facial feature. - View Dependent Claims (2, 3, 7)
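The C1 matching test can be sketched as a per-feature threshold check. The definition of match degree below (fraction of agreeing direction samples) is an assumption, since the claim only requires that each feature's degree meet its own match value.

```python
# Minimal sketch of the C1 test: every extracted feature's track must
# match its common track with a degree >= that feature's match value.
# The match-degree metric here is an illustrative assumption.

def match_degree(track, common):
    """Fraction of positions where the two direction tracks agree."""
    agree = sum(a == b for a, b in zip(track, common))
    return agree / max(len(common), 1)

def relation_matches(extracted, common_tracks, match_values):
    """True if every feature meets its own match-value threshold."""
    return all(
        match_degree(extracted[f], common_tracks[f]) >= match_values[f]
        for f in common_tracks
    )

extracted = {"brow": ["up", "up", "down"], "mouth": ["up", "up"]}
common = {"brow": ["up", "up", "up"], "mouth": ["up", "up"]}
thresholds = {"brow": 0.6, "mouth": 1.0}
relation_matches(extracted, common, thresholds)  # → True (2/3 and 2/2)
```

Per-feature match values let a noisy feature (the brow, here) tolerate a lower agreement than a reliable one.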
4. A device for automatically playing an expression on a virtual image, comprising:
- a capture unit, a track determination unit and an expression play unit, wherein,
the capture unit is configured to capture a video image related to a game player in a game, and determine whether the captured video image contains a facial image of the game player, and if so, send a notification to the track determination unit; otherwise, continue to capture the video image related to the game player in the game;
the track determination unit is configured to extract facial features from the facial image after receiving the notification sent by the capture unit, and obtain a motion track of each of the extracted facial features from motion directions of the extracted facial feature that are sequentially acquired N times, where N is greater than or equal to 2; and
the expression play unit is configured to determine a corresponding facial expression according to the motion track of the facial feature obtained by the track determination unit, and automatically play the determined facial expression on the virtual image of the game player;
wherein the expression play unit comprises:
a match module, configured to search a set of correspondence relations for a correspondence relation whose common facial motion tracks are related to the motion tracks of the extracted facial features, and if such a correspondence relation is found, send a notification to an expression play module, wherein the set of correspondence relations is obtained through pre-training; and
the expression play module, configured to play the facial expression of the found correspondence relation on the virtual image of the game player;
wherein the common facial motion tracks of the correspondence relation being related to the motion tracks of the extracted facial features comprises: a match degree between the motion track of each of the extracted facial features and a respective one of the common facial motion tracks being greater than or equal to a match value corresponding to that extracted facial feature. - View Dependent Claims (5, 6)
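Claim 4's three units can be sketched as cooperating objects. The inter-unit notification is modeled as a direct method call, and detection, tracking, and rendering are stubbed; all class and method names are assumptions for illustration.

```python
# Hypothetical sketch of claim 4's device: a capture unit, a track
# determination unit, and an expression play unit wired together.

class ExpressionPlayUnit:
    CORRESPONDENCE = {("up", "up"): "smiling"}  # pre-trained relations

    def __init__(self, avatar):
        self.avatar = avatar

    def play(self, tracks):
        # Search the correspondence relations for each received track.
        for track in tracks:
            expr = self.CORRESPONDENCE.get(tuple(track))
            if expr is not None:
                self.avatar.append(expr)  # "render" on the virtual image

class TrackDeterminationUnit:
    def __init__(self, play_unit):
        self.play_unit = play_unit

    def notify(self, face):
        # In this sketch, motion tracks are carried by the stub frames.
        self.play_unit.play(face["tracks"])

class CaptureUnit:
    def __init__(self, track_unit):
        self.track_unit = track_unit

    def on_frame(self, frame):
        face = frame.get("face")          # stub face detection
        if face is not None:
            self.track_unit.notify(face)  # notify track determination unit
        # otherwise: keep capturing (caller feeds the next frame)

avatar = []
device = CaptureUnit(TrackDeterminationUnit(ExpressionPlayUnit(avatar)))
device.on_frame({"face": None})                          # no face: recapture
device.on_frame({"face": {"tracks": [["up", "up"]]}})    # matched track
# avatar now contains the expression played on the virtual image
```

Routing notifications through method calls keeps the sketch single-threaded; a real device would likely decouple capture and playback.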
Specification