EMOTION RECOGNITION IN VIDEO CONFERENCING
First Claim
1. A computer-implemented method for video conferencing, the method comprising:
receiving a video including a sequence of images;
detecting at least one object of interest in one or more of the images;
locating feature reference points of the at least one object of interest;
aligning a virtual face mesh to the at least one object of interest in one or more of the images based at least in part on the feature reference points;
finding over the sequence of images at least one deformation of the virtual face mesh, wherein the at least one deformation is associated with at least one face mimic;
determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions; and
generating a communication bearing data associated with the facial emotion.
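The final claim steps map a mesh deformation to one of a plurality of reference facial emotions and emit a communication bearing that emotion. A minimal sketch of that idea follows; the toy deformation vectors, the `REFERENCE_EMOTIONS` table, and both function names are assumptions made for illustration, not the patented implementation (a real system would fit a 3-D face mesh and use a trained classifier):

```python
# Illustrative sketch only: classify an observed mesh deformation by
# nearest-neighbor comparison against reference emotion deformations.
from math import dist

# Hypothetical reference deformations, expressed as small vectors of
# mesh-vertex displacements (toy representation, assumed for this sketch).
REFERENCE_EMOTIONS = {
    "neutral": (0.0, 0.0, 0.0),
    "anger":   (0.8, -0.4, 0.2),
    "joy":     (-0.3, 0.9, 0.1),
}

def classify_deformation(deformation):
    """Pick the reference emotion whose deformation vector is nearest
    (by Euclidean distance) to the observed mesh deformation."""
    return min(REFERENCE_EMOTIONS,
               key=lambda e: dist(REFERENCE_EMOTIONS[e], deformation))

def generate_communication(emotion):
    """Produce a communication bearing data associated with the recognized
    emotion, e.g. to alert a supervisor about a distressed customer."""
    return {"event": "emotion_detected", "emotion": emotion}
```

A deformation close to the "anger" reference would thus yield an alert message carrying that label.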
Abstract
Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding over the sequence of images at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
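The per-frame flow described above (detect the object of interest, locate feature reference points, align a mesh, then track its deformation over the sequence) can be sketched as follows. Every helper below is a hypothetical stand-in assumed for illustration; a real system would use a video decoder, a trained face detector, a landmark locator, and a deformable 3-D mesh model:

```python
# Hedged, toy sketch of the abstract's flow. An "image" here is a dict, the
# "object of interest" is its "face" entry, and a "mesh" is simply the list
# of 2-D landmark points (all assumptions for this sketch).

def detect_face(image):
    # Detect the object of interest (a face) in one image, if present.
    return image.get("face")

def locate_reference_points(face):
    # Feature reference points (e.g., eye and mouth corners) as 2-D tuples.
    return face["landmarks"]

def align_mesh(points):
    # Stand-in alignment: the "virtual face mesh" is the landmark list itself.
    return list(points)

def mesh_deformation(mesh_a, mesh_b):
    # Per-vertex displacement between the mesh in an earlier frame and a
    # later frame; a nonzero displacement reflects a face mimic.
    return [(bx - ax, by - ay) for (ax, ay), (bx, by) in zip(mesh_a, mesh_b)]

def process_video(images):
    # Align a mesh in each frame containing a face, then measure the
    # deformation between the first and last aligned meshes.
    meshes = [align_mesh(locate_reference_points(detect_face(img)))
              for img in images if detect_face(img)]
    return mesh_deformation(meshes[0], meshes[-1])
```

The resulting displacement vector is what a downstream step would compare against reference facial emotions.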