EMOTION RECOGNITION FOR WORKFORCE ANALYTICS
First Claim
1. A computer-implemented method for workforce analytics, the method comprising:
receiving a video including a sequence of images;
detecting an individual in one or more of the images;
locating feature reference points of the individual;
aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points;
dynamically determining over the sequence of images at least one deformation of the virtual face mesh;
determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions; and
generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
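The claimed pipeline ends with two steps that can be illustrated concretely: matching an observed mesh deformation against a plurality of reference facial emotions, and deriving a work quality parameter from the classified emotions. The sketch below is purely illustrative; the reference deformation vectors, the nearest-neighbor matching, and the mapping of emotions to a "positive" quality score are assumptions for demonstration, not details from the patent.

```python
import math

# Hypothetical reference deformations: each reference facial emotion maps to
# a small vector of mesh-vertex displacements (illustrative 4-D vectors only).
REFERENCE_EMOTIONS = {
    "neutral":   [0.0, 0.0, 0.0, 0.0],
    "happiness": [0.8, 0.1, 0.0, 0.2],
    "anger":     [-0.5, 0.7, 0.3, 0.0],
}

def classify_deformation(deformation):
    """Pick the reference emotion whose deformation vector is closest
    (Euclidean distance) to the observed mesh deformation."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_EMOTIONS,
               key=lambda e: dist(REFERENCE_EMOTIONS[e], deformation))

def work_quality_parameter(emotions):
    """Toy work quality parameter: fraction of frames classified as a
    'positive' emotion (an assumed mapping, not taken from the claim)."""
    positive = {"neutral", "happiness"}
    return sum(1 for e in emotions if e in positive) / len(emotions)

# Simulated per-frame mesh deformations over a sequence of images.
frames = [
    [0.7, 0.15, 0.05, 0.1],   # close to "happiness"
    [0.0, 0.05, 0.0, 0.0],    # close to "neutral"
    [-0.4, 0.6, 0.25, 0.1],   # close to "anger"
]
emotions = [classify_deformation(d) for d in frames]
quality = work_quality_parameter(emotions)
```

In this toy run the three frames classify as happiness, neutral, and anger, giving a quality parameter of 2/3; a real system would use a trained model and far higher-dimensional mesh deformations.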
4 Assignments
0 Petitions
Abstract
Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
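One step common to the abstract and every claim is aligning a virtual face mesh to the individual based on detected feature reference points. A standard way to do this (an assumption here; the patent does not prescribe a method) is a least-squares similarity transform that maps template landmark positions onto the detected points. A minimal 2-D sketch using complex arithmetic:

```python
def align_mesh(template_pts, detected_pts):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping template landmark points onto detected feature reference points.
    Points are (x, y) tuples; complex numbers give a closed-form solution."""
    p = [complex(x, y) for x, y in template_pts]
    q = [complex(x, y) for x, y in detected_pts]
    mp = sum(p) / len(p)
    mq = sum(q) / len(q)
    p0 = [z - mp for z in p]
    q0 = [z - mq for z in q]
    # Complex least squares: q0 ~= c * p0, where c encodes rotation + scale.
    c = (sum(a.conjugate() * b for a, b in zip(p0, q0))
         / sum(abs(a) ** 2 for a in p0))
    d = mq - c * mp

    def transform(pt):
        z = c * complex(*pt) + d
        return (z.real, z.imag)
    return transform

template = [(0, 0), (1, 0), (0, 1), (1, 1)]
# Detected points: the template rotated 90 degrees CCW, scaled by 2,
# then shifted by (1, 1).
detected = [(1, 1), (1, 3), (-1, 1), (-1, 3)]
transform = align_mesh(template, detected)
```

Once the transform is found, every vertex of the virtual face mesh can be mapped into image coordinates; tracking how the detected points move between frames then yields the mesh deformation used for emotion classification.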
23 Claims
1. A computer-implemented method for workforce analytics, the method comprising:
receiving a video including a sequence of images;
detecting an individual in one or more of the images;
locating feature reference points of the individual;
aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points;
dynamically determining over the sequence of images at least one deformation of the virtual face mesh;
determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions; and
generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
Dependent claims: 2-21
22. A system, comprising:
a computing device including at least one processor and a memory storing processor-executable codes, which, when implemented by the at least one processor, cause the at least one processor to perform the steps of:
receiving a video including a sequence of images;
detecting an individual in one or more of the images;
locating feature reference points of the individual;
aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points;
dynamically determining over the sequence of images at least one deformation of the virtual face mesh;
determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions; and
generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
23. A non-transitory processor-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to implement a method, comprising:
receiving a video including a sequence of images;
detecting an individual in one or more of the images;
locating feature reference points of the individual;
aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points;
dynamically determining at least one deformation of the virtual face mesh;
determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions; and
generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.