Emotion recognition for workforce analytics
Abstract
Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
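The abstract describes a pipeline: track a face mesh across frames, measure its deformation, match the deformation against reference emotions, and derive a work quality parameter. A minimal, self-contained sketch of that flow is below; the reference-emotion vectors, the nearest-neighbour classifier, and the quality score are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the claimed pipeline. Reference emotions are
# modeled here as small mesh-deformation vectors (e.g. displacement
# magnitudes of a few tracked mesh vertices); values are made up.
REFERENCE_EMOTIONS = {
    "neutral":   (0.0, 0.0, 0.0),
    "happiness": (0.8, 0.2, 0.1),
    "anger":     (0.1, 0.9, 0.4),
}

def mesh_deformation(prev_mesh, curr_mesh):
    """Per-vertex displacement between two aligned face-mesh snapshots."""
    return tuple(c - p for p, c in zip(prev_mesh, curr_mesh))

def classify_emotion(deformation):
    """Match a deformation to the nearest reference facial emotion."""
    def dist(ref):
        return sum((d - r) ** 2 for d, r in zip(deformation, ref))
    return min(REFERENCE_EMOTIONS, key=lambda name: dist(REFERENCE_EMOTIONS[name]))

def work_quality_parameter(emotions):
    """Toy work-quality score: share of frames with a non-negative emotion."""
    positive = {"neutral", "happiness"}
    return sum(e in positive for e in emotions) / len(emotions)

# Simulated per-frame mesh snapshots for one detected individual.
frames = [(0.0, 0.0, 0.0), (0.7, 0.2, 0.1), (0.8, 0.3, 0.1)]
emotions = [classify_emotion(mesh_deformation(frames[0], f)) for f in frames[1:]]
score = work_quality_parameter(emotions)
print(emotions, score)  # ['happiness', 'happiness'] 1.0
```

In practice the mesh snapshots would come from a face-landmark tracker run on the video stream; the sketch replaces that stage with hard-coded vectors to stay self-contained.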
26 Claims
1. A computer-implemented method for workforce analytics, the method comprising:
receiving a video stream including a sequence of frames;
detecting an individual in one or more of the frames;
locating feature reference points of the individual;
aligning a virtual face mesh to the individual in one or more of the frames based at least in part on the feature reference points;
dynamically determining over the sequence of frames at least one deformation of the virtual face mesh;
determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions;
identifying an emotional status for the individual, the emotional status determined based on the at least one facial emotion of the individual within the video stream; and
generating at least one work quality parameter associated with the individual based on the at least one facial emotion and the emotional status.
(Dependent claims: 2-18.)
19. A system, comprising:
a computing device including at least one processor and a memory storing processor-executable code which, when executed by the at least one processor, causes the at least one processor to perform operations comprising:
receiving a video stream including a sequence of frames;
detecting an individual in one or more of the frames;
locating feature reference points of the individual;
aligning a virtual face mesh to the individual in one or more of the frames based at least in part on the feature reference points;
dynamically determining over the sequence of frames at least one deformation of the virtual face mesh;
determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions;
identifying an emotional status for the individual, the emotional status determined based on the at least one facial emotion of the individual within the video stream; and
generating at least one work quality parameter associated with the individual based on the at least one facial emotion and the emotional status.
(Dependent claims: 20-22.)
23. A non-transitory processor-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to perform operations, comprising:
receiving a video stream including a sequence of frames;
detecting an individual in one or more of the frames;
locating feature reference points of the individual;
aligning a virtual face mesh to the individual in one or more of the frames based at least in part on the feature reference points;
dynamically determining at least one deformation of the virtual face mesh;
determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions;
identifying an emotional status for the individual, the emotional status determined based on the at least one facial emotion of the individual within the video stream; and
generating at least one work quality parameter associated with the individual based on the at least one facial emotion and the emotional status.
(Dependent claims: 24-26.)