Training a predictor of emotional response based on explicit voting on content and eye tracking to verify attention
2 Assignments
0 Petitions
Abstract
Utilizing eye tracking to collect naturally expressed affective responses for training an emotional response predictor, comprising: receiving a vote of a user on a segment of content consumed by the user; receiving eye tracking data of the user taken while the user consumed the segment of content; determining, based on the eye tracking data, that a gaze-based attention level to the segment reaches a predetermined threshold; utilizing the vote to generate a label related to an emotional response to the segment; receiving an affective response measurement of the user taken substantially while the user consumed the segment of content; and training a measurement emotional response predictor with the label and the affective response measurement.
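The pipeline described in the abstract can be sketched in a few lines of Python. This is a minimal illustration only: the function names, the fixation-based attention measure, the threshold value, and the like/dislike label encoding are all assumptions, not details taken from the patent.

```python
# Sketch of the abstract's pipeline: a vote becomes a training label only
# when eye tracking confirms the user actually attended to the segment.

ATTENTION_THRESHOLD = 0.6  # assumed value; the patent leaves the threshold unspecified

def gaze_attention_level(eye_tracking_data, segment_duration):
    """Fraction of the segment's duration that gaze fixations spent on the segment."""
    time_on_segment = sum(fixation["duration"] for fixation in eye_tracking_data
                          if fixation["on_segment"])
    return time_on_segment / segment_duration

def make_training_sample(vote, eye_tracking_data, segment_duration, measurement):
    """Return a (measurement, label) pair if attention reaches the threshold, else None."""
    if gaze_attention_level(eye_tracking_data, segment_duration) < ATTENTION_THRESHOLD:
        return None  # user did not attend; the vote is an unreliable label
    label = 1 if vote == "like" else 0  # vote -> emotional-response label
    return (measurement, label)
```

A real implementation would feed the surviving (measurement, label) pairs to whatever model family the "measurement emotional response predictor" uses; the claims do not commit to one.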
20 Claims
1. A system configured to utilize eye tracking for collecting a naturally expressed affective response for training an emotional response predictor, comprising:
a memory storing computer executable modules; and
a processor configured to execute the computer executable modules;
the computer executable modules comprising:
a label generator configured to receive first and second votes of a user on first and second segments of content consumed by the user, respectively;
the label generator is further configured to utilize the first and second votes to generate first and second labels related to first and second emotional responses to the first and second segments, respectively;
wherein the first and second votes are generated via a voting mechanism that comprises one or more of the following:
a like voting mechanism, a dislike voting mechanism, a star rating mechanism, a numerical rating mechanism, an up voting mechanism, a down voting mechanism, and a ranking mechanism;
a gaze analyzer configured to receive first and second eye tracking data of the user acquired while the user consumed the first and second segments, and to make a first determination that a first gaze-based attention level to the first segment reaches a first predetermined threshold, and to make a second determination that a second gaze-based attention level to the second segment does not reach a second predetermined threshold; and
a sample generator configured to utilize the first and second determinations to assign, for the purpose of training a measurement emotional response predictor, a higher weight to a first sample comprising the first label and a first affective response measurement of the user, than to a second sample comprising the second label and a second affective response measurement of the user;
wherein the first measurement was taken by a sensor while the user consumed the first segment, and the second measurement was taken by the sensor while the user consumed the second segment.
Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9
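Claim 1's sample generator does not discard the low-attention sample; it keeps both and gives the attended one a higher training weight. A short sketch of that weighting scheme, where the concrete weight values and the tuple layout are assumptions:

```python
# Claim 1's sample generator: both samples are kept, but the sample whose
# gaze-based attention level reached its threshold gets the higher weight.

HIGH_WEIGHT = 1.0   # assumed weight for attended samples
LOW_WEIGHT = 0.25   # assumed weight for unattended samples

def weight_samples(samples):
    """samples: list of (label, measurement, attention_reached_threshold) tuples.
    Returns (label, measurement, weight) tuples for a weighted trainer."""
    weighted = []
    for label, measurement, attended in samples:
        weight = HIGH_WEIGHT if attended else LOW_WEIGHT
        weighted.append((label, measurement, weight))
    return weighted
```

A weighted trainer (for example, any estimator that accepts per-sample weights, such as scikit-learn's `sample_weight` argument to `fit`) would then consume these tuples.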
10. A method for utilizing eye tracking for collecting a naturally expressed affective response for training an emotional response predictor, comprising:
receiving first and second votes of a user on first and second segments of content consumed by the user;
wherein the first and second votes are generated via a voting mechanism that comprises one or more of the following:
a like voting mechanism, a dislike voting mechanism, a star rating mechanism, a numerical rating mechanism, an up voting mechanism, a down voting mechanism, and a ranking mechanism;
utilizing the first and second votes to generate first and second labels related to first and second emotional responses to the first and second segments, respectively;
receiving first and second eye tracking data of the user acquired while the user consumed the first and second segments, respectively;
making a first determination, based on the first eye tracking data, that a first gaze-based attention level to the first segment reaches a first predetermined threshold;
making a second determination, based on the second eye tracking data, that a second gaze-based attention level to the second segment does not reach a second predetermined threshold; and
utilizing the first and second determinations to assign, for the purpose of training a measurement emotional response predictor, a higher weight to a first sample comprising the first label and a first affective response measurement of the user, than to a second sample comprising the second label and a second affective response measurement of the user;
wherein the first measurement was taken by a sensor while the user consumed the first segment, and the second measurement was taken by the sensor while the user consumed the second segment.
Dependent Claims: 11, 12, 13, 14, 15, 16, 17
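The method of claim 10 culminates in training a measurement emotional response predictor on the weighted samples. The claims do not commit to any model family, so the following toy weighted nearest-centroid predictor is purely an assumed stand-in, chosen only to show how per-sample weights enter training:

```python
# Toy "measurement emotional response predictor": weighted class centroids
# over affective response measurements, with per-sample weights taken from
# the gaze-based attention determinations of claim 10.

def train_predictor(samples):
    """samples: list of (measurement: float, label: int, weight: float) tuples.
    Returns a predict(measurement) -> label function."""
    sums, weights = {}, {}
    for measurement, label, weight in samples:
        sums[label] = sums.get(label, 0.0) + weight * measurement
        weights[label] = weights.get(label, 0.0) + weight
    # weighted mean measurement per label
    centroids = {label: sums[label] / weights[label] for label in sums}

    def predict(measurement):
        # the nearest weighted centroid's label wins
        return min(centroids, key=lambda label: abs(measurement - centroids[label]))
    return predict
```

With this scheme, a low-attention sample still influences its class centroid, but proportionally less than an attended one.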
18. A system configured to utilize eye tracking to train an emotional response predictor, comprising:
a memory storing computer executable modules; and
a processor configured to execute the computer executable modules;
the computer executable modules comprising:
a label generator configured to receive a vote of a user on a segment of content consumed by the user on a social network;
wherein the vote is generated via a voting mechanism belonging to the social network; and
wherein the voting mechanism comprises one or more of the following:
a like voting mechanism, a dislike voting mechanism, a star rating mechanism, a numerical rating mechanism, an up voting mechanism, a down voting mechanism, and a ranking mechanism;
the label generator is further configured to utilize the vote to generate a label related to an emotional response to the segment;
a gaze analyzer configured to receive eye tracking data of the user taken while the user consumed the segment, and to determine, based on the eye tracking data, whether a gaze-based attention level to the segment reaches a predetermined threshold, and to indicate thereof to a training module; and
the training module is configured to receive the label and an affective response measurement of the user taken while the user consumed the segment, and to train a measurement emotional response predictor with the measurement and the label;
wherein the measurement is taken by a sensor coupled to the user.
Dependent Claims: 19, 20
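Claim 18 differs from claims 1 and 10 in that the gaze analyzer's threshold determination is indicated to the training module, which can then gate whether a sample is trained on at all, rather than down-weighting it. A small sketch of that indication flow; the class names, method names, and threshold value are all assumptions:

```python
# Claim 18's flow: the gaze analyzer indicates its threshold determination
# to the training module, which trains only on attended samples.

class GazeAnalyzer:
    def __init__(self, threshold=0.6):  # assumed threshold value
        self.threshold = threshold

    def attention_reached(self, attention_level):
        """The 'indication' passed along to the training module."""
        return attention_level >= self.threshold

class TrainingModule:
    def __init__(self):
        self.training_set = []  # accumulated (measurement, label) pairs

    def receive(self, measurement, label, attended):
        if attended:  # keep only samples the user actually attended to
            self.training_set.append((measurement, label))

analyzer = GazeAnalyzer()
module = TrainingModule()
# attended sample (attention 0.9) is kept; unattended one (0.3) is dropped
module.receive(measurement=0.7, label=1, attended=analyzer.attention_reached(0.9))
module.receive(measurement=0.4, label=0, attended=analyzer.attention_reached(0.3))
```

Here the vote-derived label and the sensor measurement reach the training module together, and the gaze analyzer's indication decides whether the pair joins the training set.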
Specification