Annotating audio-visual data
First Claim
1. A method of annotating audio-visual data comprising:
- detecting a plurality of facial expressions in an audience based on a stimulus;
- determining an emotional response to the stimulus based on the facial expressions; and
- generating at least one annotation of the stimulus based on the determined emotional response.
Abstract
A method of annotating audio-visual data is disclosed. The method includes detecting a plurality of facial expressions in an audience based on a stimulus, determining an emotional response to the stimulus based on the facial expressions and generating at least one annotation of the stimulus based on the determined emotional response.
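The three claimed steps can be sketched as a small pipeline, assuming a toy expression-to-emotion mapping and a majority-vote aggregation; the patent specifies neither a particular classifier nor an emotion taxonomy, so every name and label below is an illustrative assumption.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical mapping from detected facial expressions to emotion labels;
# the patent does not prescribe a specific taxonomy.
EXPRESSION_TO_EMOTION = {
    "smile": "amusement",
    "laugh": "amusement",
    "frown": "displeasure",
    "gasp": "surprise",
    "neutral": "neutral",
}

@dataclass
class Annotation:
    timestamp: float   # position in the audio-visual stimulus (seconds)
    emotion: str       # dominant emotional response of the audience
    confidence: float  # fraction of the audience showing that emotion

def determine_emotional_response(expressions):
    """Aggregate per-viewer expressions into one audience-level emotion."""
    emotions = Counter(EXPRESSION_TO_EMOTION.get(e, "neutral") for e in expressions)
    emotion, count = emotions.most_common(1)[0]
    return emotion, count / len(expressions)

def annotate(timestamp, expressions):
    """Generate an annotation of the stimulus at `timestamp`."""
    emotion, confidence = determine_emotional_response(expressions)
    return Annotation(timestamp, emotion, confidence)

# Example: at t=12.5 s, four of five viewers smile or laugh.
note = annotate(12.5, ["smile", "laugh", "smile", "neutral", "smile"])
print(note)  # Annotation(timestamp=12.5, emotion='amusement', confidence=0.8)
```

The per-viewer detection step itself (finding faces in video frames and classifying their expressions) is abstracted away here; in practice it would be supplied by a separate computer-vision component.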
52 Citations
16 Claims
1. A method of annotating audio-visual data comprising:
- detecting a plurality of facial expressions in an audience based on a stimulus;
- determining an emotional response to the stimulus based on the facial expressions; and
- generating at least one annotation of the stimulus based on the determined emotional response.
Dependent claims: 2, 3, 4, 5
6. A computer program product for annotating audio-visual data, the computer program product comprising a computer usable medium having computer readable program means for causing a computer to perform the steps of:
- initiating a stimulus to an audience;
- detecting a plurality of facial expressions in the audience;
- determining an emotional response to the stimulus based on the facial expressions; and
- generating at least one annotation of the stimulus based on the determined emotional response.
Dependent claims: 7, 8, 9, 10
11. A system for annotating audio-visual data comprising:
- video detection means;
- computer processing means coupled to the video detection means, wherein the computer processing means includes an annotation generation module, wherein the annotation generation module comprises logic for:
  - detecting a plurality of facial expressions in an audience based on a stimulus;
  - determining an emotional response to the stimulus based on the facial expressions; and
  - generating at least one annotation of the stimulus based on the determined emotional response.
Dependent claims: 12, 13, 14, 15, 16
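The system claim's structure (video detection means coupled to processing means containing an annotation generation module) can be sketched as a module that accepts any detector callable; the detector interface, expression labels, and annotation shape below are assumptions for illustration, not part of the patent.

```python
from collections import Counter

class AnnotationGenerationModule:
    """Toy stand-in for the claimed annotation generation module.

    `detector` is any callable returning the facial expressions currently
    visible in the audience, playing the role of the 'video detection
    means' the module is coupled to.
    """

    # Hypothetical expression-to-emotion mapping; not specified by the patent.
    EMOTIONS = {"smile": "amusement", "frown": "displeasure"}

    def __init__(self, detector):
        self.detector = detector  # coupling to the video detection means

    def generate_annotation(self, timestamp):
        expressions = self.detector()                # detect facial expressions
        tally = Counter(self.EMOTIONS.get(e, "neutral") for e in expressions)
        emotion, _ = tally.most_common(1)[0]         # determine emotional response
        return {"t": timestamp, "emotion": emotion}  # generate the annotation

# Usage with a stubbed detector standing in for real video analysis.
module = AnnotationGenerationModule(lambda: ["smile", "smile", "frown"])
print(module.generate_annotation(3.0))  # {'t': 3.0, 'emotion': 'amusement'}
```

Passing the detector in as a callable mirrors the claim's separation between the video detection means and the processing logic: either side can be replaced independently.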
Specification