CONTENT PLAYING DEVICE
First Claim
1. A system for generating information on viewer emotional response to content, comprising:
a viewer response input unit configured to capture local data representing at least one of local viewer audio or local viewer video of a local viewer's response to content data, the content data representing at least one of content audio or content video; and
a viewer emotion analysis unit configured to generate local viewer emotion information indicative of an emotional response of the local viewer to the content data, based on the local data.
Abstract
A system for generating information on viewer emotional response to content is disclosed. The system may include a viewer response input unit configured to capture local data representing at least one of local viewer audio or local viewer video of a local viewer's response to content data, the content data representing at least one of content audio or content video. The system may also include a viewer emotion analysis unit configured to generate local viewer emotion information indicative of an emotional response of the local viewer to the content data, based on the local data.
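The two claimed units can be pictured, very loosely, in code. The following is a minimal sketch under stated assumptions: the class names, data shapes, and the audio-loudness heuristic are all illustrative inventions for this example; the claims do not specify how emotion is analyzed.

```python
# Hypothetical sketch of the claimed two-unit system; the class names,
# data shapes, and emotion heuristic are illustrative assumptions,
# not the patent's disclosed implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocalData:
    """Captured local viewer response: audio samples and/or video frames."""
    audio: Optional[list] = None  # e.g. amplitude samples
    video: Optional[list] = None  # e.g. per-frame descriptors

class ViewerResponseInputUnit:
    """Captures local data representing the viewer's response to content."""
    def capture(self, audio=None, video=None) -> LocalData:
        return LocalData(audio=audio, video=video)

class ViewerEmotionAnalysisUnit:
    """Generates local viewer emotion information from the local data."""
    def analyze(self, local_data: LocalData) -> dict:
        # Toy heuristic: louder audio -> higher "excitement" score.
        loudness = max(map(abs, local_data.audio or [0]))
        return {"excitement": min(1.0, loudness / 100.0)}

input_unit = ViewerResponseInputUnit()
analysis_unit = ViewerEmotionAnalysisUnit()
emotion = analysis_unit.analyze(input_unit.capture(audio=[10, -80, 45]))
print(emotion)  # {'excitement': 0.8}
```

The point of the sketch is only the division of labor: one unit captures raw audio/video of the viewer, the other reduces it to emotion information.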
37 Citations
21 Claims
1. A system for generating information on viewer emotional response to content, comprising:
a viewer response input unit configured to capture local data representing at least one of local viewer audio or local viewer video of a local viewer's response to content data, the content data representing at least one of content audio or content video; and
a viewer emotion analysis unit configured to generate local viewer emotion information indicative of an emotional response of the local viewer to the content data, based on the local data.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)
18. A device for combining content with information on viewer emotional response to the content, comprising:
a viewer response input unit configured to capture local data representing at least one of local viewer audio or local viewer video of a local viewer's response to content data, the content data representing at least one of content audio or content video;
a viewer emotion analysis unit configured to generate local viewer emotion information indicative of an emotional response of the local viewer to the content data, based on the local data;
a transmission unit configured to transmit the local viewer emotion information to a server; and
a synthesis unit configured to:
receive combined viewer emotion information from the server;
determine at least one of effect audio or effect video, based on the combined viewer emotion information; and
combine at least one of effect audio data or effect video data, representing the determined at least one of effect audio or effect video, with the content data.
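The synthesis unit in claim 18 performs three steps: receive, determine, combine. A hypothetical sketch of those steps follows; the excitement threshold, the effect file names, and the dictionary data shapes are illustrative assumptions, not anything the claim discloses.

```python
# Hypothetical sketch of the claim-18 synthesis unit; the threshold,
# effect catalogue, and data shapes are illustrative assumptions.

class SynthesisUnit:
    def receive(self, combined_emotion: dict) -> None:
        """Receive combined viewer emotion information from the server."""
        self.combined = combined_emotion

    def determine_effect(self) -> dict:
        """Determine effect audio and/or video from the combined information."""
        if self.combined.get("excitement", 0.0) > 0.5:
            return {"effect_audio": "applause.wav", "effect_video": "confetti.mp4"}
        return {"effect_audio": None, "effect_video": None}

    def combine(self, content: dict) -> dict:
        """Combine the determined effect data with the content data."""
        return {**content, **self.determine_effect()}

unit = SynthesisUnit()
unit.receive({"excitement": 0.7})
out = unit.combine({"content_video": "episode1.mp4"})
print(out["effect_audio"])  # applause.wav
```

Note that the effect is driven by the *combined* (multi-viewer) emotion information from the server, not by the local viewer's response alone.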
19. A method for generating information on viewer emotional response to content, comprising:
capturing local data representing at least one of local viewer audio or local viewer video of a local viewer's response to content data, the content data representing at least one of content audio or content video; and
generating local viewer emotion information indicative of an emotional response of the local viewer to the content data, based on the local data.
20. A method for combining content with information on viewer emotional response to the content, comprising:
capturing local data representing at least one of local viewer audio or local viewer video of a local viewer's response to content data, the content data representing at least one of content audio or content video;
generating local viewer emotion information indicative of an emotional response of the local viewer to the content data, based on the local data;
transmitting the local viewer emotion information to a server;
receiving combined viewer emotion information from the server;
determining at least one of effect audio or effect video, based on the combined viewer emotion information; and
combining at least one of effect audio data or effect video data, representing the determined at least one of effect audio or effect video, with the content data.
21. A non-transitory, computer-readable storage medium storing a program that, when executed by a processor, causes a content presenting device to perform a method for combining content with information on viewer emotional response to the content, the method comprising:
capturing local data representing at least one of local viewer audio or local viewer video of a local viewer's response to content data, the content data representing at least one of content audio or content video;
generating local viewer emotion information indicative of an emotional response of the local viewer to the content data, based on the local data;
transmitting the local viewer emotion information to a server;
receiving combined viewer emotion information from the server;
determining at least one of effect audio or effect video, based on the combined viewer emotion information; and
combining at least one of effect audio data or effect video data, representing the determined at least one of effect audio or effect video, with the content data.
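Claims 18, 20, and 21 all depend on a server that merges many viewers' local emotion information into "combined viewer emotion information", but the claims do not say how the combining is done. As a minimal sketch, assuming a simple per-emotion average over all reporting viewers:

```python
# Hypothetical server-side aggregation; the per-emotion averaging rule
# is an illustrative assumption -- the claims only say "combined".

def combine_viewer_emotion(reports: list) -> dict:
    """Average each emotion score across all reporting viewers;
    a viewer missing a score contributes 0 for that emotion."""
    keys = {k for report in reports for k in report}
    return {k: sum(r.get(k, 0.0) for r in reports) / len(reports) for k in keys}

reports = [
    {"excitement": 0.9, "amusement": 0.2},
    {"excitement": 0.4, "amusement": 0.8},
    {"excitement": 0.8},
]
combined = combine_viewer_emotion(reports)
print(round(combined["excitement"], 2))  # 0.7
```

The resulting dictionary is what the synthesis unit (or the receiving step of claims 20 and 21) would consume when determining effect audio or effect video.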
Specification