Emotional reaction sharing
First Claim
1. A computing device comprising:
a processor; and
memory comprising processor-executable instructions that when executed by the processor cause performance of operations, the operations comprising:
receiving, at a user reaction distribution service, a first set of landmark points, a second set of landmark points, and a mood of a first user from a client device, wherein:
the first set of landmark points represents a set of facial features of the first user at a first point in time and the second set of landmark points represents the set of facial features of the first user at a second point in time while the first user is viewing content through the client device, and
the mood is identified at the client device from audio of the first user while the first user is viewing the content through the client device;
evaluating, at the user reaction distribution service, the first set of landmark points and the second set of landmark points, using a facial expression recognition algorithm that maps changes in location of landmark points to facial movements indicative of facial expressions, to identify a facial expression of the first user while the first user is viewing the content;
verifying the facial expression of the first user based upon the mood;
identifying, at the user reaction distribution service, a set of facial expressions of other users viewing the content during a time interval between the first point in time and the second point in time based upon landmark points received from client devices of the other users, wherein:
the client device of the first user and the client devices of the other users define a group of client devices, and
the facial expression of the first user and the set of facial expressions of other users define a group of facial expressions;
ranking, at the user reaction distribution service, the group of facial expressions to determine a most frequently occurring facial expression, amongst the group of facial expressions, during the time interval; and
sending, from the user reaction distribution service, the most frequently occurring facial expression to a plurality of client devices amongst the group of client devices in real-time during viewing of the content by the first user and by the other users.
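The steps recited above amount to a classify-verify-rank pipeline: landmark movement is mapped to a facial expression, the expression is checked against the audio-derived mood, and the group's expressions are tallied to find the most frequent one for distribution. A minimal, hypothetical sketch of that flow (all function names, data shapes, and the toy classifier are invented here for illustration, not taken from the patent):

```python
from collections import Counter

def classify_expression(landmarks_t1, landmarks_t2):
    # Toy stand-in for the facial expression recognition algorithm: maps a
    # change in landmark location (mouth corner rising between the two
    # points in time, in image coordinates where y grows downward) to an
    # expression label.
    mouth_rise = landmarks_t1["mouth_corner"][1] - landmarks_t2["mouth_corner"][1]
    return "smile" if mouth_rise > 0 else "neutral"

def verify(expression, mood):
    # Claimed verification step: keep the expression only when it is
    # consistent with the mood identified from the user's audio.
    consistent = {"smile": "happy", "neutral": "calm"}
    return expression if consistent.get(expression) == mood else None

def most_frequent_expression(reports):
    # Claimed ranking step: tally verified expressions across the group of
    # client devices and return the most frequently occurring one, which
    # the service would then send to the group in real time.
    tally = Counter()
    for r in reports:
        expression = verify(classify_expression(r["t1"], r["t2"]), r["mood"])
        if expression is not None:
            tally[expression] += 1
    return tally.most_common(1)[0][0] if tally else None

# One report per viewer: two landmark snapshots plus an audio-derived mood.
reports = [
    {"t1": {"mouth_corner": (0, 5)}, "t2": {"mouth_corner": (0, 3)}, "mood": "happy"},
    {"t1": {"mouth_corner": (0, 5)}, "t2": {"mouth_corner": (0, 2)}, "mood": "happy"},
    {"t1": {"mouth_corner": (0, 5)}, "t2": {"mouth_corner": (0, 6)}, "mood": "calm"},
]
print(most_frequent_expression(reports))  # -> smile
```

The verification step is what distinguishes this flow from plain expression recognition: an expression that contradicts the audio-derived mood is discarded before ranking.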
Abstract
One or more computing devices, systems, and/or methods for emotional reaction sharing are provided. For example, a client device captures video of a user viewing content, such as a live stream video. Landmark points, corresponding to facial features of the user, are identified and provided to a user reaction distribution service that evaluates the landmark points to identify a facial expression of the user, such as a crying facial expression. The facial expression, such as landmark points that can be applied to a three-dimensional model of an avatar to recreate the facial expression, is provided to client devices of users viewing the content, such as a second client device. The second client device applies the landmark points of the facial expression to a bone structure mapping and a muscle movement mapping to create an expressive avatar having the facial expression for display to a second user.
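The abstract's receiving side applies landmark points to a bone structure mapping and a muscle movement mapping to pose the avatar. A hypothetical sketch of that lookup (the landmark names, bones, and muscles below are invented for illustration; a real implementation would drive a 3-D rig):

```python
# Illustrative mappings from landmark to avatar bone, and bone to muscle.
BONE_MAPPING = {"mouth_corner": "jaw_bone", "eyebrow": "brow_bone"}
MUSCLE_MAPPING = {"jaw_bone": "zygomaticus", "brow_bone": "frontalis"}

def pose_avatar(landmark_points):
    # Translate each received landmark into a (bone, muscle, target
    # position) tuple that a 3-D renderer could apply to recreate the
    # sender's facial expression on an expressive avatar.
    pose = []
    for landmark, position in landmark_points.items():
        bone = BONE_MAPPING.get(landmark)
        if bone is not None:
            pose.append((bone, MUSCLE_MAPPING[bone], position))
    return pose

print(pose_avatar({"mouth_corner": (0.4, 0.1), "eyebrow": (0.3, 0.8)}))
```

Sending landmark points rather than video is the design point here: the receiving client reconstructs the expression locally, so only a small set of coordinates crosses the network.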
235 Citations
18 Claims
1. A computing device comprising a processor and memory (set forth in full as the First Claim above). - View Dependent Claims (2, 3, 4, 5, 6)
7. A method of emotional reaction sharing, the method comprising:
receiving, at a user reaction distribution service, a first set of landmark points, a second set of landmark points, and a mood of a first user from a client device, wherein:
the first set of landmark points represents a set of facial features of the first user at a first point in time and the second set of landmark points represents the set of facial features of the first user at a second point in time while the first user is viewing content through the client device, and
the mood is identified at the client device from audio of the first user while the first user is viewing the content through the client device;
evaluating, at the user reaction distribution service, the first set of landmark points and the second set of landmark points, using a facial expression recognition algorithm that maps changes in location of landmark points to facial movements indicative of facial expressions, to identify a facial expression of the first user while the first user is viewing the content;
verifying the facial expression of the first user based upon the mood;
identifying, at the user reaction distribution service, a set of facial expressions of other users viewing the content during a time interval between the first point in time and the second point in time based upon landmark points received from client devices of the other users, wherein:
the client device of the first user and the client devices of the other users define a group of client devices, and
the facial expression of the first user and the set of facial expressions of other users define a group of facial expressions;
ranking, at the user reaction distribution service, the group of facial expressions to determine a most frequently occurring facial expression, amongst the group of facial expressions, during the time interval; and
sending, from the user reaction distribution service, the most frequently occurring facial expression to a plurality of client devices amongst the group of client devices in real-time during viewing of the content by the first user and by the other users. - View Dependent Claims (8, 9, 10)
11. A non-transitory machine readable medium having stored thereon processor-executable instructions that when executed cause performance of operations, the operations comprising:
receiving, at a user reaction distribution service, a first set of landmark points, a second set of landmark points, and a mood of a first user from a client device, wherein:
the first set of landmark points represents a set of facial features of the first user at a first point in time and the second set of landmark points represents the set of facial features of the first user at a second point in time while the first user is viewing content through the client device, and
the mood is identified at the client device from audio of the first user while the first user is viewing the content through the client device;
evaluating, at the user reaction distribution service, the first set of landmark points and the second set of landmark points, using a facial expression recognition algorithm that maps changes in location of landmark points to facial movements indicative of facial expressions, to identify a facial expression of the first user while the first user is viewing the content;
verifying the facial expression of the first user based upon the mood;
identifying, at the user reaction distribution service, a set of facial expressions of other users viewing the content during a time interval between the first point in time and the second point in time based upon landmark points received from client devices of the other users, wherein:
the client device of the first user and the client devices of the other users define a group of client devices,
the first user and the other users define a group of users, and
the facial expression of the first user and the set of facial expressions of other users define a group of facial expressions;
ranking, at the user reaction distribution service, the group of facial expressions to determine a most frequently occurring facial expression, amongst the group of facial expressions, during the time interval; and
sending, from the user reaction distribution service, the most frequently occurring facial expression to a plurality of client devices amongst the group of client devices in real-time during viewing of the content by the first user and by the other users. - View Dependent Claims (12, 13, 14, 15, 16, 17, 18)
Specification