Generating a mood log based on user images
First Claim
1. A method comprising:
- iteratively performing, in response to interaction events between a user and another user:
  - receiving, by one or more hardware processors, an image taken by a mobile computing device during an interaction event;
  - determining, using facial recognition and in response to the image being taken, that a face of the user is included in the image;
  - in response to determining that the image includes the face of the user:
    - identifying points for two or more features of the face of the user,
    - comparing a distribution of the points to a database of point distributions,
    - determining a mood from the database associated with the distribution in the database that most closely matches the distribution of points,
    - determining an intensity level of the mood based on the distribution in the database that most closely matches the distribution of points; and
  - storing an association between the mood and intensity level with the other user associated with the interaction event; and
  - causing, by a server, display of a visual indicator of the moods on the mobile computing device.
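The core of the claimed method is a nearest-match lookup: landmark points are extracted for facial features, their distribution is compared against a database of distributions, and the mood and intensity of the closest entry are returned. A minimal sketch of that matching step, with a purely hypothetical database (the feature values, mood labels, and intensity numbers are illustrative, not from the patent):

```python
import math

# Hypothetical mood database: each entry pairs a normalized landmark-point
# distribution with a mood label and an intensity level. All values here
# are made up for illustration.
MOOD_DB = [
    {"distribution": [0.42, 0.31, 0.27], "mood": "happy", "intensity": 0.8},
    {"distribution": [0.38, 0.35, 0.27], "mood": "neutral", "intensity": 0.2},
    {"distribution": [0.33, 0.30, 0.37], "mood": "sad", "intensity": 0.6},
]

def distance(a, b):
    """Euclidean distance between two point distributions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_mood(distribution):
    """Return (mood, intensity) of the database entry whose distribution
    most closely matches the given one."""
    best = min(MOOD_DB, key=lambda e: distance(e["distribution"], distribution))
    return best["mood"], best["intensity"]
```

Note that both the mood and its intensity come from the same closest entry, mirroring the claim language in which both determinations are based on the single best-matching distribution.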
1 Assignment
0 Petitions
Abstract
A system and method for generating a mood log based on user images. In one embodiment, a system includes an image module that receives an image taken by a user's mobile computing device and determines that a face of the user is included in the image, a mood module that determines a mood level of the user based on the face, and a log module that stores the mood level in a log of mood levels for the user.
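The abstract names three cooperating modules. A skeletal sketch of how they could be wired together, with the recognition and classification logic stubbed out (class and method names are assumptions for illustration, not from the specification):

```python
class ImageModule:
    """Stand-in for the image module: receives an image and checks,
    via stubbed facial recognition, that the user's face is present."""
    def contains_user_face(self, image, user_id):
        # Illustrative stub; a real system would run a recognition model.
        return image.get("face_id") == user_id

class MoodModule:
    """Stand-in for the mood module: determines a mood level from the face."""
    def mood_level(self, image):
        # Illustrative stub for landmark-based mood classification.
        return image.get("mood", "neutral"), image.get("intensity", 0.0)

class LogModule:
    """Log module: stores mood levels in a log for the user."""
    def __init__(self):
        self.entries = []

    def store(self, user_id, mood, intensity):
        self.entries.append({"user": user_id, "mood": mood, "intensity": intensity})

def process(image, user_id, image_mod, mood_mod, log_mod):
    """Run one incoming image through the three modules in order."""
    if image_mod.contains_user_face(image, user_id):
        mood, intensity = mood_mod.mood_level(image)
        log_mod.store(user_id, mood, intensity)
```

Images without the user's face fall through without a log entry, matching the abstract's condition that the mood module acts only once the user's face is determined to be in the image.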
65 Citations
23 Claims
1. A method comprising the steps recited in full under "First Claim" above. - View Dependent Claims (2, 3, 4, 5, 6, 23)
7. A system comprising:
one or more hardware processors configured to:
- iteratively perform, in response to interaction events between a user and another user:
  - receive an image taken by a mobile computing device during an interaction event;
  - determine, using facial recognition and in response to the image being received, that a face of the user is included in the received image;
  - in response to determining that the image includes the face of the user:
    - identify points for two or more features of the face of the user,
    - compare a distribution of the points to a database of point distributions,
    - determine a mood from the database associated with the distribution in the database that most closely matches the distribution of points,
    - determine an intensity level of the mood based on the distribution in the database that most closely matches the distribution of points;
  - store an association between the mood and intensity level with the other user associated with the interaction event; and
  - cause, by a server, display of a visual indicator of the moods on the mobile computing device. - View Dependent Claims (8, 9, 10, 11, 12, 13, 14)
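A distinguishing step in these claims is that each detected mood is stored in association with the *other* user in the interaction event, and a visual indicator of the moods is then displayed. A minimal sketch of that per-contact log and indicator, assuming a hypothetical emoji-style mapping (the icon choices are illustrative only):

```python
from collections import defaultdict

# Hypothetical per-contact mood log: each entry associates a detected
# mood and intensity level with the other user in the interaction event.
mood_log = defaultdict(list)

def record_interaction(other_user, mood, intensity):
    """Store an association between the mood/intensity and the other user."""
    mood_log[other_user].append((mood, intensity))

def visual_indicator(other_user):
    """Return a simple visual indicator for the most recent mood recorded
    with this contact, or None if no interactions are logged."""
    icons = {"happy": ":)", "sad": ":(", "neutral": ":|"}  # illustrative mapping
    if not mood_log[other_user]:
        return None
    mood, _intensity = mood_log[other_user][-1]
    return icons.get(mood, "?")
```

Keying the log by the other user, rather than by timestamp alone, is what lets the indicator summarize how the user tends to feel during interactions with a particular contact.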
15. A machine-readable medium having no transitory signals and storing instructions that, when executed by at least one processor of a machine, cause the machine to perform operations comprising:
- iteratively performing, in response to interaction events between a user and another user:
  - receiving an image taken by a mobile computing device during an interaction event;
  - determining, using facial recognition and in response to the image being taken, that a face of the user is included in the image;
  - in response to determining that the image includes the face of the user:
    - identifying points for two or more features of the face of the user,
    - comparing a distribution of the points to a database of point distributions,
    - determining a mood from the database associated with the distribution in the database that most closely matches the distribution of points,
    - determining an intensity level of the mood based on the distribution in the database that most closely matches the distribution of points; and
  - storing an association between the mood and intensity level with the other user associated with the interaction event; and
  - causing, by a server, display of a visual indicator of the moods on the mobile computing device. - View Dependent Claims (16, 17, 18, 19, 20, 21, 22)
Specification