Automated avatar generation
Abstract
Systems, devices, media, and methods are presented for generating facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial landmarks within the portion of the face. The systems and methods determine one or more characteristics representing the portion of the face, in response to detecting the portion of the face. Based on the one or more characteristics and the set of facial landmarks, the systems and methods generate a representation of a face.
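The pipeline the abstract describes (detect a face, identify a set of facial landmarks, derive characteristics, generate a representation) can be sketched roughly as follows. Every function name, landmark name, and the single "aspect ratio" characteristic are illustrative assumptions, not the patented implementation:

```python
# Illustrative sketch of the abstract's pipeline. The landmark names,
# the aspect-ratio characteristic, and the representation format are
# assumptions for demonstration only.

def identify_landmarks(face_region):
    # Stand-in for a trained landmark detector: this "detector" simply
    # reads precomputed (x, y) points out of the input.
    return face_region["landmarks"]

def derive_characteristics(landmarks):
    # One toy characteristic: the face's width-to-height ratio taken
    # from the landmark extremes.
    xs = [x for x, _ in landmarks.values()]
    ys = [y for _, y in landmarks.values()]
    return {"aspect_ratio": (max(xs) - min(xs)) / (max(ys) - min(ys))}

def generate_representation(face_region):
    # Combine the landmarks and derived characteristics into an avatar spec.
    landmarks = identify_landmarks(face_region)
    return {
        "landmarks": landmarks,
        "characteristics": derive_characteristics(landmarks),
    }

face = {"landmarks": {"left_eye": (2, 3), "right_eye": (8, 3), "chin": (5, 12)}}
avatar = generate_representation(face)
```

In a real system the landmark step would be a trained model (e.g. a multi-point face-shape predictor); the point of the sketch is only the data flow from landmarks to characteristics to representation.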
20 Claims
1. A method, comprising:

receiving, by one or more processors, a two-dimensional (2D) image depicting at least a portion of a face of a first user;
determining, by the one or more processors, a hair texture of a hair region of the face of the first user depicted in the 2D image;
comparing a dimension of the hair region to one or more of a set of facial landmarks within a portion of the face; and
determining hair length based on the comparison of the dimension of the hair region to the one or more of the set of facial landmarks;
generating, by the one or more processors, a representation of the face of the first user depicted in the 2D image based on the determined hair texture and hair length of the hair region;
displaying a user interface element associated with a physical attribute of the generated representation of the face;
in response to receiving input selecting the user interface element associated with the physical attribute of the generated representation of the face, presenting a plurality of user interface elements for modifying the physical attribute of the generated representation of the face, each of the user interface elements in the plurality of user interface elements representing a different modification to a same physical attribute of the generated representation of the face; and
generating, by a user device of the first user, a message, directed to a second user, that includes the generated representation of the face of the first user.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 19)
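The hair-length limitation of claim 1 compares a dimension of the hair region against facial landmarks. A minimal sketch of that comparison might look like the following, where the brow/chin landmarks and the 0.5/1.0 ratio thresholds are invented for illustration:

```python
# Sketch of claim 1's hair-length step: compare the hair region's
# vertical extent to a face height derived from facial landmarks.
# Landmark names and thresholds are illustrative assumptions.

def classify_hair_length(hair_region_height, landmarks):
    face_height = landmarks["chin"][1] - landmarks["brow"][1]
    ratio = hair_region_height / face_height
    if ratio < 0.5:
        return "short"
    if ratio < 1.0:
        return "medium"
    return "long"

landmarks = {"brow": (5, 2), "chin": (5, 12)}  # face height = 10
```

Because the hair region is measured relative to landmark distances rather than in absolute pixels, the classification is insensitive to how large the face appears in the 2D image.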
17. A system, comprising:

one or more processors; and
a non-transitory processor-readable storage medium coupled to the one or more processors and storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
receiving a two-dimensional (2D) image depicting at least a portion of a face of a first user;
determining a hair texture of a hair region of the face of the first user depicted in the 2D image;
comparing a dimension of the hair region to one or more of a set of facial landmarks within a portion of the face; and
determining hair length based on the comparison of the dimension of the hair region to the one or more of the set of facial landmarks;
generating a representation of the face of the first user depicted in the 2D image based on the determined hair texture and hair length of the hair region;
displaying a user interface element associated with a physical attribute of the generated representation of the face;
in response to receiving input selecting the user interface element associated with the physical attribute of the generated representation of the face, presenting a plurality of user interface elements for modifying the physical attribute of the generated representation of the face, each of the user interface elements in the plurality of user interface elements representing a different modification to a same physical attribute of the generated representation of the face; and
generating, by a user device of the first user, a message, directed to a second user, that includes the generated representation of the face of the first user.
20. A non-transitory processor-readable storage medium storing processor-executable instructions that, when executed by a processor of a machine, cause the machine to perform operations comprising:

receiving a two-dimensional (2D) image depicting at least a portion of a face of a first user;
determining a hair texture of a hair region of the face of the first user depicted in the 2D image;
comparing a dimension of the hair region to one or more of a set of facial landmarks within a portion of the face; and
determining hair length based on the comparison of the dimension of the hair region to the one or more of the set of facial landmarks;
generating a representation of the face of the first user depicted in the 2D image based on the determined hair texture and hair length of the hair region;
displaying a user interface element associated with a physical attribute of the generated representation of the face;
in response to receiving input selecting the user interface element associated with the physical attribute of the generated representation of the face, presenting a plurality of user interface elements for modifying the physical attribute of the generated representation of the face, each of the user interface elements in the plurality of user interface elements representing a different modification to a same physical attribute of the generated representation of the face; and
generating, by a user device of the first user, a message, directed to a second user, that includes the generated representation of the face of the first user.
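The user-interface limitations shared by the independent claims (one element per physical attribute; selecting it exposes several alternative modifications of that same attribute) can be modeled with a simple data structure. The attribute names and option lists below are illustrative assumptions, not from the patent:

```python
# Sketch of the claimed UI flow: selecting the element for a physical
# attribute presents a plurality of elements, each representing a
# different modification to that same attribute; choosing one updates
# the representation. Attribute names and options are assumptions.

MODIFICATIONS = {
    "hair_texture": ["straight", "wavy", "curly"],
    "hair_length": ["short", "medium", "long"],
}

def options_for(attribute):
    # The "plurality of user interface elements" for one attribute.
    return MODIFICATIONS[attribute]

def apply_modification(representation, attribute, choice):
    if choice not in MODIFICATIONS[attribute]:
        raise ValueError(f"unknown option for {attribute}: {choice}")
    updated = dict(representation)  # leave the original untouched
    updated[attribute] = choice
    return updated

avatar = {"hair_texture": "wavy", "hair_length": "short"}
modified = apply_modification(avatar, "hair_length", "long")
```

Keeping the modification returning a new dictionary rather than mutating in place makes it easy to preview each candidate modification before the user commits to one.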
Specification