Avatars reflecting user states
First Claim
1. At least one non-transitory computer readable medium storing instructions, the instructions comprising instructions executable by a data processing apparatus to cause the data processing apparatus to:
- receive a first user input selecting a user state, the user state associated with one or more trigger events;
- receive a second user input selecting a first avatar instance from among one or more second avatar instances, each of the second avatar instances associated with the user state;
- detect an occurrence of at least one of the one or more trigger events; and
- update a current avatar instance with the first avatar instance based, at least in part, on the detection,
wherein each second avatar instance is generated, at least in part, by modifying a basic avatar, and wherein modifying a basic avatar comprises selecting a facial feature and changing, responsive to one or more third user inputs, at least one of a location, shape, or size of the facial feature.
Abstract
Methods, systems, and computer-readable media for creating and using customized avatar instances to reflect current user states are disclosed. In various implementations, the user states can be defined using trigger events based on user-entered textual data, emoticons, or states of the device being used. For each user state, a customized avatar instance having a facial expression, body language, accessories, clothing items, and/or a presentation scheme reflective of the user state can be generated. When one or more trigger events indicating occurrence of a particular user state are detected on the device, the avatar presented on the device is updated with the customized avatar instance associated with the particular user state.
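The flow described in the abstract — associating each user state with trigger events and a customized avatar instance, then swapping in that instance when a trigger fires — can be sketched as follows. This is a minimal illustration, not the patented implementation; all class and method names (`AvatarInstance`, `UserState`, `AvatarManager`, `on_event`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarInstance:
    name: str
    facial_expression: str

@dataclass
class UserState:
    name: str
    triggers: set            # e.g. emoticons, entered text, or device-state signals
    avatar: AvatarInstance   # the customized instance reflecting this state

class AvatarManager:
    def __init__(self, default: AvatarInstance):
        self.current = default
        self.states = []

    def register(self, state: UserState) -> None:
        self.states.append(state)

    def on_event(self, event: str) -> bool:
        # Detect whether the event matches any state's trigger and, if so,
        # update the current avatar with that state's customized instance.
        for state in self.states:
            if event in state.triggers:
                self.current = state.avatar
                return True
        return False

happy = UserState("happy", {":)", ":-)"}, AvatarInstance("happy_avatar", "smiling"))
mgr = AvatarManager(AvatarInstance("basic", "neutral"))
mgr.register(happy)
mgr.on_event(":)")
print(mgr.current.name)  # happy_avatar
```

In this sketch a trigger is a simple string match; in the claims, trigger events may also be derived from device states rather than user-entered text.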
20 Claims
1. At least one non-transitory computer readable medium storing instructions, the instructions comprising instructions executable by a data processing apparatus to cause the data processing apparatus to:
- receive a first user input selecting a user state, the user state associated with one or more trigger events;
- receive a second user input selecting a first avatar instance from among one or more second avatar instances, each of the second avatar instances associated with the user state;
- detect an occurrence of at least one of the one or more trigger events; and
- update a current avatar instance with the first avatar instance based, at least in part, on the detection,
wherein each second avatar instance is generated, at least in part, by modifying a basic avatar, and wherein modifying a basic avatar comprises selecting a facial feature and changing, responsive to one or more third user inputs, at least one of a location, shape, or size of the facial feature.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
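The "modifying a basic avatar" step recited in the claim — selecting a facial feature and changing its location, shape, or size responsive to user inputs — can be sketched as below. This is an illustrative sketch only; the types and the `modify_basic_avatar` helper are hypothetical names, not the patent's disclosed implementation.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FacialFeature:
    kind: str          # e.g. "mouth", "eyes"
    location: tuple    # (x, y) position on the avatar face
    shape: str
    size: float

@dataclass
class Avatar:
    features: dict     # maps feature kind -> FacialFeature

def modify_basic_avatar(basic: Avatar, feature_kind: str, **changes) -> Avatar:
    """Derive a new avatar instance from a basic avatar by selecting one
    facial feature and changing its location, shape, and/or size."""
    selected = basic.features[feature_kind]
    updated = replace(selected, **changes)   # apply the user-requested changes
    new_features = dict(basic.features)      # leave the basic avatar untouched
    new_features[feature_kind] = updated
    return Avatar(new_features)

basic = Avatar({"mouth": FacialFeature("mouth", (0, -10), "line", 1.0)})
happy = modify_basic_avatar(basic, "mouth", shape="smile", size=1.2)
```

Each `**changes` keyword stands in for a third user input; the basic avatar is left intact so further state-specific instances can be derived from it.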
8. A method of personalizing avatars in an electronic device, the method comprising:
- receiving a first user input selecting a user state, the user state associated with one or more trigger events;
- receiving a second user input selecting a first avatar instance from among one or more second avatar instances, each of the second avatar instances associated with the user state;
- detecting an occurrence of at least one of the one or more trigger events; and
- updating a current avatar instance with the first avatar instance based, at least in part, on the detection,
wherein each second avatar instance is generated, at least in part, by modifying a basic avatar, and wherein modifying a basic avatar comprises selecting a facial feature and changing, responsive to one or more third user inputs, at least one of a location, shape, or size of the facial feature.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
15. A computing system comprising:
- at least one processor;
- at least one memory coupled to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions comprising instructions to:
- receive a first user input selecting a user state, the user state associated with one or more trigger events;
- receive a second user input selecting a first avatar instance from among one or more second avatar instances, each of the second avatar instances associated with the user state;
- detect an occurrence of at least one of the one or more trigger events; and
- update a current avatar instance with the first avatar instance based, at least in part, on the detection,
wherein each second avatar instance is generated, at least in part, by modifying a basic avatar, and wherein modifying a basic avatar comprises selecting a facial feature and changing, responsive to one or more third user inputs, at least one of a location, shape, or size of the facial feature.
- View Dependent Claims (16, 17, 18, 19, 20)
Specification