Avatar-based augmented reality engagement
First Claim
1. A processor-implemented method for determining an engagement level of an individual, the method comprising:
detecting, by a processor, a user intent to engage with another individual based on a user interaction with an augmented reality device;
receiving a user-designated topic;
capturing, by a computer, a plurality of image data depicting a relative location of a user;
identifying an individual within the captured image data using facial recognition, a social media location check-in, and a plurality of livestreaming information;
generating a profile for each identified individual;
storing the generated profile in a repository;
gathering a plurality of engagement level indicator data associated with the identified individual, wherein the plurality of engagement level indicator data comprises a plurality of physical attribute information and a plurality of social media information, and wherein the plurality of physical attribute information comprises a plurality of facial expressions, a posture of the identified individual, a laughter volume of the identified individual, a plurality of emoted speech of the identified individual, an eye elongation of the identified individual, and an iris elongation of the identified individual;
calculating a current engagement level of the identified individual using the plurality of gathered engagement level indicator data and a variability of conversation of the identified individual with one or more other individuals, wherein a variability of conversation is categorical or physical, and wherein a categorical variability of conversation establishes a percentage chance with which a conversation varies from the user-designated topic, and wherein a physical variability of conversation weighs a plurality of physical criteria to determine a likelihood the user will enter into a conversation with the identified individual;
generating a relative engagement model using the calculated current engagement level, wherein the generated relative engagement model is an analytical model that shows an engagement of the identified individual compared to one or more other individuals within the plurality of captured image data; and
modifying a device display using an avatar corresponding to the calculated current engagement level, wherein the avatar is unique to the calculated current engagement level based on an openness of the identified individual to engage with the user.
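The calculation recited above can be sketched informally: a weighted physical-variability score is blended with a categorical variability (the chance the conversation stays on the user-designated topic), ranked against other individuals in the scene, and mapped to an avatar. All indicator names, weights, band thresholds, and the 50/50 blend below are illustrative assumptions; the claims recite no concrete numbers.

```python
# Informal sketch of the claimed engagement scoring. All weights, indicator
# values, thresholds, and function names are illustrative assumptions; the
# claims do not specify concrete numbers.

def physical_variability(indicators, weights):
    """Weigh the physical criteria (facial expressions, posture, laughter
    volume, emoted speech, eye/iris elongation), each scored 0..1."""
    return sum(weights[name] * indicators[name] for name in weights)

def engagement_level(indicators, weights, topic_drift_pct):
    """Combine categorical variability (percentage chance the conversation
    varies from the user-designated topic) with the weighted physical
    variability. The equal 50/50 blend is an assumption."""
    categorical = 1.0 - topic_drift_pct / 100.0
    return 0.5 * categorical + 0.5 * physical_variability(indicators, weights)

def relative_engagement_model(levels):
    """Rank every identified individual in the captured scene by calculated
    engagement level, most engaged first."""
    return sorted(levels, key=levels.get, reverse=True)

def avatar_for(level):
    """Map a calculated level to an avatar unique to that engagement band
    (band edges are assumed)."""
    if level >= 0.66:
        return "open"
    if level >= 0.33:
        return "neutral"
    return "closed"

weights = {"facial_expressions": 0.3, "posture": 0.2, "laughter_volume": 0.1,
           "emoted_speech": 0.2, "eye_elongation": 0.1, "iris_elongation": 0.1}
alice = {"facial_expressions": 0.9, "posture": 0.8, "laughter_volume": 0.5,
         "emoted_speech": 0.7, "eye_elongation": 0.6, "iris_elongation": 0.6}

level = engagement_level(alice, weights, topic_drift_pct=20.0)
print(round(level, 2), avatar_for(level))  # 0.77 open
```

A high blended score falls in the most "open" band, so the device display would show the avatar indicating the individual's openness to engage.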
Abstract
A method, computer system, and computer program product for determining an engagement level of an individual is provided. The present invention may include capturing a plurality of image data depicting a relative location of a user. The present invention may also include identifying an individual within the captured image data. The present invention may further include gathering a plurality of engagement level indicator data associated with the identified individual. The present invention may also include calculating a current engagement level of the identified individual using the plurality of gathered engagement level indicator data.
11 Claims
1. A processor-implemented method for determining an engagement level of an individual, the method comprising the steps set forth above under First Claim. - View Dependent Claims (2, 3, 4)
5. A computer system for determining an engagement level of an individual, the computer system comprising:
one or more processors, one or more computer-readable memories, one or more computer-readable tangible storage media, and program instructions stored on at least one of the one or more tangible storage media for execution by at least one of the one or more processors via at least one of the one or more memories, wherein the computer system is capable of performing a method comprising:
detecting a user intent to engage with another individual based on a user interaction with an augmented reality device;
receiving a user-designated topic;
capturing, by a computer, a plurality of image data depicting a relative location of a user;
identifying an individual within the captured image data using facial recognition, a social media location check-in, and a plurality of livestreaming information;
generating a profile for each identified individual;
storing the generated profile in a repository;
gathering a plurality of engagement level indicator data associated with the identified individual, wherein the plurality of engagement level indicator data comprises a plurality of physical attribute information and a plurality of social media information, and wherein the plurality of physical attribute information comprises a plurality of facial expressions, a posture of the identified individual, a laughter volume of the identified individual, a plurality of emoted speech of the identified individual, an eye elongation of the identified individual, and an iris elongation of the identified individual;
calculating a current engagement level of the identified individual using the plurality of gathered engagement level indicator data and a variability of conversation of the identified individual with one or more other individuals, wherein a variability of conversation is categorical or physical, and wherein a categorical variability of conversation establishes a percentage chance with which a conversation varies from the user-designated topic, and wherein a physical variability of conversation weighs a plurality of physical criteria to determine a likelihood the user will enter into a conversation with the identified individual;
generating a relative engagement model using the calculated current engagement level, wherein the generated relative engagement model is an analytical model that shows an engagement of the identified individual compared to one or more other individuals within the plurality of captured image data; and
modifying a device display using an avatar corresponding to the calculated current engagement level, wherein the avatar is unique to the calculated current engagement level based on an openness of the identified individual to engage with the user. - View Dependent Claims (6, 7, 8)
9. A computer program product for determining an engagement level of an individual, the computer program product comprising:
one or more computer-readable tangible storage media and program instructions stored on at least one of the one or more tangible storage media, the program instructions executable by a processor, the program instructions comprising:
program instructions to detect a user intent to engage with another individual based on a user interaction with an augmented reality device;
program instructions to receive a user-designated topic;
program instructions to capture a plurality of image data depicting a relative location of a user;
program instructions to identify an individual within the captured image data using facial recognition, a social media location check-in, and a plurality of livestreaming information;
program instructions to generate a profile for each identified individual;
program instructions to store the generated profile in a repository;
program instructions to gather a plurality of engagement level indicator data associated with the identified individual, wherein the plurality of engagement level indicator data comprises a plurality of physical attribute information and a plurality of social media information, and wherein the plurality of physical attribute information comprises a plurality of facial expressions, a posture of the identified individual, a laughter volume of the identified individual, a plurality of emoted speech of the identified individual, an eye elongation of the identified individual, and an iris elongation of the identified individual;
program instructions to calculate a current engagement level of the identified individual using the plurality of gathered engagement level indicator data and a variability of conversation of the identified individual with one or more other individuals, wherein a variability of conversation is categorical or physical, and wherein a categorical variability of conversation establishes a percentage chance with which a conversation varies from the user-designated topic, and wherein a physical variability of conversation weighs a plurality of physical criteria to determine a likelihood the user will enter into a conversation with the identified individual;
program instructions to generate a relative engagement model using the calculated current engagement level, wherein the generated relative engagement model is an analytical model that shows an engagement of the identified individual compared to one or more other individuals within the plurality of captured image data; and
program instructions to modify a device display using an avatar corresponding to the calculated current engagement level, wherein the avatar is unique to the calculated current engagement level based on an openness of the identified individual to engage with the user. - View Dependent Claims (10, 11)
Specification