
Contextual inference of non-verbal expressions

  • US 10,540,348 B2
  • Filed: 09/22/2014
  • Issued: 01/21/2020
  • Est. Priority Date: 09/22/2014
  • Status: Active Grant
First Claim

1. A system, comprising:

a contextual database that stores different contexts associated with their corresponding sensory outputs;

    a body language database that maps the different contexts to different human body postures, at least one of the different contexts being an authentication context, and at least one of the different human body postures being an authentication body posture, the authentication body posture being a unique one of the different human body postures used for authentication purposes and being mapped with the authentication context;

    an emotional database that maps the different contexts and the different human body postures to different emotional states;

    a processor; and

a non-transitory memory storing instructions that when executed cause the processor to perform operations, the operations comprising:

    receiving an identifier of a device;

    receiving a request for authentication;

    receiving a plurality of sensory outputs from the device, one of the plurality of sensory outputs pertaining to how a user is holding the device;

    evaluating the plurality of sensory outputs from the device;

    determining a hand orientation of a user of the device based on the evaluating;

    inferring a forearm orientation, a shoulder orientation, a torso orientation, and a head orientation of the user based on the hand orientation of the user of the device;

    determining, using the plurality of sensory outputs evaluated, a current context associated with a time and a location of the identifier of the device, wherein the determining the current context includes querying, using the plurality of sensory outputs, the contextual database for determining whether there is a matching different context with the plurality of sensory outputs, and if a matching entry is found, retrieving the matching different context;

inferring a corresponding body posture and an emotional state from the current context by:

    querying, using the current context and the hand orientation, the forearm orientation, the shoulder orientation, the torso orientation, and the head orientation of the user of the device, the body language database for determining whether there is a matching different human body posture, and if a matching entry is found, retrieving the matching different human body posture; and

    querying, using the current context and the corresponding body posture inferred, the emotional database for determining whether there is a matching emotional state, and if a matching entry is found, retrieving the matching emotional state;

    querying the body language database for determining whether there is a matching entry based on matching the corresponding body posture inferred and the authentication body posture; and

if the matching entry is found, authenticating the device, then:

    transmitting the corresponding body posture and the emotional state inferred; and

    receiving, in response to the transmitting the corresponding body posture and the emotional state inferred, a product recommendation selected based on at least one of the corresponding body posture and the emotional state inferred.
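The claim's query chain (sensory outputs → context → body posture → emotional state, with authentication by posture match) can be sketched as follows. This is an illustrative reading of the claim only; all database contents, function names, and key formats here are hypothetical, as the patent does not specify an implementation.

```python
# Hypothetical in-memory stand-ins for the three claimed databases.
CONTEXT_DB = {  # sensory-output signature -> context (contextual database)
    ("gps:gym", "accel:high"): "workout",
    ("gps:office", "accel:low"): "desk_work",
}

BODY_LANGUAGE_DB = {  # (context, hand orientation) -> body posture (body language database)
    ("workout", "palm_up"): "curl_posture",
    ("desk_work", "palm_down"): "typing_posture",
}

AUTH_POSTURES = {"curl_posture"}  # postures registered as authentication body postures

EMOTION_DB = {  # (context, posture) -> emotional state (emotional database)
    ("workout", "curl_posture"): "energized",
    ("desk_work", "typing_posture"): "focused",
}


def infer_posture_and_emotion(sensory_outputs, hand_orientation):
    """Walk the claimed query chain; return None if any lookup has no match."""
    # Query the contextual database for a context matching the sensory outputs.
    context = CONTEXT_DB.get(tuple(sensory_outputs))
    if context is None:
        return None
    # Query the body language database using the context and hand orientation.
    posture = BODY_LANGUAGE_DB.get((context, hand_orientation))
    if posture is None:
        return None
    # Query the emotional database using the context and inferred posture.
    emotion = EMOTION_DB.get((context, posture))
    # Authenticate if the inferred posture matches a registered auth posture.
    authenticated = posture in AUTH_POSTURES
    return {
        "context": context,
        "posture": posture,
        "emotion": emotion,
        "authenticated": authenticated,
    }
```

A caller holding the phone "palm up" at the gym would thus be matched to the "workout" context, the "curl_posture" posture (here also the registered authentication posture), and the "energized" state, before the posture and emotion are transmitted to select a product recommendation.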
