
Managing companionship data

  • US 10,296,723 B2
  • Filed: 12/01/2014
  • Issued: 05/21/2019
  • Est. Priority Date: 12/01/2014
  • Status: Active Grant
First Claim

1. A computer-implemented method for managing companionship data, comprising:

    acquiring participant data from one or more participants, wherein the participant data includes one or more images of the one or more participants and one or more audio fragments corresponding to voices of the one or more participants;

    receiving a first statement from a user, wherein the first statement includes a mention of a participant, wherein the participant is selected from the one or more participants, wherein the mention of the participant is a correlated designation of the participant, and wherein the mention of the participant indicates which of the one or more participants is to be simulated;

    generating a simulated face of the participant using one or more images of the participant, wherein the one or more images of the participant are selected from the one or more images of the one or more participants;

    generating a simulated voice of the participant using one or more audio fragments corresponding to the voice of the participant, wherein the one or more audio fragments corresponding to the voice of the participant are selected from the one or more audio fragments corresponding to the voices of the one or more participants;

    establishing, by a computer, a set of companion data related to the user, wherein the set of companion data includes a first portion, and wherein the first portion includes data for presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed, in response to receiving the first statement, to replicate a first facial motion and a first phrase associatively expressed by the participant;

    presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase;

    collecting, by the computer, a first set of stimuli associated with the user in response to the user being presented with the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase, wherein the first set of stimuli is collected using motion data of the user's face, the motion data being collected by data points associated with one or more positions of the user's face during the presentation of the first facial motion and the first phrase;

    determining, based on the first set of stimuli, a second portion of the set of companion data to provide to the user;

    presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed to replicate a second facial motion and a second phrase associatively expressed by the participant; and

    providing the second portion to the user.
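The claimed method reads as a pipeline: acquire participant media, resolve which participant a user statement mentions, simulate that participant's face and voice, present a first portion of companion data, collect facial-motion stimuli from the user, and use those stimuli to select a second portion. Below is a minimal Python sketch of that flow. All names, data shapes, and the stimulus heuristic are illustrative assumptions, not taken from the patent; the face/voice generation steps are stubbed.

```python
from dataclasses import dataclass

# Hypothetical data model for the claimed method; every identifier here
# is illustrative rather than drawn from the patent.

@dataclass
class Participant:
    name: str
    images: list           # one or more images of the participant
    audio_fragments: list  # audio fragments of the participant's voice

@dataclass
class CompanionData:
    first_portion: dict    # data for presenting the first facial motion/phrase
    second_portions: dict  # candidate second portions, keyed by stimulus label

def acquire_participants(raw):
    """Acquire participant data (images plus audio fragments)."""
    return {p["name"]: Participant(p["name"], p["images"], p["audio"])
            for p in raw}

def resolve_mention(statement, participants):
    """Find which participant the user's first statement mentions."""
    for name, participant in participants.items():
        if name.lower() in statement.lower():
            return participant
    raise ValueError("no known participant mentioned")

def simulate(participant):
    """Build simulated face and voice from stored media (stubbed)."""
    face = f"face-model({len(participant.images)} images)"
    voice = f"voice-model({len(participant.audio_fragments)} fragments)"
    return face, voice

def collect_stimuli(face_motion_points):
    """Reduce tracked face-motion data points to a stimulus label.

    Toy heuristic (an assumption, not the patent's method): a large
    spread in the tracked positions is treated as engagement.
    """
    spread = max(face_motion_points) - min(face_motion_points)
    return "engaged" if spread > 0.5 else "neutral"

def run_session(raw_participants, statement, companion_data, face_motion_points):
    participants = acquire_participants(raw_participants)
    participant = resolve_mention(statement, participants)
    face, voice = simulate(participant)
    # Present the first portion (first facial motion and phrase), stubbed
    # here as a tuple rather than rendered output.
    presented_first = (face, voice, companion_data.first_portion["phrase"])
    # Collect stimuli while the user watches, then pick the second portion.
    stimuli = collect_stimuli(face_motion_points)
    second = companion_data.second_portions[stimuli]
    return presented_first, second

# Example session
companion = CompanionData(
    first_portion={"phrase": "hello again"},
    second_portions={"engaged": "tell a story", "neutral": "ask a question"},
)
raw = [{"name": "Ada", "images": ["img1.png"], "audio": ["a.wav", "b.wav"]}]
first, second = run_session(raw, "I'd like to talk to Ada", companion, [0.1, 0.9])
```

With the sample inputs above, the wide motion spread maps to the "engaged" label, so the "engaged" second portion is the one provided to the user.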
