Managing companionship data
Abstract
Aspects of the disclosure relate to managing companionship data. A computer establishes a set of companion data related to a user and collects a set of stimuli associated with the user. Based on the set of stimuli, a portion of the set of companion data is determined and provided to the user.
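The abstract's flow (establish companion data, collect user stimuli, select a portion based on those stimuli) can be illustrated with a minimal sketch. All names and the reaction rule below are hypothetical illustrations, not the patented implementation:

```python
# Hypothetical sketch of the claimed flow: establish companion data,
# collect stimuli from the user's facial motion, and select the next
# portion of companion data based on those stimuli.
from dataclasses import dataclass, field


@dataclass
class CompanionData:
    # Maps a label to presentable content (e.g. a simulated phrase).
    portions: dict = field(default_factory=dict)


def establish_companion_data(user_id: str) -> CompanionData:
    # First portion: data for presenting the simulated face and voice.
    return CompanionData(portions={"greeting": "first facial motion + first phrase"})


def collect_stimuli(face_positions: list) -> str:
    # Classify the user's reaction from facial-landmark motion data.
    # Toy rule: upward mouth-corner movement -> positive reaction.
    dy = face_positions[-1][1] - face_positions[0][1]
    return "positive" if dy > 0 else "neutral"


def determine_portion(stimuli: str) -> str:
    # Choose the second portion of companion data to provide.
    return "encouraging phrase" if stimuli == "positive" else "comforting phrase"


data = establish_companion_data("user-1")
reaction = collect_stimuli([(0.0, 0.0), (0.1, 0.5)])
print(determine_portion(reaction))  # -> encouraging phrase
```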
19 Claims
1. A computer-implemented method for managing companionship data, comprising:
acquiring participant data from one or more participants, wherein the participant data includes one or more images of the one or more participants and one or more audio fragments corresponding to voices of the one or more participants;
receiving a first statement from a user, wherein the first statement includes a mention of a participant, wherein the participant is selected from the one or more participants, wherein the mention of the participant is a correlated designation of the participant, and wherein the mention of the participant indicates which of the one or more participants is to be simulated;
generating a simulated face of the participant using one or more images of the participant, wherein the one or more images of the participant are selected from the one or more images of the one or more participants;
generating a simulated voice of the participant using one or more audio fragments corresponding to the voice of the participant, wherein the one or more audio fragments corresponding to the voice of the participant are selected from the one or more audio fragments corresponding to the voices of the one or more participants;
establishing, by a computer, a set of companion data related to the user, wherein the set of companion data includes a first portion, and wherein the first portion includes data for presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed, in response to receiving the first statement, to replicate a first facial motion and a first phrase associatively expressed by the participant;
presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase;
collecting, by the computer, a first set of stimuli associated with the user in response to the user being presented with the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase, wherein the first set of stimuli is collected using motion data of the user's face, the motion data being collected by data points associated with one or more positions of the user's face during the presentation of the first facial motion and the first phrase;
determining, based on the first set of stimuli, a second portion of the set of companion data to provide to the user;
presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed to replicate a second facial motion and a second phrase associatively expressed by the participant; and
providing the second portion to the user.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 16, 17, 18, 19)
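Claim 1's "mention of a participant" step (the user's statement names a participant, whose stored images and audio fragments are then used to build the simulation) can be sketched as follows. The participant names, file names, and matching rule are hypothetical examples, not part of the claim:

```python
# Hypothetical sketch of resolving a "mention" (a correlated designation)
# in a user's statement to one of the acquired participants, then selecting
# that participant's images and audio fragments for simulation.
from typing import Optional

# Acquired participant data: images and audio fragments per participant.
participants = {
    "grandma": {"images": ["grandma_01.png"], "audio": ["grandma_hello.wav"]},
    "uncle": {"images": ["uncle_01.png"], "audio": ["uncle_hi.wav"]},
}


def resolve_mention(statement: str) -> Optional[str]:
    # Match a known participant name appearing in the statement;
    # the mention indicates which participant is to be simulated.
    for name in participants:
        if name in statement.lower():
            return name
    return None


def build_simulation(name: str) -> dict:
    # Select this participant's data from the full participant data set.
    p = participants[name]
    return {"face_from": p["images"], "voice_from": p["audio"]}


who = resolve_mention("I'd like to talk to Grandma today")
print(build_simulation(who))
```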
14. A system for managing companionship data, the system comprising:
a memory; and
a processor in communication with the memory, the processor being configured to perform operations comprising:
acquiring participant data from one or more participants, wherein the participant data includes one or more images of the one or more participants and one or more audio fragments corresponding to voices of the one or more participants;
receiving a first statement from a user, wherein the first statement includes a mention of a participant, wherein the participant is selected from the one or more participants, wherein the mention of the participant is a correlated designation of the participant, and wherein the mention of the participant indicates which of the one or more participants is to be simulated;
generating a simulated face of the participant using one or more images of the participant, wherein the one or more images of the participant are selected from the one or more images of the one or more participants;
generating a simulated voice of the participant using one or more audio fragments corresponding to the voice of the participant, wherein the one or more audio fragments corresponding to the voice of the participant are selected from the one or more audio fragments corresponding to the voices of the one or more participants;
establishing, by a computer, a set of companion data related to the user, wherein the set of companion data includes a first portion, and wherein the first portion includes data for presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed, in response to receiving the first statement, to replicate a first facial motion and a first phrase associatively expressed by the participant;
presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase;
collecting, by the computer, a first set of stimuli associated with the user in response to the user being presented with the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase, wherein the first set of stimuli is collected using motion data of the user's face, the motion data being collected by data points associated with one or more positions of the user's face during the presentation of the first facial motion and the first phrase;
determining, based on the first set of stimuli, a second portion of the set of companion data to provide to the user;
presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed to replicate a second facial motion and a second phrase associatively expressed by the participant; and
providing the second portion to the user.
15. A computer program product for managing companionship data, the computer program product disposed upon a computer readable storage medium, the computer program product comprising computer program instructions that, when executed by a computer processor of a computer, cause the computer to carry out the steps of:
acquire participant data from one or more participants, wherein the participant data includes one or more images of the one or more participants and one or more audio fragments corresponding to voices of the one or more participants;
receive a first statement from a user, wherein the first statement includes a mention of a participant, wherein the participant is selected from the one or more participants, wherein the mention of the participant is a correlated designation of the participant, and wherein the mention of the participant indicates which of the one or more participants is to be simulated;
generate a simulated face of the participant using one or more images of the participant, wherein the one or more images of the participant are selected from the one or more images of the one or more participants;
generate a simulated voice of the participant using one or more audio fragments corresponding to the voice of the participant, wherein the one or more audio fragments corresponding to the voice of the participant are selected from the one or more audio fragments corresponding to the voices of the one or more participants;
establish, by a computer, a set of companion data related to the user, wherein the set of companion data includes a first portion, and wherein the first portion includes data for presenting the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed, in response to receiving the first statement, to replicate a first facial motion and a first phrase associatively expressed by the participant;
present the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase;
collect, by the computer, a first set of stimuli associated with the user in response to the user being presented with the simulated face and simulated voice of the participant replicating the first facial motion and the first phrase, wherein the first set of stimuli is collected using motion data of the user's face, the motion data being collected by data points associated with one or more positions of the user's face during the presentation of the first facial motion and the first phrase;
determine, based on the first set of stimuli, a second portion of the set of companion data to provide to the user;
present the simulated face and simulated voice of the participant to the user, the simulated face and simulated voice of the participant constructed to replicate a second facial motion and a second phrase associatively expressed by the participant; and
provide the second portion to the user.
Specification