
Computer generated emulation of a subject

  • US 9,959,368 B2
  • Filed: 08/13/2014
  • Issued: 05/01/2018
  • Est. Priority Date: 08/16/2013
  • Status: Expired due to Fees
First Claim

1. A system for creating a response to an inputted user query, said system comprising:

  • a user interface configured to emulate a subject by displaying a talking head including a face of the subject, and output speech from a mouth of the face with a voice of the subject, the user interface further including a receiver to receive a query from a user, the emulated subject being configured to respond to the query received from the user;

    a personality file memory storing a plurality of documents in an unstructured form and storing model parameters, the model parameters describing probability distributions that relate an acoustic unit to an image vector and a speech vector, the image vector including a plurality of parameters that define the subject's face and the speech vector including a plurality of parameters that define the subject's voice; and

    processing circuitry configured to convert said query into a word vector;

    compare said word vector generated from said query with word vectors generated from the documents in said personality file memory and output identified documents;

    compare said word vector selected from said query and passages from said identified documents and to rank said selected passages, said ranking being based on a number of matches between said selected passage and said query;

    concatenate selected passages together using sentence connectors to produce the response, wherein said sentence connectors are chosen from a plurality of sentence connectors, said sentence connectors being chosen based on a language model,

    convert the response into a sequence of acoustic units using a statistical model, the statistical model including a plurality of model parameters, the model parameters being retrieved from the personality file memory,

    output a sequence of speech vectors and image vectors that are synchronized such that the head appears to talk,

    output an expressive response such that the face and voice demonstrate expression, and

    determine the expression with which to output the generated response,

    wherein the model parameters stored in the personality file memory describe probability distributions that relate the acoustic unit to the image vector and the speech vector for an associated expression.
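The retrieval-and-ranking steps recited in the claim (convert a query to a word vector, compare it against document word vectors, rank passages by number of matches, and concatenate the top passages with sentence connectors) can be illustrated with a minimal sketch. This is not the patented implementation: the bag-of-words representation, the overlap scoring, the `top_n` cutoff, and the hard-coded connector phrases are all illustrative assumptions.

```python
# Illustrative sketch only -- a toy version of the claim's retrieval,
# ranking, and concatenation steps, not the patent's actual method.
from collections import Counter


def word_vector(text):
    """Claim step: convert text into a word vector (here, a term-count bag)."""
    return Counter(text.lower().split())


def overlap(v1, v2):
    """Score as the number of matching terms between two word vectors."""
    return sum((v1 & v2).values())


def identify_documents(query, documents, top_n=2):
    """Compare the query's word vector with each document's; output the best."""
    qv = word_vector(query)
    ranked = sorted(documents, key=lambda d: overlap(qv, word_vector(d)),
                    reverse=True)
    return ranked[:top_n]


def rank_passages(query, documents):
    """Rank passages (here, sentences) by number of matches with the query."""
    qv = word_vector(query)
    passages = [p.strip() for d in documents for p in d.split(".") if p.strip()]
    return sorted(passages, key=lambda p: overlap(qv, word_vector(p)),
                  reverse=True)


def build_response(query, documents, connectors=("Also,", "In addition,")):
    """Concatenate the top-ranked passages using sentence connectors."""
    top = rank_passages(query, identify_documents(query, documents))[:2]
    if not top:
        return ""
    response = top[0] + "."
    for i, passage in enumerate(top[1:]):
        response += " " + connectors[i % len(connectors)] + " " + passage + "."
    return response
```

For example, given documents "The subject was born in London. The subject studied physics." and "The weather is variable.", the query "where was the subject born" retrieves the first document's sentences and joins the two best-matching passages with a connector. Note the claim selects connectors with a language model, whereas this sketch simply cycles through a fixed list.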

  • 1 Assignment