Natural language generation through character-based recurrent neural networks with finite-state prior knowledge

  • US 10,049,106 B2
  • Filed: 01/18/2017
  • Issued: 08/14/2018
  • Est. Priority Date: 01/18/2017
  • Status: Active Grant
First Claim

1. A method comprising:

    building a target background model using words occurring in training data, the target background model being adaptable to accept subsequences of an input semantic representation, wherein the training data includes training pairs, each training pair including a semantic representation and a corresponding reference sequence in a natural language;

    receiving human-generated utterances in the form of speech or text;

    predicting a current dialog state of a natural language dialog between a virtual agent and a user, based on the utterances;

    generating a semantic representation of a next utterance, based on the current dialog state, the semantic representation including a sequence of characters; and

    generating a target sequence in a natural language from the semantic representation, comprising:

    after generating the semantic representation, adapting the target background model to form an adapted background model, which accepts all subsequences of the semantic representation;

    representing the semantic representation as a sequence of character embeddings;

    with an encoder, encoding the character embeddings to generate a set of character representations; and

    with a decoder, generating a target sequence of characters, based on the set of character representations, wherein at a plurality of time steps, a next character in the target sequence is a function of a previously generated character of the target sequence and the adapted background model; and

    outputting the target sequence.
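
The "adapted background model" in the claim is, in essence, a finite-state acceptor built over the characters of the semantic representation, so that contiguous character subsequences of the input (e.g., rare slot values) can be copied into the output. The sketch below is a hypothetical Python rendering of just the adaptation step; the class and parameter names (AdaptedBackgroundModel, continue_weight, base_weight) are illustrative, not the patent's, and the claim's unadapted model built from training-data words would be a larger automaton of the same kind.

```python
# Hypothetical sketch of the adapted background model: a weighted
# finite-state acceptor whose states are positions in the semantic
# representation, so every contiguous character subsequence of the
# input can be copied. Names and weights are illustrative only.

class AdaptedBackgroundModel:
    def __init__(self, semantic_repr: str, vocab: set,
                 continue_weight: float = 5.0, base_weight: float = 1.0):
        self.text = semantic_repr
        self.vocab = vocab              # full character vocabulary
        self.continue_weight = continue_weight
        self.base_weight = base_weight
        self.active = set()             # positions of in-progress copies

    def scores(self) -> dict:
        """Unnormalised weight per next character: characters that extend
        an in-progress copy of a substring of the semantic representation
        are boosted; everything else gets a floor weight, so the acceptor
        biases the decoder without ever blocking it outright."""
        w = {c: self.base_weight for c in self.vocab}
        for i in self.active:
            if i < len(self.text):
                w[self.text[i]] = self.continue_weight
        return w

    def step(self, char: str) -> None:
        """Advance the automaton after `char` is emitted: keep matches
        that `char` extends, and open a new match at every occurrence
        of `char` in the semantic representation."""
        self.active = {i + 1 for i in self.active
                       if i < len(self.text) and self.text[i] == char}
        self.active |= {i + 1 for i, c in enumerate(self.text) if c == char}
```

For example, given a hypothetical semantic representation such as inform(name=Bar Crudo), partway through emitting the name the acceptor boosts the character that continues the copied substring, which is how rare slot values can survive character-level decoding.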
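The remaining generation steps of the claim (character embeddings, recurrent encoder, recurrent decoder whose next character depends on the previously generated character and the adapted background model) can be sketched as follows. This is a minimal, hypothetical PyTorch rendering that reuses the AdaptedBackgroundModel above; the product-of-experts combination of the RNN logits with the background weights is an assumption, since the claim only requires the next character to be "a function of a previously generated character of the target sequence and the adapted background model," and attention over the encoder's character representations is simplified here to mean pooling.

```python
# Minimal sketch of the claimed pipeline (assumed framework: PyTorch).
# The model is untrained here; its weights would be learned from the
# training pairs named in the claim's first step.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CharSeq2Seq(nn.Module):
    def __init__(self, vocab: list, emb_dim: int = 32, hid_dim: int = 64):
        super().__init__()
        self.stoi = {c: i for i, c in enumerate(vocab)}
        self.itos = list(vocab)
        self.embed = nn.Embedding(len(vocab), emb_dim)   # character embeddings
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRUCell(emb_dim, hid_dim)
        self.proj = nn.Linear(hid_dim, len(vocab))

    @torch.no_grad()
    def generate(self, semantic_repr: str, bg: "AdaptedBackgroundModel",
                 bos: str = "^", eos: str = "$", max_len: int = 200) -> str:
        # Encode the semantic representation as a sequence of character
        # embeddings, yielding a set of character representations.
        src = torch.tensor([[self.stoi[c] for c in semantic_repr]])
        enc_out, _ = self.encoder(self.embed(src))
        # Condition the decoder on the character representations via mean
        # pooling (a simplification; attention is the usual choice).
        h = enc_out.mean(dim=1)
        prev, out = bos, []
        for _ in range(max_len):
            h = self.decoder(self.embed(torch.tensor([self.stoi[prev]])), h)
            logits = self.proj(h).squeeze(0)
            # Combine the RNN's distribution with the background model's
            # weights (a renormalised product, added in log space).
            bg_w = bg.scores()
            bg_log = torch.tensor([bg_w.get(c, 1.0) for c in self.itos]).log()
            next_char = self.itos[int(F.softmax(logits + bg_log, dim=-1).argmax())]
            if next_char == eos:
                break
            out.append(next_char)
            bg.step(next_char)   # advance the adapted acceptor
            prev = next_char
        return "".join(out)
```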
