Semantic parsing using deep neural networks for predicting canonical forms
First Claim
1. A method comprising:
- providing a neural network model which has been trained to predict a canonical form, containing a sequence of words, for an input text sequence, containing a sequence of words, the neural network model comprising:
an encoder which generates a first representation of the input text sequence based on a representation of n-grams in the text sequence, the encoder including a first neural network which reads the input text sequence and generates a second representation of the input text sequence, and
a decoder which sequentially predicts a next term of the canonical form, based on the first and second representations and a predicted prefix of the canonical form, the prefix containing a sequence of at least one word;
- receiving an input text sequence, containing a sequence of words;
- with a processor, predicting a canonical form, containing a sequence of words, for the input text sequence with the trained neural network model; and
- outputting information based on the predicted canonical form.
Abstract
A method for predicting a canonical form for an input text sequence includes predicting the canonical form with a neural network model. The model includes an encoder, which generates a first representation of the input text sequence based on a representation of n-grams in the text sequence and a second representation of the input text sequence generated by a first neural network. The model also includes a decoder which sequentially predicts terms of the canonical form based on the first and second representations and a predicted prefix of the canonical form. The canonical form can be used, for example, to query a knowledge base or to generate a next utterance in a discourse.
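The encoder described in the abstract starts from "a representation of n-grams in the text sequence". A minimal sketch of one way such a representation could be built, hashing word n-grams into a fixed-size count vector; the function name, the hashing trick, and all sizes are illustrative assumptions, not the patent's method.

```python
import numpy as np

def ngram_vector(words, n_max=3, dim=64):
    """Hash word n-grams (n = 1..n_max) into a fixed-size count vector."""
    v = np.zeros(dim)
    for n in range(1, n_max + 1):
        for i in range(len(words) - n + 1):
            gram = " ".join(words[i:i + n])
            v[hash(gram) % dim] += 1.0  # hashing trick: bucket by hash value
    return v

rep = ngram_vector("what city was jane austen born in".split())
```

For a seven-word input this collects 7 unigrams, 6 bigrams, and 5 trigrams, so the counts sum to 18 regardless of how the grams collide in the buckets.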
19 Claims
1. A method comprising:
providing a neural network model which has been trained to predict a canonical form, containing a sequence of words, for an input text sequence, containing a sequence of words, the neural network model comprising:
an encoder which generates a first representation of the input text sequence based on a representation of n-grams in the text sequence, the encoder including a first neural network which reads the input text sequence and generates a second representation of the input text sequence, and
a decoder which sequentially predicts a next term of the canonical form, based on the first and second representations and a predicted prefix of the canonical form, the prefix containing a sequence of at least one word;
receiving an input text sequence, containing a sequence of words;
with a processor, predicting a canonical form, containing a sequence of words, for the input text sequence with the trained neural network model; and
outputting information based on the predicted canonical form.
(Dependent claims: 2-14)
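The decoder of claim 1 predicts the next term of the canonical form from the two encoder representations plus the prefix decoded so far. A toy greedy-decoding sketch of that loop; the random weights, vocabulary, and dimensions are assumptions for illustration, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<eos>", "city", "of", "birth", "person"]
D = 8  # size of each encoder representation (assumed)

# Random stand-ins for the two encoder representations of the input text.
rep1, rep2 = rng.normal(size=D), rng.normal(size=D)
E = rng.normal(size=(len(VOCAB), D))      # term embeddings (toy)
W = rng.normal(size=(len(VOCAB), 3 * D))  # output projection (toy)

def decode(max_len=10):
    """Greedily emit terms until <eos>, conditioning on the prefix so far."""
    prefix, out = [0], []                  # index 0 doubles as the start token
    for _ in range(max_len):
        h = np.concatenate([rep1, rep2, E[prefix[-1]]])
        nxt = int(np.argmax(W @ h))        # next-term prediction
        if nxt == 0:                       # <eos>: canonical form is complete
            break
        out.append(VOCAB[nxt])
        prefix.append(nxt)
    return out

form = decode()
```

The key point the claim makes is visible in the loop: each step conditions on both fixed encoder representations and on the growing prefix, so earlier choices influence later ones.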
15. A method comprising:
providing a neural network model which has been trained to predict a canonical form for an input text sequence, the neural network model comprising:
an encoder which comprises a first multilayer perceptron which generates a first representation of the input text sequence based on a representation of n-grams in the text sequence, and a first recurrent neural network which reads the input text sequence and generates a second representation of the input text sequence, and
a decoder which sequentially predicts a next term of the canonical form, based on the first and second representations and a predicted prefix of the canonical form;
receiving an input text sequence;
with a processor, predicting a canonical form for the input text sequence with the trained neural network model; and
outputting information based on the predicted canonical form.
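Claim 15 pins down the encoder's two halves: a multilayer perceptron over the n-gram representation and a recurrent neural network that reads the word sequence. A toy sketch with assumed sizes and random weights; neither function is the patent's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
NGRAM_DIM, EMB, HID = 32, 8, 8  # all sizes assumed

W1 = rng.normal(size=(HID, NGRAM_DIM))
W2 = rng.normal(size=(HID, HID))
Wx = rng.normal(size=(HID, EMB))
Wh = rng.normal(size=(HID, HID))

def mlp_rep(ngram_counts):
    """First representation: two-layer perceptron over n-gram counts."""
    return np.tanh(W2 @ np.tanh(W1 @ ngram_counts))

def rnn_rep(word_embeddings):
    """Second representation: final hidden state after reading the words."""
    h = np.zeros(HID)
    for x in word_embeddings:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

r1 = mlp_rep(rng.poisson(1.0, NGRAM_DIM).astype(float))
r2 = rnn_rep(rng.normal(size=(5, EMB)))  # 5 words, each an 8-dim embedding
```

The two representations are complementary: the perceptron sees the input as an unordered bag of n-grams, while the recurrent read preserves word order.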
16. A system comprising:
memory which stores a neural network model which has been trained to predict a canonical form for an input text sequence, the neural network model comprising:
an encoder which generates a first representation of the input text sequence based on a representation of n-grams in the text sequence and a second representation of the input text sequence generated by a first neural network, and
a decoder which sequentially predicts terms of the canonical form based on the first and second representations and a predicted prefix of the canonical form;
a prediction component which predicts a canonical form for an input text sequence with the trained neural network model;
a semantic parser which generates a logical form based on the predicted canonical form;
an output component which outputs information based on the predicted canonical form; and
a processor which implements the prediction component and the output component.
(Dependent claims: 17, 18)
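The system of claim 16 chains a prediction component, a semantic parser that maps the canonical form to a logical form, and an output component. The stubs below only show that wiring; the canned prediction and the single parsing rule are invented for illustration, not the patent's parser.

```python
def predict_canonical(text):
    """Prediction component: stand-in for the trained neural network model."""
    return "birth city of person"

def parse_to_logical_form(canonical):
    """Semantic parser: toy rule mapping a canonical form to a logical form."""
    if canonical == "birth city of person":
        return "city(?x) AND birthplace(person, ?x)"
    return "unknown(?x)"

def output_component(text):
    """Output component: returns information based on the predicted form."""
    canonical = predict_canonical(text)
    return {"canonical": canonical, "logical": parse_to_logical_form(canonical)}

result = output_component("where was she born")
```

This split matches the abstract's use case: the neural model produces a natural-sounding canonical form, and a separate deterministic parser converts it into a logical form suitable for querying a knowledge base.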
19. A method for predicting a canonical form comprising:
providing training data, the training data comprising a collection of training pairs, each training pair in the collection including a canonical form, containing a sequence of words, and a corresponding text sequence, containing a sequence of words;
with the training data, training a neural network model to predict a canonical form, containing a sequence of words, for an input text sequence, the neural network model comprising:
an encoder which generates a first representation of the input text sequence based on a representation of n-grams in the text sequence and a second representation of the input text sequence generated by a first neural network, and
a decoder which sequentially predicts terms of the canonical form based on the first and second representations and a predicted prefix of the canonical form, each of the terms of the canonical form including at least one word;
receiving an input text sequence, containing a sequence of words;
with a processor, predicting a canonical form, containing a sequence of words, for the input text sequence with the trained neural network model; and
outputting information based on the predicted canonical form.
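Claim 19 trains the model on (text sequence, canonical form) pairs. A toy sketch of that supervised objective: a softmax next-term cross-entropy loss, a single weight matrix, and a fixed stand-in encoding; every size, pair, and learning rate here is an assumption for illustration, not the patent's training procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = ["<eos>", "birth", "city", "of", "person"]
# One training pair: an input text sequence and its target canonical form.
pairs = [("where was she born", "city of birth of person <eos>")]

D = 4
W = rng.normal(size=(len(vocab), D)) * 0.1  # sole trainable weight (toy)
enc = rng.normal(size=D)                    # fixed stand-in for the encoder

def epoch(lr=0.1):
    """One SGD pass over the pairs on the next-term cross-entropy loss."""
    global W
    loss = 0.0
    for _, canonical in pairs:
        for term in canonical.split():
            t = vocab.index(term)
            p = np.exp(W @ enc)
            p /= p.sum()                # softmax over next-term candidates
            loss -= np.log(p[t])        # cross-entropy against the true term
            grad = np.outer(p, enc)     # d loss / d W for this term...
            grad[t] -= enc              # ...minus the one-hot target row
            W -= lr * grad
    return loss

losses = [epoch() for _ in range(50)]
```

Each target term of the canonical form supervises one decoding step, which is how a sequence-level training pair reduces to a sequence of per-term classification losses.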
Specification