System and method for managing conversation
First Claim
1. A conversation management system, comprising:
a training unit that generates an articulation speech act and an entity name of a training corpus, that generates a lexical syntactic pattern from the training corpus, and that estimates the articulation speech act and the entity name of the training corpus;
a database that stores the articulation speech act, the entity name, and the lexical syntactic pattern of the training corpus;
an input unit receiving a user articulation when an actual conversation is performed by a user;
an execution unit that generates a user articulation speech act and an entity name of the user articulation, that generates a user lexical syntactic pattern, that estimates the user articulation speech act and the entity name of the user articulation, that searches for an articulation pair corresponding to the user articulation at the database using a search condition comprising the estimated user articulation speech act and the generated user lexical syntactic pattern, and that generates a final response by selecting an articulation template using a restriction condition comprising the estimated entity name of the user articulation among the found articulation pair; and
an output unit that outputs the final response that is generated by the execution unit,
wherein the training unit analyzes a part of speech of the training corpus, defines a priority of parts of speech, and generates the lexical syntactic pattern from the part of speech, and
wherein the execution unit estimates the user articulation speech act and the entity name of the user articulation using a machine learning training model generated by the training unit.
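The claim describes a two-stage lookup in the execution unit: a search condition (estimated speech act plus generated lexical syntactic pattern) selects an articulation pair, and a restriction condition (estimated entity names) selects a template from that pair. A minimal Python sketch of this flow follows; the database records, the pattern string, and the template format are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the execution unit's retrieval step (assumed data
# model, not the patent's actual one): articulation pairs are indexed by
# (speech act, lexical syntactic pattern), and the response template is
# chosen by matching the entity names estimated from the user articulation.

# Toy database: each record pairs a search key with candidate templates,
# each template carrying a restriction on the entity-name types it needs.
DATABASE = [
    {
        "speech_act": "wh-question",
        "pattern": "when do NP v",  # illustrative lexical syntactic pattern
        "templates": [
            {"requires": {"movie"}, "text": "{movie} opens on Friday."},
            {"requires": set(),     "text": "Could you tell me the title?"},
        ],
    },
]

def generate_response(speech_act, pattern, entity_names):
    """Search by (speech act, pattern); restrict templates by entity names."""
    for pair in DATABASE:
        # Search condition: estimated speech act + generated pattern.
        if pair["speech_act"] == speech_act and pair["pattern"] == pattern:
            # Restriction condition: the template's required entity-name
            # types must all have been estimated from the user articulation.
            for tpl in pair["templates"]:
                if tpl["requires"] <= set(entity_names):
                    return tpl["text"].format(**entity_names)
    return None

print(generate_response("wh-question", "when do NP v", {"movie": "Dune"}))
```

If an entity name of the required type was estimated, the specific template is filled; otherwise the fallback template with no restriction is chosen.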
Abstract
A conversation management system includes: a training unit that generates an articulation speech act and an entity name of a training corpus, that generates a lexical syntactic pattern, and that estimates the speech act and the entity name of the training corpus; a database that stores the articulation speech act, the entity name, and the lexical syntactic pattern of the training corpus; an execution unit that generates an articulation speech act and an entity name of a user articulation, that generates a user lexical syntactic pattern, that estimates the speech act and the entity name of the user articulation, that searches for an articulation pair corresponding to the user articulation at the database using a search condition including the estimated user speech act and the generated user lexical syntactic pattern, and that generates a final response by selecting an articulation template using a restriction condition including the estimated entity name among the found articulation pair; and an output unit that outputs the final response that is generated by the execution unit.
18 Claims
1. A conversation management system, comprising:
(Claim 1 is set forth in full under "First Claim" above.)
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
10. A method of managing a conversation, the method comprising:
a training step of generating an articulation speech act and an entity name of a training corpus, generating a lexical syntactic pattern from the training corpus, and estimating the articulation speech act and the entity name of the training corpus;
storing the articulation speech act, the entity name, and the lexical syntactic pattern of the training corpus to a database;
receiving a user articulation when an actual conversation is performed by a user;
an execution step of generating a user articulation speech act and an entity name of the user articulation, generating a user lexical syntactic pattern, estimating the user articulation speech act and the entity name of the user articulation, searching for an articulation pair corresponding to the user articulation at the database using a search condition comprising the estimated user articulation speech act and the generated user lexical syntactic pattern, and selecting an articulation template and generating a final response using a restriction condition comprising the estimated entity name of the user articulation among the found articulation pairs; and
outputting the final response that is generated at the execution step,
wherein the training step comprises: analyzing a part of speech of the training corpus; defining a priority of the parts of speech; and generating the lexical syntactic pattern from the part of speech, and
wherein the user articulation speech act and the entity name of the user articulation are estimated using a machine learning training model generated in the training step.
Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18.
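The training step's wherein clauses (analyze parts of speech, define a priority over them, derive the lexical syntactic pattern) can be sketched as follows. This is a minimal illustration under assumed details: the tag set, the priority values, and the convention of keeping high-priority words verbatim while abstracting low-priority words to their part-of-speech label are all assumptions, not the patented algorithm.

```python
# Illustrative sketch of lexical syntactic pattern generation: apply a
# priority over parts of speech and keep high-priority words as lexemes
# while abstracting low-priority words to their POS tag. (Assumed scheme,
# not the patent's actual priority definition.)

# Assumed priority: function words kept verbatim, content words abstracted.
POS_PRIORITY = {"WRB": 2, "VBP": 2, "PRP": 2, "NN": 1, "NNP": 1, "VB": 1}
KEEP_THRESHOLD = 2  # keep the word itself at or above this priority

def lexical_syntactic_pattern(tagged_tokens):
    """Build a pattern from (word, POS-tag) pairs using the POS priority."""
    parts = []
    for word, pos in tagged_tokens:
        if POS_PRIORITY.get(pos, 0) >= KEEP_THRESHOLD:
            parts.append(word.lower())  # high priority: keep the lexeme
        else:
            parts.append(pos)           # low priority: abstract to POS tag
    return " ".join(parts)

print(lexical_syntactic_pattern(
    [("When", "WRB"), ("does", "VBZ"), ("Dune", "NNP"), ("open", "VB")]
))
```

Abstracting content words this way lets one stored pattern match many user articulations that share the same syntactic skeleton, which is what makes the search condition of the execution step reusable across utterances.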
Specification