Method and system for assisting users in interacting with multi-modal dialog systems
Abstract
A method and system for assisting a user in interacting with a multi-modal dialog system (104) is provided. The method includes interpreting a "What Can I Do? (WCID)" question from a user in a turn of the dialog. A multi-modal grammar (212) is generated, based on the current context of the dialog. One or more user multi-modal utterances are generated, based on the WCID question and the multi-modal grammar. The one or more user multi-modal utterances are then conveyed to the user.
13 Claims
1. A method for assisting a user in interacting with a multi-modal dialog apparatus, the method comprising:
- interpreting by a processor of the multi-modal dialog apparatus a What Can I Do (WCID) type of question from the user that is received at a user interface device coupled to the multi-modal dialog apparatus;
- generating by the processor one or more user multi-modal utterances as sequences of words that a user may provide in the multi-modal inputs in a next turn of the dialog based on the WCID question and a multi-modal grammar, wherein the multi-modal grammar is based on a current context of a multi-modal dialog; and
- conveying the one or more user multi-modal utterances to the user at a user interface device coupled to the multi-modal dialog apparatus.

Dependent claims: 2, 3, 4, 5
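The three steps of claim 1 can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation: the WCID detection pattern, the toy grammar, and all function names are assumptions, and gestures are represented as inline tokens in the word sequences.

```python
# Hypothetical sketch of the claimed method: (1) interpret a "What Can I
# Do?" (WCID) question, (2) expand a context-dependent multi-modal grammar
# into example utterances for the next dialog turn, (3) convey them.
import itertools
import re

WCID_PATTERN = re.compile(r"what can i (do|say)", re.IGNORECASE)

def is_wcid_question(user_input: str) -> bool:
    """Step 1: decide whether the input is a WCID-type question."""
    return bool(WCID_PATTERN.search(user_input))

def expand_grammar(grammar: dict, symbol: str = "S") -> list:
    """Step 2: expand a toy context-free multi-modal grammar into the
    word sequences a user could provide in the next turn."""
    utterances = []
    for production in grammar[symbol]:
        # Each production is a list of terminals or non-terminal names;
        # non-terminals are expanded recursively.
        parts = [expand_grammar(grammar, tok) if tok in grammar else [tok]
                 for tok in production]
        utterances.extend(" ".join(combo)
                          for combo in itertools.product(*parts))
    return utterances

# A grammar reflecting the current dialog context (here, a map application).
MAP_CONTEXT_GRAMMAR = {
    "S": [["ACTION", "OBJECT"]],
    "ACTION": [["zoom into"], ["show restaurants near"]],
    "OBJECT": [["this area <circle gesture>"], ["here <point gesture>"]],
}

if is_wcid_question("What can I do now?"):
    # Step 3: convey the generated utterances to the user.
    for utterance in expand_grammar(MAP_CONTEXT_GRAMMAR):
        print("You can say:", utterance)
```

Because the grammar is regenerated from the current dialog context each turn, the suggestions automatically track what the system can accept at that point in the dialog.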
6. A multi-modal dialog apparatus comprising:
- a user interface device;
- a processor; and
- a memory, wherein the memory stores programming instructions organized into functional groups that control the processor, the functional groups comprising:
  - a multi-modal input fusion (MMIF) component, the multi-modal input fusion component accepting a What Can I Do (WCID) type of question from the user through the user interface device;
  - a dialog manager, the dialog manager generating a multi-modal grammar based on a current context of a multi-modal dialog; and
  - a multi-modal utterance generator, the multi-modal utterance generator generating one or more user multi-modal utterances through the user interface device, as sequences of words that a user may provide in the multi-modal inputs in a next turn of the dialog based on the question and the multi-modal grammar.

Dependent claims: 7, 8, 9, 10, 11, 12, 13
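The apparatus of claim 6 names three functional groups stored in memory. The decomposition below is a minimal sketch of how those groups could be wired together; all class names, method signatures, and the hard-coded context-to-grammar mapping are illustrative assumptions, not the patented design.

```python
# Illustrative decomposition of the claimed apparatus into the three
# functional groups of claim 6: input fusion, dialog manager, and
# utterance generator. Names and interfaces are assumed.

class MultiModalInputFusion:
    """MMIF component: fuses user-interface inputs and flags WCID questions."""
    def accept(self, speech: str, gesture=None) -> dict:
        return {"text": speech, "gesture": gesture,
                "is_wcid": "what can i do" in speech.lower()}

class DialogManager:
    """Produces a multi-modal grammar for the current dialog context."""
    def current_grammar(self, context: str) -> dict:
        # In a real system this would be derived from dialog state;
        # here a fixed lookup stands in for that derivation.
        grammars = {"map": {"S": [["zoom into", "this area <circle>"]]}}
        return grammars.get(context, {"S": []})

class MultiModalUtteranceGenerator:
    """Turns grammar productions into word sequences to show the user."""
    def generate(self, grammar: dict) -> list:
        return [" ".join(production) for production in grammar.get("S", [])]

# Wiring the functional groups together for one dialog turn:
fusion = MultiModalInputFusion()
event = fusion.accept("What can I do?", gesture=None)
if event["is_wcid"]:
    grammar = DialogManager().current_grammar("map")
    for utterance in MultiModalUtteranceGenerator().generate(grammar):
        print("Suggestion:", utterance)
```

Keeping the three groups separate mirrors the claim's structure: only the dialog manager knows the current context, so the utterance generator stays a pure grammar-to-text mapping.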
Specification