Dynamically generating a vocal help prompt in a multimodal application
First Claim
1. A method for dynamically generating a vocal help prompt in a multimodal application, the method comprising:
- detecting a help-triggering event for an input element of a VoiceXML dialog, wherein the help-triggering event is selected from the group consisting of a request by a user for help, speech input that does not match any active grammar, and no speech input being received for a specified period of time, the detecting implemented with the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a VoiceXML interpreter;
- retrieving, by the VoiceXML interpreter from a speech recognition grammar updated based on a changing information source, retrieved help text, wherein the retrieved help text includes first help text associated with at least one non-terminal element of the speech recognition grammar and second help text associated with at least one terminal element of the speech recognition grammar, wherein at least some of the retrieved help text is not hard-coded by a programmer of the multimodal application;
- generating, by the VoiceXML interpreter, a vocal help prompt based, at least in part, on the first help text and the second help text; and
- presenting by the multimodal application the vocal help prompt through a computer user interface to a user.
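The claim combines help text drawn from a non-terminal grammar element with help text drawn from the terminal elements it governs. The sketch below illustrates that combination with a toy grammar; all names and data structures are illustrative assumptions, not the patent's actual implementation.

```python
# Toy speech recognition grammar: a non-terminal rule references other rules,
# and each referenced rule carries help text plus its terminal words.
GRAMMAR = {
    "order": {"help": "say a drink followed by a size", "refs": ["drink", "size"]},
    "drink": {"help": "a drink such as", "terminals": ["coffee", "tea"]},
    "size":  {"help": "a size such as", "terminals": ["small", "large"]},
}

def vocal_help_prompt(rule_name, grammar):
    """Form a vocal help prompt from the non-terminal's help text (the 'first
    help text') and the help text/words of its terminal elements (the 'second
    help text')."""
    rule = grammar[rule_name]
    parts = [rule["help"]]                      # first help text: non-terminal
    for ref in rule.get("refs", []):
        sub = grammar[ref]
        # second help text: terminal elements of each referenced rule
        parts.append(sub["help"] + " " + " or ".join(sub["terminals"]))
    return ". ".join(parts) + "."
```

Because the prompt is assembled from whatever the grammar currently contains, regenerating the grammar automatically changes the help prompt with no change to application code.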
Abstract
Dynamically generating a vocal help prompt in a multimodal application that includes detecting a help-triggering event for an input element of a VoiceXML dialog, where the detecting is implemented with a multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application is operatively coupled to a VoiceXML interpreter, and the multimodal application has no static help text. Dynamically generating a vocal help prompt in a multimodal application according to embodiments of the present invention typically also includes retrieving, by the VoiceXML interpreter from a source of help text, help text for an element of a speech recognition grammar, forming by the VoiceXML interpreter the help text into a vocal help prompt, and presenting by the multimodal application the vocal help prompt through a computer user interface to a user.
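The abstract emphasizes that the application has no static help text: the grammar is rebuilt from a changing information source, so the help text retrieved from it changes without programmer intervention. A minimal sketch of that idea, with assumed names (the patent does not prescribe this structure):

```python
# Rebuild the terminal elements of a speech recognition grammar from a
# changing information source (e.g. a database or web service), so that
# the help text derived from the grammar is never hard-coded.

def rebuild_grammar(information_source):
    """Derive grammar terminals, and hence help text, from live data."""
    cities = information_source()       # callable standing in for live data
    return {
        "destination": {
            "help": "say a destination city such as",
            "terminals": sorted(cities),
        }
    }

def help_text(grammar, rule):
    r = grammar[rule]
    return r["help"] + " " + " or ".join(r["terminals"])

# When the information source changes, the grammar -- and the help prompt
# retrieved from it -- changes with no code modification.
grammar = rebuild_grammar(lambda: {"Paris", "Oslo"})
```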
213 Citations
15 Claims
1. A method for dynamically generating a vocal help prompt in a multimodal application, the method comprising:
- detecting a help-triggering event for an input element of a VoiceXML dialog, wherein the help-triggering event is selected from the group consisting of a request by a user for help, speech input that does not match any active grammar, and no speech input being received for a specified period of time, the detecting implemented with the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a VoiceXML interpreter;
- retrieving, by the VoiceXML interpreter from a speech recognition grammar updated based on a changing information source, retrieved help text, wherein the retrieved help text includes first help text associated with at least one non-terminal element of the speech recognition grammar and second help text associated with at least one terminal element of the speech recognition grammar, wherein at least some of the retrieved help text is not hard-coded by a programmer of the multimodal application;
- generating, by the VoiceXML interpreter, a vocal help prompt based, at least in part, on the first help text and the second help text; and
- presenting by the multimodal application the vocal help prompt through a computer user interface to a user.
- View Dependent Claims (2, 3, 4, 5, 6)
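Claim 1 enumerates three help-triggering events: an explicit help request, speech matching no active grammar, and no speech within a specified period. A hedged classifier sketch under assumed names (the timeout value and function signatures are illustrative, not from the patent):

```python
# Classify user input into the three claimed help-triggering events plus a
# normal match. In VoiceXML these correspond to the built-in help, nomatch,
# and noinput events; this standalone sketch only models the decision.

NOINPUT_TIMEOUT_S = 5.0  # the "specified period of time" (value is illustrative)

def classify_event(utterance, elapsed_s, active_grammar):
    if utterance is None and elapsed_s >= NOINPUT_TIMEOUT_S:
        return "noinput"            # no speech input received in time
    if utterance == "help":
        return "help"               # explicit request by the user for help
    if utterance not in active_grammar:
        return "nomatch"            # speech matched no active grammar
    return "match"

def is_help_triggering(event):
    # Any of the three events triggers dynamic generation of a vocal help prompt.
    return event in {"help", "nomatch", "noinput"}
```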
7. Apparatus for dynamically generating a vocal help prompt in a multimodal application, the apparatus comprising:
- a computer processor; and
- a computer memory operatively coupled to the computer processor, the computer memory having disposed within it computer program instructions that, when executed by the computer processor, perform a method comprising:
- detecting a help-triggering event for an input element of a VoiceXML dialog, wherein the help-triggering event is selected from the group consisting of a request by a user for help, speech input that does not match any active grammar, and no speech input being received for a specified period of time, the detecting implemented with the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a VoiceXML interpreter;
- retrieving, by the VoiceXML interpreter from a speech recognition grammar updated based on a changing information source, retrieved help text associated with at least one non-terminal element of the speech recognition grammar and at least one terminal element of the speech recognition grammar, wherein the at least one non-terminal element includes a reference to at least one other element of the speech recognition grammar, wherein at least some of the help text is not hard-coded by a programmer of the multimodal application;
- forming, by the VoiceXML interpreter, the retrieved help text into a vocal help prompt; and
- presenting by the multimodal application the vocal help prompt through a computer user interface to a user.
- View Dependent Claims (8, 9, 10, 11, 12)
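Claim 7 adds that the non-terminal element "includes a reference to at least one other element" of the grammar. One way to realize that, sketched here under assumed names, is a depth-first walk that follows rule references and gathers help text from terminals at any depth (with a guard against cyclic references):

```python
# Toy grammar in which a non-terminal rule references other rules, mirroring
# rule references in a speech recognition grammar.
GRAMMAR = {
    "pizza":   {"help": "say a size and a topping", "refs": ["size", "topping"]},
    "size":    {"help": "sizes are", "terminals": ["small", "large"]},
    "topping": {"help": "toppings are", "terminals": ["cheese", "mushroom"]},
}

def collect_help(rule_name, grammar, seen=None):
    """Depth-first walk over rule references, accumulating help text."""
    seen = set() if seen is None else seen
    if rule_name in seen:           # guard against cyclic rule references
        return []
    seen.add(rule_name)
    rule = grammar[rule_name]
    parts = [rule["help"]]
    if "terminals" in rule:
        parts.append(" or ".join(rule["terminals"]))
    for ref in rule.get("refs", []):
        parts.extend(collect_help(ref, grammar, seen))
    return parts
```

The VoiceXML interpreter could then join the collected parts into a single vocal help prompt, as in the claim's "forming" step.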
13. A non-transitory computer-readable recordable medium encoded with a plurality of instructions that, when executed by a computer, perform a method of dynamically generating a vocal help prompt in a multimodal application, the method comprising:
- detecting a help-triggering event for an input element of a VoiceXML dialog, wherein the help-triggering event is selected from the group consisting of a request by a user for help, speech input that does not match any active grammar, and no speech input being received for a specified period of time, the detecting implemented with the multimodal application operating on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a VoiceXML interpreter;
- retrieving, by the VoiceXML interpreter from a speech recognition grammar updated based on a changing information source, retrieved help text, wherein the retrieved help text includes first help text associated with at least one non-terminal element of the speech recognition grammar and second help text associated with at least one terminal element of the speech recognition grammar, wherein at least some of the retrieved help text is not hard-coded by a programmer of the multimodal application;
- generating, by the VoiceXML interpreter, a vocal help prompt based, at least in part, on the first help text and the second help text; and
- presenting by the multimodal application the vocal help prompt through a computer user interface to a user.
- View Dependent Claims (14, 15)
Specification