Multi-context conversational environment system and method
Abstract
An interactive speech-activated information retrieval application for use in automated telephone systems includes a control manager that interfaces between the caller's speech input and applications and enables several applications to be open at the same time. The control manager continually monitors for control words, enabling the user to switch between applications at will. When a user switches to another application, the control manager suspends the first application and stores its context, enabling the user to later return to the application at the point where it was previously suspended.
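The suspend-and-resume behavior described in the abstract can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the class names, the example control words, and the dict-based context table are all assumptions made for the sketch.

```python
# Hypothetical sketch of the control-manager behavior described in the
# abstract; all names and the example control words are illustrative.

class Application:
    """A subject application whose dialogue can be suspended and resumed."""
    def __init__(self, name):
        self.name = name
        self.step = 0          # current processing-step indicator

    def advance(self):
        self.step += 1


class ControlManager:
    """Routes caller input to applications and tracks suspended contexts."""
    CONTROL_WORDS = {"weather", "traffic", "news"}  # assumed example words

    def __init__(self):
        self.context_table = {}   # application name -> saved processing step
        self.active = None

    def handle_input(self, word):
        if word in self.CONTROL_WORDS:
            if self.active is not None:
                # Suspend the current application and store its context.
                self.context_table[self.active.name] = self.active.step
            app = Application(word)
            # Resume at the previously stored step, if any.
            app.step = self.context_table.get(word, 0)
            self.active = app
        else:
            self.active.advance()   # ordinary in-dialogue input


mgr = ControlManager()
mgr.handle_input("weather")
mgr.handle_input("next")       # weather dialogue advances to step 1
mgr.handle_input("traffic")    # weather is suspended, context stored
mgr.handle_input("weather")    # weather resumes at its stored step
```

After the final switch, the weather application resumes at the step where it was suspended, which is the resumption behavior the abstract describes.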
119 Citations
20 Claims
1. A method of speech recognition processing that provides audible information over a communications device comprising:
receiving a first speech input at a network server, said first speech input associated with a caller menu system and indicative of a first subject area;
initiating a first subject application associated with said first subject area;
receiving a second speech input at the network server, said second speech input associated with the caller menu system, said second speech input indicative of a second subject area associated with a second independent application;
storing at least one indicator indicating a current processing step of said first subject application; and
storing a current context associated with said first speech input associated with said first subject application in a context table and audibly outputting said current context upon a user request. (Dependent claims: 2–11)
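The steps recited in claim 1 can be traced in a short sketch. This is an illustrative walk-through only: the function names, the "Weather"/"Stocks" subject areas, the step number, and the context text are all hypothetical and do not appear in the claim.

```python
# Hypothetical trace of the method steps of claim 1; all names,
# subject areas, and values are illustrative assumptions.

context_table = {}   # per-application context store, as recited in the claim

def receive_speech_input(utterance):
    """Stand-in for speech recognition at the network server."""
    return utterance.lower().strip()

def store_context(app_name, step, context):
    # Store the current processing-step indicator and the current context.
    context_table[app_name] = {"step": step, "context": context}

def output_context(app_name):
    # Audible output is simulated by returning the prompt text a
    # speech synthesizer would render.
    saved = context_table[app_name]
    return f"Resuming {app_name} at step {saved['step']}: {saved['context']}"

# A first speech input indicates a first subject area.
first = receive_speech_input("Weather")
# ... the first subject application runs and reaches, say, step 3 ...
store_context(first, step=3, context="forecast for Boston")

# A second speech input indicates a second, independent subject area.
second = receive_speech_input("Stocks")

# Upon a user request, the stored context is output audibly.
print(output_context(first))
```

The context table preserves both the processing-step indicator and the context, so the first application can announce where it left off when the user returns to it.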
12. A speech recognition system comprising:
a speech recognition module located at a network server that processes speech input and translates said speech input into computer-readable input;
a control manager comprising:
a module that interfaces between said speech input and at least one of a plurality of caller menu application programs;
a module that initiates processing of a first application program;
a module that monitors said speech input for a request to initiate a second independent application program;
a module that stores a current context of said first application program in a context table and audibly outputs said current context upon a user request; and
a speech synthesizing module for providing output information from said plurality of application programs. (Dependent claims: 13–19)
20. A computer-readable medium for storing computer-executable instructions for:
receiving a first speech input at a network server, said first speech input associated with a caller menu system and indicative of a first subject area;
initiating a first subject application associated with said first subject area;
receiving a second speech input at the network server, said second speech input associated with the caller menu system, said second speech input indicative of a second subject area associated with a second independent application;
storing at least one indicator indicating a current processing step of said first subject application; and
storing a current context associated with said first speech input associated with said first subject application in a context table and audibly outputting said current context upon a user request.