XML-based architecture for controlling user interfaces with contextual voice commands
Abstract
A voice-enabled user interface includes a first user interface. A voice extension module is associated with the first user interface and is configured to voice-enable the first user interface. The voice extension module includes a speech recognition engine, an XML configuration repository, a preprocessor, and an input handler. The XML configuration repository includes one or more XML files specifying one or more voice commands for signaling for execution of one or more semantic operations that may be performed using the first user interface. The preprocessor is configured to register with the speech recognition engine the one or more voice commands. The input handler is configured to receive a first voice command and to communicate with the preprocessor to execute a semantic operation from the one or more semantic operations that may be performed using the first user interface. The first voice command is one of the one or more voice commands registered with the speech recognition engine by the preprocessor, and the first voice command signals for execution of the semantic operation.
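The abstract centers on an XML configuration repository whose files map spoken commands to semantic operations. The patent does not publish a schema, so the element and attribute names below are purely illustrative; the sketch shows how such a repository could be parsed into a phrase-to-operation table.

```python
# Hypothetical sketch: the <voiceConfig>/<command> schema is our invention,
# not the patent's. It illustrates one way an XML file could specify
# voice commands that signal semantic operations on a user interface.
import xml.etree.ElementTree as ET

CONFIG = """
<voiceConfig interface="orderForm">
  <command phrase="submit order" operation="submitOrder"/>
  <command phrase="clear form" operation="resetFields"/>
</voiceConfig>
"""

def load_commands(xml_text):
    """Map each spoken phrase to the semantic operation it signals."""
    root = ET.fromstring(xml_text)
    return {c.get("phrase"): c.get("operation") for c in root.iter("command")}

commands = load_commands(CONFIG)
```

A repository in this style keeps the voice vocabulary outside the application code, so new commands can be added per interface by editing XML rather than recompiling.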
20 Claims
1. A voice-enabled user interface comprising:
a first user interface; and
a voice extension module associated with the first user interface and configured to voice-enable the first user interface, the voice extension module including:
a speech recognition engine;
an XML configuration repository that includes one or more XML files specifying one or more voice commands for signaling for execution of one or more semantic operations that may be performed using the first user interface;
a preprocessor that is configured to register with the speech recognition engine the one or more voice commands; and
an input handler that is configured to receive a first voice command and to communicate with the preprocessor to execute a semantic operation from the one or more semantic operations that may be performed using the first user interface, wherein the first voice command is one of the one or more voice commands registered with the speech recognition engine by the preprocessor, and wherein the first voice command signals for execution of the semantic operation.
(Dependent claims: 2, 3, 4, 5, 6, 7)
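Claim 1 recites three cooperating components: a speech recognition engine, a preprocessor that registers the configured commands with that engine, and an input handler that routes a recognized command through the preprocessor to a semantic operation. The class and method names in this sketch are ours, with a stub recognizer standing in for a real engine, to make the division of labor concrete.

```python
# Sketch of the claimed components under assumed names; a real deployment
# would wrap an actual speech recognition engine rather than this stub.

class SpeechRecognitionEngine:
    """Stub recognizer: tracks which phrases are in its active grammar."""
    def __init__(self):
        self.grammar = set()

    def register(self, phrase):
        self.grammar.add(phrase)

    def recognizes(self, phrase):
        return phrase in self.grammar


class Preprocessor:
    """Registers configured voice commands and maps them to operations."""
    def __init__(self, engine, commands):
        self.operations = dict(commands)   # phrase -> semantic operation name
        for phrase in self.operations:
            engine.register(phrase)        # register commands with the engine

    def operation_for(self, phrase):
        return self.operations[phrase]


class InputHandler:
    """Receives a recognized command and asks the preprocessor which
    semantic operation it signals, then performs that operation."""
    def __init__(self, engine, preprocessor, ui_actions):
        self.engine = engine
        self.preprocessor = preprocessor
        self.ui_actions = ui_actions       # operation name -> callable

    def handle(self, phrase):
        if not self.engine.recognizes(phrase):
            return None                    # not a registered voice command
        op = self.preprocessor.operation_for(phrase)
        return self.ui_actions[op]()


engine = SpeechRecognitionEngine()
pre = Preprocessor(engine, {"submit order": "submitOrder"})
handler = InputHandler(engine, pre, {"submitOrder": lambda: "order submitted"})
result = handler.handle("submit order")
```

Note how the input handler never interprets commands itself: it defers to the preprocessor for the command-to-operation mapping, matching the claim's "communicate with the preprocessor to execute a semantic operation" limitation.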
8. A voice extension module for voice-enabling a user interface comprising:
a speech recognition engine;
an XML configuration repository that includes one or more XML files specifying one or more voice commands for signaling for execution of one or more semantic operations that may be performed using a first user interface;
a preprocessor that is configured to register with the speech recognition engine the one or more voice commands; and
an input handler that is configured to receive a first voice command and to communicate with the preprocessor to execute a semantic operation from the one or more semantic operations that may be performed using the first user interface, wherein the first voice command is one of the one or more voice commands registered with the speech recognition engine by the preprocessor, and wherein the first voice command signals for execution of the semantic operation.
(Dependent claims: 9, 10, 11, 12, 13, 14)
15. A method for enabling a user interface to be controlled with voice commands, the method comprising:
accessing an XML configuration repository that specifies one or more voice commands for execution of one or more semantic operations that may be performed using a first user interface for a first application, each voice command corresponding to at least one of the semantic operations;
identifying at least one of the voice commands from the XML configuration repository;
registering the identified voice command with a speech recognition engine and an input handler to enable voice control of the first user interface; and
performing a particular one of the one or more semantic operations in response to a first voice command, wherein the first voice command is the voice command registered with the speech recognition engine and the input handler, and wherein the first voice command corresponds to the particular semantic operation.
(Dependent claims: 16, 17, 18, 19, 20)
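The four steps of method claim 15 (access the repository, identify commands, register them with the engine and handler, perform the operation on recognition) can be sketched end to end. All function names and the XML schema here are stand-ins; the patent defines no API.

```python
# Hypothetical walkthrough of the claim 15 method steps; names are ours.
import xml.etree.ElementTree as ET

def enable_voice_control(xml_text, engine_grammar, handler_table, operations):
    # Step 1: access the XML configuration repository.
    root = ET.fromstring(xml_text)
    # Step 2: identify the voice commands it specifies.
    commands = {c.get("phrase"): c.get("operation") for c in root.iter("command")}
    # Step 3: register each command with the recognition engine (grammar)
    # and with the input handler (dispatch table).
    for phrase, op in commands.items():
        engine_grammar.add(phrase)
        handler_table[phrase] = operations[op]
    return commands

def on_voice_input(phrase, engine_grammar, handler_table):
    # Step 4: perform the semantic operation for a recognized command.
    if phrase in engine_grammar:
        return handler_table[phrase]()
    return None

grammar, table = set(), {}
enable_voice_control(
    '<voiceConfig><command phrase="save" operation="saveDoc"/></voiceConfig>',
    grammar, table, {"saveDoc": lambda: "saved"},
)
result = on_voice_input("save", grammar, table)
```

Registering with both the engine and the handler, as the claim requires, keeps the recognizer's active grammar and the dispatch table in sync so that every recognizable phrase has a corresponding operation.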
Specification