Controlling user interfaces with contextual voice commands
First Claim
1. A voice-enabled user interface comprising:
a first user interface; and
a voice extension module associated with the first user interface and configured to voice-enable the user interface, the voice extension module including:
a speech recognition engine;
a preprocessor that registers with the speech recognition engine one or more voice commands for signaling for execution of one or more semantic operations that may be performed using the first user interface; and
an input handler that receives a first voice command and communicates with the preprocessor to execute a semantic operation that is indicated by the first voice command, the first voice command being one of the voice commands registered with the speech recognition engine by the preprocessor.
Abstract
One or more voice-enabled user interfaces include a user interface, and a voice extension module associated with the user interface. The voice extension module is configured to voice-enable the user interface and includes a speech recognition engine, a preprocessor, and an input handler. The preprocessor registers with the speech recognition engine one or more voice commands for signaling for execution of one or more semantic operations that may be performed using a first user interface. The input handler receives a first voice command and communicates with the preprocessor to execute a semantic operation that is indicated by the first voice command. The first voice command is one of the voice commands registered with the speech recognition engine by the preprocessor.
20 Claims
1. A voice-enabled user interface comprising:
a first user interface; and
a voice extension module associated with the first user interface and configured to voice-enable the user interface, the voice extension module including:
a speech recognition engine;
a preprocessor that registers with the speech recognition engine one or more voice commands for signaling for execution of one or more semantic operations that may be performed using the first user interface; and
an input handler that receives a first voice command and communicates with the preprocessor to execute a semantic operation that is indicated by the first voice command, the first voice command being one of the voice commands registered with the speech recognition engine by the preprocessor. - View Dependent Claims (2, 3, 4, 5, 6, 7)
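The three claimed components (speech recognition engine, preprocessor, input handler) and their interactions might be sketched as follows. This is an illustrative Python sketch only; all class names, method names, and the example command are invented here and do not appear in the patent.

```python
class SpeechRecognitionEngine:
    """Holds the set of currently recognizable voice commands."""

    def __init__(self):
        self.registered = set()

    def register(self, phrase):
        self.registered.add(phrase)

    def recognize(self, phrase):
        # Return the phrase only if it was previously registered.
        return phrase if phrase in self.registered else None


class Preprocessor:
    """Registers voice commands that signal semantic operations of a UI."""

    def __init__(self, engine):
        self.engine = engine
        self.operations = {}  # phrase -> callable semantic operation

    def register_command(self, phrase, operation):
        self.operations[phrase] = operation
        self.engine.register(phrase)

    def execute(self, phrase):
        return self.operations[phrase]()


class InputHandler:
    """Receives a voice command and dispatches it via the preprocessor."""

    def __init__(self, engine, preprocessor):
        self.engine = engine
        self.preprocessor = preprocessor

    def handle(self, phrase):
        recognized = self.engine.recognize(phrase)
        if recognized is None:
            return None
        return self.preprocessor.execute(recognized)


# Wiring the module together, per the claim structure:
engine = SpeechRecognitionEngine()
pre = Preprocessor(engine)
handler = InputHandler(engine, pre)
pre.register_command("open orders", lambda: "orders-form-opened")
result = handler.handle("open orders")
```

Note how the input handler only executes commands that the preprocessor previously registered with the engine, mirroring the claim language "the first voice command being one of the voice commands registered with the speech recognition engine by the preprocessor."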
8. A voice extension module for voice-enabling a user interface comprising:
a speech recognition engine;
a preprocessor that registers with the speech recognition engine one or more voice commands for signaling for execution of one or more semantic operations that may be performed using a user interface; and
an input handler that receives a first voice command and communicates with the preprocessor to execute a semantic operation that is indicated by the first voice command using the user interface, the first voice command being one of the voice commands registered with the speech recognition engine by the preprocessor. - View Dependent Claims (9, 10, 11, 12, 13)
14. A method for enabling a user interface to be controlled with voice commands, the method comprising:
accessing information describing a first user interface that enables interaction with a first application;
identifying one or more semantic operations that may be performed with the first user interface;
registering one or more voice commands with a speech recognition engine to enable voice control of the first user interface, each voice command corresponding to one of the semantic operations; and
performing one of the semantic operations in response to a first voice command, the first voice command being one of the voice commands registered with the speech recognition engine, the performed semantic operation corresponding to the first voice command. - View Dependent Claims (15, 16, 17, 18, 19, 20)
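The four method steps of claim 14 can be walked through in a short Python sketch. The UI description format, function names, and operation names below are invented for illustration and are not drawn from the patent.

```python
# Step 1 (assumed input): information describing a first user interface.
ui_description = {
    "name": "order-entry form",
    "elements": [
        {"type": "button", "action": "submit_order"},
        {"type": "button", "action": "clear_form"},
    ],
}


def identify_semantic_operations(description):
    """Step 2: identify semantic operations the interface supports."""
    return [element["action"] for element in description["elements"]]


def register_voice_commands(operations):
    """Step 3: register one spoken phrase per semantic operation."""
    return {op.replace("_", " "): op for op in operations}


def perform(grammar, spoken_phrase):
    """Step 4: perform the operation corresponding to a registered command."""
    op = grammar.get(spoken_phrase)
    return f"performed:{op}" if op else None


ops = identify_semantic_operations(ui_description)
grammar = register_voice_commands(ops)
outcome = perform(grammar, "submit order")
```

An unregistered phrase yields no operation, consistent with the claim's requirement that the performed operation correspond to a voice command registered with the speech recognition engine.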
Specification