Voice-control for a user interface
Abstract
Methods and systems for voice-enabling a user interface using a voice extension module are provided. A voice extension module includes a preprocessor, a speech recognition engine, and an input handler. The voice extension module receives user interface information, such as a hypertext markup language (HTML) document, and voice-enables the document so that a user may interact with any of its user interface elements using voice commands.
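The abstract's pipeline (preprocessor, speech recognition engine, input handler) can be sketched roughly as follows. All class names, method names, and the substring-matching heuristic are illustrative assumptions for this sketch, not the patent's implementation:

```python
# Illustrative sketch of a voice extension module: a preprocessor collects
# actionable elements from an HTML document, and an input handler maps a
# recognized utterance onto one of them. Names here are assumptions.
from html.parser import HTMLParser

class Preprocessor(HTMLParser):
    """Collects actionable elements (links, buttons, inputs) from HTML."""
    ACTIONABLE = {"a", "button", "input"}

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in self.ACTIONABLE:
            attrs = dict(attrs)
            name = attrs.get("name") or attrs.get("id") or tag
            self.elements.append({"tag": tag, "name": name})

class InputHandler:
    """Maps a recognized utterance to collected elements by name."""
    def __init__(self, elements):
        self.elements = elements

    def dispatch(self, utterance):
        return [e for e in self.elements
                if e["name"].lower() in utterance.lower()]

# Voice-enabling an HTML fragment:
pre = Preprocessor()
pre.feed('<a id="home">Home</a><button id="search">Search</button>')
handler = InputHandler(pre.elements)
print(handler.dispatch("click search"))  # matches the "search" button
```

A real speech recognition engine would sit between the user and `dispatch`; here the utterance is passed in as text to keep the sketch self-contained.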
41 Claims
1. A voice-enabled user interface comprising:
user interface elements; and
a speech recognition engine that receives voice input identifying a target user interface element, wherein the voice-enabled user interface resolves non-verbal ambiguities in associating the received voice input with the target user interface element by displaying a representational enumerated label associated positionally with the target user interface element after the voice input is received.
(Dependent claims: 2-11)
12. A representational enumerated label for resolving non-verbal ambiguities in a voice-enabled interface, the label comprising:
a unique identifier;
an association to a corresponding user interface element; and
a graphical representation presented in the voice-enabled interface to show the association to the corresponding user interface element;
wherein the unique identifier may be displayed after a voice input is recognized to resolve non-verbal ambiguities in associating the recognized voice input with the user interface element.
(Dependent claims: 13-20)
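The label of claim 12 is essentially a small data structure. A minimal sketch, assuming a dataclass with the three claimed parts (the field names and the rendering format are illustrative, not from the patent):

```python
# Hypothetical data structure for a "representational enumerated label":
# a unique identifier, an association to a UI element, and a graphical
# representation showing that association.
from dataclasses import dataclass

@dataclass
class EnumeratedLabel:
    identifier: int       # unique identifier the user can speak ("one", "two", ...)
    element_id: str       # association to the corresponding UI element
    position: tuple       # (x, y) where the label is drawn next to the element

    def render(self):
        """Graphical representation shown when disambiguation is needed."""
        x, y = self.position
        return f"[{self.identifier}] at ({x}, {y}) -> {self.element_id}"

label = EnumeratedLabel(identifier=1, element_id="btn-save", position=(120, 40))
print(label.render())  # [1] at (120, 40) -> btn-save
```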
21. A method for resolving target ambiguity in a voice-enabled user interface, comprising:
receiving first voice input that, because of an ambiguity in the user interface, identifies more than one potential target user interface element, presenting a target ambiguity;
displaying representational enumerated labels corresponding to each potential target user interface element after the first voice input is received, each representational enumerated label including a unique identifier and a positional association with a corresponding target user interface element; and
receiving second voice input including the unique identifier of one of the representational enumerated labels to resolve the target ambiguity.
(Dependent claims: 22-31)
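The two-round method of claim 21 can be sketched as follows: a first utterance matches more than one element, labels are displayed, and a second utterance speaks one label's identifier. The function names and substring matching are assumptions for this sketch:

```python
# Hedged sketch of two-round target disambiguation.
def find_targets(elements, utterance):
    """First voice input: collect every element the utterance could mean."""
    return [e for e in elements if utterance.lower() in e.lower()]

def disambiguate(elements, first_utterance):
    """If the first input is ambiguous, enumerate labels for each candidate."""
    candidates = find_targets(elements, first_utterance)
    if len(candidates) <= 1:
        return candidates, {}
    # Display an enumerated label next to each potential target.
    labels = {str(i + 1): e for i, e in enumerate(candidates)}
    for ident, elem in labels.items():
        print(f"[{ident}] {elem}")
    return candidates, labels

def resolve(labels, second_utterance):
    """Second voice input: the spoken identifier picks one label."""
    return labels.get(second_utterance)

elements = ["Save draft", "Save and close", "Cancel"]
_, labels = disambiguate(elements, "save")  # ambiguous: two matches
print(resolve(labels, "2"))                 # Save and close
```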
32. A voice-enabled user interface comprising:
a user interface element configured to influence voice input of a user by providing a visual cue indicative to the user of a grammar associated with the user interface element;
an input handler that enables the user to specify the grammar to be associated with the user interface element;
a data store configured to store the association between the user interface element and the grammar; and
a speech recognition engine that receives voice input identifying the user interface element, queries the data store to determine the grammar associated with the user interface element, and processes data input using the determined grammar.
(Dependent claims: 33-38)
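A minimal sketch of claim 32's grammar association, assuming a dict-backed data store and regular expressions as the grammar formalism; the patent specifies neither, so both are illustrative choices here:

```python
# Illustrative grammar association: an input handler records a grammar for a
# UI element in a data store, and the speech engine queries that store to
# validate spoken data input against the element's grammar.
import re

class GrammarStore:
    """Stores the association between a UI element and its input grammar."""
    def __init__(self):
        self._grammars = {}

    def associate(self, element_id, pattern):
        # In the claim, an input handler lets the user specify this grammar.
        self._grammars[element_id] = re.compile(pattern)

    def lookup(self, element_id):
        return self._grammars.get(element_id)

class SpeechEngine:
    def __init__(self, store):
        self.store = store

    def process(self, element_id, spoken_data):
        grammar = self.store.lookup(element_id)  # query the data store
        if grammar and grammar.fullmatch(spoken_data):
            return spoken_data
        return None

store = GrammarStore()
store.associate("date-field", r"\d{4}-\d{2}-\d{2}")  # visual cue might hint "YYYY-MM-DD"
engine = SpeechEngine(store)
print(engine.process("date-field", "2024-05-01"))  # accepted
print(engine.process("date-field", "tomorrow"))    # None: not in the grammar
```

Restricting recognition to the element's grammar is what lets the visual cue ("YYYY-MM-DD" next to the field, say) influence what the user speaks.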
39. A voice-enabled user interface comprising:
user interface elements; and
a speech recognition engine that receives voice input identifying a target user interface element, wherein the voice-enabled user interface resolves ambiguities in associating the received voice input with the target user interface element using implicit scoping.
(Dependent claims: 40-41)
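Claim 39 does not define "implicit scoping" here; one plausible reading is that an ambiguous utterance is resolved by preferring matches inside the currently active region of the interface before searching the whole interface. A sketch under that assumption (the recency/active-region heuristic is this sketch's, not necessarily the patent's):

```python
# Illustrative implicit scoping: search the active region first, then fall
# back to the whole interface, so ambiguity is resolved without asking the
# user a follow-up question.
def resolve_with_implicit_scope(regions, active_region, utterance):
    """regions: {region_name: [element, ...]}; active_region narrows the search."""
    # Implicit scope: try the active region first.
    scoped = [e for e in regions.get(active_region, []) if utterance in e.lower()]
    if scoped:
        return scoped[0]
    # Fall back to every region of the interface.
    for elems in regions.values():
        for e in elems:
            if utterance in e.lower():
                return e
    return None

regions = {"toolbar": ["Save file", "Open file"], "dialog": ["Save changes?"]}
print(resolve_with_implicit_scope(regions, "dialog", "save"))   # Save changes?
print(resolve_with_implicit_scope(regions, "toolbar", "save"))  # Save file
```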
Specification