Multi-action voice macro method
First Claim
1. A method for implementing a multi-action voice macro for a voice recognition navigator program on a computer system, said method comprising the steps of:
- scanning a target application program to determine a plurality of target application states, each of said target application states being comprised of a plurality of window objects;
- organizing each of said target application states in the form of a sub-context tree, each of said sub-context trees being comprised of a plurality of sub-context objects, said sub-context tree defining a hierarchical relationship among said sub-context objects;
- determining a set of user inputs to which each of said window objects will be responsive, and assigning a corresponding set of said voice macros to each of the sub-context objects for simulating each of said user inputs in response to a spoken utterance;
- defining each of said voice macros to include a vocabulary phrase, said vocabulary phrase defining the spoken utterance to which each of said voice macros is responsive;
- further defining at least one of said voice macros to include a link field, said link field identifying at least one linked macro to be executed by said navigator program when the vocabulary phrase for said voice macro is spoken by a user, said link field comprising a sub-context object path from the root of said sub-context tree to the sub-context object containing said linked macro;
- storing the sub-context trees in an electronic memory device as a context data file;
- executing said voice recognition navigator program on said computer system simultaneously with said target application program so that a spoken utterance corresponding to said vocabulary phrase will cause said linked macro to be executed.
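The data model the claim describes — a sub-context tree whose nodes hold voice macros, with a link field giving a path from the root to the sub-context containing a linked macro — could be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the class names (`SubContext`, `VoiceMacro`), the field names, and the key-press strings are all assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class VoiceMacro:
    vocabulary_phrase: str                   # spoken utterance the macro answers to
    action: str                              # simulated user input (illustrative)
    link_field: Optional[List[str]] = None   # sub-context path, root first
    link_phrase: Optional[str] = None        # phrase naming the linked macro

@dataclass
class SubContext:
    name: str
    macros: Dict[str, VoiceMacro] = field(default_factory=dict)
    children: Dict[str, "SubContext"] = field(default_factory=dict)

def resolve_link(root: SubContext, path: List[str]) -> SubContext:
    """Follow a link field -- a sub-context object path starting at the
    root of the sub-context tree -- down to the sub-context object that
    contains the linked macro."""
    node = root
    for name in path[1:]:                    # path[0] names the root itself
        node = node.children[name]
    return node

# One target application state: a main window with a file dialog beneath it.
root = SubContext("MainWindow")
dialog = SubContext("FileDialog")
root.children[dialog.name] = dialog

# A plain macro in the dialog, and a multi-action macro whose link field
# points at it by path from the root of the tree.
dialog.macros["save file"] = VoiceMacro("save file", "Ctrl+S")
close = VoiceMacro("save and close", "Alt+F4",
                   link_field=["MainWindow", "FileDialog"],
                   link_phrase="save file")

linked = resolve_link(root, close.link_field).macros[close.link_phrase]
```

Under this sketch, recognizing "save and close" would let the navigator simulate the macro's own input and then execute the linked "save file" macro found by walking the link-field path.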
1 Assignment
0 Petitions
Abstract
Method for implementing a multi-action voice macro (140) for a voice recognition navigator program (102) on a computer system. The method involves analyzing a target application program (22) to determine a plurality of target application states (24). Each of the target application states (24) is comprised of a plurality of window objects. The target application states are arranged in the form of one or more sub-context trees, with each of the sub-context trees comprised of a plurality of sub-context objects (50, 52, 54, 56, 58, 60, 62, 64, 66, 68). A set of user inputs is determined to which each of the window objects will be responsive. Each user input is assigned a corresponding voice macro (140) which simulates that user input in response to a spoken utterance. The voice macro (140) includes a link field (148), which identifies at least one linked macro to be executed by the navigator program (102) when a specific vocabulary phrase for the voice macro (140) is spoken by a user.
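The claim's "storing the sub-context trees in an electronic memory device as a context data file" step could be sketched as a simple serialization of the tree. The patent does not specify a file format; JSON, the `.ctx` extension, and the node layout below are assumptions for illustration.

```python
import json

# A sub-context tree as plain data: each node carries its name, its
# voice macros, and its child sub-contexts.
context_tree = {
    "name": "MainWindow",
    "macros": {"open menu": {"action": "Alt+F"}},
    "children": [
        {
            "name": "FileDialog",
            "macros": {"save file": {"action": "Ctrl+S"}},
            "children": [],
        }
    ],
}

def store_context(tree, path):
    """Write the sub-context tree out as a context data file."""
    with open(path, "w") as f:
        json.dump(tree, f, indent=2)

def load_context(path):
    """Read the context data file back so the navigator program can
    consult it while running alongside the target application."""
    with open(path) as f:
        return json.load(f)

store_context(context_tree, "target_app.ctx")
restored = load_context("target_app.ctx")
```

Persisting the tree this way lets the navigator load a target application's contexts at startup instead of re-scanning its windows on every run.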
191 Citations
14 Claims
1. Set out in full under "First Claim" above. View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
Specification