SYNCHRONIZING VISUAL AND SPEECH EVENTS IN A MULTIMODAL APPLICATION
Abstract
Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving speech from a user; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may also include updating a visual element, updating a voice form, or restarting the voice form after executing the additional function, and updating a state table after updating the voice form.
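Read purely as an implementation sketch, not as the disclosed embodiment, the abstract's flow might look like the following Python outline. All names here (`interpret`, `MultimodalApp`, `show_drink_menu`) are hypothetical illustrations, not taken from the specification:

```python
# Hypothetical sketch of the abstract's event loop: speech is interpreted,
# a global update handler selects additional processing, and the visual
# element and voice form are resynchronized afterward.

def interpret(speech: str) -> str:
    """Toy semantic interpreter: reduce an utterance to a key concept."""
    return "drink" if "coffee" in speech else "unknown"

class MultimodalApp:
    def __init__(self):
        self.visual_state = {}          # visual elements shown to the user
        self.voice_form_active = False  # whether the voice form is listening

    def global_update_handler(self, interpretation: str) -> None:
        # Identify additional processing in dependence upon the interpretation.
        extra = {"drink": self.show_drink_menu}.get(interpretation)
        if extra:
            extra()                     # execute the additional function
        self.voice_form_active = True   # restart the voice form afterward

    def show_drink_menu(self) -> None:
        # Update a visual element after executing the additional function.
        self.visual_state["menu"] = ["coffee", "tea"]

    def on_speech(self, speech: str) -> None:
        self.global_update_handler(interpret(speech))

app = MultimodalApp()
app.on_speech("a cup of coffee please")
```

The point of the sketch is the ordering the abstract describes: the handler runs between the speech interpretation and the visual/voice updates, so both modalities are brought back into sync from one place.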
11 Citations
45 Claims
1-20. (canceled)

21. A method, comprising:

calling a voice form;
receiving speech from a user;
determining a semantic interpretation of at least a portion of the speech using the voice form;
calling a global application update handler;
identifying, by the global application update handler, an additional processing function based at least in part upon the semantic interpretation, wherein the additional processing function is independent of the voice form; and
executing the additional processing function.

(Dependent claims: 22, 23, 24, 25, 26, 27)
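One way to read claim 21's "additional processing function … independent of the voice form" is a dispatch table maintained outside the dialog logic, so functions can be registered without touching any voice form. A minimal, hypothetical sketch (the registry and `update_weather_panel` are illustrative assumptions, not the patented implementation):

```python
# Hypothetical dispatch table: a global application update handler maps
# semantic interpretations to processing functions that are registered
# independently of any particular voice form.

handlers = {}

def register(interpretation):
    """Associate a processing function with a semantic interpretation."""
    def wrap(fn):
        handlers[interpretation] = fn
        return fn
    return wrap

@register("city")
def update_weather_panel():
    return "weather panel refreshed"

def global_application_update_handler(interpretation):
    fn = handlers.get(interpretation)   # identify the additional function
    return fn() if fn else None         # execute it, if one is registered

result = global_application_update_handler("city")
```

Because the registry, not the voice form, owns the mapping, new processing functions can be added without modifying the dialog that collects the speech.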
28. A system, comprising:

at least one computer processor;
at least one computer memory operatively coupled to the computer processor; and
computer program instructions disposed within the computer memory that, when executed, cause the at least one computer processor to:
call a voice form;
receive speech from a user;
determine a semantic interpretation of at least a portion of the speech using the voice form;
call a global application update handler;
identify, by the global application update handler, an additional processing function based at least in part upon the semantic interpretation, wherein the additional processing function is independent of the voice form; and
execute the additional processing function.

(Dependent claims: 29, 30, 31, 32, 33, 34)
35. A computer-readable storage medium comprising instructions that, when executed on at least one computer processor, perform a method comprising:

calling a voice form;
receiving speech from a user;
determining a semantic interpretation of at least a portion of the speech using the voice form;
calling a global application update handler;
identifying, by the global application update handler, an additional processing function based at least in part upon the semantic interpretation, wherein the additional processing function is independent of the voice form; and
executing the additional processing function.

(Dependent claims: 36, 37, 38, 39, 40, 41)
42. A computer-readable storage medium comprising instructions that, when executed on at least one computer processor, perform a method comprising:

receiving speech from a user;
determining a semantic interpretation of at least a portion of the speech; and
providing an advertisement based at least in part upon the semantic interpretation.

(Dependent claims: 43, 44, 45)
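Claim 42 reduces to keying an advertisement off the semantic interpretation rather than the raw utterance. A hypothetical sketch (the `ADS` table and `select_advertisement` helper are illustrative, not from the specification):

```python
# Hypothetical ad selector for claim 42's flow: the semantic interpretation
# of the user's speech, not the literal words, keys the advertisement lookup.

ADS = {
    "coffee": "Ad: local espresso bar",
    "travel": "Ad: discount flights",
}

def select_advertisement(interpretation: str,
                         default: str = "Ad: generic") -> str:
    """Return an advertisement matched to the semantic interpretation."""
    return ADS.get(interpretation, default)

ad = select_advertisement("coffee")
```

Keying on the interpretation means many different utterances ("a latte, please", "I need caffeine") can map to the same advertisement once the recognizer normalizes them to one semantic value.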
Specification