Synchronizing visual and speech events in a multimodal application

  • US 8,571,872 B2
  • Filed: 09/30/2011
  • Issued: 10/29/2013
  • Est. Priority Date: 06/16/2005
  • Status: Expired due to Fees
First Claim

1. A method, comprising:

  • receiving, by a multimodal application executing on a computer processor, multimodal input from a multimodal browser of a device, wherein the multimodal input comprises speech from a user;

    determining a semantic interpretation of at least a portion of the speech using a voice form;

    calling a global application update handler of the multimodal application;

    identifying, by the global application update handler, an additional processing function based at least in part upon the semantic interpretation and a geographical location, wherein the additional processing function is independent of the voice form; and

    executing the additional processing function, wherein the additional processing function executed depends on the semantic interpretation of the at least a portion of the speech,

    wherein determining a semantic interpretation of at least a portion of the speech comprises determining a plurality of semantic interpretations of the at least a portion of the speech, and

    wherein identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation comprises identifying, by the global application update handler, an additional processing function for each of the plurality of semantic interpretations.

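The claimed method amounts to a dispatch flow: the multimodal browser delivers a recognition result, the application derives one or more semantic interpretations from the active voice form, and a single global update handler maps each interpretation, together with a geographical location, to a processing function defined outside the voice form, which is then executed. The sketch below is a minimal, hypothetical illustration of that flow in TypeScript; the identifiers (GlobalUpdateHandler, SemanticInterpretation, the intent registry, and so on) are assumptions made for illustration and are not drawn from the patent.

```typescript
// Minimal sketch of the claimed dispatch flow. All names here are
// hypothetical; the patent does not prescribe a concrete API.

interface SemanticInterpretation {
  intent: string;                     // e.g. "find-restaurant"
  slots: Record<string, string>;      // recognized slot values
}

interface GeoLocation {
  latitude: number;
  longitude: number;
}

// An "additional processing function" independent of the voice form.
type ProcessingFunction = (
  interp: SemanticInterpretation,
  where: GeoLocation
) => void;

class GlobalUpdateHandler {
  // Registry mapping intents to processing functions declared outside
  // any particular voice form.
  private registry = new Map<string, ProcessingFunction>();

  register(intent: string, fn: ProcessingFunction): void {
    this.registry.set(intent, fn);
  }

  // Called once per recognition result; per the claim, one processing
  // function is identified for each semantic interpretation.
  handleUpdate(interps: SemanticInterpretation[], where: GeoLocation): void {
    for (const interp of interps) {
      const fn = this.registry.get(interp.intent);
      if (fn) {
        fn(interp, where);            // execute the identified function
      }
    }
  }
}

// Usage: the application registers handlers at startup and invokes the
// global handler after the voice form yields semantic interpretations.
const handler = new GlobalUpdateHandler();
handler.register("find-restaurant", (interp, where) => {
  console.log(
    `Searching near ${where.latitude},${where.longitude}`,
    interp.slots
  );
});

handler.handleUpdate(
  [{ intent: "find-restaurant", slots: { cuisine: "thai" } }],
  { latitude: 40.71, longitude: -74.0 }
);
```

Keeping the registry in a single global handler, rather than inside the voice form, is what lets the same spoken result drive visual updates or other processing that the voice form itself knows nothing about, which is the synchronization the title refers to.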