Systems and methods that employ correlated synchronous-on-asynchronous processing
First Claim
1. A system that employs dynamic load balancing to asynchronously process synchronous requests, comprising:
- one or more microprocessors that execute the following computer-executable components stored on a non-transitory computer-readable storage medium:
a query management component that:
receives a web-based request from a client; and
publishes the web-based request in a queue;
an asynchronous processing component that:
detects available processing engine capacity;
predicts future processing engine capacity; and
distributes portions of the web-based request among processing engines based on the detected and predicted processing engine capacity, including distributing a same portion of the web-based request to a plurality of different processing engines, such that each of the different processing engines in the plurality of processing engines returns a result for the same portion of the web-based request, whereafter a first result returned from the plurality of processing engines is initially selected for use;
an error handling component that automatically determines if the first result returned is in error and uses a subsequent result returned from the plurality of processing engines if available, and discards the first result if the first result is in error, or discards subsequent results when the first result returned is not in error, or, if subsequent results are not available, conveys one or more portions of the web-based request associated with a failed processing engine to another processing engine, wherein the client is not informed of a processing failure;
a process engine component that groups processing engine results;
an output component that returns the grouped processing engine results synchronous with the web-based request; and
an orchestrator component that tracks and maintains one or more associations between the portions of the web-based request as the portions of the web-based request traverse through the processing engines.
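The redundant-dispatch and error-handling behavior recited in the claim can be sketched as follows. This is an illustrative sketch only, not code from the patent: the engines, the `is_error` predicate, and the `dispatch_redundantly` helper are all hypothetical names. The same portion is submitted to several engines, the first result returned is examined first, and an erroneous result is silently replaced by a subsequent one, so the caller never observes the failure.

```python
# Sketch (not from the patent) of redundant dispatch with first-result
# selection: the same request portion runs on several engines concurrently;
# the first non-error result wins and remaining results are discarded.
from concurrent.futures import ThreadPoolExecutor, as_completed

def dispatch_redundantly(portion, engines, is_error):
    """Run `portion` on every engine; return the first non-error result."""
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = [pool.submit(engine, portion) for engine in engines]
        for future in as_completed(futures):
            result = future.result()   # first result returned is examined first
            if not is_error(result):   # erroneous result: fall through to the
                return result          # next engine's result, caller unaware
        # all redundant results failed: the claim conveys the portion to
        # another processing engine; here we simply signal the condition
        raise RuntimeError("all engines failed for portion: %r" % (portion,))

# Hypothetical engines: one faulty, two healthy.
engines = [lambda p: "ERROR", lambda p: p.upper(), lambda p: p.upper()]
result = dispatch_redundantly("find cheap flights", engines,
                              is_error=lambda r: r == "ERROR")
print(result)  # FIND CHEAP FLIGHTS
```

Whichever engine finishes first, the erroneous result is skipped and a healthy engine's result is returned, matching the claim's requirement that the client is not informed of a processing failure.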
Abstract
The present invention provides a novel technique for Web-based asynchronous processing of synchronous requests. The systems and methods of the present invention utilize a synchronous interface in order to couple with systems that synchronously communicate (e.g., to submit queries and receive results). The interface enables reception of synchronous requests, which are queued and parsed amongst subscribed processing servers within a server farm. Respective servers can serially and/or concurrently process the request and/or portions thereof via a dynamic balancing approach. Such approach distributes the request to servers based on server load, wherein respective portions can be re-allocated as server load changes. Results can be correlated with the request, aggregated, and returned such that it appears to the requester that the request was synchronously serviced. The foregoing mitigates the need for clients to perform client-side aggregation of asynchronous results.
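The synchronous-on-asynchronous pattern the abstract describes can be sketched as follows, under stated assumptions: the class name, the word-level splitting of the request into portions, and the `process_portion` callable are all illustrative inventions, not anything specified by the patent. The point is the facade: the caller blocks on one ordinary call while portions are processed concurrently, and the orchestration step keeps each portion's position so results are correlated and aggregated in request order.

```python
# Illustrative sketch of a synchronous facade over asynchronous workers:
# the caller blocks on handle_request() while portions are processed
# concurrently, then receives one aggregated, correlated result.
from concurrent.futures import ThreadPoolExecutor

class SyncOnAsyncFrontEnd:
    def __init__(self, process_portion, n_workers=4):
        self._process = process_portion
        self._pool = ThreadPoolExecutor(max_workers=n_workers)

    def handle_request(self, request):
        portions = request.split()  # parse the request into portions
        # orchestration: remember each portion's index so asynchronous
        # results can be correlated back to the original request
        futures = {i: self._pool.submit(self._process, p)
                   for i, p in enumerate(portions)}
        results = [futures[i].result() for i in sorted(futures)]  # aggregate
        return " ".join(results)  # caller sees a synchronous response

front_end = SyncOnAsyncFrontEnd(process_portion=str.upper)
print(front_end.handle_request("web based request"))  # WEB BASED REQUEST
```

Because aggregation happens server-side before `handle_request` returns, the client needs no client-side logic for collecting asynchronous results, which is the mitigation the abstract claims.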
Specification