Server-processor hybrid system for processing data
First Claim
1. A server-processor hybrid system for processing data, comprising:
a set of front-end servers configured to receive the data from an external source;
a set of back-end application optimized processors configured to receive the data from the set of front-end servers, process the data, and return processed data to the set of front-end servers, each of the set of back-end application optimized processors comprising:
a power processing element (PPE);
an element interconnect bus (EIB) coupled to the PPE; and
a set of special purpose engines (SPEs) coupled to the EIB, wherein the set of front-end servers performs a specific processor selection function in order to select one of the set of SPEs to process the data;
a staging storage device configured to store the received data prior to the set of back-end application optimized processors processing the data;
a processed data storage device, separate from the staging storage device, configured to store the processed data from the set of back-end application optimized processors; and
an interface within at least one of the set of front-end servers having a set of network interconnects, the interface connecting the set of front-end servers with the set of back-end application optimized processors, the interface configured to:
communicate the data received from the external source, from the set of front-end servers to the set of back-end application optimized processors by selectively invoking a push model or a pull model, and
communicate the processed data from at least one of the processed data storage device and the back-end application optimized processors, to the set of front-end servers by selectively invoking the push model or the pull model,
wherein the push model is selectively invoked when the data to be transmitted:
has a predefined length, has a latency bound, or is smaller than a predefined Push Threshold (PT), and
the pull model is selectively invoked when the data to be transmitted:
is a stream, does not have a predefined length, exceeds the predefined PT, or does not have a latency bound.
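The push/pull selection recited in the claim can be read as a simple predicate over properties of the data to be transmitted. The sketch below is one plausible illustration, not the patent's actual implementation: the claim states the push and pull criteria as overlapping disjunctions, so this sketch resolves any tie by preferring the pull model, and the `PUSH_THRESHOLD_BYTES` value, the `Payload` type, and the function name are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical value for the claim's "predefined Push Threshold (PT)".
PUSH_THRESHOLD_BYTES = 64 * 1024

@dataclass
class Payload:
    size_bytes: Optional[int]          # None when the length is not predefined
    latency_bound_ms: Optional[float]  # None when no latency bound applies
    is_stream: bool = False

def choose_transfer_model(p: Payload) -> str:
    """Return 'push' or 'pull' per the claim's selection criteria."""
    # Pull model: the data is a stream, does not have a predefined
    # length, exceeds PT, or does not have a latency bound.
    if p.is_stream or p.size_bytes is None:
        return "pull"
    if p.size_bytes > PUSH_THRESHOLD_BYTES or p.latency_bound_ms is None:
        return "pull"
    # Push model: the data has a predefined length, has a latency
    # bound, and is not larger than PT.
    return "push"
```

For example, a small fixed-length message with a latency bound would be pushed, while an open-ended stream would be pulled.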
Abstract
The present invention relates to a server-processor hybrid system that comprises (among other things) a set (one or more) of front-end servers (e.g., mainframes) and a set of back-end application optimized processors. Moreover, implementations of the invention provide a server and processor hybrid system and method for distributing and managing the execution of applications at a fine-grained level via an I/O-connected hybrid system. This method allows one system to be used to manage and control the system functions, and one or more other systems to act as co-processors.
Specification