Transaction processing using multiple protocol engines
First Claim
1. A cluster, comprising:
- a plurality of processing nodes; and
- an interconnection controller coupled to the plurality of processing nodes, the interconnection controller including a plurality of protocol engines configured to process memory transactions in accordance with a cache coherence protocol, wherein each protocol engine of said plurality of protocol engines is configured to be assigned a distinct subset of a global memory space, the plurality of protocol engines further comprising:
  (a) a first protocol engine configured to be assigned a first subset of the global memory space, said first subset of the global memory space corresponding to one of local and remote memory; and
  (b) a second protocol engine configured to be assigned a second subset of the global memory space, said second subset of the global memory space corresponding to one of local and remote memory;
wherein the cluster further comprises circuitry configured to select the first protocol engine of the protocol engines using destination information associated with a memory transaction of the memory transactions where the destination information corresponds to the first subset of the global memory space, and wherein the circuitry is configured to select the second protocol engine where the destination information corresponds to the second subset of the global memory space.
Abstract
A multi-processor computer system is described in which transaction processing is distributed among multiple protocol engines. The system includes a plurality of local nodes and an interconnection controller interconnected by a local point-to-point architecture. The interconnection controller comprises a plurality of protocol engines for processing transactions. Transactions are distributed among the protocol engines using destination information associated with the transactions.
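The distribution mechanism described above — selection circuitry that routes each memory transaction to the protocol engine whose assigned subset of the global memory space contains the transaction's destination — can be sketched as follows. This is a hypothetical illustration, not an implementation from the patent; the names `ProtocolEngine`, `Interconnect`, and the specific address ranges are assumptions chosen for the example.

```python
# Hypothetical sketch: distributing memory transactions among protocol
# engines by the destination address range each engine is assigned.

LOCAL, REMOTE = "local", "remote"


class ProtocolEngine:
    """Processes coherence transactions for one distinct subset of the
    global memory space (illustrative; names are not from the patent)."""

    def __init__(self, name, addr_lo, addr_hi, locality):
        self.name = name
        self.addr_lo = addr_lo    # inclusive lower bound of assigned subset
        self.addr_hi = addr_hi    # exclusive upper bound of assigned subset
        self.locality = locality  # LOCAL or REMOTE memory

    def owns(self, addr):
        """True if this engine's assigned subset contains the address."""
        return self.addr_lo <= addr < self.addr_hi


class Interconnect:
    """Stand-in for the selection circuitry: picks the engine whose
    assigned subset contains the transaction's destination address."""

    def __init__(self, engines):
        self.engines = engines

    def select(self, dest_addr):
        for engine in self.engines:
            if engine.owns(dest_addr):
                return engine
        raise ValueError(f"no engine assigned to address {dest_addr:#x}")


# Two engines, each assigned a distinct, non-overlapping subset of a
# (hypothetical) 4 GiB global memory space.
ic = Interconnect([
    ProtocolEngine("PE0", 0x0000_0000, 0x8000_0000, LOCAL),
    ProtocolEngine("PE1", 0x8000_0000, 0x1_0000_0000, REMOTE),
])

print(ic.select(0x1000).name)        # PE0 (first subset)
print(ic.select(0x9000_0000).name)   # PE1 (second subset)
```

Because the subsets are disjoint, the destination address alone determines which engine handles a transaction, so transactions to different regions can be processed concurrently by different engines.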
8 Claims
Claim 1 is set forth above; claims 2-8 depend from claim 1.
Specification