Shared cache for point-to-point connected processing nodes
First Claim
1. A system comprising:
a first plurality of point-to-point connected processing nodes comprising:
a processing node configured to:
broadcast a cache-miss data request for a data item;
receive a first response to the cache-miss data request comprising the data item; and
discard, after receiving the first response, a second response to the cache-miss data request;
a second plurality of point-to-point connected processing nodes;
a passive backplane comprising:
a first shared memory point-to-point connecting the first plurality of processing nodes and the second plurality of processing nodes and configured to:
observe the cache-miss data request for the data item;
identify the data item within the first shared memory;
transmit the first response comprising the data item to the processing node; and
a second shared memory point-to-point connecting the first plurality of processing nodes and the second plurality of processing nodes, wherein the second shared memory is address-interleaved with the first shared memory; and
a third shared memory disposed within the second plurality of processing nodes and configured to:
snoop the first response; and
update a local version of the data item in the third shared memory based on the first response.
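The claim recites that the two backplane shared memories are address-interleaved, so each memory owns a disjoint share of the address space. A minimal sketch of one common interleaving scheme, assuming cache-line granularity; the `home_memory` helper and the 64-byte line size are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch (not from the patent): two address-interleaved
# shared memories at cache-line granularity.
LINE_SIZE = 64  # assumed cache-line size in bytes

def home_memory(address: int) -> int:
    """Return the index (0 or 1) of the shared memory that owns this address."""
    return (address // LINE_SIZE) % 2

# Consecutive cache lines alternate between the two shared memories,
# spreading miss traffic across both.
assert home_memory(0x0000) == 0
assert home_memory(0x0040) == 1  # next 64-byte line maps to the other memory
assert home_memory(0x0080) == 0
```

Under this scheme, only the memory whose index matches the requested address needs to answer a given cache-miss broadcast.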
Abstract
A shared cache is point-to-point connected to a plurality of point-to-point connected processing nodes, wherein the processing nodes may be integrated circuits or multiprocessing systems. In response to a local cache miss, a requesting processing node issues a broadcast for requested data which is observed by the shared cache. If the shared cache has a copy of the requested data, the shared cache forwards the copy of the requested data to the requesting processing node.
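The request/response flow in the abstract (broadcast on a local miss, accept the first response, discard any later duplicate) can be sketched as follows; the class and method names are illustrative, not from the patent:

```python
# Illustrative model of the abstract's flow: a requesting node broadcasts a
# cache-miss data request, keeps the first response, and discards later ones.

class RequestingNode:
    def __init__(self):
        self.cache = {}          # local cache: address -> data
        self.responses_seen = 0  # responses that arrived, kept or discarded
        self._satisfied = set()  # addresses whose miss is already answered

    def broadcast_miss(self, address, shared_caches):
        """Broadcast a cache-miss data request to every point-to-point peer."""
        for shared in shared_caches:
            shared.observe(address, self)

    def receive_response(self, address, data):
        """Accept only the first response; any second response is discarded."""
        self.responses_seen += 1
        if address in self._satisfied:
            return  # duplicate response: discarded
        self._satisfied.add(address)
        self.cache[address] = data

class SharedCache:
    def __init__(self, contents):
        self.contents = contents  # address -> data held by this shared cache

    def observe(self, address, requester):
        """Forward a copy of the requested data if this cache holds it."""
        if address in self.contents:
            requester.receive_response(address, self.contents[address])

node = RequestingNode()
node.broadcast_miss(0x40, [SharedCache({0x40: "lineA"}),
                           SharedCache({0x40: "lineA"})])
assert node.cache[0x40] == "lineA"  # first response satisfied the miss
assert node.responses_seen == 2     # second response arrived but was discarded
```

Keeping a per-address "satisfied" record is one simple way to make the discard step idempotent when multiple caches hold the requested line.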
18 Claims
1. A system comprising: (recited in full under First Claim above)
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10
11. A computer-implemented method comprising:
detecting, by a processing node of a first plurality of point-to-point connected processing nodes, a cache miss;
broadcasting, by the processing node and in response to the cache miss, a cache-miss data request for a data item;
receiving, by a memory selected from a group consisting of a first shared memory and a second shared memory residing on a passive backplane, the data request for the data item, wherein the first shared memory is address-interleaved with the second shared memory, and wherein the first shared memory and the second shared memory are connected to the first plurality of processing nodes and a second plurality of processing nodes by a plurality of point-to-point connections of the passive backplane;
generating, by the memory, a first response comprising the data item to satisfy the data request;
receiving, by the processing node, the first response comprising the data item;
discarding, by the processing node, a second response to satisfy the data request after receiving the first response;
snooping the first response by a third shared memory disposed within the second plurality of processing nodes; and
updating, by the third shared memory, a local version of the data item based on the first response.
Dependent claims: 12, 13, 14, 15, 16, 17, 18
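The final two method steps, a third shared memory snooping the first response and updating its local version of the data item, might look like the sketch below; the `SnoopingSharedMemory` class is a hypothetical illustration, not the patent's implementation:

```python
# Hypothetical illustration of the snoop-and-update steps: a third shared
# memory observes responses in flight and refreshes any local copy it holds.

class SnoopingSharedMemory:
    def __init__(self, local):
        self.local = dict(local)  # local versions of data items: address -> data

    def snoop(self, address, data):
        """Observe a response on the interconnect; update the local copy if present."""
        if address in self.local:
            self.local[address] = data

third = SnoopingSharedMemory({0x80: "stale"})
third.snoop(0x80, "fresh")   # response for a tracked item: local copy updated
third.snoop(0xC0, "other")   # item not held locally: snoop has no effect
assert third.local == {0x80: "fresh"}
```

Because the snooper passively observes the first response already on the wire, the local copy stays current without issuing any additional request traffic.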
Specification