CACHE COHERENCE UNIT FOR INTERCONNECTING MULTIPROCESSOR NODES HAVING PIPELINED SNOOPY PROTOCOL
2 Assignments
0 Petitions
Abstract
The present invention consists of a cache coherence protocol within a cache coherence unit for use in a data processing system. The data processing system comprises multiple nodes, each node having a plurality of processors with associated caches, a memory, and input/output. The processors within the node are coupled to a memory bus operating according to a “snoopy” protocol. This invention includes a cache coherence protocol for a sparse directory in combination with the multiprocessor nodes. In addition, the invention has the following features: the current state and information from the incoming bus request are used to make an immediate decision on actions and next state; the decision mechanism for outgoing coherence is pipelined to follow the bus; and the incoming coherence pipeline acts independently of the outgoing coherence pipeline.
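The abstract's "immediate decision" feature, in which the current directory state and the incoming bus request together select both the coherence actions and the next state in a single step, can be sketched as one table lookup. This is an illustrative model only; the state names, request types, and action strings below are assumptions, not taken from the patent.

```python
from enum import Enum

class State(Enum):
    INVALID = 0
    SHARED = 1
    MODIFIED = 2

# (current_state, request) -> (actions, next_state)
# A flat table: no intermediate states and no retries, so the decision
# can be made in one pass as the abstract describes.
DECISION_TABLE = {
    (State.INVALID,  "read"):  (["fetch_from_home"],    State.SHARED),
    (State.INVALID,  "write"): (["fetch_exclusive"],    State.MODIFIED),
    (State.SHARED,   "read"):  ([],                     State.SHARED),
    (State.SHARED,   "write"): (["invalidate_sharers"], State.MODIFIED),
    (State.MODIFIED, "read"):  (["writeback", "share"], State.SHARED),
    (State.MODIFIED, "write"): ([],                     State.MODIFIED),
}

def decide(current_state, request):
    """Immediate decision: current state + bus request -> actions, next state."""
    return DECISION_TABLE[(current_state, request)]
```

Because the decision is a pure lookup on the incoming request, it can be pipelined to follow the bus phases, which is the second feature the abstract lists.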
77 Citations
33 Claims
1. A data processing system comprising:
a plurality of multiprocessor nodes;
a plurality of coherence units coupled to said multiprocessor nodes; and
a node interconnect coupled to said coherence units.
2. A cache coherence unit for use in a data processing system having multiple nodes each having a plurality of processors coupled to a memory bus operating according to a snoopy protocol, each processor having an associated cache memory for caching information, said cache coherence unit having an address phase, a snoop phase, and a response phase, and comprising:
a bus interface element coupled to said memory bus;
a coherence control element coupled to said bus interface element and coupled to said cache memory; and
a directory coupled to said coherence control element for storing information of locations in said cache memory.

Dependent claims: 3, 4, 5, 6, 7, 8, 9.
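Claim 2's directory, which the abstract identifies as a sparse directory, tracks state for only a bounded number of cached blocks rather than for all of memory. A minimal sketch, assuming invented field names and a trivial eviction policy:

```python
class SparseDirectory:
    """Sparse directory: state is kept only for blocks currently cached
    remotely; an absent entry means the block is uncached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # block address -> (state, sharer node ids)

    def lookup(self, addr):
        # A miss in a sparse directory is meaningful: no remote copies exist.
        return self.entries.get(addr, ("uncached", set()))

    def update(self, addr, state, sharers):
        if addr not in self.entries and len(self.entries) >= self.capacity:
            # Sparse: make room by evicting an existing entry (policy assumed;
            # a real unit would first invalidate the victim's cached copies).
            victim = next(iter(self.entries))
            del self.entries[victim]
        self.entries[addr] = (state, set(sharers))
```

The capacity bound is what makes the directory "sparse": storage grows with the number of concurrently cached blocks, not with memory size.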
10. A method for maintaining cache coherence in a cache coherence unit in a data processing system having multiple nodes each having a plurality of processors coupled to a memory bus operating according to a snoopy protocol having an address phase, a snoop phase, and a response phase, each processor having an associated cache memory, comprising the steps of:
coupling a bus interface element to said memory bus;
coupling a coherence control element to said bus interface element and to said cache memory; and
storing information of said cache memory locations in a directory coupled to said coherence control element.

Dependent claims: 11, 12, 13.
14. A data processing system comprising:
a first node having a first memory coupled to a first memory bus, a plurality of processors coupled to said first memory bus, each processor having a respective cache and said memory bus operated according to a snoopy bus protocol for maintaining coherence between said caches, said snoopy bus protocol having an address phase, a snoop phase, and a response phase;
a second node having a second memory coupled to a second memory bus, a plurality of processors coupled to said second memory bus, each processor having a respective cache and said memory bus operated according to said snoopy bus protocol;
a first internode communication unit coupled to said first memory bus and having a first directory for indicating states of cached blocks of said first and second memories and a coherence control element coupled to said first bus and coupled to said first directory for reading state information from said first directory after said address phase and before said snoop phase and updating said state information before a next snoop phase; and
a second internode communication unit coupled to said second memory bus and coupled to said first internode communication unit, said first memory being accessible to said second node and said second memory being accessible to said first node, and said second internode communication unit having a second directory for indicating states of cached blocks of said second memory and a coherence control element coupled to said second bus and coupled to said second directory for reading state information from said second directory after said address phase and before said snoop phase and updating said state information before a next snoop phase.

Dependent claims: 15, 16, 17, 18.
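Claim 14's timing constraint, reading the directory after the address phase and before the snoop phase, and writing the updated state back before the next snoop phase, can be modeled as a small loop over bus transactions. The phase names come from the claims; the request format and state-transition function are assumptions:

```python
def run_transactions(requests, directory, next_state_fn):
    """Process (addr, request) pairs through address/snoop/response phases.

    The directory is read between the address and snoop phases, and the
    updated state is written back before the snoop phase begins, so it is
    in place well before the *next* transaction's snoop phase.
    """
    log = []
    for addr, req in requests:
        log.append(("address", addr))              # address phase
        state = directory.get(addr, "invalid")     # read: after address, before snoop
        directory[addr] = next_state_fn(state, req)  # update before next snoop phase
        log.append(("snoop", addr, state))         # snoop phase
        log.append(("response", addr, directory[addr]))  # response phase
    return log
```

Because each update lands before the following snoop phase, back-to-back transactions to the same block always snoop against current state, with no stall between them.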
19. A method of maintaining cache coherency within a data processing system having a plurality of nodes and an interconnect, each node having a plurality of multiprocessors with associated caches and memory, a memory bus controlled by a snoopy bus protocol, a directory for storing state information of cached memory lines, and a mesh interface controlled by a mesh protocol, comprising the steps of:
receiving a bus request from said memory bus for control of said cached memory line;
reading said state information of said cache line;
updating said state information of said cache line;
transforming said bus request into a network request; and
forwarding said network request to said mesh interface.

Dependent claims: 20.
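The steps of claim 19 can be sketched end to end: receive a bus request, read and update directory state, transform the bus request into a network request, and forward it to the mesh interface. The request and packet field names here are illustrative assumptions, not from the patent.

```python
def handle_bus_request(bus_req, directory, mesh_interface):
    """Claim 19's five steps over one cached memory line."""
    addr = bus_req["addr"]
    state = directory.get(addr, "invalid")   # read state information
    directory[addr] = "pending"              # update state information
    network_req = {                          # transform bus request -> network request
        "type": "mesh_read" if bus_req["op"] == "read" else "mesh_write",
        "addr": addr,
        "src_node": bus_req["node"],
        "prior_state": state,
    }
    mesh_interface.append(network_req)       # forward to the mesh interface
    return network_req
```

Marking the line "pending" before the network request departs is one plausible way to keep later bus requests for the same line from racing the outstanding mesh transaction.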
21. A cache coherence unit within a data processing system having a plurality of nodes, each node having a plurality of processors coupled to a memory bus operating according to a snoopy bus protocol, each processor having an associated cache, and coupled to an interconnect, said cache coherence unit comprising:
a control element for arbitrating with said memory bus;
a coherence control unit coupled to said control element for maintaining coherence of cached memory locations of bus requests received from said memory bus;
a directory coupled to said coherence control unit for storing state information of said cached memory locations;
a first interface unit coupled to said coherence control unit for transforming said bus requests into mesh-out requests;
a mesh interface coupled to said first interface unit for transferring said mesh-out requests to said interconnect;
a second interface unit coupled to said mesh interface for receiving mesh-in requests from said interconnect; and
a transfer agent coupled to said second interface unit and coupled to said memory bus for transferring said mesh-in requests from said second interface unit to said memory bus.

Dependent claims: 22, 23, 24, 25.
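Claim 21 routes outgoing traffic (bus requests to mesh-out requests) and incoming traffic (mesh-in requests to the transfer agent and bus) through separate interface units, matching the abstract's statement that the incoming coherence pipeline acts independently of the outgoing one. A minimal sketch with two independent queues (class and method names are assumptions):

```python
from collections import deque

class CoherenceUnit:
    """Two independent paths: neither direction blocks the other."""

    def __init__(self):
        self.mesh_out = deque()  # first interface unit -> interconnect
        self.mesh_in = deque()   # interconnect -> transfer agent -> memory bus

    def accept_bus_request(self, req):
        # Outgoing pipeline: transform and queue for the interconnect.
        self.mesh_out.append(("mesh_out", req))

    def accept_mesh_request(self, req):
        # Incoming pipeline: queue for the transfer agent, independently.
        self.mesh_in.append(("to_bus", req))

    def drain_outgoing(self):
        return [self.mesh_out.popleft() for _ in range(len(self.mesh_out))]

    def drain_incoming(self):
        return [self.mesh_in.popleft() for _ in range(len(self.mesh_in))]
```

Separate queues mean a backlog of mesh-in requests never stalls mesh-out requests, and vice versa, which helps avoid protocol deadlock between nodes.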
26. A cache coherence unit within a data processing system having a plurality of nodes and an interconnect, each node having a plurality of multiprocessors with associated caches and memory, a memory bus controlled by a snoopy bus protocol, a directory for storing state information of cached memory lines, and a mesh interface controlled by a mesh protocol, comprising:
a control unit coupled to said memory bus for receiving bus requests;
means for reading said state information of said cache line;
means for updating said state information of said cache line; and
a mesh interface for forwarding said bus requests to said interconnect.

Dependent claims: 27.
28. A method for maintaining cache coherence in a multi-node data processing system, comprising the steps of:
receiving a request for a cache line from a local node;
determining the status of said cache line; and
sending a copy of said cache line to said local node.

Dependent claims: 29, 30.
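The three steps of claim 28 can be sketched as a handler at the node that owns the requested line: receive the request, determine the line's status, and send a copy back. Memory contents and status values are invented for illustration; the dirty-line case is only noted, since the claim does not spell out the recall mechanism.

```python
def serve_line_request(addr, memory, directory):
    """Receive a request for a cache line, determine its status,
    and return a copy for the requesting local node."""
    status = directory.get(addr, "clean")      # determine the status
    if status == "dirty":
        # A real unit would first recall the modified copy from the
        # owning cache before replying; omitted in this sketch.
        raise NotImplementedError("recall from owning node")
    return ("data_reply", addr, memory[addr])  # send a copy to the local node
```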
31. A system for maintaining cache coherence in a multi-node data processing system, comprising:
means for receiving a request for a cache line from a local node;
means for determining the status of said cache line; and
means for sending a copy of said cache line to said local node.

Dependent claims: 32, 33.
Specification