Removal of posted operations from cache operations queue
Abstract
A method of avoiding deadlocks in cache coherency protocol for a multi-processor computer system, by loading a memory value into a plurality of cache blocks, assigning a first coherency state having a higher collision priority to only one of the cache blocks, and assigning one or more additional coherency states having lower collision priorities to all of the remaining cache blocks. Different system bus codes can be used to indicate the priority of conflicting requests (e.g., DClaim operations) to modify the memory value. The invention also allows folding or elimination of redundant DClaim operations, and can be applied in a global versus local manner within a multi-processor computer system having processing units grouped into at least two clusters.
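The deadlock-avoidance idea in the abstract can be modeled in a short sketch: the shared value is cached in several places, exactly one copy is assigned a state with higher collision priority, and conflicting DClaim requests are resolved in favor of the higher-priority holder. This is a hypothetical Python model, not the patented implementation; the state names ("R" for the higher-priority state, "S" for shared, "I" for invalid) and the tie-breaking rule are assumptions for illustration.

```python
# Hypothetical model of the priority scheme described in the abstract.
PRIORITY = {"R": 2, "S": 1, "I": 0}  # assumed states and priorities

def assign_states(sharers):
    """Give the higher-priority state to exactly one sharer;
    all remaining copies get the lower-priority shared state."""
    return {c: ("R" if i == 0 else "S") for i, c in enumerate(sharers)}

def resolve_dclaim_collision(states, requesters):
    """When several caches issue conflicting DClaim requests, the
    requester whose copy holds the highest collision priority wins;
    the others must withdraw or retry."""
    return max(requesters, key=lambda c: PRIORITY[states[c]])
```

Because only one copy ever holds the high-priority state, at most one requester can win a collision outright, which is what removes the circular-wait condition behind the deadlock.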
20 Claims
1. A method of managing a cache operations queue in a multi-processor system, comprising the steps of:
loading a first cache operation in the cache operations queue to request modification of a value already held in a cache block of a cache associated with the queue, wherein the value also corresponds to a memory block of a system memory device;
loading a second cache operation in the cache operations queue to request writing of a new value for the cache block, wherein the second cache operation is loaded after the first cache operation; and
removing the first cache operation from the queue without executing the first cache operation in response to said step of loading the second cache operation in the queue. - View Dependent Claims (2, 3, 4, 5, 6)
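The steps of claim 1 amount to a folding rule on the queue: a later write to a block makes a still-pending modification request (e.g., a DClaim) for that block redundant, so the pending request is discarded rather than executed. A minimal Python sketch under assumed names (`Op` and `CacheOpsQueue` are illustrative, not structures from the patent):

```python
from dataclasses import dataclass

@dataclass
class Op:
    kind: str    # "modify" (e.g., a DClaim) or "write"
    block: int   # cache block the operation targets

class CacheOpsQueue:
    def __init__(self):
        self.ops = []  # pending operations, oldest first

    def load(self, op):
        if op.kind == "write":
            # Folding rule: a pending modification request for the
            # same block is removed without ever being executed,
            # since the incoming write makes it redundant.
            self.ops = [p for p in self.ops
                        if not (p.kind == "modify" and p.block == op.block)]
        self.ops.append(op)
```

Folding saves a bus transaction and, per the abstract, eliminates one source of conflicting DClaim operations.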
7. A method of managing a cache operations queue in a multi-processor system, comprising the steps of:
assigning a first cache coherency state having a first collision priority to the cache block, and assigning one or more additional cache coherency states having one or more additional collision priorities which are lower than the first collision priority to one or more additional cache blocks of one or more additional caches other than the associated cache;
thereafter, loading a first cache operation in the cache operations queue to request modification of a value already held in a cache block of a cache associated with the queue, wherein the value also corresponds to a memory block of a system memory device;
loading a second cache operation in the cache operations queue to request writing of a new value for the cache block, wherein the second cache operation is loaded after the first cache operation; and
removing the first cache operation from the queue in response to said step of loading the second cache operation in the queue. - View Dependent Claims (8, 9, 10)
writing the new value from the second cache operation to the memory block; and
assigning an invalid cache coherency state to the one or more additional cache blocks in response to said writing step.
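The two steps above (write the new value back to the memory block, then invalidate the remaining cached copies) can be sketched as follows; the dictionary-based memory and per-block state directory are assumptions of this toy model, not structures from the patent:

```python
def complete_write(memory, states, block, new_value, writer):
    # Step 1: write the new value from the second cache operation
    # back to the corresponding memory block.
    memory[block] = new_value
    # Step 2: assign the invalid state ("I") to every other cached
    # copy, so only the writer retains a usable copy of the value.
    for cache_id in states[block]:
        if cache_id != writer:
            states[block][cache_id] = "I"
```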
10. The method of claim 9 further comprising the step of protecting the cache block from other access attempts after said removing step, until the second cache operation is completed, by issuing retry messages in response to broadcasts of requests to modify the value from the one or more additional caches.
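The protection window of claim 10 can be modeled as a snoop handler: while the write to the block is still pending in the queue, any foreign request to modify that block is answered with a retry message and must be reissued later. A hypothetical sketch (the `Op` representation is an assumption, as before):

```python
from dataclasses import dataclass

@dataclass
class Op:
    kind: str    # "modify" or "write"
    block: int

def snoop_response(pending_ops, request):
    """Answer a snooped bus request from another cache.

    While a write to the requested block is still pending here, a
    foreign attempt to modify that block receives a retry message;
    otherwise the request may proceed.
    """
    for op in pending_ops:
        if op.kind == "write" and op.block == request.block:
            return "retry"
    return "ok"
```

This keeps the block protected even though the original modification request was already removed from the queue.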
11. A computer system comprising:
a memory device;
a bus connected to said memory device;
a plurality of processing units connected to said bus, each processing unit having a cache, and each cache having a cache operations queue and a plurality of cache blocks for storing data values associated with respective memory blocks of said memory device; and
cache coherency means for (i) loading a first cache operation in a first one of said cache operations queues to request modification of a value already held in a first one of said cache blocks, wherein said first cache block is in a first one of said caches, and said first cache is associated with said first queue, (ii) loading a second cache operation in said first queue to request writing of a new value for said first cache block, wherein said second cache operation is loaded after said first cache operation, and (iii) removing said first cache operation from said first queue without executing the first cache operation in response to said loading of said second cache operation in said first queue. - View Dependent Claims (12, 13, 14)
15. A computer system comprising:
a memory device;
a bus connected to said memory device;
a plurality of processing units connected to said bus, each processing unit having a cache, and each cache having a cache operations queue and a plurality of cache blocks for storing data values associated with respective memory blocks of said memory device; and
cache coherency means for (i) loading a first cache operation in a first one of said cache operations queues to request modification of a value already held in a first one of said cache blocks, wherein said first cache block is in a first one of said caches, and said first cache is associated with said first queue, (ii) loading a second cache operation in said first queue to request writing of a new value for said first cache block, wherein said second cache operation is loaded after said first cache operation, and (iii) removing said first cache operation from said first queue in response to said loading of said second cache operation in said first queue, wherein said cache coherency means includes means for assigning a first cache coherency state having a first collision priority to said first cache block, and assigning one or more additional cache coherency states having one or more additional collision priorities which are lower than said first collision priority to one or more additional cache blocks of one or more additional caches other than said first cache which share the value, prior to said loading of said first and second cache operations. - View Dependent Claims (16, 17, 18, 19, 20)
an invalid state;
an exclusive state; and
a modified state.
17. The computer system of claim 15 wherein said cache coherency means includes means for writing the new value from said first cache to a first one of said memory blocks, and assigning an invalid cache coherency state to said one or more additional cache blocks in response to said writing of the new value.
18. The computer system of claim 17 wherein said first cache operations queue includes means for protecting said first cache block from other access attempts after said removing of said first cache operation, until said second cache operation is completed, by issuing retry messages in response to broadcasts of requests to modify the value from said one or more additional caches.
19. The computer system of claim 15 wherein said cache coherency means includes means for withdrawing a request from said one or more additional caches to claim a first one of said memory blocks which corresponds to said first cache block, when said request conflicts with another request from said first cache to claim said corresponding memory block.
20. The computer system of claim 15 wherein said first cache coherency state is a recently read state.