Memory cache with automatic aliased entry invalidation and method of operation
Abstract
A memory cache (14) has a semi-associative cache array (50), a cache reload buffer (40), and a cache reload buffer driver (42). The memory cache writes received data to the cache reload buffer and waits until the data is requested again before it invalidates any aliased entries in the semi-associative cache array. This invalidation step requires no dedicated cycle; instead, it falls out of the memory cache's ability to read from the semi-associative cache array and the cache reload buffer simultaneously.
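The claimed behavior can be sketched as a toy software model. All class names, field names, sizes, and the way the index is split are illustrative assumptions, not taken from the patent; real hardware performs the CAM compare, bit-line drive, and invalidation within a single array access rather than in sequential code:

```python
class Line:
    """One cache line: a CAM ('first') tag, a real ('second') tag,
    a data field, and a valid bit."""
    def __init__(self):
        self.valid = False
        self.cam_tag = None
        self.real_tag = None
        self.data = None


class SemiAssociativeCache:
    def __init__(self, n_camlets=4, lines_per_camlet=4):
        self.camlets = [[Line() for _ in range(lines_per_camlet)]
                        for _ in range(n_camlets)]
        self.crb_tag = None    # cache reload buffer ('third') tag
        self.crb_data = None

    def reload(self, tag, data):
        # Reloaded data goes only to the reload buffer;
        # no array entry is written or invalidated yet.
        self.crb_tag, self.crb_data = tag, data

    def lookup(self, camlet_sel, cam_subset, crb_subset):
        camlet = self.camlets[camlet_sel]   # first index subset picks a camlet
        hit = None
        for line in camlet:                 # CAM compare of the second subset
            if line.valid and line.cam_tag == cam_subset:
                hit = line.data
        if self.crb_tag is not None and self.crb_tag == crb_subset:
            # The reload buffer driver wins the bit lines, and the
            # aliased array entry is invalidated in the same access,
            # so no dedicated invalidation cycle is needed.
            for line in camlet:
                if line.cam_tag == cam_subset:
                    line.valid = False
            return self.crb_data
        return hit
```

In this sketch, a stale aliased line keeps serving hits until the reloaded data is actually requested; only that request both returns the fresh data and clears the alias's valid bit.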
18 Claims
1. A memory cache comprising:
- a semi-associative cache array storing a plurality of sets, each one of the plurality of sets further comprising a first tag, a second tag, a data field, and a means for indicating the validity of the one of the plurality of the sets, the semi-associative cache array receiving a first subset and a second subset of an index, the first subset specifying a group of the plurality of sets, the semi-associative cache outputting the second tag and the data field of a selected one of the plurality of sets, the selected one of the plurality of sets being a member of the group, the first tag of the selected one of the plurality of sets being logically equivalent to the second subset, the data field of the selected one output on a plurality of bit lines;
- a cache reload buffer receiving a data field from an external source and a third tag, the cache reload buffer storing the third tag and the data field; and
- a cache reload buffer driver coupled to the cache reload buffer and to the semi-associative cache array, the cache reload buffer driver coupling the data field of the cache reload buffer to the plurality of bit lines if the third tag and a third subset of the index are logically equivalent, wherein the cache array sets the means for indicating the validity of the selected one to an invalid state upon an equivalence of the third tag and the third subset.

(Dependent claims: 2, 3, 4, 5)
6. A memory cache comprising:
- a memory management unit translating a received input address into a real tag;
- a cache array coupled to the memory management unit comprising:
  - a plurality of bit lines;
  - a plurality of camlets, each camlet receiving a first subset of the received input address, the first subset selecting one of the plurality of camlets, a selected camlet, each camlet comprising a plurality of cache lines, each cache line comprising:
    - a plurality of content addressable memory bit cells storing a first cache line tag, the plurality of content addressable memory bit cells receiving a second subset of the received input address, the plurality of content addressable memory cells asserting a first control signal if the second subset and the first cache line tag are logically equivalent and if the plurality of content addressable memory bit cells are a member of the selected camlet;
    - a plurality of cache line bit cells storing data and a second cache line tag;
    - a wordline driver, the wordline driver of the selected camlet coupling a differing one of the plurality of bit lines to a differing one of the plurality of cache line bit cells responsive to the assertion of the first control signal;
- a cache reload buffer storing a cache reload buffer tag and a data field in a plurality of bit cells; and
- a cache reload buffer driver coupled to the cache reload buffer and to the cache array, the cache reload buffer driver coupling a differing one of the plurality of bit lines to a differing one of the plurality of cache reload buffer bit cells if the cache reload buffer tag and a third subset of the received input address are logically equivalent, and the cache array simultaneously invalidating one of the plurality of cache lines if the second subset is equivalent to the first cache line tag of the one of the plurality of cache lines.

(Dependent claims: 7, 8, 9, 10, 11, 12)
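The camlet selection and CAM compare recited in claim 6 can be mimicked in software; the function and field names below are assumptions chosen for illustration, not terms from the patent:

```python
def wordlines(camlet_lines, cam_subset, selected):
    """Return one 'first control signal' per cache line: asserted when
    the line's stored CAM tag matches the address subset and the line
    belongs to the selected camlet."""
    return [selected and line["cam_tag"] == cam_subset
            for line in camlet_lines]
```

In hardware, each asserted signal drives that line's wordline driver, coupling its bit cells onto the shared bit lines; in an unselected camlet every signal stays deasserted regardless of tag matches.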
13. A method of operating a memory cache comprising the steps of:
- first receiving a data field external to the memory cache and a first tag in a cache reload buffer;
- storing the data field and the first tag in the cache reload buffer;
- second receiving a first subset and a second subset of an index in a semi-associative cache array, the semi-associative cache array storing a plurality of sets of a second tag, a third tag and a data field;
- selecting a group of the plurality of sets responsive to the first subset, a selected group;
- first comparing the second subset and a plurality of second tags of the selected group;
- outputting the data field of a selected one of the plurality of sets on a plurality of bit lines, the selected one being a member of the selected group, the second tag of the selected one being logically equivalent to the second subset;
- third receiving a third subset of the index in the cache reload buffer; and
- coupling the data field of the cache reload buffer to the plurality of the bit lines with a cache reload buffer driver if the first tag and a third subset of the index are logically equivalent and simultaneously invalidating one of the plurality of sets if the second tag of the one of the plurality of sets is logically equivalent to the second subset.

(Dependent claims: 14, 15, 16, 17, 18)
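The steps of the method claim can be traced in a small sketch. The dictionary layout and every name below are assumptions made for illustration only; the claim's "simultaneously" is sequential here because software cannot reproduce the single-cycle hardware behavior:

```python
def cache_access(sets, crb, index):
    first, second, third = index                        # subsets of the index
    group = [s for s in sets if s["group"] == first]    # selecting step
    out = None
    for s in group:                                     # first comparing step
        if s["valid"] and s["second_tag"] == second:
            out = s["data"]                             # outputting step
    if crb["tag"] == third:                             # third receiving / coupling
        for s in group:
            if s["second_tag"] == second:
                s["valid"] = False                      # "simultaneous" invalidation
        out = crb["data"]
    return out
```

A single call thus performs both the reload-buffer read and the aliased-set invalidation, mirroring the claim's point that no separate invalidation pass over the array is required.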
Specification