Fair sharing of a cache in a multi-core/multi-threaded processor by dynamically partitioning the cache
Abstract
An apparatus and method for fairly sharing a cache among multiple resources, such as multiple cores, multiple threads, or both, are described herein. A resource within a microprocessor sharing access to a cache is assigned a static portion of the cache and a dynamic portion. The resource is blocked from victimizing static portions assigned to other resources, yet is allowed to victimize the static portion assigned to it and the dynamically shared portion. If the resource does not access the cache enough times over a period of time, the static portion assigned to the resource is reassigned to the dynamically shared portion.
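The partitioning scheme the abstract describes can be sketched as a small software model. All names below (`PartitionedCache`, `may_victimize`, and so on) are illustrative assumptions for exposition, not terms from the patent:

```python
# Illustrative model of the fair-sharing scheme in the abstract.
# The patent does not specify an implementation; this is a sketch.

class PartitionedCache:
    def __init__(self, num_ways, resources, static_ways_per_resource):
        # Assign each resource its own static ways; the rest are shared.
        self.static = {}  # resource -> set of ways only it may victimize
        way = 0
        for r in resources:
            self.static[r] = set(range(way, way + static_ways_per_resource))
            way += static_ways_per_resource
        self.shared = set(range(way, num_ways))  # dynamically shared portion

    def may_victimize(self, resource, way):
        # A resource may evict from its own static ways or the shared
        # ways, but never from another resource's static ways.
        return way in self.static[resource] or way in self.shared

    def reassign_to_shared(self, resource):
        # If a resource under-uses the cache, fold its static ways
        # into the shared portion.
        self.shared |= self.static[resource]
        self.static[resource] = set()
```

With 8 ways, two resources, and 2 static ways each, `core0` holds ways 0-1, `core1` holds ways 2-3, and ways 4-7 are shared; `may_victimize("core0", 2)` is denied because way 2 belongs to `core1`'s static portion.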
128 Citations
56 Claims
1. An integrated circuit comprising:
a cache having a plurality of static portions and a dynamically shared portion; and
a first number of computing resources, each computing resource operable to victimize one of the static portions of the cache assigned to the computing resource and the dynamically shared portion. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 54)
14. A microprocessor comprising:
a first resource having an associated first resource identifier (ID);
a second resource having an associated second resource ID;
a cache logically organized into a plurality of ways; and
a blocking mechanism to block the second resource from victimizing a first number of ways, of the plurality of ways, based at least in part on the second resource ID, to block the first resource from victimizing a second number of ways, of the plurality of ways, based at least in part on the first resource ID, and to allow the first and second resources to victimize a third number of ways, of the plurality of ways. - View Dependent Claims (15, 16, 17, 18, 19, 20, 21, 22, 23, 24)
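One plausible realization of such a blocking mechanism is a per-resource way mask indexed by resource ID, with clear bits marking blocked ways. The sketch below is an assumption for illustration; the mask values and granularity are not claim limitations:

```python
# Hypothetical bitmask model of the blocking mechanism in claim 14.
# Each resource ID indexes a mask of ways it may victimize; a clear
# bit means the resource is blocked from that way.

NUM_WAYS = 8

# Example assignment: ways 0-1 reserved for resource 0, ways 2-3 for
# resource 1, ways 4-7 victimizable by both.
ALLOWED_MASK = {
    0: 0b11110011,  # resource ID 0: blocked from ways 2-3
    1: 0b11111100,  # resource ID 1: blocked from ways 0-1
}

def is_blocked(resource_id, way):
    """Return True if this resource must not victimize this way."""
    return (ALLOWED_MASK[resource_id] >> way) & 1 == 0
```

A mask check like this is cheap enough to sit in the victim-selection path of the replacement logic.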
25. An apparatus comprising:
a cache;
a first computing resource to access a first statically assigned portion of the cache and a dynamic portion of the cache;
a second computing resource to access a second statically assigned portion of the cache and the dynamic portion of the cache;
a counter to count a first number of accesses to the cache by the first computing resource over a period of time and a second number of accesses to the cache by the second computing resource over the period of time; and
logic operable to decrease the first statically assigned portion of the cache by a size and increase the dynamic portion of the cache by the size, if at the end of the period of time, the first number of accesses is less than a predetermined number, and decrease the second statically assigned portion of the cache by the size and increase the dynamic portion of the cache by the size, if at the end of the period of time, the second number of accesses is less than the predetermined number. - View Dependent Claims (26, 27, 28, 29, 30, 31)
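The counter-driven repartitioning of claim 25 can be modeled as follows; the window length, threshold, and one-way shrink granularity are illustrative assumptions, not values from the claims:

```python
# Sketch of the counter-and-repartition logic of claim 25.
# Sizes are in ways; names and constants are assumptions.

THRESHOLD = 100  # the "predetermined number" of accesses (illustrative)

class Repartitioner:
    def __init__(self, static_sizes, dynamic_size):
        self.static_sizes = dict(static_sizes)  # resource -> static ways
        self.dynamic_size = dynamic_size        # shared ways
        self.counters = {r: 0 for r in static_sizes}

    def record_access(self, resource):
        self.counters[resource] += 1

    def end_of_period(self, shrink_by=1):
        # For every under-used resource, move `shrink_by` ways from its
        # static portion into the dynamic portion, then reset counters.
        for r, count in self.counters.items():
            if count < THRESHOLD and self.static_sizes[r] >= shrink_by:
                self.static_sizes[r] -= shrink_by
                self.dynamic_size += shrink_by
        self.counters = {r: 0 for r in self.counters}
```

Note that the claim only shrinks static portions toward the dynamic pool; growing a static portion back is covered elsewhere (see the dependent claims).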
32. A system comprising:
a system memory comprising a plurality of memory locations to store elements, each memory location referenced by a physical address; and
a microprocessor coupled to the system memory comprising:
an address translation unit to translate virtual memory addresses to physical addresses, the physical addresses referencing the plurality of memory locations;
a cache logically organized into a plurality of ways to store recently fetched elements from the plurality of memory locations;
a plurality of resources assigned a dynamically shared first number of ways, of the plurality of ways, wherein each resource is also assigned a static second number of ways, of the plurality of ways; and
logic to reassign at least one of the static second number of ways assigned to a first resource, of the plurality of resources, to the dynamically shared first number of ways, if the first resource does not access the cache a predetermined number of times over a period of time. - View Dependent Claims (33, 34, 35, 36, 37, 38, 39)
40. A method comprising:
generating an address associated with an instruction scheduled for execution on a first resource, the address referencing a memory location of an element;
requesting the element from a cache;
determining if the element is present in the cache; and
if the element is not present in the cache, allowing the first resource to victimize at least a first way of the cache assigned to the first resource and at least a second way of the cache shared by at least the first and a second resource, and blocking the first resource from victimizing at least a third way of the cache assigned to the second resource. - View Dependent Claims (41, 42, 43, 44, 45, 46, 47, 48, 49, 50)
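The method steps above (generate an address, request the element, check for presence, and on a miss victimize only permitted ways) can be sketched as a minimal flow. Victim selection here simply takes the first allowed way; a real design would use a replacement policy such as LRU restricted to the allowed ways. All names are assumptions:

```python
# Sketch of the lookup/victimization flow of claim 40.
# The cache is modeled as way -> (tag, data) or None when empty.

def access(cache, allowed_ways, tag, fetch):
    """Look up `tag`; on a miss, victimize only a way the requesting
    resource may evict (its static ways or the shared ways)."""
    for way, entry in cache.items():
        if entry is not None and entry[0] == tag:
            return entry[1]                 # hit: element already present
    victim = allowed_ways[0]                # placeholder victim choice
    cache[victim] = (tag, fetch(tag))       # fill the line after the miss
    return cache[victim][1]
```

Because `allowed_ways` excludes the other resource's static ways, a miss can never evict that resource's statically assigned lines.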
51. A method comprising:
assigning a first way of a cache to a first computing resource of a plurality of computing resources, each computing resource being assigned at least one way of the cache;
assigning a dynamically shared portion of the cache to the plurality of computing resources;
counting the number of accesses to the cache made by the first computing resource over a first period of time; and
re-assigning the first way to the dynamically shared portion of the cache, if the number of accesses made by the first computing resource over the first period of time is less than a predetermined number. - View Dependent Claims (52, 53, 55, 56)
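Unlike the size-based repartitioning of claim 25, claim 51 moves a single named way into the shared portion. A minimal sketch of that step, with all container names assumed for illustration:

```python
# Way-granular variant of the reassignment step in claim 51: one
# specific way moves from a resource's static set to the shared set.

def reassign_if_idle(static_ways, shared_ways, resource, way,
                     accesses, threshold):
    """Re-assign `way` to the shared portion if `resource` made fewer
    than `threshold` accesses over the period."""
    if accesses < threshold and way in static_ways[resource]:
        static_ways[resource].remove(way)
        shared_ways.add(way)
```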