Method for dynamically allocating LRU/MRU managed memory among concurrent sequential processes
Abstract
Short traces of consecutive CPU references to storage are accumulated and processed to ascertain the hit ratio as a function of cache size. From this determination, an allocation of cache space can be made. Because the determination requires minimal processing time, a CPU cache manager can dynamically apportion LRU-referenceable memory space among concurrently executing sequential processes.
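The determination the abstract describes is the classic single-pass LRU stack analysis: because LRU caches of increasing size obey the inclusion property, one LRU stack simulates every cache size at once, and a reference hits a cache of size c exactly when its stack distance is at most c. The sketch below illustrates that idea in Python; the function name, the list-based stack, and the toy trace are illustrative assumptions, not the patent's embodiment.

```python
from collections import Counter

def hit_ratio_curve(trace, max_size):
    """One-pass LRU stack analysis (illustrative sketch).

    Returns the hit ratio as a function of cache size: curve[c] is the
    hit ratio a cache holding c items would have achieved on `trace`.
    """
    stack = []             # stack[0] is the most recently used item
    distances = Counter()  # histogram of stack distances (1-based)
    for item in trace:
        if item in stack:
            distances[stack.index(item) + 1] += 1  # this reference's stack distance
            stack.remove(item)
        stack.insert(0, item)                      # referenced item becomes MRU

    total, hits, curve = len(trace), 0, {}
    for c in range(1, max_size + 1):
        hits += distances[c]   # a size-c cache hits every distance <= c
        curve[c] = hits / total
    return curve

# Example: the hit ratio rises with cache size on a small reference trace.
print(hit_ratio_curve(list("ABCABDABC"), 4))
```

The `stack.index` scan makes this linear in stack depth per reference, which is consistent with the abstract's point that short traces keep the determination cheap enough to run online.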
Claims
1. A CPU-implementable method for dynamically adjusting the portions of LRU-referenceable memory space shared among concurrently executing sequential processes in which a supervisory process is invoked to manage the memory referencing, wherein the steps include:

(a) determining an optimal space allocation among the processes by
(1) accumulating a trace of consecutive references to items stored in the LRU memory space;
(2) partitioning the space over a range of predetermined sizes;
(3) ascertaining the hit/miss ratios from the accumulated trace as a function of LRU memory space partition sizes; and
(4) responsive to each trace reference, LRU ordering the items in the partitioned space and adjusting for overflow among the partitions; and

(b) reallocating the partitions among the concurrent processes according to and in overlapped relation with the determination step by the supervisory process.
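Step (b) of claim 1 leaves the reallocation rule open; it requires only that partitions be redistributed from the measured hit-ratio curves, overlapped with their measurement. One plausible supervisory policy, sketched here purely as an assumption (the function name and greedy rule are not from the patent), hands each page frame to the process whose curve shows the largest marginal gain:

```python
def reallocate(curves, total_frames):
    """Greedily split `total_frames` among processes (illustrative policy).

    `curves` maps each process to its hit-ratio-vs-size dict, e.g. as
    produced by hit_ratio_curve above; each frame goes to the process
    with the largest marginal hit-ratio gain at its current allocation.
    """
    alloc = {p: 0 for p in curves}

    def marginal_gain(p):
        size = alloc[p]
        return curves[p].get(size + 1, 0.0) - curves[p].get(size, 0.0)

    for _ in range(total_frames):
        best = max(alloc, key=marginal_gain)
        alloc[best] += 1
    return alloc

# Example: a process with strong locality earns a larger share of frames.
a = hit_ratio_curve(list("AABBAABB"), 8)
b = hit_ratio_curve(list("ABCDEFGH"), 8)
print(reallocate({"proc_a": a, "proc_b": b}, 8))
```

For concave hit-ratio curves this greedy assignment maximizes the aggregate hit ratio; a real supervisor would also rerun the trace analysis periodically, in overlapped fashion, so the curves track shifting reference behavior.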
2. A machine-implemented method for dynamically selecting pageable groups of data and associated cache sizes with respect to one or more caches of a CPU-accessible demand paging hierarchical storage system, said system having an LRU/MRU page replacement policy, including the erasure of cache stored items, the cache being shared among concurrently executing sequential processes in which a supervisory process is invoked to manage the memory references, the method steps include:

(a) determining the optimal space allocation among the processes by
(1) accumulating a trace of consecutive references to items stored in the cache,
(2) processing the traces to obtain hit/miss ratios as a function of q pageable groups and p cache sizes, said processing step including the step of partitioning an LRU page stack into p+1 equivalence classes, all pages in any given partition having the same stack distance, and
(3) arranging the groups of pageable data responsive to each reference by ordering the items in the cache and adjusting for overflow so as to maintain the highest hit ratio as a function of cache size; and

(b) reallocating the cache among the concurrent processes according to and in overlapped relation with the determination step by the supervisory process.

Dependent claims 3 and 4 not shown.
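Claim 2's step (a)(2) keeps the LRU page stack partitioned into p+1 equivalence classes bounded by the p candidate cache sizes, so each reference needs only a per-class membership test and one boundary page moved per class rather than a full stack reorder; a reference whose page is found in class i is a hit for every candidate size at or beyond that class boundary. The following sketch is one reading of that structure; the function name and list-of-lists representation are assumptions, not the patented mechanism.

```python
def update_groups(groups, item, sizes):
    """Move `item` to the MRU end of an LRU stack kept as p+1 groups.

    `groups` is a list of p+1 lists: groups[0] covers stack positions
    1..sizes[0], groups[i] covers sizes[i-1]+1..sizes[i], and the last
    group holds everything deeper. Returns the index of the group the
    item was found in (its equivalence class), or None on a miss.
    Only one boundary item per group moves, not the whole stack.
    """
    found = None
    for i, g in enumerate(groups):
        if item in g:
            g.remove(item)
            found = i
            break
    groups[0].insert(0, item)  # referenced item becomes MRU
    # Ripple one overflow item down across each full group boundary.
    limits = [sizes[0]] + [sizes[i] - sizes[i - 1] for i in range(1, len(sizes))]
    for i, cap in enumerate(limits):
        if len(groups[i]) > cap:
            groups[i + 1].insert(0, groups[i].pop())
        else:
            break
    return found

# Example: two candidate sizes (2 and 4 pages) give three classes;
# class 0 is a hit for both sizes, class 1 only for the 4-page cache.
groups = [[], [], []]
for ref in "ABCDEB":
    cls = update_groups(groups, ref, [2, 4])
print(groups, cls)
```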
Specification