Sequentially processing data in a cached data storage system
Abstract
The disclosure relates to sequential performance of a cached data storage subsystem with minimal control-signal processing. Sequential access is first detected by monitoring and examining the quantity of data accessed per unit of data storage (track) across a set of contiguously addressable tracks. Since the occupancy of data in the cache is usually time-limited, this examination provides an indication of the rate of sequential processing for a data set, i.e., a data set is usually processed in contiguously addressable data storage units of a data storage system. Based upon the examination of a group of the tracks in a cache, the amount of data to be promoted to the cache from a backing store in anticipation of future host processor references is optimized. A promotion factor is calculated by combining the access extents monitored in the individual data storage areas and is expressed as a number of track units to be promoted. The examination of the group of track units and the implementation of the data promotion and demotion (early cast-out) are synchronized, which results in a synergistic effect for increasing throughput of the cache for sequentially processed data. A limit of promotion is determined to create a window of sequential data processing.
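The promotion factor described above — combining monitored per-track access extents into a bounded number of track units to promote — might be sketched as follows. This is a minimal illustration, not the patented method: the scaling rule (rounding the combined extent up to whole tracks) and every identifier are assumptions for the example.

```python
import math

def promotion_factor(access_extents, track_capacity, max_promote):
    """Combine the access extents (amount of data accessed per track)
    monitored over a contiguous group into a count of track units to
    promote, capped by a promotion limit that defines the window of
    sequential data processing. Hypothetical sketch only."""
    if not access_extents:
        return 0
    # Express the combined extent in whole track units...
    combined_tracks = math.ceil(sum(access_extents) / track_capacity)
    # ...and bound it by the promotion limit (the "window").
    return min(combined_tracks, max_promote)
```

For instance, with a track capacity of 1000 units, extents of `[800, 1000, 1000]` round up to 3 track units, while ten fully accessed tracks would be capped at a limit of 5.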
19 Claims
1. In a method of operating a cache which is interposed between a host processor and a backing store, the cache and the backing store including addressable like-capacity data storing tracks, where a track of data is the amount of data storable on one track of a magnetic disk recorder, the cache including means for addressing tracks in the cache using addresses of the backing store whenever the cache tracks are individually allocated for storing data with respect to a respective addressable track in the backing store;

the steps of:

measuring and storing the measurement of those portions of each of said allocated tracks that are accessed by the host processor during the current allocation of the cache track to one of the tracks in the backing store;

selecting a group of the cache tracks that are addressable by contiguous ones of the backing store addresses;

establishing first and second access extent thresholds;

in each of the selected groups, comparing, as a first comparison, the stored measurement for one of the tracks in the selected group with said first access extent threshold; if the first comparison shows that the stored measurement for the one track is less than the first access extent threshold, operating the cache for said one cache track in a random access mode; if the first comparison shows that the one cache track stored measurement is greater than the first access extent threshold, then combining the stored measurements for all cache tracks in the group having the one cache track; and

comparing, as a second comparison, said combined stored measurement with said second access extent threshold; if said second comparison shows that the combined measurement exceeds said second access extent threshold, preparing the cache for cache bypass operations; if said second comparison shows that the combined measurement is not greater than said second access extent threshold, then establishing, for tracks in the backing store having addresses contiguous with the backing store address of the one cache track, a predetermined requested data promotion and demotion algorithm.

- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
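The two-threshold decision recited in claim 1 can be sketched in Python. All identifiers, the return-value labels, and the handling of the exact-equality case (which the claim leaves unspecified) are illustrative assumptions, not language from the patent.

```python
# Hypothetical sketch of the claim-1 decision: a first comparison on a
# single cache track's access extent, then a second comparison on the
# combined extents of its contiguously addressed group.

RANDOM_MODE = "random"    # operate the cache track in random access mode
BYPASS_MODE = "bypass"    # prepare the cache for cache bypass operations
PROMOTE_MODE = "promote"  # apply the promotion/demotion algorithm

def classify_access(track_extent, group_extents,
                    first_threshold, second_threshold):
    """Decide the cache mode for one track of a contiguously
    addressed group, given stored access-extent measurements."""
    # First comparison: the one track's extent vs. the first threshold.
    if track_extent < first_threshold:
        return RANDOM_MODE
    # Combine the stored measurements for all cache tracks in the group.
    combined = sum(group_extents)
    # Second comparison: the combined extent vs. the second threshold.
    if combined > second_threshold:
        return BYPASS_MODE
    return PROMOTE_MODE
```

For example, `classify_access(3, [3, 4, 5], 4, 20)` yields the random mode because the single-track extent falls below the first threshold, while `classify_access(5, [5, 6, 7], 4, 15)` yields the bypass mode because the combined extent of 18 exceeds the second threshold.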
13. In a method of operating a cache store connected to and electrically interposed between a using unit and a backing store, the cache store having a large plurality of addressable data storing tracks, where a track of data is the amount of data storable on one track of a magnetic disk recorder, each of the tracks having a predetermined data storing capacity;

said using unit accessing the data storing tracks for reading data stored therein and for writing data therein;

the machine-executed steps of:

monitoring the accessing of the data storing tracks by the using unit, including identifying which of the data storing tracks are accessed and the extent of each such access;

separately summing for each of the data storing tracks the monitored access extents and separately storing the sums as current access extents for the respective data storing tracks;

for each of the data storing tracks, establishing an access threshold which, if exceeded, indicates a possible sequential mode of using unit access to data stored in data storing tracks having addresses within a locality of references;

establishing a locality of references for each of the data storing tracks as a predetermined range of contiguous addresses for data storing tracks, which range includes the address of said each respective data storing track;

after each attempted access to each of said data storing tracks, defining such data storing track as a current track for purposes of comparing predetermined current access extents with predetermined access thresholds;

comparing the current access extent for the current track with said access threshold; whenever the current access extent exceeds said access threshold for the current track, determining said locality of references for the current track, then comparing the respective current access extent of each data storing track in cache within said locality of references for the current track; if all of the addresses of data storing tracks within said locality of references for the current track are in fact in cache and if the respective current access extents for each of the data storing tracks in said locality of references respectively exceed the respective access threshold for such data storing tracks, then indicating that the using unit is sequentially accessing data stored in the data storing tracks within said locality of references of the current track; and

summing the current access extents for the data storing tracks within said locality of references and using the current access extent sum to indicate a rate of sequential access by the using unit within said locality of references of the current track.

- View Dependent Claims (14, 15, 16, 17, 18, 19)
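The sequential-access test of claim 13 can be illustrated with a short sketch. The data layout (a dict mapping track address to current access extent, standing in for tracks resident in cache), the fixed window width, and the single shared threshold are all simplifying assumptions for the example; the claim allows a per-track threshold.

```python
# Hypothetical sketch of the claim-13 test: a track whose access extent
# exceeds its threshold triggers a check of its locality of references
# (a contiguous address range containing the current track).

def is_sequential(cache_extents, current_track, window, threshold):
    """Return (sequential, rate): whether the using unit appears to be
    accessing the locality around current_track sequentially, and the
    summed access extents indicating the rate of sequential access."""
    # The current track itself must exceed the access threshold.
    if cache_extents.get(current_track, 0) <= threshold:
        return False, 0
    # Locality of references: `window` contiguous addresses that
    # include the current track's address.
    half = window // 2
    locality = range(current_track - half, current_track - half + window)
    # Every track in the locality must be in cache with its current
    # access extent above the threshold.
    if not all(addr in cache_extents and cache_extents[addr] > threshold
               for addr in locality):
        return False, 0
    # The sum of the current access extents indicates the rate.
    rate = sum(cache_extents[addr] for addr in locality)
    return True, rate
```

With extents `{9: 5, 10: 6, 11: 7}`, a window of 3, and a threshold of 4, track 10 is reported as sequential with a rate of 18; remove track 9 from the cache and the same call reports non-sequential.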
Specification