CONCURRENT CONTENT MANAGEMENT AND WEAR OPTIMIZATION FOR A NON-VOLATILE SOLID-STATE CACHE
Abstract
Described is a technique for managing the content of a non-volatile solid-state memory data cache to improve cache performance while, at the same time and in a complementary manner, providing automatic wear leveling. A modified circular first-in first-out (FIFO) log-based algorithm is generally used to determine cache content replacement. The algorithm is the default mechanism for determining which cache content to replace when the cache is full, but it is subject to modification in some instances. In particular, data are categorized into different data classes prior to being written to the cache, based on usage. Once cached, data belonging to certain classes are treated differently than the circular FIFO replacement algorithm would otherwise dictate. Further, data belonging to each class are localized to designated regions within the cache.
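The replacement scheme the abstract describes can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: the class names (`"multi_cycle"`, `"normal"`), the `recently_used` flag, and the fallback behavior are all assumptions introduced here to show how a circular FIFO log can serve as the default eviction mechanism while class-based exceptions override it.

```python
# Illustrative sketch (assumed names, not from the patent) of a circular
# FIFO replacement log with class-based exceptions.
from collections import namedtuple

CacheEntry = namedtuple("CacheEntry", ["key", "data_class", "recently_used"])

class CircularFifoCache:
    """Circular log over fixed cache slots; the write pointer advances
    through the cache, evicting whatever it lands on by default."""

    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.pointer = 0  # next slot the circular log will reuse

    def allocate(self, entry):
        """Advance the circular pointer; skip entries whose class should
        survive (e.g. long-lived or recently used data)."""
        scanned = 0
        while scanned < len(self.slots):
            slot = self.pointer
            victim = self.slots[slot]
            self.pointer = (self.pointer + 1) % len(self.slots)
            scanned += 1
            # Default: replace whatever the log points at. Exception:
            # multi-cycle or recently used data is retained.
            if victim is None or not (victim.recently_used
                                      or victim.data_class == "multi_cycle"):
                self.slots[slot] = entry
                return slot
        # Every slot is protected: fall back to plain FIFO at the pointer.
        slot = self.pointer
        self.pointer = (self.pointer + 1) % len(self.slots)
        self.slots[slot] = entry
        return slot
```

Because the pointer sweeps the cache sequentially, writes are spread evenly across slots, which is what makes the same mechanism double as wear leveling on flash media.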
39 Claims
1. A method comprising:
making write allocation decisions for a cache implemented as non-volatile solid-state storage in a storage system, by executing a cache replacement algorithm that concurrently applies cache hit optimization and wear optimization for the cache; and
writing to locations in the cache according to the write allocation decisions.
(Dependent claims: 2-15)
16. A method comprising:
receiving at a storage server a plurality of data access requests from a plurality of storage clients over a network, the storage server including a primary cache and a secondary cache, the secondary cache implemented as non-volatile solid-state storage including a plurality of erase blocks;
defining a first data class in the storage server to include data which are expected to remain valid in the secondary cache for at least a complete cycle of a circular cache replacement log and that should be replaced in the secondary cache if they are determined to be not recently used;
defining a second data class in the storage server to include data in the secondary cache which are expected to remain valid for less than a complete cycle of the cache replacement log;
defining a third data class in the storage server to include data in the secondary cache which are expected to remain valid for a plurality of complete cycles of the cache replacement log;
classifying each of a plurality of data units in the storage server into one of the first, second or third data class, based on a usage frequency of each said data unit; and
making write allocation decisions for the plurality of data units with respect to the secondary cache, by selecting locations in the secondary cache so that data from each of the first, second or third data class are confined at any given time to a separate set of one or more erase block stripes, each said erase block stripe including a set of erase blocks distributed across a plurality of physical memory devices.
(Dependent claims: 18-20)
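The classification and placement steps of claim 16 can be sketched as follows. This is a hedged illustration under stated assumptions: the usage-frequency thresholds, the class labels, and the round-robin stripe assignment are placeholders invented here, not values from the patent.

```python
# Illustrative sketch (invented thresholds and names) of classifying data
# units by usage frequency into three classes, and confining each class to
# its own set of erase block stripes.

def classify(usage_frequency, low=0.1, high=10.0):
    """Map an access-rate estimate to one of three classes.
    Thresholds are placeholders, not taken from the patent."""
    if usage_frequency < low:
        return "second"   # expected valid for less than one log cycle
    if usage_frequency > high:
        return "third"    # expected valid for multiple log cycles
    return "first"        # valid >= one cycle; replace if not recently used

class StripeAllocator:
    """Confine each class to a disjoint set of erase block stripes, where a
    stripe is a set of erase blocks spread across several flash devices."""

    def __init__(self, stripes_per_class):
        # e.g. {"first": [0, 1], "second": [2], "third": [3]}
        self.stripes_per_class = stripes_per_class
        self.cursor = {c: 0 for c in stripes_per_class}

    def place(self, data_class):
        """Round-robin writes across the stripes owned by the class."""
        stripes = self.stripes_per_class[data_class]
        stripe = stripes[self.cursor[data_class] % len(stripes)]
        self.cursor[data_class] += 1
        return stripe
```

Keeping each class in its own stripes means data with similar lifetimes is erased together, which reduces write amplification when an erase block stripe is reclaimed.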
17. A method as recited in claim 16, wherein each said erase block stripe is a RAID parity group.
21. A storage system comprising:
a random access memory facility;
a cache formed from non-volatile solid-state storage and designated to cache data evicted from the random access memory facility;
a processor; and
a memory storing instructions for execution by the processor to cause the storage system to execute operations including:
monitoring usage of data in the storage system;
based on results of said monitoring, classifying data in the storage system into a plurality of data classes according to usage characteristics of the data, each of the data classes being associated with a different one of a plurality of cache replacement policies applied to the cache; and
selecting storage locations in the cache for the data, based on results of said classifying, including localizing within the cache at least some of the data in each of the data classes.
(Dependent claims: 22-28)
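The monitor-classify-place pipeline of claim 21 can be sketched as below. All names here are illustrative assumptions: the hot/cold split, the threshold, the policy table, and the region layout are invented for the example, not drawn from the patent.

```python
# Hedged sketch of usage monitoring feeding a classifier, with each class
# carrying its own replacement policy and its own localized cache region.
from collections import Counter

class UsageMonitor:
    """Track per-key access counts and derive a usage class."""

    def __init__(self):
        self.access_counts = Counter()

    def record(self, key):
        self.access_counts[key] += 1

    def data_class(self, key, hot_threshold=3):
        # Two example classes derived from observed usage.
        return "hot" if self.access_counts[key] >= hot_threshold else "cold"

# One replacement policy per class: each policy names the cache region the
# class's data is localized to (placeholder layout and eviction names).
POLICIES = {
    "hot":  {"region": range(0, 64),   "evict": "not_recently_used"},
    "cold": {"region": range(64, 256), "evict": "fifo"},
}

def select_location(monitor, key, next_free):
    """Pick a cache slot for `key` inside its class's designated region."""
    policy = POLICIES[monitor.data_class(key)]
    region = policy["region"]
    return region.start + (next_free % len(region))
```

The point of the table is that eviction behavior and placement are decided together: looking up a key's class yields both where its data may live and how that region is recycled.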
29. A storage server comprising:
a network interface through which to receive a plurality of data access requests from a plurality of storage clients over a network;
a primary storage facility;
a cache implemented as non-volatile solid-state storage, the non-volatile solid-state storage including a plurality of erase blocks; and
a processor configured to apply a plurality of cache replacement policies associated with the cache to data managed by the storage server, each of the cache replacement policies corresponding to a different one of a plurality of classes of data usage, and to implement the cache replacement policies by confining write activity for data in at least one of the classes to designated erase blocks or sets of erase blocks of the non-volatile solid-state storage.
(Dependent claims: 30-38)
39. A storage system comprising:
a cache implemented as non-volatile solid-state storage;
means for making write allocation decisions for a data set in relation to the cache, based on expected usage of data in the data set, to implement concurrently a wear optimization scheme and a cache replacement scheme for the cache; and
means for writing to locations in the cache according to the write allocation decisions.
Specification