Apparatus, method and system for using real-time performance feedback for modeling and improving access to solid state media
First Claim
1. An apparatus, comprising:
- a memory;
- a processor configured to:
- issue storage commands to the memory based on a number of concurrent storage commands being serviced by the memory and based on an expected latency associated with the number of concurrent storage commands;
- identify a performance curve for the memory, wherein the performance curve maps the number of concurrent storage commands to the expected latency;
- issue the storage commands to the memory based on the performance curve;
- identify a first section of the performance curve associated with overhead processing;
- identify a second section of the performance curve associated with stalling in the memory;
- issue the storage commands to the memory based on the number of concurrent storage commands associated with the first and second sections of the performance curve;
- identify a first slope for the first section of the performance curve;
- identify a second slope for the second section of the performance curve;
- issue the storage commands to the memory based on the first slope and the second slope;
- identify changes in the performance curve;
- dynamically change the number of concurrent storage commands issued to the memory based on the changes in the performance curve;
- identify a write limit based on the performance curve, wherein the write limit is associated with writing data into the memory;
- measure a latency for one of the storage commands;
- compare the measured latency to the write limit;
- discontinue writing data to the memory in response to the latency being outside of the write limit;
- identify a use limit based on the performance curve, wherein the use limit is associated with reading data from the memory;
- compare the latency to the use limit; and
- erase data in the memory in response to the latency being outside of the use limit.
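The two-section performance curve recited above (a shallow "overhead" region at low concurrency followed by a steep "stall" region) can be sketched as a simple piecewise-linear model. The function names, the knee-finding rule, and the sample latency figures below are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: locate the knee of a latency-vs-concurrency curve
# and derive the slopes of its "overhead" and "stall" sections.

def section_slopes(curve, knee):
    """curve: list of (concurrent_commands, expected_latency_us) pairs,
    sorted by concurrency. Returns (overhead_slope, stall_slope)."""
    def slope(points):
        (x0, y0), (x1, y1) = points[0], points[-1]
        return (y1 - y0) / (x1 - x0)
    return slope(curve[:knee + 1]), slope(curve[knee:])

def find_knee(curve):
    """Pick the index where the slope increase is largest -- the point
    where overhead processing gives way to stalling in the device."""
    best_i, best_gain = 1, 0.0
    for i in range(1, len(curve) - 1):
        s1, s2 = section_slopes(curve, i)
        if s2 - s1 > best_gain:
            best_i, best_gain = i, s2 - s1
    return best_i

# Hypothetical data: latency grows slowly up to 4 concurrent commands,
# then the device stalls.
curve = [(1, 100), (2, 105), (3, 110), (4, 115), (5, 400), (6, 700)]
knee = find_knee(curve)
overhead_slope, stall_slope = section_slopes(curve, knee)
```

The knee index then serves as a natural candidate for the concurrency limit used when issuing commands.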
Abstract
A storage processor identifies the latency of memory devices for different numbers of concurrent storage operations. The identified latency is used to derive debt limits for the number of concurrent storage operations issued to the memory devices. The storage processor may issue additional storage operations to the memory devices when the number of outstanding storage operations is within the debt limit, and may defer storage operations when that number is outside the debt limit.
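A minimal sketch of the debt-limit admission control the abstract describes (the class and method names are illustrative, not from the patent):

```python
from collections import deque

class DebtScheduler:
    """Issue a storage operation only while the device's outstanding
    command count ("debt") is within its measured limit; otherwise
    defer the operation until an earlier one completes."""

    def __init__(self, debt_limit):
        self.debt_limit = debt_limit   # derived from the latency curve
        self.debt = 0                  # operations currently in flight
        self.deferred = deque()        # operations waiting to be issued

    def submit(self, op):
        if self.debt < self.debt_limit:
            self.debt += 1
            return op                  # issue immediately
        self.deferred.append(op)       # defer: device is saturated
        return None

    def complete(self):
        """Called when the device finishes an operation."""
        self.debt -= 1
        if self.deferred and self.debt < self.debt_limit:
            self.debt += 1
            return self.deferred.popleft()   # issue a deferred op
        return None

sched = DebtScheduler(debt_limit=2)
assert sched.submit("read-a") == "read-a"
assert sched.submit("read-b") == "read-b"
assert sched.submit("read-c") is None      # deferred: debt limit reached
assert sched.complete() == "read-c"        # issued once debt drops
```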
21 Claims
1. An apparatus, comprising:
- a memory;
- a processor configured to:
- issue storage commands to the memory based on a number of concurrent storage commands being serviced by the memory and based on an expected latency associated with the number of concurrent storage commands;
- identify a performance curve for the memory, wherein the performance curve maps the number of concurrent storage commands to the expected latency;
- issue the storage commands to the memory based on the performance curve;
- identify a first section of the performance curve associated with overhead processing;
- identify a second section of the performance curve associated with stalling in the memory;
- issue the storage commands to the memory based on the number of concurrent storage commands associated with the first and second sections of the performance curve;
- identify a first slope for the first section of the performance curve;
- identify a second slope for the second section of the performance curve;
- issue the storage commands to the memory based on the first slope and the second slope;
- identify changes in the performance curve;
- dynamically change the number of concurrent storage commands issued to the memory based on the changes in the performance curve;
- identify a write limit based on the performance curve, wherein the write limit is associated with writing data into the memory;
- measure a latency for one of the storage commands;
- compare the measured latency to the write limit;
- discontinue writing data to the memory in response to the latency being outside of the write limit;
- identify a use limit based on the performance curve, wherein the use limit is associated with reading data from the memory;
- compare the latency to the use limit; and
- erase data in the memory in response to the latency being outside of the use limit.
(Dependent claims: 2, 3)
4. A method, comprising:
- receiving a read operation;
- identifying a memory device associated with the read operation;
- identifying a latency of the memory device for servicing the read operation;
- identifying a device debt for the memory device, wherein the device debt is associated with a number of read operations being processed by the memory device;
- tracking average latencies for different numbers of concurrent read operations issued to the memory device;
- identifying patterns in the average latencies;
- identifying a limit for the device debt based on the patterns;
- identifying a predicted latency for the read operation based on the patterns;
- measuring an actual latency of the read operation;
- comparing the actual latency with the predicted latency;
- increasing the device debt based on the comparison of the actual latency with the predicted latency;
- deferring the issuing of the read operation to the memory device when the device debt is outside the limit; and
- issuing the read operation to the memory device based on the identified latency.
(Dependent claims: 5, 6, 7, 8, 9)
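The averaging and prediction steps of the method above can be sketched as a per-concurrency latency tracker. The class name, running-average scheme, and fallback rule are illustrative assumptions, not taken from the patent:

```python
class LatencyTracker:
    """Track average read latency per concurrency level ("device debt")
    and predict the latency a new read will see at the current debt."""

    def __init__(self):
        self.totals = {}   # debt level -> (latency sum, sample count)

    def record(self, debt, latency_us):
        total, count = self.totals.get(debt, (0.0, 0))
        self.totals[debt] = (total + latency_us, count + 1)

    def average(self, debt):
        total, count = self.totals.get(debt, (0.0, 0))
        return total / count if count else None

    def predict(self, debt):
        """Predicted latency at this debt; fall back to the nearest
        tracked level below it if this level has no samples yet."""
        for d in range(debt, 0, -1):
            avg = self.average(d)
            if avg is not None:
                return avg
        return None

tracker = LatencyTracker()
tracker.record(1, 100); tracker.record(1, 110)
tracker.record(2, 130)
assert tracker.average(1) == 105.0
assert tracker.predict(3) == 130.0   # nearest tracked level is debt 2
```

Comparing `predict(debt)` against the measured latency of a completed read is then the basis for raising or lowering the device-debt limit.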
10. A storage processor, comprising:
- a command queue configured to:
- receive storage commands; and
- maintain threads configured to initiate the storage commands to memory devices;
- a command scheduler configured to:
- identify device debts for the memory devices, wherein the device debts are associated with a number of storage operations pending in the memory devices, the command scheduler further configured to assign the storage commands to the threads based on the device debts;
- the storage processor further comprising logic circuitry configured to:
- identify latency patterns for the memory devices for different numbers of concurrent storage operations;
- identify latency limits for the memory devices based on the latency patterns;
- measure storage access latencies for the memory devices; and
- erase the memory devices when the storage access latencies for the memory devices are outside the latency limits for the memory devices.
(Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19)
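The latency-limit check in claim 10 amounts to comparing each device's measured access latency against a per-device limit derived from its latency patterns, and flagging the devices that exceed it. A hypothetical sketch (identifiers and figures are illustrative):

```python
def devices_to_erase(measured, limits):
    """Return the devices whose measured storage-access latency falls
    outside the latency limit derived for that device.

    measured: dict of device id -> measured access latency (us)
    limits:   dict of device id -> latency limit (us)
    """
    return [dev for dev, lat in measured.items() if lat > limits[dev]]

measured = {"ssd0": 120, "ssd1": 950, "ssd2": 140}
limits   = {"ssd0": 500, "ssd1": 500, "ssd2": 500}
assert devices_to_erase(measured, limits) == ["ssd1"]
```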
20. A storage processor, comprising:
- a command queue configured to:
- receive storage commands; and
- maintain threads configured to initiate the storage commands to memory devices; and
- a command scheduler configured to:
- identify device debts for the memory devices, wherein the device debts are associated with a number of storage operations pending in the memory devices, the command scheduler further configured to assign the storage commands to the threads based on the device debts;
- measure latencies of the memory devices for different numbers of concurrent storage operations; and
- identify concurrent storage access limits for the memory devices based on the latencies, wherein the command scheduler is configured to assign the storage commands to the threads when the device debts for the memory devices accessed by the threads are within the concurrent storage access limits for the memory devices, and to defer assigning the storage commands to the threads when the device debts for the memory devices associated with the storage commands are outside of the concurrent storage access limits for the memory devices.
21. A storage processor, comprising:
- a command queue configured to:
- receive storage commands; and
- maintain threads configured to initiate the storage commands to memory devices;
- a command scheduler configured to:
- identify device debts for the memory devices, wherein the device debts are associated with a number of storage operations pending in the memory devices, the command scheduler further configured to assign the storage commands to the threads based on the device debts;
- identify predicted storage access latencies for the memory devices associated with the storage commands based on the device debts for the associated memory devices;
- assign the storage commands to the threads based on the predicted storage access latencies;
- measure actual storage access latencies of the memory devices;
- receive new storage commands;
- assign the new storage commands to the threads; and
- increase the device debts for the memory devices accessed by the new storage commands based on the predicted storage access latencies and the actual storage access latencies.
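Claim 21's feedback step, adjusting a device's allowed debt based on predicted versus actual latency, can be sketched as a simple controller. The adjustment rule, tolerance, and cap below are illustrative assumptions, not from the patent:

```python
def adjust_debt_limit(debt_limit, predicted_us, actual_us,
                      tolerance=1.25, max_limit=64):
    """Raise the allowed concurrent-command debt when the device is
    beating its predicted latency; lower it when latency overshoots
    the prediction by more than the tolerance factor."""
    if actual_us <= predicted_us:
        return min(debt_limit + 1, max_limit)   # device has headroom
    if actual_us > predicted_us * tolerance:
        return max(debt_limit - 1, 1)           # device is stalling
    return debt_limit                           # within tolerance

assert adjust_debt_limit(8, predicted_us=200, actual_us=180) == 9
assert adjust_debt_limit(8, predicted_us=200, actual_us=300) == 7
assert adjust_debt_limit(8, predicted_us=200, actual_us=230) == 8
```

Running this after each completed command gives the real-time feedback loop the title refers to: the measured latency continuously reshapes how many concurrent commands the scheduler will issue.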
Specification