
METHOD FOR DEPLOYING STORAGE SYSTEM RESOURCES WITH LEARNING OF WORKLOADS APPLIED THERETO

  • US 20170242729A1
  • Filed: 02/24/2016
  • Published: 08/24/2017
  • Est. Priority Date: 02/24/2016
  • Status: Active Grant
First Claim

1. A method for deploying storage system resources with learning of workloads applied to a storage system, comprising the steps of:

  • A. setting state-action fuzzy rules for deviated percentages of parameters of a storage system from SLAs (Service Level Agreements) of workloads under specific scenarios and adjustments of resources, and action-reward fuzzy rules for adjustments of resources and reward values, wherein a scenario is a specific relation between a deviated direction of the parameters and a change of a corresponding resource;

    B. providing an experience matrix where entries in each row refer to reward values under a specific state, and entries in each column refer to reward values for an adjustment of at least one resource, wherein all entries of the experience matrix are initialized to zero and a state is a specific combination of deviated percentages of parameters;

    C. collecting current deviated percentages of parameters from one of the workloads, and providing predicted deviated percentages of parameters for said workload at a plurality of later time points;

    D. randomly choosing one scenario and processing fuzzification, fuzzy inference, and result aggregation by inputting the collected deviated percentages of parameters of said workload to membership functions of the state-action fuzzy rules of the chosen scenario to have a first action range;

    E. defuzzifying the first action range to have an adjusted amount for at least one resource;

    F. executing the adjusted amount in the storage system for the workload;

    G. processing fuzzification, fuzzy inference, and result aggregation by inputting the provided predicted deviated percentages of parameters of said workload to membership functions of the action-reward fuzzy rules to have a reward range;

    H. defuzzifying the reward range to have a deviated reward value;

    I. for the rows of the experience matrix corresponding to the states of the predicted deviated percentages of parameters, searching for the maximum value in each of the rows;

    J. accumulating the deviated reward value and the values chosen in step I for a previous time point as an updated reward value, and replacing the entry of the experience matrix under the state of the deviated percentages of parameters and the action amount of the previous time point with the updated reward value;

    K. repeating step C to step J until each entry satisfies a converged condition, wherein the step D is processed for all workloads in turn; and

    L. choosing a row in the experience matrix corresponding to observed deviated percentages of parameters and executing the specific adjustment of the resources corresponding to the maximum value among the entries in the row in the storage system.
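
Steps D–E and G–H describe a standard Mamdani-style fuzzy pipeline: fuzzify an input through the rules' membership functions, clip each rule's output set by its firing strength, aggregate by maximum, and defuzzify by centroid. The following sketch illustrates that pipeline for a single input (a deviated percentage) and a single output (a resource adjustment); the triangular membership functions and the three rules are illustrative assumptions, not the rule set of the patent.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed state-action rules: (input membership, output membership).
# Negative deviation = below the SLA target, positive adjustment = add resources.
RULES = [
    ((-50.0, -25.0, 0.0), (0.0, 10.0, 20.0)),    # under SLA  -> add resources
    ((-10.0, 0.0, 10.0),  (-5.0, 0.0, 5.0)),     # on target  -> hold
    ((0.0, 25.0, 50.0),   (-20.0, -10.0, 0.0)),  # over SLA   -> release resources
]

def infer_adjustment(deviation_pct, resolution=200):
    """Fuzzify the deviation, fire each rule (min-implication), aggregate the
    clipped output sets by max, then defuzzify by centroid (steps D-E)."""
    ys = [-25.0 + 50.0 * i / resolution for i in range(resolution + 1)]
    agg = [0.0] * len(ys)
    for in_mf, out_mf in RULES:
        strength = tri(deviation_pct, *in_mf)      # fuzzification
        for j, y in enumerate(ys):                 # fuzzy inference + aggregation
            agg[j] = max(agg[j], min(strength, tri(y, *out_mf)))
    num = sum(y * m for y, m in zip(ys, agg))
    den = sum(agg)
    return num / den if den else 0.0               # centroid defuzzification
```

Steps G–H reuse the same mechanics with the action-reward rules, mapping predicted deviations to a deviated reward value instead of an adjustment.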

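Steps B and I–K amount to the classical Q-learning recurrence over the experience matrix, with rows as discretized states and columns as resource adjustments. A minimal sketch, assuming a small discretization and a discount factor (both illustrative, not specified by the claim):

```python
N_STATES, N_ACTIONS = 5, 3   # assumed discretization of states and adjustments
GAMMA = 0.9                  # assumed discount applied to the step-I row maximum

# Step B: experience matrix with all entries initialized to zero.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def update(state, action, reward, next_state):
    """Steps I-J: take the row maximum for the predicted next state,
    accumulate it with the deviated reward value, and overwrite the
    entry for the previous (state, action) pair."""
    best_next = max(Q[next_state])                 # step I: row maximum
    Q[state][action] = reward + GAMMA * best_next  # step J: updated reward value

def converged(prev, eps=1e-3):
    """Step K: every entry changed by less than eps since the last sweep."""
    return all(abs(Q[s][a] - prev[s][a]) < eps
               for s in range(N_STATES) for a in range(N_ACTIONS))
```

Once the matrix converges, step L is a pure lookup: pick the row for the observed state and execute the adjustment whose column holds the row's maximum entry.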