RESOURCE COST OPTIMIZATION SYSTEM, METHOD, AND PROGRAM
First Claim
1. A computer implemented method for generating a policy for optimizing a cost of a resource under a predetermined cost structure, the method comprising:
preparing an error distribution that indicates a deviation of an amount of usage from a predicted value, a characteristic of a storing means for storing or releasing the resource, and the cost structure in a computer-readable form;
calculating an expected cost in a Markov decision process and a parameter that includes a transition probability on the basis of the error distribution, the characteristic of the storing means, and the cost structure, the Markov decision process including a state that includes a usage amount error, an amount of the resource in the storing means, a specification of a section, and a set target; and
deciding an optimal policy that includes an action of storing or releasing the resource of the storing means for the state using the expected cost in the Markov decision process and the parameter including the transition probability.
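The three steps of claim 1 can be sketched as a small numeric example: value iteration over an MDP whose state pairs a discretized usage-amount error with the amount of the resource held by the storing means. The error levels, capacity, cost structure, and discount factor below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Step 1 (assumed inputs): a discretized error distribution and the
# characteristic of the storing means (capacity, unit store/release).
errors = np.array([-1, 0, 1])          # deviation of usage from the predicted value
p_err  = np.array([0.25, 0.5, 0.25])   # error distribution (assumed symmetric)
levels = np.arange(0, 4)               # stored amount: 0..3 units (assumed capacity)
actions = np.array([-1, 0, 1])         # release one unit / hold / store one unit
GAMMA = 0.95                           # discount factor (assumption)

def stage_cost(e, b, a):
    # Assumed cost structure: pay for net extra draw plus a small
    # penalty proportional to the usage-error magnitude.
    net_draw = max(0, e + a)
    return net_draw + 0.1 * abs(e)

# Steps 2-3: compute expected costs under the transition probabilities
# (next error drawn i.i.d. from the error distribution) and decide the
# optimal store/release policy by value iteration.
n_e, n_b = len(errors), len(levels)
V = np.zeros((n_e, n_b))
for _ in range(500):
    V_new = np.full_like(V, np.inf)
    policy = np.zeros((n_e, n_b), dtype=int)
    for i, e in enumerate(errors):
        for j, b in enumerate(levels):
            for a in actions:
                nb = b + a
                if nb < 0 or nb >= n_b:        # infeasible for the storing means
                    continue
                q = stage_cost(e, b, a) + GAMMA * (p_err @ V[:, nb])
                if q < V_new[i, j]:
                    V_new[i, j], policy[i, j] = q, a
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

The resulting `policy[i, j]` gives the store/release action for each (error, stored-amount) state; the patent's MDP additionally carries the section specification and set target in the state, which this toy version omits.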
Abstract
Apparatus and method use a Markov decision process (MDP) to reduce the cost of variations in electric power usage. The user notifies a power company of a predicted value for a period, and the period is divided into subsections. For each subsection, on the basis of an MDP including a state that depends on the electric power usage amount error, charge amount, and set target, the amount of charging and discharging of a storage battery is optimally decided as an action at any given time, depending on the electric power usage amount error, charge amount, time, and set target at that time. A predetermined time in each subsection is a target-setting time, at which a future target is additionally set as an action. The action thus includes deciding the charging and discharging amount in the current subsection and deciding a future target for a subsection whose target is to be set.
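The subsection structure described in the abstract can be illustrated with a toy simulation: at the start of each subsection a target purchase amount is set, and within the subsection the battery charges or discharges so that the net purchase tracks that target. The greedy rule and all parameters here are assumptions for illustration only; the patent instead derives the optimal actions from the MDP.

```python
import random

random.seed(0)
CAPACITY = 5.0                        # battery capacity (assumption)
steps_per_subsection = 4
n_subsections = 3
predicted = 10.0                      # predicted usage per step notified to the power company

charge = CAPACITY / 2                 # battery starts half full (assumption)
penalty = 0.0                         # accumulated deviation cost

for s in range(n_subsections):
    target = predicted                # target-setting action at the subsection's target-setting time
    for t in range(steps_per_subsection):
        usage = predicted + random.gauss(0, 1)   # usage with a random error
        # Greedy (non-optimal) action: discharge or charge to offset the
        # error, clipped to what the battery can supply or absorb.
        desired = usage - target                 # >0: discharge, <0: charge
        action = max(-(CAPACITY - charge), min(charge, desired))
        charge -= action
        purchase = usage - action                # net draw from the grid
        penalty += abs(purchase - target)        # deviation penalty (assumed linear)
```

The penalty term stands in for a cost structure that charges for deviations of actual purchase from the notified target; the battery absorbs the prediction error whenever its state of charge allows.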
21 Claims
1. A computer implemented method for generating a policy for optimizing a cost of a resource under a predetermined cost structure, the method comprising:
preparing an error distribution that indicates a deviation of an amount of usage from a predicted value, a characteristic of a storing means for storing or releasing the resource, and the cost structure in a computer-readable form;

calculating an expected cost in a Markov decision process and a parameter that includes a transition probability on the basis of the error distribution, the characteristic of the storing means, and the cost structure, the Markov decision process including a state that includes a usage amount error, an amount of the resource in the storing means, a specification of a section, and a set target; and

deciding an optimal policy that includes an action of storing or releasing the resource of the storing means for the state using the expected cost in the Markov decision process and the parameter including the transition probability.

Dependent claims: 2, 3, 4, 5, 6, 13, 14, 15
7. A computer executed program product for generating a policy for optimizing a cost of a resource under a predetermined cost structure, the program product causing the computer to execute:
a step of preparing an error distribution that indicates a deviation of an amount of usage from a predicted value, a characteristic of a storing means for storing or releasing the resource, and the cost structure in a computer-readable form;

a step of calculating an expected cost in a Markov decision process and a parameter that includes a transition probability on the basis of the error distribution, the characteristic of the storing means, and the cost structure, the Markov decision process including a state that includes a usage amount error, an amount of resource in the storing means, a specification of a section, and a set target; and

a step of deciding an optimal policy that includes an action of storing or releasing the resource of the storing means for the state using the expected cost in the Markov decision process and the parameter including the transition probability.

Dependent claims: 8, 9, 10, 11, 12, 16, 17, 18, 19, 20, 21
Specification