Title
Distribution Network Reconfiguration to Minimize Power Loss Using Deep Reinforcement Learning
Authors
임세헌(Se-Heon Lim) ; 김태근(Tae-Geun Kim) ; 윤성국(Sung-Guk Yoon)
DOI
https://doi.org/10.5370/KIEE.2020.69.11.1659
Keywords
deep Q network (DQN); distribution network reconfiguration (DNR); line loss; reinforcement learning; renewable energy curtailment
Abstract
Distribution network reconfiguration (DNR) is a technique that changes the status of sectionalizing and tie switches for various purposes such as loss minimization, voltage profile improvement, load leveling, and hosting capacity increase. Although previous algorithms for DNR show good performance, they still have practical limitations: most assume that a central coordinator knows all parameters and/or has perfect state information of the distribution network. Reinforcement learning, a model-free optimization technique, can be a key way to overcome these limitations. This work proposes a DNR scheme using deep reinforcement learning to minimize power loss, defined as the sum of line loss and renewable energy curtailment. We model the DNR problem as a Markov decision process (MDP) and apply a reinforcement learning algorithm to solve it in real time. Simulation results on the 33-bus radial distribution system show that the proposed scheme achieves performance similar to that of an existing method that uses full information about the distribution network.
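The abstract describes modeling DNR as an MDP whose actions change switch statuses and whose reward penalizes power loss. The sketch below illustrates that formulation on a toy system, using tabular Q-learning in place of the paper's deep Q network and an invented surrogate loss in place of a power-flow calculation; `N_SWITCHES`, `power_loss`, and the episode settings are all illustrative assumptions, not the paper's model.

```python
import itertools
import random

# Toy stand-in for the DNR Markov decision process sketched in the abstract.
# State: tuple of switch statuses (1 = closed); action: toggle one switch;
# reward: negative "power loss". The surrogate loss below is invented for
# illustration -- the paper computes line loss plus renewable curtailment
# on an actual distribution network and trains a deep Q network rather
# than the tabular Q-learning shown here.

N_SWITCHES = 3  # hypothetical number of controllable switches

def power_loss(state):
    # Invented surrogate: loss is the Hamming distance to an assumed
    # loss-minimizing configuration (1, 0, 1).
    target = (1, 0, 1)
    return sum(abs(s - t) for s, t in zip(state, target))

def step(state, action):
    # Toggle one switch; reward is the negative loss of the new state.
    nxt = list(state)
    nxt[action] ^= 1
    nxt = tuple(nxt)
    return nxt, -power_loss(nxt)

def train(episodes=500, horizon=10, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    states = list(itertools.product((0, 1), repeat=N_SWITCHES))
    Q = {s: [0.0] * N_SWITCHES for s in states}
    for _ in range(episodes):
        state = rng.choice(states)
        for _ in range(horizon):
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(N_SWITCHES)
            else:
                a = max(range(N_SWITCHES), key=lambda i: Q[state][i])
            nxt, r = step(state, a)
            # Standard Q-learning update toward the bootstrapped target.
            Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
            state = nxt
    return Q

def best_config(Q, start=(0, 0, 0), steps=10):
    # Greedy rollout; return the lowest-loss configuration visited.
    state, best = start, start
    for _ in range(steps):
        a = max(range(N_SWITCHES), key=lambda i: Q[state][i])
        state, _ = step(state, a)
        if power_loss(state) < power_loss(best):
            best = state
    return best
```

In the paper's setting the state would instead be built from network measurements, the reward from a power-flow solution, and the Q-table replaced by a neural network so the method scales beyond a handful of switches.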