Corpus ID: 231740940

Scalable Voltage Control using Structure-Driven Hierarchical Deep Reinforcement Learning

@article{Mukherjee2021ScalableVC,
  title={Scalable Voltage Control using Structure-Driven Hierarchical Deep Reinforcement Learning},
  author={Sayak Mukherjee and Renke Huang and Qiuhua Huang and Thanh Long Vu and Tianzhixi Yin},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.00077}
}
This paper presents a novel hierarchical deep reinforcement learning (DRL) based design for the voltage control of power grids. DRL agents are trained for fast and adaptive selection of control actions such that the voltage recovery criterion can be met following disturbances. Existing voltage control techniques suffer from issues of operating speed, optimal coordination between different locations, and scalability. We exploit the area-wise division structure of the power system to… 
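The area-wise hierarchical idea in the abstract can be sketched as a two-level decision loop: a top-level policy flags areas violating the voltage recovery criterion, and a per-area low-level policy maps local observations to an action. This is a purely illustrative skeleton, not the paper's implementation; the function names, the 0.95 p.u. threshold, and the proportional stand-in for a trained agent are all hypothetical.

```python
import numpy as np

def top_level_policy(area_voltages, threshold=0.95):
    """Flag areas whose mean per-unit voltage violates the recovery criterion."""
    return [i for i, v in enumerate(area_voltages) if np.mean(v) < threshold]

def low_level_policy(local_voltages, gain=0.5):
    """Stand-in for a trained per-area DRL agent: a proportional action on the
    local voltage deviation (e.g., a load-shedding fraction in [0, 0.2])."""
    deviation = 0.95 - np.mean(local_voltages)
    return float(np.clip(gain * deviation, 0.0, 0.2))

def hierarchical_step(area_voltages):
    """One control step: coordinate areas at the top, then act locally
    in each flagged area."""
    actions = {}
    for area in top_level_policy(area_voltages):
        actions[area] = low_level_policy(area_voltages[area])
    return actions

# Example: three areas, the second depressed after a fault.
voltages = [np.array([1.00, 0.99]), np.array([0.88, 0.90]), np.array([0.97, 0.96])]
print(hierarchical_step(voltages))  # only area 1 acts
```

The point of the structure is that the low-level agents see only local state, so each one stays small as the grid grows, which is the scalability argument the abstract gestures at.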

Citations of this paper

Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control

This paper proposes a stability-constrained reinforcement learning (RL) method for real-time voltage control that guarantees system stability both during policy learning and during deployment of the learned policy, while always achieving voltage stability.

Reinforcement Learning for Decision-Making and Control in Power Systems: Tutorial, Review, and Vision

This paper provides a tutorial on various RL techniques and how they can be applied to decision-making and control in power systems, and illustrates RL-based models and solutions in three key applications, including frequency regulation, voltage control, and energy management.

Reinforcement Learning for Selective Key Applications in Power Systems: Recent Advances and Future Challenges

This paper provides a comprehensive review of various RL techniques and how they can be applied to decision-making and control in power systems and selects three key applications, i.e., frequency regulation, voltage control, and energy management, as examples to illustrate RL-based models and solutions.

Stability Constrained Reinforcement Learning for Real-Time Voltage Control

A stability constrained reinforcement learning method for real-time voltage control in distribution grids is proposed and it is proved that the proposed approach provides a formal voltage stability guarantee.

References

Showing 1–10 of 40 references

Accelerated Deep Reinforcement Learning Based Load Shedding for Emergency Voltage Control

An accelerated DRL algorithm named PARS was developed and tailored for power system voltage stability control via load shedding; it features high scalability and is easy to tune, with only five main hyperparameters.

Adaptive Power System Emergency Control Using Deep Reinforcement Learning

An open-source platform named Reinforcement Learning for Grid Control (RLGC), the first of its kind, has been designed to assist the development and benchmarking of DRL algorithms for power system control.

Data-Efficient Hierarchical Reinforcement Learning

This paper studies how to develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control.

Load Shedding Scheme with Deep Reinforcement Learning to Improve Short-term Voltage Stability

In this paper, a novel load shedding scheme against voltage instability is developed with deep reinforcement learning (DRL). Convolutional neural networks (CNNs) are chosen to automatically learn the features.

A Hierarchical Data-Driven Method for Event-Based Load Shedding Against Fault-Induced Delayed Voltage Recovery in Power Systems

A hierarchical data-driven method is proposed for the online prediction of event-based load shedding (ELS) against fault-induced delayed voltage recovery, which is very accurate in prediction with excellent control performance.

Continuous control with deep reinforcement learning

This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.

Some Reflections on Model Predictive Control of Transmission Voltages

This paper deals with the application of algorithms inspired by model predictive control to solve voltage-related power system control problems in both normal and emergency operating conditions.

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor

This paper proposes soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework, which achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods.

Simple random search of static linear policies is competitive for reinforcement learning

This work introduces a model-free random search algorithm for training static, linear policies for continuous control problems and evaluates the performance of this method over hundreds of random seeds and many different hyperparameter configurations for each benchmark task.
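The method in this reference is simple enough to sketch end to end: perturb the linear policy's parameters in random directions, compare episode returns at the plus and minus perturbations, and step the parameters along the return difference. The sketch below runs it on a hypothetical one-dimensional regulation task (not from the paper), with the reward-standard-deviation step scaling that the ARS paper uses; all constants are illustrative.

```python
import numpy as np

def rollout(m, horizon=200):
    """Episode return for a toy 1-D regulation task under linear policy u = m*x.
    The environment (x <- x + 0.1*u, reward -x^2) is a hypothetical stand-in,
    chosen only to make the sketch runnable."""
    x, total = 1.0, 0.0
    for _ in range(horizon):
        u = m * x
        x = x + 0.1 * u
        total -= x * x
    return total

def random_search(steps=50, n_dirs=8, nu=0.05, alpha=0.05, seed=0):
    """Basic random search over a static linear policy: probe the return at
    m +/- nu*delta for random directions delta, then move m along the return
    differences, scaled by the standard deviation of the probed returns."""
    rng = np.random.default_rng(seed)
    m = 0.0
    for _ in range(steps):
        deltas = rng.standard_normal(n_dirs)
        r_plus = np.array([rollout(m + nu * d) for d in deltas])
        r_minus = np.array([rollout(m - nu * d) for d in deltas])
        sigma = np.concatenate([r_plus, r_minus]).std() + 1e-8
        m += alpha / (n_dirs * sigma) * ((r_plus - r_minus) @ deltas)
    return m

m = random_search()  # learns a stabilizing negative feedback gain
```

No gradients, no value function, no neural network: the only trainable object is the linear map itself, which is why the paper can afford to evaluate hundreds of seeds per benchmark.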

Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms

This chapter reviews the theoretical results of MARL algorithms mainly within two representative frameworks, Markov/stochastic games and extensive-form games, in accordance with the types of tasks they address, i.e., fully cooperative, fully competitive, and a mix of the two.