# Hot-Starting the Ac Power Flow with Convolutional Neural Networks

@article{Chen2020HotStartingTA,
  title={Hot-Starting the Ac Power Flow with Convolutional Neural Networks},
  author={Liang-Hung Chen and Joseph Euzebe Tate},
  journal={ArXiv},
  year={2020},
  volume={abs/2004.09342}
}

Obtaining good initial conditions for the Newton-Raphson (NR) based ac power flow (ACPF) problem can be a very difficult task. In this paper, we propose a framework that uses dc power flow (DCPF) results and one-dimensional convolutional neural networks (1D CNNs) to obtain initial bus voltage magnitudes and phase angles that reduce the number of iterations and the solution time of the NR-based ACPF. We generate the dataset used to train the 1D CNNs by sampling from a distribution of load…
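The paper's exact architecture is not reproduced on this page; the following is a minimal numpy sketch of the general idea, under assumed inputs and shapes: DCPF bus angles and net injections form the input channels of a small 1D CNN whose (here untrained) output corrects a flat start into a hot-start guess for NR. The channel choice, layer sizes, and the 14-bus example are illustrative assumptions, not the authors' design.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid-mode 1D convolution: x (c_in, L), w (c_out, c_in, k), b (c_out,)."""
    c_out, c_in, k = w.shape
    L_out = x.shape[1] - k + 1
    y = np.empty((c_out, L_out))
    for o in range(c_out):
        for i in range(L_out):
            y[o, i] = np.sum(w[o] * x[:, i:i + k]) + b[o]
    return y

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
n_bus = 14                              # hypothetical 14-bus test case
theta_dc = rng.normal(0.0, 0.1, n_bus)  # DCPF bus angles (rad)
p_net = rng.normal(0.0, 0.5, n_bus)     # net active-power injections (p.u.)
x = np.stack([theta_dc, p_net])         # 2 input channels over the bus axis

# Two small conv layers; length is preserved via zero padding at the edges.
k = 3
pad = k // 2
w1, b1 = rng.normal(0, 0.1, (8, 2, k)), np.zeros(8)
w2, b2 = rng.normal(0, 0.1, (2, 8, k)), np.zeros(2)

h = elu(conv1d(np.pad(x, ((0, 0), (pad, pad))), w1, b1))
out = conv1d(np.pad(h, ((0, 0), (pad, pad))), w2, b2)

# Hot-start guess: flat start corrected by the network's per-bus output.
v_init = 1.0 + out[0]            # voltage magnitudes (p.u.)
theta_init = theta_dc + out[1]   # voltage angles (rad)
print(v_init.shape, theta_init.shape)  # (14,) (14,)
```

In a trained version, the weights would be fitted so that `(v_init, theta_init)` lies close to the ACPF solution, shrinking the NR iteration count relative to a flat or DC start.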

## 13 Citations

### DeepOPF: A Feasibility-Optimized Deep Neural Network Approach for AC Optimal Power Flow Problems

- Computer Science
- IEEE Systems Journal
- 2022

An efficient Deep Neural Network approach, DeepOPF, is developed to ensure the feasibility of the generated solution of the AC-OPF problem, by employing a penalty approach in training the DNN.

### Learning-based AC-OPF Solvers on Realistic Network and Realistic Loads

- Computer Science
- ArXiv
- 2022

An AC-OPF formulation-ready dataset called TAS-97 is constructed from Tasmania’s electricity network, containing realistic network information and realistic bus loads; the loads are found to be correlated between buses and to show signs of an underlying multivariate normal distribution.

### Emulating AC OPF Solvers With Neural Networks

- Computer Science
- IEEE Transactions on Power Systems
- 2022

A neural network is trained to emulate an iterative solver in order to cheaply and approximately iterate towards the optimum, and it is shown that the proposed method can find “difficult” AC OPF solutions that cause flat-start or DC-warm started algorithms to diverge.

### Spatial Network Decomposition for Fast and Scalable AC-OPF Learning

- Computer Science
- IEEE Transactions on Power Systems
- 2022

A novel machine-learning approach for predicting AC-OPF solutions that features fast and scalable training by exploiting a spatial decomposition of the power network, viewed as a set of regions.

### A Sample-Efficient OPF Learning Method Based on Annealing Knowledge Distillation

- Computer Science
- IEEE Access
- 2022

This work proposes a sample-efficient OPF learning method that maximizes the utilization of limited samples: the OPF task is decomposed before knowledge distillation to reduce deep-learning complexity, and a focal loss function and teacher-annealing strategy are adopted.

### Emulating AC OPF solvers for Obtaining Sub-second Feasible, Near-Optimal Solutions

- Computer Science
- 2020

A neural network is trained to emulate an iterative solver in order to cheaply and approximately iterate towards the optimum, and it is shown that the proposed method can find “difficult” AC OPF solutions that cause DC-warm-started algorithms to diverge.

### Deep learning architectures for inference of AC-OPF solutions

- Computer Science
- ArXiv
- 2020

A systematic comparison between neural network architectures for inference of AC-OPF solutions is presented, and the efficacy of leveraging network topology is demonstrated by constructing abstract representations of electrical grids in the graph domain, for both convolutional and graph neural networks.

### Confidence-Aware Graph Neural Networks for Learning Reliability Assessment Commitments

- Computer Science
- ArXiv
- 2022

Experimental results on exact RAC formulations used by the Midcontinent Independent System Operator (MISO) and an actual transmission network show that the RACLearn framework can speed up RAC optimization by factors ranging from 2 to 4 with negligible loss in solution quality.

### A Fixed-Point Algorithm for the AC Power Flow Problem

- Engineering
- ArXiv
- 2022

This paper presents an algorithm that solves the AC power flow problem for balanced, three-phase transmission systems at steady state. The algorithm extends the “fixed-point power flow” algorithm in…
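The specific fixed-point formulation the paper extends is not reproduced on this page; as an illustration of the general idea only, here is a toy two-bus fixed-point power flow iteration with made-up impedance and load values:

```python
# Illustrative only: a two-bus fixed-point power flow iteration.
# Slack voltage V_s, line impedance Z, and load S are hypothetical values.
V_s = 1.0 + 0.0j          # slack bus voltage (p.u.)
Z = 0.01 + 0.05j          # line impedance (p.u.)
S = 0.5 + 0.2j            # complex load at the receiving bus (p.u.)

V = V_s                   # flat start
for _ in range(50):
    # Fixed-point map: the voltage drop caused by the load current (S/V)*
    V = V_s - Z * (S / V).conjugate()

# At the fixed point, the power delivered to the bus matches S exactly.
S_check = V * ((V_s - V) / Z).conjugate()
print(abs(S_check - S))   # ~0
```

The map is a contraction for light loading, so repeated substitution converges to the power flow solution without forming a Jacobian, in contrast to Newton-Raphson.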

### Truncation Error Analysis of Linear Power Flow Model

- Engineering
- 2020 IEEE Sustainable Power and Energy Conference (iSPEC)
- 2020

The nonlinearity of the power flow equation is the significant cause of the non-convexity of optimization problems in the power system. The existing linear power flow model is derived based on…

## References

Showing 1-10 of 33 references.

### DeepOPF: Deep Neural Network for DC Optimal Power Flow

- Computer Science
- 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm)
- 2019

Simulation results of IEEE test cases show that DeepOPF always generates feasible solutions with negligible optimality loss, while speeding up the computing time by two orders of magnitude as compared to conventional approaches implemented in a state-of-the-art solver.

### Optimal Power Flow Using Graph Neural Networks

- Computer Science
- ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2020

Optimal power flow (OPF) is one of the most important optimization problems in the energy industry. In its simplest form, OPF attempts to find the optimal power that the generators within the grid…

### ImageNet classification with deep convolutional neural networks

- Computer Science
- Commun. ACM
- 2012

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into 1000 different classes, employing a recently developed regularization method called "dropout" that proved to be very effective.

### Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

- Computer Science
- ICLR
- 2016

The "exponential linear unit" (ELU) speeds up learning in deep neural networks and leads to higher classification accuracies and significantly better generalization performance than ReLUs and LReLUs on networks with more than five layers.

### Revisiting Small Batch Training for Deep Neural Networks

- Computer Science
- ArXiv
- 2018

The collected experimental results show that increasing the mini-batch size progressively reduces the range of learning rates that provide stable convergence and acceptable test performance, which contrasts with recent work advocating the use of mini-batch sizes in the thousands.

### Understanding the difficulty of training deep feedforward neural networks

- Computer Science
- AISTATS
- 2010

The objective is to better understand why standard gradient descent from random initialization performs so poorly with deep neural networks, to better understand recent relative successes, and to help design better algorithms in the future.
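This reference is the source of the "normalized" (Xavier/Glorot) initialization, which draws weights uniformly on (-sqrt(6/(fan_in + fan_out)), +sqrt(6/(fan_in + fan_out))) so that activation and gradient variances stay roughly constant across layers. A short sketch (function name and seed are illustrative):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    """Normalized ('Xavier') initialization from Glorot & Bengio (2010)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(400, 200)
print(W.shape, W.var())  # empirical variance is close to 2 / (fan_in + fan_out)
```

The uniform bound is chosen so that Var(W) = 2 / (fan_in + fan_out), the compromise between preserving forward-activation variance (1 / fan_in) and backward-gradient variance (1 / fan_out).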

### PowerAI DDL

- Computer Science
- ArXiv
- 2017

A software-hardware co-optimized distributed Deep Learning system that can achieve near-linear scaling up to hundreds of GPUs using a multi-ring communication pattern that provides a good tradeoff between latency and bandwidth and adapts to a variety of system configurations.

### Learning an Optimally Reduced Formulation of OPF through Meta-optimization

- Computer Science
- ArXiv
- 2019

A neural network that predicts the binding status of the system's constraints is used to generate an initial reduced OPF problem, defined by removing the predicted non-binding constraints; the meta-optimization leads to a classifier that significantly outperforms ones trained with conventional loss functions.

### Artificial neural network based load flow solution of Saudi national grid

- Engineering
- 2017 Saudi Arabia Smart Grid (SASG)
- 2017

Investigations reveal that the proposed ANN-based load flow approach is a potential candidate for on-line applications in the load dispatch center.

### Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks

- Computer Science
- ICML
- 2018

The experiments show that layer-wise parallelism outperforms current parallelization approaches: it increases training speed, reduces communication costs, and achieves better scalability to multiple GPUs, while maintaining the same network accuracy.