Airfoil Shape Optimization using Deep Q-Network

@article{Rout2019AirfoilSO,
  title={Airfoil Shape Optimization using Deep Q-Network},
  author={Siddharth Rout and Chao Lin},
  journal={ArXiv},
  year={2019},
  volume={abs/2211.17189}
}
The paper explores the feasibility of using reinforcement learning for drag minimization and lift maximization of standard two-dimensional airfoils. A deep Q-network (DQN) is used over a Markov decision process (MDP) to learn the optimal shape by learning the best changes to apply to the initial shape. The airfoil profile is generated from Bézier control points. The drag and lift values are calculated from the pressure coefficient values along the profile, which are computed with the XFOIL potential flow solver…
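To illustrate the shape-parameterization step, here is a minimal sketch (not code from the paper) of evaluating a Bézier curve from control points via the Bernstein basis; the control-point coordinates below are hypothetical, chosen only to resemble an airfoil's upper surface:

```python
import numpy as np
from math import comb

def bezier_curve(control_points, n_samples=100):
    """Evaluate the Bezier curve defined by `control_points` (shape (k+1, 2))
    at `n_samples` parameter values t in [0, 1] via Bernstein polynomials."""
    pts = np.asarray(control_points, dtype=float)
    k = len(pts) - 1  # curve degree
    t = np.linspace(0.0, 1.0, n_samples)
    # Bernstein basis: B_{i,k}(t) = C(k, i) * t^i * (1 - t)^(k - i)
    basis = np.stack([comb(k, i) * t**i * (1 - t)**(k - i)
                      for i in range(k + 1)])
    return basis.T @ pts  # (n_samples, 2) points along the curve

# Hypothetical control points for a crude upper-surface arc (illustrative only)
upper = [(0.0, 0.0), (0.1, 0.08), (0.5, 0.10), (1.0, 0.0)]
curve = bezier_curve(upper, n_samples=50)
```

In a setup like the paper's, the RL agent's actions would perturb the control-point coordinates, and the resulting curve would be handed to the flow solver for evaluation.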

References

Playing Atari with Deep Reinforcement Learning

This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
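To make the Q-learning machinery behind DQN concrete, here is a toy sketch of tabular Q-learning on an invented 5-state chain; DQN replaces the table with a neural network, but the Bellman backup is the same. All environment details here are hypothetical, for illustration only:

```python
import numpy as np

# Toy chain: states 0..4, actions 0 (left) / 1 (right),
# reward 1.0 only on reaching the terminal state 4.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for _ in range(200):
    s = int(rng.integers(n_states - 1))  # random non-terminal start
    done = False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps \
            else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Bellman backup: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max())
                            - Q[s, a])
        s = s2
```

After training, the greedy policy steps right toward the reward; in the airfoil setting the "states" would be shape parameters and the reward a function of lift and drag.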

XFOIL: An Analysis and Design System for Low Reynolds Number Airfoils

Calculation procedures for viscous/inviscid analysis and mixed-inverse design of subcritical airfoils are presented. An inviscid linear-vorticity panel method with a Karman-Tsien compressibility…

Learning to Predict by the Methods of Temporal Differences

This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior – and proves their convergence and optimality for special cases and relation to supervised-learning methods.
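The temporal-difference idea can be shown on the classic 5-state random-walk prediction task (a standard illustration in the TD literature, not an example taken from this paper):

```python
import random

# Random walk on states 0..4; 0 and 4 are terminal, reward 1.0 only
# when terminating at the right end. True values of the interior
# states 1, 2, 3 are 0.25, 0.5, 0.75.
random.seed(0)
alpha, gamma = 0.1, 1.0
V = [0.0] * 5  # value estimates; terminal states stay at 0

for _ in range(2000):
    s = 2  # episodes start in the middle
    while s not in (0, 4):
        s2 = s + random.choice((-1, 1))
        r = 1.0 if s2 == 4 else 0.0
        # TD(0) target: reward plus bootstrapped value of the next state
        target = r if s2 in (0, 4) else r + gamma * V[s2]
        V[s] += alpha * (target - V[s])
        s = s2
```

Each update nudges the current state's estimate toward the bootstrapped one-step target rather than waiting for the final outcome, which is the incremental-prediction idea the article formalizes.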