Airfoil Shape Optimization using Deep Q-Network
@article{Rout2019AirfoilSO,
  title={Airfoil Shape Optimization using Deep Q-Network},
  author={Siddharth Rout and Chao Lin},
  journal={ArXiv},
  year={2019},
  volume={abs/2211.17189}
}
The document explores the feasibility of using reinforcement learning for drag minimization and lift maximization of standard two-dimensional airfoils. A deep Q-network (DQN) is trained over a Markov decision process (MDP) to learn the optimal shape by learning the best sequence of changes to an initial shape. The airfoil profile is generated from Bézier control points, and the drag and lift values are computed from the coefficient-of-pressure distribution along the profile, obtained with the XFOIL potential-flow solver…
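As a minimal sketch of the shape parameterization described above: a Bézier curve can be evaluated from its control points with the Bernstein polynomial form. The control-point values below are hypothetical placeholders for an airfoil-like upper surface; the paper's actual number and placement of control points are not specified here.

```python
from math import comb

def bezier_curve(control_points, n_samples=100):
    """Evaluate a Bezier curve from (x, y) control points using the
    Bernstein polynomial form: B(t) = sum_k C(n,k) (1-t)^(n-k) t^k P_k."""
    n = len(control_points) - 1
    curve = []
    for i in range(n_samples):
        t = i / (n_samples - 1)
        x = sum(comb(n, k) * (1 - t) ** (n - k) * t ** k * px
                for k, (px, _) in enumerate(control_points))
        y = sum(comb(n, k) * (1 - t) ** (n - k) * t ** k * py
                for k, (_, py) in enumerate(control_points))
        curve.append((x, y))
    return curve

# Hypothetical control points for an airfoil-like upper surface,
# running from the leading edge (0, 0) to the trailing edge (1, 0).
upper = [(0.0, 0.0), (0.0, 0.08), (0.4, 0.10), (1.0, 0.0)]
profile = bezier_curve(upper, n_samples=50)
```

In an RL setup of this kind, each action would perturb one or more control points and the resulting profile would be passed to the flow solver for reward evaluation; the curve interpolates its first and last control points, so the leading and trailing edges stay fixed.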
References
Playing Atari with Deep Reinforcement Learning
- Computer Science, ArXiv
- 2013
This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
Residual Algorithms: Reinforcement Learning with Function Approximation
- Computer Science, ICML
- 1995
XFOIL: An Analysis and Design System for Low Reynolds Number Airfoils
- Engineering, Physics
- 1989
Calculation procedures for viscous/inviscid analysis and mixed-inverse design of subcritical airfoils are presented. An inviscid linear-vorticity panel method with a Karman-Tsien compressibility…
Learning to Predict by the Methods of Temporal Differences
- Psychology, Machine Learning
- 1988
This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior – and proves their convergence and optimality for special cases and relation to supervised-learning methods.