Corpus ID: 233481371

Rapid Aerodynamic Shape Optimization Under Parametric and Turbulence Model Uncertainty: A Stochastic Gradient Approach

@inproceedings{Jofre2021RapidAS,
  title={Rapid Aerodynamic Shape Optimization Under Parametric and Turbulence Model Uncertainty: A Stochastic Gradient Approach},
  author={Llu{\'i}s Jofre and Alireza Doostan},
  year={2021}
}
Aerodynamic optimization is ubiquitous in the design of most engineering systems interacting with fluids. A common approach is to optimize a performance function, subject to constraints, defined by the choice of an aerodynamic model, e.g., a RANS turbulence model, at nominal operating conditions. Practical experience indicates that such a deterministic, i.e., single-point, approach may result in considerably sub-optimal designs when the adopted aerodynamic model does not lead to accurate…
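The stochastic gradient approach named in the title can be illustrated on a toy problem: instead of optimizing at a single nominal parameter value, each iteration samples the uncertain parameter and steps along an unbiased gradient estimate of the *expected* objective. The objective, the parameter distribution, and the step-size schedule below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy objective: performance depends on design x and an
# uncertain parameter theta; we minimize the expected loss
# E_theta[(x - theta)^2] rather than the single-point loss at a nominal theta.
def sample_grad(x, rng):
    theta = rng.normal(loc=1.0, scale=0.3)  # uncertain operating condition
    return 2.0 * (x - theta)                # unbiased gradient of E[(x - theta)^2]

x = 0.0
for t in range(1, 5001):
    x -= (0.5 / t) * sample_grad(x, rng)    # diminishing (Robbins-Monro) step size
# x approaches E[theta] = 1.0, the minimizer of the expected loss
```

With this schedule the iterates reduce to a running average of the sampled optima, which is why a diminishing step size is the standard choice for stochastic gradient methods.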


References

Showing 1-10 of 51 references

A One-Equation Turbulence Model for Aerodynamic Flows

Optimization Methods for Large-Scale Machine Learning

TLDR: A major theme of this study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient method has traditionally played a central role, while conventional gradient-based nonlinear optimization techniques typically falter, leading to a discussion about the next generation of optimization methods for large-scale machine learning.

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

TLDR: This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight.
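The adaptive proximal-function idea summarized above is best known in its diagonal form (AdaGrad): a per-coordinate learning rate scaled by the accumulated squared gradients. A minimal sketch on a hypothetical quadratic objective (the function, step size, and iteration count are illustrative):

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.1, steps=2000, eps=1e-8):
    """AdaGrad-style update: each coordinate's step shrinks with its
    own accumulated squared-gradient history."""
    x = np.asarray(x0, dtype=float)
    g_sq = np.zeros_like(x)                  # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(x)
        g_sq += g ** 2
        x -= lr * g / (np.sqrt(g_sq) + eps)  # per-coordinate step size
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
x_min = adagrad(lambda x: 2.0 * x, x0=[3.0, -2.0])
```

Coordinates with persistently large gradients automatically receive smaller steps, which is what makes a single global learning rate much easier to choose.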

Reliability-based topology optimization using stochastic gradients

TLDR: A stochastic gradient-based approach in which the probability of failure is estimated every few iterations using an efficient sampling strategy, overcoming the accuracy issues of traditional methods that rely on approximating the limit state function.
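The probability-of-failure estimate referenced above is, in its simplest form, a sampling (Monte Carlo) estimate of P[g(theta) < 0] for a limit state function g. A minimal sketch with a hypothetical Gaussian limit state (the function and distribution are stand-ins, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical limit state: failure occurs when g(theta) = 3 - theta < 0,
# with theta ~ N(0, 1). The true failure probability is P[theta > 3] ~ 1.35e-3.
theta = rng.standard_normal(100_000)
p_fail = np.mean(3.0 - theta < 0.0)  # Monte Carlo estimate of P[g < 0]
```

For rare events, plain Monte Carlo needs many samples per estimate, which is why efficient sampling strategies matter when the estimate is refreshed during an optimization loop.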

Bi-fidelity stochastic gradient descent for structural optimization under uncertainty

TLDR: The results show that the proposed bi-fidelity approach for the SGD method can improve convergence, and two analytical proofs establish the linear convergence of the two algorithms under appropriate assumptions.
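One common way to build a bi-fidelity gradient estimator, sketched here under illustrative assumptions (a cheap but biased low-fidelity gradient corrected by a few high-fidelity evaluations, in the spirit of a control variate; not necessarily the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gradients of E_theta[f(x, theta)]: the low-fidelity model
# is cheap but carries a constant bias relative to the high-fidelity one.
def g_hi(x, theta):                      # "expensive" high-fidelity gradient
    return 2.0 * (x - theta)

def g_lo(x, theta):                      # cheap, biased low-fidelity surrogate
    return 2.0 * (x - theta) + 0.5

x = 0.0
for t in range(1, 3001):
    thetas = rng.normal(1.0, 0.3, size=16)
    # Many cheap low-fidelity samples, corrected by two high-fidelity calls.
    corr = np.mean([g_hi(x, th) - g_lo(x, th) for th in thetas[:2]])
    g = np.mean(g_lo(x, thetas)) + corr
    x -= (0.5 / t) * g
# x approaches 1.0, the minimizer of the high-fidelity expected loss
```

The correction term removes the low-fidelity bias while most of the sampling cost stays on the cheap model.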

DAFoam: An Open-Source Adjoint Framework for Multidisciplinary Design Optimization with OpenFOAM

The adjoint method is an efficient approach for computing derivatives, allowing gradient-based optimization to handle systems parameterized by a large number of design variables. Despite this ad...
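The adjoint idea can be demonstrated on a small linear model problem: for a state equation A(p)u = b and objective J = c^T u, one extra linear solve (the adjoint solve) gives dJ/dp at a cost independent of the number of design parameters. The matrices below are random stand-ins, not a CFD discretization.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A0 = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned base operator
A1 = rng.standard_normal((n, n))                  # sensitivity of A to the design p
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def objective(p):
    u = np.linalg.solve(A0 + p * A1, b)  # state equation A(p) u = b
    return c @ u                         # scalar performance J = c^T u

def adjoint_grad(p):
    # One state solve and one adjoint solve, regardless of how many
    # design parameters enter A: solve A^T lam = c, then
    # dJ/dp = -lam^T (dA/dp) u.
    A = A0 + p * A1
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, c)
    return -lam @ (A1 @ u)
```

A finite-difference check on `objective` confirms the adjoint gradient; for a single scalar p this costs the same, but with thousands of design variables the adjoint approach still needs only the two solves.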
...