Corpus ID: 239998322

Failure-averse Active Learning for Physics-constrained Systems

Cheolhei Lee, Xing Wang, Jianguo Wu, Xiaowei Yue
Active learning is a subfield of machine learning devised for the design and modeling of systems with highly expensive sampling costs. Industrial and engineering systems are generally subject to physics constraints whose violation may induce fatal failures, yet such constraints are frequently underestimated in active learning. In this paper, we develop a novel active learning method that avoids failures by considering the implicit physics constraints that govern the system. The…
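The safety-aware selection step sketched in the abstract can be illustrated in a few lines: one Gaussian process surrogate ranks candidate inputs by predictive variance (informativeness), while a second GP models the constraint and screens out any candidate whose upper confidence bound exceeds a safety limit. This is a minimal illustrative sketch under assumed names and a toy constraint, not the paper's actual method.

```python
# Hypothetical sketch of safety-constrained active learning with GP
# surrogates (illustrative only; not the paper's exact algorithm).
import numpy as np

def rbf_kernel(A, B, length=0.3):
    # Squared-exponential kernel between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP posterior mean and variance at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mu, np.maximum(var, 0.0)

def pick_safe_query(X, y, g, Xc, limit=0.8, beta=2.0):
    """Return the index of the most informative candidate deemed safe.

    X, y : observed inputs and responses; g : observed constraint values.
    Xc   : candidate inputs; a candidate is safe when the upper
           confidence bound of its predicted constraint is below `limit`.
    """
    _, var_f = gp_posterior(X, y, Xc)      # informativeness of candidates
    mu_g, var_g = gp_posterior(X, g, Xc)   # surrogate for the constraint
    safe = mu_g + beta * np.sqrt(var_g) < limit
    if not safe.any():
        return None                        # no provably safe query exists
    idx = np.where(safe)[0]
    return idx[np.argmax(var_f[idx])]

# Toy demonstration: the constraint g(x) = x makes the region near
# x = 1 unsafe, so queries should concentrate away from it.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (6, 1))
y = np.sin(6 * X[:, 0])
g = X[:, 0]
Xc = np.linspace(0, 1, 50)[:, None]
q = pick_safe_query(X, y, g, Xc)
```

The key design choice, shared by the safe-exploration papers listed below, is that the safety screen uses a confidence bound rather than the posterior mean alone, so candidates in poorly explored regions are treated as unsafe until the constraint surrogate becomes confident there.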


Safe Exploration for Active Learning with Gaussian Processes
This paper proposes an approach for learning data-based regression models of technical and industrial systems using a limited budget of measurements while exploring new data regions, based on Gaussian processes (GPs): a problem-specific GP classifier identifies safe and unsafe regions, and a differential entropy criterion guides exploration of relevant data regions.
Partitioned Active Learning for Heterogeneous Systems
This work proposes the partitioned active learning strategy established upon partitioned GP (PGP) modeling, which seeks the most informative design point for PGP modeling systematically in two steps and provides numerical remedies to alleviate the computational cost of active learning.
REIF: A novel active-learning function toward adaptive Kriging surrogate models for structural reliability analysis
The numerical validity of the proposed active-learning function, in conjunction with an adaptively truncated sampling region and low-discrepancy samples, is demonstrated on several structural reliability examples from the literature.
Stagewise Safe Bayesian Optimization with Gaussian Processes
An efficient safe Bayesian optimization algorithm is developed, StageOpt, that separates safe region expansion and utility function maximization into two distinct stages and provides theoretical guarantees for both the satisfaction of safety constraints as well as convergence to the optimal utility value.
Safe Exploration for Interactive Machine Learning
A novel framework is introduced that renders any existing unsafe IML algorithm safe and works as an add-on that takes suggested decisions as input and exploits regularity assumptions in terms of a Gaussian process prior in order to efficiently learn about their safety.
Modeling an Augmented Lagrangian for Blackbox Constrained Optimization
This hybridization presents a simple yet effective solution that allows existing objective-oriented statistical approaches, like those based on Gaussian process surrogates and expected improvement heuristics, to be applied to the constrained setting with minor modification.
Active Learning for Gaussian Process Considering Uncertainties With Application to Shape Control of Composite Fuselage
Two new active learning algorithms for the Gaussian process with uncertainties are proposed, which take a variance-based information measure and the Fisher information measure into consideration; they can incorporate the impact of uncertainties and realize better prediction performance.
Optimization Under Unknown Constraints
A new integrated improvement criterion is proposed to recognize that responses from inputs that violate the constraint may still be informative about the function, and thus could potentially be useful in the optimization.
Predictive Entropy Search for Bayesian Optimization with Unknown Constraints
This paper presents a new information-based method called Predictive Entropy Search with Constraints (PESC), and shows that it compares favorably to EI-based approaches on synthetic and benchmark problems, as well as several real-world examples.
Sequential Laplacian regularized V-optimal design of experiments for response surface modeling of expensive tests: An application in wind tunnel testing
This article proposes an active learning methodology based on the fundamental idea of adding a ridge and a Laplacian penalty to the V-optimal design to shrink the weight of less significant factors, while looking for the most informative settings to be tested.