Corpus ID: 244954018

RID-Noise: Towards Robust Inverse Design under Noisy Environments

@article{Yang2021RIDNoiseTR,
  title={RID-Noise: Towards Robust Inverse Design under Noisy Environments},
  author={Jia-Qi Yang and Ke-Bin Fan and Hao Ma and De-Chuan Zhan},
  journal={arXiv preprint arXiv:2112.03912},
  year={2021}
}
From an engineering perspective, a design should not only perform well under ideal conditions but should also resist noise. Such a design methodology, known as robust design, has been widely adopted in industry for product quality control. However, classic robust design requires many evaluations for a single design target, and the results of these evaluations cannot be reused for a new target. To achieve data-efficient robust design, we propose Robust Inverse Design under…
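To see why classic robust design is evaluation-hungry, consider a minimal Monte-Carlo sketch: each candidate design is scored by repeatedly perturbing it with noise and re-running the forward model, and none of those evaluations carry over to a different design target. The forward model, noise level, and candidate grid below are all hypothetical toy choices for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    # Toy stand-in for an expensive simulation or experiment
    # (hypothetical; real forward models are far costlier).
    return np.sin(3.0 * x) + 0.1 * x ** 2

def robust_score(x, noise_std=0.05, n_samples=200):
    # Classic robust design: estimate how a single candidate design x
    # behaves under input noise by brute-force sampling. Every candidate
    # costs n_samples forward evaluations.
    noisy_inputs = x + rng.normal(0.0, noise_std, size=n_samples)
    outputs = forward(noisy_inputs)
    return outputs.mean(), outputs.std()

# Sweep a grid of candidate designs and pick the one whose output is
# least sensitive to input noise (smallest output spread).
candidates = np.linspace(-2.0, 2.0, 41)
scores = [robust_score(x) for x in candidates]
best = candidates[int(np.argmin([s for _, s in scores]))]
print(f"most robust candidate: {best:.2f}")
```

With 41 candidates and 200 noise samples each, this sweep already costs 8200 forward evaluations for one target, which is the inefficiency a data-efficient, learning-based inverse-design method aims to avoid.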

