Twin Neural Network Regression

@article{Wetzel2022TwinNN,
  title={Twin Neural Network Regression},
  author={Sebastian Johann Wetzel and Kevin Ryczko and Roger G. Melko and Isaac Tamblyn},
  journal={ArXiv},
  year={2022},
  volume={abs/2012.14873}
}
We introduce twin neural network (TNN) regression. This method predicts differences between the target values of two different data points rather than the targets themselves. The solution of a traditional regression problem is then obtained by averaging over an ensemble of all predicted differences between the targets of an unseen data point and all training data points. Whereas ensembles are normally costly to produce, TNN regression intrinsically creates an ensemble of predictions of twice… 
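
The aggregation step can be written down directly from the description above; a minimal sketch, assuming a trained difference model diff_model(a, b) that approximates y(a) - y(b) (name and signature are illustrative):

```python
import numpy as np

def tnnr_predict(diff_model, x_new, X_train, y_train):
    """Aggregate a TNNR prediction for one unseen point (sketch).

    diff_model(a, b) is assumed to return a predicted difference
    F(a, b) ~ y(a) - y(b); this is an illustrative interface, not
    a specific implementation from the paper.
    """
    # Predicted differences between the unseen point and every training point.
    diffs = np.array([diff_model(x_new, x_i) for x_i in X_train])
    # Each training target plus its predicted difference is one estimate of
    # y(x_new); averaging over all of them gives the ensembled prediction.
    estimates = y_train + diffs
    return estimates.mean(), estimates.std()
```

The spread of the per-pair estimates comes at no extra cost and can serve as a rough uncertainty indicator.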


Twin neural network regression is a semi-supervised regression algorithm

Semi-supervised training of twin neural network regression significantly improves TNNR performance, which is already state of the art.
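
One way unlabeled data can enter such a scheme (a sketch under the assumption that the model predicts pairwise target differences, not a verbatim reproduction of that paper's loss) is a loop-consistency penalty: predicted differences around any closed loop of points must sum to zero, which requires no labels.

```python
import torch

def loop_consistency_loss(diff_model, x1, x2, x3):
    """Unsupervised regularizer for a difference model (illustrative).

    Since diff_model(a, b) should approximate y(a) - y(b), the predicted
    differences around any closed loop a -> b -> c -> a must sum to zero,
    which can be enforced on unlabeled points without knowing any targets.
    """
    loop_sum = diff_model(x1, x2) + diff_model(x2, x3) + diff_model(x3, x1)
    return (loop_sum ** 2).mean()
```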

Applying the Case Difference Heuristic to Learn Adaptations from Deep Network Features

This paper investigates a two-phase process that combines deep learning for feature extraction with neural-network-based adaptation learning from the extracted features, and shows that the combined process can successfully learn adaptation knowledge applicable to nonsymbolic differences between cases.
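
A rough sketch of the two-phase idea, with placeholder names and layer sizes: a (frozen) feature extractor maps cases to embeddings, and a small adaptation network is trained on embedding differences to predict the corresponding change in the target value.

```python
import torch
import torch.nn as nn

class AdaptationNet(nn.Module):
    """Learns target adjustments from differences of extracted features (sketch)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feat_query, feat_case):
        # Predict how a retrieved case's target should be adapted for the query.
        return self.mlp(feat_query - feat_case)
```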

References

Showing 1-10 of 55 references.

Snapshot Ensembles: Train 1, get M for free

This paper proposes a method that achieves the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost: a single neural network is trained such that it converges to several local minima along its optimization path, and the model parameters are saved at each.
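
A compact illustration of the recipe, assuming a PyTorch-style optimizer and a user-supplied train_one_epoch function; the cosine schedule and cycle length are placeholders:

```python
import copy
import math

def snapshot_ensemble(model, optimizer, train_one_epoch, epochs=50, cycles=5, lr_max=0.1):
    """Collect several snapshots from a single training run (sketch).

    The learning rate follows a cosine schedule within each cycle; a copy of
    the model is saved at the end of every cycle, when it sits near a local
    minimum of the loss.
    """
    snapshots, epochs_per_cycle = [], epochs // cycles
    for epoch in range(epochs):
        t = (epoch % epochs_per_cycle) / epochs_per_cycle
        lr = 0.5 * lr_max * (1 + math.cos(math.pi * t))  # anneal within the cycle
        for group in optimizer.param_groups:
            group["lr"] = lr
        train_one_epoch(model, optimizer)
        if (epoch + 1) % epochs_per_cycle == 0:
            snapshots.append(copy.deepcopy(model))  # one ensemble member per cycle
    return snapshots
```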

Neural Network Ensembles, Cross Validation, and Active Learning

It is shown how to estimate the optimal weights of the ensemble members using unlabeled data and how the ambiguity can be used to select new training data to be labeled in an active learning scheme.
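
The ambiguity mentioned above is the weighted spread of the member predictions around the ensemble output; it needs no labels, so it can be estimated from unlabeled data. A tiny numeric sketch with made-up predictions and weights:

```python
import numpy as np

# Predictions of three ensemble members on two unlabeled points.
preds = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.3]])
weights = np.array([0.5, 0.3, 0.2])          # ensemble weights, sum to 1

ensemble = weights @ preds                   # weighted ensemble prediction
# Ambiguity: weighted variance of the members around the ensemble output.
# Points where the members disagree most are natural candidates to label next.
ambiguity = weights @ ((preds - ensemble) ** 2)
print(ensemble, ambiguity)
```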

Learning with Pseudo-Ensembles

A novel regularizer is presented that makes the behavior of a pseudo-ensemble robust with respect to the noise process generating it; it naturally extends to the semi-supervised setting, where it produces state-of-the-art results.
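
A common way to instantiate such a regularizer (a sketch, not the paper's exact formulation) is to penalize disagreement between two noise-perturbed passes of the same network on the same input, which applies equally to unlabeled data:

```python
import torch
import torch.nn as nn

def pseudo_ensemble_penalty(model, x):
    """Consistency penalty between two dropout-perturbed passes (illustrative)."""
    model.train()                 # keep dropout active so each pass is a "child" model
    out_a, out_b = model(x), model(x)
    return nn.functional.mse_loss(out_a, out_b)
```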

Addressing uncertainty in atomistic machine learning.

This work addresses the types of errors that might arise in atomistic machine learning and the unique aspects of atomistic simulations that make machine learning challenging, and highlights how uncertainty analysis can be used to assess the validity of machine-learning predictions.

A quantitative uncertainty metric controls error in neural network-driven chemical discovery.

Tightening latent distance cutoffs systematically drives down predicted model errors below training errors, thus enabling predictive error control in chemical discovery or identification of useful data points for active learning.
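
A hedged sketch of the filtering idea: measure each test point's distance to the training set in the model's latent space and only trust predictions below a chosen cutoff (the latent features and cutoff value are assumptions here):

```python
import numpy as np

def confident_mask(latent_test, latent_train, cutoff):
    """Keep only test points whose nearest training neighbor in latent space
    is closer than `cutoff`; tightening the cutoff trades coverage for accuracy."""
    d = np.linalg.norm(latent_test[:, None, :] - latent_train[None, :, :], axis=-1)
    return d.min(axis=1) < cutoff
```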

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

A new theoretical framework is developed that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing either computational efficiency or test accuracy.
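
In practice this amounts to keeping dropout active at prediction time and averaging several stochastic forward passes; a minimal PyTorch sketch, with the number of samples chosen arbitrarily:

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: mean and spread over stochastic forward passes."""
    model.train()                                   # leave dropout switched on
    samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # predictive mean and uncertainty
```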

Siamese Neural Networks for One-Shot Image Recognition

A method for learning siamese neural networks that employ a unique structure to naturally rank similarity between inputs, achieving strong results that exceed those of other deep learning models and near state-of-the-art performance on one-shot classification tasks.
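
The structure in question is a pair of weight-sharing encoders whose embeddings are compared; a minimal sketch with placeholder layer sizes:

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """Twin encoders with shared weights; outputs a similarity score in (0, 1)."""
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)           # same weights for both inputs
        return torch.sigmoid(self.head(torch.abs(e1 - e2)))   # component-wise L1 distance
```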

Fast and Accurate Uncertainty Estimation in Chemical Machine Learning.

An inexpensive and reliable estimate of the uncertainty associated with the predictions of a machine-learning model of atomic and molecular properties is presented, based on resampling: multiple models are generated by subsampling the same training data.
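
A small sketch of the resampling estimate, assuming a scikit-learn-style regressor; committee size and subsample fraction are arbitrary choices:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge

def subsample_committee(X, y, X_test, n_models=8, frac=0.7, base=Ridge()):
    """Train models on random subsamples of the same data; the spread of their
    predictions serves as an inexpensive uncertainty estimate."""
    rng, preds = np.random.default_rng(0), []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
        preds.append(clone(base).fit(X[idx], y[idx]).predict(X_test))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```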

Hydra: Preserving Ensemble Diversity for Model Distillation

This work proposes a distillation method based on a single multi-headed neural network that improves distillation performance in classification and regression settings while capturing the uncertainty behaviour of the original ensemble over both in-domain and out-of-distribution tasks.
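
A hedged sketch of such a multi-headed student: a shared body feeds one head per ensemble member, and each head would be trained to match its member's outputs (sizes and member count are placeholders):

```python
import torch
import torch.nn as nn

class HydraStudent(nn.Module):
    """Shared body with one head per distilled ensemble member (illustrative)."""
    def __init__(self, in_dim, n_heads, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_heads)])

    def forward(self, x):
        z = self.body(x)                                   # shared computation, done once
        return torch.stack([h(z) for h in self.heads], 0)  # one output per head
```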

Deep learning and density-functional theory

We show that deep neural networks can be integrated into, or fully replace, the Kohn-Sham density functional theory (DFT) scheme for multielectron systems in simple harmonic oscillator and random
...