• Corpus ID: 2952130

Label-Free Supervision of Neural Networks with Physics and Domain Knowledge

@inproceedings{Stewart2017LabelFreeSO,
  title={Label-Free Supervision of Neural Networks with Physics and Domain Knowledge},
  author={Russell Stewart and Stefano Ermon},
  booktitle={AAAI},
  year={2017}
}
In many machine learning applications, labeled data is scarce and obtaining more labels is expensive. We introduce a new approach to supervising neural networks by specifying constraints that should hold over the output space, rather than direct examples of input-output pairs. These constraints are derived from prior domain knowledge, e.g., from known laws of physics. We demonstrate the effectiveness of this approach on real-world and simulated computer vision tasks. We are able to train a convolutional neural network to detect and track objects without any labeled examples. Our approach can significantly reduce the need for labeled training data, but introduces new…
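
To make the constraint-based supervision described in the abstract concrete, here is a minimal sketch (not the authors' released code) in which a hypothetical HeightNet maps video frames of a falling object to heights, and the only training signal is that the predicted heights should fit a free-fall parabola of fixed curvature. HeightNet, load_video_batches, the frame interval, and the learning rate are assumed placeholders, and the degenerate solutions the abstract alludes to are not handled here.

import torch

def freefall_constraint_loss(heights, dt=0.1, g=9.8):
    # Penalize deviation from the best-fit parabola y(t) = y0 + v0*t - 0.5*g*t^2.
    # Only y0 and v0 are free; the curvature is fixed by gravity, so no labels are needed.
    T = heights.shape[0]
    t = torch.arange(T, dtype=heights.dtype, device=heights.device) * dt
    target = heights + 0.5 * g * t ** 2              # should be affine in t if physics holds
    A = torch.stack([torch.ones_like(t), t], dim=1)  # design matrix [1, t]
    coef = torch.linalg.solve(A.T @ A, A.T @ target.unsqueeze(1))
    fitted = (A @ coef).squeeze(1) - 0.5 * g * t ** 2
    return torch.mean((heights - fitted) ** 2)

model = HeightNet()                                  # hypothetical CNN: frame -> scalar height
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for frames in load_video_batches():                  # hypothetical loader; frames: (T, C, H, W)
    heights = model(frames).squeeze(-1)              # (T,) predicted heights, no labels used
    loss = freefall_constraint_loss(heights)
    opt.zero_grad()
    loss.backward()
    opt.step()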

Citations

Learning with Weak Supervision from Physics and Data-Driven Constraints
TLDR
This work introduces a new approach to supervising learning algorithms without labels by enforcing a small number of domain-specific constraints over the algorithms’ outputs and finds that algorithms supervised with constraints achieve high accuracies with only a small number of labels, or with no labels at all in some cases.
Weakly-Supervised Deep Learning of Heat Transport via Physics Informed Loss
TLDR
This work demonstrates that knowledge of the partial differential equations governing a system can be encoded into the loss function of a neural network via an appropriately chosen convolutional kernel, and demonstrates that this method can be used to speed up exact calculation of the solution to the differential equations via finite difference.
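
To make the convolutional-kernel idea above concrete, here is a hedged sketch (not the cited paper's code) of a physics-informed loss for the steady-state heat equation ∇²u = 0: a fixed 5-point Laplacian stencil applied to the network's predicted temperature field gives the finite-difference PDE residual, whose mean square becomes the loss. The field shape and the omission of boundary-condition terms are simplifying assumptions.

import torch
import torch.nn.functional as F

# Fixed (non-trainable) 5-point Laplacian stencil as a convolution kernel.
laplacian = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def heat_residual_loss(temperature):
    # temperature: (N, 1, H, W) predicted field; residual is computed on interior points only.
    residual = F.conv2d(temperature, laplacian)
    return residual.pow(2).mean()
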
Structured Prediction with Adversarial Constraint Learning
TLDR
An adversarial constraint learning framework is proposed in which invariants are automatically extracted from data; users need only provide a black-box simulator that generates valid system outputs.
A Domain Knowledge—Enabled Hybrid Semi-Supervision Learning Method
  • Yifu Wu, Jin Wei
  • Computer Science
    2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
  • 2019
TLDR
A novel computing method, called hybrid semi-supervision machine learning, is proposed that exploits loose domain knowledge to produce accurate results even in the presence of limited labeled data.
Domain Constraint Approximation based Semi Supervision
TLDR
This paper proposes a fuzzy domain-constraint-based framework that relaxes the requirements of traditional constraint learning and enhances model quality for semi-supervision; simulation results show the effectiveness of the design.
Augmenting Deep Learning with Relational Knowledge from Markov Logic Networks
TLDR
This paper develops a novel model that combines the best of both worlds, namely, the scalable learning capabilities of DNNs and symbolic knowledge specified in MLNs, and outperforms purely-MLN or purely-DNN based models in several different problem domains.
Fidelity-Weighted Learning
TLDR
Fidelity-weighted learning (FWL) is proposed, a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data that makes better use of strong and weak labels, and leads to better task-dependent data representations.
Train Neural Network by Embedding Space Probabilistic Constraint
TLDR
A novel algorithm is designed that jointly optimizes the output probability distribution on a clustered embedding space to make neural networks draw effective decision boundaries, and it is shown empirically that the model can converge to a higher accuracy than other state-of-the-art semi-supervised learning models with fewer high-quality labeled training examples.
Snorkel: Rapid Training Data Creation with Weak Supervision
TLDR
Snorkel is a first-of-its-kind system that enables users to train state-of-the-art models without hand-labeling any training data and proposes an optimizer for automating tradeoff decisions that gives up to a 1.8× speedup per pipeline execution.
A Semantic Loss Function for Deep Learning Under Weak Supervision
TLDR
A semantic loss function is defined that bridges between neural output vectors and logical constraints and significantly increases the ability of the neural network to predict structured objects under weak supervision, such as rankings and shortest paths.
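
As an illustrative instance of such a semantic loss (a sketch assuming independent sigmoid outputs, not the paper's general formulation over arbitrary logical constraints), the "exactly one of n outputs is true" constraint can be scored as -log Σ_i p_i Π_{j≠i} (1 - p_j):

import torch

def exactly_one_semantic_loss(probs, eps=1e-8):
    # probs: (batch, n) independent Bernoulli probabilities in (0, 1).
    one_minus = 1.0 - probs
    prod_all_off = one_minus.prod(dim=1, keepdim=True)    # prod_j (1 - p_j)
    per_state = probs * prod_all_off / (one_minus + eps)  # p_i * prod_{j != i} (1 - p_j)
    return -torch.log(per_state.sum(dim=1) + eps).mean()

A numerically stabler log-space version would be used in practice; this form is kept short for clarity.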

References

SHOWING 1-10 OF 52 REFERENCES
Guiding Semi-Supervision with Constraint-Driven Learning
TLDR
The experimental results presented in the information extraction domain demonstrate that applying constraints helps the model to generate better feedback during learning, and hence the framework allows for high performance learning with significantly less training data than was possible before on these tasks.
From Group to Individual Labels Using Deep Features
TLDR
This paper proposes a new objective function that encourages smoothness of inferred instance-level labels based on instance-level similarity, while at the same time respecting group-level label constraints, and applies this approach to the problem of predicting labels for sentences given labels for reviews, using a convolutional neural network to infer sentence similarity.
Building high-level features using large scale unsupervised learning
TLDR
Contrary to what appears to be a widely-held intuition, the experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not.
Data Programming: Creating Large Training Sets, Quickly
TLDR
A paradigm for the programmatic creation of training sets called data programming is proposed in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict.
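
To make "labeling functions" concrete, here is a hypothetical sketch: each heuristic votes for a binary label or abstains, and the votes are combined by simple majority, whereas the paper actually learns a generative model over the functions' accuracies and correlations. The heuristics and keywords below are invented for illustration.

ABSTAIN = -1

def lf_mentions_physics(text):             # hypothetical keyword heuristic
    return 1 if "gravity" in text or "free fall" in text else ABSTAIN

def lf_mentions_finance(text):             # hypothetical keyword heuristic
    return 0 if "stock" in text else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_physics, lf_mentions_finance]

def weak_label(text):
    # Collect non-abstaining votes and take the majority; abstain if none fired.
    votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN
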
Learning from measurements in exponential families
TLDR
A Bayesian decision-theoretic framework is presented, which allows us to both integrate diverse measurements and choose new measurements to make, and a variational inference algorithm is used, which exploits exponential family duality.
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Fast Training of Triplet-Based Deep Binary Embedding Networks
TLDR
This paper proposes to formulate high-order binary code learning as a multi-label classification problem by explicitly separating learning into two interleaved stages: the original image is mapped to compact binary codes via carefully designed deep convolutional neural networks, and the hashing function fitting can be solved by training binary CNN classifiers.
Dropout: a simple way to prevent neural networks from overfitting
TLDR
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Adam: A Method for Stochastic Optimization
TLDR
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
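
For reference, a minimal NumPy sketch of one Adam update with bias-corrected first and second moment estimates; the default hyperparameters follow the paper, while the function name and interface are assumptions.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias corrections for step t >= 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
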
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.