
Self Supervised Boosting

@inproceedings{Welling2002SelfSB,
  title={Self Supervised Boosting},
  author={Max Welling and Richard S. Zemel and Geoffrey E. Hinton},
  booktitle={NIPS},
  year={2002}
}
Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative…
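The round structure described in the abstract translates into a compact sketch. Below is a minimal, illustrative Python rendering of one boosting round, assuming the random field is an additive model over learned features; `AdditiveRandomField`, `sample_from_model`, and `fit_weak_learner` are hypothetical names standing in for the paper's sampler and weak learner, not its actual code.

import numpy as np

# Hypothetical sketch of one self-supervised boosting round: sample negatives
# from the current model, train a feature to tell data from negatives, and
# add that feature to the additive random field.

class AdditiveRandomField:
    def __init__(self):
        self.features = []  # list of (weight, feature_fn) pairs

    def log_density(self, x):
        # Unnormalized log-density: a weighted sum of feature responses.
        return sum(a * f(x) for a, f in self.features)

def boosting_round(model, data, sample_from_model, fit_weak_learner, step=0.1):
    # 1. Draw an equal-sized sample of "negative examples" from the model.
    negatives = sample_from_model(model, n=len(data))

    # 2. Train a new feature to discriminate data (+1) from negatives (-1).
    X = np.concatenate([data, negatives])
    y = np.concatenate([np.ones(len(data)), -np.ones(len(negatives))])
    feature = fit_weak_learner(X, y)

    # 3. Add the feature; a fixed small weight stands in for the weight fit
    #    one would actually perform on the discrimination objective.
    model.features.append((step, feature))
    return model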
Citations

AdaGAN: Boosting Generative Models
An iterative procedure, called AdaGAN, is proposed, where at every step the authors add a new component into a mixture model by running a GAN algorithm on a re-weighted sample, inspired by boosting algorithms.
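The reweighting idea reads naturally as pseudocode; here is a schematic Python sketch under assumed helpers `train_gan` and `mixture_density` (both hypothetical), with a simplified reweighting rule rather than the paper's exact formula.

import numpy as np

# Schematic AdaGAN-style round: upweight points the current mixture covers
# poorly, fit a new generator on the reweighted sample, and mix it in.

def adagan_round(mixture, data, mixture_density, train_gan, beta=0.3, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # Points with low density under the current mixture get larger weights.
    w = 1.0 / (mixture_density(mixture, data) + 1e-8)
    w /= w.sum()
    # Resample the training set according to the new weights.
    idx = rng.choice(len(data), size=len(data), p=w)
    new_generator = train_gan(data[idx])
    # Shrink old component weights and append the new component.
    mixture = [(a * (1.0 - beta), g) for a, g in mixture]
    mixture.append((beta, new_generator))
    return mixture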
An Optimization Framework for Combining Ensembles of Classifiers and Clusterers with Applications to Nontransductive Semisupervised Learning and Transfer Learning
A general optimization framework is described that takes as input class-membership estimates from existing classifiers learned on previously encountered "source" data, as well as a similarity matrix from a cluster ensemble operating solely on the target data to be classified, and yields a consensus labeling of the target data.
Applying Boosting Techniques to the training of RBMs and VAEs
Boosting algorithms have shown much success in the realm of supervised learning. As a natural next step, various papers have presented boosting-style algorithms for the unsupervised problem of…
Boosted Generative Models
A novel approach uses unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes; the ensemble can also include discriminative models trained to distinguish real data from model-generated data.
On better training the infinite restricted Boltzmann machines
Experimental results indicate that the proposed training strategy can greatly accelerate learning and enhance the generalization ability of iRBMs.
Learning Generative Models via Discriminative Approaches
  • Zhuowen Tu
  • Computer Science
  • 2007 IEEE Conference on Computer Vision and Pattern Recognition
  • 2007
A new learning framework is proposed that progressively learns a target generative distribution through discriminative approaches, improving both the modeling capability of discriminative models and their robustness.
Towards Resisting Large Data Variations via Introspective Learning
This paper proposes a principled approach to train networks with significantly improved resistance to large variations between training and testing data by embedding a learnable transformation module into the introspective network, which is a convolutional neural network (CNN) classifier empowered with generative capabilities.
Deep Learning
Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Introspective Neural Networks for Generative Modeling
A generative model built from progressively learned deep convolutional neural networks is developed, capable of "introspection" in the sense of being able to self-evaluate the difference between its generated samples and the given training data.
Boosting multiplicative model combination
In this paper we define a new boosting-type algorithm for multiplicative model combination using as loss function the Hyvärinen scoring rule. In particular, we focus on density estimation problems…
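For reference, the Hyvärinen scoring rule for a density p at a point x in R^d can be written as follows (a standard statement, not quoted from this paper):

S(x, p) = \sum_{i=1}^{d} \left[ \frac{\partial^2 \log p(x)}{\partial x_i^2} + \frac{1}{2} \left( \frac{\partial \log p(x)}{\partial x_i} \right)^{2} \right]

Because it depends on p only through derivatives of \log p(x), any normalizing constant cancels, which makes it a natural loss for multiplicative (unnormalized) model combinations.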

References

Improved Boosting Algorithms using Confidence-Rated Predictions
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a…
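As a concrete reference point (standard material from that line of work, stated here from memory rather than quoted): for a real-valued hypothesis h_t with range [-1, +1] and current example distribution D_t, the confidence-rated hypothesis weight can be chosen analytically as

\alpha_t = \frac{1}{2} \ln \frac{1 + r_t}{1 - r_t}, \qquad r_t = \sum_i D_t(i)\, y_i\, h_t(x_i).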
Greedy function approximation: A gradient boosting machine.
Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions…
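The function-space view reads naturally as code; below is a minimal Python sketch for squared-error loss, where the negative functional gradient at each stage is simply the residual, and `fit_base_learner` is a hypothetical regression routine returning a callable predictor.

import numpy as np

# Minimal gradient boosting for squared-error loss: each stage fits a base
# learner to the residuals (the negative functional gradient of the loss)
# and adds it to the ensemble with a small learning rate.

def gradient_boost(X, y, fit_base_learner, n_stages=100, lr=0.1):
    F = np.zeros(len(y))      # current ensemble prediction, F_0 = 0
    ensemble = []
    for _ in range(n_stages):
        residuals = y - F     # negative gradient of (1/2)(y - F)^2 w.r.t. F
        h = fit_base_learner(X, residuals)
        F = F + lr * h(X)     # stagewise additive update in function space
        ensemble.append(h)
    return ensemble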
Inducing Features of Random Fields
The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated.
Boosting Algorithms as Gradient Descent
Following previous theoretical results bounding the generalization performance of convex combinations of classifiers in terms of general cost functions of the margin, a new algorithm (DOOM II) is presented for performing a gradient descent optimization of such cost functions.
Discussion of the Paper "Additive Logistic Regression: A Statistical View of Boosting"
The main and important contribution of this paper is in establishing a connection between boosting, a newcomer to the statistics scene, and additive models. One of the main properties of boosting…
Boosting and Maximum Likelihood for Exponential Models
We derive an equivalence between AdaBoost and the dual of a convex optimization problem, showing that the only difference between minimizing the exponential loss used by AdaBoost and maximum…
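For orientation, AdaBoost's exponential loss over examples (x_i, y_i) with y_i \in \{-1, +1\} and additive score F(x) = \sum_t \alpha_t h_t(x) is

L_{\exp}(F) = \sum_i \exp\bigl(-y_i\, F(x_i)\bigr),

which is the quantity whose minimization the paper relates to maximum likelihood for exponential models.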
Unsupervised Learning of Distributions of Binary Vectors Using 2-Layer Networks
It is shown that arbitrary distributions of binary vectors can be approximated by the combination model; it is further shown how the weight vectors in the model can be interpreted as high-order correlation patterns among the input bits, and how the combination machine can be used as a mechanism for detecting these patterns.
Training Products of Experts by Minimizing Contrastive Divergence
A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary, but fitting such a model is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule.
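A minimal sketch of the contrastive-divergence idea, using a binary restricted Boltzmann machine (a simple PoE) as the example; the shapes, the omitted bias terms, and the fixed learning rate are simplifications of mine, not the paper's setup.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One CD-1 update for a binary RBM with weight matrix W (visible x hidden).
# The log-likelihood gradient is approximated by the difference between
# data-driven correlations and correlations after one Gibbs step, which
# avoids the intractable renormalization (partition function) term.

def cd1_step(W, v0, rng, lr=0.01):
    h0_prob = sigmoid(v0 @ W)                         # hidden probs from data
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0  # sampled hidden states
    v1_prob = sigmoid(h0 @ W.T)                       # one-step reconstruction
    h1_prob = sigmoid(v1_prob @ W)                    # hidden probs at reconstruction
    # Positive phase minus negative phase, averaged over the batch.
    grad = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    return W + lr * grad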
Boosting Density Estimation
This work applies gradient-based boosting methodology to the unsupervised learning problem of density estimation, shows convergence properties of the algorithm, and proves that a "strength of weak learnability" property applies to this problem.
Minimax Entropy Principle and Its Application to Texture Modeling
The minimax entropy principle is applied to texture modeling, where a novel Markov random field model, called FRAME, is derived, and encouraging results are obtained in experiments on a variety of texture images.
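The "max" half of the principle is ordinary maximum entropy: among all densities matching the observed statistics \hat{\mu}_k of chosen filters \phi_k, pick the most entropic one, which has a Gibbs form; the "min" half then selects the filters that minimize that entropy. Schematically (notation mine, not FRAME's):

p^{*} = \arg\max_{p} H(p) \quad \text{s.t.} \quad \mathbb{E}_{p}[\phi_k(x)] = \hat{\mu}_k,\; k = 1, \dots, K,
\qquad\Longrightarrow\qquad
p^{*}(x) \propto \exp\Bigl(\sum_{k=1}^{K} \lambda_k\, \phi_k(x)\Bigr).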