# Self Supervised Boosting

@inproceedings{Welling2002SelfSB,
  title     = {Self Supervised Boosting},
  author    = {Max Welling and Richard S. Zemel and Geoffrey E. Hinton},
  booktitle = {NIPS},
  year      = {2002}
}

Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative…
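The core loop the abstract describes — sample "negative examples" from the current model, train a feature to discriminate real data from those negatives, and add that feature to the model — can be sketched very loosely in Python. This is a hypothetical toy on 1-D data, not the paper's implementation: it uses simple importance sampling where the paper uses MCMC, and a quadratic logistic feature as the weak learner; all function names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "data" whose density we model additively as exp(sum_k f_k(x)).
data = rng.normal(loc=2.0, scale=0.5, size=(200, 1))

def sample_model(features, n, proposal_scale=3.0):
    """Draw approximate negatives from the unnormalized density
    exp(sum_k f_k(x)) via self-normalized importance sampling from a
    broad Gaussian proposal (a stand-in for the MCMC the paper uses)."""
    proposal = rng.normal(0.0, proposal_scale, size=(10 * n, 1))
    log_w = sum(f(proposal) for f in features) if features else np.zeros(10 * n)
    w = np.exp(log_w - log_w.max())
    idx = rng.choice(10 * n, size=n, p=w / w.sum())
    return proposal[idx]

def fit_feature(pos, neg, lr=0.1, steps=200):
    """Train one logistic 'feature' f(x) = a*x + b*x^2 + c to separate
    data (label 1) from negatives (label 0); its log-odds is then
    added to the model, mirroring the discriminative boosting round."""
    X = np.vstack([pos, neg])
    phi = np.hstack([X, X**2, np.ones_like(X)])   # simple quadratic basis
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    w = np.zeros(3)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-phi @ w))
        w += lr * phi.T @ (y - p) / len(y)        # average log-likelihood gradient
    return lambda x: np.hstack([x, x**2, np.ones_like(x)]) @ w

features = []
for t in range(3):                                # a few boosting rounds
    neg = sample_model(features, len(data))       # stage 1: sample negatives
    features.append(fit_feature(data, neg))       # stage 2: train a feature

# After boosting, the model should score real data above fresh proposal draws.
score = lambda x: sum(f(x) for f in features)
assert score(data).mean() > score(sample_model([], len(data))).mean()
```

Each round's negatives come from the model built so far, so successive features are forced to correct the regions where the current density still differs from the data.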

#### 48 Citations

AdaGAN: Boosting Generative Models

- Computer Science, Mathematics
- NIPS
- 2017

An iterative procedure, called AdaGAN, is proposed, where at every step the authors add a new component into a mixture model by running a GAN algorithm on a re-weighted sample, inspired by boosting algorithms.

An Optimization Framework for Combining Ensembles of Classifiers and Clusterers with Applications to Nontransductive Semisupervised Learning and Transfer Learning

- Computer Science
- TKDD
- 2014

A general optimization framework that takes as input class membership estimates from existing classifiers learned on previously encountered "source" data, as well as a similarity matrix from a cluster ensemble operating solely on the target data to be classified, and yields a consensus labeling of the target data is described.

Applying Boosting Techniques to the training of RBMs and VAEs

- 2019

Boosting algorithms have shown much success in the realm of supervised learning. As a natural next step, various papers have presented boosting-style algorithms for the unsupervised problem of…

Boosted Generative Models

- Computer Science, Mathematics
- AAAI
- 2018

A novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes, which allows the ensemble to include discriminative models trained to distinguish real data from model-generated data.

On better training the infinite restricted Boltzmann machines

- Computer Science, Mathematics
- Machine Learning
- 2018

Experimental results indicate that the proposed training strategy can greatly accelerate learning and enhance generalization ability of iRBMs.

Learning Generative Models via Discriminative Approaches

- Computer Science
- 2007 IEEE Conference on Computer Vision and Pattern Recognition
- 2007

A new learning framework is proposed in this paper which progressively learns a target generative distribution through discriminative approaches, improving both the modeling capability of discriminative models and their robustness.

Towards Resisting Large Data Variations via Introspective Learning

- Computer Science
- 2018

This paper proposes a principled approach to train networks with significantly improved resistance to large variations between training and testing data by embedding a learnable transformation module into the introspective network, which is a convolutional neural network (CNN) classifier empowered with generative capabilities.

Deep Learning

- Medicine, Computer Science
- Nature
- 2015

Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.

Introspective Neural Networks for Generative Modeling

- Computer Science
- 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017

A generative model built from progressively learned deep convolutional neural networks is developed, capable of "introspection" in the sense of being able to self-evaluate the difference between its generated samples and the given training data.

Boosting multiplicative model combination

- Mathematics
- 2020

In this paper we define a new boosting-type algorithm for multiplicative model combination using as loss function the Hyvärinen scoring rule. In particular, we focus on density estimation problems…

#### References

Showing 1–10 of 12 references

Improved Boosting Algorithms using Confidence-Rated Predictions

- Mathematics, Computer Science
- COLT
- 1998

We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a…

Greedy function approximation: A gradient boosting machine.

- Mathematics
- 2001

Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions…

Inducing Features of Random Fields

- Computer Science
- IEEE Trans. Pattern Anal. Mach. Intell.
- 1997

The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated.

Boosting Algorithms as Gradient Descent

- Computer Science
- NIPS
- 1999

Following previous theoretical results bounding the generalization performance of convex combinations of classifiers in terms of general cost functions of the margin, a new algorithm (DOOM II) is presented for performing a gradient descent optimization of such cost functions.

Discussion of the Paper "Additive Logistic Regression: A Statistical View of Boosting"

- 2000

The main and important contribution of this paper is in establishing a connection between boosting, a newcomer to the statistics scene, and additive models. One of the main properties of boosting…

Boosting and Maximum Likelihood for Exponential Models

- Computer Science, Mathematics
- NIPS
- 2001

We derive an equivalence between AdaBoost and the dual of a convex optimization problem, showing that the only difference between minimizing the exponential loss used by AdaBoost and maximum…

Unsupervised Learning of Distributions of Binary Vectors Using 2-Layer Networks

- Mathematics, Computer Science
- NIPS
- 1991

It is shown that arbitrary distributions of binary vectors can be approximated by the combination model, that the weight vectors in the model can be interpreted as high-order correlation patterns among the input bits, and that the combination machine can be used as a mechanism for detecting these patterns.

Training Products of Experts by Minimizing Contrastive Divergence

- Mathematics, Computer Science
- Neural Computation
- 2002

A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary because it is hard even to approximate the derivatives of the renormalization term in the combination rule.

Boosting Density Estimation

- Computer Science, Mathematics
- NIPS
- 2002

This work applies gradient-based boosting methodology to the unsupervised learning problem of density estimation and shows convergence properties of the algorithm and proves that a strength of weak learnability property applies to this problem.

Minimax Entropy Principle and Its Application to Texture Modeling

- Mathematics, Computer Science
- Neural Computation
- 1997

The minimax entropy principle is applied to texture modeling, where a novel Markov random field model, called FRAME, is derived, and encouraging results are obtained in experiments on a variety of texture images.