Latent regression Bayesian network for data representation

@inproceedings{Nie2016LatentRB,
  title={Latent regression Bayesian network for data representation},
  author={Siqi Nie and Yue Zhao and Qiang Ji},
  booktitle={2016 23rd International Conference on Pattern Recognition (ICPR)},
  year={2016},
  pages={3494--3499}
}
Restricted Boltzmann machines (RBMs) are widely used for data representation and feature learning in various machine learning tasks. The undirected structure of an RBM allows inference to be performed efficiently, because the latent variables are conditionally independent of each other given the visible variables. However, we believe the correlations among latent variables are crucial for faithful data representation. Driven by this idea, we propose a counterpart of RBMs, namely the latent regression Bayesian network (LRBN), whose directed structure retains the dependencies among latent variables given the visible variables.
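To make the contrast concrete, here is a brief LaTeX sketch of the two posteriors; the parameterization (weights W, biases b and c, logistic function σ) is the standard RBM convention, assumed for illustration rather than taken from the paper.

% RBM: the bipartite undirected structure makes the posterior factorize,
% so inference is exact and efficient.
\[
p(\mathbf{h}\mid\mathbf{v}) \;=\; \prod_j p(h_j \mid \mathbf{v}),
\qquad
p(h_j = 1 \mid \mathbf{v}) \;=\; \sigma\!\Big(c_j + \sum_i W_{ij}\, v_i\Big).
\]
% LRBN: directed edges from latent to visible variables introduce
% explaining-away, so the posterior no longer factorizes:
\[
p(\mathbf{h}\mid\mathbf{v}) \;\propto\; p(\mathbf{v}\mid\mathbf{h})\,p(\mathbf{h})
\;\neq\; \prod_j p(h_j \mid \mathbf{v}).
\]

Citations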
Facial Action Unit Recognition Augmented by Their Dependencies
TLDR
Experimental results on three benchmark databases demonstrate that the proposed approaches can successfully capture complex AU relationships, and that the expression labels available only during training are beneficial for AU recognition during testing.
Facial Action Unit Recognition and Intensity Estimation Enhanced Through Label Dependencies
TLDR
The results demonstrate that the proposed approaches faithfully model the complex and globally inherent AU dependencies, and that the expression labels available only during training can boost the estimation of AU dependencies for both AU recognition and intensity estimation.
Probabilistic spiking neural networks: Supervised, unsupervised and adversarial trainings
Posed and Spontaneous Expression Distinction Using Latent Regression Bayesian Networks
TLDR
This work constructs several latent regression Bayesian networks to capture spatial patterns from spontaneous and posed facial expressions given expression-related factors and conducts experiments to showcase the superiority of the proposed approach in both modeling spatial patterns and classifying expressions as either posed or spontaneous.

References

Showing 1-10 of 27 references
On the quantitative analysis of deep belief networks
TLDR
It is shown that Annealed Importance Sampling (AIS) can be used to efficiently estimate the partition function of an RBM, and a novel AIS scheme for comparing RBMs with different architectures is presented.
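For reference, the standard AIS construction (not specific to this paper) estimates a ratio of partition functions from annealing runs between a tractable base distribution and the target:

% Geometric path between unnormalized base f_0 and target f_K, with
% inverse temperatures 0 = beta_0 < beta_1 < ... < beta_K = 1:
\[
f_k(\mathbf{x}) \;=\; f_0(\mathbf{x})^{1-\beta_k}\, f_K(\mathbf{x})^{\beta_k}.
\]
% A run samples x_0 from the base model, applies a transition operator
% targeting each f_k in turn, and accumulates importance weights; the
% weight is an unbiased estimator of the partition-function ratio:
\[
w \;=\; \prod_{k=1}^{K} \frac{f_k(\mathbf{x}_{k-1})}{f_{k-1}(\mathbf{x}_{k-1})},
\qquad
\mathbb{E}[w] \;=\; \frac{Z_K}{Z_0}.
\]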
Neural Variational Inference and Learning in Belief Networks
TLDR
This work proposes a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior and shows that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.
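In standard variational notation (assumed here, not quoted from the paper), the feedforward inference network q_\phi(h|x) is trained jointly with the generative model p_\theta by ascending a variational lower bound, with score-function (REINFORCE-style) gradients because the latent variables are discrete; variance-reducing baselines make this practical.

\[
\mathcal{L}(x) \;=\; \mathbb{E}_{q_\phi(h\mid x)}\!\big[\log p_\theta(x,h) - \log q_\phi(h\mid x)\big] \;\le\; \log p_\theta(x),
\]
\[
\nabla_\phi \mathcal{L}(x) \;=\; \mathbb{E}_{q_\phi(h\mid x)}\!\big[\big(\log p_\theta(x,h) - \log q_\phi(h\mid x)\big)\,\nabla_\phi \log q_\phi(h\mid x)\big].
\]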
Auto-Encoding Variational Bayes
TLDR
A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
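A minimal NumPy sketch of the reparameterization idea at the heart of this algorithm; the Gaussian posterior and the names mu and log_var are conventional illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Draw z ~ N(mu, sigma^2) as a deterministic, differentiable function
    # of (mu, log_var) plus parameter-free noise eps ~ N(0, I), so gradients
    # of a Monte Carlo objective can flow through mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Example: one sample of a 4-dimensional latent code with sigma = 0.5.
mu = np.zeros(4)
log_var = np.log(np.full(4, 0.25))
z = reparameterize(mu, log_var)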
Bounding the Test Log-Likelihood of Generative Models
TLDR
A more efficient estimator is proposed and proved to give a lower bound on the true test log-likelihood that becomes unbiased as the number of generated samples goes to infinity, although one that incorporates the effect of poor mixing.
Stochastic Spectral Descent for Discrete Graphical Models
TLDR
A new, largely tuning-free algorithm built on novel majorization bounds derived from the Schatten-∞ norm; empirically, it leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Efficient Learning of Deep Boltzmann Machines
We present a new approximate inference algorithm for Deep Boltzmann Machines (DBMs), a generative model with many layers of hidden variables. The algorithm learns a separate “recognition” model that is used to quickly initialize the values of the latent variables in all hidden layers.
Training Products of Experts by Minimizing Contrastive Divergence
TLDR
A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary; because it is hard even to approximate the derivatives of the renormalization term in the combination rule, a PoE is instead trained by minimizing contrastive divergence.
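A compact NumPy sketch of one CD-1 update for a binary RBM (the best-known special case of a PoE trained this way); the sizes, learning rate, and initialization are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    # Positive phase: exact posterior thanks to the bipartite structure.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step (the "CD-1" approximation).
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Approximate gradient of the log-likelihood: data statistics
    # minus one-step reconstruction statistics.
    W += lr * (v0[:, None] * ph0[None, :] - pv1[:, None] * ph1[None, :])
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)

# Example: 6 visible units, 3 hidden units, a single binary training vector.
W = 0.01 * rng.standard_normal((6, 3))
b, c = np.zeros(6), np.zeros(3)
v = rng.integers(0, 2, 6).astype(float)
cd1_update(v, W, b, c)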
Deep Mixtures of Factor Analysers
TLDR
This paper presents a greedy layer-wise learning algorithm for Deep Mixtures of Factor Analysers (DMFAs) and demonstrates empirically that DMFAs learn better density models than both MFAs and two types of Restricted Boltzmann Machine on a wide variety of datasets.
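For context, the single-layer building block is the standard mixture-of-factor-analysers density (written here in generic notation):

\[
p(\mathbf{x}) \;=\; \sum_{c=1}^{C} \pi_c\,
\mathcal{N}\!\big(\mathbf{x};\ \boldsymbol{\mu}_c,\ \Lambda_c \Lambda_c^{\top} + \Psi\big),
\]
% with mixing weights pi_c, component means mu_c, low-rank factor loadings
% Lambda_c, and a diagonal noise covariance Psi. A DMFA replaces each
% component's prior over the low-dimensional factors with another MFA,
% trained greedily layer by layer.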
Connectionist Learning of Belief Networks
  • R. Neal
  • Computer Science
  • Artif. Intell.
  • 1992
TLDR
The “Gibbs sampling” simulation procedure for “sigmoid” and “noisy-OR” varieties of probabilistic belief networks can support maximum-likelihood learning from empirical data through local gradient ascent.
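The “sigmoid” variety referred to here defines each binary unit's conditional in the familiar logistic form (standard notation, assumed for illustration):

\[
p\big(s_i = 1 \mid \mathrm{pa}(s_i)\big) \;=\; \sigma\!\Big(b_i + \sum_{j \in \mathrm{pa}(i)} w_{ij}\, s_j\Big),
\qquad
\sigma(x) \;=\; \frac{1}{1 + e^{-x}}.
\]
% Given observed data, Gibbs sampling over the unobserved units supplies the
% statistics needed for local gradient ascent on the log-likelihood.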
A Fast Learning Algorithm for Deep Belief Nets
TLDR
A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
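The resulting deep belief net mixes directed and undirected structure; its joint distribution takes the standard form

\[
p\big(\mathbf{v}, \mathbf{h}^1, \ldots, \mathbf{h}^L\big)
\;=\;
p\big(\mathbf{h}^{L-1}, \mathbf{h}^{L}\big)
\prod_{l=1}^{L-1} p\big(\mathbf{h}^{l-1} \mid \mathbf{h}^{l}\big),
\]
% where h^0 = v, the top pair (h^{L-1}, h^L) is the undirected associative
% memory (an RBM), and each directed layer below it is learned greedily,
% one layer at a time.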