#### Publications


- Kyunghyun Cho, Tapani Raiko, Alexander Ilin
- Neural Computation
- 2013

Restricted Boltzmann machines (RBMs) are often used as building blocks in greedy learning of deep networks. However, training this simple model can be laborious. Traditional learning algorithms often converge only with the right choice of metaparameters that specify, for example, learning rate scheduling and the scale of the initial weights. They are also…
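
As a concrete reference point for why those metaparameters matter, here is a minimal CD-1 update for a Bernoulli RBM in NumPy. The learning rate `lr` and the initial weight scale `init_scale` are exactly the knobs the abstract refers to; all names and sizes are illustrative, and this is the plain baseline rather than the enhanced gradient the paper develops.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr):
    """One CD-1 update for a Bernoulli RBM; v0 is a (batch, n_vis) array."""
    ph0 = sigmoid(v0 @ W + c)                        # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                      # one Gibbs step back
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n          # CD-1 gradient estimate
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

n_vis, n_hid = 784, 128
lr = 0.05            # learning rate (scheduling left out)
init_scale = 0.01    # the "scale of the initial weights" from the abstract
W = init_scale * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)
c = np.zeros(n_hid)
```

Whether this loop converges at all hinges on `lr` and `init_scale`, which is the sensitivity the paper sets out to reduce.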

- Alexander Ilin, Tapani Raiko
- Journal of Machine Learning Research
- 2010

Principal component analysis (PCA) is a classical data analysis technique that finds linear transformations of data that retain the maximal amount of variance. We study a case where some of the data values are missing, and show that this problem has many features which are usually associated with nonlinear models, such as overfitting and bad locally optimal…
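
The missing-value setting can be made concrete with a minimal alternating least-squares sketch that fits a low-rank model Y ≈ A S over the observed entries only. The function name and the NaN-as-missing convention are assumptions for illustration; this unregularized, mean-free version is prone to exactly the overfitting the abstract warns about.

```python
import numpy as np

def pca_missing(Y, n_comp, n_iter=50, seed=0):
    """Fit Y ~= A @ S over observed entries only; NaN marks missing.

    Plain alternating least squares with no regularization and no mean
    term; assumes every row and column has enough observed values."""
    rng = np.random.default_rng(seed)
    M = ~np.isnan(Y)                      # observation mask
    Yf = np.where(M, Y, 0.0)
    d, n = Y.shape
    A = rng.standard_normal((d, n_comp))
    S = rng.standard_normal((n_comp, n))
    for _ in range(n_iter):
        for j in range(n):                # refit each column of S
            m = M[:, j]
            S[:, j] = np.linalg.lstsq(A[m], Yf[m, j], rcond=None)[0]
        for i in range(d):                # refit each row of A
            m = M[i, :]
            A[i] = np.linalg.lstsq(S[:, m].T, Yf[i, m], rcond=None)[0]
    return A, S
```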

- Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, Tapani Raiko
- arXiv
- 2015

We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on the Ladder network proposed by Valpola (2015), which we extend by combining the…
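
The core training idea, a single backpropagation pass through the sum of a supervised and an unsupervised cost, can be sketched in a few lines of PyTorch. The toy model below has one encoder layer and no lateral connections, so it is only a schematic of the joint objective, not the Ladder architecture itself; all module and argument names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLadder(nn.Module):
    """Schematic only: one encoder, a classifier head, a denoising decoder."""
    def __init__(self, d_in=784, d_hid=256, n_cls=10):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hid)
        self.cls = nn.Linear(d_hid, n_cls)
        self.dec = nn.Linear(d_hid, d_in)

    def forward(self, x, noise_std=0.3):
        x_noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
        h = torch.relu(self.enc(x_noisy))
        return self.cls(h), self.dec(h)

model = TinyLadder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x_lab, y_lab, x_unlab, recon_weight=1.0):
    logits, _ = model(x_lab)
    sup = F.cross_entropy(logits, y_lab)        # supervised cost (labeled data)
    _, x_rec = model(x_unlab)
    unsup = F.mse_loss(x_rec, x_unlab)          # denoising cost (unlabeled data)
    loss = sup + recon_weight * unsup           # one sum, one backprop pass
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```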

- Tapani Raiko, Alexander Ilin, Juha Karhunen
- ECML
- 2007

Principal component analysis (PCA) is a well-known classical data analysis technique. There are a number of algorithms for solving the problem, some scaling better than others to problems with high dimensionality. They also differ in their ability to handle missing values in the data. We study a case where the data are high-dimensional and a majority of…
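
At this scale the relevant trick is that the cost and its gradients can be computed from the observed entries alone. A toy SGD-style pass over an (i, j, value) list, with an entirely hypothetical signature:

```python
import numpy as np

def sgd_pass(A, S, rows, cols, vals, lr=0.01):
    """One SGD pass over the observed entries (i, j, y_ij) of a huge,
    mostly-missing matrix Y ~= A @ S. Work is proportional to the number
    of observed values, never to the full d * n size."""
    for i, j, y in zip(rows, cols, vals):
        e = float(A[i] @ S[:, j]) - y          # residual on one observed entry
        gA, gS = e * S[:, j], e * A[i]         # grads from pre-update values
        A[i] -= lr * gA
        S[:, j] -= lr * gS
```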

Variational autoencoders are a powerful framework for unsupervised learning. However, previous work has been restricted to shallow models with one or two layers of fully factorized stochastic latent variables, limiting the flexibility of the latent representation. We propose three advances in training algorithms of variational autoencoders, for the first…
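
For contrast, the shallow, fully factorized baseline the abstract describes fits in a few lines of PyTorch: one stochastic layer trained by the reparameterized negative ELBO. Names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowVAE(nn.Module):
    """One layer of fully factorized Gaussian latents; the baseline case."""
    def __init__(self, d_in=784, d_z=32):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        x_logits = self.dec(z)
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (recon + kl) / x.shape[0]      # negative ELBO per example
```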

- Kyunghyun Cho, Tapani Raiko, Alexander Ilin
- ICML
- 2011

Boltzmann machines are often used as building blocks in greedy learning of deep networks. However, training even a simplified model, known as a restricted Boltzmann machine (RBM), can be extremely laborious: traditional learning algorithms often converge only with the right choice of the learning rate scheduling and the scale of the initial weights. They are…
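
The sketch below shows only the generic shape of adaptive rate selection: try candidate rates on a copy of the parameters and keep the one that scores best. The paper's actual criterion is likelihood-based and tied to its enhanced gradient; this stand-in takes a caller-supplied score, and every name here is hypothetical.

```python
import numpy as np

def pick_rate(update_fn, score_fn, params, candidates=(0.1, 0.01, 0.001)):
    """Try each candidate rate on a copy of the parameters, keep the best.

    update_fn(trial, lr) applies one training update in place;
    score_fn(trial) returns a quality estimate (higher is better)."""
    best_rate, best_score = None, -np.inf
    for lr in candidates:
        trial = {k: v.copy() for k, v in params.items()}
        update_fn(trial, lr)
        score = score_fn(trial)
        if score > best_score:
            best_rate, best_score = lr, score
    return best_rate
```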

- Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh
- arXiv
- 2014

Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potential benefits when compared to deterministic MLP networks. (1) They allow learning one-to-many mappings. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has…
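
A common way to make such units trainable is to sample in the forward pass and substitute a surrogate derivative in the backward pass. Below is a straight-through-style sketch in PyTorch, one of several estimators one could plug in and not necessarily the paper's exact proposal.

```python
import torch

class BinaryStochastic(torch.autograd.Function):
    """Forward: sample h ~ Bernoulli(sigmoid(a)).
    Backward: pretend the sample was sigmoid(a), i.e. propagate the
    sigmoid derivative and ignore the sampling step."""
    @staticmethod
    def forward(ctx, a):
        p = torch.sigmoid(a)
        ctx.save_for_backward(p)
        return (torch.rand_like(p) < p).float()

    @staticmethod
    def backward(ctx, grad_out):
        (p,) = ctx.saved_tensors
        return grad_out * p * (1 - p)

binary_stochastic = BinaryStochastic.apply
# Usage inside an MLP layer: h = binary_stochastic(x @ W + b)
```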

- Kyunghyun Cho, Alexander Ilin, Tapani Raiko
- ICANN
- 2012

In this paper, we study a Tikhonov-type regularization for restricted Boltzmann machines (RBM). We present two alternative formulations of the Tikhonov-type regularization which encourage an RBM to learn a smoother probability distribution. Both formulations turn out to be combinations of the widely used weight-decay and sparsity regularization. We…
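
Since both formulations reduce to weight decay plus a sparsity penalty, a single regularized gradient step conveys the shape of the method. All coefficients and the sparsity-target formulation are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def regularized_update(W, cd_grad, hid_probs, lr=0.01,
                       weight_decay=1e-4, sparsity_target=0.1,
                       sparsity_cost=1e-3):
    """CD gradient plus the two ingredients the abstract identifies:
    weight decay on W and a penalty pushing the mean hidden activity
    toward a target level."""
    q = hid_probs.mean(axis=0)                    # mean activity per hidden unit
    sparsity_grad = sparsity_cost * (q - sparsity_target)
    W += lr * (cd_grad - weight_decay * W - sparsity_grad[np.newaxis, :])
    return W
```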

- Kristian Kersting, Luc De Raedt, Tapani Raiko
- Journal of Artificial Intelligence Research
- 2006

Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter… (More)
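
To ground the three inference problems, here are the standard propositional versions: the forward algorithm for evaluation and Viterbi for the most likely hidden state sequence (parameter estimation is then EM/Baum-Welch over the same quantities). LOHMMs generalize the state and observation alphabets to logical atoms, which this sketch does not attempt.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Evaluation: log p(obs) by the forward algorithm.
    pi: (S,) initial probs, A: (S, S) transitions, B: (S, O) emissions."""
    alpha = pi * B[:, obs[0]]
    z = alpha.sum()
    alpha = alpha / z
    logp = np.log(z)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        z = alpha.sum()
        alpha = alpha / z                  # normalize to avoid underflow
        logp += np.log(z)
    return logp

def viterbi(pi, A, B, obs):
    """Most likely hidden state sequence, computed in log space."""
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logd[:, None] + np.log(A)          # (from_state, to_state)
        back.append(scores.argmax(axis=0))
        logd = scores.max(axis=0) + np.log(B[:, o])
    path = [int(logd.argmax())]
    for bp in reversed(back):                       # trace back pointers
        path.append(int(bp[path[-1]]))
    return path[::-1]
```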