
We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on the Ladder network proposed by Valpola [1], which we extend by combining…
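The combined objective described above can be sketched as a single scalar loss that sums a supervised cross-entropy term with an unsupervised reconstruction term (a minimal sketch only; the toy two-layer network, the 0.5 weighting, and all variable names are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))               # toy inputs
y = rng.integers(0, 2, size=8)            # toy labels

W = rng.normal(scale=0.1, size=(5, 2))    # shared encoder weights
V = rng.normal(scale=0.1, size=(2, 5))    # decoder weights for reconstruction

h = X @ W                                  # shared hidden representation
p = np.exp(h - h.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)          # softmax head for the supervised task
recon = h @ V                              # reconstruction head for the unsupervised task

supervised = -np.log(p[np.arange(len(y)), y]).mean()   # cross-entropy cost
unsupervised = ((recon - X) ** 2).mean()               # reconstruction cost
total = supervised + 0.5 * unsupervised                # one objective for backprop
```

Because both terms feed a single scalar, ordinary backpropagation trains the shared weights `W` on both tasks jointly, with no layer-wise pre-training stage.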

- Alexander Ilin, Tapani Raiko
- Journal of Machine Learning Research
- 2010

Principal component analysis (PCA) is a classical data analysis technique that finds linear transformations of data that retain the maximal amount of variance. We study a case where some of the data values are missing, and show that this problem has many features which are usually associated with nonlinear models, such as overfitting and bad locally optimal…
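One standard way to run PCA when values are missing, in the spirit of the problem studied above, is to alternate between a low-rank fit and re-imputation of the missing cells (a generic EM-style sketch, not the authors' algorithm; the rank 2 and iteration count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X_true = rng.normal(size=(20, 6))
missing = rng.random(X_true.shape) < 0.3        # ~30% of cells unobserved

filled = X_true.copy()
col_means = (np.where(missing, 0.0, filled).sum(axis=0)
             / np.maximum((~missing).sum(axis=0), 1))
filled[missing] = np.broadcast_to(col_means, filled.shape)[missing]  # mean-impute start

for _ in range(50):
    mu = filled.mean(axis=0)
    U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
    low_rank = (U[:, :2] * s[:2]) @ Vt[:2] + mu  # rank-2 reconstruction
    filled[missing] = low_rank[missing]          # refresh only the missing cells
```

With many missing values this loop can overfit the observed cells, which is exactly the nonlinear-model failure mode the abstract highlights; regularized or variational variants are then preferred.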

- Tapani Raiko, Alexander Ilin, Juha Karhunen
- ECML
- 2007

Principal component analysis (PCA) is a well-known classical data analysis technique. There are a number of algorithms for solving the problem, some scaling better than others to problems with high dimensionality. They also differ in their ability to handle missing values in the data. We study a case where the data are high-dimensional and a majority of the…
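For the high-dimensional, mostly-missing regime described above, one scalable approach is to fit the low-rank factors by gradient descent on the observed entries only, never forming imputed full matrices (an illustrative sketch; the rank, step size, and iteration count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 30, 10, 3
X = rng.normal(size=(n, k)) @ rng.normal(size=(d, k)).T   # low-rank ground truth
obs = rng.random((n, d)) < 0.2                            # only ~20% of cells observed

A = rng.normal(scale=0.1, size=(n, k))                    # row factors
B = rng.normal(scale=0.1, size=(d, k))                    # column factors
lr = 0.01

def loss():
    # squared error measured on observed cells only
    return (np.where(obs, A @ B.T - X, 0.0) ** 2).sum() / obs.sum()

before = loss()
for _ in range(500):
    E = np.where(obs, A @ B.T - X, 0.0)                   # residuals, observed cells
    A, B = A - lr * E @ B, B - lr * E.T @ A
after = loss()
```

The per-step cost scales with the number of observed entries rather than with n·d, which is what makes this family of methods attractive when a majority of the data matrix is missing.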

- Kyunghyun Cho, Tapani Raiko, Alexander Ilin
- Neural Computation
- 2013

Restricted Boltzmann machines (RBMs) are often used as building blocks in greedy learning of deep networks. However, training this simple model can be laborious. Traditional learning algorithms often converge only with the right choice of metaparameters that specify, for example, learning rate scheduling and the scale of the initial weights. They are also…
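A minimal contrastive-divergence (CD-1) training loop for a binary RBM makes the metaparameters the abstract refers to, the learning rate and the initial weight scale, visible as explicit knobs (a generic sketch of the traditional algorithm, not the paper's improved one):

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 4
W = rng.normal(scale=0.01, size=(n_vis, n_hid))   # initial weight scale: a metaparameter
b, c = np.zeros(n_vis), np.zeros(n_hid)           # visible and hidden biases
data = (rng.random((50, n_vis)) < 0.5).astype(float)
lr = 0.1                                          # learning rate: a metaparameter

for epoch in range(20):
    ph0 = sigmoid(data @ W + c)                   # hidden probabilities on data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                   # one-step reconstruction (CD-1)
    ph1 = sigmoid(pv1 @ W + c)
    W += lr * (data.T @ ph0 - pv1.T @ ph1) / len(data)
    b += lr * (data - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
```

A poor choice of `lr` or of the initial scale of `W` can stall or destabilize this loop, which is the sensitivity the abstract describes.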

- Kyunghyun Cho, Alexander Ilin, Tapani Raiko
- ICANN
- 2011

We propose a few remedies to improve training of Gaussian-Bernoulli restricted Boltzmann machines (GBRBMs), which is known to be difficult. Firstly, we use a different parameterization of the energy function, which allows for more intuitive interpretation of the parameters and facilitates learning. Secondly, we propose parallel tempering learning for GBRBMs.…
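The re-parameterized energy mentioned first can be written so that the visible quadratic term and the visible-hidden interaction share the same 1/σ² scaling; the sketch below assumes that form (the function and variable names are ours, not the paper's):

```python
import numpy as np

def gbrbm_energy(v, h, W, b, c, sigma2):
    """Energy of a Gaussian-Bernoulli RBM under a modified parameterization
    in which the visible units enter the interaction term as v / sigma^2."""
    quadratic = ((v - b) ** 2 / (2.0 * sigma2)).sum()
    interaction = (v / sigma2) @ W @ h
    return quadratic - interaction - c @ h

v = np.array([0.5, -1.0])            # real-valued visibles
h = np.array([1.0, 0.0, 1.0])        # binary hiddens
W = np.full((2, 3), 0.1)
b = np.zeros(2); c = np.zeros(3); sigma2 = np.ones(2)
E = gbrbm_energy(v, h, W, b, c, sigma2)
```

In this form the variances rescale both data-dependent terms uniformly, which is what makes the parameters easier to interpret and tune; parallel tempering then targets the separate problem of the Gibbs sampler mixing poorly.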

- Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh
- ArXiv
- 2014

Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potential benefits when compared to deterministic MLP networks. (1) They allow learning one-to-many mappings. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has been…
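Benefit (1) is easy to see in code: with binary hidden units sampled stochastically, repeated forward passes on the same input yield different outputs, i.e. a one-to-many mapping (an illustrative sketch; the layer sizes and names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 2))
x = np.ones(3)                                   # one fixed input

def stochastic_forward(x):
    p = sigmoid(x @ W1)                          # firing probabilities
    h = (rng.random(p.shape) < p).astype(float)  # sampled binary hidden units
    return h @ W2

# the same input maps to several distinct outputs across passes
outputs = {tuple(np.round(stochastic_forward(x), 6)) for _ in range(20)}
```

The catch is training: the sampling step is not differentiable, so such networks need gradient estimators (e.g. REINFORCE-style unbiased estimators or biased straight-through ones), which is the territory the abstract goes on to develop.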

- Kristian Kersting, Luc De Raedt, Tapani Raiko
- J. Artif. Intell. Res.
- 2006

Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov models to deal with sequences of structured symbols in the form of logical atoms, rather than flat characters. This note formally introduces LOHMMs and presents solutions to the three central inference problems for LOHMMs: evaluation, most likely hidden state sequence and parameter…
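Of the three inference problems, evaluation is the direct analogue of the HMM forward algorithm; the sketch below shows it for a flat-symbol HMM, the special case that LOHMMs generalize by replacing the integer observations with logical atoms (the toy probabilities are ours):

```python
import numpy as np

T  = np.array([[0.7, 0.3], [0.4, 0.6]])   # state-transition probabilities
E  = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
pi = np.array([0.5, 0.5])                 # initial state distribution
obs = [0, 1, 0]                           # observed symbol sequence

alpha = pi * E[:, obs[0]]                 # forward messages
for o in obs[1:]:
    alpha = (alpha @ T) * E[:, o]
likelihood = alpha.sum()                  # P(observations | model)
```

Most-likely-state-sequence inference swaps the sum for a max (Viterbi), and parameter estimation wraps this recursion in EM; LOHMMs solve the same three problems over atoms with logical variables.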

Variational autoencoders are powerful models for unsupervised learning. However, deep models with several layers of dependent stochastic variables are difficult to train, which limits the improvements obtained using these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative…
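The recursive correction in ladder-style inference can be summarized as a precision-weighted combination of a bottom-up Gaussian estimate with the top-down prior at each stochastic layer; the sketch below shows that single merge step (a sketch only; the function and argument names are ours):

```python
def precision_weighted_merge(mu_q, var_q, mu_p, var_p):
    """Combine a bottom-up estimate N(mu_q, var_q) with a top-down
    prior N(mu_p, var_p); the precisions (inverse variances) add."""
    precision = 1.0 / var_q + 1.0 / var_p
    var = 1.0 / precision
    mu = var * (mu_q / var_q + mu_p / var_p)
    return mu, var

# equal confidence in both sources: the merged mean lands halfway,
# and the merged variance is halved
mu, var = precision_weighted_merge(0.0, 1.0, 2.0, 1.0)
```

Applying this merge layer by layer is what lets the deeper stochastic layers stay informative instead of collapsing to their priors during training.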

- Antti Honkela, Tapani Raiko, Mikael Kuusela, Matti Tornio, Juha Karhunen
- Journal of Machine Learning Research
- 2010

Variational Bayesian (VB) methods are typically only applied to models in the conjugate-exponential family using the variational Bayesian expectation maximisation (VB EM) algorithm or one of its variants. In this paper we present an efficient algorithm for applying VB to more general models. The method is based on specifying the functional form of the…
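To make "fixed functional form" concrete, the sketch below fits a Gaussian q(θ) = N(m, s²) to a deliberately non-conjugate target by stochastic gradient ascent on the variational bound. This uses a reparameterization-gradient estimator purely for illustration; it is not the paper's algorithm, which instead uses (Riemannian) conjugate-gradient updates of the fixed-form parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
dlogp = lambda t: -t ** 3            # d/dθ of log p(θ) ∝ -θ⁴/4 (non-conjugate target)

m, log_s = 2.0, 0.0                  # fixed-form q(θ) = N(m, exp(log_s)²)
lr = 0.01
for _ in range(500):
    s = np.exp(log_s)
    eps = rng.normal(size=64)
    theta = m + s * eps              # reparameterized samples from q
    g = dlogp(theta)
    m += lr * g.mean()                            # stochastic ∂(bound)/∂m
    log_s += lr * ((g * eps).mean() * s + 1.0)    # + 1.0 is the entropy term
```

Because only the functional form of q is fixed and gradients of the bound are estimated by sampling, nothing here requires conjugacy; the mean `m` is pulled from 2.0 toward the mode of the quartic target at 0.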

- Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, Tapani Raiko
- ArXiv
- 2015