Publications
Autoencoding beyond pixels using a learned similarity metric
TLDR
An autoencoder that leverages learned representations to better measure similarities in data space is presented, and it is shown that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
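The "simple arithmetic" on embeddings can be sketched roughly as follows. This is an illustrative toy, not the paper's code: the encoder outputs are simulated with random vectors, and an attribute direction is taken as the difference of group means in latent space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated latent codes, standing in for encoder outputs.
# Each row is one image's 64-dimensional embedding.
z_with_glasses = rng.normal(loc=1.0, size=(100, 64))     # faces with glasses
z_without_glasses = rng.normal(loc=0.0, size=(100, 64))  # faces without

# Attribute vector: difference of the two group means in latent space.
glasses_vec = z_with_glasses.mean(axis=0) - z_without_glasses.mean(axis=0)

# "Put glasses on" a new face by adding the attribute vector to its code;
# decoding the shifted code would yield the modified image.
z_face = rng.normal(loc=0.0, size=64)
z_face_with_glasses = z_face + glasses_vec
```

The same difference-of-means trick works for any attribute with labeled examples on both sides; the decoder then maps the shifted code back to pixel space.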
SignalP 5.0 improves signal peptide predictions using deep neural networks
TLDR
A deep neural network-based approach that improves SP prediction across all domains of life and distinguishes between three types of prokaryotic SPs is presented.
Ladder Variational Autoencoders
TLDR
A new inference model is proposed, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network.
Sequential Neural Models with Stochastic Layers
TLDR
Stochastic recurrent neural networks are introduced which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model.
Auxiliary Deep Generative Models
TLDR
This work extends deep generative models with auxiliary variables, which improve the variational approximation, and proposes a model with two stochastic layers and skip connections that achieves state-of-the-art performance in semi-supervised learning on the MNIST, SVHN and NORB datasets.
DeepLoc: prediction of protein subcellular localization using deep learning
TLDR
This work presents a prediction algorithm using deep neural networks to predict protein subcellular localization relying only on sequence information, outperforming current state-of-the-art algorithms, including those relying on homology information.
Bayesian Non-negative Matrix Factorization
TLDR
An iterated conditional modes algorithm is presented that rivals existing state-of-the-art NMF algorithms on an image feature extraction problem and discusses how the Gibbs sampler can be used for model order selection by estimating the marginal likelihood.
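For context, the underlying problem is the factorization V ≈ WH with non-negative factors. A minimal sketch using the classic Lee–Seung multiplicative updates, a simpler point-estimate scheme rather than the paper's ICM or Gibbs procedures:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-negative data matrix V to be factorized as V ~ W @ H, rank k.
V = rng.random((20, 30))
k = 4
W = rng.random((20, k)) + 1e-3
H = rng.random((k, 30)) + 1e-3

err0 = np.linalg.norm(V - W @ H)  # initial reconstruction error

eps = 1e-9  # guards against division by zero
for _ in range(200):
    # Multiplicative updates for the Frobenius-norm objective;
    # they keep W and H non-negative and never increase the error.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H)  # error after fitting
```

The Bayesian treatment in the paper replaces these point updates with posterior inference over W and H, which is what enables model order (rank) selection via the marginal likelihood.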
A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning
TLDR
The Kalman variational auto-encoder is introduced, a framework for unsupervised learning of sequential data that disentangles two latent representations: an object's representation, coming from a recognition model, and a latent state describing its dynamics.
Growth-rate regulated genes have profound impact on interpretation of transcriptome profiling in Saccharomyces cerevisiae
TLDR
The data show that the cellular growth rate strongly influences transcriptional regulation, and imply that one should be cautious when comparing mutants with different growth rates.
Detecting sequence signals in targeting peptides using deep learning
During the development of TargetP 2.0, a state-of-the-art method to predict targeting signals, we find a previously overlooked biological signal for subcellular targeting using the output from a deep learning model.