Alireza Makhzani

Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps, and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the "k-sparse …
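The core operation the excerpt alludes to can be sketched as follows: after the encoder's linear map, keep only the k largest hidden activations per example and zero the rest. This is a minimal NumPy illustration, not the authors' implementation; the function name `k_sparse` is ours.

```python
import numpy as np

def k_sparse(hidden, k):
    """Keep only the k largest activations in each row (example);
    zero the rest. A k-sparse autoencoder applies this after the
    linear encoder and feeds the sparsified code to the decoder."""
    hidden = np.asarray(hidden, dtype=float)
    out = np.zeros_like(hidden)
    # argpartition puts the indices of the k largest entries last
    idx = np.argpartition(hidden, -k, axis=1)[:, -k:]
    rows = np.arange(hidden.shape[0])[:, None]
    out[rows, idx] = hidden[rows, idx]
    return out

h = np.array([[0.1, 0.9, -0.3, 0.5],
              [1.2, -0.7, 0.4, 0.0]])
sparse_h = k_sparse(h, 2)  # each row keeps only its two largest activations
```

During training, gradients flow only through the surviving units, so each example is reconstructed from its k most active hidden features.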
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior …
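The matching described above is adversarial: a discriminator is trained to tell samples from the prior p(z) apart from codes produced by the encoder, while the encoder is trained to fool it. The two losses below are a minimal sketch of that objective under standard GAN cross-entropy; the function names are illustrative, and the networks themselves are omitted.

```python
import numpy as np

def discriminator_loss(d_prior, d_code):
    """Binary cross-entropy for the AAE discriminator.
    d_prior: discriminator probabilities on samples from the prior p(z),
             pushed toward 1. d_code: probabilities on encoder codes,
             pushed toward 0."""
    eps = 1e-12  # numerical guard for log(0)
    return -np.mean(np.log(d_prior + eps)) - np.mean(np.log(1.0 - d_code + eps))

def encoder_adversarial_loss(d_code):
    """Encoder ("generator") loss: fool the discriminator, driving the
    aggregated posterior q(z) toward the prior p(z)."""
    eps = 1e-12
    return -np.mean(np.log(d_code + eps))
```

Alternating these updates with the usual reconstruction loss is what shapes the code distribution to the chosen prior.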
In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all …
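The lifetime-sparsity step can be sketched in NumPy: for each hidden unit, keep only its k largest activations across the mini-batch and zero the rest, so every unit trains on its best-matching examples. This is an illustrative sketch of the mechanism, not the authors' code; `lifetime_sparsity` is our name.

```python
import numpy as np

def lifetime_sparsity(hidden, k):
    """Winner-take-all lifetime sparsity. hidden: (batch, units) matrix
    of activations. For each column (hidden unit), keep only its k
    largest activations across the mini-batch; zero the rest."""
    hidden = np.asarray(hidden, dtype=float)
    out = np.zeros_like(hidden)
    # per-column indices of the k largest entries
    idx = np.argpartition(hidden, -k, axis=0)[-k:, :]
    cols = np.arange(hidden.shape[1])[None, :]
    out[idx, cols] = hidden[idx, cols]
    return out
```

Note the contrast with the per-example k-sparse constraint: sparsity is enforced along the batch axis rather than within each code vector.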
We explore combining the benefits of convolutional architectures and autoencoders for learning deep representations in an unsupervised manner. A major challenge is to achieve appropriate sparsity among hidden variables, since neighbouring variables in each feature map tend to be highly correlated and a suppression mechanism is therefore needed. Previously, …