Alireza Makhzani

In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders, which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all …
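The lifetime-sparsity constraint described in this abstract can be sketched in a few lines of numpy: for each hidden unit, only its k largest activations across the mini-batch survive, and all other activations are zeroed. This is a minimal illustration under my own naming; the paper's actual training procedure also involves a deep encoder and a reconstruction loss on the surviving activations.

```python
import numpy as np

def lifetime_sparsity(h, k):
    """Keep, for each hidden unit (column), only its k largest
    activations across the mini-batch and zero out the rest.

    h: (batch_size, num_hidden) activation matrix.
    """
    # Indices of the k largest entries in each column.
    idx = np.argsort(h, axis=0)[-k:]              # shape (k, num_hidden)
    mask = np.zeros_like(h, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=0)
    return np.where(mask, h, 0.0)

h = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.3]])
# With k=1, only 0.9 survives in column 0 and 0.8 in column 1.
print(lifetime_sparsity(h, k=1))
```

Note that the sparsity is enforced per hidden unit across the batch (axis 0), which guarantees every unit fires on some fraction of the training examples, rather than per example.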
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps, and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the "k-sparse …
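In contrast to the lifetime sparsity above, the k-sparse autoencoder keeps only the k largest hidden activations for each individual sample. A minimal numpy sketch of that selection step (the function name is mine):

```python
import numpy as np

def k_sparse(h, k):
    """k-sparse step: for each sample (row), keep only the k
    largest hidden activations and zero the rest.

    h: (batch_size, num_hidden) activation matrix.
    """
    # Indices of the k largest entries in each row.
    idx = np.argsort(h, axis=1)[:, -k:]           # shape (batch_size, k)
    mask = np.zeros_like(h, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=1)
    return np.where(mask, h, 0.0)
```

Because sparsity is the only nonlinearity applied, the effect of sparsity on the learned representation can be studied in isolation from other architectural choices.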
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior …
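The adversarial matching of the aggregated posterior to the prior uses the standard GAN objective on code vectors: a discriminator is trained to tell prior samples from encoder codes, while the encoder is trained to fool it. A hedged numpy sketch of the two losses (function names are mine; the full method also trains the autoencoder's reconstruction loss jointly):

```python
import numpy as np

def d_loss(d_prior, d_post):
    """Discriminator loss: score samples drawn from the prior as
    real (label 1) and codes from the encoder's aggregated
    posterior as fake (label 0).

    d_prior, d_post: discriminator sigmoid outputs in (0, 1).
    """
    return -np.mean(np.log(d_prior)) - np.mean(np.log(1.0 - d_post))

def g_loss(d_post):
    """Encoder's adversarial loss: fool the discriminator into
    scoring posterior codes as real."""
    return -np.mean(np.log(d_post))
```

At convergence the discriminator cannot separate the two distributions, so the aggregated posterior matches the chosen prior and samples drawn from that prior decode into realistic data.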