We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space, while the inference network maps training examples in data space to the space of latent variables. An adversarial …
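A minimal sketch of the setup this abstract describes, assuming simple MLPs and standard PyTorch conventions (the layer sizes, `x_dim`/`z_dim`, and `ali_step` are illustrative, not the authors' implementation): the discriminator scores joint (data, latent) pairs, and the generator/encoder pair is trained to fool it.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 64

G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))  # z -> x
E = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))  # x -> z
D = nn.Sequential(nn.Linear(x_dim + z_dim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def ali_step(x):
    n = x.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    # Discriminator learns to tell the two joint distributions apart:
    # (x, E(x)) labeled real, (G(z), z) labeled fake.
    z = torch.randn(n, z_dim)
    d_loss = bce(D(torch.cat([x, E(x).detach()], 1)), ones) \
           + bce(D(torch.cat([G(z).detach(), z], 1)), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator and encoder jointly try to flip the discriminator's labels.
    z = torch.randn(n, z_dim)
    ge_loss = bce(D(torch.cat([x, E(x)], 1)), zeros) \
            + bce(D(torch.cat([G(z), z], 1)), ones)
    opt_ge.zero_grad(); ge_loss.backward(); opt_ge.step()
```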
We present an algorithm for minimizing a sum of functions that combines the computational efficiency of stochastic gradient descent (SGD) with the second-order curvature information leveraged by quasi-Newton methods. We unify these disparate approaches by maintaining an independent Hessian approximation for each contributing function in the sum. We …
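A heavily simplified sketch of the core idea, not the paper's algorithm: keep an independent quadratic model for each subfunction, refresh one model per iteration, and step to the minimizer of the summed models. Here the per-function Hessian is collapsed to a fixed scalar `h` for brevity; the real method maintains a low-rank Hessian approximation per subfunction.

```python
import numpy as np

def sum_of_models_minimize(grads, x0, n_iters=400, h=10.0):
    """grads: list of per-subfunction gradient callables; h: fixed scalar
    curvature estimate standing in for the per-function Hessian updates."""
    n = len(grads)
    x = x0.copy()
    anchors = [x0.copy() for _ in range(n)]  # expansion point of each quadratic model
    gs = [g(x0) for g in grads]              # stored gradient of each model
    for t in range(n_iters):
        i = t % n                            # refresh one subfunction's model per step
        anchors[i], gs[i] = x.copy(), grads[i](x)
        # Minimizer of sum_i [g_i . (x - a_i) + (h/2) |x - a_i|^2] in closed form:
        x = np.mean([a - g / h for a, g in zip(anchors, gs)], axis=0)
    return x

# Tiny usage: a least-squares problem split into one subfunction per row;
# the iterate approaches the all-ones solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5)); b = A @ np.ones(5)
grads = [lambda x, a=A[i], y=b[i]: (a @ x - y) * a for i in range(20)]
print(sum_of_models_minimize(grads, np.zeros(5)))
```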
Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two part-games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and for activation patterns in various brain regions. …
We study the expressivity of deep neural networks with random weights. We provide several results, both theoretical and experimental, precisely characterizing their functional properties in terms of the depth and width of the network. In doing so, we illustrate inherent connections between the length of a latent trajectory, local neuron transitions, and …
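A small experiment in the spirit of this abstract (illustrative, not the paper's code): propagate a densely sampled circle through a random `tanh` network and measure how the trajectory's arc length grows with depth. The width, depth, and weight scale `sigma_w` below are arbitrary choices.

```python
import numpy as np

def trajectory_length(points):
    # Sum of distances between consecutive points along the trajectory.
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

width, depth, sigma_w = 200, 10, 2.0
rng = np.random.default_rng(0)

# A circle in input space, embedded into the network's input dimension.
t = np.linspace(0, 2 * np.pi, 1000)
h = np.stack([np.cos(t), np.sin(t)], axis=1) @ rng.standard_normal((2, width))
print("input length:", trajectory_length(h))

for layer in range(depth):
    W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
    h = np.tanh(h @ W)
    print(f"layer {layer + 1} length:", trajectory_length(h))
```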
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the …
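A minimal sketch of the unrolling mechanism on a toy scalar problem (not the authors' code): the generator's loss is evaluated at discriminator parameters obtained by K differentiable gradient steps, so the generator gradient flows through the discriminator's own learning dynamics. The saddle function `v` below is a stand-in for the GAN minimax objective.

```python
import torch

theta = torch.tensor([0.0], requires_grad=True)  # generator parameter
phi = torch.tensor([0.5], requires_grad=True)    # discriminator parameter
K, eta = 5, 0.1                                  # unrolling depth, D step size

def v(phi_, theta_):
    # Toy saddle value: the discriminator ascends v in phi,
    # the generator descends v in theta.
    return phi_ * theta_ - 0.1 * phi_ ** 2

# Unroll K differentiable discriminator updates, keeping the graph
# (create_graph=True) so we can backpropagate through them.
phi_k = phi
for _ in range(K):
    g, = torch.autograd.grad(v(phi_k, theta), phi_k, create_graph=True)
    phi_k = phi_k + eta * g

gen_grad, = torch.autograd.grad(v(phi_k, theta), theta)
print("generator gradient through the unrolled discriminator:", gen_grad)
```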
We present the bilateral solver, a novel algorithm for edge-aware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth …
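A toy 1-D illustration of the kind of optimization this describes, under strong simplifying assumptions: a dense weighted-least-squares solve with a bilateral affinity, where the actual bilateral solver works in a bilateral grid and is far faster. The function name and parameters are hypothetical.

```python
import numpy as np

def edge_aware_smooth(target, reference, lam=5.0, sigma=0.1):
    n = len(target)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    # Bilateral affinity: nearby samples with similar reference values.
    W = np.exp(-((i - j) ** 2) / 8.0
               - ((reference[i] - reference[j]) ** 2) / (2 * sigma ** 2))
    L = np.diag(W.sum(1)) - W  # graph Laplacian of the affinity
    # Minimize |x - target|^2 + lam * x^T L x via its closed-form solve.
    return np.linalg.solve(np.eye(n) + lam * L, target)

ref = np.concatenate([np.zeros(50), np.ones(50)])  # reference with a step edge
noisy = ref + 0.2 * np.random.default_rng(0).standard_normal(100)
smoothed = edge_aware_smooth(noisy, ref)  # smooth, but the edge is preserved
```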
We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially …
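An illustrative mean-field calculation simplified from this line of work: iterate the variance map for a deep random `tanh` network to its fixed point q*, then evaluate chi = sigma_w^2 E[tanh'(sqrt(q*) z)^2], which determines whether nearby inputs converge (order, chi < 1) or diverge (chaos, chi > 1). The Monte Carlo sample size and sigma_w grid are arbitrary.

```python
import numpy as np

z = np.random.default_rng(0).standard_normal(200_000)  # Gaussian samples

def chi(sigma_w, sigma_b=0.0, iters=200):
    q = 1.0
    for _ in range(iters):  # fixed point of the layer-to-layer variance map
        q = sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2
    dphi = 1.0 - np.tanh(np.sqrt(q) * z) ** 2  # tanh'(u) = 1 - tanh(u)^2
    return sigma_w**2 * np.mean(dphi ** 2)

for sw in [0.5, 1.0, 1.5, 2.0, 3.0]:
    c = chi(sw)
    print(f"sigma_w={sw}: chi={c:.2f} ->", "chaotic" if c > 1 else "ordered")
```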
Autoencoders have emerged as a useful framework for unsupervised learning of internal representations, and a wide variety of apparently conceptually disparate regularization techniques have been proposed to generate useful features. Here we extend existing denoising autoencoders to additionally inject noise before the non-linearity, and at the hidden unit …
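A minimal sketch of the variant this abstract describes (illustrative, not the authors' code): a denoising autoencoder forward pass that injects noise at the input, before the nonlinearity, and at the hidden units. The layer sizes and noise scales are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 100, 30
W1 = rng.standard_normal((d_in, d_hid)) * 0.1
W2 = rng.standard_normal((d_hid, d_in)) * 0.1

def noisy_forward(x, s_in=0.3, s_pre=0.1, s_hid=0.1):
    x_tilde = x + s_in * rng.standard_normal(x.shape)        # input corruption (standard DAE)
    pre = x_tilde @ W1 + s_pre * rng.standard_normal(d_hid)  # noise before the nonlinearity
    h = np.tanh(pre) + s_hid * rng.standard_normal(d_hid)    # noise at the hidden units
    return h @ W2                                            # reconstruction of the clean x

x = rng.standard_normal(d_in)
loss = np.mean((noisy_forward(x) - x) ** 2)  # train by minimizing reconstruction error
```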
Populations of V1 neurons recorded from three macaque monkeys were analyzed to evaluate the relationship between the distribution of disparity tuning properties and the scene statistics of depth distribution relative to fixation depth. We found that there are more neurons preferring disparities with a higher probability of occurrence in the natural …