node2vec: Scalable Feature Learning for Networks
node2vec is an algorithmic framework for learning continuous feature representations for nodes in networks; it defines a flexible notion of a node's network neighborhood and designs a biased random walk procedure that efficiently explores diverse neighborhoods.
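The biased walk mentioned above can be sketched as a second-order random walk whose transition probabilities depend on the previous node. This is a minimal illustration, assuming an adjacency-list graph and the standard return/in-out parameters `p` and `q`; it is not the paper's optimized implementation.

```python
import random

def biased_next_step(graph, prev, cur, p=1.0, q=1.0):
    # node2vec-style second-order step: weight each neighbor of `cur` by
    # 1/p if it returns to `prev`, 1 if it is also a neighbor of `prev`,
    # and 1/q otherwise (controls BFS- vs DFS-like exploration).
    neighbors = graph[cur]
    weights = []
    for nxt in neighbors:
        if nxt == prev:
            weights.append(1.0 / p)
        elif nxt in graph[prev]:
            weights.append(1.0)
        else:
            weights.append(1.0 / q)
    return random.choices(neighbors, weights=weights, k=1)[0]

def node2vec_walk(graph, start, length, p=1.0, q=1.0):
    # First step is uniform; subsequent steps use the biased transition.
    walk = [start, random.choice(graph[start])]
    while len(walk) < length:
        walk.append(biased_next_step(graph, walk[-2], walk[-1], p, q))
    return walk
```

In the full method, many such walks are collected per node and fed to a skip-gram objective to learn the embeddings.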
Decision Transformer: Reinforcement Learning via Sequence Modeling
Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
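Decision Transformer frames offline RL as sequence modeling conditioned on desired returns. A minimal sketch of the returns-to-go targets used for that conditioning (a standard construction; variable names are illustrative):

```python
def returns_to_go(rewards):
    # At each timestep t, the return-to-go is the sum of rewards from t
    # onward; the sequence model conditions on these to produce actions.
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return rtg[::-1]
```

For example, a reward trajectory `[1, 0, 2]` yields returns-to-go `[3, 2, 2]`, and at test time one simply conditions on a high initial return to elicit high-performing behavior.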
Graphite: Iterative Generative Modeling of Graphs
This work proposes Graphite, an algorithmic framework for unsupervised learning of node representations in large graphs using deep latent variable generative models; it parameterizes variational autoencoders (VAEs) with graph neural networks and uses a novel iterative graph refinement strategy, inspired by low-rank approximations, for decoding.
Stochastic Optimization of Sorting Networks via Continuous Relaxations
This work proposes NeuralSort, a general-purpose continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, which permits straight-through optimization of any computational graph involving a sorting operation.
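The relaxation can be sketched in a few lines of NumPy. This follows the NeuralSort construction for sorting in decreasing order: each row of the output is a softmax over scores, so the result is a unimodal row-stochastic matrix that approaches a hard permutation matrix as the temperature `tau` goes to zero.

```python
import numpy as np

def neuralsort(s, tau=1.0):
    # Continuous relaxation of the permutation matrix sorting s in
    # decreasing order; rows are softmax-normalized (row-stochastic).
    s = np.asarray(s, dtype=float).reshape(-1, 1)
    n = s.shape[0]
    A = np.abs(s - s.T)                                   # |s_i - s_j|
    B = A.sum(axis=0, keepdims=True)                      # column sums
    scaling = (n + 1 - 2 * np.arange(1, n + 1)).reshape(-1, 1)
    logits = (scaling * s.T - B) / tau                    # (n+1-2i)s_j - sum_k |s_j-s_k|
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Because every operation is differentiable, gradients flow through the relaxed sort, enabling end-to-end training of models with a sorting step.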
Pretrained Transformers as Universal Computation Engines
It is shown that pretraining on natural language can improve performance and compute efficiency on non-language downstream tasks; an analysis of the architecture is performed, comparing the performance of a randomly initialized transformer to a randomly initialized LSTM.
Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models
Flow-GAN is proposed, a generative adversarial network for which one can perform exact likelihood evaluation, thus supporting both adversarial and maximum likelihood training; hybrid training is demonstrated to attain high held-out likelihoods while retaining visual fidelity in the generated samples.
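The exact likelihood evaluation rests on the change-of-variables formula for invertible (normalizing-flow) generators. A minimal one-dimensional sketch with an affine flow and a standard-normal base density (the paper uses deeper flows; this only illustrates the formula):

```python
import numpy as np

def affine_flow_loglik(x, scale, shift):
    # Invertible map z = (x - shift) / scale with standard-normal base
    # density; change of variables gives
    #   log p(x) = log N(z; 0, 1) + log |dz/dx|.
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2.0 * np.pi))
    log_det = -np.log(np.abs(scale))  # log |dz/dx| = -log |scale|
    return log_base + log_det
```

Because the generator is invertible with a tractable Jacobian, the same network can be trained adversarially, by maximum likelihood, or by a hybrid of both.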
Learning Controllable Fair Representations
- Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, S. Ermon
- Computer Science · AISTATS
- 11 December 2018
Exploiting duality, this work introduces a method that optimizes the model parameters as well as the expressiveness-fairness trade-off and achieves higher expressiveness at a lower computational cost.
AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows
AlignFlow is proposed, a generative modeling framework that models each domain via a normalizing flow; it outperforms relevant baselines on image-to-image translation and unsupervised domain adaptation, and can simultaneously interpolate across the various domains using the learned representation.
Learning Policy Representations in Multiagent Systems
- Aditya Grover, Maruan Al-Shedivat, Jayesh K. Gupta, Yuri Burda, Harrison Edwards
- Computer Science · ICML
- 17 June 2018
A general learning framework is proposed for modeling agent behavior in any multiagent system from only a handful of interaction data; a novel objective inspired by imitation learning and agent identification is constructed, and an algorithm for unsupervised learning of representations of agent policies is designed.
Permutation Invariant Graph Generation via Score-Based Generative Modeling
- Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, S. Ermon
- Computer Science · AISTATS
- 2 March 2020
A permutation invariant approach to modeling graphs is proposed, using the recent framework of score-based generative modeling; a permutation equivariant, multi-channel graph neural network is designed to model the gradient of the data distribution at the input graph.
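The permutation-equivariance property of such a network can be illustrated with a single message-passing layer: relabeling the nodes of the input graph relabels the output features in exactly the same way. This is a generic sketch, not the paper's multi-channel architecture.

```python
import numpy as np

def gnn_layer(A, H, W):
    # One permutation-equivariant message-passing step: aggregate
    # neighbor features (A @ H), then mix channels with a weight
    # matrix W shared across all nodes.
    return np.tanh((A @ H) @ W)
```

Equivariance holds because `gnn_layer(P A P.T, P H, W) = P @ gnn_layer(A, H, W)` for any permutation matrix `P`; a score network built from such layers therefore assigns the same likelihood to every node ordering of a graph.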