Publications
Representation Learning on Graphs with Jumping Knowledge Networks
TLDR
This work explores jumping knowledge (JK) networks, an architecture that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representations in graphs.
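To make the idea concrete, here is a minimal, hedged PyTorch sketch of a JK-style network, not the paper's implementation: each graph-convolution layer widens a node's receptive field by one hop, and a final element-wise max over the per-layer representations lets each node pick its own effective neighborhood range. The dense normalized adjacency, layer form, and sizes are illustrative assumptions.

```python
# Minimal JK-network sketch (assumptions: dense normalized adjacency,
# max-pooling aggregator; layer sizes are illustrative).
import torch
import torch.nn as nn

class JKNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=4):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)
        )
        self.readout = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, adj):
        # adj: (N, N) normalized adjacency; x: (N, in_dim) node features.
        per_layer = []
        h = x
        for layer in self.layers:
            h = torch.relu(layer(adj @ h))  # one more hop of neighborhood
            per_layer.append(h)
        # Jumping knowledge: each node selects, via element-wise max,
        # the most informative neighborhood range across layers.
        h_jk = torch.stack(per_layer, dim=0).max(dim=0).values
        return self.readout(h_jk)

# Usage on a toy graph (identity adjacency as a stand-in):
x = torch.randn(5, 8)
adj = torch.eye(5)
logits = JKNet(8, 16, 3)(x, adj)
```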
Retrosynthesis Prediction with Conditional Graph Logic Network
TLDR
The Conditional Graph Logic Network is proposed, a conditional graphical model built upon graph neural networks that learns when rules from reaction templates should be applied, implicitly considering whether the resulting reaction would be both chemically feasible and strategic.
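A hedged sketch of the underlying scoring idea, under the assumption (not taken from the paper) that the molecule and its applicable templates are already embedded by graph neural networks: score each applicable template against the target molecule and normalize only over the applicable set, so the model learns when a template should fire.

```python
# Illustrative template scorer: p(template | molecule) over the set of
# templates whose rules match the molecule. Names and shapes are
# hypothetical, not the paper's API.
import torch
import torch.nn as nn

class TemplateScorer(nn.Module):
    def __init__(self, mol_dim, tpl_dim):
        super().__init__()
        self.w = nn.Bilinear(mol_dim, tpl_dim, 1)

    def forward(self, mol_emb, tpl_embs):
        # mol_emb: (mol_dim,) graph-level embedding of the target molecule
        # tpl_embs: (T, tpl_dim) embeddings of the T applicable templates
        scores = self.w(mol_emb.expand(len(tpl_embs), -1), tpl_embs)
        return torch.log_softmax(scores.squeeze(-1), dim=0)
```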
Fast DPP Sampling for Nyström with Application to Kernel Methods
TLDR
It is shown that (under certain conditions) Markov chain DPP sampling requires only linear time in the size of the data, and it is proved that landmarks selected via DPPs guarantee bounds on approximation errors.
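A simple version of the Markov chain sampler referenced here, for fixed-size subsets: propose swapping one in-set element for one out-of-set element and accept by the determinant ratio. This naive sketch recomputes full determinants at every step; the paper's point is that each step can be made far cheaper, giving linear time in the data size under certain conditions.

```python
# MCMC sampler for a k-DPP over a PSD kernel L: swap proposals accepted
# with probability min(1, det(L_T) / det(L_S)). Naive determinant
# recomputation here is for clarity only.
import numpy as np

def mcmc_kdpp(L, k, steps=1000, rng=None):
    rng = rng or np.random.default_rng()
    n = L.shape[0]
    S = list(rng.choice(n, size=k, replace=False))
    det_S = np.linalg.det(L[np.ix_(S, S)])
    for _ in range(steps):
        i = rng.integers(k)                           # position to swap out
        j = rng.choice(list(set(range(n)) - set(S)))  # element to swap in
        T = S.copy()
        T[i] = j
        det_T = np.linalg.det(L[np.ix_(T, T)])
        if rng.random() < min(1.0, det_T / max(det_S, 1e-300)):
            S, det_S = T, det_T
    return S

# Landmarks for a Nystrom approximation can then be the sampled indices:
# landmarks = X[mcmc_kdpp(kernel_matrix, k=20)]
```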
Batched High-dimensional Bayesian Optimization via Structural Kernel Learning
TLDR
This paper proposes to tackle high-dimensional black-box functions by assuming a latent additive structure in the function and inferring it properly for more efficient and effective BO, and performing multiple evaluations in parallel to reduce the number of iterations required by the method.
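A small sketch of the additive-structure assumption, with the grouping given rather than inferred and an illustrative RBF base kernel: the GP kernel decomposes into a sum of kernels over disjoint groups of dimensions, so each component of the objective is low-dimensional.

```python
# Additive kernel over a partition of input dimensions (illustrative).
import numpy as np

def rbf(a, b, ls=1.0):
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / ls ** 2)

def additive_kernel(X, Y, groups):
    # groups: partition of dimensions, e.g. [[0, 2], [1], [3, 4]]
    return sum(rbf(X[:, g], Y[:, g]) for g in groups)

X = np.random.randn(6, 5)
K = additive_kernel(X, X, groups=[[0, 2], [1], [3, 4]])
# A GP with this kernel models f(x) = f1(x[0], x[2]) + f2(x[1]) + f3(x[3:5]),
# so BO can optimize each low-dimensional component much more efficiently.
```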
Polynomial time algorithms for dual volume sampling
TLDR
This work develops an exact (randomized) polynomial-time sampling algorithm as well as its derandomization, and proves that the distribution satisfies the "Strong Rayleigh" property, which yields a provably fast-mixing Markov chain sampler and makes dual volume sampling much more attractive to practitioners.
Retro*: Learning Retrosynthetic Planning with Neural Guided A* Search
TLDR
Experiments show that the proposed Retro*, a neural-based A*-like algorithm that finds high-quality synthetic routes, outperforms the existing state of the art in both success rate and solution quality while also being more efficient.
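A heavily simplified best-first sketch of the neural-guided search idea: always expand the open molecule minimizing cost-so-far plus a learned value estimate. The real Retro* searches an AND-OR tree over reactions, where all precursors of a chosen reaction must be solved; `expand_fn`, `value_fn`, and `is_purchasable` below are hypothetical stand-ins.

```python
# Best-first retrosynthesis sketch with a learned value function as the
# A*-style heuristic. Simplified: precursors are treated independently.
import heapq, itertools

def retro_star_sketch(target, expand_fn, value_fn, is_purchasable, budget=100):
    tie = itertools.count()  # tiebreaker so the heap never compares molecules
    frontier = [(value_fn(target), next(tie), 0.0, target, [])]
    while frontier and budget > 0:
        f, _, g, mol, route = heapq.heappop(frontier)
        if is_purchasable(mol):
            return route  # this branch bottoms out in purchasable compounds
        budget -= 1
        for cost, precursors, rxn in expand_fn(mol):  # one-step retrosynthesis
            for p in precursors:
                h = 0.0 if is_purchasable(p) else value_fn(p)
                heapq.heappush(
                    frontier,
                    (g + cost + h, next(tie), g + cost, p, route + [rxn]),
                )
    return None  # no route found within the expansion budget
```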
Efficient Sampling for k-Determinantal Point Processes
TLDR
This work proposes a new method for approximate sampling from discrete $k$-DPPs that takes advantage of the diversity property of subsets sampled from a DPP, and proceeds in two stages: first it constructs coresets for the ground set of items; thereafter, it efficiently samples subsets based on the constructed coresets.
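A hedged sketch of the two-stage structure only: draw a small candidate set, then run a k-DPP sampler on its kernel submatrix. Uniform subsampling below is a placeholder for the paper's coreset construction, which comes with approximation guarantees.

```python
# Two-stage k-DPP sampling: coreset first, then sample within the coreset.
import numpy as np

def coreset_kdpp(L, k, coreset_size, kdpp_sampler, rng=None):
    rng = rng or np.random.default_rng()
    C = rng.choice(L.shape[0], size=coreset_size, replace=False)
    local = kdpp_sampler(L[np.ix_(C, C)], k)  # sample within the coreset
    return [int(C[i]) for i in local]         # map back to original indices

# e.g. reusing the MCMC sampler sketched above:
# S = coreset_kdpp(kernel_matrix, k=10, coreset_size=200,
#                  kdpp_sampler=mcmc_kdpp)
```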
Randomized Greedy Inference for Joint Segmentation, POS Tagging and Dependency Parsing
TLDR
A randomized greedy algorithm is employed that jointly predicts segmentation, POS tags, and dependency trees, and readily handles different segmentation tasks, such as morphological segmentation for Arabic and word segmentation for Chinese.
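A minimal sketch of randomized greedy inference in the abstract: random restarts, each followed by greedy first-improvement hill climbing over local edits. Here `init_fn`, `local_edits`, and `score_fn` are placeholders for the joint segmentation/tagging/parsing move set and model score.

```python
# Randomized greedy inference: hill climbing with random restarts.
import random

def randomized_greedy(init_fn, local_edits, score_fn, restarts=10, rng=None):
    rng = rng or random.Random()
    best, best_score = None, float("-inf")
    for _ in range(restarts):
        state = init_fn(rng)              # random segmentation + tags + tree
        score = score_fn(state)
        improved = True
        while improved:
            improved = False
            for nxt in local_edits(state):  # e.g. retag a word, reattach an arc
                s = score_fn(nxt)
                if s > score:
                    state, score, improved = nxt, s, True
                    break                   # greedy: take the first improvement
        if score > best_score:
            best, best_score = state, score
    return best
```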
Distributional Adversarial Networks
TLDR
Inspired by discrepancy measures and two-sample tests between probability distributions, a framework for adversarial training is proposed that relies on a sample, rather than a single sample point, as the fundamental unit of discrimination.
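A sketch of what a sample-level discriminator can look like (the pooling choice and sizes are illustrative assumptions, not the paper's architecture): embed each point, pool permutation-invariantly across the whole sample, and emit one real/fake decision per sample.

```python
# Discriminator whose unit of discrimination is a sample of points.
import torch
import torch.nn as nn

class SampleDiscriminator(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.point_net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.sample_net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1))

    def forward(self, sample):
        # sample: (n, dim) -- n points drawn from one distribution
        pooled = self.point_net(sample).mean(dim=0)  # permutation-invariant
        return self.sample_net(pooled)  # one real/fake logit for the sample

real_logit = SampleDiscriminator(2)(torch.randn(128, 2))
```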
Neural Program Lattices
TLDR
The capability of NPL to learn to perform long-hand addition and to arrange blocks in a grid-world environment is demonstrated; experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, indicating its ability to infer high-level program structure from examples containing only low-level operations.