• Publications
Multitask learning and benchmarking with clinical time series data
TLDR
We propose four clinical prediction benchmarks using data derived from the publicly available Medical Information Mart for Intensive Care (MIMIC-III) database.
MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing
TLDR
We propose a new model, MixHop, that can learn these relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances.
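The mixing idea above can be sketched in a few lines: propagate node features with several powers of the normalized adjacency matrix, apply a separate linear map and nonlinearity per power, and concatenate. This is a minimal NumPy sketch of the layer's structure, not the paper's implementation; the function and parameter names are illustrative.

```python
import numpy as np

def mixhop_layer(A, H, weights, powers=(0, 1, 2)):
    """One MixHop-style layer: mix features from neighbors at several
    distances by applying powers of the normalized adjacency matrix,
    a per-power linear map, and a ReLU, then concatenating."""
    # Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    outputs = []
    for j, W in zip(powers, weights):
        # j-hop propagation: A_norm^j @ H, then a linear map specific to power j
        out = np.linalg.matrix_power(A_norm, j) @ H @ W
        outputs.append(np.maximum(out, 0))  # ReLU
    return np.concatenate(outputs, axis=1)
```

Because each power of the adjacency gets its own weight matrix, the layer can learn contrasts between hops (e.g. 1-hop minus 2-hop features), which is how difference operators arise.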
Scalable Temporal Latent Space Inference for Link Prediction in Dynamic Social Networks
TLDR
We propose a temporal latent space model for link prediction in dynamic social networks, where the goal is to predict links over time based on a sequence of previous graph snapshots.
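The latent-space idea can be illustrated with a small gradient-descent sketch: fit low-dimensional positions to each snapshot, tie consecutive snapshots together with a smoothness penalty, and score future links by inner products of the final positions. This is a generic sketch of the modeling idea under those assumptions, not the paper's inference algorithm; all names and hyperparameters are illustrative.

```python
import numpy as np

def predict_links(snapshots, d=2, lr=0.05, steps=300, tau=0.1):
    """Fit latent positions Z_t (n x d) to a list of adjacency matrices
    by minimizing ||Z_t Z_t^T - A_t||^2 plus a temporal smoothness
    penalty tau * ||Z_t - Z_{t-1}||^2, then score next-step links by
    the inner products of the last snapshot's positions."""
    rng = np.random.default_rng(0)
    n = snapshots[0].shape[0]
    Z = [rng.standard_normal((n, d)) * 0.1 for _ in snapshots]
    for _ in range(steps):
        for t, A in enumerate(snapshots):
            # Reconstruction gradient (up to a constant factor)
            grad = (Z[t] @ Z[t].T - A) @ Z[t]
            # Smoothness: pull Z_t toward its temporal neighbours
            if t > 0:
                grad += tau * (Z[t] - Z[t - 1])
            if t < len(snapshots) - 1:
                grad += tau * (Z[t] - Z[t + 1])
            Z[t] -= lr * grad
    return Z[-1] @ Z[-1].T  # predicted link scores for the next step
```

Nodes that repeatedly co-occur in links end up close in latent space, so their inner-product scores are high for the next snapshot.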
A Survey on Bias and Fairness in Machine Learning
TLDR
We review research investigating how biases in data skew what is learned by machine learning algorithms, and we list different sources of bias that can affect AI applications.
Mathematical Model of Foraging in a Group of Robots: Effect of Interference
TLDR
We present a mathematical model of foraging in a homogeneous multi-robot system, with the goal of understanding quantitatively the effects of interference.
Analysis of Dynamic Task Allocation in Multi-Robot Systems
TLDR
Dynamic task allocation is a class of task allocation in which the assignment of robots to sub-tasks is a dynamic process that may need to be continuously adjusted.
The DARPA Twitter Bot Challenge
TLDR
We need to identify and eliminate "influence bots" - realistic, automated identities that illicitly shape discussions on social media - before they get too influential.
Information transfer in social media
TLDR
We propose a measure of causal relationships between nodes based on the information-theoretic notion of transfer entropy, or information transfer, which allows us to differentiate between weak and strong influence over large groups.
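Transfer entropy from Y to X measures how much knowing Y's past reduces uncertainty about X's next value beyond what X's own past already tells us: T_{Y→X} = Σ p(x_{t+1}, x_t, y_t) log [p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t)]. A minimal plug-in estimator for binary series, as a sketch of the quantity rather than the paper's estimation procedure:

```python
import numpy as np
from collections import Counter

def transfer_entropy(y, x):
    """Plug-in estimate (in bits) of transfer entropy T_{Y->X} for
    discrete series, from empirical counts of (x_{t+1}, x_t, y_t)."""
    triples = list(zip(x[1:], x[:-1], y[:-1]))
    n = len(triples)
    p_xyz = Counter(triples)                       # (x_{t+1}, x_t, y_t)
    p_xz = Counter((a, b) for a, b, _ in triples)  # (x_{t+1}, x_t)
    p_yz = Counter((b, c) for _, b, c in triples)  # (x_t, y_t)
    p_z = Counter(b for _, b, _ in triples)        # x_t
    te = 0.0
    for (a, b, c), n_abc in p_xyz.items():
        p_joint = n_abc / n
        cond_full = n_abc / p_yz[(b, c)]     # p(x_{t+1} | x_t, y_t)
        cond_self = p_xz[(a, b)] / p_z[b]    # p(x_{t+1} | x_t)
        te += p_joint * np.log2(cond_full / cond_self)
    return te
```

The measure is directional: if X simply copies Y with a one-step lag, T_{Y→X} is large while T_{X→Y} stays near zero, which is what lets it separate strong influencers from weak ones.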
Invariant Representations without Adversarial Training
TLDR
We show that adversarial training is unnecessary and sometimes counter-productive; we cast invariant representation learning as a single information-theoretic objective that can be directly optimized.
Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings
TLDR
In this paper, we explore using contextualized word embeddings to compute more accurate relatedness scores for automatic evaluation of open-domain dialogue systems.