Publications
Joint Reasoning for Temporal and Causal Relations
TLDR
This paper formulates the joint problem as an integer linear program (ILP) that enforces constraints inherent in the nature of time and causality, and shows that this joint inference framework yields statistically significant improvements in extracting both temporal and causal relations from text.
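As a rough illustration of this style of joint inference, the sketch below encodes one event pair's temporal and causal labels as binary ILP variables with a causality-implies-precedence constraint. It is a minimal, hypothetical example built on the open-source PuLP solver; the scores, label sets, and constraint are illustrative and not the paper's actual formulation.

```python
# Hypothetical joint temporal-causal inference for a single event pair (e1, e2),
# cast as an ILP with the open-source PuLP solver. Scores stand in for the
# outputs of trained local classifiers.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum

temporal_scores = {"before": 0.6, "after": 0.3, "vague": 0.1}
causal_scores = {"causes": 0.7, "none": 0.3}

prob = LpProblem("joint_temporal_causal", LpMaximize)

# One binary indicator per candidate label.
t_vars = {r: LpVariable(f"temporal_{r}", cat="Binary") for r in temporal_scores}
c_vars = {r: LpVariable(f"causal_{r}", cat="Binary") for r in causal_scores}

# Objective: maximize the summed scores of the chosen labels.
prob += lpSum(temporal_scores[r] * t_vars[r] for r in t_vars) + \
        lpSum(causal_scores[r] * c_vars[r] for r in c_vars)

# Each pair receives exactly one temporal and one causal label.
prob += lpSum(t_vars.values()) == 1
prob += lpSum(c_vars.values()) == 1

# Consistency constraint: if e1 causes e2, then e1 must be before e2.
prob += c_vars["causes"] <= t_vars["before"]

prob.solve()
chosen = [v.name for v in list(t_vars.values()) + list(c_vars.values()) if v.value() == 1]
print(chosen)  # e.g. ['temporal_before', 'causal_causes']
```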
A Structured Learning Approach to Temporal Relation Extraction
TLDR
This work argues that dependencies among temporal relations must be taken into account when learning to identify them between events, and proposes a structured learning approach to address this challenge.
CogCompTime: A Tool for Understanding Time in Natural Language
TLDR
This paper introduces CogCompTime, a system that provides both time expression understanding (extraction and normalization) and temporal relation extraction, incorporates the most recent progress, achieves state-of-the-art performance, and is publicly available at http://cogcomp.org/page/publication_view/844.
CogCompNLP: Your Swiss Army Knife for NLP
TLDR
This work presents COGCOMPNLP, a library that simplifies the design and development of NLP applications by providing modules for different challenges: a corpus-reader module supporting popular corpora in the NLP community, a module for low-level data structures and operations, and an extensive suite of annotation modules for a wide range of syntactic and semantic tasks.
Strong Coresets for Subspace Approximation and k-Median in Nearly Linear Time
TLDR
This paper gives the first polynomial-time, and in fact nearly linear-time, algorithms for constructing strong coresets of size poly(k/ε) for subspace approximation and k-median.
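For context, a strong coreset here is a small weighted subset whose cost agrees with the full data set on every candidate solution; the standard k-median version is recalled below, with generic notation chosen for this summary rather than taken from the paper.

```latex
% Strong (1 +/- eps) coreset for k-median: a weighted subset S of the point
% set P with weights w such that, for every set C of k centers,
% the weighted cost on S approximates the cost on P. For subspace
% approximation, C instead ranges over k-dimensional subspaces.
\[
  \Bigl|\, \sum_{p \in S} w(p)\,\mathrm{dist}(p, C)
  \;-\; \sum_{p \in P} \mathrm{dist}(p, C) \,\Bigr|
  \;\le\; \varepsilon \sum_{p \in P} \mathrm{dist}(p, C)
  \qquad \text{for every set } C \text{ of } k \text{ centers.}
\]
```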
Does Data Augmentation Lead to Positive Margin?
TLDR
This paper presents lower bounds on the number of augmented data points required to achieve a non-zero margin, and shows that commonly used data augmentation (DA) techniques may introduce a significant margin only after exponentially many points are added to the data set.
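The margin in question is the usual geometric margin of a linear classifier; a standard definition is recalled below (generic notation, not taken from the paper).

```latex
% Geometric margin of a linear classifier w on a labeled data set {(x_i, y_i)}:
% it is positive exactly when w separates the (augmented) data.
\[
  \gamma(w) \;=\; \min_{i} \frac{y_i \,\langle w, x_i \rangle}{\lVert w \rVert_2}.
\]
```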
Online Learning with Graph-Structured Feedback against Adaptive Adversaries
  • Zhili Feng, Po-Ling Loh
  • Computer Science, Mathematics
    IEEE International Symposium on Information…
  • 1 April 2018
We derive upper and lower bounds for the policy regret of $T$-round online learning problems with graph-structured feedback, where the adversary is nonoblivious but assumed to have a bounded memory.
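Policy regret differs from standard external regret in that the comparator's actions are fed back into the adaptive adversary's losses; a common formalization from the policy-regret literature is shown below, with notation that may differ from the paper's.

```latex
% Policy regret against an adaptive (non-oblivious) adversary: the comparator
% action is evaluated on the counterfactual history it would have generated,
% not on the realized one. When the adversary has bounded memory m, each loss
% l_t depends only on the last m actions.
\[
  R_T^{\mathrm{policy}}
  \;=\;
  \sum_{t=1}^{T} \ell_t(a_1, \dots, a_t)
  \;-\;
  \min_{a \in \mathcal{A}} \sum_{t=1}^{T} \ell_t(a, \dots, a).
\]
```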
Dimensionality Reduction for the Sum-of-Distances Metric
TLDR
This paper gives a dimensionality reduction procedure that approximates the sum of distances from a given set of n points in R^d to any "shape" lying in a k-dimensional subspace of R^d, and that can be used to obtain coresets of size poly(k/ε) for the k-median and (k, 1)-subspace approximation problems in polynomial time.
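The quantity being preserved is the total Euclidean distance from the point set to a candidate subspace; the small numpy sketch below computes that objective directly. It illustrates the metric only, not the paper's dimensionality reduction procedure, and all sizes are arbitrary.

```python
# Sum-of-distances objective: total Euclidean distance from n points in R^d
# to a k-dimensional subspace, computed via orthogonal projection.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 50, 5

X = rng.normal(size=(n, d))                    # n points in R^d
B = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal basis of a k-dim subspace

# Distance from each point to the subspace = norm of its residual after projection.
residuals = X - X @ B @ B.T
sum_of_distances = np.linalg.norm(residuals, axis=1).sum()
print(sum_of_distances)
```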
Provable Adaptation across Multiway Domains via Representation Learning
TLDR
This paper studies zero-shot domain adaptation in which each domain is indexed on a multidimensional array and data are available only from a small subset of domains, and proposes a model consisting of a domain-invariant latent representation layer and a domain-specific linear prediction layer with a low-rank tensor structure.
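To make the architecture concrete, the sketch below pairs a shared encoder with a per-domain linear head whose weight is a low-rank combination indexed by a two-way domain index (i, j). This is a hypothetical PyTorch sketch under assumptions about the setup, not the paper's exact model; all dimensions and names are illustrative.

```python
# Hypothetical multiway-domain model: a shared, domain-invariant encoder plus a
# domain-specific linear head whose weight is a low-rank function of (i, j).
import torch
import torch.nn as nn

class MultiwayModel(nn.Module):
    def __init__(self, in_dim, rep_dim, n_i, n_j, rank, n_classes):
        super().__init__()
        # Shared, domain-invariant representation layer.
        self.encoder = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())
        # Low-rank factors: the head for domain (i, j) mixes `rank` base predictors.
        self.bases = nn.Parameter(torch.randn(rank, rep_dim, n_classes) * 0.01)
        self.coeff_i = nn.Parameter(torch.randn(n_i, rank) * 0.01)
        self.coeff_j = nn.Parameter(torch.randn(n_j, rank) * 0.01)

    def forward(self, x, i, j):
        z = self.encoder(x)                      # (batch, rep_dim)
        # Domain-specific weight: sum_r coeff_i[i, r] * coeff_j[j, r] * bases[r].
        mix = self.coeff_i[i] * self.coeff_j[j]  # (rank,)
        w = torch.einsum("r,rdc->dc", mix, self.bases)
        return z @ w                             # (batch, n_classes)

model = MultiwayModel(in_dim=16, rep_dim=32, n_i=4, n_j=5, rank=3, n_classes=2)
logits = model(torch.randn(8, 16), i=1, j=3)
print(logits.shape)  # torch.Size([8, 2])
```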
...