Publications
R2N2: Residual Recurrent Neural Networks for Multivariate Time Series Forecasting
TLDR: We propose a hybrid model called R2N2 (Residual RNN), which first models the time series with a simple linear model (like VAR) and then models its residual errors using RNNs.
  • Citations: 14
  • Influence: 4
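The R2N2 summary above describes a two-stage hybrid: fit a simple linear model (such as a VAR) first, then let an RNN model the linear model's residual errors. Below is a minimal sketch of that decomposition on synthetic data, not the authors' implementation; the lag order, window length, GRU size, and training settings are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# Synthetic multivariate series: T timesteps, d variables (illustrative only).
T, d, p = 500, 3, 2                       # series length, dimension, VAR lag order
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(T, d)), axis=0).astype(np.float32)

# --- Stage 1: ordinary least-squares VAR(p) --------------------------------
# Design matrix Z[t] = [x_{t-1}, ..., x_{t-p}]; fit the linear map by OLS.
Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])   # (T-p, d*p)
Y = X[p:]                                                   # (T-p, d)
A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
residuals = Y - Z @ A                     # what the RNN is asked to model

# --- Stage 2: RNN on the VAR residuals --------------------------------------
class ResidualRNN(nn.Module):
    """Small GRU that predicts the next residual from a window of past residuals."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, r):                 # r: (batch, window, dim)
        out, _ = self.rnn(r)
        return self.head(out[:, -1])      # residual prediction for the next step

window = 10
R = torch.tensor(residuals)
inputs = torch.stack([R[i:i + window] for i in range(len(R) - window)])
targets = R[window:]

model = ResidualRNN(d)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                      # short full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    opt.step()

# Hybrid forecast for the next step: linear VAR prediction plus the RNN's
# correction of its residual.
next_lags = np.hstack([X[T - 1 - k] for k in range(p)])
var_pred = next_lags @ A
with torch.no_grad():
    rnn_corr = model(R[-window:].unsqueeze(0)).squeeze(0).numpy()
print("hybrid forecast:", var_pred + rnn_corr)
```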
Fighting Offensive Language on Social Media with Unsupervised Text Style Transfer
TLDR: We propose a new method for training encoder-decoders using non-parallel data that combines a collaborative classifier, attention, and a cycle-consistency loss, and that outperforms a state-of-the-art text style transfer system on two of three quantitative metrics.
  • Citations: 51
  • Influence: 3
Estimating Structured Vector Autoregressive Models
TLDR: We consider estimating structured VAR (vector autoregressive) models, where the structure can be captured by any suitable norm, and establish bounds on the non-asymptotic estimation error.
  • Citations: 23
  • Influence: 3
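The structured-VAR entries on this page concern estimating a VAR whose transition structure is captured by a norm, with sparsity as the canonical case. As a hedged illustration of that setup only, the sketch below fits a sparse VAR(1) by running one L1-penalized (lasso) regression per coordinate with scikit-learn; the data generator, dimensions, and penalty value are arbitrary choices, and nothing here reproduces the papers' estimators or non-asymptotic error bounds.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
d, T = 10, 300

# Ground-truth sparse VAR(1) transition matrix (illustrative).
A_true = np.zeros((d, d))
support = rng.choice(d * d, size=15, replace=False)
A_true.flat[support] = rng.uniform(-0.3, 0.3, size=15)

# Simulate x_{t+1} = A_true x_t + noise.
X = np.zeros((T, d))
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + rng.normal(size=d)

# Sparsity-structured estimation: one L1-penalized regression per coordinate,
# regressing each coordinate's next value on the full previous state vector.
A_hat = np.zeros((d, d))
for j in range(d):
    lasso = Lasso(alpha=0.01, fit_intercept=False)
    lasso.fit(X[:-1], X[1:, j])
    A_hat[j] = lasso.coef_

rel_err = np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true)
print(f"relative estimation error of the sparse fit: {rel_err:.3f}")
```

Swapping the L1 penalty for another norm (group lasso, nuclear norm, and so on) changes the structure being imposed without changing the overall recipe, which is the sense in which "any suitable norm" appears in the TLDR.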
Deep learning algorithm for data-driven simulation of noisy dynamical system
TLDR: We present a deep learning model, DE-LSTM, for the simulation of a stochastic process with underlying nonlinear dynamics.
  • Citations: 37
  • Influence: 2
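The DE-LSTM summary is a single sentence, so the sketch below is a loose reading rather than the paper's method: it assumes a distribution-predicting LSTM in which the next state of a noisy nonlinear series is discretized into bins, the network outputs a categorical distribution over those bins, and sampling from that distribution rolls out stochastic simulations. The dynamics, bin count, window length, and architecture are invented for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

# Noisy nonlinear scalar dynamics (invented): x_{t+1} = 0.9*sin(2*x_t) + noise.
rng = np.random.default_rng(2)
T = 1000
x = np.zeros(T, dtype=np.float32)
for t in range(T - 1):
    x[t + 1] = 0.9 * np.sin(2.0 * x[t]) + 0.1 * rng.normal()

# Discretize the state range into bins; the network predicts a distribution
# over the bin of the next value rather than a single point estimate.
n_bins = 50
edges = np.linspace(x.min(), x.max(), n_bins + 1)
labels = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)

class DistLSTM(nn.Module):
    def __init__(self, hidden=64, n_bins=50):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_bins)

    def forward(self, seq):               # seq: (batch, window, 1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])      # logits over next-state bins

window = 20
seqs = torch.tensor(np.stack([x[i:i + window] for i in range(T - window)]))[..., None]
next_bins = torch.tensor(labels[window:], dtype=torch.long)

model = DistLSTM(n_bins=n_bins)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(60):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(seqs), next_bins)
    loss.backward()
    opt.step()

# Stochastic simulation: repeatedly sample the next bin, map it back to a value.
centers = torch.tensor((edges[:-1] + edges[1:]) / 2, dtype=torch.float32)
traj = [float(v) for v in x[:window]]
with torch.no_grad():
    for _ in range(100):
        inp = torch.tensor(traj[-window:], dtype=torch.float32).view(1, window, 1)
        probs = torch.softmax(model(inp), dim=-1)
        b = torch.multinomial(probs, 1).item()
        traj.append(float(centers[b]))
print("simulated continuation (first 5 steps):", traj[window:window + 5])
```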
Estimating Structured Vector Autoregressive Model
While considerable advances have been made in estimating high-dimensional structured models from independent data using Lasso-type models, limited progress has been made for settings when the samples are dependent.
  • Citations: 16
  • Influence: 2
Detection of Precursors to Aviation Safety Incidents Due to Human Factors
TLDR: We propose a framework, based on Hidden Semi-Markov Models (HSMMs), for detecting precursors to aviation safety incidents caused by human factors.
  • Citations: 15
  • Influence: 2
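The precursor-detection summary names Hidden Semi-Markov Models. The sketch below is a simplified stand-in, not the paper's framework: it uses a plain Gaussian HMM from the hmmlearn package (which lacks an HSMM's explicit state-duration modeling), fits it to nominal data, and flags sliding windows of a new sequence with unusually low log-likelihood as candidate precursors. The toy data, window length, state count, and threshold are placeholders.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM     # simplified stand-in for an HSMM

rng = np.random.default_rng(4)

def nominal_sequence(T=300):
    """Toy 2-feature sensor trace that alternates between two regimes."""
    regime = (np.arange(T) // 50) % 2     # switch regime every 50 steps
    return np.column_stack([regime + 0.1 * rng.normal(size=T),
                            0.5 * regime + 0.1 * rng.normal(size=T)])

# Fit on concatenated nominal sequences.
train = [nominal_sequence() for _ in range(5)]
X = np.vstack(train)
lengths = [len(s) for s in train]
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(X, lengths)

# Score sliding windows of a test sequence; abnormally low per-step
# log-likelihood marks a candidate precursor window.
test = nominal_sequence()
test[200:220] += 3.0                      # inject an anomaly
window = 20
scores = np.array([hmm.score(test[i:i + window]) / window
                   for i in range(len(test) - window)])
threshold = scores.mean() - 3 * scores.std()
precursors = np.where(scores < threshold)[0]
print("candidate precursor window starts:", precursors[:10])
```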
Estimating Information Flow in Deep Neural Networks
TLDR: We study the estimation of the mutual information I(X; T_ℓ) between the input X to a deep neural network (DNN) and the output vector T_ℓ of its ℓ-th hidden layer (an “internal representation”).
  • Citations: 46
  • Influence: 1
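The two "Estimating Information Flow" entries concern the mutual information I(X; T_ℓ) between a network's input and a hidden representation. To make that quantity concrete, the snippet below applies the naive binning (plug-in) estimator to a tiny, randomly initialized hidden layer; this is not the estimator developed in the papers, and the input distribution, bin count, and layer size are arbitrary.

```python
import numpy as np
import torch
import torch.nn as nn

# Make I(X; T) concrete with the naive binning (plug-in) estimator on a tiny,
# randomly initialized hidden layer. This is NOT the papers' estimator; it
# only illustrates the quantity being estimated.
rng = np.random.default_rng(3)
n = 4096
X = rng.integers(0, 2, size=(n, 8)).astype(np.float32)      # 8-bit binary inputs

hidden = nn.Sequential(nn.Linear(8, 16), nn.Tanh())          # one hidden layer T
with torch.no_grad():
    T = hidden(torch.tensor(X)).numpy()

def plugin_entropy_bits(rows):
    """Plug-in entropy (bits) of the empirical distribution over discrete rows."""
    _, counts = np.unique(rows, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Bin the continuous activations, then use I(X;T) = H(X) + H(T) - H(X,T).
T_binned = np.digitize(T, np.linspace(-1.0, 1.0, 30)).astype(np.float32)
H_X = plugin_entropy_bits(X)
H_T = plugin_entropy_bits(T_binned)
H_XT = plugin_entropy_bits(np.hstack([X, T_binned]))
print(f"binned MI estimate I(X;T) ~= {H_X + H_T - H_XT:.2f} bits")
```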
Estimating Information Flow in Neural Networks
TLDR: We study the flow of information and the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory.
  • Citations: 24
  • Influence: 1
Improved Image Captioning with Adversarial Semantic Alignment
TLDR: We propose a new conditional GAN for image captioning that enforces semantic alignment between images and captions through a co-attentive discriminator and a context-aware LSTM sequence generator.
  • Citations: 14
  • Influence: 1
Improved Neural Text Attribute Transfer with Non-parallel Data
TLDR: We propose a novel algorithm for text attribute transfer with non-parallel corpora, based on an encoder-decoder architecture with attention, augmented with a collaborative classifier and a set of content-preservation losses.
  • Citations: 17
  • Influence: 1