Publications
An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
TLDR
A systematic evaluation of generic convolutional and recurrent architectures for sequence modeling concludes that the common association between sequence modeling and recurrent networks should be reconsidered, and that convolutional networks should be regarded as a natural starting point for sequence modeling tasks.
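As a minimal sketch of the kind of layer the paper's temporal convolutional networks (TCNs) stack, the PyTorch block below implements a causal, dilated 1-D convolution with a residual connection; the class name and hyperparameters are illustrative, not the paper's exact code.

```python
# Causal, dilated 1-D convolution block: each output depends only on
# current and past inputs, so the layer is usable for sequence modeling.
import torch
import torch.nn as nn


class CausalConvBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Left-pad so the convolution never looks into the future.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        out = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return self.relu(out) + x  # residual connection, as in TCNs


x = torch.randn(8, 16, 100)            # batch of 16-channel sequences
block = CausalConvBlock(16, dilation=2)
print(block(x).shape)                  # torch.Size([8, 16, 100])
```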
REDD : A Public Data Set for Energy Disaggregation Research
TLDR
The Reference Energy Disaggregation Data Set (REDD), a freely available data set containing detailed power usage information from several homes, is presented, aimed at furthering research on energy disaggregation.
Certified Adversarial Robustness via Randomized Smoothing
TLDR
Strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification; on smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies.
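A minimal sketch of prediction under randomized smoothing: the smoothed classifier labels an input with the class the base classifier picks most often under Gaussian noise. The paper's certification procedure additionally runs a statistical test to bound the certified $\ell_2$ radius; `base_classifier`, `sigma`, and the sample counts here are illustrative.

```python
# Monte Carlo estimate of the smoothed classifier g(x) = argmax_c
# P(f(x + eps) = c), eps ~ N(0, sigma^2 I), via majority vote.
import torch


def smoothed_predict(base_classifier, x, num_classes, sigma=0.25,
                     n_samples=1000, batch=100):
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(n_samples // batch):
            # Classify a batch of Gaussian-noised copies of the input.
            noisy = x.unsqueeze(0) + sigma * torch.randn(batch, *x.shape)
            preds = base_classifier(noisy).argmax(dim=1)
            counts += torch.bincount(preds, minlength=num_classes)
    return counts.argmax().item()  # majority-vote class
```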
Provable defenses against adversarial examples via the convex outer adversarial polytope
TLDR
A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations, and it is shown that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.
Approximate Inference in Additive Factorial HMMs with Application to Energy Disaggregation
TLDR
This paper proposes an alternative inference method for additive factorial hidden Markov models, an extension to HMMs where the state factors into multiple independent chains, and the output is an additive function of all the hidden states.
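The sketch below illustrates the generative model this inference targets, not the inference method itself: several independent Markov chains (appliances, in the energy-disaggregation setting) each emit a power level, and the observation is their sum plus noise. Transition matrices and power levels are illustrative.

```python
# Sample from an additive factorial HMM: K independent 2-state chains,
# observed output is the sum of per-chain emissions plus Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
T, K = 200, 3                               # time steps, number of chains
P = np.array([[0.95, 0.05], [0.10, 0.90]])  # shared 2-state transition matrix
power = np.array([[0.0, 50.0], [0.0, 200.0], [0.0, 1200.0]])  # watts per state

states = np.zeros((K, T), dtype=int)
for k in range(K):
    for t in range(1, T):
        states[k, t] = rng.choice(2, p=P[states[k, t - 1]])

# Observation: additive function of all hidden states, plus noise.
y = power[np.arange(K)[:, None], states].sum(axis=0) + rng.normal(0, 5, T)
```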
Dynamic Weighted Majority: An Ensemble Method for Drifting Concepts
TLDR
It is concluded that *DWM* outperformed other learners that only incrementally learn concept descriptions, that maintain and use previously encountered examples, and that employ an unweighted, fixed-size ensemble of experts.
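A minimal sketch of the Dynamic Weighted Majority update for a single example, omitting the paper's update period p for brevity: erring experts have their weights decayed, low-weight experts are pruned, and a fresh expert is added whenever the weighted ensemble itself errs. `make_expert` is a hypothetical factory for any incremental learner exposing `predict`/`fit_one`; the `beta` and `theta` values are illustrative.

```python
# One DWM step: weighted vote, weight decay on error, pruning, and
# expert creation when the global prediction is wrong.
def dwm_step(experts, weights, x, y, make_expert, beta=0.5, theta=0.01):
    votes = {}
    for i, expert in enumerate(experts):
        pred = expert.predict(x)
        votes[pred] = votes.get(pred, 0.0) + weights[i]
        if pred != y:
            weights[i] *= beta                   # penalize erring experts
    global_pred = max(votes, key=votes.get)      # weighted-majority vote
    total = sum(weights)
    weights = [w / total for w in weights]       # normalize weights
    kept = [(e, w) for e, w in zip(experts, weights) if w >= theta]
    experts = [e for e, _ in kept] or [make_expert()]  # prune low weights
    weights = [w for _, w in kept] or [1.0]
    if global_pred != y:                         # ensemble erred: add expert
        experts.append(make_expert())
        weights.append(1.0)
    for expert in experts:
        expert.fit_one(x, y)                     # incremental update
    return experts, weights, global_pred
```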
Fast is better than free: Revisiting adversarial training
TLDR
The surprising discovery is made that it is possible to train empirically robust models using a much weaker and cheaper adversary, an approach previously believed to be ineffective, rendering the method no more costly than standard training in practice.
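A minimal sketch of one "fast" adversarial training step in the spirit of the paper: single-step FGSM with a random start inside the epsilon-ball instead of a multi-step PGD adversary. The model, optimizer, and the `epsilon`/`alpha` values are placeholders; inputs are assumed to lie in [0, 1].

```python
# FGSM adversarial training with random initialization: one gradient-sign
# step on the perturbation, then a standard update on the perturbed batch.
import torch
import torch.nn.functional as F


def fast_adv_train_step(model, optimizer, x, y, epsilon=8 / 255, alpha=10 / 255):
    # Random start inside the L-infinity ball (key to making single-step
    # FGSM training work).
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
    # One gradient-sign step, projected back onto the epsilon-ball.
    delta = (delta + alpha * delta.grad.sign()).clamp(-epsilon, epsilon).detach()
    optimizer.zero_grad()
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```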
OptNet: Differentiable Optimization as a Layer in Neural Networks
TLDR
OptNet is presented, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks, and shows how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers.
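The idea can be illustrated for the special case of an equality-constrained QP, $\min_z \frac{1}{2} z^\top Q z + p^\top z$ s.t. $Az = b$, whose KKT conditions reduce to a single linear system; autograd through the solve then yields exact gradients. The general OptNet layer with inequality constraints needs the implicit differentiation the paper describes; shapes here are illustrative.

```python
# Equality-constrained QP as a differentiable layer: solve the KKT system
# [Q A'; A 0][z; nu] = [-p; b] and backprop through the linear solve.
import torch


def eq_qp_layer(Q, p, A, b):
    n, m = Q.shape[0], A.shape[0]
    kkt = torch.cat([
        torch.cat([Q, A.t()], dim=1),
        torch.cat([A, torch.zeros(m, m)], dim=1),
    ], dim=0)
    rhs = torch.cat([-p, b])
    sol = torch.linalg.solve(kkt, rhs)
    return sol[:n]  # optimal z; gradients flow through the solve


Q = torch.eye(3, requires_grad=True)
p = torch.zeros(3, requires_grad=True)
A = torch.tensor([[1.0, 1.0, 1.0]])
b = torch.tensor([1.0])
z = eq_qp_layer(Q, p, A, b)   # z = [1/3, 1/3, 1/3]
z.sum().backward()            # backprop through the optimization layer
```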
Scaling provable adversarial defenses
TLDR
This paper presents a technique for extending provably robust training procedures to much more general networks, with skip connections and general nonlinearities, and shows how to further improve robust error through cascade models.
Multimodal Transformer for Unaligned Multimodal Language Sequences
TLDR
Comprehensive experiments on both aligned and unaligned multimodal time series show that the MulT model outperforms state-of-the-art methods by a large margin, and empirical analysis suggests that correlated crossmodal signals can be captured by the proposed crossmodal attention mechanism in MulT.
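A minimal sketch of crossmodal attention in the spirit of MulT: one modality supplies the queries while another supplies the keys and values, so the target stream is reinforced with features from the source stream without any word-level alignment. Dimensions are illustrative; the real model stacks many such blocks per modality pair.

```python
# Crossmodal attention: text queries attend over unaligned audio features.
import torch
import torch.nn as nn

dim = 64
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

text = torch.randn(2, 50, dim)    # queries: 50 text steps
audio = torch.randn(2, 120, dim)  # keys/values: 120 unaligned audio steps

# No alignment between the two streams is required.
fused, _ = attn(query=text, key=audio, value=audio)
print(fused.shape)  # torch.Size([2, 50, 64])
```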