• Publications
Verifiable Reinforcement Learning via Policy Extraction
VIPER, an algorithm that combines ideas from model compression and imitation learning to learn decision tree policies guided by a DNN policy and its Q-function, is proposed, and it is shown to substantially outperform two baselines.
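The core loop can be illustrated with a minimal sketch: repeatedly sample states, label each with the oracle (DNN) policy's action, weight it by the Q-gap between the best and second-best action, aggregate, and refit a tree. Everything below is a toy stand-in (a 1-D state space and a depth-1 "tree"), not the paper's implementation:

```python
import random

def oracle_policy(s):                  # stand-in for the DNN policy
    return 1 if s > 0.5 else 0

def oracle_q(s, a):                    # stand-in for the DNN's Q-function
    return 1.0 if a == oracle_policy(s) else 0.0

def fit_stump(data):
    """Depth-1 'decision tree': pick the threshold and orientation
    minimizing the weighted misclassification of the oracle's actions."""
    best_err, best_fn = float("inf"), None
    for t in [i / 20 for i in range(21)]:
        for lo, hi in [(0, 1), (1, 0)]:
            err = sum(w for s, a, w in data if (lo if s <= t else hi) != a)
            if err < best_err:
                best_err = err
                best_fn = lambda s, t=t, lo=lo, hi=hi: lo if s <= t else hi
    return best_fn

def viper_sketch(n_iters=3, n_rollout=50, seed=0):
    rng = random.Random(seed)
    data, tree = [], None
    for _ in range(n_iters):
        for _ in range(n_rollout):
            s = rng.random()           # uniform states stand in for rollouts
            a = oracle_policy(s)       # label the state with the oracle
            gap = oracle_q(s, a) - oracle_q(s, 1 - a)
            data.append((s, a, gap))   # Q-gap weights costly states more
        tree = fit_stump(data)         # dataset aggregation, then refit
    return tree

tree = viper_sketch()
```

The Q-gap weighting is what distinguishes this from plain behavioral cloning: states where a wrong action is expensive dominate the training loss.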
Measuring Neural Net Robustness with Constraints
This work proposes metrics for measuring the robustness of a neural net and devises a novel algorithm for approximating these metrics based on an encoding of robustness as a linear program; the approach generates more informative estimates of robustness metrics than estimates based on existing algorithms.
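The underlying quantity is pointwise robustness: the smallest perturbation along which the classifier's prediction flips. The paper encodes this as a linear program; the sketch below instead approximates it for a toy linear classifier by binary search along a fixed direction (`classify`, `pointwise_robustness`, and all parameters are illustrative names, not the paper's API):

```python
def classify(x, w=(1.0, -1.0), b=0.0):
    """Toy linear classifier on 2-D inputs."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def pointwise_robustness(x, direction, eps_max=10.0, tol=1e-6):
    """Binary-search the smallest eps with classify(x + eps*d) != classify(x)."""
    base = classify(x)
    lo, hi = 0.0, eps_max
    if classify((x[0] + hi * direction[0], x[1] + hi * direction[1])) == base:
        return None                     # no label flip within eps_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        pt = (x[0] + mid * direction[0], x[1] + mid * direction[1])
        if classify(pt) == base:
            lo = mid                    # still the original label: go further
        else:
            hi = mid                    # flipped: tighten from above
    return hi

eps = pointwise_robustness((2.0, 0.0), (-1.0, 0.0))
```

For piecewise-linear networks, the LP encoding in the paper makes this search exact within a linear region rather than a single direction.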
An Efficient Homomorphic Encryption Protocol for Multi-User Systems
It is proven that the security of the encryption scheme is equivalent to the large-integer factorization problem, and that it can withstand an attack with polynomially many chosen plaintexts in the security parameter.
Interpreting Blackbox Models via Model Extraction
A novel algorithm is devised for extracting decision tree explanations that actively samples new training points to avoid overfitting, and several insights provided by the interpretations are described, including a causal issue validated by a physician.
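The active-sampling idea can be sketched in a few lines: rather than fitting the surrogate tree only to the original training set, draw fresh points inside the input region a tree node covers and label them by querying the blackbox model, so each split is supported by data everywhere in its region. The one-dimensional `blackbox` and the helper name below are illustrative assumptions:

```python
import random

def blackbox(x):                       # stand-in for the model being explained
    return 1 if x > 0.3 else 0

def active_labels(low, high, n=20, seed=0):
    """Sample fresh points inside a node's input region [low, high]
    and label them with the blackbox model's own predictions."""
    rng = random.Random(seed)
    pts = [low + (high - low) * rng.random() for _ in range(n)]
    return [(x, blackbox(x)) for x in pts]

upper = active_labels(0.5, 1.0)        # region entirely above the threshold
lower = active_labels(0.0, 0.2)        # region entirely below it
```

Because the labels come from the model rather than the sparse training data, the extracted tree mimics the model's behavior instead of overfitting the sample.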
Program synthesis using conflict-driven learning
The notion of equivalence modulo conflict is introduced and it is shown how this idea can be used to learn useful lemmas that allow the synthesizer to prune large parts of the search space.
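The pruning mechanism can be illustrated with a toy enumerator: when analyzing a failed candidate shows that one of its components cannot help on the examples, a learned "lemma" blocks every other candidate that uses an equivalent component. The operator set, the conflict test, and `synthesize` are all illustrative simplifications, not the paper's tool:

```python
from itertools import product

OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "dbl": lambda x: 2 * x}

def run(combo, x):
    """Apply a sequence of named operators to the input."""
    for op in combo:
        x = OPS[op](x)
    return x

def synthesize(examples, depth=2):
    blocked = set()                        # learned conflict lemmas
    for combo in product(OPS, repeat=depth):
        if blocked & set(combo):
            continue                       # pruned without evaluation
        if all(run(combo, i) == o for i, o in examples):
            return combo
        # toy "conflict analysis": an op that, on its own, moves every
        # example further from its target is blocked everywhere
        for op in combo:
            if all(abs(OPS[op](i) - o) > abs(i - o) for i, o in examples):
                blocked.add(op)
    return None

result = synthesize([(1, 4), (2, 6)])      # target: f(x) = 2 * (x + 1)
```

The lemma here is crude, but it shows the shape of the idea: one failure eliminates a whole family of candidates, not just the one evaluated.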
Synthesizing program input grammars
An algorithm is presented for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program, along with an implementation that leverages the synthesized grammar to fuzz-test programs with structured inputs.
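A small sketch shows how blackbox access drives the generalization step: propose that a substring of a seed input can repeat (a candidate `*` production) and keep the proposal only if the program still accepts the mutated inputs. The `accepts` oracle and `try_star` helper are illustrative stand-ins:

```python
def accepts(s):
    """Stand-in blackbox program: accepts any repetition of 'ab'."""
    return len(s) % 2 == 0 and all(s[i:i + 2] == "ab" for i in range(0, len(s), 2))

def try_star(example, i, j, oracle, checks=(0, 2, 3)):
    """Does example[i:j] look repeatable? Replace it with k copies of
    itself for several k and ask the blackbox oracle each time."""
    unit = example[i:j]
    return all(oracle(example[:i] + unit * k + example[j:]) for k in checks)

good = try_star("abab", 0, 2, accepts)   # 'ab' repeats: candidate accepted
bad = try_star("abab", 0, 1, accepts)    # 'a' alone does not: rejected
```

Accepted proposals become repetition rules in the synthesized grammar; rejected ones keep the grammar from overgeneralizing.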
Interpreting Predictive Models for Human-in-the-Loop Analytics
Machine learning is increasingly used to inform consequential decisions, yet these predictive models have been found to exhibit unexpected defects when trained on real-world observational data.
Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation
This work proposes an algorithm for calibrating predictions that accounts for the possibility of covariate shift, given labeled examples from the training distribution and unlabeled examples from the real-world distribution.
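One way to make the idea concrete is importance-weighted histogram binning: calibrate on labeled training data, but weight each example by a density ratio between the target and training distributions so the calibration map reflects the shifted distribution. In this sketch the weights are assumed given (the paper estimates the ratio from the unlabeled target data), and the function name is an illustrative assumption:

```python
def importance_weighted_binning(scores, labels, weights, n_bins=5):
    """Map each score bin to its importance-weighted empirical accuracy;
    bins with no mass calibrate to None."""
    sums = [0.0] * n_bins
    hits = [0.0] * n_bins
    for s, y, w in zip(scores, labels, weights):
        b = min(int(s * n_bins), n_bins - 1)   # which bin the score falls in
        sums[b] += w                            # weighted count in the bin
        hits[b] += w * y                        # weighted positives in the bin
    return [hits[b] / sums[b] if sums[b] > 0 else None for b in range(n_bins)]

calib = importance_weighted_binning(
    scores=[0.1, 0.1, 0.9],
    labels=[0, 1, 1],
    weights=[3.0, 1.0, 2.0],   # density-ratio weights, assumed precomputed
)
```

With uniform weights this reduces to ordinary histogram binning; the weights are what shift the calibrated probabilities toward the target distribution.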
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
This work proposes a novel theoretical framework for understanding and generating misleading explanations, and carries out a user study with domain experts to demonstrate how these explanations can be used to mislead users.
Automated Synthesis of Semantic Malware Signatures using Maximum Satisfiability
This paper proposes a technique for automatically learning semantic malware signatures for Android from very few samples of a malware family, and implements it in a tool called ASTROID, which has a number of advantages over state-of-the-art malware detection techniques.