An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study transferability using small-scale datasets. In this work, we are the first to conduct an …
This paper develops the first class of algorithms that enable unbiased estimation of steady-state expectations for multidimensional reflected Brownian motion. In order to explain our ideas, we first consider the case of compound Poisson (possibly Markov modulated) input. In this case, we analyze the complexity of our procedure as the dimension of the …
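To fix ideas, the target of such an algorithm can be written as follows (notation ours, not taken from the abstract):

    \mathbb{E}[Z] = \mathbb{E}\left[ f\big(R(\infty)\big) \right]

where R(∞) denotes the stationary multidimensional reflected Brownian motion, f is a given performance functional, and Z is the simulation output, required to be exactly unbiased for the steady-state expectation and computable in finite expected time.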
Automatic translation from natural language descriptions into programs is a long-standing and challenging problem. In this work, we consider a simple yet important sub-problem: translation from textual descriptions to If-Then programs. We devise a novel neural network architecture for this task, which we train end-to-end. Specifically, we introduce Latent …
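For readers unfamiliar with the task, here is a hypothetical description-to-program pair in the IFTTT-style trigger-action format typically used for this problem (the specific recipe is illustrative, not drawn from the paper):

    Description:      "Save photos you are tagged in on Facebook to Dropbox"
    Trigger channel:  Facebook
    Trigger function: You are tagged in a photo
    Action channel:   Dropbox
    Action function:  Add file from URL

The model's job is to predict the four program slots from the free-form description.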
Climate change has increased the need for information on the amount of forest biomass. The biomass and carbon storage of larch (Larix spp.) across large geographic regions of China could not be accurately estimated from current biomass equations, because those equations were usually based on a few sample trees at local sites and were generally incompatible with volume estimation, …
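For context, the biomass equations in question commonly take the standard allometric form (a textbook form, not a result of this paper):

    \ln B = a + b \ln D, \quad \text{i.e.,} \quad B = e^{a} D^{b}

where B is tree biomass (kg), D is diameter at breast height (cm), and a, b are coefficients fitted from sample trees; fitting them on a few local trees is precisely what limits their accuracy over large regions.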
Mesp family proteins comprise two members named mesodermal posterior 1 (Mesp1) and mesodermal posterior 2 (Mesp2). Both Mesp1 and Mesp2 are transcription factors, and they share an almost identical basic helix-loop-helix motif. They have been shown to play critical regulatory roles in mammalian heart and somite development. Mesp1 sits at the core of the …
Traditional classification algorithms assume that training and test data come from similar distributions. This assumption is violated in adversarial settings, where malicious actors modify instances to evade detection. A number of custom methods have been developed for both adversarial evasion attacks and robust learning. We propose the first systematic and …
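Concretely, an evasion attack is often formalized as the optimization below (a standard formulation of the setting, stated here as an assumption rather than a quotation from the paper):

    \min_{x'} \; c(x, x') \quad \text{subject to} \quad f(x') \neq f(x)

where f is the learned classifier, x is a malicious instance, and c measures the attacker's cost of modifying x into x'.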
Ongoing research has proposed several methods to defend neural networks against adversarial examples, many of which have been shown to be ineffective. We ask whether a strong defense can be created by combining multiple (possibly weak) defenses. To answer this question, we study three defenses that follow this approach. Two of these are recently …
Due to deep cascades of nonlinear units, deep neural networks (DNNs) can automatically learn non-local generalization priors from data and have achieved high performance in various applications. However, such properties have also opened a door for adversaries to generate the so-called adversarial examples to fool DNNs. Specifically, adversaries can inject …
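As one concrete instance of such an injected perturbation, here is a minimal sketch of the fast gradient sign method (FGSM) in Python with PyTorch; FGSM is a standard attack used for illustration, not necessarily the method studied in this paper:

    import torch

    def fgsm_attack(model, x, y, loss_fn, eps=0.03):
        # Perturb input x in the direction of the sign of the loss
        # gradient; eps controls the perturbation magnitude.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in valid range

The perturbation is small enough to be imperceptible to humans yet often suffices to change the model's prediction.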