Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation

@article{Zhang2021ProgressiveScaleBB,
  title={Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation},
  author={Jiawei Zhang and Linyi Li and Huichen Li and Xiaolu Zhang and Shuang Yang and B. Li},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.06056}
}
Boundary-based blackbox attacks are recognized as practical and effective, since the attacker only needs access to the final model prediction. However, their query cost is in general high, especially for high-dimensional image data. In this paper, we show that such efficiency highly depends on the scale at which the attack is applied, and that attacking at the optimal scale significantly improves the efficiency. In particular, we propose a theoretical framework to analyze and show…
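A minimal NumPy sketch of the core idea, boundary gradient estimation with queries spent in a projected low-dimensional space, assuming a hypothetical hard-label oracle `is_adversarial`, a single-channel image whose sides are divisible by `scale`, and nearest-neighbor upsampling (the paper's method uses richer projections and progressively refines the scale, which this sketch omits):

```python
import numpy as np

def estimate_boundary_gradient(x_boundary, is_adversarial, scale=4,
                               n_samples=100, delta=0.01, rng=None):
    """Monte Carlo gradient-direction estimate at a decision-boundary point.

    Perturbations are sampled at (H/scale, W/scale) resolution and
    upsampled to full size, so every query is spent inside a
    lower-dimensional projected subspace.
    """
    rng = rng or np.random.default_rng(0)
    h, w = x_boundary.shape
    grad = np.zeros_like(x_boundary)
    for _ in range(n_samples):
        # Sample a coarse Gaussian direction and project it up to full scale.
        u_small = rng.standard_normal((h // scale, w // scale))
        u = np.kron(u_small, np.ones((scale, scale)))  # nearest-neighbor upsample
        u /= np.linalg.norm(u)
        # At a boundary point, the hard label after a small step gives the
        # sign of the directional derivative of the decision function.
        sign = 1.0 if is_adversarial(x_boundary + delta * u) else -1.0
        grad += sign * u
    return grad / (np.linalg.norm(grad) + 1e-12)
```

The `scale` parameter controls the trade-off the paper analyzes: a coarser scale lowers sampling variance per query but limits how faithfully the projected subspace can represent the true gradient.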

Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models

An evolution-based algorithm for sparse attacks in black-box settings; with only a limited query budget, the attack is competitive against state-of-the-art gradient-based white-box attacks on standard computer vision tasks such as ImageNet.
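As a rough illustration of the idea (not the paper's exact operators), a decision-based sparse attack can be phrased as a (1+1)-evolutionary search over a binary pixel mask; `is_adversarial` and the already-adversarial starting image `x_adv` are hypothetical stand-ins:

```python
import numpy as np

def sparse_evo_attack(x, x_adv, is_adversarial, n_iters=2000, rng=None):
    """(1+1)-evolutionary sketch of a decision-based sparse attack.

    A boolean mask selects which pixels of the source `x` are replaced by
    pixels of an already-adversarial image `x_adv`; mutations try to turn
    perturbed pixels back off, and a child is kept only if the blended
    image stays adversarial, so the perturbation count never increases.
    """
    rng = rng or np.random.default_rng(0)
    mask = np.ones(x.shape, dtype=bool)               # start fully blended
    for _ in range(n_iters):
        on = np.flatnonzero(mask)
        if on.size == 0:
            break
        child = mask.copy()
        flip = rng.choice(on, size=min(4, on.size), replace=False)
        child.flat[flip] = False                      # restore source pixels
        if is_adversarial(np.where(child, x_adv, x)):
            mask = child                              # sparser, still adversarial
    return np.where(mask, x_adv, x), int(mask.sum())
```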

TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness

A practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness, and an effective Transferability Reduced Smooth (TRS) ensemble training strategy to train a robust ensemble with low transferability by enforcing both gradient orthogonality and model smoothness among base models.
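A hedged PyTorch sketch of what such a penalty could look like: pairwise cosine similarity between base models' input gradients (pushing toward orthogonality) plus a gradient-norm term as a crude smoothness proxy. This is a plausible reconstruction from the summary, not the paper's exact loss; `models` is any iterable of differentiable classifiers:

```python
import torch
import torch.nn.functional as F

def trs_regularizer(models, x, y, l_smooth=0.1):
    """Penalty pushing base models toward orthogonal input gradients
    (low pairwise cosine similarity) and small gradient norms (a crude
    smoothness proxy), added to the usual ensemble training loss."""
    x = x.clone().requires_grad_(True)
    grads = []
    for m in models:
        loss = F.cross_entropy(m(x), y)
        g, = torch.autograd.grad(loss, x, create_graph=True)
        grads.append(g.flatten(1))                    # (batch, dim)
    sim = x.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            sim = sim + F.cosine_similarity(grads[i], grads[j], dim=1).abs().mean()
    smooth = sum(g.norm(dim=1).mean() for g in grads)
    return sim + l_smooth * smooth
```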

Double Sampling Randomized Smoothing

Theoretically, under mild assumptions, it is proved that DSRS can certify a Θ(√d) robust radius under the ℓ2 norm, where d is the input dimension, implying that DSRS may be able to break the curse of dimensionality of randomized smoothing.
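For context, the single-distribution baseline that DSRS tightens is the standard ℓ2 certificate of randomized smoothing (Cohen et al., 2019); for fixed probability bounds this radius scales with σ but not with d, which is the dimension dependence DSRS's two-distribution certificate improves:

```latex
% Baseline single-Gaussian certificate (Cohen et al., 2019):
%   \underline{p_A}: lower confidence bound on the top-class probability,
%   \overline{p_B}:  upper confidence bound on the runner-up probability,
%   both measured under isotropic Gaussian noise N(0, \sigma^2 I).
R \;=\; \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A}) - \Phi^{-1}(\overline{p_B})\right)
```

DSRS combines probability bounds measured under two different smoothing distributions to tighten this Neyman-Pearson-style certificate.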

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

A UAP-based fingerprinting method for DNN models is proposed, along with an encoder trained via contrastive learning that takes fingerprints as input and outputs a similarity score; the method generalizes well across different model architectures and is robust against post-modifications of stolen models.

TPC: Transformation-Specific Smoothing for Point Cloud Models

A transformation-specific smoothing framework, TPC, is proposed, which provides tight and scalable robustness guarantees for point cloud models against semantic transformation attacks and outperforms the state of the art on several common 3D transformations.

Improving Certified Robustness via Statistical Learning with Logical Reasoning

It is proved that the computational complexity of certifying the robustness of MLN is #P-hard, and it is shown that certified robustness with knowledge-based logical reasoning indeed outperforms that of purely data-driven approaches.

G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators

This work proposes a novel privacy-preserving generative model based on the PATE framework (G-PATE), aiming to train a scalable differentially private data generator that preserves high utility of the generated data.

References

Showing 1–10 of 58 references

QEBA: Query-Efficient Boundary-Based Blackbox Attack

This paper proposes a Query-Efficient Boundary-based blackbox Attack (QEBA) based only on the model's final prediction labels; it theoretically shows why previous boundary-based attacks that estimate gradients over the whole gradient space are inefficient in terms of query numbers, and provides an optimality analysis for dimension-reduction-based gradient estimation.

Nonlinear Gradient Estimation for Query Efficient Blackbox Attack

A novel query-efficient Nonlinear Gradient Projection-based Boundary Blackbox Attack (NonLinear-BA) is proposed, which applies deep generative models such as AEs, VAEs, and GANs as nonlinear projections to perform blackbox attacks, thereby demonstrating the power of projection-based gradient estimators empirically.

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate or the visual quality of the resulting adversarial examples.
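A minimal sketch of the underlying estimator, assuming hypothetical `decode` (an autoencoder decoder mapping a low-dimensional latent to an image) and `loss` (a scalar attack objective computed from black-box queries); AutoZOOM's adaptive averaging is reduced here to a fixed number of directions:

```python
import numpy as np

def latent_zo_gradient(z, loss, decode, beta=0.01, n_dirs=20, rng=None):
    """Averaged random-vector zeroth-order gradient estimate, taken in the
    low-dimensional latent space of a decoder rather than in pixel space."""
    rng = rng or np.random.default_rng(0)
    d = z.size
    f0 = loss(decode(z))                  # one baseline black-box query
    grad = np.zeros_like(z)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        # Forward-difference estimate along u; the factor d corrects the
        # expectation of the estimator over random unit directions.
        grad += d * (loss(decode(z + beta * u)) - f0) / beta * u
    return grad / n_dirs
```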

Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms

Novel gradient-estimation black-box attacks are proposed for adversaries with query access to the target model's class probabilities; they do not rely on transferability and decouple the number of queries required per adversarial sample from the dimensionality of the input.

Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks

Experimental results show that the method can achieve up to 2x and 4x reductions in the required mean and median numbers of queries, with much lower failure rates, even when the reference models are trained on a small, inadequate dataset disjoint from the one used to train the victim model.

Improving Black-box Adversarial Attacks with a Transfer-based Prior

A prior-guided random gradient-free (P-RGF) method to improve black-box adversarial attacks, which takes advantage of a transfer-based prior and the query information simultaneously; the prior is appropriately integrated into the algorithm via an optimal coefficient derived from a theoretical analysis.
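A simplified sketch of the biased sampling at the heart of P-RGF, assuming `loss` is a scalar objective obtained from black-box queries and `prior_grad` is the gradient of a surrogate model; in the paper the mixing coefficient λ is set by the derived optimal formula rather than fixed by hand:

```python
import numpy as np

def prgf_gradient(x, loss, prior_grad, lam=0.5, beta=0.01, n_dirs=10, rng=None):
    """Prior-guided random gradient-free estimate: every query direction
    mixes the normalized transfer prior with a random direction orthogonal
    to it, weighted by lam in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    v = prior_grad / np.linalg.norm(prior_grad)
    f0 = loss(x)
    grad = np.zeros_like(x)
    for _ in range(n_dirs):
        xi = rng.standard_normal(x.shape)
        xi -= (xi * v).sum() * v          # remove the component along the prior
        xi /= np.linalg.norm(xi)
        u = np.sqrt(lam) * v + np.sqrt(1.0 - lam) * xi  # unit vector: v is orthogonal to xi
        grad += (loss(x + beta * u) - f0) / beta * u
    return grad / n_dirs
```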

RayS: A Ray Searching Method for Hard-label Adversarial Attack

This paper presents the Ray Searching attack (RayS), which greatly improves hard-label attack effectiveness as well as efficiency by reformulating the continuous problem of finding the closest decision boundary as a discrete problem that does not require any zeroth-order gradient estimation.
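The continuous subproblem that remains in RayS is a one-dimensional boundary search along a candidate ray, which needs only hard-label queries; a minimal sketch, with `is_adversarial` a hypothetical hard-label oracle (the paper's discrete search over directions is omitted):

```python
import numpy as np

def ray_radius(x, direction, is_adversarial, r_hi=10.0, tol=1e-3):
    """Binary search for the distance to the decision boundary along one
    ray, using hard-label queries only; returns inf if the ray never
    crosses the boundary within r_hi."""
    d = direction / np.linalg.norm(direction)
    if not is_adversarial(x + r_hi * d):
        return np.inf
    r_lo = 0.0
    while r_hi - r_lo > tol:
        mid = (r_lo + r_hi) / 2.0
        if is_adversarial(x + mid * d):
            r_hi = mid                    # adversarial region starts before mid
        else:
            r_lo = mid
    return r_hi
```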

Projection & Probability-Driven Black-Box Attack

  • Jie Li, Rongrong Ji, Q. Tian
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
This paper proposes Projection & Probability-driven Black-box Attack (PPBA), a method to tackle the problem of generating adversarial examples in a black-box setting by reducing the solution space and providing better optimization.

Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition

  • Yinpeng Dong, Hang Su, Jun Zhu
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
This paper evaluates the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.

Black-box Adversarial Attacks with Limited Queries and Information

This work defines three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting; it then develops new attacks that fool classifiers under these more restrictive threat models.
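The query-limited attack in this work builds on natural evolution strategies (NES); a minimal antithetic-sampling gradient estimator, with `prob` a hypothetical oracle returning the target-class probability (or a ranking-based surrogate in the label-only setting):

```python
import numpy as np

def nes_gradient(x, prob, sigma=0.001, n_pairs=25, rng=None):
    """NES gradient estimate with antithetic Gaussian sampling: each noise
    draw is queried in both directions, reducing variance per query pair."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_pairs):
        u = rng.standard_normal(x.shape)
        grad += (prob(x + sigma * u) - prob(x - sigma * u)) * u
    return grad / (2.0 * sigma * n_pairs)
```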
...