Adversarial Attacks with Gaussian Noise and Flipping Strategy
Author: ZHANG Wu, DUAN Yexin, ZOU Junhua, PAN Zhisong, ZHOU Xingyu

Affiliation: 1. Command and Control Engineering College, Army Engineering University of PLA, Nanjing 210007, China; 2. Zhenjiang Campus, Army Military Transportation University of PLA, Zhenjiang 212001, China; 3. Communication Engineering College, Army Engineering University of PLA, Nanjing 210007, China

CLC Number: TP181

Abstract:

For adversarial attacks, black-box attacks are more challenging and more broadly applicable than white-box attacks. Recently, black-box attacks based on the transferability of adversarial examples have become the mainstream approach. However, the adversarial examples generated by most existing methods transfer poorly, yielding low black-box attack success rates. In this paper, a combination strategy based on Gaussian noise and flipping is proposed to enhance the transferability of adversarial examples and thereby achieve higher black-box attack success rates. Moreover, this strategy can be integrated into any gradient-based method to obtain stronger attacks. Extensive experiments on an ImageNet-compatible dataset show that the proposed method generates more transferable adversarial examples. In addition, our best attack fools six state-of-the-art defense models with an average success rate of 86.2%, an 8.0% improvement over the state-of-the-art gradient-based attack.
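The abstract describes the strategy only at a high level. As a rough illustration of how Gaussian noise and flipping could be folded into an iterative gradient-based attack, the sketch below applies both transformations to the current adversarial example before each gradient computation, on top of a momentum (MI-FGSM-style) update. The function name, hyperparameters, and the exact noise/flip schedule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: Gaussian noise + horizontal flip as input transformations
# inside a momentum iterative attack. All hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def gn_flip_attack(model, x, y, eps=16/255, steps=10, mu=1.0,
                   noise_sigma=0.05, flip_prob=0.5):
    alpha = eps / steps              # per-step L-infinity budget
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)          # accumulated momentum

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Transform the current adversarial example before the gradient pass:
        # add Gaussian noise and, with some probability, flip horizontally.
        x_t = x_adv + noise_sigma * torch.randn_like(x_adv)
        if torch.rand(1).item() < flip_prob:
            x_t = torch.flip(x_t, dims=[3])   # horizontal flip for (N, C, H, W)

        loss = F.cross_entropy(model(x_t), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Momentum accumulation and signed update, as in MI-FGSM.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the epsilon ball and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)

    return x_adv.detach()
```

With a pretrained ImageNet classifier in eval mode and inputs scaled to [0, 1], calling x_adv = gn_flip_attack(model, x, y) would return perturbed images constrained to an L-infinity ball of radius eps around x.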

Get Citation

ZHANG Wu, DUAN Yexin, ZOU Junhua, PAN Zhisong, ZHOU Xingyu. Adversarial Attacks with Gaussian Noise and Flipping Strategy[J]. 2021, 36(2): 248-259.

History
  • Received: August 23, 2020
  • Revised: December 14, 2020
  • Online: March 25, 2021