Authors: Ke Sun, Zhanxing Zhu, Zhouchen Lin
ArXiv: 1902.11029
Abstract URL: http://arxiv.org/abs/1902.11029v1
Deep neural networks have been widely deployed in various machine learning
tasks. However, recent work has demonstrated that they are vulnerable to
adversarial examples: small, carefully crafted perturbations that cause the
network to misclassify inputs. In this work, we propose a novel defense
mechanism called Boundary Conditional GAN to enhance the robustness of deep
neural networks against adversarial examples. Boundary Conditional GAN, a
modified version of Conditional GAN, can generate boundary samples with true
labels near the decision boundary of a pre-trained classifier. These boundary
samples are fed to the pre-trained classifier as data augmentation to make the
decision boundary more robust. We empirically show that the model improved by
our approach consistently and successfully defends against various types of
adversarial attacks. Further quantitative investigations of the improvement in
robustness, together with visualizations of the decision boundaries, are also
provided to justify the effectiveness of our strategy. This new defense
mechanism, which uses boundary samples to enhance the robustness of networks,
opens up a new way to defend against adversarial attacks consistently.
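
The sketch below illustrates the high-level procedure described in the abstract: draw class-conditional samples from a generator, keep only those that land near the pre-trained classifier's decision boundary, and fine-tune the classifier on them as data augmentation. The architectures, the margin-based boundary criterion, and the training schedule are assumptions for illustration; only the overall idea follows the text.

```python
# Hypothetical sketch of boundary-sample augmentation; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT_DIM, INPUT_DIM = 10, 64, 784  # assumed toy dimensions


class Generator(nn.Module):
    """Class-conditional generator G(z, y) -> x, standing in for the modified cGAN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, INPUT_DIM), nn.Tanh())

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))


# Pre-trained classifier (weights assumed loaded) and a generator assumed to be
# trained so that its samples concentrate near the classifier's decision boundary.
classifier = nn.Sequential(nn.Linear(INPUT_DIM, 256), nn.ReLU(),
                           nn.Linear(256, NUM_CLASSES))
generator = Generator()


def sample_boundary_batch(batch_size, margin=0.2):
    """Draw conditional samples and keep those whose top-two class probabilities
    differ by less than `margin` (an assumed proxy for being near the boundary).
    The kept samples retain the label they were conditioned on."""
    y = torch.randint(0, NUM_CLASSES, (batch_size,))
    z = torch.randn(batch_size, LATENT_DIM)
    x = generator(z, F.one_hot(y, NUM_CLASSES).float()).detach()
    probs = F.softmax(classifier(x), dim=1)
    top2 = probs.topk(2, dim=1).values
    near_boundary = (top2[:, 0] - top2[:, 1]) < margin
    return x[near_boundary], y[near_boundary]


def finetune_on_boundary_samples(steps=100, lr=1e-4):
    """Data-augmentation step: fine-tune the pre-trained classifier on boundary samples."""
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    for _ in range(steps):
        x_b, y_b = sample_boundary_batch(128)
        if len(x_b) == 0:
            continue  # no samples fell inside the margin this round
        loss = F.cross_entropy(classifier(x_b), y_b)
        opt.zero_grad()
        loss.backward()
        opt.step()


finetune_on_boundary_samples()
```

In this reading, the generator supplies labeled points where the classifier is least certain, so fine-tuning on them pushes the decision boundary away from those regions; how the paper actually trains the modified conditional GAN and selects boundary samples is not specified in the abstract.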