Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song
URL: http://arxiv.org/abs/1706.04701v1

Abstract
Ongoing research has proposed several methods to defend neural networks
against adversarial examples, many of which researchers have shown to be
ineffective. We ask whether a strong defense can be created by combining
multiple (possibly weak) defenses. To answer this question, we study three
defenses that follow this approach. Two of these are recently proposed defenses
that intentionally combine components designed to work well together. A third
defense combines three independent defenses. For all the components of these
defenses, as well as the combined defenses themselves, we show that an adaptive
adversary can successfully create adversarial examples with low distortion.
Thus, our work implies that an ensemble of weak defenses is not sufficient to
provide a strong defense against adversarial examples.