Authors: Yangsibo Huang, Yushan Su, Sachin Ravi, Zhao Song, Sanjeev Arora, Kai Li
ArXiv: 2003.01876
Abstract URL: https://arxiv.org/abs/2003.01876v1
This paper asks whether neural network pruning can serve as a tool for achieving differential privacy without losing much data utility. As a first step toward understanding the relationship between neural network pruning and differential privacy, the paper proves that pruning a given layer of a neural network is equivalent to adding a certain amount of differentially private noise to that layer's hidden activations. The paper also presents experiments illustrating the practical implications of this theoretical finding and the effect of key parameter values in a simple practical setting. These results indicate that neural network pruning can be a more effective alternative to directly adding differentially private noise to a network's activations.
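To make the abstract's central equivalence concrete, here is a minimal sketch of the two operations being compared: magnitude-based pruning of a layer's activations versus the classic differentially private Gaussian perturbation of the same activations. This is not the paper's construction or proof; all names, shapes, thresholds, and noise scales below are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact method): pruning small-magnitude
# activations perturbs a layer's output, much as adding Gaussian noise does.
import numpy as np

rng = np.random.default_rng(0)

def prune_activations(h, keep_ratio=0.5):
    """Keep only the largest-magnitude activations; zero out the rest."""
    k = max(1, int(keep_ratio * h.size))
    threshold = np.sort(np.abs(h))[-k]          # k-th largest magnitude
    return np.where(np.abs(h) >= threshold, h, 0.0)

def gaussian_perturbation(h, sigma=0.1):
    """DP-style perturbation: add Gaussian noise to the activations."""
    return h + rng.normal(0.0, sigma, size=h.shape)

h = rng.normal(0.0, 1.0, size=64)               # hypothetical hidden-layer activations
print(np.linalg.norm(h - prune_activations(h)))    # perturbation caused by pruning
print(np.linalg.norm(h - gaussian_perturbation(h)))  # perturbation caused by noise
```

The sketch only puts the two perturbations side by side; the paper's contribution is the theorem relating the pruning-induced perturbation to differentially private noise, together with experiments on how the key parameters trade off privacy and utility.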