
Knowledge Extraction with No Observable Data

lib:e864e5a42c5a97bd (v1.0.0)

Authors: Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang
Where published: NeurIPS 2019
Document:  PDF  DOI 
Artifact development version: GitHub
Abstract URL: http://papers.nips.cc/paper/8538-knowledge-extraction-with-no-observable-data


Knowledge distillation transfers the knowledge of a large neural network into a smaller one and has been shown to be effective, especially when the amount of training data is limited or the student model is very small. To transfer the knowledge, it is essential to observe the data that were used to train the network, since its knowledge is concentrated on a narrow manifold rather than the whole input space. However, the data are often not accessible due to privacy or confidentiality issues in medical, industrial, and military domains. To the best of our knowledge, no existing approach distills the knowledge of a neural network when no data are observable. In this work, we propose KegNet (Knowledge Extraction with Generative Networks), a novel approach that extracts the knowledge of a trained deep neural network and generates artificial data points to replace the missing training data in knowledge distillation. Experiments show that KegNet outperforms all baselines for data-free knowledge distillation. The source code is available at https://github.com/snudatalab/KegNet.
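To make the idea in the abstract concrete, the following is a minimal sketch of data-free knowledge distillation with a conditional generator: the generator is first fitted so that the frozen teacher assigns the sampled label to each artificial point, and the student is then distilled on those points using the teacher's soft outputs. This is an illustrative assumption, not the authors' KegNet implementation (KegNet also uses a decoder and diversity losses); module names and hyperparameters here are hypothetical, and the actual code is in the GitHub repository linked above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps noise plus a target label to an artificial data point."""
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

def sample_inputs(generator, batch_size, noise_dim, num_classes):
    """Draw noise and target labels, then synthesize artificial inputs."""
    z = torch.randn(batch_size, noise_dim)
    y = torch.randint(0, num_classes, (batch_size,))
    x_fake = generator(z, F.one_hot(y, num_classes).float())
    return x_fake, y

def train_generator(teacher, generator, steps=500, batch_size=64,
                    noise_dim=100, num_classes=10, lr=1e-3):
    """Phase 1: fit the generator so the frozen teacher assigns the sampled
    label to each generated point (KegNet's decoder/diversity terms omitted)."""
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        x_fake, y = sample_inputs(generator, batch_size, noise_dim, num_classes)
        loss = F.cross_entropy(teacher(x_fake), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

def distill(teacher, student, generator, steps=1000, batch_size=64,
            noise_dim=100, num_classes=10, temperature=4.0, lr=1e-3):
    """Phase 2: train the student to match the teacher's softened outputs
    on artificial data points drawn from the trained generator."""
    teacher.eval()
    generator.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            x_fake, _ = sample_inputs(generator, batch_size, noise_dim, num_classes)
            soft_targets = F.softmax(teacher(x_fake) / temperature, dim=1)
        s_log_probs = F.log_softmax(student(x_fake) / temperature, dim=1)
        loss = F.kl_div(s_log_probs, soft_targets,
                        reduction="batchmean") * temperature ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()

The two-phase structure above mirrors the abstract's description, which is to first extract the teacher's knowledge into artificial data, then perform standard distillation on it; refer to the repository for the method as actually proposed.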

Relevant initiatives  

Related knowledge about this paper:
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives
