
Quantifying Layerwise Information Discarding of Neural Networks

lib:0ea79cf2fde61987 (v1.0.0)

Authors: Haotian Ma, Yinqing Zhang, Fan Zhou, Quanshi Zhang
ArXiv: 1906.04109
Abstract URL: https://arxiv.org/abs/1906.04109v1


This paper presents a method to explain how input information is discarded through the intermediate layers of a neural network during forward propagation, in order to quantify and diagnose the knowledge representations of pre-trained deep neural networks. We define two types of entropy-based metrics, i.e., strict information discarding and reconstruction uncertainty, which measure the input information encoded in a specific layer from two perspectives. We develop a method that enables efficient computation of these entropy-based metrics. Our method can be broadly applied to various neural networks and enables comprehensive comparisons between different layers of different networks. Preliminary experiments have shown the effectiveness of our metrics in analyzing benchmark networks and explaining existing deep-learning techniques.
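The sketch below illustrates one way such an entropy-based measurement could be approximated in PyTorch: learn a per-pixel Gaussian perturbation scale that is as large as possible while the layer's feature stays close to its original value, and use the resulting log-scales as a proxy for how much input information the layer tolerates losing. This is a minimal illustration of the general idea, not the authors' implementation; the function name, hyperparameters (n_samples, steps, lr, lam), and the simple Lagrangian objective are assumptions.

```python
import torch

def layer_information_discarding(layer_fn, x, n_samples=8, steps=300, lr=0.01, lam=0.05):
    """Sketch of an entropy-based proxy for layerwise information discarding.

    Assumptions (not from the paper's code): x has shape (1, C, H, W),
    layer_fn maps a batch of inputs to this layer's features, and the
    Gaussian entropy term is approximated by the mean of log-sigma.
    """
    x = x.detach()
    with torch.no_grad():
        f = layer_fn(x)                                   # reference feature of this layer
        f_scale = (f ** 2).mean().clamp_min(1e-12)        # normalizer for the consistency term

    log_sigma = torch.full_like(x, -4.0, requires_grad=True)
    opt = torch.optim.Adam([log_sigma], lr=lr)

    for _ in range(steps):
        sigma = log_sigma.exp()
        # Reparameterized samples x' = x + sigma * eps, eps ~ N(0, I)
        eps = torch.randn(n_samples, *x.shape[1:], device=x.device)
        x_pert = x + sigma * eps                          # broadcasts to (n_samples, C, H, W)
        f_pert = layer_fn(x_pert)
        # Feature consistency: perturbations should not change this layer's feature much.
        consistency = ((f_pert - f) ** 2).mean() / f_scale
        # Entropy term (up to constants): larger sigma means more input variation is tolerated.
        entropy = log_sigma.mean()
        loss = consistency - lam * entropy
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Higher value => the layer discards more input information (more variation is "invisible" to it).
    return log_sigma.detach().mean().item()
```

Comparing this quantity across layers of the same network, or across networks at corresponding layers, gives the kind of layerwise comparison the abstract describes; in this sketch, lam trades off feature consistency against the entropy term and would need tuning per network.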

