
ConvNets and ImageNet Beyond Accuracy: Understanding Mistakes and Uncovering Biases


Authors: Pierre Stock, Moustapha Cisse
Where published: ECCV 2018
ArXiv: 1711.11443
Abstract URL: http://arxiv.org/abs/1711.11443v2

ConvNets and ImageNet have driven the recent success of deep learning for image classification. However, the marked slowdown in performance improvement, combined with the lack of robustness of neural networks to adversarial examples and their tendency to exhibit undesirable biases, calls into question the reliability of these methods. This work investigates these questions from the perspective of the end-user by using human subject studies and explanations. The contribution of this study is threefold. We first experimentally demonstrate that the accuracy and robustness of ConvNets measured on ImageNet are vastly underestimated. Next, we show that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end-user. We finally introduce a novel tool for uncovering the undesirable biases learned by a model. These contributions also show that explanations are a valuable tool both for improving our understanding of ConvNets' predictions and for designing more reliable models.
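The abstract refers to adversarial examples, i.e. small input perturbations that flip a model's prediction. As a minimal illustration of the idea (this is a hypothetical sketch on a toy linear classifier, not the paper's code or a ConvNet), an FGSM-style perturbation steps the input along the sign of the loss gradient:

```python
# FGSM-style adversarial perturbation on a toy linear classifier.
# Illustrative sketch only: weights W, bias B, and the example input
# are made up for demonstration, not taken from the paper.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

W = [1.0, -2.0, 0.5]   # hypothetical model weights
B = 0.1                # hypothetical bias

def score(x):
    # Linear decision function: w . x + b
    return sum(wi * xi for wi, xi in zip(W, x)) + B

def predict(x):
    return 1 if score(x) > 0 else -1

def fgsm(x, y, eps):
    # For a linear model, the input gradient of a margin loss points
    # along -y * W; stepping eps in its sign direction shrinks the margin.
    return [xi + eps * sign(-y * wi) for xi, wi in zip(x, W)]

x = [0.5, -0.5, 0.2]
y = predict(x)               # +1 on this input
x_adv = fgsm(x, y, eps=1.0)
print(predict(x), predict(x_adv))  # prints "1 -1": the perturbation flips the label
```

With a large enough step the perturbed input crosses the decision boundary and the prediction flips, which is the failure mode the paper's human-subject studies and explanations are meant to probe.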

Relevant initiatives

Related knowledge about this paper:
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives

