Authors: Sarah Tan, Rich Caruana, Giles Hooker, Yin Lou
ArXiv: 1710.06169
Document: PDF, DOI
Artifact development version: GitHub
Abstract URL: http://arxiv.org/abs/1710.06169v4
Abstract:
Black-box risk scoring models permeate our lives, yet are typically
proprietary or opaque. We propose Distill-and-Compare, a model distillation and
comparison approach to audit such models. To gain insight into black-box
models, we treat them as teachers, training transparent student models to mimic
the risk scores assigned by black-box models. We compare the student model
trained with distillation to a second un-distilled transparent model trained on
ground-truth outcomes, and use differences between the two models to gain
insight into the black-box model. Our approach can be applied in a realistic
setting, without probing the black-box model API. We demonstrate the approach
on four public data sets: COMPAS, Stop-and-Frisk, Chicago Police, and Lending
Club. We also propose a statistical test to determine if a data set is missing
key features used to train the black-box model. Our test finds that the
ProPublica data is likely missing key feature(s) used in COMPAS.
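The abstract outlines the full pipeline: distill a transparent student model from the black box's risk scores, train a second un-distilled transparent model on ground-truth outcomes, and compare the two. Below is a minimal sketch of that idea on synthetic data. Everything in it is an assumption made for illustration: the synthetic features, the hidden feature, and shallow scikit-learn decision trees as a simple stand-in for a transparent model class. The final fidelity check only illustrates the intuition behind detecting missing features; it is not the statistical test the paper proposes.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))      # features available in the audit data set
hidden = rng.normal(size=n)      # feature the black box saw but the audit data lacks

# Stand-in "black box": its risk scores partly depend on the hidden feature.
scores = 1.5 * X[:, 0] - X[:, 1] + 0.8 * hidden
# Ground-truth outcomes driven by the same signal plus noise.
outcomes = 1.5 * X[:, 0] - X[:, 1] + 0.8 * hidden + rng.normal(scale=0.5, size=n)

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(
    X, scores, outcomes, random_state=0)

# 1) Mimic (student) model, distilled from the black box's risk scores.
mimic = DecisionTreeRegressor(max_depth=4).fit(X_tr, s_tr)

# 2) Un-distilled transparent model, trained on ground-truth outcomes.
outcome_model = DecisionTreeRegressor(max_depth=4).fit(X_tr, y_tr)

# 3) Compare the two transparent models on held-out data; systematic
#    disagreement flags places where the black box departs from what
#    the ground-truth outcomes alone would justify.
gap = mimic.predict(X_te) - outcome_model.predict(X_te)
print(f"mean |mimic - outcome-model| gap: {np.abs(gap).mean():.3f}")

# Held-out mimic fidelity: if the audit data contained every feature the
# black box used, a flexible student could track its deterministic scores
# closely, so a clear fidelity shortfall is consistent with missing
# features. (Illustration of the intuition only -- not the paper's test.)
print(f"mimic fidelity R^2 on held-out scores: "
      f"{r2_score(s_te, mimic.predict(X_te)):.3f}")
```

Here the decision trees are deliberately shallow so they stay inspectable; any transparent model family could be swapped in, at the cost of trading fidelity to the teacher against interpretability of the comparison.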