Authors: Lydia T. Liu, Max Simchowitz, Moritz Hardt
ArXiv: 1808.10013
Abstract URL: http://arxiv.org/abs/1808.10013v2
We clarify what fairness guarantees we can and cannot expect to follow from
unconstrained machine learning. Specifically, we characterize when
unconstrained learning on its own implies group calibration, that is, the
outcome variable is conditionally independent of group membership given the
score. We show that under reasonable conditions, the deviation from satisfying
group calibration is upper bounded by the excess risk of the learned score
relative to the Bayes optimal score function. A lower bound confirms the
optimality of our upper bound. Moreover, we prove that the lower the excess
risk of the learned score, the more strongly it violates separation and
independence, two other standard fairness criteria.
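In the paper's notation, with a score function f, an outcome Y (taken here to be
binary for concreteness), and a group attribute A, these two statements can be
written out as follows; the bound is shown only schematically, since the exact
constants and conditions are those stated in the paper. Group calibration is the
conditional independence

    Y \perp A \mid f(X),

that is, \Pr[Y = 1 \mid f(X) = r, A = a] = \Pr[Y = 1 \mid f(X) = r] for every
group a and score value r. The upper bound then takes the schematic form

    \mathrm{cal}(f) \le g\big(R(f) - R(f^\ast)\big),

where \mathrm{cal}(f) measures the deviation of f from group calibration,
R(f) - R(f^\ast) is the excess risk of f relative to the Bayes optimal score
f^\ast, and g is an increasing function with g(0) = 0 whose precise form, along
with the required conditions, is given in the paper.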
Our results show that group calibration is the fairness criterion that
unconstrained learning implicitly favors. On the one hand, this means that
calibration is often satisfied on its own without the need for active
intervention, albeit at the cost of violating other criteria that are at odds
with calibration. On the other hand, it suggests that we should be satisfied
with calibration as a fairness criterion only if we are at ease with the use of
unconstrained machine learning in a given application.
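As a purely illustrative aside, not part of the paper's analysis, here is a
minimal sketch of how one might check empirically whether a learned score is
group calibrated: bin the scores, then compare each group's outcome rate within
a bin to the overall rate in that bin. All names below are hypothetical.

import numpy as np

def group_calibration_gap(scores, outcomes, groups, n_bins=10):
    """Largest gap, over score bins and groups, between a group's outcome
    rate and the overall outcome rate within the bin.

    A score satisfying group calibration should make this gap small:
    conditioned on the score, the outcome rate should not depend on the group.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(scores, bins) - 1, 0, n_bins - 1)
    worst_gap = 0.0
    for b in range(n_bins):
        in_bin = bin_ids == b
        if not in_bin.any():
            continue
        overall_rate = outcomes[in_bin].mean()
        for g in np.unique(groups):
            mask = in_bin & (groups == g)
            if mask.sum() == 0:
                continue
            worst_gap = max(worst_gap, abs(outcomes[mask].mean() - overall_rate))
    return worst_gap

# Synthetic example: a score that is calibrated by construction,
# since P(Y = 1 | score) = score in both groups.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=10_000)
scores = rng.uniform(size=10_000)
outcomes = (rng.uniform(size=10_000) < scores).astype(float)
print(group_calibration_gap(scores, outcomes, groups))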