Chapter 6: Learning to Discriminate
The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing
Author(s)
Davies, Benjamin
Douglas, Thomas
Collection
European Research Council (ERC)
Language
English
Abstract
It is often thought that traditional recidivism prediction tools used in criminal
sentencing, though biased in many ways, can straightforwardly avoid one particularly
pernicious type of bias: direct racial discrimination. They can avoid this by excluding race
from the list of variables employed to predict recidivism. A similar approach could be
taken to the design of newer, machine learning-based (ML) tools for predicting recidivism:
information about race could be withheld from the ML tool during its training phase,
ensuring that the resulting predictive model does not use race as an explicit predictor.
However, if race is correlated with measured recidivism in the training data, the ML tool
may ‘learn’ a perfect proxy for race. If such a proxy is found, the exclusion of race would
do nothing to weaken the correlation between risk (mis)classifications and race. Is this a
problem? We argue that, on some explanations of the wrongness of discrimination, it is.
On these explanations, the use of an ML tool that perfectly proxies race would (likely) be
more wrong than the use of a traditional tool that imperfectly proxies race. Indeed, on
some views, use of a perfect proxy for race is plausibly as wrong as explicit racial profiling.
We end by drawing out four implications of our arguments.
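To make the proxy mechanism described in the abstract concrete, the following is a minimal synthetic sketch in Python. It is not drawn from the chapter; the feature names (e.g. area_code), the generated data, and the choice of logistic regression are all illustrative assumptions. The race column is withheld at training time, but a perfectly correlated feature lets the model reproduce race-correlated risk scores anyway:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic protected attribute (binary, for illustration only).
race = rng.integers(0, 2, size=n)

# A feature that perfectly encodes race without naming it,
# e.g. a residential-area code in a highly segregated city.
area_code = race.copy()

# An unrelated legitimate predictor.
prior_convictions = rng.poisson(2, size=n)

# Measured recidivism is correlated with race in the training
# data (e.g. via biased policing), plus a genuine signal.
p = 1 / (1 + np.exp(-(1.5 * race + 0.5 * prior_convictions - 2)))
recidivated = rng.binomial(1, p)

# Train WITHOUT the race column: only the proxy and the
# legitimate predictor are visible to the model.
X = np.column_stack([area_code, prior_convictions])
model = LogisticRegression().fit(X, recidivated)

# Risk scores nonetheless separate cleanly by race, because the
# model has 'learned' the proxy.
scores = model.predict_proba(X)[:, 1]
print("mean risk, group 0:", scores[race == 0].mean())
print("mean risk, group 1:", scores[race == 1].mean())

The two groups receive clearly different mean risk scores even though race was never an input, which is the sense in which excluding race would do nothing to weaken the correlation between risk (mis)classifications and race.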
Keywords
Discrimination; Profiling; Machine Learning; Algorithmic Fairness; Racial Bias; Redundant Encoding; Criminal Recidivism; Crime Prediction; Artificial Intelligence; AI
ISBN
9780197539538
Publisher
Oxford University Press
Publisher website
https://global.oup.com/
Publication date and place
2022
Classification
Criminal investigation and detection
Crime and criminology