Show simple item record

dc.contributor.author         Davies, Benjamin
dc.contributor.author         Douglas, Thomas
dc.date.accessioned           2024-05-23T12:05:33Z
dc.date.available             2024-05-23T12:05:33Z
dc.date.issued                2022
dc.identifier.uri             https://0-library-oapen-org.catalogue.libraries.london.ac.uk/handle/20.500.12657/90555
dc.description.abstract       It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the resulting predictive model does not use race as an explicit predictor. However, if race is correlated with measured recidivism in the training data, the ML tool may ‘learn’ a perfect proxy for race. If such a proxy is found, the exclusion of race would do nothing to weaken the correlation between risk (mis)classifications and race. Is this a problem? We argue that, on some explanations of the wrongness of discrimination, it is. On these explanations, the use of an ML tool that perfectly proxies race would (likely) be more wrong than the use of a traditional tool that imperfectly proxies race. Indeed, on some views, use of a perfect proxy for race is plausibly as wrong as explicit racial profiling. We end by drawing out four implications of our arguments.  [en_US]
dc.language                   English  [en_US]
dc.subject.classification     thema EDItEUR::J Society and Social Sciences::JK Social services and welfare, criminology::JKV Crime and criminology::JKVF Criminal investigation and detection  [en_US]
dc.subject.classification     thema EDItEUR::J Society and Social Sciences::JK Social services and welfare, criminology::JKV Crime and criminology  [en_US]
dc.subject.other              Discrimination; Profiling; Machine Learning; Algorithmic Fairness; Racial Bias; Redundant Encoding; Criminal Recidivism; Crime Prediction; Artificial Intelligence; AI  [en_US]
dc.title                      Chapter 6 Learning to Discriminate  [en_US]
dc.title.alternative          The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing  [en_US]
dc.type                       chapter
oapen.relation.isPublishedBy  b9501915-cdee-4f2a-8030-9c0b187854b2  [en_US]
oapen.relation.isPartOfBook   6a453af5-fc90-47b7-ae93-bda93088bb1d  [en_US]
oapen.relation.isFundedBy     178e65b9-dd53-4922-b85c-0aaa74fce079  [*]
oapen.collection              European Research Council (ERC)  [en_US]
oapen.pages                   26  [en_US]
oapen.grant.number            819757
oapen.grant.project           ProtMind

