Colloquium: Kristian Lum

Date: Monday, November 28, 2016, 4:15pm to 5:15pm

Location: Science Center Rm. 300H

Bias in, bias out: predictive models in the criminal justice system

Predictive models are increasingly used in the criminal justice system to try to predict who will commit crime in the future and where that crime will occur. But what happens when these models are trained using biased data? In this talk, I will present two examples of how biased data is used in the criminal justice system. In the first example, I will introduce a recently published model used for location-based predictive policing. Using a case study from Oakland, CA, I will demonstrate how predictive policing not only perpetuates the biases that were previously encoded in the police data, but – under some circumstances – actually amplifies those biases. In the second example, I will focus on “risk assessment models” that are used to inform decisions throughout the judicial process, from bail and sentencing to parole. In many cases, the data available to train such models is also highly biased, and there is significant interest among policymakers in ways to “neutralize” the bias. I consider several notions of “neutrality” and present a general method for adjusting a set of covariates such that models estimated on the adjusted data produce neutral predictions. The method is applied to neutralizing racial bias in a model for recidivism prediction.
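The covariate-adjustment idea mentioned in the abstract can be illustrated with a generic sketch; this is not the speaker's actual method, and the variable names and synthetic data below are assumptions. One simple way to make predictions "neutral" with respect to a protected attribute is to residualize each covariate on that attribute before fitting, so that the adjusted covariates (and, approximately, scores built from them) carry no linear dependence on it.

```python
# Generic illustration (not the speaker's method): remove the linear
# association between each covariate and a protected attribute before
# fitting a predictive model. Variable names and data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, size=n)             # hypothetical binary group label
X = rng.normal(size=(n, 3)) + protected[:, None]   # covariates correlated with the label
y = (X @ np.array([0.5, -0.2, 0.3]) + rng.normal(size=n) > 0).astype(int)

# Residualize each covariate on the protected attribute.
A = protected.reshape(-1, 1)
X_adj = X - LinearRegression().fit(A, X).predict(A)

# Fit a recidivism-style classifier on the adjusted covariates.
clf = LogisticRegression().fit(X_adj, y)
scores = clf.predict_proba(X_adj)[:, 1]

# The adjusted covariates are uncorrelated with the protected attribute,
# so the resulting scores show only a small residual correlation.
print(np.corrcoef(scores, protected)[0, 1])
```

This residualization step is only one of several possible notions of "neutrality"; the talk considers others and how they trade off against predictive accuracy.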