Countering Bias in Predictive Policing

Police departments and courts worldwide have begun looking to artificial intelligence to prevent crime and sentence offenders. But predictive policing and other AI systems have given rise to concerns that biases baked into their algorithms could target certain groups and worsen societal inequalities.

Duncan Purves, PhD

Duncan Purves, assistant professor of philosophy, has been studying the ethics of using AI to assist police officers in anticipating crime before it happens. Supported by a National Science Foundation grant, he and his collaborators are working on setting standards to reduce bias. “We’re hoping to reach police departments to try to start a conversation with them,” Purves said.

Purves notes that even as more police departments adopt machine learning systems, the efficacy of those systems has not been well studied. “The evidence base is just not there to confirm that it’s actually better at preventing crime than human crime analysts,” Purves said.

While algorithmic systems pose new ethical dilemmas, they can also bring more transparency to law enforcement and sentencing decisions. Purves noted that the risk-assessment software COMPAS, for example, scores defendants' likelihood of reoffending based on publicly available factors. “You can tweak the weight of those factors to help fix or address some of the issues of bias,” he said. “That is in some ways an improvement over human decision-makers because we don’t know, really, what’s going on in, say, a judge’s head.”
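To make the idea concrete, here is a minimal sketch of what a factor-weighted risk score looks like and why its weights are auditable in a way a judge's reasoning is not. The factor names, weights, and scoring function below are invented for illustration; COMPAS's actual model is proprietary, and this is not it.

```python
# Hypothetical sketch of a factor-weighted risk score. This is NOT
# COMPAS's actual model; the factors and weights are invented here
# purely to illustrate how such a score can be inspected and tuned.

def risk_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Combine a defendant's factors into one score via a weighted sum."""
    return sum(weights[name] * value for name, value in factors.items())

# Because the weights are explicit, an auditor can see exactly how much
# each factor contributes and can lower the weight on a factor that
# proxies for a protected attribute.
weights = {
    "prior_offenses": 0.6,        # each prior offense raises the score
    "age_at_first_arrest": -0.3,  # a later first arrest lowers the score
    "employment_status": -0.1,    # employed (1.0) slightly lowers the score
}

defendant = {
    "prior_offenses": 2.0,
    "age_at_first_arrest": 24.0,
    "employment_status": 1.0,
}

print(risk_score(defendant, weights))  # toy score: -6.1
```

Unlike a human decision-maker, every contribution to this toy score can be read off directly, which is the transparency advantage Purves describes.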

This story is excerpted from a previously published feature.

More stories about artificial intelligence appear in the fall 2021 issue of Ytori Magazine.