Can a machine be racist? Artificial intelligence has shown troubling signs of bias, but there are reasons for optimism
- Written by Charles Barbour, Associate Professor, Philosophy, Western Sydney University
One day in mid-2013, four people, including two police officers and a social worker, arrived unannounced at the home of Chicago resident Robert McDaniel[1].
McDaniel's only run-ins with the law had been minor – street gambling, marijuana possession, nothing even remotely violent. But his visitors informed him that a computer program had determined that the person living at his address was unusually likely to be involved in a future shooting.
Perhaps he would be the perpetrator, perhaps the victim. The computer wasn’t sure. But due to something called “predictive policing”, the social worker and the police would be visiting him on a regular basis.
Review: More than a Glitch: Confronting Race, Gender and Ability Bias in Tech – Meredith Broussard (MIT Press)