Hao, K., & Stray, J. (2019, October 17). Can you make AI fairer than a judge? Play our courtroom algorithm game. MIT Technology Review. https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/
Published on October 17, 2019, in MIT Technology Review, “Can you make AI fairer than a judge?” by Karen Hao and Jonathan Stray analyzes predictive AI, specifically the implementation of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) within the justice system.
Hao and Stray’s article is interactive, allowing readers to adjust the COMPAS algorithm themselves. The authors use this format to highlight three main issues: (1) better predictions can reduce error rates but can never eliminate them entirely, (2) even when COMPAS’s predictions are accepted, humans must still set the “high risk” threshold, and (3) there are significant differences in error rates between races. The article explores two possible responses: either adjust thresholds by race or accept the inconsistencies between error rates.
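The interaction between a shared threshold and unequal error rates can be sketched in a few lines of code. The data below is purely illustrative (hypothetical scores on a COMPAS-style 1–10 scale, not real COMPAS output): applying the same “high risk” cutoff to both groups still yields different false positive rates, because the score distributions for non-reoffenders differ by group.

```python
def false_positive_rate(records, group, threshold):
    """Fraction of non-reoffenders in `group` flagged as high risk."""
    innocent = [(s, y, g) for s, y, g in records if g == group and y == 0]
    flagged = [r for r in innocent if r[0] >= threshold]
    return len(flagged) / len(innocent)

# (risk_score, reoffended, group) -- illustrative numbers, not real COMPAS data.
# Non-reoffenders in group B skew toward higher scores, mimicking biased inputs.
records = [
    (3, 0, "A"), (5, 0, "A"), (8, 0, "A"), (6, 1, "A"), (9, 1, "A"),
    (4, 0, "B"), (7, 0, "B"), (8, 0, "B"), (7, 1, "B"), (9, 1, "B"),
]

# Same threshold for everyone, yet different error rates per group.
fpr_a = false_positive_rate(records, "A", threshold=7)  # 1 of 3 flagged
fpr_b = false_positive_rate(records, "B", threshold=7)  # 2 of 3 flagged
```

Equalizing the two rates would require race-specific thresholds, which is exactly the trade-off the article asks readers to confront.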
The COMPAS system does not consider race; however, the United States has a long and pervasive history of discriminatory policing. The data that COMPAS uses therefore carries inherent bias and, in turn, produces predictions consistent with that bias. The article thus poses the question: Is predictive AI reducing inequities by creating consistency, or is it perpetuating existing injustice?
This article is helpful for our class discussions, as it exemplifies the complexities of predictive AI. It lays out the structure of these systems and clarifies the ways in which ostensibly objective technology can produce biased outcomes.