Algorithmic Discrimination in the U.S. Justice System: A Quantitative Assessment of Racial and Gender Bias Encoded in the Data Analytics Model of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)
Fourth-generation risk–need assessment instruments such as the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) have opened opportunities for big data analytics to assist judicial decision-making across the U.S. criminal justice system. While COMPAS has become increasingly popular in supporting correctional professionals’ judgment of an offender’s risk of committing future crimes, little research has investigated the potential systematic bias encoded in the algorithms behind these assessment tools, bias that could work against certain ethnic or gender groups. This paper uses two-sample t-tests and an ordinary least-squares regression model to demonstrate that the COMPAS algorithms systematically generate higher risk scores for African-American and male offenders on the risk of failure to appear, risk of recidivism, and risk of violence. Although race was explicitly excluded when the COMPAS algorithms were developed, the results show that the analytic model still systematically discriminates against African-American offenders. This paper highlights the importance of examining algorithmic fairness in big data analytic applications and offers a methodology and tools for investigating systematic bias encoded in machine learning algorithms. The findings also suggest that simply removing a protected variable from a big data algorithm is not sufficient to eliminate systematic bias against the protected groups, and that further research is needed to thoroughly address algorithmic bias in big data analytics.
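The statistical procedure described above (a two-sample t-test on risk scores between groups, plus an OLS regression of the score on a group indicator) can be sketched as follows. This is an illustrative example on synthetic data, not the paper's actual dataset or code: the group labels, sample sizes, and score distributions are all hypothetical assumptions chosen only to show the mechanics of the two tests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: COMPAS-style risk scores for two demographic groups.
# The 1-point mean shift is synthetic, for illustration only.
group_a = rng.normal(loc=5.5, scale=2.0, size=500)  # protected group
group_b = rng.normal(loc=4.5, scale=2.0, size=500)  # reference group

def welch_t(x, y):
    """Two-sample t-statistic allowing unequal variances (Welch's t-test)."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    return (x.mean() - y.mean()) / np.sqrt(vx / nx + vy / ny)

t_stat = welch_t(group_a, group_b)

# OLS: regress the risk score on an intercept and a group indicator.
# With a single dummy regressor, the coefficient on the indicator equals
# the difference in group means, so the two analyses agree by construction.
y = np.concatenate([group_a, group_b])
group = np.concatenate([np.ones(500), np.zeros(500)])
X = np.column_stack([np.ones_like(group), group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"t = {t_stat:.2f}, group coefficient = {beta[1]:.2f}")
```

In the paper's setting the regression would also include the other non-protected covariates, so the coefficient on the group indicator measures the score gap that remains after controlling for them.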