
    Algorithmic Discrimination in the U.S. Justice System: A Quantitative Assessment of Racial and Gender Bias Encoded in the Data Analytics Model of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)

    View/Open
    Capstone Paper (993.4Kb)
    Date
    2017-04
    Author
    Li, Yubin
    Abstract
    Fourth-generation risk-need assessment instruments such as the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) have opened opportunities for the use of big data analytics to assist judicial decision-making across the criminal justice system in the U.S. While COMPAS has become increasingly popular in supporting correctional professionals' judgments of an offender's risk of committing future crime, little research has been published investigating the potential systematic bias encoded in the algorithms behind these assessment tools, bias that could work against certain ethnic or gender groups. This paper uses two-sample t-tests and an ordinary least-squares regression model to demonstrate that the COMPAS algorithms systematically generate higher risk scores for African-American and male offenders on the risk of failure to appear, risk of recidivism, and risk of violence. Although race was explicitly excluded when the COMPAS algorithms were developed, the results show that the analytic model still systematically discriminates against African-American offenders. This paper highlights the importance of examining algorithmic fairness in big data analytic applications and offers a methodology as well as tools to investigate systematic bias encoded in machine learning algorithms. Additionally, the implications of this paper suggest that simply removing a protected variable from a big data algorithm may not be sufficient to eliminate systematic bias affecting the protected groups, and that further research is needed to thoroughly address algorithmic bias in big data analytics.
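
    To illustrate the kind of analysis the abstract describes (this is a minimal sketch, not the author's actual code or data), the following Python example runs a two-sample t-test and an ordinary least-squares regression on a COMPAS-style score table. The CSV path and column names (race, sex, age, priors_count, decile_score) are assumptions for illustration only.

    # Illustrative sketch of the two-step analysis described in the abstract.
    # All file and column names are hypothetical placeholders.
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    # Hypothetical COMPAS-style dataset; substitute the real score file.
    df = pd.read_csv("compas_scores.csv")

    # Two-sample t-test: compare mean decile risk scores between two racial groups.
    black = df.loc[df["race"] == "African-American", "decile_score"]
    white = df.loc[df["race"] == "Caucasian", "decile_score"]
    t_stat, p_value = stats.ttest_ind(black, white, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

    # OLS regression: test whether race and sex predict the risk score
    # after controlling for age and prior record.
    model = smf.ols("decile_score ~ C(race) + C(sex) + age + priors_count", data=df).fit()
    print(model.summary())

    A statistically significant coefficient on the race or sex indicator in such a regression, holding age and prior record constant, is the type of evidence of systematic score differences that the paper reports.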
    URI
    http://jhir.library.jhu.edu/handle/1774.2/61818
    Collections
    • Government Analytics
