COMPAS (software)
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)[1] is case management and decision support software developed and owned by Northpointe (now Equivant), used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.[2][3]
COMPAS has been used by the U.S. states of New York, Wisconsin, California, Florida's Broward County, and other jurisdictions.[4]
Background
COMPAS was created in 1998 by Northpointe, Inc., which merged with other justice technology firms to become equivant in January 2017.[5] It is categorized as a "fourth-generation" risk assessment instrument because it evaluates both static data (such as prior criminal record) and dynamic "criminogenic" needs (such as current social environment and employment status).[6]
While originally designed for correctional rehabilitation planning, its use expanded to judicial sentencing. In 2012, the Wisconsin Department of Corrections adopted COMPAS statewide, a move that later led to the landmark legal challenge in Loomis v. Wisconsin.[7]
Risk assessment
The COMPAS software uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism, and for pretrial misconduct. According to the COMPAS Practitioner's Guide, the scales were designed using behavioral and psychological constructs "of very high relevance to recidivism and criminal careers."[8]
- Pretrial release risk scale
- Pretrial risk is a measure of the potential for an individual to fail to appear and/or to commit new felonies while on release. According to the research that informed the creation of the scale, "current charges, pending charges, prior arrest history, previous pretrial failure, residential stability, employment status, community ties, and substance abuse" are the most significant indicators affecting pretrial risk scores.[8]
- General recidivism scale
- The general recidivism scale is designed to predict new offenses upon release after the COMPAS assessment is given. The scale uses an individual's criminal history and associates, drug involvement, and indications of juvenile delinquency.[9]
- Violent recidivism scale
- The violent recidivism score is meant to predict violent offenses following release. The scale uses data or indicators that include a person's "history of violence, history of non-compliance, vocational/educational problems, the person's age-at-intake and the person's age-at-first-arrest."[10]
The violent recidivism risk scale is calculated as follows:

s = a(−w) + a_first(−w) + h_violence(w) + v_edu(w) + h_noncompliance(w)

where s is the violent recidivism risk score, w is a weight multiplier, a is current age, a_first is the age at first arrest, h_violence is the history of violence, v_edu is the vocational education scale, and h_noncompliance is the history of noncompliance. The weight, w, is "determined by the strength of the item's relationship to person offense recidivism that we observed in our study data."[11]
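The published form of the scale is a simple weighted sum, which can be sketched in code. Everything below is illustrative: the input encodings and the actual weights are proprietary, so a single hypothetical weight `w` stands in for every term.

```python
# Hypothetical sketch of the violent recidivism scale's linear form.
# Real COMPAS weights are proprietary; w = 1.0 here is a placeholder.

def violent_risk_score(age, age_first_arrest, history_violence,
                       vocational_education, history_noncompliance,
                       w=1.0):
    """Weighted sum of inputs; the two age terms enter negatively,
    so older current age and later first arrest lower the score."""
    return (age * -w
            + age_first_arrest * -w
            + history_violence * w
            + vocational_education * w
            + history_noncompliance * w)

print(violent_risk_score(age=30, age_first_arrest=18, history_violence=2,
                         vocational_education=3, history_noncompliance=1))
# -42.0
```

Note that with the negative age weights, a younger defendant with an earlier first arrest receives a higher (riskier) score than an older one with identical history items.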
Support and criticism
Risk assessment tools such as COMPAS are used because of the desire for objective, evidence-based sentencing procedures as well as increased efficiency in the court system.[12] Proponents of using AI and algorithms in the courtroom tend to argue that these solutions will mitigate predictable biases and errors in judges' reasoning, such as the hungry judge effect (the phenomenon that judges are more likely to make lenient decisions after eating a meal).[13] Alternatives to risk assessment tools are possible, but are difficult to implement.[14]
A general critique of the use of proprietary software such as COMPAS is that since the algorithms it uses are trade secrets, they cannot be examined by the public and affected parties, which has been described as a violation of due process.[14][15] Additionally, simple, transparent and more interpretable algorithms have been shown to perform predictions approximately as well as the COMPAS algorithm.[14][16] Existing analyses of the algorithms have used the publicly available questionnaires and reverse-engineered approximations based on the publicly available data.[14]
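One such analysis, by Angelino et al., learned short rule lists from the publicly released ProPublica data that matched COMPAS's predictive accuracy on that dataset. A fully transparent model of that general shape can be written in a few lines; the features and thresholds below are illustrative stand-ins, not the published rule list.

```python
# Illustrative interpretable rule list for recidivism prediction.
# The thresholds are hypothetical; the point is that the whole model
# is readable and auditable, unlike a proprietary black-box score.

def predict_recidivism(age, prior_offenses):
    if age < 21 and prior_offenses >= 1:
        return True       # young with any priors -> predicted to reoffend
    if prior_offenses > 3:
        return True       # many priors -> predicted to reoffend
    return False          # everyone else -> predicted not to reoffend
```

Because every branch is visible, a defendant (or a court) can see exactly which fact drove the prediction, which is the transparency critics find missing in COMPAS.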
Another general criticism of machine-learning based algorithms is that since they are data-dependent, if the data are biased, the software will likely yield biased results.[17] Similarly, the initial version of the related LSI-R algorithm was primarily trained on Caucasian offenders, which resulted in lower validity for black and Latino offenders.[14] Algorithms may also exhibit other types of bias which are given less attention due to the focus on racial bias.[12]
COMPAS risk assessments have been argued to violate 14th Amendment Equal Protection rights on the basis of race, since the algorithms are argued to be racially discriminatory, to result in disparate treatment, and to not be narrowly tailored.[18]
Accuracy
Empirical analysis of algorithmic risk assessment tools was inspired by a 2016 ProPublica investigation of COMPAS and a subsequent study by Dressel and Farid (2018).[12] ProPublica found that COMPAS was racially biased against black defendants,[19][15][20] while Northpointe responded that the algorithm predicted recidivism accurately regardless of race.[14] Counterintuitively, both statements are true: they rest on two mutually incompatible definitions of fairness.[14] ProPublica's analysis focused on the rate of classification errors (black defendants who did not reoffend were misclassified as high risk more often than white defendants who did not reoffend), while Northpointe focused on calibration (a given risk score corresponds to roughly the same rate of reoffending regardless of race). When the underlying rates of rearrest differ between groups, an algorithm that is fair on one metric will be biased on the other.[14]
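This incompatibility can be shown arithmetically. A standard identity from the fairness literature ties a group's false positive rate (FPR) to its base rate p, positive predictive value (PPV), and false negative rate (FNR): FPR = p/(1−p) · (1−PPV)/PPV · (1−FNR). If two groups share the same PPV and FNR but have different base rates, their FPRs are forced apart. The numbers below are illustrative, not COMPAS's actual rates.

```python
# With equal PPV (calibration-style fairness) and equal FNR, groups
# with different base rates of rearrest must have different false
# positive rates -- the two fairness criteria cannot both hold.

def fpr(base_rate, ppv, fnr):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.6, 0.35                  # hypothetical, equal for both groups
print(round(fpr(0.5, ppv, fnr), 3))   # higher-base-rate group: 0.433
print(round(fpr(0.3, ppv, fnr), 3))   # lower-base-rate group: 0.186
```

The group with the higher base rate accumulates more false positives even though the score is equally well calibrated for both, which is exactly the pattern at the center of the ProPublica–Northpointe dispute.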
The study by Dressel and Farid (2018) found that COMPAS software is somewhat more accurate than individuals with little or no criminal justice expertise, yet less accurate than groups of such individuals.[21][15] However, a subsequent review found that these results "seem[ed] like a specific occurrence and less reflective of general and real conditions" and that algorithms performed better than humans under conditions closer to the real world.[12] For example, a replication study found that the algorithms did better when the chance (base rate) of rearrest was low, while Dressel and Farid assumed that recidivism and non-recidivism were about equally likely.[12]
Risk assessment tools do not explicitly incorporate race, and doing so would likely violate the US constitution.[14] However, because factors such as education level or employment status are correlated with race, algorithms using these factors produce different results for different racial groups.[14]
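The proxy effect is easy to reproduce. In the sketch below, a scoring rule never sees group membership, yet it produces different average scores for two hypothetical populations solely because an input feature (employment) is distributed differently between them; all numbers are invented for illustration.

```python
# A group-blind rule can still yield disparate outcomes when an input
# feature is correlated with group membership in the data.
import random

random.seed(0)

def risk_score(employed):
    # The rule only looks at employment: unemployment adds one point.
    return 1 if employed else 2

# Hypothetical populations with different employment rates (70% vs 50%):
group_a = [risk_score(random.random() < 0.7) for _ in range(10_000)]
group_b = [risk_score(random.random() < 0.5) for _ in range(10_000)]

print(sum(group_a) / len(group_a))  # close to 1.3
print(sum(group_b) / len(group_b))  # close to 1.5
```

Removing the sensitive attribute from the inputs therefore does not, by itself, remove group-level disparities in the outputs.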
One of the proposed benefits of risk assessment tools is an expected reduction in incarceration rates.[12] In 2024, an analysis of the practical impact of COMPAS in Broward County found that its use led to a reduced rate of confinement across demographic groups, but that it also exacerbated the differences between racial groups.[22]
Legal rulings
In July 2016, the Wisconsin Supreme Court ruled in Loomis v. Wisconsin that judges may consider COMPAS risk scores during sentencing, but only when the scores are accompanied by warnings describing the tool's "limitations and cautions."[4]
See also
- Algorithmic bias
- Garbage in, garbage out
- Legal expert systems
- Loomis v. Wisconsin
- Criminal sentencing in the United States
References
- ^ "DOC COMPAS". Retrieved April 4, 2023.
- ^ Sam Corbett-Davies, Emma Pierson, Avi Feller and Sharad Goel (October 17, 2016). "A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear". The Washington Post. Retrieved January 1, 2018.
- ^ Aaron M. Bornstein (December 21, 2017). "Are Algorithms Building the New Infrastructure of Racism?". Nautilus. No. 55. Retrieved January 2, 2018.
- ^ a b Kirkpatrick, Keith (January 23, 2017). "It's not the algorithm, it's the data". Communications of the ACM. 60 (2): 21–23. doi:10.1145/3022181. S2CID 33993859.
- ^ "The History of equivant". equivant. Retrieved March 9, 2026.
- ^ "Risk Assessment Instruments Validated and Implemented in Correctional Settings" (PDF). National Institute of Justice. 2013. Retrieved March 9, 2026.
- ^ "The Loomis Case: The Use of Proprietary Algorithms at Sentencing". State Bar of Wisconsin. June 15, 2016. Retrieved March 9, 2026.
- ^ a b Northpointe 2015, p. 27.
- ^ Northpointe 2015, p. 26.
- ^ Northpointe 2015, p. 28.
- ^ Northpointe 2015, p. 29.
- ^ a b c d e f Scaria, Arul George; Subramanian, Vidya; George, Nevin K.; Sengupta, Nandana (October 16, 2024). "Algorithms and Recidivism: A Multi-disciplinary Systematic Review". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 7: 1292–1305. doi:10.1609/aies.v7i1.31724. ISSN 3065-8365. Retrieved April 7, 2026.
- ^ Chatziathanasiou, Konstantin (May 2022). "Beware the Lure of Narratives: "Hungry Judges" Should Not Motivate the Use of "Artificial Intelligence" in Law". German Law Journal. 23 (4): 452–464. doi:10.1017/glj.2022.32. ISSN 2071-8322. S2CID 249047713.
- ^ a b c d e f g h i j Humerick, Jacob (2020). "Reprogramming Fairness: Affirmative Action in Algorithmic Criminal Sentencing" (PDF). Columbia Human Rights Law Review Online. 4 (2): 213–245.
- ^ a b c Yong, Ed (January 17, 2018). "A Popular Algorithm Is No Better at Predicting Crimes Than Random People". Retrieved November 21, 2019.
- ^ Angelino, Elaine; Larus-Stone, Nicholas; Alabi, Daniel; Seltzer, Margo; Rudin, Cynthia (June 2018). "Learning Certifiably Optimal Rule Lists for Categorical Data". Journal of Machine Learning Research. 18 (234): 1–78. arXiv:1704.01701. Retrieved July 20, 2023.
- ^ O'Neil, Cathy (2016). Weapons of Math Destruction. Crown. p. 87. ISBN 978-0-553-41881-1.
- ^ Thomas, C.; Nunez, A. (2022). "Automating Judicial Discretion: How Algorithmic Risk Assessments in Pretrial Adjudications Violate Equal Protection Rights on the Basis of Race". Law & Inequality. 40 (2): 371–407. doi:10.24926/25730037.649.
- ^ Angwin, Julia; Larson, Jeff (May 23, 2016). "Machine Bias". ProPublica. Retrieved November 21, 2019.
- ^ Israni, Ellora (October 26, 2017). "When an Algorithm Helps Send You to Prison (Opinion)". The New York Times. Retrieved November 21, 2019.
- ^ Dressel, Julia; Farid, Hany (January 17, 2018). "The accuracy, fairness, and limits of predicting recidivism". Science Advances. 4 (1) eaao5580. Bibcode:2018SciA....4.5580D. doi:10.1126/sciadv.aao5580. PMC 5777393. PMID 29376122.
- ^ Bahl, Utsav; Topaz, Chad; Obermüller, Lea; Goldstein, Sophie; Sneirson, Mira (May 22, 2024). "Algorithms in Judges' Hands: Incarceration and Inequity in Broward County, Florida". UCLA Law Review. Retrieved March 10, 2025.
Further reading
- Northpointe (March 15, 2015). "A Practitioner's Guide to COMPAS Core" (PDF).
- Angwin, Julia; Larson, Jeff (May 23, 2016). "Machine Bias". ProPublica. Retrieved November 21, 2019.
- Flores, Anthony; Lowenkamp, Christopher; Bechtel, Kristin. "False Positives, False Negatives, and False Analyses" (PDF). Community Resources for Justice. Retrieved November 21, 2019.
- Sample COMPAS Risk Assessment