Should an Algorithm be the Judge?

A few times each year, the use of COMPAS in courtrooms raises concerns. COMPAS is a program that predicts recidivism; some courts use it to decide who can be released on bail and to assist judges with sentencing. (For examples of past discussions, see articles from 2016, 2017, 2017, and 2018.) Most of these discussions center on the ethics of the algorithm and on whether its biases make it unjust. However, as predictive technology becomes more capable and more pervasive, the conversation needs to shift beyond the ethics of COMPAS to the broader question of what counts as appropriate and ethical use of algorithms in a courtroom.

Focusing on algorithmic bias implies that a biased algorithm is a coding flaw that can be easily corrected. The related idea that machines and algorithms are inherently unbiased is also a frequent argument in favor of broader use. In reality, predictive algorithms are typically trained on existing data: they find patterns in past records and project them forward. Humans generate that baseline data (arrest records, sentence lengths, etc.), so any bias we carry as humans is reflected in the data and, in turn, in the algorithm. Fixing a biased algorithm is therefore not primarily a code fix; it requires deciding what an unbiased data set looks like and training on that. Defining and assembling such a data set demands so much human input that it will likely introduce bias of its own, and it may not be feasible at all. Predictive power is valuable, but making it unbiased is an extremely challenging problem (though some organizations are trying).
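
To make the mechanism concrete, here is a minimal sketch with hypothetical synthetic data (assuming numpy and scikit-learn are available): even when the protected attribute is excluded from the features, a model trained on labels shaped by historical bias can reproduce that bias through correlated proxies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 vs. group 1) and a correlated proxy,
# e.g. neighborhood or prior-arrest count.
group = rng.integers(0, 2, size=n)
proxy = group + rng.normal(0, 0.5, size=n)   # correlated with group
risk = rng.normal(0, 1, size=n)              # "true" underlying risk

# Historically biased labels: group 1 was flagged more often
# for the same underlying risk.
label = (risk + 0.8 * group + rng.normal(0, 0.5, size=n) > 0.5).astype(int)

# Train WITHOUT the protected attribute -- only the proxy and the risk score.
X = np.column_stack([proxy, risk])
model = LogisticRegression().fit(X, label)

# The model still scores group 1 as higher risk on average,
# because the proxy carries the group information.
scores = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", scores[group == 0].mean())
print("mean predicted risk, group 1:", scores[group == 1].mean())
```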

Even though algorithms carry bias, we should continue to use them in courtrooms, with some safeguards. They still have the potential to reveal valuable information and to help create smoother processes for everyone involved. Before an algorithm is approved for courtroom use, however, I propose three requirements:

1) Potential biases against protected classes are measured and fall within a reasonable margin. There are many ways to test this; one is to submit the same case information while changing only the protected class and measure how much the recommendation changes (a counterfactual check, sketched after this list). With such measurements, courtrooms can reject programs that show unfair or excessive bias.

2) Until biases are fully accounted for, the program is used only to assist with decisions, not as the decision maker itself. As many of the linked articles note, judges go through significant training and practice to understand the nuances of individual cases. The program can inform a decision, but the judge should still make the final call. This accounts for elements of a case the program does not consider and provides an opportunity to correct for programmatic bias.

3) Algorithms are retrained on newer data every six months. On the optimistic assumption that the arc of history bends away from bias, regular retraining ensures the algorithms learn from progressively less biased data, so their recommendations should trend toward fairness and continue to improve.
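
Here is a minimal sketch of the counterfactual check from requirement 1, assuming a hypothetical risk model that exposes a scikit-learn-style `predict_proba(features)` method; the acceptance threshold is purely illustrative.

```python
import numpy as np

def counterfactual_gap(model, cases, protected_col, values=(0, 1)):
    """Mean absolute change in predicted risk when only the protected
    attribute is swapped between `values`. `cases` is an (n, d) float array."""
    a, b = cases.copy(), cases.copy()
    a[:, protected_col] = values[0]
    b[:, protected_col] = values[1]
    gap = np.abs(model.predict_proba(a)[:, 1] - model.predict_proba(b)[:, 1])
    return gap.mean()

# Illustrative acceptance check (0.05 is a placeholder tolerance, not a standard):
# if counterfactual_gap(model, held_out_cases, protected_col=3) > 0.05:
#     reject_the_program()
```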

We should continue to work actively on reducing bias in the programs used in courtrooms, but we should not block their use while waiting for bias-free programs that may never arrive. For now, these three requirements can help create a more just system.
