Measuring Algorithmic Fairness

Algorithmic decision making is both increasingly common and increasingly controversial. Critics worry that algorithmic tools are not transparent, accountable, or fair. Assessing the fairness of these tools has been especially fraught as it requires that we agree about what fairness is and what it requires. Unfortunately, we do not. The technological literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension. Two types of measures stand out. According to one, algorithmic fairness requires that the score an algorithm produces should be equally accurate for members of legally protected groups—blacks and whites, for example. According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue. Unfortunately, there is often no way to achieve parity in both these dimensions. This fact has led to a pressing question. Which type of measure should we prioritize and why?

This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal. Equal predictive accuracy ensures that a score means the same thing for each group at issue. As such, it relates to what one ought to believe about a scored individual. Because questions of fairness usually relate to action, not belief, this measure is ill-suited as a measure of fairness. This is the Article’s conceptual contribution. Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure. While a lack of parity in this dimension is not constitutive of unfairness, this measure provides important reasons to suspect that unfairness exists. This is the Article’s normative contribution. Interestingly, improving the accuracy of algorithms overall will lessen this unfairness. Unfortunately, a common assumption that anti-discrimination law prohibits the use of racial and other protected classifications in all contexts is inhibiting those who design algorithms from making them as fair and accurate as possible. This Article’s third contribution is to show that the law poses less of a barrier than many assume.

Introduction

At an event celebrating Martin Luther King, Jr. Day, Representative Alexandria Ocasio-Cortez (D-NY) expressed the concern, shared by many, that algorithmic decision making is biased. “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” she asserted. “They’re just automated. And if you don’t fix the bias, then you are automating the bias.”1 The audience inside the room applauded. Outside the room, the reaction was more mixed. “Socialist Rep. Alexandria Ocasio-Cortez . . . claims that algorithms, which are driven by math, are racist,” tweeted a writer for the Daily Wire.2 Math is just math, this commentator contends, and the idea that math can be unfair is crazy.

This controversy is just one of many to challenge the fairness of algorithmic decision making.3 The use of algorithms, and in particular their connection with machine learning and artificial intelligence, has attracted significant attention in the legal literature as well. The issues raised are varied and include concerns about transparency,4 accountability,5 privacy,6 and fairness.7 This Article focuses on fairness—the issue raised by Ocasio-Cortez. In particular, it asks how we should assess what makes algorithmic decision making fair. Fairness is a moral concept, and a contested one at that. As a result, we should expect that different people will offer well-reasoned arguments for different conceptions of fairness. And this is precisely what we find.

The computer science literature is filled with a proliferation of measures, each purporting to capture fairness along some dimension. This Article provides a pathway through that morass. It makes three contributions: one conceptual, one normative, and one legal. This Article argues that one of the dominant measures of fairness offered in the literature tells us what to believe, not what to do, and thus is ill-suited as a measure of fair treatment. This is the conceptual claim. Second, this Article argues that the ratio between false positives and false negatives offers an important indicator of whether members of two groups scored by an algorithm are treated fairly, vis-à-vis each other. This is the normative claim. Third, this Article challenges a common assumption that anti-discrimination law prohibits the use of racial and other protected classifications in all contexts. Because using race within algorithms can increase both their accuracy and fairness, this misunderstanding has important implications. This Article’s third contribution is to show that the law poses less of a barrier than many assume.

The controversy over a risk assessment tool that many states use for bail, sentencing, and parole decisions illustrates the disagreement about how best to measure fairness.8 The tool, called COMPAS, assigns each person a score that indicates the likelihood that the person will commit a crime in the future.9 In a high-profile exposé, the website ProPublica claimed that COMPAS treated blacks and whites differently: black arrestees and inmates were far more likely to be erroneously classified as risky than were white arrestees and inmates, despite the fact that COMPAS did not explicitly use race in its algorithm.10 The essence of ProPublica’s claim was this:

In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.11

Northpointe12 (the company that developed and owned COMPAS) responded to the criticism by arguing that ProPublica had focused on the wrong measure. In essence, Northpointe stressed the point ProPublica conceded—that COMPAS made mistakes with black and white defendants at roughly equal rates.13 Although Northpointe and others challenged aspects of ProPublica’s analysis,14 the main thrust of Northpointe’s defense was that COMPAS does treat blacks and whites the same. The controversy turned on how such similarity is assessed. Northpointe emphasized that if a black person and a white person each received a particular score, the two would be equally likely to recidivate.15 ProPublica looked at the question from a different angle. Rather than asking whether a black person and a white person with the same score were equally likely to recidivate, it asked whether a black person and a white person who did not go on to recidivate were equally likely to have received a low score from the algorithm.16 In other words, one measure begins with the score and asks how well it predicts reality. The other begins with reality and asks how well it is captured by the score.

The easiest way to fix the problem would be to treat the two groups equally in both respects. A high score and a low score should mean the same thing for both blacks and whites (the measure Northpointe emphasized), and law-abiding blacks and whites should be equally likely to be mischaracterized by the tool (the measure ProPublica emphasized). Unfortunately, this solution has proven impossible to achieve. In a series of influential papers, computer scientists demonstrated that, in most circumstances, it is simply not possible to equalize both measures.17 The impossibility stems from the fact that the underlying rates of recidivism among blacks and whites differ.18 When the two groups at issue (whatever they are) have different rates of the trait predicted by the algorithm, it is impossible to achieve parity between the groups in both dimensions.19 The example discussed in Part I illustrates this phenomenon.20 This fact gives rise to the question: in which dimension is such parity more important, and why?
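
To make the arithmetic behind this impossibility result concrete, the short sketch below works through a hypothetical example in Python. The base rates and calibration values are invented for illustration; they are not COMPAS figures or numbers drawn from this Article. The sketch assumes a binary “high risk” flag that is calibrated identically for the two groups (the same share of flagged individuals reoffends in each group) and shows that, once base rates differ, the false positive rate, the false negative rate, and their ratio must diverge.

```python
# A minimal numeric sketch (hypothetical numbers, not COMPAS data). A binary
# "high risk" flag is calibrated identically for two groups: 70% of those
# flagged high risk reoffend and 30% of those flagged low risk reoffend,
# whatever the group. Because the groups' base rates differ, their false
# positive and false negative rates (and the FP:FN ratio) must differ.

def error_rates(base_rate, p_reoffend_high=0.7, p_reoffend_low=0.3):
    """Return (false positive rate, false negative rate) for a group."""
    # Share of the group flagged high risk, implied by calibration and base rate.
    share_high = (base_rate - p_reoffend_low) / (p_reoffend_high - p_reoffend_low)
    # False positive rate: flagged high risk among those who do NOT reoffend.
    fpr = share_high * (1 - p_reoffend_high) / (1 - base_rate)
    # False negative rate: flagged low risk among those who DO reoffend.
    fnr = (1 - share_high) * p_reoffend_low / base_rate
    return fpr, fnr

for name, base_rate in [("Group A", 0.6), ("Group B", 0.4)]:
    fpr, fnr = error_rates(base_rate)
    print(name, round(fpr, 4), round(fnr, 4), round(fpr / fnr, 2))

# Prints: Group A 0.5625 0.125 4.5  /  Group B 0.125 0.5625 0.22
```

On these invented numbers, the higher-base-rate group ends up with a false positive rate four and a half times its false negative rate, while the pattern is reversed for the other group, a qualitative asymmetry of the kind ProPublica reported.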

These different measures are often described as different conceptions of fairness.21 This is a mistake. The measure favored by Northpointe is relevant to what we ought to believe about a particular scored individual. If a high-risk score means something different for blacks than for whites, then we do not know whether to believe the claim that a particular scored individual is likely to commit a crime in the future (or how much confidence to place in it). The measure favored by ProPublica relates instead to what we ought to do. If law-abiding blacks and law-abiding whites are not equally likely to be mischaracterized by the score, we will not know whether or how to use the scores in making decisions. If we are comparing a measure that is relevant to what we ought to believe to one that is relevant to what we ought to do, we are truly comparing apples to oranges.

This conclusion does not straightforwardly suggest that we should instead focus on the measure touted by ProPublica, however. Sophisticated understanding of the significance of these measures is evolving quickly. Some computer scientists now argue that a lack of parity in the ProPublica measure is less meaningful than one might think.22 The better way to understand the measure highlighted by ProPublica is as a signal that something is likely amiss. Differences in the ratio of false positive rates to false negative rates indicate that the algorithmic tool may rely on data that are themselves infected with bias or that the algorithm may be compounding a prior injustice. Because these possibilities have normative implications for how the algorithm should be used, this measure relates to fairness.

The most promising way to enhance algorithmic fairness is to improve the accuracy of the algorithm overall.23 One way to do that is to permit the use of protected traits (like race and sex) within the algorithm to determine what other traits will be used to predict the target variable (like recidivism). For example, housing instability might be more predictive of recidivism for whites than for blacks.24 If the algorithm includes a racial classification, it can segment its analysis so that this trait is used to predict recidivism for whites but not for blacks. Although this approach would improve risk assessment and thereby lessen the inequity highlighted by ProPublica, many in the field believe it is off the table because it is prohibited by law.25 This is not the case.
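
To see what such segmentation might look like in code, the sketch below fits a separate logistic regression for each group, consulting a feature (labeled housing instability) for one group but not the other. Everything here is an illustrative assumption: the data are synthetic, the feature names and group labels are placeholders, and the code is not the method used in COMPAS or endorsed by this Article.

```python
# Illustrative sketch only: synthetic data and made-up feature names. It shows
# how a tool that is allowed to "see" group membership can use a predictor
# (housing instability) for one group while ignoring it for the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))              # columns: [prior_arrests, housing_instability]
group = rng.choice(["white", "black"], size=n)

# Hypothetical data-generating process: housing instability predicts
# recidivism only for the "white" group.
housing_weight = np.where(group == "white", 1.0, 0.0)
logit = 0.8 * X[:, 0] + housing_weight * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated recidivism outcomes

# Segmentation: the protected trait selects which features each model uses.
features_by_group = {"white": [0, 1], "black": [0]}
models = {}
for g, cols in features_by_group.items():
    mask = group == g
    models[g] = LogisticRegression().fit(X[mask][:, cols], y[mask])

def risk_score(x_row, g):
    """Score one individual with the model fit for his or her group."""
    cols = features_by_group[g]
    return models[g].predict_proba(x_row[cols].reshape(1, -1))[0, 1]

print(round(risk_score(X[0], group[0]), 3))
```

The design point is simply that the protected classification determines which other traits the model consults; in this sketch, race never enters either regression as a predictor in its own right.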

The use of racial classifications only sometimes constitutes disparate treatment on the basis of race and thus only sometimes gives rise to strict scrutiny. The fact that some uses of racial classifications do not constitute disparate treatment reveals that the concept of disparate treatment is more elusive than is often recognized. This observation is important given the central role that the distinction between disparate treatment and disparate impact plays in equal protection doctrine and statutory anti-discrimination law. In addition, it is important because it opens the door to more creative ways to improve algorithmic fairness.

The Article proceeds as follows. Part I develops the conceptual claim. It shows that the two most prominent types of measures used to assess algorithmic fairness are geared to different tasks. One is relevant to belief and the other to decision and action. This Part begins with a detailed explanation of the two measures and then explores the factors that affect belief and action in individual cases. Turning to the comparative context, Part I argues that predictive parity (the measure favored by Northpointe) is relevant to belief but not directly to the fair treatment of different groups.

Part II makes a normative claim. It argues that differences in the ratio of false positives to false negatives between protected groups (a variation on the measure put forward by ProPublica) suggest unfairness, and it explains why this is so. This Part begins by clarifying three distinct ways in which the concept of fairness is used in the literature. It then explains both the normative appeal of focusing on the parity in the ratio of false positives to false negatives and, at the same time, why doing so can be misleading. Despite these drawbacks, Part II argues that the disparity in the ratio of false positive to false negative rates tells us something important about the fairness of the algorithm.

Part III explores what can be done to diminish this unfairness. It argues that using protected classifications like race and sex within algorithms can improve their accuracy and fairness. Because constitutional anti-discrimination law generally disfavors racial classifications, computer scientists and others who work with algorithms are reluctant to deploy this approach. Part III argues that this reluctance rests on an overly simplistic view of the law. Focusing on constitutional law and on racial classification in particular, this Part argues that the doctrine’s resistance to the use of racial classifications is not categorical. Part III explores contexts in which the use of racial classifications does not constitute disparate treatment on the basis of race and extracts two principles from these examples. Using these principles, this Part argues that the use of protected classifications within algorithms may well be permissible. A conclusion follows.

  1. * D. Lurton Massee, Jr. Professor of Law and Roy L. and Rosamond Woodruff Morgan Professor of Law at the University of Virginia School of Law. I would like to thank Charles Barzun, Aloni Cohen, Aziz Huq, Kim Ferzan, Niko Kolodny, Sandy Mayson, Tom Nachbar, Richard Schragger, Andrew Selbst, and the participants in the Caltech 10th Workshop in Decisions, Games, and Logic: Ethics, Statistics, and Fair AI, the Dartmouth Law and Philosophy Workshop, and the computer science department at UVA for comments and critique. In addition, I would like to thank Kristin Glover of the University of Virginia Law Library and Judy Baho for their excellent research assistance. Any errors or confusions are my own.
  2. Blackout for Human Rights, MLK Now 2019, Riverside Church in the City of N.Y. (Jan. 21, 2019), https://www.trcnyc.org/mlknow2019/ [https://perma.cc/L45Q-SN9T] (interview with Rep. Ocasio-Cortez begins at approximately minute 16, and comments regarding algorithms begin at approximately minute 40); see also Danny Li, AOC Is Right: Algorithms Will Always Be Biased as Long as There’s Systemic Racism in This Country, Slate (Feb. 1, 2019, 3:47 PM), https://slate.com/news-and-politics/2019/02/aoc-algorithms-racist-bias.html [https://perma.cc/S97Z-UH2U] (quoting Ocasio-Cortez’s comments at the event in New York); Cat Zakrzewski, The Technology 202: Alexandria Ocasio-Cortez Is Using Her Social Media Clout To Tackle Bias in Algorithms, Wash. Post: PowerPost (Jan. 28, 2019), https://www.washingtonpost.com/news/powerpost/paloma/the-technology-202/2019/01/28 /the-technology-202-alexandria-ocasio-cortez-is-using-her-social-media-clout-to-tackle-bias-in-algorithms/5c4dfa9b1b326b29c37­78cdd/?utm_term=.541cd0827a23 [https://perma.cc/ LL4Y-FWDK] (discussing Ocasio-Cortez’s comments and reactions to them).
  3. Ryan Saavedra (@RealSaavedra), Twitter (Jan. 22, 2019, 12:27 AM), https://twitter.com/RealSaavedra/status/1087627739861897216 [https://perma.cc/32DD-QK5S]. The coverage of Ocasio-Cortez’s comments is mixed. See, e.g., Zakrzewski, supra note 1 (describing conservatives’ criticism of and other media outlets’ and experts’ support of Ocasio-Cortez’s comments).
  4. See, e.g., Hiawatha Bray, The Software That Runs Our Lives Can Be Biased—But We Can Fix It, Bos. Globe, Dec. 22, 2017, at B9 (describing a New York City Council member’s proposal to audit the city government’s computer decision systems for bias); Drew Harwell, Amazon’s Facial-Recognition Software Has Fraught Accuracy Rate, Study Finds, Wash. Post, Jan. 26, 2019, at A14 (reporting on an M.I.T. Media Lab study that found that Amazon facial-recognition software is less accurate with regard to darker-skinned women than lighter-skinned men, and Amazon’s criticism of the study); Tracy Jan, Mortgage Algorithms Found To Have Racial Bias, Wash. Post, Nov. 15, 2018, at A21 (reporting on a University of California at Berkeley study that found that black and Latino home loan customers pay higher interest rates than white or Asian customers on loans processed online or in person); Tony Romm & Craig Timberg, Under Bipartisan Fire from Congress, CEO Insists Google Does Not Take Sides, Wash. Post, Dec. 12, 2018, at A16 (reporting on Congresspeople’s concerns regarding Google algorithms which were voiced at a House Judiciary Committee hearing with Google’s CEO).
  5. See, e.g., Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1288–97 (2008); Natalie Ram, Innovating Criminal Justice, 112 Nw. U. L. Rev. 659 (2018); Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. 1343 (2018).
  6. See, e.g., Margot E. Kaminski, Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability, 92 S. Cal. L. Rev. 1529 (2019); Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633 (2017); Anne L. Washington, How To Argue with an Algorithm: Lessons from the COMPAS-ProPublica Debate, 17 Colo. Tech. L.J. 131 (2018) (arguing for standards governing the information available about algorithms so that their accuracy and fairness can be properly assessed). But see Jon Kleinberg et al., Discrimination in the Age of Algorithms (Nat’l Bureau of Econ. Research, Working Paper No. 25548, 2019), http://www.nber.org/papers/w25548 [https://perma.cc/JU6H-HG3W] (analyzing the potential benefits of algorithms as tools to prove discrimination).
  7. See generally Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015) (discussing and critiquing internet and finance companies’ non-transparent use of data tracking and algorithms to influence and manage people); Anupam Chander, The Racist Algorithm?, 115 Mich. L. Rev. 1023, 1024 (2017) (reviewing Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015)) (arguing that instead of “transparency in the design of the algorithm” that Pasquale argues for, “[w]hat we need . . . is a transparency of inputs and results”) (emphasis omitted).
  8. See, e.g., Aziz Z. Huq, Racial Equity in Algorithmic Criminal Justice, 68 Duke L.J. 1043 (2019) (arguing that current constitutional doctrine is ill-suited to the task of evaluating algorithmic fairness and that current standards offered in the technology literature miss important policy concerns); Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218 (2019) (discussing how past and existing racial inequalities in crime and arrests mean that methods to predict criminal risk based on existing information will result in racial inequality).
  9. See Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.pro­publica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/BA53-JT7V].
  10. Equivant, Practitioner’s Guide to COMPAS Core 7 (2019), http://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf [https://perma.cc/LRY6-RXAH].
  11. See Angwin et al., supra note 8 (“Northpointe’s core product is a set of scores derived from 137 questions that are either answered by defendants or pulled from criminal records. Race is not one of the questions.”).
  12. Id.
  13. Northpointe, along with CourtView Justice Solutions Inc. and Constellation Justice Systems, rebranded to Equivant in January 2017. Equivant, Frequently Asked Questions 1, http://my.courtview.com/rs/322-KWH-233/images/Equivant%20Customer%20FAQ%20-%20FINAL.pdf [https://perma.cc/7HH8-LVQ6].
  14. See William Dieterich et al., COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity, Northpointe 9–10 (July 8, 2016), http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf [https://perma.cc/N5RL-M9RN].
  15. For a critique of ProPublica’s analysis, see Anthony W. Flores et al., False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country To Predict Future Criminals. And It’s Biased Against Blacks.”, 80 Fed. Prob. 38 (2016).
  16. See Dieterich et al., supra note 13, at 9–11.
  17. See Angwin et al., supra note 8 (“In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.”).
  18. See, e.g., Richard Berk et al., Fairness in Criminal Justice Risk Assessments: The State of the Art, Soc. Methods & Res. OnlineFirst 1, 23 (2018), https://journals.sagepub.com/doi/­10.1177/0049124118782533 [https://perma.cc/GG9L-9AEU] (discussing the required trade­off between predictive accuracy and various fairness measures); Alexandra Chouldechova, Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments, 5 Big Data 153, 157 (2017) (demonstrating that recidivism prediction instruments cannot simultaneously meet all fairness criteria where recidivism rates differ across groups because its error rates will be unbalanced across the groups when the instrument achieves predictive parity); Jon Kleinberg et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, 67 LIPIcs 43:1, 43:5–8 (2017), https://drops.dagstuhl.de/opus/volltexte/2017/8156/pdf/LIPIcs-ITCS-2017-43.pdf [https://perma.cc/S9DM-PER2] (demonstrating how difficult it is for algorithms to simultaneously achieve the fairness goals of calibration and balance in predictions involving different groups).
  19. See Bureau of Justice Statistics, U.S. Dep’t of Justice, 2018 Update on Prisoner Recidivism: A 9-Year Follow-up Period (2005–2014) 6 tbl.3 (2018), https://www.bjs.gov/­content/pub/pdf/18upr9yfup0514.pdf [https://perma.cc/3UE3-AS5S] (analyzing rearrests of state prisoners released in 2005 in 30 states and finding that 86.9% of black prisoners and 80.9% of white prisoners were arrested in the nine years following their release); see also Dieterich et al., supra note 13, at 6 (“[I]n comparison with blacks, whites have much lower base rates of general recidivism . . . .”). Of course, the data on recidivism itself may be flawed. This consideration is discussed below. See infra text accompanying notes 33–37.
  20. This is true unless the tool makes no mistakes at all. Kleinberg et al., supra note 17, at 43:5–6.
  21. See infra Section I.A.
  22. For example, Berk et al. consider six different measures of algorithmic fairness. See Berk et al., supra note 17, at 12–15.
  23. See Sam Corbett-Davies & Sharad Goel, The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning (arXiv, Working Paper No. 1808.00023v2, 2018), http://arxiv.org/abs/1808.00023 [https://perma.cc/ML4Y-EY6S].
  24. See Sumegha Garg et al., Tracking and Improving Information in the Service of Fairness (arXiv, Working Paper No. 1904.09942v2, 2019), http://arxiv.org/abs/1904.09942 [https://perma.cc/D8ZN-CJ83].
  25. See Sam Corbett-Davies et al., Algorithmic Decision Making and the Cost of Fairness, 2017 Proc. 23d ACM SIGKDD Int’l Conf. on Knowledge Discovery and Data Mining 797, 805.
  26. See id. (“[E]xplicitly including race as an input feature raises legal and policy complications, and as such it is common to simply exclude features with differential predictive power.”).

Myopic Consumer Law

People make mistakes with debt, partly because the chance to buy now and pay later tempts them to do things that are not in their long-term interest. Lenders sell credit products that exploit this vulnerability. In this Article, I argue that critiques of these products that draw insights from behavioral law and economics have a blind spot: they ignore what the borrowed funds are used for. By evaluating financing transactions in isolation from the underlying purchase, the cost-benefit analysis of consumer financial regulation is truncated and misleading. I show that the same psychological bias that allows someone to be sold an exploitative loan also makes it possible that the exploitative loan benefits them by causing them to purchase a product or service that they should, but would not otherwise, buy. I demonstrate the importance of this effect in a study of tax refund anticipation loans. I find that regulation curtailing these loans increased the use of an alternative credit product and reduced the use of paid tax preparers and the take-up of the earned income tax credit.

Introduction

Behavioral law and economics has had significant influence on the regulation of consumer credit.1 This is both important and justified. It is important because consumer finance is central to the functioning of a modern economy; it is what President Obama called the “lifeblood” during the height of the financial crisis in 2009.2 At the level of individual households, consumer credit is important because the timing of income and expenses is rarely contemporaneous. And yet, credit transactions are fraught. Credit both reflects and perpetuates wide differences in individuals’ economic opportunities and their vulnerability to financial adversity. Credit is more expensive for the poor, and this fact creates a patina of exploitation and abuse over debt transactions that has resulted in extensive state and federal regulation.

The influence of behavioral economics on consumer credit regulation is justified because two features of consumer credit raise doubts about consumers’ ability to make borrowing choices that are in their best interests. The first feature is complexity. Consumer debt often has a complex fee structure, opaque repayment terms, and default consequences that are hard to evaluate.3 The second feature is the tradeoff between current and future purchasing power that is at the heart of every credit transaction. It is the essence of debt that the borrower exchanges her promise to pay amounts in the future for the ability to consume more now. This intertemporal tradeoff is one that individuals often struggle to make properly, and the challenge is especially great for individuals who focus excessively on the short term and who are therefore inclined to borrow impulsively and on terms that they subsequently regret.4 Both complexity and intertemporal choice are areas where behavioral law and economics scholarship is able to traffic in deep intuitions and draw on strong empirical evidence to make recommendations about how to regulate imperfectly rational consumers.5

In this Article, I focus on arguments about consumer finance regulation that draw on research about “present bias,” which is a sort of myopia that causes people to focus on the present and neglect the future. I argue that consumer law scholarship that draws on these insights has itself been myopic. People borrow money in order to buy things, and scholarship has generally neglected to consider what borrowed funds are used for.6 I show that focusing on the terms of a loan, isolated from the good or service that is purchased with the proceeds, leads to misleading conclusions about the benefits to the borrower. Integrating the costs and benefits of the underlying purchase with the terms of the credit transaction can upend standard conclusions about the effects of present bias and relocate efforts to improve consumer welfare from the regulation of financial products to the circumstances that create demand for high-cost credit in the first place. I demonstrate the significance of this theoretical claim by reporting results from a study of tax refund anticipation loans (RALs), which shows how RALs increase the use of paid tax preparers and the take-up of the earned income tax credit (EITC) by low-income households. Because of the size of the EITC, these loans may make present-biased taxpayers better off, even if the loans are designed to exploit their bias.

When considering the benefits of credit transactions for present-biased consumers, why do the motivating purchases matter? The answer is that many goods and services are characterized by significant upfront costs but benefits that are realized only in the future. As I show in Part I, present-biased consumers tend to undervalue products with this temporal pattern of costs and benefits.7 Durable goods, such as homes, cars, and appliances, are like this. Purchasing a durable good involves a significant cash outlay at the time of purchase in exchange for a stream of consumption benefits that are realized over time. In fact, all sorts of choices present this same temporal pattern of immediate costs and future benefits. For example, the benefits of education are mostly realized long after the classroom experience. Applying for social welfare benefits can require an upfront investment of time and effort in exchange for benefits that are received in the future. The EITC, which is the largest federal cash transfer to low-income households,8 is available only to individuals who file a tax return and complete the burdensome earned income credit (EIC) schedule.9 The key point is that when the deferred costs and immediate benefits of certain exploitative credit products are added to the immediate costs and deferred benefits of durable goods and services, the bundled transaction may be one that appeals to a present-biased individual and makes her better off. The exploitative loan tempts the present-biased individual to do something that is in her interest but that she would not otherwise do.10
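
The arithmetic behind this bundling point can be made concrete with the standard beta-delta (quasi-hyperbolic) model of present bias. The sketch below is a minimal illustration with invented dollar amounts; it is not drawn from the Article’s empirical study, and the tax-preparation framing is merely an assumed example of an immediate-cost, deferred-benefit purchase.

```python
# Hypothetical numbers only: a beta-delta sketch of why an "exploitative" loan
# can leave a present-biased consumer better off once the purchase it finances
# is evaluated together with the loan.

BETA, DELTA = 0.5, 1.0  # present bias (beta < 1); no long-run discounting, for simplicity

def perceived_value(now, later):
    """Value as seen at the moment of choice by the present-biased agent."""
    return now + BETA * DELTA * later

def long_run_value(now, later):
    """Value from the unbiased long-run perspective used to judge welfare."""
    return now + DELTA * later

# An immediate-cost, deferred-benefit purchase, e.g. paying a preparer now to
# receive a larger refundable credit later.
purchase = (-200, 300)
# A refund anticipation loan: cash now, repaid out of the later refund.
loan = (200, -260)
bundle = (purchase[0] + loan[0], purchase[1] + loan[1])

for label, flows in [("purchase alone", purchase), ("loan alone", loan), ("bundle", bundle)]:
    print(label, perceived_value(*flows), long_run_value(*flows))
# Prints: purchase alone -50.0 100.0 / loan alone 70.0 -60.0 / bundle 20.0 40.0
```

On these made-up numbers, the biased consumer rejects the worthwhile purchase standing alone (perceived value of -50 despite a long-run value of 100), would take the loan standing alone even though it is a losing proposition (perceived 70, long-run -60), and takes the bundle, which leaves her better off in long-run terms (40) than doing nothing. That is the sense in which a loan designed to exploit present bias can nonetheless cause a beneficial purchase to happen.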

The results from this analysis sound a note of caution about decontextualizing the choices that consumers make. At the most general level, this Article shows that if consumer law is to help imperfectly rational consumers, it is not enough to show that certain goods or services would only be purchased by consumers acting on a bias that operates against their own interests. It must also consider what other choices these consumers are likely to make that depend on that product and how the exploitative product fits into the overall way that they have arranged their lives. The personal affairs of present-biased individuals are likely to be characterized by a variety of biased decisions that may be interconnected in important ways. Although the entire constellation of choices made by present-biased individuals will leave them worse off than if they made the same choices rationally, this does not imply that compelling them to make any one of these choices rationally will leave them better off.11

The second contribution of this Article is to consumer finance regulation specifically. Regulating the substantive terms of consumer credit requires distinguishing between different kinds of loan products and the uses to which the loan proceeds are put. Specifically, secured debt that must be used to purchase goods and services with deferred benefits has different effects on present-biased consumers than general unsecured debt that can be used simply to shift the timing of consumption.12 When we integrate the loan’s terms with the pattern of costs and benefits from the purchase that necessitated the loan, we see that the bundled transaction may in fact be beneficial for present-biased consumers.13 If the bundled transaction is beneficial, then prohibiting credit terms that are designed to tempt present-biased individuals might hurt those whom the ban is meant to help.

Third, and at the level of most direct application, the results of my empirical study have specific implications for the regulation of RALs and refund anticipation checks (RACs). The results sound a warning to regulators about the effects of eliminating these products. RALs disappeared almost entirely following a regulatory change in 2011,14 a change that was celebrated by consumer advocates.15 The near elimination of RALs reduced the use of paid tax preparers, lowered take-up of the EITC, and increased demand for RACs.16 RACs are popular, and RALs have begun to make a comeback, but both credit products are the focus of opposition from advocates and concern from regulators.17 Thus, understanding the role they play in affecting tax compliance and the take-up of valuable social benefits is important and timely.

To be clear, present bias is not the only reason to be suspicious of credit transactions, and the purpose of my analysis is not to provide an all-things-considered appraisal of high-cost credit products. Complexity, unrealistic optimism about repayment prospects, and other psychological biases may cause people to choose financial products that are not in their best interests.18 I agree with scholars who emphasize the problem of complexity and the potential role for regulation in this area.19 But when regulation is motivated by concerns about borrowers’ psychological biases, it must consider not just how those biases generate demand for the product being regulated but also how that product is likely to fit into the life of someone who exhibits that bias more generally.

Part I explains the present bias framework for thinking about credit transactions and describes how present bias has been used to explain demand for three economically important, high-cost credit products. I show how integrating the underlying purchase transaction into the analysis of these credit products can change our conclusions about whether these products are beneficial. In Parts II–V, I report and discuss the results of an original study of the effects of regulating RALs. The results illustrate the theoretical effects I describe in Part I, provide evidence that is relevant for regulating this financial product, and raise hard questions about the intermediating role of the private sector between individuals and the U.S. Treasury. In Part VI, I describe a framework for thinking about the regulation of consumer credit products, paying special attention to RALs.

  1. * Class of 1948 Professor of Scholarly Research in Law, University of Virginia School of Law. Thanks to Jennifer Arlen, Oren Bar-Gill, Gustavo Bobonis, Tom Brennan, Ryan Bubb, Mihir Desai, Brian Galle, Yehonatan Givati, Jacob Goldin, Daniel Hemel, Louis Kaplow, Lewis Kornhauser, Kory Kroft, Day Manoli, Ruth Mason, Patricia McCoy, Alex Raskolnikov, Kyle Rozema, Emily Satterthwaite, David Schizer, Kathryn Spier, Rory Van Loo, David Walker, George Yin, workshop participants at the American Law and Economics Association Annual Meeting, the Columbia Law School-Hebrew University Tax Conference, the University of Toronto, Boston University, Cardozo Law School, New York University, and Harvard Law School. Thanks to the Brookings Institution and AggData LLC for providing data. I am especially indebted to Kent Olson of the UVA Law Library for exceptional research assistance.
  2. See, e.g., Ryan Bubb & Richard H. Pildes, How Behavioral Economics Trims Its Sails and Why, 127 Harv. L. Rev. 1593, 1644–47 (2014).
  3. Address Before a Joint Session of the Congress, 1 Pub. Papers 145, 147 (Feb. 24, 2009).
  4. For a discussion of the importance of complexity and faulty borrower comprehension in consumer credit markets, see Lauren E. Willis, Decisionmaking and the Limits of Disclosure: The Problem of Predatory Lending: Price, 65 Md. L. Rev. 707, 766–98 (2006) [hereinafter Willis, Decisionmaking and the Limits of Disclosure]. Unfortunately, interventions to increase consumer financial literacy do not appear to help remedy these problems. Lauren E. Willis, Against Financial-Literacy Education, 94 Iowa L. Rev. 197, 201 (2008); Lauren E. Willis, The Financial Education Fallacy, 101 Am. Econ. Rev. 429, 429 (2011). Because financial education and disclosure have proven to be largely ineffective, Professor Willis has provocatively argued for an alternative known as “performance-based consumer law.” Lauren E. Willis, Performance-Based Consumer Law, 82 U. Chi. L. Rev. 1309, 1311 (2015).
  5. See Ian M. McDonald, The Global Financial Crisis and Behavioural Economics, 28 Econ. Papers 249, 251 (2009).
  6. I am unaware of any data about the intuitive appeal of complexity and impatience as explanations for why people struggle to evaluate credit contracts. Nevertheless, I trust that most readers, particularly those with home mortgages, will be inclined to agree that understanding all the terms of a secured loan, even when one is trained in law or economics, demands a great deal of time and effort. It is unsurprising then that some do not even make the effort. Judge Posner famously declined to read the “boilerplate” on his own home mortgage. David Lat, Do Lawyers Actually Read Boilerplate Contracts?, Above the Law (June 22, 2010, 2:42 PM), http://abovethelaw.com/2010/06/do-lawyers-actaully-read-boilerplate-contracts-judge-richard-posner-doesnt-do-you/ [https://perma.cc/R574-VCQS]. I also expect that most of us identify with the present-biased individual, who procrastinates when it comes to unpleasant tasks and acts impulsively when it comes to food or leisure. For a review of the literature, see Lee Anne Fennell, Willpower and Legal Policy, 5 Ann. Rev. L. & Soc. Sci. 91 (2009).
  7. Some researchers do think it is broadly relevant what consumers do with the loan proceeds, but none evaluate the bundled loan and purchase together from the perspective of a biased consumer. See, e.g., Shmuel I. Beecher, Yuval Feldman & Orly Lobel, Poor Consumer(s) Law: The Case of High-Cost Credit and Payday Loans, in Legal Applications of Marketing Theory (Jacob Gersen & Joel Steckel eds.) (forthcoming 2020) (manuscript at 10), http://ssrn.com/abstract = 3235810 [https://perma.cc/2ZRB-QQ44].
  8. See discussion infra Section I.A.
  9. Chris Edwards & Veronique de Rugy, Earned Income Tax Credit: Small Benefits, Large Costs, Cato Inst. (Oct. 14, 2015), https://www.cato.org/publications/tax-budget-bulletin/earned-income-tax-credit-small-benefits-large-costs [https://perma.cc/5L9L-RHX9].
  10. On the difficulties of filing for the EITC, see Michelle Lyon Drumbl, Beyond Polemics: Poverty, Taxes, and Noncompliance, 14 eJournal Tax Res. 253, 275–77 (2016); Francine J. Lipman, The Working Poor Are Paying for Government Benefits: Fixing the Hole in the Anti-Poverty Purse, 2003 Wis. L. Rev. 461, 464; George K. Yin et al., Improving the Delivery of Benefits to the Working Poor: Proposals to Reform the Earned Income Tax Credit Program, 11 Am. J. Tax Pol’y 225, 254–56 (1994). In her latest annual report to Congress, however, the National Taxpayer Advocate noted that the IRS has been working to improve EITC outreach and education. Internal Revenue Serv., Nat’l Taxpayer Advoc., Ann. Rep. to Congress 144 (2017).
  11. I say that a loan is exploitative if only biased borrowers want to borrow on its terms. This definition does not imply anything about the profitability of these loans to the lender or about the division of the gains from trade. For a philosophical treatment of exploitation, see Alan Wertheimer, Exploitation 7–8 (1996).
  12. Law and economics scholars will recognize this as an application of the general theory of the second best to intra-personal choice. R.G. Lipsey & R.K. Lancaster, The General Theory of Second Best, 24 Rev. Econ. Stud. 11, 11–12 (1956).
  13. See discussion infra Section I.A.
  14. See discussion infra Section I.A.
  15. See discussion infra Section II.D.
  16. Chi Chi Wu & Jean Ann Fox, Nat’l Consumer Law Ctr. & Consumer Fed’n of Am., The Party’s Over for Quickie Tax Loans: But Traps Remain for Unwary Taxpayers 2 (2012), https://www.nclc.org/images/pdf/pr-reports/report-ral-2012.pdf [https://perma.cc/J9QX-QM­XK] (“While an occasional fringe lender may make a tax-time loan, the sale of RALs as a widespread industry-wide practice is over. RALs will no longer drain the tax refunds of millions of mostly low-income taxpayers.”).
  17. See discussion infra Part II.
  18. Tax RALs are resurgent, albeit in smaller amounts than before. For a sense of the magnitude of this resurgence, there were 35,000 refund loans made in 2014 and approximately one million loans made in 2016. Kevin Wack, Tax Refund Loans Get a Second Life, Am. Banker (June 15, 2016, 2:49 PM), https://www.americanbanker.com/news/tax-refund-loans-get-a-second-life [https://perma.cc/ZG58-WG4M].
  19. Overly optimistic borrowers may borrow too much or too little. See Richard M. Hynes, Overoptimism and Overborrowing, 2004 BYU L. Rev. 127, 131.
  20. See, e.g., Saurabh Bhargava & George Loewenstein, Behavioral Economics and Public Policy 102: Beyond Nudging, 105 Am. Econ. Rev. 396, 396 (2015) (arguing that behavioral economics should leverage gaps in the traditional economic approach that assume fully rational and informed individuals to deliver policy solutions).

A Right to a Human Decision

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision makers. From welfare and employment to bail and other risk assessments, state actors increasingly lean on machine-learning tools to directly allocate goods and coercion among individuals. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that compromise important individual interests. An emerging legal response to such worries is to assert a novel right to a human decision. European law embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is moving in the same direction. But no jurisdiction has defined with precision what that right entails, furnished a clear justification for its creation, or delimited its appropriate domain.

This Article investigates the legal possibilities and normative appeal of a right to a human decision. I begin by sketching its conditions of technological plausibility. This requires the specification of both a feasible domain of machine decisions and the margins along which machine decisions are distinct from human ones. With this technological accounting in hand, I analyze the normative stakes of a right to a human decision. I consider four potential normative justifications: (a) a concern with population-wide accuracy; (b) a grounding in individual subjects’ interests in participation and reason giving; (c) arguments about the insufficiently reasoned or individuated quality of state action; and (d) objections grounded in negative externalities. None of these yields a general justification for a right to a human decision. Instead of being derived from normative first principles, limits to machine decision making are appropriately found in the technical constraints on predictive instruments. Within that domain, concerns about due process, privacy, and discrimination in machine decisions are typically best addressed through a justiciable “right to a well-calibrated machine decision.”

Introduction

Every tectonic technological change—from the first grain domesticated to the first smartphone set abuzz1—begets a new society. Among the ensuing birth pangs are novel anxieties about how power is distributed—how it is to be gained, and how it will be lost. A spate of sudden advances in the computational technology known as machine learning has stimulated the most recent rush of inky public anxiety. These new technologies apply complex algorithms,2 called machine-learning instruments, to vast pools of public and government data so as to execute tasks previously beyond mere human ability.3 Corporate and state actors increasingly lean on these tools to make “decisions that affect people’s lives and livelihoods—from loan approvals, to recruiting, legal sentencing, and college admissions.”4

As a result, many people feel a loss of control over key life decisions.5 Machines, they fear, resolve questions of critical importance on grounds that are beyond individuals’ ken or control.6 Many individuals experience a loss of elementary human agency and a corresponding vulnerability to an inhuman and inhumane machine logic. For some, “the very idea of an algorithmic system making an important decision on the basis of past data seem[s] unfair.”7 Machines, it is said, want fatally for “empathy.”8 For others, machine decisions seem dangerously inscrutable, non-transparent, and so hazardously unpredictable.9 Worse, governments and companies wield these tools freely to taxonomize their populations, predict individual behavior, and even manipulate behavior and preferences in ways that give them a new advantage over the human subjects of algorithmic classification.10 Even the basic terms of political choice seem compromised.11 At the same time that machine learning is poised to recalibrate the ordinary forms of interaction between citizen and government (or big tech), advances in robotics as well as machine learning appear set to displace huge tranches of both blue-collar and white-collar labor markets.12 A fearful future looms, one characterized by massive economic dislocation, wherein people have lost control of many central life choices, and basic consumer and political preferences are no longer really one’s own.

This Article is about one nascent and still inchoate legal response to these fears: the possibility that an individual being assigned a benefit or a coercive intervention has a right to a human decision rather than a decision reached by a purely automated process (a “machine decision”). European law has embraced the idea. American law, especially in the criminal justice domain, is flirting with it.13 My aim in this Article is to test this burgeoning proposal, to investigate its relationship with technological possibilities, and to ascertain whether it is a cogent response to growing distributional, political, and epistemic anxieties. My focus is not on the form of such a right—statutory, constitutional, or treaty-based—or how it is implemented—say, in terms of liability or property rule protection—but more simply on what might ab initio justify its creation.

To motivate this inquiry, consider some of the anxieties unfurling already in public debate: A nursing union, for instance, launched a campaign urging patients to demand human medical judgments rather than technological assessment.14 A majority of patients in a 2018 Accenture survey preferred in-person treatment by a doctor to virtual care.15 When California proposed replacing money bail with a “risk-based pretrial assessment” tool, a state court judge warned that “[t]echnology cannot replace the depth of judicial knowledge, experience, and expertise in law enforcement that prosecutors and defendants’ attorneys possess.”16 In 2018, the City of Flint, Michigan, discontinued the use of a highly effective machine-learning tool designed to identify defective water pipes, reverting under community pressure to human decision making with a far lower hit rate for detecting defective pipes.17 Finally, and perhaps most powerfully, consider the worry congealed in an anecdote told by data scientist Cathy O’Neil: An Arkansas woman named Catherine Taylor is denied federal housing assistance because she fails an automated, “webcrawling[,] data-gathering” background check.18 It is only when “one conscientious human being” takes the trouble to look into the quality of this machine result that it is discovered that Taylor has been red-flagged in error.19 O’Neil’s troubling anecdote powerfully captures the fear that machines will be unfair, incomprehensible, or incompatible with the exercise of elementary human agency: it provides a sharp spur to the inquiry that follows.

The most important formulation of a right to a human decision to date is found in European law. In April 2016, the European Parliament enacted a new regime of data protection in the form of a General Data Protection Regulation (GDPR).20 Unlike the legal regime it superseded,21 the GDPR as implemented in May 2018 is legally mandatory even in the absence of implementing legislation by member states of the European Union (EU).22 Hence, it can be directly enforced in court through hefty financial penalties.23 Article 22 of the GDPR endows a natural individual with “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”24 That right covers private and some (but not all) state entities.25 On its face, it fashions an opt-out of quite general scope from automated decision making.26

The GDPR also has extraterritorial effect.27 It reaches platforms, such as Google and Facebook, that offer services within the EU.28 American law is also making tentative moves toward a similar right to a human decision. In 2016, for example, the Wisconsin Supreme Court held that an algorithmically generated risk score “may not be considered as the determinative factor in deciding whether the offender can be supervised safely and effectively in the community” as a matter of due process.29 That decision precludes full automation of sentencing determinations. There must be a human judge in the loop. The Wisconsin court’s holding is unlikely to prove unique. State deployment of machine learning has, more generally, elicited sharp complaints sounding in procedural justice and fairness terms.30 Further, the Sixth Amendment’s right to a jury trial has to date principally been deployed to resist judicial factfinding.31 But there is no conceptual reason why the Sixth Amendment could not be invoked to preclude at least some forms of algorithmically generated inputs to criminal sentencing. Indeed, it would seem to follow a fortiori that a right precluding a jury’s substitution with a judge would also block its displacement by a mere machine.

In this Article, I start by situating a right to a human decision in its contemporary technological milieu. I can thereby specify the feasible domain of machine decisions. I suggest this comprises decisions taken at high volume in which sufficient historical data exists to generate effective predictions. Importantly, this excludes many matters presently resolved through civil or criminal trials but sweeps in welfare determinations, hiring decisions, and predictive judgments in the criminal justice contexts of bail and sentencing. Second, I examine the margins along which machine decisions are distinct from human ones. My focus is on a group of related technologies known as machine learning. This is the form of artificial intelligence diffusing most rapidly today.32 A right to a human decision cannot be defined or evaluated without some sense of the technical differences between human decision making and decisions reached by these machine-learning technologies. Indeed, careful analysis of how machine learning is designed and implemented reveals that the distinctions between human and machine decisions are less crisp than might first appear. Claims about a right to a human decision, I suggest, are better understood to turn on the timing, and not the sheer fact, of human involvement.
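
To make this feasible domain concrete, the following sketch is my illustration (not the Article's method, and not any actual deployed instrument) of what a purely automated decision amounts to in this predictive sense: a model fit to a large pool of historical cases produces a score for a new case, and a fixed rule converts that score into an allocation of a benefit or a burden. The feature layout, the 0.7 threshold, and the use of Python with scikit-learn are all hypothetical assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical high-volume historical records: each row is a past case
# (three numeric features), each label a past binary outcome.
X_hist = rng.normal(size=(5000, 3))
y_hist = (X_hist @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=5000) > 0).astype(int)

# "Machine learning" in the narrow sense used here is nothing more exotic
# than fitting a predictive model to that historical pool.
model = LogisticRegression().fit(X_hist, y_hist)

# A new case is scored, and a fixed rule converts the score into a decision
# with no human in the loop; the threshold is an arbitrary illustration.
new_case = np.array([[0.2, -1.0, 0.5]])
risk_score = model.predict_proba(new_case)[0, 1]
decision = "deny" if risk_score > 0.7 else "grant"
print(round(float(risk_score), 3), decision)

The point of the sketch is the structure rather than the particular model: the outcome is fully determined by past data and a pre-specified rule, which is precisely the feature that the claimed right to a human decision targets.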

With this technical foundation in hand, I evaluate the right to a human decision in relation to four normative ends it might plausibly be understood to further. A first possibility turns on overall accuracy worries. My second line of analysis takes up the interests of an individual exposed to a machine decision. The most pertinent of these interests hinge upon an individual’s participation in decision making and her opportunity to offer reasons. A third analytic salient tracks ways that a machine instrument might be intrinsically objectionable because it uses a deficient decisional protocol. I focus here on worries about the absence of individualized consideration and a machine’s failure to offer reasoned judgments. Finally, I consider dynamic, system-level effects (i.e., negative spillovers), in particular in relation to social power. None of these arguments ultimately provides sure ground for a legal right to a human decision.

Rather, I suggest that the limits of machine decision making be plotted based on its technical constraints. Machines should not be used when there is no tractable parameter amenable to prediction. For example, if there is no good parameter that tracks job performance, then machine evaluation of employee performance should be abandoned. Nor should machines be used when decision making entails ethical or otherwise morally charged judgments. Most important, I suggest that machine decisions should be subject to a right to a well-calibrated machine decision that folds in due process, privacy, and equality values.33 This is a better response than a right to a human decision to the many highly flawed instruments that governments now deploy.34
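
The Article defers the doctrinal content of that right to a companion piece, but the underlying idea of calibration can be illustrated with a simple diagnostic: a score is well calibrated if cases assigned a given score experience the predicted outcome at roughly that rate, within each protected group as well as overall. The sketch below is my illustration only; the synthetic data, bin width, and group labels are assumptions, not anything drawn from the Article.

import numpy as np

def calibration_table(scores, outcomes, groups, bins=5):
    """Per group and score bin, compare mean predicted score with observed outcome rate."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    rows = []
    for g in np.unique(groups):
        in_group = groups == g
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = in_group & (scores >= lo) & (scores < hi)
            if in_bin.sum() == 0:
                continue
            rows.append((g, f"{lo:.1f}-{hi:.1f}",
                         round(float(scores[in_bin].mean()), 3),    # mean predicted risk
                         round(float(outcomes[in_bin].mean()), 3),  # observed outcome rate
                         int(in_bin.sum())))                        # cases in bin
    return rows

# Synthetic scored cases, well calibrated by construction.
rng = np.random.default_rng(1)
scores = rng.uniform(size=2000)
outcomes = (rng.uniform(size=2000) < scores).astype(int)
groups = rng.choice(["A", "B"], size=2000)

for row in calibration_table(scores, outcomes, groups):
    print(row)

A large gap between the predicted and observed columns for one group but not another is the kind of miscalibration that, on the account sketched here, due process, privacy, and equality law would be asked to police.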

My analysis here focuses on state action that allocates benefits or coercion to individuals—and not on either private action or a broader array of state action—for three reasons. First, salient U.S. legal frameworks, unlike the GDPR’s coverage, are largely (although not exclusively) trained on state action. Accordingly, a focus on state action makes sense in terms of explaining and evaluating the current U.S. regulatory landscape. Second, the range of private uses of algorithmic tools is vast and heterogeneous. Algorithms are now deployed in private activities ranging from Google’s PageRank instrument,35 to “fintech” applied to generate new revenue streams,36 to medical instruments used to calculate stroke risk,37 to engineers’ identification of new stable inorganic compounds.38 Algorithmic tools are also embedded within new applications, such as voice recognition software, translation software, and visual recognition systems.39 In contrast, the state is to date an unimaginative user of machine learning, with a relatively constrained domain of deployments.40 This makes for a more straightforward analysis. Third, where the state does use algorithmic tools, their use often results directly or indirectly in deprivations of liberty, freedom of movement, bodily integrity, or basic income. These normatively freighted machine decisions present arguably the most compelling circumstances for adopting a right to a human decision and so are a useful focus of normative inquiry.

The Article proceeds in three steps. Part I catalogs ways in which law has crafted, or could craft, a right to a human decision. This taxonomical enterprise demonstrates that such a right is far from fanciful. Part II defines the class of computational tools to be considered, explores the manner in which such instruments can be used, and teases out how the decisions they produce are (or are not) distinct from human decisions. Doing so helps illuminate the plausible forms of a right to a human decision. Part III then turns to the potential normative foundations of such a right. It provides a careful taxonomy of those grounds and then shows why each falls short. Finally, a brief conclusion inverts the Article’s analytic lens to gesture at the possibility that a right to a well-calibrated machine decision can be imagined, and even defended, on more persuasive terms than a right to a human decision.

  1. * Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School. Thanks to Faith Laken for terrific research aid. Thanks to Tony Casey, David Driesen, Lauryn Gouldin, Daniel Hemel, Darryl Li, Anup Malani, Richard McAdams, Eric Posner, Julie Roin, Lior Strahilevitz, Rebecca Wexler, and Annette Zimmermann for thoughtful conversation. Workshop participants at the University of Chicago, Stanford Law School, the University of Houston, William and Mary Law School, and Syracuse University School of Law also provided thoughtful feedback. I am grateful to Christiana Zgourides, Erin Brown, and the other law review editors for their careful work on this Article. All errors are mine, not the machine’s.
  2. For recent treatments of these technological causes of social transformations, see generally James C. Scott, Against the Grain: A Deep History of the Earliest States (2017), and Ravi Agrawal, India Connected: How the Smartphone is Transforming the World’s Largest Democracy (2018).
  3. An algorithm is simply a “well-defined set of steps for accomplishing a certain goal.” Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633, 640 n.14 (2017); see also Thomas H. Cormen et al., Introduction to Algorithms 5 (3d ed. 2009) (defining an algorithm as “any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output” (emphasis omitted)). The task of computing, at its atomic level, comprises the execution of serial algorithms. Martin Erwig, Once Upon an Algorithm: How Stories Explain Computing 1–4 (2017).
  4. Machine learning is a general purpose technology that, in broad terms, encompasses “algorithms and systems that improve their knowledge or performance with experience.” Peter Flach, Machine Learning: The Art and Science of Algorithms that Make Sense of Data 3 (2012); see also Ethem Alpaydin, Introduction to Machine Learning 2–3 (3d ed. 2014) (defining machine learning in similar terms). For the uses of machine learning, see Susan Athey, Beyond Prediction: Using Big Data for Policy Problems, 355 Science 483, 483 (2017) (noting the use of machine learning to solve prediction problems). I discuss the technological scope of the project, and define relevant terms, infra at text accompanying note 111. I will use the terms “algorithmic tools” and “machine learning” interchangeably, even though the class of algorithms is technically much larger.
  5. Kartik Hosanagar & Vivian Jair, We Need Transparency in Algorithms, But Too Much Can Backfire, Harv. Bus. Rev. (July 23, 2018), https://hbr.org/2018/07/we-need-transparency-in-algorithms-but-too-much-can-backfire [https://perma.cc/7KQ9-QMF3]; accord Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, 105 Geo. L.J. 1147, 1149 (2017).
  6. Shoshana Zuboff, Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, 30 J. Info. Tech. 75, 75 (2015) (describing a “new form of information capitalism [that] aims to predict and modify human behavior as a means to produce revenue and market control”).
  7. See, e.g., Rachel Courtland, The Bias Detectives, 558 Nature 357, 357 (2018) (documenting concerns among the public that algorithmic risk scores for detecting child abuse fail to account for an “effort . . . to turn [a] life around”).
  8. Reuben Binns et al., ‘It’s Reducing a Human Being to a Percentage’; Perceptions of Justice in Algorithmic Decisions, 2018 CHI Conf. on Hum. Factors Computing Systems 9 (emphasis omitted).
  9. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor 168 (2017).
  10. Will Knight, The Dark Secret at the Heart of AI, MIT Tech. Rev. (Apr. 11, 2017), https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [https://perma.cc/L94L-LYTJ] (“The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.”).
  11. For consideration of these issues, see Mariano-Florentino Cuéllar & Aziz Z. Huq, Economies of Surveillance, 133 Harv. L. Rev. 1280 (2020), and Mariano-Florentino Cuéllar & Aziz Z. Huq, Privacy’s Political Economy and the State of Machine Learning: An Essay in Honor of Stephen J. Schulhofer, N.Y.U. Ann. Surv. Am. L. (forthcoming 2020).
  12. See, e.g., Daniel Kreiss & Shannon C. McGregor, Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google with Campaigns During the 2016 U.S. Presidential Cycle, 35 Pol. Comm. 155, 156–57 (2018) (describing the role of technology firms in shaping campaigns).
  13. For what has become the standard view, see Larry Elliott, Robots Will Take Our Jobs. We’d Better Plan Now, Before It’s Too Late, Guardian (Feb. 1, 2018, 1:00 AM), https://www.theguardian.com/commentisfree/2018/feb/01/robots-take-our-jobs-amazon-go-seattle [https://perma.cc/2CFP-3JJV]. For a more nuanced account, see Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future 282–83 (2015).
  14. See infra text accompanying notes 70–73.
  15. ‘When It Matters Most, Insist on a Registered Nurse,’ Nat’l Nurses United, https://www.nationalnursesunited.org/insist-registered-nurse [https://perma.cc/MB66-XTXW] (last visited Jan. 19, 2020).
  16. Accenture Consulting, 2018 Consumer Survey on Digital Health: US Results 9 (2018), https://www.accenture.com/_acnmedia/PDF-71/Accenture-Health-2018-Consumer-Survey-Digital-Health.pdf#zoom=50 [https://perma.cc/TU5F-9J82].
  17. Quentin L. Kopp, Replacing Judges with Computers Is Risky, Harv. L. Rev. Blog (Feb. 20, 2018), https://blog.harvardlawreview.org/replacing-judges-with-computers-is-risky/ [https://perma.cc/WS5S-ARVF]. On the current state of affairs, see California Set to Greatly Expand Controversial Pretrial Risk Assessments, Filter (Aug. 7, 2019), https://filtermag.org/california-slated-to-greatly-expand-controversial-pretrial-risk-assessments/ [https://perma.cc/2FNX-U3C9].
  18. Alexis C. Madrigal, How a Feel-Good AI Story Went Wrong in Flint, Atlantic (Jan. 3, 2019), https://www.theatlantic.com/technology/archive/2019/01/how-machine-learning-found-flints-lead-pipes/578692/ [https://perma.cc/V8VA-F22W].
  19. Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 152–53 (2016).
  20. Id. at 153.
  21. Regulation 2016/679, of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) (EU) [hereinafter GDPR]; see also Christina Tikkinen-Piri, Anna Rohunen & Jouni Markkula, EU General Data Protection Regulation: Changes and Implications for Personal Data Collecting Companies, 34 Computer L. & Security Rev. 134, 134–35 (2018) (documenting the enactment process of the GDPR).
  22. See Directive 95/46, of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, art. 1, 1995 O.J. (L 281) (EC) [hereinafter Directive 95/46].
  23. Bryce Goodman & Seth Flaxman, European Union Regulations on Algorithmic Decision Making and a “Right to Explanation,” AI Mag., Fall 2017, at 51–52 (explaining the difference between a non-binding directive and a legally binding regulation under European law).
  24. Id. at 52.
  25. GDPR, supra note 20, arts. 4(1), 22(1) (inter alia, defining “data subject”).
  26. See id. art. 4(7)–(8) (defining “controller” and “processor” as key scope terms). The Regulation, however, does not apply to criminal and security investigations. Id. art. 2(2)(d).
  27. As I explain below, this is not the only provision of the GDPR that can be interpreted to create a right to a human decision. See infra text accompanying notes 53–58.
  28. GDPR, supra note 20, art. 3.
  29. There is sharp divergence in the scholarship over the GDPR’s extraterritorial scope, which ranges from the measured, see Griffin Drake, Note, Navigating the Atlantic: Understanding EU Data Privacy Compliance Amidst a Sea of Uncertainty, 91 S. Cal. L. Rev. 163, 166 (2017) (documenting new legal risks to American companies pursuant to the GDPR), to the alarmist, see Mira Burri, The Governance of Data and Data Flows in Trade Agreements: The Pitfalls of Legal Adaptation, 51 U.C. Davis L. Rev. 65, 92 (2017) (“The GDPR is, in many senses, excessively burdensome and with sizeable extraterritorial effects.”).
  30. State v. Loomis, 881 N.W.2d 749, 760 (Wis. 2016).
  31. See, e.g., Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks, ProPublica 2 (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/Q9ZU-VY6J] (criticizing machine-learning instruments in the criminal justice context).
  32. See, e.g., Apprendi v. New Jersey, 530 U.S. 466, 477 (2000) (explaining that the Fifth and Sixth Amendments “indisputably entitle a criminal defendant to a jury determination that [he] is guilty of every element of the crime with which he is charged, beyond a reasonable doubt” (alteration in original) (internal quotation marks omitted) (quoting United States v. Gaudin, 515 U.S. 506, 510 (1995))).
  33. See infra text accompanying note 88 (defining machine learning). I am not alone in this focus. Legal scholars are paying increasing attention to new algorithmic technologies. For leading examples, see Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. Rev. 93, 109 (2014) (arguing for “procedural data due process [to] regulate the fairness of Big Data’s analytical processes with regard to how they use personal data (or metadata . . . )”); Andrew Guthrie Ferguson, Big Data and Predictive Reasonable Suspicion, 163 U. Pa. L. Rev. 327, 383–84 (2015) (discussing the possible use of algorithmic prediction in determining “reasonable suspicion” in criminal law); Kroll et al., supra note 2, at 636–37; Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871, 929 (2016) (developing a “framework” for integrating machine-learning technologies into Fourth Amendment analysis).
  34. A forthcoming companion piece develops a more detailed account of how this right would be vindicated in practice through a mix of litigation and regulation. See Aziz Z. Huq, Constitutional Rights in the Machine Learning State, 105 Cornell L. Rev. (forthcoming 2020).
  35. For a catalog, see Meredith Whittaker et al., AI Now Inst., AI Now Report 2018, at 18–22 (2018), https://ainowinstitute.org/AI_Now_2018_Report.pdf [https://perma.cc/2BCG-M454].
  36. See, e.g., David Segal, The Dirty Little Secrets of Search: Why One Retailer Kept Popping Up as No. 1, N.Y. Times, Feb. 13, 2011, at BU1.
  37. See Falguni Desai, The Age of Artificial Intelligence in Fintech, Forbes (June 30, 2016, 10:42 PM), http://www.forbes.com/sites/falgunidesai/2016/06/30/the-age-of-artificial-intelligence-in-fintech [https://perma.cc/DG8N-8NVS] (describing how fintech firms use artificial intelligence to improve investment strategies and analyze consumer financial activity).
  38. See, e.g., Benjamin Letham, Cynthia Rudin, Tyler H. McCormick & David Madigan, Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model, 9 Annals Applied Stat. 1350, 1350 (2015).
  39. See, e.g., Paul Raccuglia et al., Machine-Learning-Assisted Materials Discovery Using Failed Experiments, 533 Nature 73, 73 (2016) (identifying new vanadium compounds).
  40. Yann LeCun et al., Deep Learning, 521 Nature 436, 438–41 (2015).
  41. See infra text accompanying notes 117–21 (describing state uses of machine learning).