
A Right to a Human Decision

Article — Volume 106, Issue 3

106 Va. L. Rev. 611
By Aziz Z. Huq*

* Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School. Thanks to Faith Laken for terrific research aid. Thanks to Tony Casey, David Driesen, Lauryn Gouldin, Daniel Hemel, Darryl Li, Anup Malani, Richard McAdams, Eric Posner, Julie Roin, Lior Strahilevitz, Rebecca Wexler, and Annette Zimmermann for thoughtful conversation. Workshop participants at the University of Chicago, Stanford Law School, the University of Houston, William and Mary Law School, and Syracuse University School of Law also provided thoughtful feedback. I am grateful to Christiana Zgourides, Erin Brown, and the other law review editors for their careful work on this Article. All errors are mine, not the machine’s.

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision makers. From welfare and employment to bail and other risk assessments, state actors increasingly lean on machine-learning tools to directly allocate goods and coercion among individuals. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that compromise important individual interests. An emerging legal response to such worries is to assert a novel right to a human decision. European law embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is moving in the same direction. But no jurisdiction has defined with precision what that right entails, furnished a clear justification for its creation, or defined its appropriate domain.

This Article investigates the legal possibilities and normative appeal of a right to a human decision. I begin by sketching its conditions of technological plausibility. This requires the specification of both a feasible domain of machine decisions and the margins along which machine decisions are distinct from human ones. With this technological accounting in hand, I analyze the normative stakes of a right to a human decision. I consider four potential normative justifications: (a) a concern with population-wide accuracy; (b) a grounding in individual subjects’ interests in participation and reason giving; (c) arguments about the insufficiently reasoned or individuated quality of state action; and (d) objections grounded in negative externalities. None of these yields a general justification for a right to a human decision. Instead of being derived from normative first principles, limits to machine decision making are appropriately found in the technical constraints on predictive instruments. Within that domain, concerns about due process, privacy, and discrimination in machine decisions are typically best addressed through a justiciable “right to a well-calibrated machine decision.”

Introduction

Every tectonic technological change—from the first grain domesticated to the first smartphone set abuzz1—begets a new society. Among the ensuing birth pangs are novel anxieties about how power is distributed—how it is to be gained, and how it will be lost. A spate of sudden advances in the computational technology known as machine learning has stimulated the most recent rush of inky public anxiety. These new technologies apply complex algorithms,2 called machine-learning instruments, to vast pools of public and government data so as to execute tasks previously beyond mere human ability.3 Corporate and state actors increasingly lean on these tools to make “decisions that affect people’s lives and livelihoods—from loan approvals, to recruiting, legal sentencing, and college admissions.”4

As a result, many people feel a loss of control over key life decisions.5 Machines, they fear, resolve questions of critical importance on grounds that are beyond individuals’ ken or control.6 Many individuals experience a loss of elementary human agency and a corresponding vulnerability to an inhuman and inhumane machine logic. For some, “the very idea of an algorithmic system making an important decision on the basis of past data seem[s] unfair.”7 Machines, it is said, want fatally for “empathy.”8 For others, machine decisions seem dangerously inscrutable, non-transparent, and so hazardously unpredictable.9 Worse, governments and companies wield these tools freely to taxonomize their populations, predict individual behavior, and even manipulate behavior and preferences in ways that give them a new advantage over the human subjects of algorithmic classification.10 Even the basic terms of political choice seem compromised.11 At the same time that machine learning is poised to recalibrate the ordinary forms of interaction between citizen and government (or big tech), advances in robotics as well as machine learning appear set to displace huge tranches of both blue-collar and white-collar labor markets.12 A fearful future looms, one characterized by massive economic dislocation, wherein people have lost control of many central life choices, and basic consumer and political preferences are no longer really their own.

This Article is about one nascent and still inchoate legal response to these fears: the possibility that an individual being assigned a benefit or a coercive intervention has a right to a human decision rather than a decision reached by a purely automated process (a “machine decision”). European law has embraced the idea. American law, especially in the criminal justice domain, is flirting with it.13 My aim in this Article is to test this burgeoning proposal, to investigate its relationship with technological possibilities, and to ascertain whether it is a cogent response to growing distributional, political, and epistemic anxieties. My focus is not on the form of such a right—statutory, constitutional, or treaty-based—or how it is implemented—say, in terms of liability or property rule protection—but more simply on what might ab initio justify its creation.

To motivate this inquiry, consider some of the anxieties unfurling already in public debate: A nursing union, for instance, launched a campaign urging patients to demand human medical judgments rather than technological assessment.14 And in a 2018 Accenture survey, a majority of patients preferred treatment by a doctor in person to virtual care.15 When California proposed replacing money bail with a “risk-based pretrial assessment” tool, a state court judge warned that “[t]echnology cannot replace the depth of judicial knowledge, experience, and expertise in law enforcement that prosecutors and defendants’ attorneys possess.”16 In 2018, the City of Flint, Michigan, discontinued the use of a highly effective machine-learning tool designed to identify defective water pipes, reverting under community pressure to human decision making with a far lower hit rate for detecting defective pipes.17 Finally, and perhaps most powerfully, consider the worry congealed in an anecdote told by data scientist Cathy O’Neil: An Arkansas woman named Catherine Taylor is denied federal housing assistance because she fails an automated, “webcrawling[,] data-gathering” background check.18 It is only when “one conscientious human being” takes the trouble to look into the quality of this machine result that it is discovered that Taylor has been red-flagged in error.19 O’Neil’s plainly troubling anecdote powerfully captures the fear that machines will be unfair, incomprehensive, or incompatible with the flexing of elementary human agency: it provides a sharp spur to the inquiry that follows.

The most important formulation of a right to a human decision to date is found in European law. In April 2016, the European Parliament enacted a new regime of data protection in the form of a General Data Protection Regulation (GDPR).20 Unlike the legal regime it superseded,21 the GDPR as implemented in May 2018 is legally mandatory even in the absence of implementing legislation by member states of the European Union (EU).22 Hence, it can be directly enforced in court through hefty financial penalties.23 Article 22 of the GDPR endows a natural individual with “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”24 That right covers private and some (but not all) state entities.25 On its face, it fashions an opt-out of quite general scope from automated decision making.26

The GDPR also has extraterritorial effect.27 It reaches platforms, such as Google and Facebook, that offer services within the EU.28 And American law is also making tentative moves toward a similar right to a human decision. In 2016, for example, the Wisconsin Supreme Court held that an algorithmically generated risk score “may not be considered as the determinative factor in deciding whether the offender can be supervised safely and effectively in the community” as a matter of due process.29 That decision precludes full automation of bail determinations. There must be a human judge in the loop. The Wisconsin court’s holding is unlikely to prove unique. State deployment of machine learning has, more generally, elicited sharp complaints sounding in procedural justice and fairness terms.30 Further, the Sixth Amendment’s right to a jury trial has to date principally been deployed to resist judicial factfinding.31 But there is no conceptual reason why the Sixth Amendment could not be invoked to preclude at least some forms of algorithmically generated inputs to criminal sentencing. Indeed, it would seem to follow a fortiori that a right precluding a jury’s substitution with a judge would also block its displacement by a mere machine.

In this Article, I start by situating a right to a human decision in its contemporary technological milieu. I can thereby specify the feasible domain of machine decisions. I suggest this comprises decisions taken at high volume in which sufficient historical data exists to generate effective predictions. Importantly, this excludes many matters presently resolved through civil or criminal trials but sweeps in welfare determinations, hiring decisions, and predictive judgments in the criminal justice contexts of bail and sentencing. Second, I examine the margins along which machine decisions are distinct from human ones. My focus is on a group of related technologies known as machine learning. This is the form of artificial intelligence diffusing most rapidly today.32 A right to a human decision cannot be defined or evaluated without some sense of the technical differences between human decision making and decisions reached by these machine-learning technologies. Indeed, careful analysis of how machine learning is designed and implemented reveals that the distinctions between human and machine decisions are less crisp than might first appear. Claims about a right to a human decision, I suggest, are better understood to turn on the timing, and not the sheer fact, of human involvement.

With this technical foundation in hand, I evaluate the right to a human decision in relation to four normative ends it might plausibly be understood to further. A first possibility turns on overall accuracy worries. My second line of analysis takes up the interests of an individual exposed to a machine decision. The most pertinent of these interests hinge upon an individual’s participation in decision making and her opportunity to offer reasons. A third analytic salient tracks ways that a machine instrument might be intrinsically objectionable because it uses a deficient decisional protocol. I focus here on worries about the absence of individualized consideration and a machine’s failure to offer reasoned judgments. Finally, I consider dynamic, system-level effects (i.e., negative spillovers), in particular in relation to social power. None of these arguments ultimately provides sure ground for a legal right to a human decision.

Rather, I suggest that the limits of machine decision making be plotted based on its technical constraints. Machines should not be used when there is no tractable parameter amenable to prediction. For example, if there is no good parameter that tracks job performance, then machine evaluation of employees should be abandoned. Nor should machines be used when decision making entails ethical or otherwise morally charged judgments. Most important, I suggest that machine decisions should be subject to a right to a well-calibrated machine decision that folds in due process, privacy, and equality values.33 For the many highly flawed instruments that governments now deploy, this is a better response than a right to a human decision.34

My analysis here focuses on state action that allocates benefits or imposes coercion on individuals—and not on either private action or a broader array of state action—for three reasons. First, salient U.S. legal frameworks, unlike the GDPR’s coverage, are largely (although not exclusively) trained on state action. Accordingly, a focus on state action makes sense in terms of explaining and evaluating the current U.S. regulatory landscape. Second, the range of private uses of algorithmic tools is vast and heterogeneous. Algorithms are now deployed in private activities ranging from Google’s PageRank instrument,35 to “fintech” applied to generate new revenue streams,36 to medical instruments used to calculate stroke risk,37 to engineers’ identification of new stable inorganic compounds.38 Algorithmic tools are also embedded within new applications, such as voice recognition software, translation software, and visual recognition systems.39 In contrast, the state is to date an unimaginative user of machine learning, with a relatively constrained domain of deployments.40 This makes for a more straightforward analysis. Third, where the state does use algorithmic tools, their use often results directly or indirectly in deprivations of liberty, freedom of movement, bodily integrity, or basic income. These normatively freighted machine decisions present arguably the most compelling circumstances for adopting a right to a human decision and so are a useful focus of normative inquiry.

The Article proceeds in three steps. Part I catalogs ways in which law has crafted, or could craft, a right to a human decision. This taxonomical enterprise demonstrates that such a right is far from fanciful. Part II defines the class of computational tools to be considered, explores the manner in which such instruments can be used, and teases out how they are (or are not) distinct from human decisions. Doing so helps illuminate the plausible forms of a right to a human decision. Part III then turns to the potential normative foundations of such a right. It provides a careful taxonomy of those grounds. It then shows why they all fall short. Finally, a brief conclusion inverts the Article’s analytic lens to gesture at the possibility that a right to a well-calibrated machine decision can be imagined, and even defended, on more persuasive terms than a right to a human decision.

  1. For recent treatments of these technological causes of social transformations, see generally James C. Scott, Against the Grain: A Deep History of the Earliest States (2017), and Ravi Agrawal, India Connected: How the Smartphone is Transforming the World’s Largest Democracy (2018).
  2. An algorithm is simply a “well-defined set of steps for accomplishing a certain goal.” Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633, 640 n.14 (2017); see also Thomas H. Cormen et al., Introduction to Algorithms 5 (3d ed. 2009) (defining an algorithm as “any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output” (emphasis omitted)). The task of computing, at its atomic level, comprises the execution of serial algorithms. Martin Erwig, Once Upon an Algorithm: How Stories Explain Computing 1–4 (2017).
  3. Machine learning is a general purpose technology that, in broad terms, encompasses “algorithms and systems that improve their knowledge or performance with experience.” Peter Flach, Machine Learning: The Art and Science of Algorithms that Make Sense of Data 3 (2012); see also Ethem Alpaydin, Introduction to Machine Learning 2–3 (3d ed. 2014) (defining machine learning in similar terms). For the uses of machine learning, see Susan Athey, Beyond Prediction: Using Big Data for Policy Problems, 355 Science 483, 483 (2017) (noting the use of machine learning to solve prediction problems). I discuss the technological scope of the project, and define relevant terms, infra at text accompanying note 111. I will use the terms “algorithmic tools” and “machine learning” interchangeably, even though the class of algorithms is technically much larger.
  4. Kartik Hosanagar & Vivian Jair, We Need Transparency in Algorithms, But Too Much Can Backfire, Harv. Bus. Rev. (July 23, 2018), https://hbr.org/2018/07/we-need-transparency-in-algorithms-but-too-much-can-backfire [https://perma.cc/7KQ9-QMF3]; accord Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, 105 Geo. L.J. 1147, 1149 (2017).
  5. Shoshana Zuboff, Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, 30 J. Info. Tech. 75, 75 (2015) (describing a “new form of information capitalism [that] aims to predict and modify human behavior as a means to produce revenue and market control”).
  6. See, e.g., Rachel Courtland, The Bias Detectives, 558 Nature 357, 357 (2018) (documenting concerns among the public that algorithmic risk scores for detecting child abuse fail to account for an “effort . . . to turn [a] life around”).
  7. Reuben Binns et al., ‘It’s Reducing a Human Being to a Percentage’; Perceptions of Justice in Algorithmic Decisions, 2018 CHI Conf. on Hum. Factors Computing Systems 9 (emphasis omitted).
  8. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor 168 (2017).
  9. Will Knight, The Dark Secret at the Heart of AI, MIT Tech. Rev. (Apr. 11, 2017), https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/ [https://perma.cc/L94L-LYTJ] (“The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.”).
  10. For consideration of these issues, see Mariano-Florentino Cuéllar & Aziz Z. Huq, Economies of Surveillance, 133 Harv. L. Rev. 1280 (2020), and Mariano-Florentino Cuéllar & Aziz Z. Huq, Privacy’s Political Economy and the State of Machine Learning: An Essay in Honor of Stephen J. Schulhofer, N.Y.U. Ann. Surv. Am. L. (forthcoming 2020).
  11. See, e.g., Daniel Kreiss & Shannon C. McGregor, Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google with Campaigns During the 2016 U.S. Presidential Cycle, 35 Pol. Comm. 155, 156–57 (2018) (describing the role of technology firms in shaping campaigns).
  12. For what has become the standard view, see Larry Elliott, Robots Will Take Our Jobs. We’d Better Plan Now, Before It’s Too Late, Guardian (Feb. 1, 2018, 1:00 AM), https://www.theguardian.com/commentisfree/2018/feb/01/robots-take-our-jobs-amazon-go-seattle [https://perma.cc/2CFP-3JJV]. For a more nuanced account, see Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future 282–83 (2015).
  13. See infra text accompanying notes 70–73.
  14. ‘When It Matters Most, Insist on a Registered Nurse,’ Nat’l Nurses United, https://www.nationalnursesunited.org/insist-registered-nurse [https://perma.cc/MB66-XTXW] (last visited Jan. 19, 2020).
  15. Accenture Consulting, 2018 Consumer Survey on Digital Health: US Results 9 (2018), https://www.accenture.com/_acnmedia/PDF-71/Accenture-Health-2018-Consumer-Survey-Digital-Health.pdf#zoom=50 [https://perma.cc/TU5F-9J82].
  16. Quentin L. Kopp, Replacing Judges with Computers Is Risky, Harv. L. Rev. Blog (Feb. 20, 2018), https://blog.harvardlawreview.org/replacing-judges-with-computers-is-risky/ [https://perma.cc/WS5S-ARVF]. On the current state of affairs, see California Set to Greatly Expand Controversial Pretrial Risk Assessments, Filter (Aug. 7, 2019), https://filtermag.org/california-slated-to-greatly-expand-controversial-pretrial-risk-assessments/ [https://perma.cc/2FNX-U3C9].
  17. Alexis C. Madrigal, How a Feel-Good AI Story Went Wrong in Flint, Atlantic (Jan. 3, 2019), https://www.theatlantic.com/technology/archive/2019/01/how-machine-learning-found-flints-lead-pipes/578692/ [https://perma.cc/V8VA-F22W].
  18. Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 152–53 (2016).
  19. Id. at 153.
  20. Regulation 2016/679, of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) (EU) [hereinafter GDPR]; see also Christina Tikkinen-Piri, Anna Rohunen & Jouni Markkula, EU General Data Protection Regulation: Changes and Implications for Personal Data Collecting Companies, 34 Computer L. & Security Rev. 134, 134–35 (2018) (documenting the enactment process of the GDPR).
  21. See Directive 95/46, of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, art. 1, 1995 O.J. (L 281) (EC) [hereinafter Directive 95/46].
  22. Bryce Goodman & Seth Flaxman, European Union Regulations on Algorithmic Decision Making and a “Right to Explanation,” AI Mag., Fall 2017, at 51–52 (explaining the difference between a non-binding directive and a legally binding regulation under European law).
  23. Id. at 52.
  24. GDPR, supra note 20, arts. 4(1), 22(1) (inter alia, defining “data subject”).
  25. See id. art. 4(7)–(8) (defining “controller” and “processor” as key scope terms). The Regulation, however, does not apply to criminal and security investigations. Id. art. 2(2)(d).
  26. As I explain below, this is not the only provision of the GDPR that can be interpreted to create a right to a human decision. See infra text accompanying notes 53–58.
  27. GDPR, supra note 20, art. 3.
  28. There is sharp divergence in the scholarship over the GDPR’s extraterritorial scope, which ranges from the measured, see Griffin Drake, Note, Navigating the Atlantic: Understanding EU Data Privacy Compliance Amidst a Sea of Uncertainty, 91 S. Cal. L. Rev. 163, 166 (2017) (documenting new legal risks to American companies pursuant to the GDPR), to the alarmist, see Mira Burri, The Governance of Data and Data Flows in Trade Agreements: The Pitfalls of Legal Adaptation, 51 U.C. Davis L. Rev. 65, 92 (2017) (“The GDPR is, in many senses, excessively burdensome and with sizeable extraterritorial effects.”).
  29. State v. Loomis, 881 N.W.2d 749, 760 (Wis. 2016).
  30. See, e.g., Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks, ProPublica 2 (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/Q9ZU-VY6J] (criticizing machine-learning instruments in the criminal justice context).
  31. See, e.g., Apprendi v. New Jersey, 530 U.S. 466, 477 (2000) (explaining that the Fifth and Sixth Amendments “indisputably entitle a criminal defendant to a jury determination that [he] is guilty of every element of the crime with which he is charged, beyond a reasonable doubt” (alteration in original) (internal quotation marks omitted) (quoting United States v. Gaudin, 515 U.S. 506, 510 (1995))).
  32. See infra text accompanying note 88 (defining machine learning). I am not alone in this focus. Legal scholars are paying increasing attention to new algorithmic technologies. For leading examples, see Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. Rev. 93, 109 (2014) (arguing for “procedural data due process [to] regulate the fairness of Big Data’s analytical processes with regard to how they use personal data (or metadata . . . )”); Andrew Guthrie Ferguson, Big Data and Predictive Reasonable Suspicion, 163 U. Pa. L. Rev. 327, 383–84 (2015) (discussing the possible use of algorithmic prediction in determining “reasonable suspicion” in criminal law); Kroll et al., supra note 2, at 636–37; Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871, 929 (2016) (developing a “framework” for integrating machine-learning technologies into Fourth Amendment analysis).
  33. A forthcoming companion piece develops a more detailed account of how this right would be vindicated in practice through a mix of litigation and regulation. See Aziz Z. Huq, Constitutional Rights in the Machine Learning State, 105 Cornell L. Rev. (forthcoming 2020).
  34. For a catalog, see Meredith Whittaker et al., AI Now Inst., AI Now Report 2018, at 18–22 (2018), https://ainowinstitute.org/AI_Now_2018_Report.pdf [https://perma.cc/2BCG-M454].
  35. See, e.g., David Segal, The Dirty Little Secrets of Search: Why One Retailer Kept Popping Up as No. 1, N.Y. Times, Feb. 13, 2011, at BU1.
  36. See Falguni Desai, The Age of Artificial Intelligence in Fintech, Forbes (June 30, 2016, 10:42 PM), http://www.forbes.com/sites/falgunidesai/2016/06/30/the-age-of-artificial-intelligence-in-fintech [https://perma.cc/DG8N-8NVS] (describing how fintech firms use artificial intelligence to improve investment strategies and analyze consumer financial activity).
  37. See, e.g., Benjamin Letham, Cynthia Rudin, Tyler H. McCormick & David Madigan, Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model, 9 Annals Applied Stat. 1350, 1350 (2015).
  38. See, e.g., Paul Raccuglia et al., Machine-Learning-Assisted Materials Discovery Using Failed Experiments, 533 Nature 73, 73 (2016) (identifying new vanadium compounds).
  39. Yann LeCun et al., Deep Learning, 521 Nature 436, 438–41 (2015).
  40. See infra text accompanying notes 117–21 (describing state uses of machine learning).
