Designing Business Forms to Pursue Social Goals

The long-standing debate about the purpose and role of business firms has recently regained momentum. Business firms face growing pressure to pursue social goals, and benefit corporation statutes have proliferated across many U.S. states. This trend is largely based on the idea that firms increase long-term shareholder value when they contribute (or appear to contribute) to society. Contrary to this trend, this Article argues that the pressing issue is whether policies to create social impact actually generate value for third-party beneficiaries—rather than for shareholders. Because it is difficult to measure social impact with precision, the design of legal forms for firms that pursue social missions should incorporate organizational structures that generate both the incentives and the competence to pursue such missions effectively. Specifically, firms that commit to transacting with different types of disadvantaged groups exhibit these attributes and should thus serve as the basis for designing legal forms.

While firms with such a commitment may be created using a variety of control and contractual mechanisms, the related transaction costs tend to be very high. This Article develops a social enterprise legal form that draws on the legal regime for community development financial institutions (CDFIs) and European legal forms for work-integration social enterprises (WISEs). This form would certify to investors, consumers, and governments that designated firms are committed to operating as social enterprises. By obviating the need for costly social impact measurement, the form would facilitate the provision of subsidy-donations to social enterprises from multiple groups, particularly investors (through below-market investment) and consumers (through premiums over market prices). Thus, the social enterprise form would be to altruistic investors and consumers what the nonprofit form is to donors.

Moreover, the proposal could facilitate the flow of investments by foundations in social enterprises (known as program-related investments, “PRIs”) because it would help foundations verify the social impact of their investees. In addition, by giving subsidy-providers greater assurance that social enterprises pursue social missions effectively, the proposed legal form could facilitate public markets for social enterprises.

Introduction

In recent years, there have been efforts to encourage firms to pursue social goals. In a striking statement to public corporations, Larry Fink, BlackRock’s CEO, wrote: “Society is demanding that companies, both public and private, serve a social purpose. To prosper over time, every company must not only deliver financial performance, but also show how it makes a positive contribution to society.”1 The imperative that firms pursue social goals, however, is very vague. What range of permissible non-pecuniary goals should companies be encouraged to pursue?2 This question reflects a much-rehashed debate regarding the role and purpose of corporations. Many studies view this topic as a matter of corporate governance. That is, the key question is whether policies that seek to create social impact—often referred to as “CSR” (for corporate social responsibility)—maximize shareholder value in the long term. If the answer is yes, then it is a win-win situation for all because such policies are assumed to benefit society.

This Article takes a different approach by arguing that the pressing question should be: Does the pursuit of social missions by for-profit organizations actually benefit the intended beneficiaries? While the literature is not conclusive,3 it is easy to see how a reputation for being socially responsible can help companies sell more products, attract investments, or even get more lenient treatment from regulators. However, a good reputation alone does not mean that CSR policies achieve their putative purpose of helping stakeholders and society at large. Without a mechanism for ensuring that CSR actually benefits stakeholders, companies can easily use it as a means of “greenwashing.”4 Greenwashing may be particularly conducive to shareholder value because it promotes a strong reputation and higher sales without actually doing anything substantial for society.5 But—while false signals of doing good may increase shareholder value—those who support companies for their good deeds would presumably be disappointed were the truth to come to light.

The problem is that it is extremely difficult to verify companies’ social impact. Existing measures of social impact tend to be vague, include metrics that are difficult to quantify, and even mix shareholder-protection metrics with environmental or societal ones.6 But if reliable measurement is rarely available, how do we know whether firms are pursuing social goals effectively?

The legal approach to addressing these questions has been to introduce legal hybrid forms—in particular, the benefit corporation.7 These forms are supposed to communicate to investors, consumers, workers, and society at large that firms’ activities benefit society. To date, as many as thirty-six states, including Delaware, have adopted one or more such legal forms.8 However, existing legal forms fail to clarify the actual impact of companies’ social goals.9 Just like CSR, these forms can paint a misleading picture of companies’ social contributions. Many of the companies that adopt these legal forms have little or no discernible social impact.10 And companies that appear to be highly successful in pursuing social missions already had such impact before they adopted the legal forms.11

Why have these forms seemingly failed to generate greater social impact? In this Article, I claim that they suffer from the same underlying problem as CSR policies. These forms are simply not structured in a way that makes companies more likely to pursue social goals effectively. Therefore, the legal forms cannot serve as useful signals to investors or consumers that the firms benefit society in the ways they purport to.

An effective legal form must meet two conditions. First, the form must give firms incentives to pursue social missions effectively. At the very least, the goal of maximizing shareholders’ profits should not interfere with the firm’s social mission. Ideally, the firm should have a financial stake in the accomplishment of the social mission. Second, the firm should have the competence to pursue such missions. Competence is particularly important because social problems, such as unemployment or lack of access to capital, tend to be complex. Accomplishing complex social goals requires the firm to tailor its social programs to the specific attributes and needs of the beneficiaries.

The issues of incentives and competence are very similar to standard issues in corporate governance. Broadly stated, the main goal of corporate governance policy is to ensure that managers have both (i) the incentives to maximize shareholder value and (ii) the competence to make decisions on behalf of the corporation.12 What complicates things when it comes to social responsibility is that a firm that purports to pursue CSR not only makes profits on behalf of its investors but also serves as a conduit for a subsidy or a donation. As I explain elsewhere, these subsidies or donations need not be direct transfers from the government or donors. In fact, they are usually latent in the sense that they reflect premium prices paid by consumers or below-market returns accepted by investors.13
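As a rough numerical illustration (the figures below are entirely hypothetical), the latent subsidy is simply the gap between market terms and the terms that the consumer or investor actually accepts:

```python
# Hypothetical arithmetic only: what a "latent" subsidy-donation looks like.
# Neither transfer below is labeled a donation, but both function as one.

# A consumer pays a premium over the market price of the product.
market_price, price_paid = 10.00, 12.00             # dollars, hypothetical
consumer_subsidy = price_paid - market_price        # 2.00 per unit

# An investor accepts a below-market return on the capital provided.
investment = 100_000                                # dollars, hypothetical
market_rate_bps, accepted_rate_bps = 500, 200       # 5% vs. 2%, in basis points
investor_subsidy = investment * (market_rate_bps - accepted_rate_bps) / 10_000

print(f"consumer: ${consumer_subsidy:.2f} per unit; investor: ${investor_subsidy:,.0f} per year")
# consumer: $2.00 per unit; investor: $3,000 per year
```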

For policy makers, the main design issue is how to assure those who provide subsidy-donations that their contributions will be used effectively. Thus, the principal goal of this Article is to develop a legal form with key structural elements that give managers the incentives and competence to accomplish this. Such a form can signal to stakeholders that firms professing to promote social impact actually do what they claim.

The policy I propose is modeled on the structural elements found in social enterprises that transact with their beneficiaries (e.g., as consumers or workers), which I have addressed in previous work.14 The transactional relationship with beneficiaries gives the firm a stake in helping them develop and also enables the firm to observe their abilities and needs. Thus, such firms have both the incentives and the competence to serve certain social goals. The proposal builds on the regulatory regime for community development financial institutions (CDFIs), which certifies financial institutions as firms that serve low-income populations,15 and combines this regime with certain elements found in benefit corporations.16

In essence, the proposal is to introduce a new social enterprise (SE) legal form. Firms organized under the SE legal form would be required to obtain government certification as a “Social Enterprise,” which would be available only to firms that commit, in their charters, to transacting with one or more carefully defined classes of beneficiaries. These beneficiaries may include, among others, workers, borrowers, and consumers. Beneficiaries would be divided into different classes in accordance with certain criteria of need (e.g., level of income). To maintain the certification, firms must commit to having a minimum percentage of their business associated with beneficiary transactions. Whereas current benefit corporation laws permit companies to choose a third-party standard that measures their social purpose,17 my proposed reform would require companies to adhere to one federal standard defined by a single federal certifier.
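To make the certification requirement concrete, the following sketch (in Python) shows the kind of bright-line test a federal certifier could apply. All figures are hypothetical: the proposal does not fix a particular income cutoff or minimum share of beneficiary transactions, and the 40% threshold below is an assumption for illustration only.

```python
# Illustrative sketch only: the proposal does not prescribe these numbers.
# The income cutoff and the 40% minimum share below are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    value: float                 # dollar value of the transaction
    counterparty_income: float   # annual income of the worker, borrower, or consumer

LOW_INCOME_CUTOFF = 30_000       # hypothetical criterion of need
MIN_BENEFICIARY_SHARE = 0.40     # hypothetical minimum share of business

def qualifies_for_certification(transactions):
    """Return True if the required share of business is with beneficiaries."""
    total = sum(t.value for t in transactions)
    if total == 0:
        return False
    beneficiary = sum(t.value for t in transactions
                      if t.counterparty_income <= LOW_INCOME_CUTOFF)
    return beneficiary / total >= MIN_BENEFICIARY_SHARE

# Example: 45% of transaction volume is with low-income counterparties.
book = [Transaction(450, 25_000), Transaction(550, 80_000)]
print(qualifies_for_certification(book))  # True
```

The point of the sketch is that eligibility turns on an observable, auditable quantity (the share of the firm's business transacted with defined beneficiary classes) rather than on a contested estimate of social impact.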

The main goal of this proposed policy is to facilitate the flow of subsidized capital and income to social enterprises. The legal form is necessary to attract subsidies from dispersed subsidy-providers, such as investors and consumers. Currently, investors and consumers must rely mainly on costly contractual and ownership mechanisms to ensure that the relevant firms transact with their beneficiaries. Under the proposed regime, investors and consumers would have notice that a firm transacts with beneficiaries before they purchase its shares or products. In this respect, the proposed law would be to altruistic investors and consumers essentially what the nonprofit form is to donors.18 Thus, the proposal is likely to unlock much-needed capital to scale social enterprises and increase their social impact.

The ability of the SE legal form to source subsidies from a wider range of subsidy-providers could serve two additional, complementary objectives. First, it could streamline the process for allocating subsidized investments from foundations (known as program-related investments, or “PRIs”). While most policy initiatives seek to attract institutional shareholder investment to channel capital toward social goals, the best candidates for investing in social impact are foundations, which hold vast amounts of capital that they are supposed to deploy in furtherance of philanthropic goals.19 Paradoxically, foundations often resist making PRIs in for-profit social enterprises because such investments could expose them to tax penalties if they cannot verify the social mission of their investees. Currently, such verification is cumbersome and subject to legal uncertainty. Making certified firms eligible for PRIs would thus facilitate the allocation of such investments.

Second, and more ambitiously, the proposal has the potential to meet a long-standing goal of social entrepreneurs: facilitating their access to capital markets. The inability of social enterprises to tap capital markets substantially constrains their ability to grow and increase their social impact. Attempts to establish social exchanges for firms that combine profit and social mission have largely been futile, primarily because of the difficulty of measuring social impact. A new legal form could help by providing adequate assurance to the investors who are expected to subsidize such impact.

One objection to this proposal might be that a legal hybrid form based solely on firms’ transactional relationships with their beneficiaries is overly reductive or too narrow. Should a legal hybrid form not capture the universe of social missions, such as the protection of the environment, diversity, and human rights? These objectives are indeed laudable, but it does not follow that legal forms can adequately address them. In the absence of credible certification mechanisms and clear metrics of social impact, legal forms for organizations with broad social purposes are unlikely to signal that these firms pursue social missions effectively. Furthermore, the class of organizations that transact with disadvantaged persons is large and highly consequential.20 Concentrating on these firms could transform legal hybrid forms from a marginal phenomenon into a powerful vehicle for promoting development.

This Article proceeds as follows: Part I describes how legal hybrid forms are supposed to serve as a commitment device to potential subsidy providers and explains why a new form is necessary to facilitate the formation of social enterprises. Part II critically evaluates the principal existing legal forms for companies with a social purpose and explains why they fail to serve as adequate commitment devices. Part III discusses the key elements of the CDFI regime and why other certification mechanisms do not work as well. Part IV proposes a design for a new legal form for social enterprises and discusses its principal elements in detail. Part V discusses the design of possible government subsidies for the proposed legal hybrid form.

  1. * Duke University School of Law; Duke Innovation and Entrepreneurship Initiative. I thank Richard Brooks, Jamie Boyle, John Coyle, Elisabeth De Fontenay, Brian Galle, Henry Hansmann, Yair Listokin, Richard Schmalbeck, Steven Schwarcz, Michael Simkovic, Emily Strauss, Rory Van Loo, Andrew Verstein, and participants in seminars at Duke University School of Law and Boston University School of Law for helpful comments and suggestions. I am also grateful to Heather Cron, Zach Lankford, Renuka Medury, Kelsey Moore, Catherine Prater, and Hadar Tanne for excellent research assistance. Email: eldar@law.duke.edu.
  2. Letter from Larry Fink, Chairman & Chief Exec. Officer, Blackrock, to CEOs (2018), https://www.blackrock.com/corporate/investor-relations/2018-larry-fink-ceo-letter [https://­perma.cc/7QRQ-9DG6]. For a similar statement by Martin Lipton, the renowned legal advisor for public corporations, see Martin Lipton et al., The New Paradigm: A Roadmap for an Implicit Corporate Governance Partnership Between Corporations and Investors To Achieve Sustainable Long-Term Investment and Growth, Harv. L. Sch. F. on Corp. Governance (Jan. 11, 2017), https://corpgov.law.harvard.edu/2017/01/11/corporate-governance-the-new-para­digm/ [https://perma.cc/B5AJ-EWNW].
  3. See generally Oliver Hart & Luigi Zingales, Companies Should Maximize Shareholder Welfare Not Market Value, 2 J.L. Fin. & Acct. 247 (2017) (arguing that company and asset managers should pursue policies consistent with the non-pecuniary preferences of their investors).
  4. Compare Ronald W. Masulis & Syed Walid Reza, Agency Problems of Corporate Philanthropy, 28 Rev. Fin. Stud. 592, 619–21 (2015) (claiming that corporate donations advance CEO interests and reduce firm value), with Allen Ferrell, Hao Liang & Luc Renneboog, Socially Responsible Firms, 122 J. Fin. Econ. 585, 585–91, 596–605 (2016) (arguing that well-governed firms are more engaged in CSR, and there is a positive association between CSR and shareholder value).
  5. “Greenwashing occurs when a corporation increases its sales or boosts its brand image through environmental rhetoric or advertising, but in reality does not make good on these environmental claims.” Miriam A. Cherry, The Law and Economics of Corporate Social Responsibility and Greenwashing, 14 U.C. Davis Bus. L.J. 281, 282 (2013).
  6. This arguably explains why well-governed firms that are more accountable to their shareholders tend to engage in value-enhancing CSR. See generally Ferrell, Liang & Renneboog, supra note 3. For a similar argument in the context of regulation, see Steven L. Schwarcz, Misalignment: Corporate Risk-Taking and Public Duty, 92 Notre Dame L. Rev. 1, 3–4 (2016) (arguing that regulation designed to align managers’ and investors’ interests does not necessarily help address negative externalities).
  7. This is most obviously manifested in the ESG metrics because they include both (i) governance metrics, which are supposed to increase accountability to shareholders and (ii) social and environmental metrics, which are supposed to measure firms’ contributions to social and environmental objectives.
  8. See infra Part II.
  9. B Lab, State by State Status of Legislation, Benefit Corp., http://benefitcorp.net/policy­makers/state-by-state-status? [https://perma.cc/X524-35UE] (last visited Mar. 18, 2020).
  10. See, e.g., John E. Tyler III, Evan Absher, Kathleen Garman & Anthony Luppino, Purposes, Priorities and Accountability Under Social Business Structures: Resolving Ambiguities and Enhancing Adoption, 19 Advances Entrepreneurship Firm Emergence & Growth 39, 39 (2017) (arguing that “social business models do not meaningfully prioritize or impose accountability to ‘social good’ over other purposes”).
  11. See Ofer Eldar, The Role of Social Enterprise and Hybrid Organizations, 2017 Colum. Bus. L. Rev. 92, 99 (discussing Laureate University, a for-profit network of universities incorporated as a benefit corporation but that uses aggressive promotional tactics and has low graduation and loan repayment rates); see also Michael B. Dorff, James Hicks & Steven Davidoff Solomon, The Future or Fancy? An Empirical Study of Public Benefit Corporations 46 (Eur. Corp. Governance Inst., Working Paper No. 495, 2020), https://papers.ssrn.com­/sol3/papers.cfm?abstract_id=3433772 [https://perma.cc/D9R8-VZWC]. Dorff et al. list standard firms, such as Ripple Foods, as having incorporated as benefit corporations, even though these firms do not have any clear social impact other than producing goods (such as dairy-free milk) that appeal to certain consumers.
  12. Two such examples include the Greyston Bakery and Patagonia. See Eldar, supra note 10, at 189 n.270.
  13. See Zohar Goshen & Richard Squire, Principal Costs: A New Theory for Corporate Law and Governance, 117 Colum. L. Rev. 767, 784 (2017) (identifying conflict costs and competence costs as the two main sources of costs that corporate governance is designed to address).
  14. Eldar, supra note 10, at 104–05.
  15. See id.; see also Ofer Eldar, The Organization of Social Enterprises: Transacting Versus Giving 10–15 (July 27, 2018) (unpublished paper), https://papers.ssrn.com/sol3/papers.­cfm?abstract_id=3217663 [https://perma.cc/S36D-3LWP].
  16. The CDFI regime is currently limited to low-income borrowers, but it could be extended to a wider class of beneficiaries, and extended beyond the U.S.
  17. Specifically, as in benefit corporations, a qualified majority voting is required to change the mission of the firm. See infra text accompanying note 111.
  18. The MBCL provides criteria for third-party standards, but companies have discretion to select how their performance will be measured. See infra Part II.
  19. The nonprofit form assures donors that the managers of donative organizations have limited incentives to expropriate the subsidy-donations; hence, they are more likely to distribute donations to the intended beneficiaries. Henry B. Hansmann, The Role of Nonprofit Enterprise, 89 Yale L.J. 835, 838–39 (1980). Similarly, the proposed legal form would assure investors and consumers that the firm has incentives to use subsidies effectively.
  20. See, e.g., Matt Onek, Philanthropic Pioneers: Foundations and the Rise of Impact Investing, Stan. Soc. Innovation Rev. (Jan. 17, 2017) https://ssir.org/articles/entry/­philanthropic_pioneers_foundations_and_the_rise_of_impact_investing# [https://perma.cc/­MJ7A-52Q8].
  21. For example, they range from microfinance institutions to firms that provide eyeglasses in developing countries.

Manipulating Opportunity

Concerns about online manipulation have centered on fears about undermining the autonomy of consumers and citizens. What has been overlooked is the risk that the same techniques of personalizing information online can also threaten equality. When predictive algorithms are used to allocate information about opportunities like employment, housing, and credit, they can reproduce past patterns of discrimination and exclusion in these markets. This Article explores these issues by focusing on the labor market, which is increasingly dominated by tech intermediaries. These platforms rely on predictive algorithms to distribute information about job openings, match job seekers with hiring firms, or recruit passive candidates. Because algorithms are built by analyzing data about past behavior, their predictions about who will make a good match for which jobs will likely reflect existing occupational segregation and inequality. When tech intermediaries cause discriminatory effects, they may be liable under Title VII, and Section 230 of the Communications Decency Act should not bar such actions. However, because of the practical challenges that litigants face in identifying and proving liability retrospectively, a more effective approach to preventing discriminatory effects should focus on regulatory oversight to ensure the fairness of algorithmic systems.

I. Introduction

Our online experiences are increasingly personalized. Facebook and Google micro-target advertisements aimed to meet our immediate needs. Amazon, Netflix, and Spotify offer up books, movies, and music tailored to match our tastes. Our news feeds are populated with stories intended to appeal to our particular interests and biases. This drive toward increasing personalization is powered by complex machine learning algorithms built to discern our preferences and anticipate our behavior. Personalization offers benefits because companies can efficiently offer consumers the precise products and services they desire.

Online personalization, however, has come under considerable criticism lately. Shoshana Zuboff assails our current economic system, which is built on companies amassing and exploiting ever more detailed personal information.1 Ryan Calo and Tal Zarsky explain that firms are applying the insights of behavioral science to manipulate consumers by exploiting their psychological or emotional vulnerabilities.2 Daniel Susser, Beate Roessler, and Helen Nissenbaum describe how information technology is enabling manipulative practices on a massive scale.3 Julie Cohen similarly argues that “[p]latform-based, massively-intermediated processes of search and social networking are inherently processes of market manipulation.”4 In the political sphere as well, concerns have been raised about manipulation, with warnings that news personalization is creating “filter bubble[s]” and increasing polarization.5 These issues were highlighted by revelations that Cambridge Analytica sent personalized ads based on psychological profiles of eighty-seven million Facebook users in an effort to influence the 2016 presidential election.6 The extensive criticism of personalization is driven by concerns that online manipulation undermines personal autonomy and compromises rational decision making.

Largely overlooked in these discussions is the possibility that online manipulation also threatens equality. Online platforms increasingly operate as key intermediaries in the markets for employment, housing, and financial services—what I refer to as opportunity markets. Predictive algorithms are also used in these markets to segment the audience and determine precisely what information will be delivered to which users. The risk is that in doing so, these intermediaries will direct opportunities in ways that reproduce or reinforce historical forms of discrimination. Predictive algorithms are built by observing past patterns of behavior, and one of the enduring patterns in American economic life is the unequal distribution of opportunities along the lines of race, gender, and other personal characteristics. As a result, these systems are likely to distribute information about future opportunities in ways that reflect existing inequalities and may reinforce historical patterns of disadvantage.

The way in which information about opportunities is distributed matters, because these markets provide access to resources that are critical for human flourishing and well-being. In that sense, access to them is foundational. People need jobs and housing before they can act as consumers or voters. They need access to financial services in order to function in the modern economy. Of course, many other factors contribute to inequality, such as unequal educational resources, lack of access to health care, and over-policing in certain communities. Decisions by landlords, employers, or banks can also contribute to inequality. Tech intermediaries are thus just one part of a much larger picture. Nevertheless, they will be an increasingly important part as more and more transactions are mediated online.7 Because they control access to information about opportunities, they have the potential to significantly impact how these markets operate.

Online intermediaries have unprecedented potential to finely calibrate the distribution of information. In the past, traditional print or broadcast media might aim at a particular audience, but they could not prevent any particular individual from accessing information that they published. And if an advertiser tried to signal its interest in only a particular group—as has happened with real estate ads that used code words or featured only white models—the attempts at exclusion were plainly visible. In contrast, online intermediaries have the ability to precisely target an audience, selecting some users to receive information and others to be excluded in ways that are not at all transparent.

The issue is illustrated by Facebook’s ad-targeting tools. Several lawsuits alleged that employers or landlords could use the company’s tools to exclude users on the basis of race, gender, or age from their audience.8 To a large extent, these concerns were resolved by a recent settlement in which Facebook agreed to bar the use of sensitive demographic variables to target employment, housing, and credit advertisements.9 However, the settlement failed to address another potential source of bias—Facebook’s ad-delivery algorithm, which determines which users within a targeted audience actually receive an ad. As explained below, even if an advertiser uses neutral targeting criteria and intends to reach a diverse audience, the ad-delivery algorithm may distribute information about opportunities in a biased way.10 This is an example of a much broader concern—namely, that when predictive algorithms are used to allocate access to opportunities, there is a significant risk that they will do so in a way that reproduces existing patterns of inequality and disadvantage.
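To see how this can happen without any discriminatory targeting, consider the toy sketch below (in Python). It is not a description of Facebook’s actual system: the groups, click-through rates, and audience sizes are invented, and the delivery step is reduced to ranking users by predicted engagement. The only point is that optimizing delivery for predicted clicks can skew who sees a job ad whenever historical engagement with similar ads differed across groups.

```python
# Toy illustration (not any real platform's system): if the delivery step simply
# ranks eligible users by predicted click-through rate, a job ad targeted at a
# perfectly balanced audience can still be shown almost exclusively to one
# group. All numbers below are hypothetical.

HISTORICAL_CTR = {"men": 0.08, "women": 0.05}  # learned from past ad clicks

def predicted_ctr(user: dict) -> float:
    # Stand-in for a model trained on past click data: it ends up predicting
    # higher engagement for the group that clicked similar ads more often.
    return HISTORICAL_CTR[user["group"]]

# Neutral targeting: the advertiser's eligible audience is 50% men, 50% women.
audience = [{"group": "men"}] * 5000 + [{"group": "women"}] * 5000

# Delivery: the platform shows the ad to the 2,000 users with the highest
# predicted click-through rate (a crude proxy for revenue per impression).
ranked = sorted(audience, key=predicted_ctr, reverse=True)
shown = ranked[:2000]

share_women = sum(u["group"] == "women" for u in shown) / len(shown)
print(f"Share of impressions delivered to women: {share_women:.0%}")
# Prints 0% here: with group-level predictions and no other signal, every
# impression goes to the higher-CTR group, even though targeting was neutral.
```

Real delivery systems optimize richer objectives than this, but the underlying dynamic is the same: the optimization step can reintroduce group differences that neutral targeting criteria were meant to leave out.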

Concerns about the distributive effects of predictive algorithms are relevant to all kinds of opportunity markets, including for housing, employment, and basic financial services. Each of these markets operates somewhat differently and is regulated under different laws. They deserve separate attention and more detailed consideration than can be provided here. This Article focuses on the labor market and the relevant laws regulating it; however, the issues it raises likely plague other opportunity markets as well.

Examining employment practices reveals dramatic change. Just a couple of decades ago, employers had a handful of available strategies for recruiting new workers, such as advertising in newspapers or hiring through an employment agency. Today, firms increasingly rely on tech intermediaries to fill job openings.11 Recent surveys suggest that somewhere from 84% to 93% of job recruiters use online strategies to find potential employees.12 Employers distribute information about positions through social media. They also rely on specialized job platforms like ZipRecruiter, LinkedIn, and Monster to recruit applicants and recommend the strongest candidates.13 In addition, passive recruiting—using data to identify workers who are not actively looking for another position—is a growing strategy for recruiting new talent.14

The use of algorithms and artificial intelligence in the hiring process has not gone unnoticed. Numerous commentators and scholars have described how employers are using automated decision systems and have raised concerns that these developments may cause discrimination or threaten employee privacy.15 However, previous work has focused on whether employers can or should be held liable when they use predictive algorithms or other artificial intelligence tools to make personnel decisions. What is missing from this literature is close scrutiny of how tech intermediaries are shaping labor markets and the implications for equality.

This Article undertakes that analysis, arguing that the use of predictive algorithms by labor market intermediaries risks reinforcing or even worsening existing patterns of inequality and that these intermediaries should be accountable for those effects. A number of studies have documented instances of biased delivery of employment ads.16 Although the exact mechanism is unclear, it should not be surprising that predictive algorithms distribute information about job opportunities in biased ways. These algorithms are built by analyzing existing data, and one of the most persistent facts of the U.S. labor market is ongoing occupational segregation along the lines of race and gender.17 If predictions are based solely on observations about past behavior—without regard to what social forces shaped that behavior—then they are likely to reproduce those patterns.
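A second stylized sketch, again with invented data, isolates the matching side of the problem. No platform’s recommender is this simple; the sketch only shows that a model built to favor candidates who resemble past hires will reproduce whatever occupational segregation is present in the historical record, without anyone directing it to do so.

```python
# Toy illustration (hypothetical data): a matching model that scores candidates
# by how closely they resemble people previously hired into a role reproduces
# the occupational segregation embedded in its training data.
from collections import Counter

# Hypothetical history of past hires for two occupations.
past_hires = (
    [{"role": "software_engineer", "gender": "man"}] * 85 +
    [{"role": "software_engineer", "gender": "woman"}] * 15 +
    [{"role": "home_health_aide", "gender": "man"}] * 12 +
    [{"role": "home_health_aide", "gender": "woman"}] * 88
)

def fit(history):
    """Estimate P(gender | role) from historical hires -- the learned 'pattern'."""
    counts = {}
    for h in history:
        counts.setdefault(h["role"], Counter())[h["gender"]] += 1
    return {role: {g: n / sum(c.values()) for g, n in c.items()}
            for role, c in counts.items()}

def match_score(model, role, candidate_gender):
    """Score a candidate by resemblance to past hires for the role."""
    return model[role].get(candidate_gender, 0.0)

model = fit(past_hires)
for g in ("man", "woman"):
    print(g, round(match_score(model, "software_engineer", g), 2))
# Prints: man 0.85, woman 0.15 -- otherwise identical candidates are ranked
# differently solely because past hiring was segregated.
```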

Tech intermediaries may not intend to cause discriminatory effects, but they are nevertheless responsible for them.18 They make choices when designing the algorithms that distribute information about job opportunities or suggest the best matches for job seekers and hiring firms. In doing so, they decide what goals to optimize—typically revenue—and those choices influence how information is channeled, making some opportunities visible and obscuring others. Thus, these technologies shape how market participants—both workers and employers—perceive their available options and thereby also influence their behavior.19 When these intermediaries structure access to opportunities in ways that reflect historical patterns of discrimination and exclusion, they pose a threat to workplace equality. Even if the discriminatory effects are unintentional, the harm to workers can be real. Employment discrimination law has long targeted discriminatory effects, not just invidious motivation.20

The risk that tech intermediaries will contribute to workplace inequality poses a number of challenges for the law. Discrimination law has largely focused on employers, examining their decisions and practices for discriminatory intent or impact. However, if bias affects how potential applicants are screened out before they even interact with a hiring firm, then focusing on employer behavior will be inadequate to dismantle patterns of occupational segregation. Holding tech intermediaries directly responsible for their effects on labor markets, however, will raise a different set of challenges. Some of these are legal, such as whether existing law reaches these types of intermediaries,21 and whether they can avoid liability by relying on Section 230 of the Communications Decency Act (CDA),22 which gives websites a defense to some types of liability. Other obstacles are more practical in nature, which suggests that preventing discriminatory effects may require alternative strategies.23

This Article proceeds as follows. Part II first explores the role that tech intermediaries play in the labor market and how targeting tools can be misused for discriminatory purposes. It next explains that even if employers are no longer permitted to use discriminatory targeting criteria, a significant risk remains that platforms’ predictive algorithms will distribute access to opportunities in ways that reproduce existing patterns of inequality. Because tech intermediaries have a great deal of power to influence labor market interactions, and may do so in ways that are not transparent, I argue in Part II that they should bear responsibility when they cause discriminatory effects.

Parts III and IV consider the relevant legal landscape. Part III discusses how the growing importance of tech intermediaries in the labor market poses challenges for existing anti-discrimination law. It first shows how the question “who is an applicant?”—an issue critical for finding employer liability—is complicated as platforms increasingly mediate job seekers’ interactions with firms. It then explores the possibilities for holding these intermediaries directly liable under existing employment discrimination law, either as employment agencies or for interfering with third party employment relationships. Part IV considers some obstacles to holding tech intermediaries liable for their discriminatory labor market effects. Section IV.A examines and rejects the argument that Section 230 of the Communications Decency Act would automatically bar such claims. Section IV.B explains that significant practical obstacles remain, suggesting that a post hoc liability regime may not be the best way to prevent discriminatory harms. Thus, Section IV.B also argues that we should look to regulatory models in order to minimize the risks of discrimination from the use of predictive algorithms.

  1. * Daniel Noyes Kirby Professor of Law, Washington University School of Law, St. Louis, Missouri. I am grateful to Victoria Schwarz, Miranda Bogen, Aaron Riecke, Greg Magarian, Neil Richards, Peggie Smith, Dan Epps, John Inazu, Danielle Citron, Ryan Calo, Andrew Selbst, Margot Kaminski, and Felix Wu for helpful comments on earlier drafts of this Article. I also benefited from feedback from participants at the 2019 Privacy Law Scholar’s Conference, Washington University School of Law’s faculty workshop, and Texas A&M School of Law’s Faculty Speaker Series. Many thanks to Adam Hall, Theanne Liu, Joseph Tomchak, and Samuel Levy for outstanding research assistance.
  2. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 8–11 (2019).
  3. See Ryan Calo, Digital Market Manipulation, 82 Geo. Wash. L. Rev. 995, 996, 999 (2014); Tal Z. Zarsky, Privacy and Manipulation in the Digital Age, 20 Theoretical Inquiries L. 157, 158, 160–61 (2019).
  4. Daniel Susser, Beate Roessler & Helen Nissenbaum, Online Manipulation: Hidden Influences in a Digital World, 4 Geo. L. Tech. Rev. 1, 2, 10 (2019).
  5. Julie E. Cohen, Law for the Platform Economy, 51 U.C. Davis L. Rev. 133, 165 (2017); see also Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism 75–77, 83–89, 96 (2019) (describing how techniques for behavioral surveillance and micro-targeting contribute to social harms such as polarization and extremism).
  6. See, e.g., Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You 13–14 (2011); Michael J. Abramowitz, Stop the Manipulation of Democracy Online, N.Y. Times (Dec. 11, 2017), https://www.nytimes.com/2017/12/11/opinion/fake-news-russia-kenya.html [https://perma.cc/9YWF-PED7]; James Doubek, How Disinformation and Distortions on Social Media Affected Elections Worldwide, NPR (Nov. 16, 2017, 2:28 PM), https://www.npr.org/sections/alltechconsidered/2017/11/16/564542100/how-disinformation-and-distortions-on-social-media-affected-elections-worldwide [https://perma.cc/ZJ97-GQSZ]; Jon Keegan, Blue Feed, Red Feed: See Liberal Facebook and Conservative Facebook, Side by Side, Wall St. J. (Aug. 19, 2019), http://graphics.wsj.com/blue-feed-red-feed/ [https://perma.cc/GJA8-4U9W].
  7. Carole Cadwalladr & Emma Graham-Harrison, Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach, Guardian (Mar. 17, 2018, 6:03 PM), https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influ­ence-us-election [https://perma.cc/72CR-9Y8K]; Alex Hern, Cambridge Analytica: How Did It Turn Clicks into Votes?, Guardian (May 6, 2018, 3:00 AM), https://www.theguardian.com/­news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie [https://perma.cc/AD8H-PF3M]; Matthew Rosenberg, Nicholas Confessore & Carole Cadwalladr, How Trump Consultants Exploited the Facebook Data of Millions, N.Y. Times (Mar. 17, 2018), https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html [https://perma.cc/3WYQ-3YKP].
  8. See, e.g., Miranda Bogen & Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias 5–6 (2018) (describing the role of platforms in the hiring process); Geoff Boeing, Online Rental Housing Market Representation and the Digital Reproduction of Urban Inequality, 52 Env’t & Plan. A 449, 450 (2019) (documenting the growing impact of Internet platforms in shaping the rental housing market).
  9. See infra Section II.B.
  10.  See Galen Sherwin & Esha Bhandari, Facebook Settles Civil Rights Cases by Making Sweeping Changes to Its Online Ad Platform, ACLU (Mar. 19, 2019, 2:00 PM), https://www.aclu.org/blog/womens-rights/womens-rights-workplace/facebook-settles-civil-rights-cases-making-sweeping [https://perma.cc/H6D6-UMJ4].
  11. See infra Section II.C.
  12. See Bogen & Rieke, supra note 7, at 5–6.
  13. Soc’y for Human Res. Mgmt., SHRM Survey Findings: Using Social Media for Talent Acquisition—Recruitment and Screening 3 (Jan. 7, 2016), https://www.shrm.org/hr-today/trends-and-forecasting/research-and-surveys/Documents/SHRM-Social-Media-Recruiting-Screening-2015.pdf [https://perma.cc/L6NT-N4KL]. The Society for Human Resource Management conducts biennial surveys of job recruiters. The surveys demonstrated an increase in the use of online recruiting by employers, rising from fifty-six percent in 2011 to seventy-seven percent in 2013 to eighty-four percent in 2015. Id.; Soc’y for Human Res. Mgmt., SHRM Survey Findings: Social Networking Websites and Recruiting/Selection 2 (Apr. 11, 2013), https://www.shrm.org/hr-today/trends-and-forecasting/research-and-sur­veys/Pages/shrm-social-networking-websites-recruiting-job-candidates.aspx [https://perma.cc/U4HN-E7U7]; see also Jobvite’s New 2015 Recruiter Nation Survey Reveals Talent Crunch, Jobvite (Sept. 22, 2015), https://www.jobvite.com/news_item/­jobvites-new-2015-recruiter-nation-survey-reveals-talent-crunch-95-recruiters-anticipate-similar-increased-competition-skilled-workers-coming-year-86-expect-exp/ [https://perma.cc /H66S-8E5Z] (stating that 92% of recruiters use social media to discover or evaluate candidates).
  14. See Bogen & Rieke, supra note 7, at 5, 19–20, 24.
  15. Id. at 22.
  16.  See, e.g., Ifeoma Ajunwa, Kate Crawford & Jason Schultz, Limitless Worker Surveillance, 105 Calif. L. Rev. 735, 738–39 (2017); Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo L. Rev. (forthcoming 2020) (manuscript at 14) (on file with author); Richard A. Bales & Katherine V.W. Stone, The Invisible Web of Work: Artificial Intelligence and Electronic Surveillance in the Workplace, 41 Berkeley J. Lab. & Emp. L. (forthcoming 2020) (manuscript at 3) (on file with author); Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 673–75 (2016); Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, The Law and Policy of People Analytics, 88 U. Colo. L. Rev. 961, 989–92 (2017); James Grimmelmann & Daniel Westreich, Incomprehensible Discrimination, 7 Calif. L. Rev. Online 164, 170–72, 176–77 (2017); Jeffrey M. Hirsch, Future Work, 2020 U. Ill. L. Rev. (forthcoming 2020) (manuscript at 3) (on file with author); Pauline T. Kim, Data-Driven Discrimination at Work, 58 Wm. & Mary L. Rev. 857, 860–61 (2017) [hereinafter Kim, Data-Driven Discrimination at Work]; Pauline T. Kim, Data Mining and the Challenges of Protecting Employee Privacy Under U.S. Law, 40 Comp. Lab. L. & Pol’y J. 405, 406 (2019); Pauline T. Kim & Erika Hanson, People Analytics and the Regulation of Information Under the Fair Credit Reporting Act, 61 St. Louis U. L.J. 17, 18–19 (2016); Charles A. Sullivan, Employing AI, 63 Vill. L. Rev. 395, 396 (2018).
  17. See infra Section II.C.
  18. See infra Section II.D.
  19. Building predictive models involves numerous choices, many of them implicating value judgments. See, e.g., Barocas & Selbst, supra note 15, at 674; Margot E. Kaminski, Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability, 92 S. Cal. L. Rev. 1529, 1539 (2019); David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 703–04 (2017); Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. 1085, 1130–31 (2018).
  20. Karen Levy and Solon Barocas have explored how the design choices made by platforms “can both mitigate and aggravate bias.” Karen Levy & Solon Barocas, Designing Against Discrimination in Online Markets, 32 Berkeley Tech. L.J. 1183, 1185 (2017). The focus of their analysis is on user bias in online markets like ride matching, consumer-to-consumer sales, short-term rentals, and dating. Id. at 1189–90. Because the design choices platforms make will structure users’ interactions with one another, these choices influence behavior, affecting whether or to what extent users can act on explicit or implicit biases. Levy and Barocas review multiple platforms across domains and develop a taxonomy of policy and design elements that have been used to address the risks of bias. Although the focus of this Article is on the impact of predictive algorithms rather than user bias, the issues are obviously interrelated. Past bias by users can cause predictive algorithms to discriminate. Conversely, algorithmic outputs in the form of recommendations or rankings can activate or exacerbate implicit user biases. To that extent, some, but not all, of the strategies they identify may be relevant to addressing bias in online opportunity markets.
  21. See Griggs v. Duke Power Co., 401 U.S. 424, 431 (1971).
  22. See infra Part III.
  23. 47 U.S.C. § 230 (2012).
  24. See infra Section IV.B.

Measuring Algorithmic Fairness

Algorithmic decision making is both increasingly common and increasingly controversial. Critics worry that algorithmic tools are not transparent, accountable, or fair. Assessing the fairness of these tools has been especially fraught because it requires that we agree about what fairness is and what it requires. Unfortunately, we do not. The technical literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension. Two types of measures stand out. According to one, algorithmic fairness requires that the score an algorithm produces be equally accurate for members of legally protected groups—blacks and whites, for example. According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue. Often, however, there is no way to achieve parity in both of these dimensions. This fact leads to a pressing question: which type of measure should we prioritize, and why?

This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal. First, equal predictive accuracy ensures that a score means the same thing for each group at issue. As such, it relates to what one ought to believe about a scored individual. Because questions of fairness usually relate to action, not belief, this measure is ill-suited as a measure of fairness. This is the Article’s conceptual contribution. Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure. While a lack of parity in this dimension is not constitutive of unfairness, this measure provides important reasons to suspect that unfairness exists. This is the Article’s normative contribution. Interestingly, improving the accuracy of algorithms overall will lessen this unfairness. Unfortunately, the common assumption that anti-discrimination law prohibits the use of racial and other protected classifications in all contexts inhibits those who design algorithms from making them as fair and accurate as possible. This Article’s third contribution is to show that the law poses less of a barrier than many assume.

Introduction

At an event celebrating Martin Luther King, Jr. Day, Representative Alexandria Ocasio-Cortez (D-NY) expressed the concern, shared by many, that algorithmic decision making is biased. “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” she asserted. “They’re just automated. And if you don’t fix the bias, then you are automating the bias.”1.Blackout for Human Rights, MLK Now 2019, Riverside Church in the City of N.Y. (Jan. 21, 2019), https://www.trcnyc.org/mlknow2019/ [https://perma.cc/L45Q-SN9T] (interview with Rep. Ocasio-Cortez begins at approximately minute 16, and comments regarding algorithms begin at approximately minute 40); see also Danny Li, AOC Is Right: Algorithms Will Always Be Biased as Long as There’s Systemic Racism in This Country, Slate (Feb. 1, 2019, 3:47 PM), https://slate.com/news-and-politics/2019/02/aoc-algorithms-racist-bias.html [https://perma.cc/S97Z-UH2U] (quoting Ocasio-Cortez’s comments at the event in New York); Cat Zakrzewski, The Technology 202: Alexandria Ocasio-Cortez Is Using Her Social Media Clout To Tackle Bias in Algorithms, Wash. Post: PowerPost (Jan. 28, 2019), https://www.washingtonpost.com/news/powerpost/paloma/the-technology-202/2019/01/28 /the-technology-202-alexandria-ocasio-cortez-is-using-her-social-media-clout-to-tackle-bias-in-algorithms/5c4dfa9b1b326b29c37­78cdd/?utm_term=.541cd0827a23 [https://perma.cc/ LL4Y-FWDK] (discussing Ocasio-Cortez’s comments and reactions to them).Show More The audience inside the room applauded. Outside the room, the reaction was more mixed. “Socialist Rep. Alexandria Ocasio-Cortez . . . claims that algorithms, which are driven by math, are racist,” tweeted a writer for the Daily Wire.2.Ryan Saavedra (@RealSaavedra), Twitter (Jan. 22, 2019, 12:27 AM), https://twitter.com/RealSaavedra/status/1087627739861897216 [https://perma.cc/32DD-QK5S]. The coverage of Ocasio-Cortez’s comments is mixed. See, e.g., Zakrzewski, supra note 1 (describing conservatives’ criticism of and other media outlets’ and experts’ support of Ocasio-Cortez’s comments).Show More Math is just math, this commentator contends, and the idea that math can be unfair is crazy.

This controversy is just one of many to challenge the fairness of algorithmic decision making.3.See, e.g., Hiawatha Bray, The Software That Runs Our Lives Can Be Biased—But We Can Fix It, Bos. Globe, Dec. 22, 2017, at B9 (describing a New York City Council member’s proposal to audit the city government’s computer decision systems for bias); Drew Harwell, Amazon’s Facial-Recognition Software Has Fraught Accuracy Rate, Study Finds, Wash. Post, Jan. 26, 2019, at A14 (reporting on an M.I.T. Media Lab study that found that Amazon facial-recognition software is less accurate with regard to darker-skinned women than lighter-skinned men, and Amazon’s criticism of the study); Tracy Jan, Mortgage Algorithms Found To Have Racial Bias, Wash. Post, Nov. 15, 2018, at A21 (reporting on a University of California at Berkeley study that found that black and Latino home loan customers pay higher interest rates than white or Asian customers on loans processed online or in person); Tony Romm & Craig Timberg, Under Bipartisan Fire from Congress, CEO Insists Google Does Not Take Sides, Wash. Post, Dec. 12, 2018, at A16 (reporting on Congresspeople’s concerns regarding Google algorithms which were voiced at a House Judiciary Committee hearing with Google’s CEO).Show More The use of algorithms, and in particular their connection with machine learning and artificial intelligence, has attracted significant attention in the legal literature as well. The issues raised are varied, and include concerns about transparency,4.See, e.g., Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1288–97 (2008); Natalie Ram, Innovating Criminal Justice, 112 Nw. U. L. Rev. 659 (2018); Rebecca Wexler, Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System, 70 Stan. L. Rev. 1343 (2018).Show More accountability,5.See, e.g., Margot E. Kaminski, Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability, 92 S. Cal. L. Rev. 1529 (2019); Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633 (2017); Anne L. Washington, How To Argue with an Algorithm: Lessons from the COMPAS-ProPublica Debate, 17 Colo. Tech. L.J. 131 (2018) (arguing for standards governing the information available about algorithms so that their accuracy and fairness can be properly assessed). But see Jon Kleinberg et al., Discrimination in the Age of Algorithms (Nat’l Bureau of Econ. Research, Working Paper No. 25548, 2019), http://www.nber.org/papers/w25548 [https://perma.cc/JU6H-HG3W] (analyzing the potential benefits of algorithms as tools to prove discrimination).Show More privacy,6.See generally Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015) (discussing and critiquing internet and finance companies’ non-transparent use of data tracking and algorithms to influence and manage people); Anupam Chander, The Racist Algorithm?, 115 Mich. L. Rev. 1023, 1024 (2017) (reviewing Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015)) (arguing that instead of “transparency in the design of the algorithm” that Pasquale argues for, “[w]hat we need . . . is a transparency of inputs and results”) (emphasis omitted).Show More and fairness.7.See, e.g., Aziz Z. Huq, Racial Equity in Algorithmic Criminal Justice, 68 Duke L.J. 
1043 (2019) (arguing that current constitutional doctrine is ill-suited to the task of evaluating algorithmic fairness and that current standards offered in the technology literature miss important policy concerns); Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218 (2019) (discussing how past and existing racial inequalities in crime and arrests mean that methods to predict criminal risk based on existing information will result in racial inequality).Show More This Article focuses on fairness—the issue raised by Ocasio-Cortez. It focuses on how we should assess what makes algorithmic decision making fair. Fairness is a moral concept, and a contested one at that. As a result, we should expect that different people will offer well-reasoned arguments for different conceptions of fairness. And this is precisely what we find.

The computer science literature offers a proliferation of measures, each purporting to capture fairness along some dimension. This Article provides a pathway through that morass. It makes three contributions: one conceptual, one normative, and one legal. First, this Article argues that one of the dominant measures of fairness offered in the literature tells us what to believe, not what to do, and thus is ill-suited as a measure of fair treatment. This is the conceptual claim. Second, this Article argues that the ratio of false positives to false negatives offers an important indicator of whether members of two groups scored by an algorithm are treated fairly vis-à-vis each other. This is the normative claim. Third, this Article challenges the common assumption that anti-discrimination law prohibits the use of racial and other protected classifications in all contexts. Because using race within algorithms can increase both their accuracy and fairness, this misunderstanding has important implications. The Article shows that the law poses less of a barrier than many assume.

The controversy over a common risk assessment tool used by many states for bail, sentencing, and parole illustrates the debate about how best to measure fairness.8.See Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.pro­publica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/BA53-JT7V].Show More The tool, called COMPAS, assigns each person a score that indicates the likelihood that the person will commit a crime in the future.9.Equivant, Practitioner’s Guide to COMPAS Core 7 (2019), http://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf [https://perma.cc/LRY6-RXAH].Show More In a high-profile exposé, ProPublica claimed that COMPAS treated blacks and whites differently because black arrestees and inmates were far more likely to be erroneously classified as risky than were white arrestees and inmates, despite the fact that COMPAS did not explicitly use race in its algorithm.10 10.See Angwin et al., supra note 8 (“Northpointe’s core product is a set of scores derived from 137 questions that are either answered by defendants or pulled from criminal records. Race is not one of the questions.”).Show More The essence of ProPublica’s claim was this:

In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.11 11.Id.Show More

Northpointe12 12.Northpointe, along with CourtView Justice Solutions Inc. and Constellation Justice Systems, rebranded to Equivant in January 2017. Equivant, Frequently Asked Questions 1, http://my.courtview.com/rs/322-KWH-233/images/Equivant%20Customer%20FAQ%20-%20FINAL.pdf [https://perma.cc/7HH8-LVQ6].Show More (the company that developed and owned COMPAS) responded to the criticism by arguing that ProPublica was focused on the wrong measure. In essence, Northpointe stressed the point ProPublica conceded—that COMPAS made mistakes with black and white defendants at roughly equal rates.13 13.See William Dieterich et al., COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity, Northpointe 9–10 (July 8, 2016), http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf [https://perma.cc/N5RL-M9RN].Show More Although Northpointe and others challenged some of the accuracy of ProPublica’s analysis,14 14.For a critique of ProPublica’s analysis, see Anthony W. Flores et al., False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country To Predict Future Criminals. And It’s Biased Against Blacks.”, 80 Fed. Prob. 38 (2016).Show More the main thrust of Northpointe’s defense was that COMPAS does treat blacks and whites the same. The controversy focused on the manner in which such similarity is assessed. Northpointe focused on the fact that if a black person and a white person were each given a particular score, the two people would be equally likely to recidivate.15 15.See Dieterich et al., supra note 13, at 9–11.Show More ProPublica looked at the question from a different angle. Rather than asking whether a black person and a white person with the same score were equally likely to recidivate, it focused instead on whether a black and white person who did not go on to recidivate were equally likely to have received a low score from the algorithm.16 16.See Angwin et al., supra note 8 (“In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.”).Show More In other words, one measure begins with the score and asks about its ability to predict reality. The other measure begins with reality and asks about its likelihood of being captured by the score.
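
To make the two perspectives concrete, the short sketch below computes both quantities from a confusion matrix for each of two groups. It is written in Python with invented counts (not COMPAS data), and the group labels, counts, and variable names are supplied purely for illustration. The positive predictive value corresponds to the measure Northpointe emphasized; the false positive and false negative rates correspond to the measure ProPublica emphasized.

    # Illustrative only: invented counts, not COMPAS data.
    # tp = flagged high risk and reoffended; fp = flagged but did not reoffend;
    # fn = not flagged but reoffended;       tn = not flagged and did not reoffend.
    groups = {
        "Group A": {"tp": 40, "fp": 30, "fn": 10, "tn": 20},
        "Group B": {"tp": 20, "fp": 10, "fn": 20, "tn": 50},
    }

    for name, g in groups.items():
        # Northpointe's angle: start from the score and ask how well it
        # predicts reality (positive predictive value).
        ppv = g["tp"] / (g["tp"] + g["fp"])
        # ProPublica's angle: start from reality and ask how often people who
        # did not reoffend were nonetheless flagged (false positive rate), and
        # how often people who did reoffend were missed (false negative rate).
        fpr = g["fp"] / (g["fp"] + g["tn"])
        fnr = g["fn"] / (g["fn"] + g["tp"])
        print(f"{name}: PPV={ppv:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")

Nothing turns on the particular numbers; the point is only that the two measures interrogate the same confusion matrix from opposite directions, one starting from the score and one starting from the outcome.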

The easiest way to fix the problem would be to treat the two groups equally in both respects. A high score and low score should mean the same thing for both blacks and whites (the measure Northpointe emphasized), and law-abiding blacks and whites should be equally likely to be mischaracterized by the tool (the measure ProPublica emphasized). Unfortunately, this solution has proven impossible to achieve. In a series of influential papers, computer scientists demonstrated that, in most circumstances, it is simply not possible to equalize both measures.17 17.See, e.g., Richard Berk et al., Fairness in Criminal Justice Risk Assessments: The State of the Art, Soc. Methods & Res. OnlineFirst 1, 23 (2018), https://journals.sagepub.com/doi/­10.1177/0049124118782533 [https://perma.cc/GG9L-9AEU] (discussing the required trade­off between predictive accuracy and various fairness measures); Alexandra Chouldechova, Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments, 5 Big Data 153, 157 (2017) (demonstrating that recidivism prediction instruments cannot simultaneously meet all fairness criteria where recidivism rates differ across groups because its error rates will be unbalanced across the groups when the instrument achieves predictive parity); Jon Kleinberg et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, 67 LIPIcs 43:1, 43:5–8 (2017), https://drops.dagstuhl.de/opus/volltexte/2017/8156/pdf/LIPIcs-ITCS-2017-43.pdf [https://perma.cc/S9DM-PER2] (demonstrating how difficult it is for algorithms to simultaneously achieve the fairness goals of calibration and balance in predictions involving different groups).Show More The reason it is impossible relates to the fact that the underlying rates of recidivism among blacks and whites differ.18 18.See Bureau of Justice Statistics, U.S. Dep’t of Justice, 2018 Update on Prisoner Recidivism: A 9-Year Follow-up Period (2005–2014) 6 tbl.3 (2018), https://www.bjs.gov/­content/pub/pdf/18upr9yfup0514.pdf [https://perma.cc/3UE3-AS5S] (analyzing rearrests of state prisoners released in 2005 in 30 states and finding that 86.9% of black prisoners and 80.9% of white prisoners were arrested in the nine years following their release); see also Dieterich et al., supra note 13, at 6 (“[I]n comparison with blacks, whites have much lower base rates of general recidivism . . . .”). Of course, the data on recidivism itself may be flawed. This consideration is discussed below. See infra text accompanying notes 33–37.Show More When the two groups at issue (whatever they are) have different rates of the trait predicted by the algorithm, it is impossible to achieve parity between the groups in both dimensions.19 19.This is true unless the tool makes no mistakes at all. Kleinberg et al., supra note 17, at 43:5–6.Show More The example discussed in Part I illustrates this phenomenon.20 20.See infra Section I.A.Show More This fact gives rise to the question: in which dimension is such parity more important and why?
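
The trade-off can also be stated compactly. The following identity is a standard relationship among the confusion-matrix quantities (it underlies the results cited in note 17, though the notation here is supplied for exposition rather than drawn from those papers), where p is a group’s base rate of recidivism, PPV is the positive predictive value of a high-risk score, and FPR and FNR are the false positive and false negative rates:

    \mathrm{FPR} \;=\; \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \left(1-\mathrm{FNR}\right)

If two groups share the same PPV and the same FNR but differ in p, their false positive rates must differ (unless the tool makes no mistakes at all), which is precisely the impossibility described above.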

These different measures are often described as different conceptions of fairness.21 21.For example, Berk et al. consider six different measures of algorithmic fairness. See Berk et al., supra note 17, at 12–15.Show More This is a mistake. The measure favored by Northpointe is relevant to what we ought to believe about a particular scored individual. If a high-risk score means something different for blacks than for whites, then we do not know whether to believe (or how much confidence to place in) the claim that a particular scored individual is likely to commit a crime in the future. The measure favored by ProPublica relates instead to what we ought to do. If law-abiding blacks and law-abiding whites are not equally likely to be mischaracterized by the score, we will not know whether or how to use the scores in making decisions. If we are comparing a measure that is relevant to what we ought to believe to one that is relevant to what we ought to do, we are truly comparing apples to oranges.

This conclusion does not straightforwardly suggest, however, that we should instead focus on the measure touted by ProPublica. Scholarly understanding of the significance of these measures is evolving quickly. Some computer scientists now argue that the lack of parity in the ProPublica measure is less meaningful than one might think.22 22.See Sam Corbett-Davies & Sharad Goel, The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning (arXiv, Working Paper No. 1808.00023v2, 2018), http://arxiv.org/abs/1808.00023 [https://perma.cc/ML4Y-EY6S].Show More The measure highlighted by ProPublica is better understood as a signal that something is likely amiss. Differences in the ratio of false positive rates to false negative rates indicate that the algorithmic tool may rely on data that are themselves infected with bias or that the algorithm may be compounding a prior injustice. Because these possibilities have normative implications for how the algorithm should be used, this measure relates to fairness.

The most promising way to enhance algorithmic fairness is to improve the accuracy of the algorithm overall.23 23.See Sumegha Garg et al., Tracking and Improving Information in the Service of Fairness (arXiv, Working Paper No. 1904.09942v2, 2019), http://arxiv.org/abs/1904.09942 [https://perma.cc/D8ZN-CJ83].Show More And we can do that by permitting the use of protected traits (like race and sex) within the algorithm to determine what other traits will be used to predict the target variable (like recidivism). For example, housing instability might be more predictive of recidivism for whites than for blacks.24 24.See Sam Corbett-Davies et al., Algorithmic Decision Making and the Cost of Fairness, 2017 Proc. 23d ACM SIGKDD Int’l Conf. on Knowledge Discovery and Data Mining 797, 805.Show More If the algorithm includes a racial classification, it can segment its analysis such that this trait is used to predict recidivism for whites but not for blacks. Although this approach would improve risk assessment and thereby lessen the inequity highlighted by ProPublica, many in the field believe this approach is off the table because it is prohibited by law.25 25.See id. (“[E]xplicitly including race as an input feature raises legal and policy complications, and as such it is common to simply exclude features with differential predictive power.”).Show More This is not the case.
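
What “segmenting the analysis” might look like is sketched below in Python. The feature names, group labels, and weights are hypothetical (they do not come from COMPAS or from the sources cited above); the sketch shows only the design choice at issue, namely that the protected attribute is used to select which group-specific model scores an individual, so that a feature such as housing instability can carry predictive weight for one group and not the other.

    # Minimal sketch of a group-segmented scoring model.
    # Feature names and weights are hypothetical and for illustration only.
    GROUP_MODELS = {
        "white": {"prior_arrests": 0.8, "housing_instability": 0.5},
        "black": {"prior_arrests": 0.8, "housing_instability": 0.0},
    }

    def risk_score(person: dict) -> float:
        """Score a person using the weights assigned to that person's group."""
        weights = GROUP_MODELS[person["race"]]
        return sum(w * person[feature] for feature, w in weights.items())

    example = {"race": "white", "prior_arrests": 2, "housing_instability": 1}
    print(risk_score(example))  # 0.8*2 + 0.5*1 = 2.1 under these made-up weights

In a real system the group-specific weights would be estimated from data rather than hand-set; the point is simply that the racial classification enters the model as a routing variable, which is the use whose legality Part III examines.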

The use of racial classifications only sometimes constitutes disparate treatment on the basis of race and thus only sometimes gives rise to strict scrutiny. The fact that some uses of racial classifications do not constitute disparate treatment reveals that the concept of disparate treatment is more elusive than is often recognized. This observation is important given the central role that the distinction between disparate treatment and disparate impact plays in equal protection doctrine and statutory anti-discrimination law. In addition, it is important because it opens the door to more creative ways to improve algorithmic fairness.

The Article proceeds as follows. Part I develops the conceptual claim. It shows that the two most prominent types of measures used to assess algorithmic fairness are geared to different tasks. One is relevant to belief and the other to decision and action. This Part begins with a detailed explanation of the two measures and then explores the factors that affect belief and action in individual cases. Turning to the comparative context, Part I argues that predictive parity (the measure favored by Northpointe) is relevant to belief but not directly to the fair treatment of different groups.

Part II develops the normative claim. It argues that differences in the ratio of false positives to false negatives between protected groups (a variation on the measure put forward by ProPublica) suggest unfairness, and it explains why this is so. This Part begins by clarifying three distinct ways in which the concept of fairness is used in the literature. It then explains the normative appeal of focusing on parity in the ratio of false positives to false negatives and, at the same time, why doing so can be misleading. Despite these drawbacks, Part II argues that the disparity in the ratio of false positive to false negative rates tells us something important about the fairness of the algorithm.

Part III explores what can be done to diminish this unfairness. It argues that using protected classifications like race and sex within algorithms can improve their accuracy and fairness. Because constitutional anti-discrimination law generally disfavors racial classifications, computer scientists and others who work with algorithms are reluctant to deploy this approach. Part III argues that this reluctance rests on an overly simplistic view of the law. Focusing on constitutional law and on racial classification in particular, this Part argues that the doctrine’s resistance to the use of racial classifications is not categorical. Part III explores contexts in which the use of racial classifications does not constitute disparate treatment on the basis of race and extracts two principles from these examples. Using these principles, this Part argues that the use of protected classifications within algorithms may well be permissible. A conclusion follows.

  * D. Lurton Massee, Jr. Professor of Law and Roy L. and Rosamond Woodruff Morgan Professor of Law at the University of Virginia School of Law. I would like to thank Charles Barzun, Aloni Cohen, Aziz Huq, Kim Ferzan, Niko Kolodny, Sandy Mayson, Tom Nachbar, Richard Schragger, Andrew Selbst, and the participants in the Caltech 10th Workshop in Decisions, Games, and Logic: Ethics, Statistics, and Fair AI, the Dartmouth Law and Philosophy Workshop, and the computer science department at UVA for comments and critique. In addition, I would like to thank Kristin Glover of the University of Virginia Law Library and Judy Baho for their excellent research assistance. Any errors or confusions are my own.