Concerns about online manipulation have centered on fears about undermining the autonomy of consumers and citizens. What has been overlooked is the risk that the same techniques of personalizing information online can also threaten equality. When predictive algorithms are used to allocate information about opportunities like employment, housing, and credit, they can reproduce past patterns of discrimination and exclusion in these markets. This Article explores these issues by focusing on the labor market, which is increasingly dominated by tech intermediaries. These platforms rely on predictive algorithms to distribute information about job openings, match job seekers with hiring firms, or recruit passive candidates. Because algorithms are built by analyzing data about past behavior, their predictions about who will make a good match for which jobs will likely reflect existing occupational segregation and inequality. When tech intermediaries cause discriminatory effects, they may be liable under Title VII, and Section 230 of the Communications Decency Act should not bar such actions. However, because of the practical challenges that litigants face in identifying and proving liability retrospectively, a more effective approach to preventing discriminatory effects should focus on regulatory oversight to ensure the fairness of algorithmic systems.
I. Introduction
Our online experiences are increasingly personalized. Facebook and Google micro-target advertisements designed to meet our immediate needs. Amazon, Netflix, and Spotify offer up books, movies, and music tailored to match our tastes. Our news feeds are populated with stories intended to appeal to our particular interests and biases. This drive toward increasing personalization is powered by complex machine learning algorithms built to discern our preferences and anticipate our behavior. Personalization offers benefits because companies can efficiently offer consumers the precise products and services they desire.
Online personalization, however, has come under considerable criticism lately. Shoshana Zuboff assails our current economic system, which is built on companies amassing and exploiting ever more detailed personal information.1 Ryan Calo and Tal Zarsky explain that firms are applying the insights of behavioral science to manipulate consumers by exploiting their psychological or emotional vulnerabilities.2 Daniel Susser, Beate Roessler, and Helen Nissenbaum describe how information technology is enabling manipulative practices on a massive scale.3 Julie Cohen similarly argues that “[p]latform-based, massively-intermediated processes of search and social networking are inherently processes of market manipulation.”4 In the political sphere as well, concerns have been raised about manipulation, with warnings that news personalization is creating “filter bubble[s]” and increasing polarization.5 These issues were highlighted by revelations that Cambridge Analytica sent personalized ads based on psychological profiles of eighty-seven million Facebook users in an effort to influence the 2016 presidential election.6 The extensive criticism of personalization is driven by concerns that online manipulation undermines personal autonomy and compromises rational decision making.
Largely overlooked in these discussions is the possibility that online manipulation also threatens equality. Online platforms increasingly operate as key intermediaries in the markets for employment, housing, and financial services—what I refer to as opportunity markets. Predictive algorithms are also used in these markets to segment the audience and determine precisely what information will be delivered to which users. The risk is that in doing so, these intermediaries will direct opportunities in ways that reproduce or reinforce historical forms of discrimination. Predictive algorithms are built by observing past patterns of behavior, and one of the enduring patterns in American economic life is the unequal distribution of opportunities along the lines of race, gender, and other personal characteristics. As a result, these systems are likely to distribute information about future opportunities in ways that reflect existing inequalities and may reinforce historical patterns of disadvantage.
The way in which information about opportunities is distributed matters, because these markets provide access to resources that are critical for human flourishing and well-being. In that sense, access to them is foundational. People need jobs and housing before they can act as consumers or voters. They need access to financial services in order to function in the modern economy. Of course, many other factors contribute to inequality, such as unequal educational resources, lack of access to health care, and over-policing in certain communities. Decisions by landlords, employers, or banks can also contribute to inequality. Tech intermediaries are thus just one part of a much larger picture. Nevertheless, they will be an increasingly important part as more and more transactions are mediated online.7 Because they control access to information about opportunities, they have the potential to significantly impact how these markets operate.
Online intermediaries have unprecedented potential to finely calibrate the distribution of information. In the past, traditional print or broadcast media might aim at a particular audience, but they could not prevent any particular individual from accessing information that they published. And if an advertiser tried to signal its interest in only a particular group—as has happened with real estate ads that used code words or featured only white models—the attempts at exclusion were plainly visible. In contrast, online intermediaries have the ability to precisely target an audience, selecting some users to receive information and others to be excluded in ways that are not at all transparent.
The issue is illustrated by Facebook’s ad-targeting tools. Several lawsuits alleged that employers or landlords could use the company’s tools to exclude users on the basis of race, gender, or age from their audience.8 To a large extent, these concerns were resolved by a recent settlement in which Facebook agreed to bar the use of sensitive demographic variables to target employment, housing, and credit advertisements.9 However, the settlement failed to address another potential source of bias—Facebook’s ad-delivery algorithm, which determines which users within a targeted audience actually receive an ad. As explained below, even if an advertiser uses neutral targeting criteria and intends to reach a diverse audience, the ad-delivery algorithm may distribute information about opportunities in a biased way.10 This is an example of a much broader concern—namely, that when predictive algorithms are used to allocate access to opportunities, there is a significant risk that they will do so in a way that reproduces existing patterns of inequality and disadvantage.
Concerns about the distributive effects of predictive algorithms are relevant to all kinds of opportunity markets, including for housing, employment, and basic financial services. Each of these markets operates somewhat differently and is regulated under different laws. They deserve separate attention and more detailed consideration than can be provided here. This Article focuses on the labor market and the relevant laws regulating it; however, the issues it raises likely plague other opportunity markets as well.
Examining employment practices reveals dramatic change. Just a couple of decades ago, employers had a handful of available strategies for recruiting new workers, such as advertising in newspapers or hiring through an employment agency. Today, firms increasingly rely on tech intermediaries to fill job openings.11 Recent surveys suggest that somewhere from 84% to 93% of job recruiters use online strategies to find potential employees.12 Employers distribute information about positions through social media.
They also rely on specialized job platforms like ZipRecruiter, LinkedIn, and Monster to recruit applicants and recommend the strongest candidates.13 In addition, passive recruiting—using data to identify workers who are not actively looking for another position—is a growing strategy for recruiting new talent.14
The use of algorithms and artificial intelligence in the hiring process has not gone unnoticed. Numerous commentators and scholars have described how employers are using automated decision systems and have raised concerns that these developments may cause discrimination or threaten employee privacy.15 However, previous work has focused on whether employers can or should be held liable when they use predictive algorithms or other artificial intelligence tools to make personnel decisions. What is missing from this literature is close scrutiny of how tech intermediaries are shaping labor markets and the implications for equality.
This Article undertakes that analysis, arguing that the use of predictive algorithms by labor market intermediaries risks reinforcing or even worsening existing patterns of inequality and that these intermediaries should be accountable for those effects. A number of studies have documented instances of biased delivery of employment ads.16 Although the exact mechanism is unclear, it should not be surprising that predictive algorithms distribute information about job opportunities in biased ways. These algorithms are built by analyzing existing data, and one of the most persistent facts of the U.S. labor market is ongoing occupational segregation along the lines of race and gender.17 If predictions are based solely on observations about past behavior—without regard to what social forces shaped that behavior—then they are likely to reproduce those patterns.
Tech intermediaries may not intend to cause discriminatory effects, but they are nevertheless responsible for them.18 They make choices when designing the algorithms that distribute information about job opportunities or suggest the best matches for job seekers and hiring firms. In doing so, they decide what goals to optimize—typically revenue—and those choices influence how information is channeled, making some opportunities visible and obscuring others. Thus, these technologies shape how market participants—both workers and employers—perceive their available options and thereby also influence their behavior.19 When these intermediaries structure access to opportunities in ways that reflect historical patterns of discrimination and exclusion, they pose a threat to workplace equality. Even if the discriminatory effects are unintentional, the harm to workers can be real. Employment discrimination law has long targeted discriminatory effects, not just invidious motivation.20
The risk that tech intermediaries will contribute to workplace inequality poses a number of challenges for the law. Discrimination law has largely focused on employers, examining their decisions and practices for discriminatory intent or impact. However, if bias affects how potential applicants are screened out before they even interact with a hiring firm, then focusing on employer behavior will be inadequate to dismantle patterns of occupational segregation. Holding tech intermediaries directly responsible for their effects on labor markets, however, will raise a different set of challenges. Some of these are legal, such as whether existing law reaches these types of intermediaries,21 and whether they can avoid liability by relying on Section 230 of the Communications Decency Act (CDA),22 which gives websites a defense to some types of liability. Other obstacles are more practical in nature, which suggests that preventing discriminatory effects may require alternative strategies.23
This Article proceeds as follows. Part II first explores the role that tech intermediaries play in the labor market and how targeting tools can be misused for discriminatory purposes. It next explains that even if employers are no longer permitted to use discriminatory targeting criteria, a significant risk remains that platforms’ predictive algorithms will distribute access to opportunities in ways that reproduce existing patterns of inequality. Because tech intermediaries have a great deal of power to influence labor market interactions, and may do so in ways that are not transparent, I argue in Part II that they should bear responsibility when they cause discriminatory effects.
Parts III and IV consider the relevant legal landscape. Part III discusses how the growing importance of tech intermediaries in the labor market poses challenges for existing anti-discrimination law. It first shows how the question “who is an applicant?”—an issue critical for finding employer liability—is complicated as platforms increasingly mediate job seekers’ interactions with firms. It then explores the possibilities for holding these intermediaries directly liable under existing employment discrimination law, either as employment agencies or for interfering with third party employment relationships. Part IV considers some obstacles to holding tech intermediaries liable for their discriminatory labor market effects. Section IV.A examines and rejects the argument that Section 230 of the Communications Decency Act would automatically bar such claims. Section IV.B explains that significant practical obstacles remain, suggesting that a post hoc liability regime may not be the best way to prevent discriminatory harms. Thus, Section IV.B also argues that we should look to regulatory models in order to minimize the risks of discrimination from the use of predictive algorithms.
- * Daniel Noyes Kirby Professor of Law, Washington University School of Law, St. Louis, Missouri. I am grateful to Victoria Schwarz, Miranda Bogen, Aaron Riecke, Greg Magarian, Neil Richards, Peggie Smith, Dan Epps, John Inazu, Danielle Citron, Ryan Calo, Andrew Selbst, Margot Kaminski, and Felix Wu for helpful comments on earlier drafts of this Article. I also benefited from feedback from participants at the 2019 Privacy Law Scholar’s Conference, Washington University School of Law’s faculty workshop, and Texas A&M School of Law’s Faculty Speaker Series. Many thanks to Adam Hall, Theanne Liu, Joseph Tomchak, and Samuel Levy for outstanding research assistance. ↑
- Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 8–11 (2019). ↑
- See Ryan Calo, Digital Market Manipulation, 82 Geo. Wash. L. Rev. 995, 996, 999 (2014); Tal Z. Zarsky, Privacy and Manipulation in the Digital Age, 20 Theoretical Inquiries L. 157, 158, 160–61 (2019). ↑
- Daniel Susser, Beate Roessler & Helen Nissenbaum, Online Manipulation: Hidden Influences in a Digital World, 4 Geo. L. Tech. Rev. 1, 2, 10 (2019). ↑
- Julie E. Cohen, Law for the Platform Economy, 51 U.C. Davis L. Rev. 133, 165 (2017); see also Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism 75–77, 83–89, 96 (2019) (describing how techniques for behavioral surveillance and micro-targeting contribute to social harms such as polarization and extremism). ↑
- See, e.g., Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You 13–14 (2011); Michael J. Abramowitz, Stop the Manipulation of Democracy Online, N.Y. Times (Dec. 11, 2017), https://www.nytimes.com/2017/12/11/opinion/fake-news-russia-kenya.html [https://perma.cc/9YWF-PED7]; James Doubek, How Disinformation and Distortions on Social Media Affected Elections Worldwide, NPR (Nov. 16, 2017, 2:28 PM), https://www.npr.org/sections/alltechconsidered/2017/11/16/564542100/how-disinformation-and-distortions-on-social-media-affected-elections-worldwide [https://perma.cc/ZJ97-GQSZ]; Jon Keegan, Blue Feed, Red Feed: See Liberal Facebook and Conservative Facebook, Side by Side, Wall St. J. (Aug. 19, 2019), http://graphics.wsj.com/blue-feed-red-feed/ [https://perma.cc/GJA8-4U9W]. ↑
- Carole Cadwalladr & Emma Graham-Harrison, Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach, Guardian (Mar. 17, 2018, 6:03 PM), https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election [https://perma.cc/72CR-9Y8K]; Alex Hern, Cambridge Analytica: How Did It Turn Clicks into Votes?, Guardian (May 6, 2018, 3:00 AM), https://www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie [https://perma.cc/AD8H-PF3M]; Matthew Rosenberg, Nicholas Confessore & Carole Cadwalladr, How Trump Consultants Exploited the Facebook Data of Millions, N.Y. Times (Mar. 17, 2018), https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html [https://perma.cc/3WYQ-3YKP]. ↑
- See, e.g., Miranda Bogen & Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias 5–6 (2018) (describing the role of platforms in the hiring process); Geoff Boeing, Online Rental Housing Market Representation and the Digital Reproduction of Urban Inequality, 52 Env’t & Plan. A 449, 450 (2019) (documenting the growing impact of Internet platforms in shaping the rental housing market). ↑
- See infra Section II.B. ↑
- See Galen Sherwin & Esha Bhandari, Facebook Settles Civil Rights Cases by Making Sweeping Changes to Its Online Ad Platform, ACLU (Mar. 19, 2019, 2:00 PM), https://www.aclu.org/blog/womens-rights/womens-rights-workplace/facebook-settles-civil-rights-cases-making-sweeping [https://perma.cc/H6D6-UMJ4]. ↑
- See infra Section II.C. ↑
- See Bogen & Rieke, supra note 7, at 5–6. ↑
- Soc’y for Human Res. Mgmt., SHRM Survey Findings: Using Social Media for Talent Acquisition—Recruitment and Screening 3 (Jan. 7, 2016), https://www.shrm.org/hr-today/trends-and-forecasting/research-and-surveys/Documents/SHRM-Social-Media-Recruiting-Screening-2015.pdf [https://perma.cc/L6NT-N4KL]. The Society for Human Resource Management conducts biennial surveys of job recruiters. The surveys demonstrated an increase in the use of online recruiting by employers, rising from fifty-six percent in 2011 to seventy-seven percent in 2013 to eighty-four percent in 2015. Id.; Soc’y for Human Res. Mgmt., SHRM Survey Findings: Social Networking Websites and Recruiting/Selection 2 (Apr. 11, 2013), https://www.shrm.org/hr-today/trends-and-forecasting/research-and-surveys/Pages/shrm-social-networking-websites-recruiting-job-candidates.aspx [https://perma.cc/U4HN-E7U7]; see also Jobvite’s New 2015 Recruiter Nation Survey Reveals Talent Crunch, Jobvite (Sept. 22, 2015), https://www.jobvite.com/news_item/jobvites-new-2015-recruiter-nation-survey-reveals-talent-crunch-95-recruiters-anticipate-similar-increased-competition-skilled-workers-coming-year-86-expect-exp/ [https://perma.cc/H66S-8E5Z] (stating that 92% of recruiters use social media to discover or evaluate candidates). ↑
- See Bogen & Rieke, supra note 7, at 5, 19–20, 24. ↑
- Id. at 22. ↑
- See, e.g., Ifeoma Ajunwa, Kate Crawford & Jason Schultz, Limitless Worker Surveillance, 105 Calif. L. Rev. 735, 738–39 (2017); Ifeoma Ajunwa, The Paradox of Automation as Anti-Bias Intervention, 41 Cardozo L. Rev. (forthcoming 2020) (manuscript at 14) (on file with author); Richard A. Bales & Katherine V.W. Stone, The Invisible Web of Work: Artificial Intelligence and Electronic Surveillance in the Workplace, 41 Berkeley J. Lab. & Emp. L. (forthcoming 2020) (manuscript at 3) (on file with author); Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 673–75 (2016); Matthew T. Bodie, Miriam A. Cherry, Marcia L. McCormick & Jintong Tang, The Law and Policy of People Analytics, 88 U. Colo. L. Rev. 961, 989–92 (2017); James Grimmelmann & Daniel Westreich, Incomprehensible Discrimination, 7 Calif. L. Rev. Online 164, 170–72, 176–77 (2017); Jeffrey M. Hirsch, Future Work, 2020 U. Ill. L. Rev. (forthcoming 2020) (manuscript at 3) (on file with author); Pauline T. Kim, Data-Driven Discrimination at Work, 58 Wm. & Mary L. Rev. 857, 860–61 (2017) [hereinafter Kim, Data-Driven Discrimination at Work]; Pauline T. Kim, Data Mining and the Challenges of Protecting Employee Privacy Under U.S. Law, 40 Comp. Lab. L. & Pol’y J. 405, 406 (2019); Pauline T. Kim & Erika Hanson, People Analytics and the Regulation of Information Under the Fair Credit Reporting Act, 61 St. Louis U. L.J. 17, 18–19 (2016); Charles A. Sullivan, Employing AI, 63 Vill. L. Rev. 395, 396 (2018). ↑
- See infra Section II.C. ↑
- See infra Section II.D. ↑
- Building predictive models involves numerous choices, many of them implicating value judgments. See, e.g., Barocas & Selbst, supra note 15, at 674; Margot E. Kaminski, Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability, 92 S. Cal. L. Rev. 1529, 1539 (2019); David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653, 703–04 (2017); Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 Fordham L. Rev. 1085, 1130–31 (2018). ↑
- Karen Levy and Solon Barocas have explored how the design choices made by platforms “can both mitigate and aggravate bias.” Karen Levy & Solon Barocas, Designing Against Discrimination in Online Markets, 32 Berkeley Tech. L.J. 1183, 1185 (2017). The focus of their analysis is on user bias in online markets like ride matching, consumer-to-consumer sales, short-term rentals, and dating. Id. at 1189–90. Because the design choices platforms make will structure users’ interactions with one another, these choices influence behavior, affecting whether or to what extent users can act on explicit or implicit biases. Levy and Barocas review multiple platforms across domains and develop a taxonomy of policy and design elements that have been used to address the risks of bias. Although the focus of this Article is on the impact of predictive algorithms rather than user bias, the issues are obviously interrelated. Past bias by users can cause predictive algorithms to discriminate. Conversely, algorithmic outputs in the form of recommendations or rankings can activate or exacerbate implicit user biases. To that extent, some, but not all, of the strategies they identify may be relevant to addressing bias in online opportunity markets. ↑
- See Griggs v. Duke Power Co., 401 U.S. 424, 431 (1971). ↑
- See infra Part III. ↑
- 47 U.S.C. § 230 (2012). ↑
- See infra Section IV.B. ↑