Race in the Machine: Racial Disparities in Health and Medical AI

Article — Volume 110, Issue 2

110 Va. L. Rev. 243
By Khiara M. Bridges*

*Professor of Law, UC Berkeley School of Law. I would like to thank W. Nicholson Price, Rebecca Wexler, and the participants of a faculty workshop at the UC Berkeley School of Law, who offered incisive feedback on earlier versions of this Article. I am grateful to Edna Lewis in the Berkeley Law Library for truly wonderful research support. Thanks are also owed to Arni Daroy, Natalie Giron, Taylor Fox, Simone Lieban Levine, Hayley MacMillen, Tiffanie Obilor, and Samira Seraji, who all provided superb research assistance related to this Article. As always, I am thankful for my perfect little Belgian endive, Gert Reynaert, who makes my life both sweet and savory.

What does racial justice—and racial injustice—look like with respect to artificial intelligence in medicine (“medical AI”)? This Article offers that racial injustice might look like a country in which law and ethics have decided that it is unnecessary to inform people of color that their health is being managed by a technology that likely encodes the centuries of inequitable medical care that people of color have received. Racial justice might look like an informed consent process that is reformed in light of this reality. This Article makes this argument in four Parts. Part I canvasses the deep and wide literature that documents that people of color suffer higher rates of illness than their white counterparts while also suffering poorer health outcomes than their white counterparts when treated for these illnesses. Part II then provides an introduction to AI and explains the uses that scholars and developers predict medical AI technologies will have in healthcare, focusing specifically on the management of pregnancy. Part III subsequently serves as a primer on algorithmic bias—that is, systematic errors in the operation of an algorithm that result in a group being unfairly advantaged or disadvantaged. This Part argues that we should expect algorithmic bias that results in people of color receiving inferior pregnancy-related healthcare, and healthcare generally, because medical AI technologies will be developed, trained, and deployed in a country with striking and unforgivable racial disparities in health.

Part IV forms the heart of the Article, making the claim that obstetricians, and healthcare providers generally, should disclose during the informed consent process their reliance on, or consultation with, medical AI technologies that likely encode inequities. To be precise, providers should have to tell their patients that an algorithm has informed the recommendation that the provider is making; moreover, providers should inform their patients how racial disparities in health may have impacted the algorithm’s accuracy. It supports this argument by recounting the antiracist, anti-white supremacist—indeed radical—origins of informed consent in the Nuremberg Trials’ rebuke of Nazi “medicine.” This Part argues that the introduction into the clinical encounter of medical AI—and the likelihood that these technologies will perpetuate racially inequitable healthcare while masking the same—is an invitation to reform the informed consent process to make it more consistent with the commitments that spurred its origination. This Part proposes that, given the antiracist roots of the doctrine of informed consent, it would be incredibly ironic to allow the informed consent process to permit a patient—and, particularly, a patient of color—to remain ignorant of the fact that their medical care is being managed by a device or system that likely encodes racism. This Part argues that informing patients about the likelihood of race-based algorithmic bias—and the reasons that we might expect race-based algorithmic bias—may, in fact, be a prerequisite for actually transforming the inequitable social conditions that produce racial disparities in health and healthcare.

Introduction

As artificial intelligence (“AI”) technologies proliferate across sundry sectors of society—from mortgage lending and marketing to policing and public health—it has become apparent to many observers that these technologies will need to be regulated to ensure both that their social benefits outweigh their social costs and that these costs and benefits are distributed fairly across society. In October 2022, the Biden Administration announced its awareness of the dangers that “technology, data, and automated systems” pose to individual rights.1 Through its Office of Science and Technology Policy, the Administration declared the need for a coordinated approach to address the problems that AI technologies have generated—problems that include “[a]lgorithms used in hiring and credit decisions [that] have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination,” “[u]nchecked social media data collection [that] has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity,” and, most germane to the concerns of this Article, “systems [that are] supposed to help with patient care [but that] have proven unsafe, ineffective, or biased.”2

As an initial measure in the effort to eliminate—or, at least, contain—the harms that automation poses, the Administration offers a Blueprint for an AI Bill of Rights, which consists of “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”3 Crucially, the Blueprint identifies “notice and explanation” as a central element in a program that protects the rights of individuals in an increasingly automated society.4 That is, the Biden Administration proposes that in order to ensure that AI does not threaten “civil rights or democratic values,” individuals should be informed when “an automated system is being used,” and they should “understand how and why it contributes to outcomes that impact” them.5 To apply it to the context to which this Article is most attuned, if a hospital system or healthcare provider relies upon an AI technology when making decisions about a patient’s care, then the patient whose health is being managed by the technology ought to know about the technology’s usage.

Although the Biden Administration appears committed to the idea that an individual’s rights are violated when they are unaware that an AI technology has had some impact on the healthcare that they have received, many actors on the ground, including physicians and other healthcare providers, do not share this commitment. As one journalist reports:

[T]ens of thousands of patients hospitalized at one of Minnesota’s largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients [have] any idea about the AI involved in their care. That’s because frontline clinicians . . . generally don’t mention the AI whirring behind the scenes in their conversations with patients.6

This health system is hardly unique in its practice of keeping this information from patients. “The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects who see little value . . . in raising the subject.”7 Moreover, while these actors see few advantages associated with informing patients that AI has informed a healthcare decision or recommendation, they see many disadvantages, with the disclosure operating as a “distraction” and “undermin[ing] trust.”8

We exist in a historical moment in which the norms around notice and consent in the context of AI in healthcare have not yet emerged—with some powerful actors in the federal government proposing that patients are harmed when they are not notified that AI has impacted their healthcare, and other influential actors on the ground proposing that patients are harmed when they are notified that AI has impacted their healthcare.9 As we think about the shape that these norms ought to take, this Article implores us to keep in mind the fact of racial inequality and the likelihood that AI will have emerged from, and thereby reflect, that racial inequality. Indeed, this Article’s central claim is that the well-documented racial disparities in health that have existed in the United States since the dawn of the nation demand that providers inform all patients—but especially patients of color—that they have relied on or consulted with an AI technology when providing healthcare to them.

Although much has been written about AI in healthcare,10 or medical AI, very little has been written about the effects that medical AI can and should have on the informed consent process.11 Moreover, no article to date has interrogated what the reality of racial disparities in health should mean with respect to obtaining a patient’s informed consent to a medical intervention (or nonintervention) that an AI system has recommended. This Article offers itself as the beginning of that conversation. It makes the case that we ought to reform the informed consent process to ensure that patients of color are aware that their health is being managed by a technology that likely encodes the centuries of inequitable medical care that people of color have received in this country and around the world.

The Article proceeds in four Parts. Part I canvasses the deep and wide literature that documents that people of color suffer higher rates of illness than their white counterparts while also suffering poorer health outcomes than their white counterparts when treated for these illnesses. These racial disparities in health are also present in the context of pregnancy, a fact that is illustrated most spectacularly by the often-quoted statistic describing black women’s three- to four-fold increased risk of dying from a pregnancy-related cause as compared to white women.12 Part II then provides an introduction to AI and explains the uses that scholars and developers predict medical AI technologies will have in healthcare and, specifically, the management of pregnancy. Part III subsequently serves as a primer on algorithmic bias—that is, systematic errors in the operation of an algorithm that result in a group being unfairly advantaged or disadvantaged. This Part explains the many causes of algorithmic bias and gives examples of algorithmic bias in medicine and healthcare. This Part argues that we should expect algorithmic bias from medical AI that results in people of color receiving inferior healthcare. This is because medical AI technologies will be developed, trained, and deployed in a country with striking and unforgivable racial disparities in health.
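To preview that mechanism in concrete terms, consider a minimal sketch. It is not drawn from this Article; it uses synthetic data and an assumed design, documented in the algorithmic bias literature, in which healthcare spending serves as the proxy label for healthcare need. Because less has historically been spent on black patients than on comparably sick white patients, a model that predicts cost faithfully will understate black patients’ illness, and a program that directs extra care to the patients with the highest predicted costs will flag equally sick black patients less often.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic, deliberately simplified data. "need" is a patient's true burden
# of illness. Historically, less has been spent on black patients at the same
# level of need, so the recorded "cost" label understates their illness.
black = rng.random(n) < 0.5                   # group indicator
need = rng.normal(50, 10, n)                  # true (unobserved) health need
cost = need * np.where(black, 0.8, 1.0) + rng.normal(0, 3, n)

# A care-management program flags the top 10% of patients by cost. Here the
# cost label itself stands in for a perfectly accurate cost predictor.
flagged = cost >= np.quantile(cost, 0.9)

# Equally sick black patients are flagged less often, and a flagged black
# patient is, on average, sicker than a flagged white patient.
print(f"share of black patients flagged: {flagged[black].mean():.3f}")
print(f"share of white patients flagged: {flagged[~black].mean():.3f}")
print(f"mean true need, flagged black patients: {need[flagged & black].mean():.1f}")
print(f"mean true need, flagged white patients: {need[flagged & ~black].mean():.1f}")
```

The sketch is illustrative only, but it captures the point developed below: the model need not malfunction to be biased, because a predictor that is perfectly accurate with respect to a label that encodes inequitable care will faithfully reproduce that inequity.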

Part IV forms the heart of the Article. It begins by asking a question: Will patients of color even want medical AI? There is reason to suspect that significant numbers of them do not. Media attention to the skepticism with which many black people initially viewed COVID-19 vaccines has made the public newly aware of the higher levels of mistrust that black people, as a racial group, have toward healthcare institutions and their agents. That is, the banality of racial injustice has made black people more suspicious of medical technologies. This fact suggests that ethics—and justice—require providers to inform their patients of the use of a medical technology that likely embeds racial injustice within it.

The Part continues by making the claim that healthcare providers should disclose during the informed consent process their reliance on medical AI. To be precise, providers should have to tell their patients that an algorithm has affected the providers’ decision-making around the patients’ healthcare; moreover, providers should inform their patients how racial disparities in health may have impacted the algorithm’s predictive accuracy. This Part argues that requiring these disclosures as part of the informed consent process revives the antiracist, anti-white supremacist origins of the informed consent process. To be sure, the practice of informed consent originated in the Nuremberg Trials’ rebuke of Nazi medicine. These defiant, revolutionary origins have been expunged from the perfunctory form that the informed consent process has taken at present. Resuscitating the rebelliousness that is latent within informed consent will not only help to protect patient autonomy in the context of medical AI but may also be the condition of possibility for transforming the social conditions that produce racial disparities in health and healthcare. That is, the instant proposal seeks to call upon the rebellious roots of the doctrine of informed consent and to use the doctrine as a technique of political mobilization. A short conclusion follows.

Two notes before beginning: First, although this Article focuses on medical AI in pregnancy and prenatal care, its argument is applicable to informed consent in all contexts—from anesthesiology to x-rays—in which a provider might utilize a medical AI device. Concentrating on pregnancy and prenatal care allows the Article to offer concrete examples of the phenomena under discussion and, in so doing, make crystal clear the exceedingly high stakes of our societal and legal decisions in this area.

Second, the moment that a provider consults a medical AI device when delivering healthcare to a patient of color certainly is not the first occasion in that patient’s life in which racial disenfranchisement may come to impact the healthcare that they receive. That is, we can locate racial bias and exclusion at myriad sites within healthcare, medicine, and the construction of medical knowledge well before a clinical encounter in which medical AI is used. For example: people of color are underrepresented within clinical trials that test the safety and efficacy of drugs—a fact that might impact our ability to know whether a drug actually is safe and effective for people of color.13 For example: the National Institutes of Health (“NIH”) and the National Science Foundation (“NSF”) fund medical research conducted by investigators of color at lower rates than that conducted by white investigators14—a fact that might contribute to the underfunding of medical conditions that disproportionately impact people of color. For example: most medical schools still approach race as a genetic fact instead of a social construction, with the result being that most physicians in the United States have not been disabused of the notion that people of color—black people, specifically—possess genes and genetic variations that make them get sicker and die earlier than their white counterparts.15 For example: pulse oximeters, which use red and infrared light to measure an individual’s blood oxygen saturation levels, are so common as to be called ubiquitous, even though it is well known that the devices do not work as well on more pigmented skin.16 For example: most clinical studies that are used to establish evidence-based practices are conducted in well-resourced facilities, making their generalizability to more contingently equipped and more unreliably funded facilities uncertain.17 For example: many research studies do not report their findings by race, thereby impeding our ability to know whether the studies’ results are equally true for all racial groups.18 And so on. If providers ought to notify their patients (especially their patients of color) that the provider has relied upon medical AI when caring for the patient, then it is likely true that providers similarly ought to notify their patients about racial inequity in other contexts as well. That is, there is a compelling argument that when a provider prescribes a medication to a patient, they might need to notify the patient that precious few people who were not white cisgender men participated in the clinical trial of the medication.19 There is a compelling argument that when a provider tells a black patient that the results of her pulmonary function test were “normal,” they might also need to inform that patient that if she were white, her results would be considered “abnormal,” as the idea that the races are biologically distinct has long informed notions of whether a set of lungs is healthy or not.20 There is a compelling argument that when a provider affixes a pulse oximeter to the finger of a patient of color, they might also need to inform that patient that the oximeter’s readings may be inaccurate—and the care that she receives based on those readings may be inferior21—given the widely known and undisputed fact that such devices do not work as well on darker skin. There is a compelling argument that when a physician tells a pregnant patient laboring in a safety net hospital that the evidence-based practice for patients presenting in the way that she presents is an artificial rupture of membranes (“AROM”) to facilitate the progression of the labor, they might also need to inform the patient that the studies that established AROM as an evidence-based practice were conducted in well-funded research hospitals that were affiliated with universities.22 There is a compelling argument that when a physician tells a forty-year-old black patient that he does not need to be screened for colorectal cancer until age forty-five, they might also need to inform the patient that the studies that established forty-five as the age when such screenings should commence did not report their findings by race.23 And so on.

It does not defeat this Article’s claim to observe that racial bias and exclusion are pervasive throughout medicine and healthcare and that providers in many contexts outside of the use of medical AI ought to notify patients how this bias and exclusion may affect the healthcare that they are receiving. Indeed, it is seductive to claim in those other contexts that it is better to fix the inequities in healthcare than to tell patients of color about them—a fact that is also true in the context of medical AI. However, fixing the inequities in healthcare in those other contexts and telling patients about them are not mutually exclusive—a fact that is also true in the context of medical AI. And as Part IV argues, telling patients about the inequities in those other contexts might be the condition of possibility of fixing the inequities—a fact that is also true in the context of medical AI.

Essentially, this Article’s claim may be applied in a range of circumstances. In this way, this Article’s investigation into how algorithmic bias in medical AI should affect the informed consent process is simply a case study of a broader phenomenon. This Article’s insights vis-à-vis medical AI are generalizable to all medical interventions and noninterventions.

  1.  See Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, The White House Off. of Sci. & Tech. Pol’y, https://www.whitehouse.gov/ostp/ai-bill-of-rights/ [https://perma.cc/E5GS-6ZP3] (last visited Jan. 5, 2024). Some states and cities have also initiated efforts to regulate AI. See, e.g., Laura Schneider, Debo Adegbile, Ariella Feingold & Makenzie Way, NYC Soon to Enforce AI Bias Law, Other Jurisdictions Likely to Follow, WilmerHale (Apr. 10, 2023), https://www.wilmerhale.com/insights/client-alerts/20230410-nyc-soon-to-enforce-ai-bias-law-other-jurisdictions-likely-to-follow [https://perma.cc/K47J-XZUQ] (“New York City’s Department of Consumer and Worker Protection (DCWP) is expected to begin enforcing the City’s novel artificial intelligence (AI) bias audit law on July 5, 2023. This law prohibits the use of automated decision tools in employment decisions within New York City unless certain bias audit, notice, and reporting requirements are met.”); Jonathan Kestenbaum, NYC’s New AI Bias Law Broadly Impacts Hiring and Requires Audits, Bloomberg Law (July 5, 2023, 5:00 AM), https://news.bloomberglaw.com/us-law-week/nycs-new-ai-bias-law-broadly-impacts-hiring-and-requires-audits [https://perma.cc/L94C-X3BN] (observing that the “New Jersey Assembly is considering a limit on use of AI tools in hiring unless employers can prove they conducted a bias audit,” that “Maryland and Illinois have proposed laws that prohibit use of facial recognition and video analysis tools in job interviews without consent of the candidates,” and that “the California Fair Employment and Housing Council is mulling new mandates that would outlaw use of AI tools and tests that could screen applicants based on race, gender, ethnicity, and other protected characteristics”); Attorney General Bonta Launches Inquiry into Racial and Ethnic Bias in Healthcare Algorithms, State of Cal. Dep’t of Just. Off. of the Att’y Gen. (Aug. 31, 2022), https://oag.ca.gov/news/press-releases/attorney-general-bonta-launches-inquiry-racial-and-ethnic-bias-healthcare [https://perma.cc/ERC4-GVJJ] (“California Attorney General Rob Bonta today sent letters to hospital CEOs across the state requesting information about how healthcare facilities and other providers are identifying and addressing racial and ethnic disparities in commercial decision-making tools. The request for information is the first step in a DOJ inquiry into whether commercial healthcare algorithms—types of software used by healthcare providers to make decisions that affect access to healthcare for California patients—have discriminatory impacts based on race and ethnicity.”).
  2.  See Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, supra note 1.
  3.  Id.
  4.  Id.
  5.  Id.
  6.  Rebecca Robbins & Erin Brodwin, An Invisible Hand: Patients Aren’t Being Told About the AI Systems Advising Their Care, STAT (July 15, 2020), https://www.statnews.com/2020/07/15/artificial-intelligence-patient-consent-hospitals/ [https://perma.cc/R3F5-NNX4].
  7.  Id.
  8.  Id.
  9.  See also Attorney General Bonta Launches Inquiry into Racial and Ethnic Bias in Healthcare Algorithms, supra note 1 (understanding as problematic the fact that some AI tools used in healthcare “are not fully transparent to healthcare consumers”); cf. Schneider et al., supra note 1 (noting that New York City’s law regulating AI in employment requires an employer to provide “applicants and employees who reside in New York City notice of its use of AI in hiring and/or promotion decisions, either via website, job posting, mail or e-mail”).

    Interestingly, some investigations have shown that some patients do not want to know when physicians and hospital administrators rely on medical AI when managing their healthcare. See Robbins & Brodwin, supra note 6 (reporting that some patients who were interviewed stated that “they wouldn’t expect or even want their doctor to mention” the use of medical AI and stating that these patients “likened it to not wanting to be privy to numbers around their prognosis, such as how much time they might expect to have left, or how many patients with their disease are still alive after five years”). However, other studies have shown that patients do desire this information. See Anjali Jain et al., Awareness of Racial and Ethnic Bias and Potential Solutions to Address Bias with Use of Health Care Algorithms, JAMA Health F., June 2, 2023, at 10, https://jamanetwork.com/journals/jama-health-forum/fullarticle/2805595 [https://perma.cc/9FMK-E4VV] (discussing a “recent, nationally representative survey” that showed that “patients . . . wanted to know when [AI] was involved in their care”).

  10.  Indeed, volumes have been written about algorithmic bias, what AI technologies mean with respect to data privacy, and how we ought to regulate AI inside the medical context. See generally The Oxford Handbook of Digital Ethics (Carissa Véliz ed., 2021).
  11.  See I. Glenn Cohen, Informed Consent and Medical Artificial Intelligence: What to Tell the Patient?, 108 Geo. L.J. 1425, 1428 (2020) (noting that his Article, which was published just three years ago, was “the first to examine in-depth how medical AI / [machine learning] intersects with our concept of informed consent”).
  12.  Elizabeth A. Howell, Reducing Disparities in Severe Maternal Morbidity and Mortality, 61 Clinical Obstetrics & Gynecology 387, 387 (2018).
  13.  See The Nat’l Acads. of Scis., Eng’g & Med., Improving Representation in Clinical Trials and Research: Building Research Equity for Women and Underrepresented Groups 24 (Kirsten Bibbins-Domingo & Alex Helman eds., 2022), https://nap.nationalacademies.org/catalog/26479/improving-representation-in-clinical-trials-and-research-building-research-equity [https://perma.cc/FE2H-9YC5] (explaining that “research has demonstrated that many groups underrepresented and excluded in clinical research can have distinct disease presentations or health circumstances that affect how they will respond to an investigational drug or therapy” and that “[s]uch differences contribute to variable therapeutic responses and necessitate targeted efficacy and safety evaluation”). An FDA report of clinical trials that took place between 2015 and 2019 revealed that while non-Hispanic white people constituted only 61% of the general population in the United States, they were 78% of trial participants. See id. at 35; see also id. at 44–45 (“Even recently completed trials have failed to include enrollment consistent with the distribution of disease across the population—a Phase 2 trial of crenezumab in Alzheimer’s disease with 360 participants across 83 sites in 6 countries reported 97.5 percent of participants being white, and only 2.8 percent of all participants being Hispanic.”).

    Notably, clinical trials only rarely include pregnant and lactating people. See id. at 40. This means that when most medications are introduced into the market, their safety and efficacy vis-à-vis pregnant and lactating people are unknown—although it is quite common for people to take medications while pregnant or lactating. See id. (“During pregnancy and lactation, greater than 90 percent of these individuals take at least one medication, either to treat pregnancy-related complications or to treat ongoing medical issues.”).

  14.  See Christine Yifeng Chen et al., Meta-Research: Systemic Racial Disparities in Funding Rates at the National Science Foundation, eLife, Nov. 29, 2022, at 2, https://doi.org/10.7554/eLife.83071 [https://perma.cc/NFS8-T3LB] (showing that the National Science Foundation funded proposals by white principal investigators at +8.5% relative to the average funding rate while funding proposals by Asian, black, and Native Hawaiian/Pacific Islander principal investigators at −21.2%, −8.1%, and −11.3% relative to the average funding rate, respectively); Donna K. Ginther et al., Race, Ethnicity, and NIH Research Awards, 333 Science 1015, 1016 (2011), https://doi.org/10.1126/science.1196783 [https://perma.cc/NQA9-LYMG] (showing that the National Institutes of Health funded proposals by black principal investigators at close to half the rate of white principal investigators).
  15.  See Christina Amutah et al., Misrepresenting Race—The Role of Medical Schools in Propagating Physician Bias, 384 New Eng. J. Med. 872, 873–74 (2021). Funding for research into the imagined genetic causes of racial disparities in health outcomes vastly outstrips funding for research into social determinants of health or the physiological effects of stress and racism on people of color. Shawn Kneipp et al., Trends in Health Disparities, Health Inequity, and Social Determinants of Health Research, 67 Nursing Rsch. 231, 231 (2018). See also René Bowser, Racial Profiling in Health Care: An Institutional Analysis of Medical Treatment Disparities, 7 Mich. J. Race & L. 79, 114 (2001) (arguing that “physicians who focus on racism as opposed to cultural peculiarities or the genetic basis of disease are likely to be considered both as not ‘real scientists’ and as dangerous” and stating that producing research that explains racial disparities in health outcomes in terms of culture and genes, as opposed to structural racism and inherited disadvantage, “enhances the researcher’s status”). This funding disparity undoubtedly contributes to the perpetuation of the myth of biological race.
  16.  See Haley Bridger, Skin Tone and Pulse Oximetry: Racial Disparities in Care Tied to Differences in Pulse Oximeter Performance, Harv. Med. Sch. (July 14, 2022), https://hms.harvard.edu/news/skin-tone-pulse-oximetry [https://perma.cc/HZW8-YMAS].
  17.  See The Nat’l Acads. of Scis., Eng’g & Med., supra note 13, at 25 (observing that “[c]linical research is often performed in well-resourced tertiary care sites in large urban centers, and may have limited applicability to community sites, less well-resourced safety net settings, and rural settings”).
  18.  See id. at 31 (stating that the “[l]ack of representative studies on screening for cancer or cardiometabolic disease may lead to recommendations that fail to consider earlier ages or lower biomarker thresholds to start screening that might be warranted in some populations” and observing that “due to [a] lack of studies that report findings by race,” the guidelines for some screenings are universal, although there is some evidence that they should vary by race and age).
  19.  See Barbara A. Noah, Racial Disparities in the Delivery of Health Care, 35 San Diego L. Rev. 135, 152 (1998) (noting that “[b]efore the National Institutes of Health (NIH) issued a directive in 1990, investigators almost uniformly tested new chemical entities only on white male subjects”).
  20.  See Lundy Braun, Breathing Race into the Machine: The Surprising Career of the Spirometer from Plantation to Genetics, at xv (2014).
  21.  See Bridger, supra note 16 (describing a study that showed that pulse oximeters reported blood oxygen saturation levels for patients of color that were higher than what they actually were, leading these patients’ providers to give them supplemental oxygen at lower rates).
  22.  See, e.g., Alan F. Guttmacher & R. Gordon Douglas, Induction of Labor by Artificial Rupture of the Membranes, 21 Am. J. Obstetrics & Gynecology 485, 485 (1931) (establishing artificial rupture of the membranes as an evidence-based practice in obstetrics after studying the safety and efficacy of the procedure among patients cared for at a clinic affiliated with Johns Hopkins University).
  23.  See Screening for Colorectal Cancer: US Preventive Services Task Force Recommendation Statement, 325 JAMA 1965, 1970 (2021), https://jamanetwork.com/journals/jama/fullarticle/2779985 [https://perma.cc/TV68-6W75].
