With the rapid emergence of high-quality generative artificial intelligence (“AI”), some have advocated for mandatory disclosure when the technology is used to generate new text, images, or video. But the precise harms posed by nontransparent uses of generative AI have not been fully explored. While the use of the technology to produce material that masquerades as factual (“deepfakes”) is clearly deceptive, this Article focuses on a more ambiguous area: the consumer’s interest in knowing whether works of art or entertainment were created using generative AI.
In the markets for creative content—fine art, books, movies, television, music, and the like—producers have several financial reasons to hide the role of generative AI in a work’s creation. Copyright law is partially responsible. The Copyright Office and courts have concluded that only human-authored works are copyrightable, meaning much AI-generated content falls directly into the public domain. Producers thus have an incentive to conceal the role of generative AI in a work’s creation because disclosure could jeopardize their ability to secure copyright protection and monetize the work.
Whether and why this obfuscation harms consumers is a different matter. The law has never required disclosure of the precise ways a work is created; indeed, failing to publicly disclose the use of a ghostwriter or other creative assistance is not actionable. But AI authorship is different for several reasons. There is growing evidence that consumers have strong ethical and aesthetic preferences for human-created works and understand the failure to disclose AI authorship as deceptive. Moreover, hidden AI authorship is normatively problematic from the perspective of various theories of artistic value. Works that masquerade as human-made destabilize art’s ability to encourage self-definition, empathy, and democratic engagement, turning all creative works into exclusively entertainment-focused commodities.
This Article also investigates ways to facilitate disclosure of the use of generative AI in creative works. Industry actors could be motivated to self-regulate, adopting a provenance-tracking or certification scheme. And Federal Trade Commission (“FTC”) enforcement could provide some additional checks on the misleading use of AI in a work’s creation. Intellectual property law could also help incentivize disclosure. In particular, doctrines designed to prevent the overclaiming of material in the public domain—such as copyright misuse—could be used to raise the financial stakes of failing to disclose the role of AI in a work’s creation.
Introduction
When Marvel Studios’ big-budget series, Secret Invasion, premiered in June 2023, most viewers did not give a second thought to the show’s opening credits, which featured angular alien faces, a toothless Samuel L. Jackson, and swirling green cityscapes. Shortly after the show premiered, however, director and executive producer Ali Selim made an unusual admission: the credit sequence’s visuals had been generated using artificial intelligence (“AI”).1 Public outcry was swift. Many criticized the use of generative AI by a wealthy studio as “unethical,” especially in light of Hollywood labor disputes driven by the possible effects of generative AI on acting and writing jobs.2 Others argued that Marvel’s use of AI was lazy, yielding images devoid of artistic merit.3 The criticism ultimately prompted Marvel to walk back its admission, explaining that “AI is just one tool among the array of tool sets our artists used. No artists’ jobs were replaced by incorporating these new tools; instead, they complemented and assisted our creative teams.”4
With the dramatic arrival of high-quality generative AI systems,5 scholars and policy-makers have begun debating the potential harms posed by the technology’s many possible applications. Much of this debate has centered on generative AI’s ability to create materials that masquerade as factual—in particular, false photorealistic images and audiovisual content, commonly known as “deepfakes”—which can harm individual reputations or further misinformation that undermines public trust.6 But the Secret Invasion controversy illustrates an underexplored dimension of the lack of transparency in many uses of generative AI: Are consumers also deceived by nonfactual, AI-generated creative works that masquerade as human-made? Put another way, does hidden AI “authorship”—that is, the undisclosed use of AI to produce expression that we generally consider to be within the purview of human creators7—pose harm to the public?
This Article provides the first comprehensive treatment of this question, as well as the problem of hidden AI authorship more generally. In so doing, this Article makes three contributions. The first contribution is descriptive: the Article examines how and why producers of commercial creative works—visual art, books, television, music, films, and more—might choose to hide the role of generative AI in the production of new content. Though not immediately obvious, this phenomenon is deeply intertwined with intellectual property law, and copyright law in particular. As copyright decision-makers increasingly find that AI-generated works are largely unprotectable, producers have an incentive to hide their use of the technology. The second contribution is normative: the Article argues that the hiding of AI authorship indeed poses harm to consumers, albeit a less straightforward form of harm than the clear problems posed by deceptive deepfakes. This harm must be understood by examining the strong evidence that many consumers prefer human-created works of art and entertainment, as well as the broader social significance of human authorship in art’s ability to foster self-definition, empathy, and political engagement. The third contribution is prescriptive: the Article identifies various regulatory options, including existing consumer protection and intellectual property law regimes, that could be used to encourage greater transparency among the sellers of AI-generated content, enabling better informed consumer choice.
How might generative AI come to be frequently, but nontransparently, used to create new works of art and entertainment? As Part I explores, this problem is already emerging. Generative AI is quickly being incorporated into content creation, leading large content producers to encounter a dilemma like the one faced by Marvel: whether or not to disclose the role of the technology in a work’s creation. As the creators of Secret Invasion discovered, many consumers seem to bristle at the use of the technology. Indeed, recent empirical research suggests that many consumers consider “AI-generated” works inferior, even if they cannot tell from the work itself that AI had a role in its creation.8 This yields a clear financial incentive to hide the AI provenance of a work from the public.
Current trends in copyright law compound producers’ incentives to hide their use of generative AI. It is black-letter law that a work created by a nonhuman is ineligible for copyright protection.9 Courts and the Copyright Office10 have emphasized the importance of “elements of human creativity” when assessing whether an AI-generated work can be registered, such as human-made decisions about how to organize and structure AI-generated material in a final work.11 For content producers, this means that highlighting the role of generative AI in a work’s production can compromise efforts to achieve copyright registration.12 Failure to obtain protection essentially means that new content immediately falls into the public domain and cannot be monetized. Thus, if trends in the law continue in their likely direction, content producers will increasingly try to hide the role of AI in new creative works to ensure such works remain protectable.13
Should this obfuscation be considered a problem? After all, if a consumer enjoys a work like the Secret Invasion credit images, does it matter whether they know that work was produced using generative AI? Part II addresses this question, arguing that consumers seem to have a range of “process preferences”14—that is, preferences that relate to how a work was created, rather than just the work itself—that implicate the use of generative AI. One issue is ethical: consumers may prefer human-created works because of ethical concerns over AI supplanting human labor.15 Another issue is aesthetic: as showcased in old debates regarding the use of “mechanical reproduction” in art, human creation can confer an element of authenticity on a creative work that a machine-generated work lacks.16 Finally, consumers have deep-seated connections to specific artists, born out of a sense of fandom, which are undermined in the specific case of AI-generated works that mimic an artist’s voice or style.17 Considering these preferences, obscuring the role of generative AI in a work’s creation may prevent a consumer from making an informed decision about whether to consume it.
But just because consumers have these preferences does not mean the law must respect them.18 Part II thus also provides a separate normative case for why consumers who care about human authorship should be taken seriously. As aesthetic and ethical theorists have argued, authorship and readership19 are fundamentally social activities; through art, the public can engage in ongoing “dialogic” processes of self-definition, ethical development, and political engagement. Novelist and journalist Jay Caspian Kang has recently put it more plainly: “[T]he reason we read books and listen to songs and look at paintings is to see the self in another self, or even to just see what other people are capable of creating.”20 Thus, even if, as some have argued, the author’s “intent” lacks significance,21 the author’s and reader’s basic shared humanness can be essential to allowing art to play a meaningful social and ethical function. The undisclosed use of generative AI in authoring a work22 fundamentally destabilizes this dialogue between author and reader, robbing art of its social value and turning it into an exclusively entertainment-focused commodity.23 The argument here is not that AI authorship is inherently immoral; indeed, AI might yield a range of works that consumers enjoy. Rather, it is that such use must be disclosed in order to allow consumers to choose whether and on what terms they wish to engage with a work.
The obvious solution to the problem of undisclosed AI authorship is to provide consumers with information about a work’s provenance, so that they can make an informed choice. Part III explores various regulatory options for fostering transparency, examining their benefits and shortcomings. An affirmative disclosure regime could come about through industry self-regulation; if it is true that some consumers prefer human-made works, the market would logically step in to provide this information.24 A legislative transparency mandate would also—and more thoroughly—accomplish this task.25 In lieu of a comprehensive affirmative disclosure regime, the FTC could also target specific instances in which producers deceptively omit information about a work’s origins so as to mislead consumers.26
An additional, and perhaps more politically feasible,27 set of options is offered by existing intellectual property law. Such an approach would look to IP’s existing doctrines as information-forcing tools, making it costlier for producers to hide their use of generative AI. In particular, litigants could take advantage of the often-ignored doctrine of copyright misuse to police those who assert that an entire work was human-created, when, in fact, it was a product of AI. Such assertions should fall within one of the categories of copyright misuse: the overclaiming of material that is in the public domain.28 Using the copyright misuse doctrine in litigation would raise the financial risks of surreptitiously using AI-generated materials, incentivizing rightsholders to disclose (and disclaim) this content.29 Trademark law and the right of publicity could also play a role in raising the financial stakes of nontransparency. For the specific subset of AI-generated works that mimic a human artist’s voice or likeness,30 trademark and the right of publicity provide causes of action that could subject producers and distributors to damages.31 In combination, these various tools could ideally achieve a world in which information about most works’ provenance is readily accessible to consumers.
1. Zosha Millman, Yes, Secret Invasion’s Opening Credits Scene Is AI-Made—Here’s Why, Polygon (June 22, 2023, 7:16 PM), https://www.polygon.com/23767640/ai-mcu-secret-invasion-opening-credits [https://perma.cc/5WPN-93TY].
2. See Angela Watercutter, Marvel’s Secret Invasion AI Scandal Is Strangely Hopeful, Wired (June 23, 2023, 9:00 AM), https://www.wired.com/story/marvel-secret-invasion-artificial-intelligence/.
3. See Dani Di Placido, Marvel’s AI-Generated ‘Secret Invasion’ Sequence Sparks Backlash, Forbes (June 23, 2023, 11:47 AM), https://www.forbes.com/sites/danidiplacido/2023/06/21/the-big-backlash-against-marvels-secret-invasion-explained/?sh=2ef04c17344e.
4. Carolyn Giardina, ‘Secret Invasion’ Opening Using AI Cost “No Artists’ Jobs,” Says Studio That Made It, Hollywood Rep. (June 21, 2023, 8:12 PM), https://www.hollywoodreporter.com/tv/tv-news/secret-invasion-ai-opening-1235521299/ [https://perma.cc/3DTD-U46W].
5. For a full discussion of what I mean by “generative AI,” see infra Section I.A.
6. See generally Lisa Macpherson, Lies, Damn Lies, and Generative Artificial Intelligence: How GAI Automates Disinformation and What We Should Do About It, Pub. Knowledge (Aug. 7, 2023), https://publicknowledge.org/lies-damn-lies-and-generative-artificial-intelligence-how-gai-automates-disinformation-and-what-we-should-do-about-it/ [https://perma.cc/9BNL-QN9S] (discussing the potential harm to the integrity of news through the use of generative AI); Adam Satariano & Paul Mozur, The People Onscreen Are Fake. The Disinformation Is Real, N.Y. Times (Feb. 7, 2023), https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html (describing how deepfakes make it difficult to separate reality from forgeries and enable the spread of propaganda by foreign governments); Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation: A Primer, RAND Corp. (July 6, 2022), https://www.rand.org/pubs/perspectives/PEA1043-1.html [https://perma.cc/HD55-62L5] (observing that, in an increasingly polarized and fact-resistant political climate, deepfakes pose a potent threat). See also Council Regulation 2024/1689, art. 134, 2024 O.J. (L) 1, 34 (EU) (requiring creators of deepfakes to disclose use of AI).
7. As discussed below, AI cannot be an author for legal purposes, so when I say “AI authorship,” I refer to situations in which AI has accomplished the creative work that we generally associate with human authorship. See infra Subsections I.B.2, II.C.2.
8. See infra Subsection I.B.1; see, e.g., Lucas Bellaiche et al., Humans Versus AI: Whether and Why We Prefer Human-Created Compared to AI-Created Artwork, 8 Cognitive Rsch., 2023, at 1, 3 (observing that, across multiple artistic mediums, study participants preferred art labelled “human-created” over art labelled “AI-created”).
9. See, e.g., Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018) (monkey that took “selfie” photos could not claim copyright for lack of standing); see also infra Subsection I.B.2 (discussing limits on copyright protections for works not produced by human beings).
10. The Copyright Office is a regulatory body housed in the Library of Congress that is responsible for registering new works. See infra Subsection I.B.2.
11. See, e.g., Letter from Robert J. Kasunic, U.S. Copyright Off. Rev. Bd., to Van Lindberg, Taylor Eng. Duma LLP, Zarya of the Dawn (Registration # VAu001480196), at 5–8 (Feb. 21, 2023), https://www.copyright.gov/docs/zarya-of-the-dawn.pdf [https://perma.cc/X54U-MQ4F] (finding that AI-generated images in a comic book were unprotectable but the comic book could still be thinly protected as a compilation).
12. See, e.g., Thaler v. Perlmutter, 687 F. Supp. 3d 140, 149–50 (D.D.C. 2023) (upholding denial of copyright registration where “the record designed by plaintiff from the outset of his application for copyright registration . . . [showed] the absence of any human involvement in the creation of the work,” precluding copyrightability for AI-generated image); see also infra Subsection I.B.2 (describing instances in which acknowledging the role of AI in a work’s creation foreclosed copyright protection).
13. Disclosure to the Copyright Office and public disclosure are interrelated. The Copyright Office has begun listing registrations that explicitly state whether a work is a product of generative AI (noting that the AI-produced elements are unprotectable). These registrations are easily publicly searchable. See infra Section I.C.
14. See Douglas A. Kysar, Preferences for Processes: The Process/Product Distinction and the Regulation of Consumer Choice, 118 Harv. L. Rev. 525, 529 (2004) (examining how “consumer preferences may be heavily influenced by information regarding the manner in which goods are produced”).
15. See infra Subsection II.A.1.
16. See infra Subsection II.A.2.
17. See infra Subsection II.A.3.
18. See infra Part II (examining difficulties of determining whether consumer preferences should give rise to regulatory action).
19. By “readership,” I mean the experience of engaging with a work of art or entertainment. See infra Section II.B.
20. Jay Caspian Kang, What’s the Point of Reading Writing by Humans?, New Yorker (Mar. 31, 2023), https://www.newyorker.com/news/our-columnists/whats-the-point-of-reading-writing-by-humans; see also infra note 199 (surveying other writers’ similar perspectives).
21. See infra Subsection II.B.1 (noting that postmodern theorists have questioned the importance of the individual author). See generally Roland Barthes, The Death of the Author, in Image, Music, Text 142 (Stephen Heath trans., 1977) (criticizing literary critics’ preoccupation with individual authors and instead emphasizing the importance of readers as recipients and interpreters of literary texts).
22. Importantly, I distinguish between truly “authoring” a work—that is, generating something that is at the creative heart of the work—and using AI in a merely assistive role. Copyright’s doctrinal distinction between these two concepts corresponds well to this normative distinction. See infra Subsection II.C.2. I also explain why the use of a human ghostwriter does not pose the same problems as undisclosed AI authorship. See infra Subsection II.C.1.
23. See infra Subsection II.B.2 (exploring how lack of knowledge regarding a work’s provenance forces consumers to exclusively engage with works of art on market-based, rather than social, terms).
24. See infra Subsection III.A.1 (examining private ordering solutions, such as provenance tracking and a certification regime).
25. See infra Subsection III.A.2.
26. See infra Section III.B.
27. See infra Section III.C (exploring barriers to legislation and FTC enforcement).
28. See infra Subsection III.C.1; see also infra Subsection I.B.2 (noting that AI-generated material is inherently in the public domain due to lack of authorship, rendering many AI-derived works unprotectable or only thinly protectable).
29. See infra Subsection III.C.1. In particular, a copyright misuse finding prevents a rightsholder from enforcing even legally protectable aspects of their work, rendering the work completely uncopyrightable and essentially valueless. See infra Subsection III.C.1.
30. See infra Subsection II.A.3 (discussing examples such as “fake Drake”).
31. See infra Subsection III.C.2.