Principle No. 1 of responsible AI contracts is “Don’t be scum.” There is nothing scummier than generating and distributing pornography deepfakes. Or contributing to the same.
This is the second in an ongoing series on Your Guide to Responsible AI Contracting,1 in which I will:
- lay out the key ethical and legal issues raised by generative AI,
- assess how they are addressed (if at all) by AI providers’ terms of service, privacy policies, and acceptable use policies, and
- provide guidance on how you as an AI provider, implementer, or end user can use “responsible AI” to drive your contract negotiations to secure a closer approximation to the protections you deserve, on principled grounds.
How well do AI providers’ standard contractual terms align with their “responsible AI” PR statements with respect to deepfake pornography? Let’s take a look.
I. Principle No. 1: Don’t be scum, cont’d
B. Rule 1(b) on deepfake pornography: My body and identity, my choice
There’s simply no argument for any right to:
- create deepfakes (AI generated video and/or audio capable of portraying someone doing something they did not do),
- distribute and pass them off as real, and
- disclaim liability for any harm suffered.
Certainly not without the victim's consent. And even more certainly not when it involves digitally removing her clothes and portraying her as committing sexual acts.
Even if “honestly presented” as fake, we should have the right to prevent our likeness from such “nudification.” But we simply do not, at least not under current law.
The most common targets are famous people, but anybody, of any age, can be the object of anyone's perversion. Not surprisingly, the victims are disproportionately young women.2
The law simply hasn’t caught up with the technology. Any discussion of “responsible AI” simply cannot start with the pretense: “It’s currently not illegal, so we’re good!” Or the stock position of manufacturers and service providers everywhere: “If our customers use our products or services in an unintended manner, no matter how predictably, that’s on them, not us!”
1. The “case” against anti-pornography deepfake laws
a. Claim 1: 1st Amendment concerns
i. Claim 1(a): Deepfake porn is protectable speech too
One argument seeks to broadly apply 1st Amendment protections to deepfake pornography, treating it as a form of "speech." This position has not prevailed even under strict scrutiny. Defendants have raised it against state revenge pornography laws in cases that went all the way to five state supreme courts, and all five rejected it.3
ii. Claim 1(b): Any watermarking requirement for generative AI has 1st Amendment implications
A more principled argument against deepfake laws centers on efforts to impose a watermarking requirement on all deepfakes. A law requiring creators to identify all generative AI output as such would logically help deter deepfakes. There is, however, more to this issue than meets the eye: technological limitations dovetail with free speech and government overreach concerns (see the sketch at the end of this subsection). I will discuss this issue more fully in three weeks, when this blog will focus on political campaign deepfakes.
For now, let’s just note that any watermarking requirement need not necessarily be part of any deepfake pornography law. We can pass laws punishing the distribution of the output without going down this internet rabbit hole.
Disassociating pornography deepfakes from political deepfakes would probably be best for advocates of each respectively. They give rise to fundamentally different issues.
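To make the "technological limitations" point concrete, below is a minimal sketch, in Python, of the simplest form a watermarking mandate could take: embedding a provenance tag in an image's metadata. The tag name and payload here are hypothetical, and real proposals (such as C2PA content credentials) are far more elaborate, but the sketch shows why critics worry that naive requirements are trivially defeated.

```python
# A minimal sketch of metadata-based "watermarking" of AI-generated images.
# The "ai_provenance" tag and its payload are hypothetical illustrations.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save a PNG with a metadata text chunk marking it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_provenance", f"generated-by={generator}")
    img.save(dst_path, pnginfo=meta)

def strip_tag(src_path: str, dst_path: str) -> None:
    """Defeating the tag is trivial: re-saving without the pnginfo argument
    silently drops every text chunk, leaving the pixels untouched."""
    Image.open(src_path).save(dst_path)
```

A metadata tag survives only so long as every downstream tool preserves it, which is one reason any watermarking mandate raises harder enforcement questions than first appears.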
b. Claim 2: We shouldn’t criminalize mere possession
Another argument concerns proportionality. How do we make the punishment fit the crime? And how do we guard against pearl-clutching individuals weaponizing the criminalization of deepfake pornography for ulterior purposes?
I share these concerns. And since it’s my blog, I’ll just say it. We should not criminalize the downloading and possession of deepfake pornography by itself.
The focus of any criminalization effort should be on the distribution of deepfake pornography, whether or not for sale. Criminalizing the generation of deepfake pornography without distribution, though ideal, might be a bridge too far. And criminalizing the mere possession of deepfake pornography would just be a bad idea. Sometimes it's better to target the channel, not the source.
The recently passed March 2022 federal law on revenge pornography takes this pragmatic approach. It imposes liability for the "disclosure" of the images, not for their mere creation.4
We do not want a “war on deepfake pornography” to go down the road of the “war on drugs.” But we shouldn’t allow such concerns to prevent us from criminalizing deepfake pornography entirely, either.
1. The "case" against anti-pornography deepfake laws, cont'd
c. Claim 3: Current laws are sufficient
Another argument is that current laws are sufficient as is to deter and punish pornography deepfakes. This position is simply wrong. Let’s investigate.
2. The call for anti-pornography deepfake laws
a. Deepfake pornography slips through the cracks of current criminal and civil law.
Under current law, can U.S. federal or state governments appropriately punish individuals for generating deepfake AI pornography? In many cases, the answer is no.5
Unless deepfake pornography is used for purposes of extortion or harassment, it is currently not a crime in most states. And only a handful of states have passed laws making pornography deepfakes an actual crime to date.6
Moreover, deepfake pornography evades the variety of mostly state civil laws that could apply.
i. Revenge pornography laws
Almost all states have a nonconsensual pornography law in some form. Depending on their wording, some may be applicable to deepfake pornography. But many are at least arguably limited to images that are actually taken in real life, in particular if they reference “private” images and do not specify that altered images are also covered by the law.7
In March 2022, Congress passed the first federal law on this subject, creating a new federal civil cause of action for victims to pursue against a person who "discloses" their "intimate visual depiction" without consent.8 While no federal court has yet interpreted this statute, it appears limited to "real" private images. And it does not make such disclosure a crime, either.
ii. Right of publicity
Right-of-publicity laws, which prevent the unauthorized commercial use of one's name or likeness, also vary from state to state. "Some states lack any explicit legal right, while entertainment industry hubs such as California and New York have clear statutory protections that have generated decades of case law."9
iii. Defamation and false light
Defamation and false light laws have highly subjective elements that must be stretched to apply to deepfake pornography. The victim must establish that the perpetrator actually presented the deepfake as truth. She must overcome several defenses, such as:
- “c’mon, everybody knows these things are fake,”
- “these are just manifestations of schoolboy fantasies,” and
- “I didn’t mean any harm.”
The victim must also establish that she suffered reputational or emotional distress harm, and quantify it.
*****
In sum, the current patchwork of applicable laws gives deepfake pornography defendants numerous and often subjective "outs." We need specific laws that directly prohibit the creation and distribution of deepfake pornography. The victim should only have to prove that the defendant created and distributed a pornographic deepfake without her consent. Past that, the only open question should be the punishment.
b. The recent development of state anti-pornography deepfake laws
A handful of state legislatures have passed anti-pornography deepfake laws, including California, Georgia, Virginia, and New York.10 As of June 2023, four other states had pending bills directed at deepfakes of any kind.11
c. We need a federal anti-pornography deepfake law
As laid out in a rather prescient 2019 article, federal criminalization of pornography deepfakes is necessary for multiple reasons including: “the punishment imposed and remedies provided should not depend on the state in which the victims or perpetrators reside” and “[a] federal criminal statute would ensure that victims are protected in states that refuse to act or are slow to do so.”12
Any federal anti-pornography deepfake law should provide a private right of action.13 Nothing could be more personal.
I.B. Rule 1(b): My body and identity, my choice, cont'd
3. AI businesses' cooperation is necessary to combat deepfakes generated and distributed by their users
The cooperation of AI providers (both LLM providers and AI Agent providers)14 and implementers will be essential to mitigating pornography deepfakes, much as search engine providers have cooperated with law enforcement for years in response to search warrants, data privacy breaches, and the like.
a. Arguments in support of extending Sect. 230 immunity to AI providers
Should AI providers ultimately secure the same immunity from liability for the third-party postings of end users that internet service providers (ISPs) secured under Section 230 of the Communications Decency Act of 1996,15 this would facilitate such cooperation. Big tech and their lobbyists are pushing for such immunities to be extended to their AI. While this unquestionably raises "fox guarding the henhouse" concerns, there is a case to be made for this approach. Relying on AI provider "voluntary commitments" would further mitigate big-government concerns.
b. Arguments against extending Sect. 230 immunity to AI providers
Having said this, AI providers are also fundamentally different from ISPs. AI providers are not simply passive hosts; their AI platforms make possible the very generation of the incriminating deepfakes in question.
And left entirely to voluntary commitments, there will always be some AI providers who do not sign on. The market for pornography is way too big to think otherwise.
*****
What balance will our federal and state legislatures come to on this issue? Will it ultimately be on principled grounds, or as part of a global “race to the bottom”? With countries competing to throw caution–and any aspirations of achieving “responsible AI”–to the wind and grab the lead in AI?
[Image: "The international race to the bottom for AI. Which one is US?" Generated with Google Duet AI.]
4. How current AI provider customer policies address pornography deepfakes
Once again (much like with website paywall bypassing): not so much, and not at all directly. You can hardly blame them, as this is a third-rail issue. I have trouble conceiving of terms of use that would meaningfully address this issue without assuming boundless liability. Having said this, let's take a quick look at how this issue plays out in the existing contractual regimes today.
a. Acceptable use policies
In general, AI provider policy regimes address deepfake pornography only through their acceptable-use policies. They all generally say the same things:
- it is not acceptable for users to create inappropriate content using the AI provider’s platforms, and
- we will remove your access if you repeatedly do so.16
Why address only via acceptable-use policies? Because they impose no meaningful obligations on the AI provider.
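For the technically curious, here is an illustrative sketch of what enforcing such an acceptable-use policy might look like. The moderation call uses OpenAI's published moderations endpoint; the three-strike threshold and in-memory tracking are hypothetical simplifications, not any provider's actual mechanism.

```python
# Illustrative sketch of acceptable-use enforcement: screen each prompt,
# record violations per user, and cut off repeat offenders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
strikes: Counter[str] = Counter()
MAX_STRIKES = 3  # hypothetical cutoff for "repeatedly"

def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt may proceed; record a strike otherwise."""
    if strikes[user_id] >= MAX_STRIKES:
        return False  # access already removed for repeated violations
    result = client.moderations.create(input=prompt)
    if result.results[0].flagged:
        strikes[user_id] += 1
        return False
    return True
```

Note that everything in this sketch runs on the customer's side; nothing in it obligates the LLM provider itself. That asymmetry is precisely the point of the next subsection.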
b. Terms of service and indemnification
Meanwhile, terms of service like OpenAI's commonly require AI Agent providers, AI implementers, and end users to "indemnify and hold harmless" the LLM provider "from and against any costs, losses, liabilities, and expenses (including attorneys' fees) from third party claims arising or relating to the use" of the generative AI platform.17 And OpenAI and other LLM providers typically disclaim any warranties that they don't expressly assume. Rest assured, deepfake pornography ain't one of them.
The only indemnification LLM providers themselves offer against third-party claims, if any, covers IP. But establishing liability for generative AI under copyright law in general, let alone for deepfakes, is an uphill battle, perhaps of Everest proportions.18 And it's debatable whether the right of publicity is an intellectual property right to begin with. IP indemnification simply doesn't reach this issue of third-party liability for deepfake pornography.
*****
In sum, LLM providers declare it unacceptable for users to generate deepfake pornography. But in parallel, they effectively insulate themselves from all liability for it, pushing any third-party claims for pornography deepfakes onto their customers.
5. Are AI provider "responsible AI" policy statements legally binding?
The simple reality is that policy statements are aspirational and generally not contractually binding. For an AI Agent provider or AI implementer, citing an LLM provider's deepfake policy against it in litigation will be a tough row to hoe, regardless of how inconsistent that policy is with the provider's indemnification terms.
6. How AI implementers or end-users should negotiate related contract terms
a. Chipping away at one-way indemnification terms.
AI Agent providers should certainly try to leverage any such favorable "responsible AI" policy statements during their contract negotiations with their LLM providers. The same goes for AI implementers with respect to their AI providers. But don't expect to win more than marginal concessions here, at best.
The home run would be the removal of the commonplace one-sided indemnification terms in the AI provider's terms of use. This would also be the most principled approach: the AI provider and the AI implementer would then share any liability for third-party claims. Which is precisely why AI providers will never agree to it.
The logical fallback would be to attempt to carve out deepfake pornography from the broad pro-AI provider indemnification terms. If end-user output is generated primarily from the technological capabilities provided by the AI provider’s platform, then why should the provider get to absolve itself of all liabilities? Having said this, unless you yourself have significant negotiating power, you shouldn’t expect to get much here either.
b. Don’t serve as your AI provider’s de facto third-party liability coverage
Nonetheless, take what you can get on these indemnification terms. The stakes are too high to roll over without opposition. And every little bit can count in any third-party litigation you have to defend against down the road.
Absent a legislative "fix" that appropriately accounts for the interests of AI implementers, you should prepare for a lot of third-party litigation in the years ahead.
c. Your responsible AI policy will likely be your best defense
Your best approach with respect to mitigating third-party liability for your AI implementation will in all likelihood be to look within. Develop a policy and procedures for your implementation of responsible AI. Publish it. Update it as general industry standards and best practices develop in the coming years. And comply with it.
This won't prevent third parties from suing you in the first place (or from suing your AI providers, who will pass the buck to you). But it will provide your best defense if and when they do.
© 2024 Ko IP & AI Law PLLC
*Note: Nothing in this blog constitutes legal advice or the formation of any attorney-client relationship. See Disclaimers.
II. Where have we come from and where are we going?
We will address the related issue of political deepfakes in three weeks or so. Our next stop in this ongoing series' progression from clear to complex issues for responsible AI contracting is the not-at-all-sexy-but-oh-so-important topic of data security.
Our first two articles in this series have addressed responsibilities and liabilities that AI providers and implementers might have toward two types of third parties with whom they have no contractual relationship: 1.) website hosts (whose paywalled articles and images your AI platforms may copy outright), and 2.) victims of deepfake pornography (created by your AI platforms).
Next week's article will focus for the first time on a duty you owe to your direct customers: specifically, a duty to the data they provide to you in connection with your AI platform service offering.19
Come back next Monday for the next article in this series:
Your guide to responsible AI contracts. Principle No. 1: Don’t be scum, cont’d
Part 3: Your duty to your AI customers and their data
- Last week’s article presented Principle No. 1: Don’t be scum, Rule 1(a): Respect internet paywalls. AI providers will apparently respect them to the extent that they won’t directly plagiarize content that is behind your paywall. But will they commit to not training their AI models using webcrawlers that hop your paywalls? Not so much. ↩︎
- Sophie Compton, What are Deepfakes? More and More Women are Facing the Scary Reality of Deepfakes, Vogue (Mar. 16, 2021), available here ("According to cybersecurity firm Sensity, deepfakes are growing exponentially, doubling every six months. Of the 85,000 circulating online, 90 percent depict non-consensual porn featuring women.") ↩︎
- For an excellent overview of these issues, see Federal Civil Action for Disclosure of Intimate Images: Free Speech Considerations, Congressional Research Service (Apr. 1, 2022), available here. ↩︎
- The Violence Against Women Act Reauthorization Act of 2022, H.R. 2471, Sec. 1309 (Civil Action Relating to Disclosure of Intimate Images), at Sec. 1309(b)(1)(A), available here. ↩︎
- See Karen Hao, Deepfake porn is ruining women's lives. Now the law may finally ban it., MIT Technology Review (Feb. 12, 2021), available here ("Today there are few legal options for victims of nonconsensual deepfake porn."). ↩︎
- Isaiah Poritz, States are Rushing to Regulate Deepfakes as AI Goes Mainstream, Bloomberg Law (June 20, 2023), available here (listing only Hawaii, Minnesota, Texas, Virginia, and Wyoming as imposing any criminal penalties on deepfake pornography). ↩︎
- For a compilation of state revenge porn laws, click here. ↩︎
- The Violence Against Women Act Reauthorization Act of 2022, H.R. 2471, Sec. 1309 (Civil Action Relating to Disclosure of Intimate Images), available here. ↩︎
- Isaiah Poritz, AI Deepfakes Bill Pushes Publicity Rights, Spurs Speech Concerns, Bloomberg Law (Oct. 17, 2023), available here. ↩︎
- Caroline Quirk, The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology, Princeton Legal J. (June 19, 2023), available here. ↩︎
- Isaiah Poritz, States are Rushing to Regulate Deepfakes as AI Goes Mainstream, Bloomberg Law (June 20, 2023), available here (listing Illinois, Louisiana, Massachusetts, and New Jersey as having pending bills regarding deepfakes). ↩︎
- See Rebecca A. Delfino, Pornographic Deepfakes: The Case for Federal Criminalization of Revenge Porn’s Next Tragic Act, 88 Fordham L. Rev. 887 (Dec. 2019), available here. Other reasons presented are: 1.) “a pornographic deepfake, like any other internet crime, is by its nature an offense that is beyond the jurisdictional limits of any single state”; 2.) “state laws are constrained by section 230 of the [Communications Decency Act of 1996], which impedes state actions against website operators who host nonconsensual pornography”; 3.) “criminalizing pornographic deepfakes as a federal crime brings to bear the greater resources of the federal government, including the prosecutorial power of the Department of Justice and the investigative expertise of the FBI”; and 4.) “criminalizing deepfakes at the federal level … demonstrates that the problem is of national concern and signals the seriousness of the damage to the victims.” ↩︎
- For discussion of when private rights of action are or should be granted by federal law, see next week's blog on Your Duty to Your AI Customers and Their Data. ↩︎
- [updated Nov. 2024] For definitions of “AI providers” (including “LLM providers” and “AI-Agent providers”) and “AI implementers” (including those incorporating AI into one’s products and services and those implementing AI into one’s internal business processes) as used here and throughout this blog, see 11/13/23 blog article (“…and without implementing AI successfully, you will be replaced“). This discussion is confusing even with clear definitions and impenetrable without them. ↩︎
- I will do a deep dive into Sect. 230 and its applicability to our AI context in a future article. My view is that Congress got it right back in 1996 with its passage, and the courts have gotten it right in their interpretation. Sect. 230 has been instrumental to the success of the Internet and the U.S. and global economy ever since. The issues, however, are even more complex and far-reaching today in this generative AI context. Let's try to get informed on them together, OK? ↩︎
- E.g. OpenAI’s Usage Policies (updated Mar. 23, 2023), available here; Google’s Generative AI Prohibited Use Policy (last modified March 14, 2023), available here. ↩︎
- OpenAI’s Terms of Use, updated Nov. 14, 2023, available here (see “Disclaimer of Warranties”). ↩︎
- I will explore AI provider liability for copyright infringement for using copyrighted information to train its generative AI models in two weeks or so. ↩︎
- Should there be any additional duties of AI providers and implementers toward third-party personal identifying information (PII) that their AI may collect when trawling the internet? I will explore this question later on. ↩︎