[updated Nov. 2024]
Hyperbole? No, at least not for many industries. Businesses and individuals who figure out how to implement AI successfully will operate far more efficiently than those who don’t. And the already starving artist faces their greatest existential crisis, with seemingly nothing to do but protest. Employee displacement by generative AI, which generates high-quality text, images, and other content based on the data it was trained on, will be perhaps the greatest societal issue of our times (and we’ve not exactly been short of issues…).
A successful AI implementation cannot be evaluated solely on technical grounds. Navigating the legal and regulatory minefield will be even more important in AI than it has been historically for other technologies. This will be comparable to the “IP tax” paid, in particular in high tech, as a cost of doing business.1 But in addition to repeatedly having to defend against infringement claims by third-party IP owners, all businesses using AI, including:
- “AI providers,” including “AI-as-a-Service (AIaaS) providers,” which include both:
- “Large language model (LLM) providers” like OpenAI, whose ChatGPT enables customers to access OpenAI’s LLMs, and
- “AI Agent providers,” which develop and provide AI Agents, often built on LLMs, that enable everything from dynamic customer service bots to complex research assistants and intelligent automation systems, and
- “AI implementers,” which use AI, including both:
- businesses that implement AI services into their own products or services that they provide; and
- businesses that implement such services in their own internal business processes, e.g., HR / employment …2
…will also have to defend against claims or charges brought on several additional fronts, including:
- customers and third-parties who are harmed, directly or indirectly, by the AI,
- employees displaced by AI, and
- government regulators.
Picking up on last week’s blog (“Without IP, you are replaceable….”), this article first provides a comprehensive overview of the intellectual property (IP) issues that arise with AI. The article then presents a framework for analyzing the legal risks to come for AI providers and implementers.3
I. Intellectual Property Issues in AI
A. IP infringement
1. Copyright infringement: creative works
Imitation is the sincerest form of flattery? Not so much when it’s generative AI, which trains on voluminous data and creates new works. It inevitably plagiarizes excerpts of books and articles and copies elements of art, photography, and music compositions with impunity.
From an artist’s perspective, not only are your creative works being stolen, but your livelihood is jeopardized by AI. The recent Hollywood writers’ and actors’ strikes reflect nothing less; click here.
2. Copyright infringement: software coding
There’s a specific wrinkle for copyright issues when generative AI creates software code. The software community has a unique and proud tradition of crowdsourcing for open source software; click here. Contrary to popular misunderstanding, open source software is not by definition “free.” Instead, it can be, and often is, incorporated into closed (i.e., proprietary) software for commercial sale.
But one particular type of open source software, cheekily named “copyleft” software, is an exception. Copyleft licensing terms are described as “viral,” because they require all derivative works incorporating copyleft code to also be released under a copyleft license.
This gives rise to a possible scenario where generative AI incorporates excerpts of copyleft open source software as part of its plagiarizing process only to have the software that it generates rendered unprotectable because of it. While most commentators do not believe that a court would ultimately impose such a harsh result should a dispute arise, it is a position available to any litigant who would benefit from it. Software businesses must take note of this risk and take steps to mitigate against it.
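For software businesses, one basic mitigation step is to screen AI-generated code for tell-tale copyleft license text before merging it into a proprietary codebase. The sketch below is a minimal, hypothetical illustration in Python (the directory name and the list of markers are assumptions for the example); real license-compliance programs go much further, including snippet matching against known open source repositories.

```python
# Hypothetical sketch: flag AI-generated source files containing strings
# commonly associated with copyleft licenses (GPL / AGPL / LGPL) so they can
# be reviewed before shipping in a proprietary product. Illustrative only.
from pathlib import Path

COPYLEFT_MARKERS = [
    "GNU General Public License",
    "GNU Affero General Public License",
    "GNU Lesser General Public License",
    "SPDX-License-Identifier: GPL",
    "SPDX-License-Identifier: AGPL",
    "SPDX-License-Identifier: LGPL",
]

def flag_copyleft_markers(root: str) -> list[tuple[str, str]]:
    """Return (file, marker) pairs for each source file containing a marker."""
    hits = []
    for path in Path(root).rglob("*.py"):  # extend to other extensions as needed
        text = path.read_text(errors="ignore")
        for marker in COPYLEFT_MARKERS:
            if marker in text:
                hits.append((str(path), marker))
    return hits

if __name__ == "__main__":
    # "generated_code" is a placeholder directory of AI-generated files.
    for file, marker in flag_copyleft_markers("generated_code"):
        print(f"Review needed: {file} contains '{marker}'")
```

A hit from a screen like this is not a legal conclusion; it is a flag that the generated code warrants review by counsel (and possibly replacement) before it ships.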
[Image: the actual report of my alarmingly cheery AI-generated headline analyzer….]
Self-schedule a free 20-min. video or phone consult with Jim W. Ko of Ko IP and AI Law PLLC here.
3. Right of publicity misappropriation
There’s presumably a special place in hell for whoever knocked off Tom Hanks, right (click here)? And yet, the internet goes far lower in misappropriating the images and likenesses of famous and everyday people alike.
Any generative AI’s creation of a human image is trained on the images of one or more people. Virtually none of those people have granted permission for the use of their images.
There is no federal law in place granting any “right of publicity” preventing the unauthorized commercial use of an individual’s name, likeness, or other recognizable aspects of one’s persona. The right of publicity is but a patchwork of state statutes and common law.
Every state has its own flavor of right of publicity law, with some states’ laws being more established than others. My home state of Arizona, for example, has two statutes recognizing a right of publicity, but directed only at soldiers. In 2020, an Arizona superior court held this reflected a legislative intent to deny a right of publicity for civilians. This was overturned on appeal, with the court holding that Arizona always has recognized, and continues to recognize, a common law right of publicity.4
A typical AI provider’s business model is to sell software-as-a-service (SaaS) subscriptions for the use of its platform over the cloud. The provider is thus subject to both federal law and the individual laws of every state.
B. Protection of IP rights by AI providers
The global wave of legislation on AI has focused on mitigating against its potential negative effects on society. Little to date has been directed specifically toward defining the IP rights of AI providers.5
1. Authorship / inventorship issues
Will the law recognize generative AI as capable of creating original works or inventions for IP ownership purposes? And will individuals with ownership rights over such output be granted copyright or patent rights?
Some argue that we should treat AI like a tool, and that creators should be able to use it like any other tool to create original works or inventions and secure IP rights in them.
The U.S. federal courts appear to have set an outer boundary to this position over the past few years. Two courts have held that if there is no human hand in any part of the generative AI’s process, then neither copyright nor patent rights are available to the AI’s owner.6 But the question of whether creative works or inventions made under the direction of human beings, with only some degree of non-human assistance, should be eligible for copyright or patent protection remains wide open.
2. Trade secret protections
Many AI companies rely primarily, if not exclusively, on trade secret protection for their innovations. As noted in my blog last week, AI seems like a great candidate for IP protection by trade secrets.
But President Biden’s Oct. 2022 Blueprint for an AI Bill of Rights and Oct. 2023 Executive Order on AI make clear our current administration’s intent to make AI a heavily regulated industry. The EU’s proposed AI Act (June 2023) and China’s generative AI measures (Aug. 2023) do the same. Providers and implementers may need to meet future regulatory standards establishing the accuracy and lack of “algorithmic discrimination” of their AI; click here.
When an AI company’s invaluable trade secret rights collide with the government’s interests in protecting its citizens in this highly disruptive area, who do you think will win? And who and what should we be rooting for? Let’s play this out in next week’s blog….
II. Legal Risks for Businesses Using AI
The business use of AI gives rise to various types of legal liabilities, including:
- IP infringement,
- “algorithmic discrimination,”
- data privacy violations, and
- fraudulent and negligent misrepresentation.
In the U.S., your business will be subject to greater regulatory scrutiny if it involves a “sensitive domain.”7 These include:
- hiring,
- financial systems,
- healthcare,
- housing, and
- education.
Your primary defense against an AI claim will be fundamentally an issue of compliance, with:
- existing principles of IP and technology law,
- existing applicable regulatory regimes, and most importantly
- the AI-specific legal and regulatory regimes that various national and state governments will develop in the coming months and years.
President Biden followed up his October 2022 Blueprint for an AI Bill of Rights with his October 2023 Executive Order on AI.
A. AI use in your hiring, promotion, and termination processes
All businesses implementing AI into their hiring, promotion, and termination processes will need a defensible AI implementation. The employer and/or its AI provider will be liable for any “algorithmic discrimination” that harms an affected claimant. If an aggrieved individual files an employment discrimination or unlawful termination suit and the AI played any part in the process, you will need to defend it.
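What counts as “defensible” will ultimately be shaped by the courts and forthcoming regulations, but one long-standing reference point in U.S. employment law is the “four-fifths rule” used to screen selection procedures for adverse impact. The sketch below is a simplified, hypothetical illustration of that single metric (the group labels and numbers are made up); it is no substitute for a proper statistical and legal bias audit.

```python
# Hypothetical sketch: a simplified "four-fifths rule" screen on the outcomes
# of an AI-assisted hiring tool. For each group, compute the selection rate
# (selected / total applicants) and flag any group whose rate falls below 80%
# of the highest group's rate. Real bias audits involve much more than this
# single heuristic (statistical significance, intersectionality, etc.).
from collections import Counter

def four_fifths_screen(outcomes: list[tuple[str, bool]]) -> dict[str, dict]:
    """outcomes: (group, was_selected) pairs, e.g. ("Group A", True)."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1

    rates = {group: selected[group] / totals[group] for group in totals}
    best = max(rates.values())
    return {
        group: {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(rate / best, 3),
            "flagged": rate / best < 0.8,
        }
        for group, rate in rates.items()
    }

# Illustrative data only: 40/100 of Group A selected vs. 24/100 of Group B.
data = ([("Group A", True)] * 40 + [("Group A", False)] * 60
        + [("Group B", True)] * 24 + [("Group B", False)] * 76)
print(four_fifths_screen(data))  # Group B's impact ratio of 0.6 gets flagged
```

Keeping audit records like this on a regular cadence is the kind of step that can help make an AI-assisted employment process defensible if a claim arrives.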
B. Incorporating AI output in your service offerings
When implementing AI in the services you provide to your customers, you and/or your AI provider face the following legal risks:
1. IP infringement (for an overview of the legal risks, see Sect. I above)
2. Data privacy violations
AI has an insatiable need for data to train its models. Each data set, especially if unchecked, will invariably contain individuals’ personally identifiable information (PII). And some AI implementations (e.g., for surveillance or for targeted marketing) intentionally collect and generate PII.
Each data set thus becomes a potential target for a data breach. And if a breacher makes this information public, e.g., by posting it on the Internet, such PII repositories get pulled back into data sets used to train other AI models, continuing the cycle.
Such collection of PII gives rise to potential data privacy liability for the AI implementer or provider, e.g., for:
- improper collection and use of PII,
- improper disclosure of the collection and use of PII,
- lack of proper cybersecurity against data breaches, and
- improper transfer of data containing PII.
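On the mitigation side, a baseline step before any data is added to a training set is to scan for and redact obvious categories of PII. The sketch below is a deliberately simplistic, hypothetical illustration using two regular expressions for email addresses and U.S.-style phone numbers; production pipelines rely on dedicated PII-detection tooling and human review, since patterns like these will both miss PII and over-redact.

```python
# Hypothetical sketch: redact two obvious categories of PII (email addresses
# and U.S.-style phone numbers) from text before adding it to a training set.
# Illustrative only; real pipelines use dedicated PII-detection tooling.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def redact_pii(text: str) -> str:
    """Replace matched email addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[REDACTED EMAIL]", text)
    text = PHONE_RE.sub("[REDACTED PHONE]", text)
    return text

sample = "Contact Jane Doe at jane.doe@example.com or (602) 555-0134."
print(redact_pii(sample))
# -> Contact Jane Doe at [REDACTED EMAIL] or [REDACTED PHONE].
```

Documenting what gets scrubbed, when, and by whom can also help later when responding to regulators or to data-subject requests.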
Data privacy concerns further arise when an AI implementer or end user uses the AI platform. The information that the user enters as prompts to generate a desired response may itself contain PII and may be collected and used by this or other AIs in the future.
A veritable alphabet soup of federal and state laws may apply depending on the situation; click here.
3. Fraudulent and negligent misrepresentation
Fraud in any form, including that intentionally perpetrated through AI, gives rise to criminal and civil liabilities. The only open question is what additional laws and regulations will come into effect specific to AI.
An “AI hallucination” is generated content that is nonsensical or unfaithful to the provided source content. Hallucinations happen with surprising frequency and, if and when relied upon, can cause harm. The liability to be borne by implementers or providers for such hallucinations is yet to be determined.
C. Addressing unauthorized use of AI by your employees
Have you updated your employee handbook for AI issues?
All companies should update their company policies and employee handbooks to address any unauthorized use of AI by their employees. At a minimum, you should require your employees to disclose any such use made in the course of their employment.
This will help mitigate against the risks of any of the types of third-party claims (IP infringement, algorithmic discrimination, etc.). It will further mitigate against claims of unlawful termination for such conduct.
D. Will there be federal protections for providers?
Section 230 of the Communications Decency Act of 1996 has long shielded internet platforms from liability for content created by users. Title II (the “Online Copyright Infringement Liability Limitation Act”) of the Digital Millennium Copyright Act of 1998 limits the liability of online service providers for copyright infringement. Some credit such federal protections as being instrumental to the very success of the internet.
Will our government apply these same principles to protect providers from liability for content generated using their platforms? Should we want them to? Stay tuned.
E. Indemnification
The indemnification terms in technology contracts delineating which party is responsible for defending against third-party claims (e.g., for IP infringement) are typically heavily negotiated. This has been, and will continue to be, the case for contract negotiations between AI providers and implementers.
Will our government continue to allow parties the freedom to contract on indemnification terms, e.g., for intellectual property issues? Or will AI be the subject of more oversight on this and other issues?
The topic of indemnification has had a special place in my heart for decades. It is the zero-sum game of zero-sum games. More to come on this.
III. Join the dialogue on IP and AI issues!
The rather jarring sentiment-analysis AI in my headline analyzer up top probably shouldn’t be taken by itself as evidence that the end is nigh. In reality, it only shows that the training of this particular implementation is not quite there yet. Well…unless it just wants us to think that that’s the case….
But make no mistake about it, other AI platforms and implementations are already there. And generative AI is developing faster than most would have imagined last year, far outpacing the development of the laws and regulations to govern it.
Your business’s ability to set up effective company policies and negotiate the AI terms in your contracts in compliance with applicable laws and regulations (not as they currently are, but where they are headed) will define how much of an “AI tax” you have to pay. Much like with the “IP tax” before it, this can be a significant drain on your profit margin and can make or break your business. And despite the inherent uncertainties, you have more control than you might think, in particular as a startup or SME.
My goal is for this blog to become your go-to forum and resource for understanding and staying current on IP and AI issues. IP law has been a moving target for decades and AI law promises even more of the same. Through my first two articles, we have now established frameworks for developing your business’s strategies for IP protection and mitigating against AI risks. Please subscribe below and like, comment, and repost on LinkedIn.
Come join the dialogue and enjoy the ride!
© 2023 Ko IP & AI Law PLLC
Come back next Monday for my next blog article:
“Are we sure trade secrets are the way to go for protecting your AI IP?”
- For a humorous and informed, albeit one-sided, discussion, click here.
- We need consistent terminology defining the different ways that companies interface with AI in order to intelligently discuss roles, responsibilities, liabilities, etc. with respect to its usage.
- This article will focus exclusively on the legal and business implications of AI. Let’s leave discussion of the broader government surveillance and “big brother” / “ministry of truth” type issues for another day.
- Canas v. Bay Entertainment, 2019 WL 13084976 (Ariz. Super. Oct. 25, 2019); Canas v. Bay Entertainment, 252 Ariz. 117 (Ariz. Ct. App. 2021).
- President Biden’s Oct. 2023 Executive Order on AI, however, does direct the Directors of the U.S. Patent and Trademark Office and the U.S. Copyright Office to publish additional guidance addressing these issues and more in the coming year. See id., at § 5.2(c).
- The claimant in both cases is the same, apparently not-so-starving artist. See Thaler v. Perlmutter, 2023 WL 5333236 (D.D.C. Aug. 18, 2023) (affirming the U.S. Copyright Office’s denial of a copyright application on the grounds that the AI-generated work lacked human authorship and the AI could not properly be listed as the work’s “author”); see Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) (affirming the U.S. Patent and Trademark Office’s denial of a patent application on analogous grounds, concluding that under the Patent Act, an “inventor” must be a human being).
- This term “sensitive domain” appears to be an entirely new term coined by the executive branch specifically for this AI context, addressing a subset of heavily regulated industries with stronger civil liberties implications.