
…and without implementing AI successfully, you will be replaced.

Hyperbole? No, at least not for many industries. Businesses and individuals who figure out how to implement AI successfully will operate far more efficiently than those who don't. And the already starving artist faces their greatest existential crisis yet; there's seemingly nothing to do but protest. Employee displacement by generative AI—which generates high-quality text, images, and other content based on the data it was trained on—will perhaps be the greatest societal issue of our times (and we've not exactly been short of issues…).

A successful AI implementation cannot be evaluated solely on technical grounds. Navigating the legal and regulatory minefield will be even more important in AI than it has been historically for other technologies. This will be comparable to the “IP tax” paid, in particular in high tech, as a cost of doing business.1 But in addition to repeatedly having to defend against infringement claims by third-party IP owners, businesses that provide AI services (“AI providers”) and businesses that implement such services in their own business processes (“AI implementers”) will also have to defend against claims brought on two additional fronts: by government regulators and by displaced employees.

Picking up on last week's blog ("Without IP, you are replaceable…."), this article first provides a comprehensive overview of the intellectual property (IP) issues that arise with AI. The article then presents a framework for analyzing the legal risks to come for AI providers and implementers.2

I. Intellectual Property Issues in AI

A. IP infringement

1. Copyright infringement—creative works


Imitation is the sincerest form of flattery? Not so much when it's generative AI, which trains on voluminous data to create new works. It inevitably plagiarizes excerpts of books and articles and copies elements of art, photography, and musical compositions with impunity.

From an artist's perspective, not only are your creative works being stolen, but your livelihood is jeopardized by AI. The recent Hollywood writers' and actors' strikes reflect nothing less—click here.

2. Copyright infringement—software coding


There's a specific wrinkle for copyright issues when generative AI creates software code. The software community has a unique and proud tradition of crowdsourcing for open source software—click here. Contrary to popular misunderstanding, open source software is not by definition "free." Instead, it can be, and often is, incorporated into closed (i.e., proprietary) software for commercial sale.

But one particular type of open source software, cheekily named “copyleft” software, is an exception. Copyleft licensing terms are described as “viral,” because they require all derivative works incorporating copyleft code to also be released under a copyleft license.

This gives rise to a possible scenario where generative AI incorporates excerpts of copyleft open source software as part of its plagiarizing process, only for the software it generates to be rendered unprotectable as proprietary code as a result. While most commentators do not believe a court would ultimately impose such a harsh result should a dispute arise, it is a position available to any litigant who would benefit from it. Software businesses must take note of this risk and take steps to mitigate against it.
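
By way of illustration only, below is a minimal sketch, in Python using only the standard library, of the kind of automated check a software business might layer on top of counsel's guidance: scanning AI-assisted code for text commonly associated with copyleft licenses so that flagged files get human and legal review. The directory path, file types, and keyword list here are hypothetical placeholders, not a substitute for dedicated license-scanning tools or legal advice.

```python
import re
from pathlib import Path

# Hypothetical keywords commonly associated with copyleft licenses.
# A real program would rely on dedicated license-scanning tools and counsel.
COPYLEFT_PATTERNS = [
    r"GNU General Public License",
    r"GNU Affero General Public License",
    r"GNU Lesser General Public License",
    r"\bA?GPL-[23]\.0\b",
]

def flag_copyleft_text(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, matched pattern) pairs flagged for human and legal review."""
    hits = []
    for path in Path(repo_root).rglob("*.py"):  # extend to other source file types as needed
        text = path.read_text(errors="ignore")
        for pattern in COPYLEFT_PATTERNS:
            if re.search(pattern, text, flags=re.IGNORECASE):
                hits.append((str(path), pattern))
    return hits

if __name__ == "__main__":
    for file, pattern in flag_copyleft_text("src"):
        print(f"Review needed: {file} (matched '{pattern}')")
```

A flag from a script like this is only a prompt for review; whether copyleft terms actually attach to a given piece of generated code is a legal question, not a string-matching one.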

[Image: sentiment analysis by headline analyzer]

Self-schedule a free 20-min. video or phone consult with Jim W. Ko of Ko IP and AI Law PLLC here.

3. Right of publicity misappropriation

There's presumably a special place in hell for whoever knocked off Tom Hanks, right—click here? And yet, the internet goes far lower in misappropriating the images and likenesses of famous and everyday people alike.

Any human image that generative AI creates is derived from a model trained on the images of one or more real people. Virtually none of those people have granted permission for the use of their images.

There is no federal law in place granting a "right of publicity" preventing the unauthorized commercial use of an individual's name, likeness, or other recognizable aspects of one's persona. The right of publicity is instead a patchwork of state statutes and common law.

Every state has its own flavor of right of publicity law, with some states' laws being more established than others. My home state of Arizona, for example, has two statutes recognizing a right of publicity, but they are directed only at soldiers. In 2019, an Arizona superior court held that this reflected a legislative intent to deny a right of publicity to civilians. This was overturned on appeal, with the court holding that Arizona always has recognized, and continues to recognize, a common law right of publicity.3

A typical AI provider’s business model is to sell software-as-a-service (SaaS) subscriptions for the use of its platform over the cloud. The provider is thus subject to both federal law and the individual laws of every state.

B. Protection of IP rights by AI providers

The global wave of legislation on AI has focused on mitigating against its potential negative effects on society. To date, little has been directed specifically toward defining the IP rights of AI providers.4

1. Authorship / inventorship issues

Will the law recognize generative AI as capable of creating original works or inventions for IP ownership purposes? And will individuals with ownership rights over such output be granted copyright or patent rights?

Some argue that we should treat AI like a tool, and that creators should be able to use it like any other tool to create original works or inventions and to secure IP rights in them.

The U.S. federal courts appear to have set an outer boundary to this position over the past few years. Two courts have held that if there is no human hand in any part of the generative AI's process, then neither copyright nor patent rights are available to the AI's owner.5 But the question of whether creative works or inventions made under the direction of human beings, with only some degree of non-human assistance, should be eligible for copyright or patent protection remains wide open.

2. Trade secret protections

Many AI companies rely primarily, if not exclusively, on trade secret protection for their innovations. As noted in my blog last week, AI seems like a great candidate for IP protection by trade secrets.

But President Biden's Oct. 2022 Blueprint for an AI Bill of Rights and Oct. 2023 Executive Order on AI make clear our current administration's intent to make AI a heavily regulated industry. The EU's proposed AI Act (June 2023) and China's interim generative AI measures (Aug. 2023) do the same. Providers and implementers will need to meet future regulatory standards establishing the accuracy and lack of "algorithmic discrimination" of their AI—click here.

When an AI company’s invaluable trade secret rights collide with the government’s interests in protecting its citizens in this highly disruptive area, who do you think will win? And who and what should we be rooting for? Let’s play this out in next week’s blog….


II. Legal Risks For Businesses and AI

The business use of AI gives rise to various types of legal liabilities, including:

In the U.S., your business will be subject to greater regulatory scrutiny if it involves a “sensitive domain.”6 These include:

Your primary defense against an AI claim will be fundamentally an issue of compliance, with:

[Image: Biden's Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People]

A. AI use in your hiring, promotion, and termination processes

All businesses implementing AI in their hiring, promotion, and termination processes will need a defensible AI implementation. The employer and/or its AI provider will be liable for any "algorithmic discrimination" that harms an affected claimant. If a jilted individual files an employment discrimination or unlawful termination suit and the AI implementation played any part in the process, you will need to defend it.

B. Incorporating AI output in your service offerings

When implementing AI in providing services to your customers, you and/or your AI provider face the following legal risks:

1. IP infringement (for an overview of the legal risks, see Section I above)

2. Data privacy violations

AI has an insatiable need for data to train its models. Each data set, especially if left unchecked, will invariably contain the personally identifiable information (PII) of individuals. And some AI implementations (e.g., for surveillance or for targeted marketing) intentionally collect and generate PII.

Each data set thus becomes a potential target for a data breach. And if a breacher makes this information public, e.g., by posting it on the internet, then such PII repositories are again pulled into data sets for the training of other AI models, continuing the cycle.

Such collection of PII gives rise to potential data privacy liability for the AI implementer or provider, e.g., for:

Data privacy concerns further arise from how an AI implementer or end user uses the AI platform. The information that the user enters as prompts for the AI to generate a desired response may itself contain PII and may be collected and used by this or other AIs in the future.

A veritable alphabet soup of federal and state laws may apply depending on the situation—click here.
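
For illustration only, here is a minimal sketch, in Python with only the standard library, of the kind of redaction an AI implementer might apply to user prompts before logging them or reusing them for model training. The regex patterns are simplistic, hypothetical examples; real PII handling requires far more robust tooling and a legal determination of what must be protected under the applicable statutes.

```python
import re

# Simplistic, illustrative patterns only; not a compliance solution.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with labeled placeholders before storage or reuse."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Draft a letter to jane.doe@example.com; her SSN is 123-45-6789."
    print(redact_pii(prompt))
    # Prints: Draft a letter to [EMAIL REDACTED]; her SSN is [SSN REDACTED].
```

The design point is simply that redaction happens before the prompt ever enters a log or a training set; scrubbing after a breach, or after the data has been folded into another model, comes too late.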

3. Fraudulent and negligent misrepresentation

Fraud in any form, including that intentionally perpetrated through AI, gives rise to criminal and civil liabilities. The only open question is what additional laws and regulations will come into effect specific to AI.

An "AI hallucination" is generated content that is nonsensical or unfaithful to the provided source content. Hallucinations happen with surprising frequency and, if and when relied upon, can cause harm. The liability to be borne by implementers or providers for such hallucinations is yet to be determined.

C. Addressing unauthorized use of AI by your employees


All companies should update their company policies and employee handbooks to address any unauthorized use of AI by their employees. At a minimum, you should require your employees to disclose any such use made in the course of their employment.

This will help mitigate against the risk of any of the third-party claims discussed above (IP infringement, algorithmic discrimination, etc.). It will further mitigate against employee claims of unlawful termination for such conduct.

D. Will there be federal protections for providers?

Section 230 of the Communications Decency Act of 1996 has long shielded internet platforms from liability for content created by users. Title II (the "Online Copyright Infringement Liability Limitation Act") of the Digital Millennium Copyright Act of 1998 limits the liability of online service providers for copyright infringement. Some credit such federal protections as being instrumental to the very success of the internet.

Will our government apply these same principles to protect AI providers from liability for content generated using their platforms? Should we want it to? Stay tuned.

E. Indemnification

The indemnification terms in technology contracts delineating which party is responsible for defending against third-party claims (e.g., for IP infringement) are typically heavily negotiated. This has been, and will continue to be, the case for contract negotiations between AI providers and implementers.

Will our government continue to allow parties the freedom to contract on indemnification terms, e.g., for intellectual property issues? Or will AI be the subject of more oversight on this and other issues?

The topic of indemnification has had a special place in my heart for decades. It is the zero-sum game of zero-sum games. More to come on this.


III. Join the dialogue on IP and AI issues!

The rather jarring sentiment-analysis AI in my headline analyzer up top probably shouldn’t be taken by itself as evidence that the end is nigh. In reality, it only shows that the training of this particular implementation is not quite there yet. Well…unless it just wants us to think that that’s the case….

But make no mistake about it, other AI platforms and implementations are already there. And generative AI is developing faster than most would have imagined last year, far outpacing the development of the laws and regulations to govern it.

Your business’s ability to set up effective company policies and negotiate the AI terms in your contracts in compliance with applicable laws and regulations—not as they currently are but where they are headed—will define how much of an “AI tax” you have to pay. Much like with the “IP tax” before it, this can be a significant drain on your profit margin and can make or break your business. And despite the inherent uncertainties, you have more control than you might think, in particular as a startup or SME.

My goal is for this blog to become your go-to forum and resource for understanding and staying current on IP and AI issues. IP law has been a moving target for decades and AI law promises even more of the same. Through my first two articles, we have now established frameworks for developing your business’s strategies for IP protection and mitigating against AI risks. Please subscribe below and like, comment, and repost on LinkedIn.

Come join the dialogue and enjoy the ride!


  1. For a humorous and informed, albeit one-sided, discussion, click here. ↩︎
  2. This article will focus exclusively on the legal and business implications of AI. Let’s leave discussion of the broader government surveillance and “big brother” / “ministry of truth” type issues for another day. ↩︎
  3. Canas v. Bay Entertainment, 2019 WL 13084976 (Ariz. Super. Oct. 25, 2019); Canas v. Bay Entertainment, 252 Ariz. 117 (Ariz. Ct. App. 2021). ↩︎
  4. President Biden’s Oct. 2023 Executive Order on AI, however, does direct the Directors of the U.S. Patent and Trademark Office and the U.S. Copyright Office to publish additional guidance addressing these issues and more in the coming year. See id., at § 5.2(c). ↩︎
  5. The claimant in both cases is the same, apparently not-so-starving artist. See Thaler v. Perlmutter, 2023 WL 5333236 (D.D.C. Aug. 18, 2023) (affirming the U.S. Copyright Office’s denial of a copyright application on the grounds that the AI-generated work lacked human authorship and the AI could not properly be listed as the work’s “author”); see Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) (affirming the U.S. Patent and Trademark Office’s denial of a patent application on analogous grounds, concluding that under the Patent Act, an “inventor” must be a human being). ↩︎
  6. This term “sensitive domain” appears to be an entirely new term coined by the executive branch specifically for this AI context, addressing a subset of heavily regulated industries with stronger civil liberties implications. ↩︎
