Ko IP & AI Law PLLC

Arizona patent lawyer focused on intellectual property & artificial intelligence law. Own your ideas, implement your AI, and mitigate the risks.

…and without implementing AI successfully, you will be replaced.

Hyperbole? No, at least not for many industries. Businesses and individuals who figure out how to implement AI successfully will operate far more efficiently than those who don’t. And already starving artists face their greatest existential crisis, with seemingly nothing to do but protest. Employee displacement by generative AI (which generates high-quality text, images, and other content based on the data it was trained on) will be perhaps the greatest societal issue of our times (and we’ve not exactly been short of issues…).

A successful AI implementation cannot be evaluated solely on technical grounds. Navigating the legal and regulatory minefield will be even more important in AI than it has been historically for other technologies. This will be comparable to the “IP tax” paid, in particular in high tech, as a cost of doing business.1 But in addition to repeatedly having to defend against infringement claims by third-party IP owners, all businesses using AI, including:

  • AI providers, who develop and sell AI platforms, and
  • AI implementers, who incorporate AI into their operations and offerings,2

will also have to defend against claims or charges brought on several additional fronts, including:

  • customers and third-parties who are harmed, directly or indirectly, by the AI,
  • employees displaced by AI, and
  • government regulators.

Picking up on last week’s blog (“Without IP, you are replaceable….”), this article first provides a comprehensive overview of the intellectual property (IP) issues that arise with AI. The article then presents a framework for analyzing the legal risks to come for AI providers and implementers.3

I. Intellectual Property Issues in AI

A. IP infringement

1. Copyright infringement: creative works

Copyright infringement issues by generative AI.

Imitation is the sincerest form of flattery? Not so much when it’s generative AI, which trains on voluminous data and creates new works. It inevitably plagiarizes excerpts of books and articles and copies elements of art, photography, and music compositions with impunity.

From an artist’s perspective, not only are your creative works being stolen, but your livelihood is jeopardized by AI. The recent Hollywood writers’ and actors’ strikes reflect nothing less (click here).

2. Copyright infringement: software coding

generative AI coding software

There’s a specific wrinkle for copyright issues when generative AI creates software code. The software community has a unique and proud tradition of crowdsourcing for open source software (click here). Contrary to popular misunderstanding, open source software is not by definition “free.” Instead, it can be and often is incorporated into closed (i.e., proprietary) software for commercial sale.

But one particular type of open source software, cheekily named “copyleft” software, is an exception. Copyleft licensing terms are described as “viral,” because they require all derivative works incorporating copyleft code to also be released under a copyleft license.

This gives rise to a possible scenario where generative AI incorporates excerpts of copyleft open source code into the software it generates, potentially rendering that generated software subject to the copyleft license and unprotectable as proprietary code. While most commentators do not believe that a court would ultimately impose such a harsh result should a dispute arise, it is a position available to any litigant who would benefit from it. Software businesses must take note of this risk and take steps to mitigate it.
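One practical mitigation, assuming AI-generated code passes through your own review pipeline before it ships, is to scan it for common copyleft license markers. The following is a minimal, illustrative Python sketch; the marker list and the scan_generated_code function are hypothetical and are no substitute for a full license-compliance tool or legal review.

```python
import re
from pathlib import Path

# Common copyleft license markers (illustrative, not exhaustive).
COPYLEFT_MARKERS = [
    r"GNU General Public License",
    r"GNU Affero General Public License",
    r"GNU Lesser General Public License",
    r"\bA?GPL-[23]\.0\b",
]

def scan_generated_code(root: str) -> list[tuple[str, str]]:
    """Return (file, marker) pairs for files that contain a copyleft marker."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for marker in COPYLEFT_MARKERS:
            if re.search(marker, text, flags=re.IGNORECASE):
                hits.append((str(path), marker))
    return hits

if __name__ == "__main__":
    # "generated_src" is a hypothetical directory of AI-generated code.
    for file, marker in scan_generated_code("generated_src"):
        print(f"Review needed: {file} matches copyleft marker {marker!r}")
```

A match is not proof of infringement; it simply flags the file for engineering and counsel review before release.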

Sentiment analysis by headline analyzer

Self-schedule a free 20-min. video or phone consult with Jim W. Ko of Ko IP and AI Law PLLC here.

3. Right of publicity misappropriation

There’s presumably a special place in hell for whoever knocked off Tom Hanks, right (click here)? And yet, the internet goes far lower in misappropriating the images and likenesses of famous and everyday people alike.

Any human image that generative AI creates is derived from training on the images of one or more real people. Virtually none of those people have granted permission for the use of their images.

There is no federal law in place granting any “right of publicity” preventing the unauthorized commercial use of an individual’s name, likeness, or other recognizable aspects of their persona. The right of publicity is but a patchwork of state statutory and common law.

Every state has its own flavor of right of publicity law, with some states’ laws being more established than others. My home state of Arizona, for example, has two statutes recognizing a right of publicity, but directed only at soldiers. In 2019, an Arizona superior court held this reflected a legislative intent to deny a right of publicity for civilians. This was overturned on appeal, with the court holding that Arizona has always recognized, and continues to recognize, a common law right of publicity.4

A typical AI provider’s business model is to sell software-as-a-service (SaaS) subscriptions for the use of its platform over the cloud. The provider is thus subject to both federal law and the individual laws of every state.

B. Protection of IP rights by AI providers

The global wave of legislation on AI has focused on mitigating its potential negative effects on society. To date, little has been directed specifically toward defining the IP rights of AI providers.5

1. Authorship / inventorship issues

Will the law recognize generative AI as capable of creating original works or inventions for IP ownership purposes? And will individuals with ownership rights over such output be granted copyright or patent rights?

Some argue that we should treat AI like a tool, and that creators should be able to use it like any other tool to create original works or inventions and secure IP rights in them.

The U.S. federal courts appear to have set an outer boundary to this position over the past few years. Two courts have held that if there is no human hand in any part of the generative AI’s process, then neither copyright nor patent rights are available to the AI’s owner.6 But the question of whether creative works or inventions made under the direction of human beings, with only some degree of non-human assistance, should be eligible for copyright or patent protection remains wide open.

2. Trade secret protections

Many AI companies rely primarily, if not exclusively, on trade secrets to protect their innovations. As noted in my blog last week, AI seems like a great candidate for IP protection by trade secrets.

But President Biden’s Oct. 2022 Blueprint for an AI Bill of Rights and Oct. 2023 Executive Order on AI make clear the current administration’s intent to make AI a heavily regulated industry. The EU’s proposed AI Act (June 2023 draft) and China’s interim generative AI measures (Aug. 2023) do the same. Providers and implementers may need to meet future regulatory standards establishing the accuracy and lack of “algorithmic discrimination” of their AI (click here).

When an AI company’s invaluable trade secret rights collide with the government’s interests in protecting its citizens in this highly disruptive area, who do you think will win? And who and what should we be rooting for? Let’s play this out in next week’s blog….


II. Legal Risks for Businesses Using AI

The business use of AI gives rise to various types of legal liabilities, discussed in the subsections below.

In the U.S., your business will be subject to greater regulatory scrutiny if it involves a “sensitive domain.”7 These include domains such as health, employment, education, criminal justice, and personal finance.

Your primary defense against an AI claim will fundamentally be an issue of compliance with applicable laws, regulations, and emerging federal guidance, including Biden’s Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.

A. AI use in your hiring, promotion, and termination processes

All businesses incorporating AI into their hiring, promotion, and termination processes will need a defensible AI implementation. The employer and/or its AI provider will be liable for any “algorithmic discrimination” that harms an affected claimant. If an aggrieved individual files an employment discrimination or unlawful termination suit and your AI implementation played any part in the process, you will need to defend it.
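What counts as “defensible” will ultimately be shaped by regulators and courts, but one widely used screening metric is the four-fifths (80%) rule for adverse impact under the EEOC’s Uniform Guidelines. The following is a minimal, illustrative Python sketch over hypothetical screening data; the function names and the example numbers are assumptions, and this is neither a complete bias audit nor legal advice.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, selected) pairs, e.g., ("group_a", True)."""
    applied, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / applied[group] for group in applied}

def adverse_impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below 0.8 is a common red flag for adverse impact."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening results from an AI resume filter.
    data = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 25 + [("group_b", False)] * 75)
    for group, ratio in adverse_impact_ratios(data).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical data set, group_b’s selection rate is 62.5% of group_a’s, which would trigger further review under the four-fifths rule.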

B. Incorporating AI output in your service offerings

When implementing AI in providing services to your customers, you and/or your AI provider face the following legal risks:

1. IP infringement (for an overview of these legal risks, see Sect. I above)

2. Data privacy violations

There is an insatiable need in AI for the collection of data to train its models. Each data set, especially if unchecked, will invariably contain personally identifiable information (PII) of individuals. And some AI implementations (e.g., for surveillance or for targeted marketing) collect and intentionally generate PII.

Each data set thus becomes a potential target for a data breach. And if a breacher makes this information public, e.g., by posting it on the internet, such PII repositories can again be pulled into data sets for the training of other AI models, continuing the cycle.

Such collection of PII exposes the AI implementer or provider to potential data privacy liability.

Data privacy concerns also arise from how an AI implementer or end user uses the AI platform. The information that the user enters as prompts may itself contain PII and may be collected and used by this or other AI models in the future.
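One partial safeguard on the implementer side, assuming prompts pass through your own code before reaching an external AI service, is to redact obvious PII patterns up front. The following is a minimal, illustrative Python sketch; the patterns shown (emails, U.S. phone numbers, Social Security numbers) are assumptions and will not catch every form of PII.

```python
import re

# Illustrative PII patterns: emails, U.S. phone numbers, and SSNs.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace recognized PII with labeled placeholders before the prompt
    is sent to an external AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a letter to Jane Doe (jane.doe@example.com, 602-555-0123, SSN 123-45-6789)."
    print(redact_pii(raw))
```

A filter like this reduces, but does not eliminate, the risk that PII in prompts ends up in someone else’s training data.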

A veritable alphabet soup of federal and state laws may apply depending on the situation (click here).

3. Fraudulent and negligent misrepresentation

Fraud in any form, including that intentionally perpetrated through AI, gives rise to criminal and civil liabilities. The only open question is what additional laws and regulations will come into effect specific to AI.

An “AI hallucination” is generated content that is nonsensical or unfaithful to the provided source content. Hallucinations happen with surprising frequency and, if and when relied upon, can cause harm. The liability to be borne by implementers or providers for such hallucinations is yet to be determined.
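Implementers can reduce, though not eliminate, this risk by checking generated output against the source material it is supposed to reflect. The following is a deliberately naive, illustrative Python sketch that flags generated sentences sharing few content words with the source; production groundedness checks are far more sophisticated, and the threshold here is an arbitrary assumption.

```python
import re

def content_words(text: str) -> set[str]:
    """Lowercased words of 4+ letters, as a rough proxy for content words."""
    return set(re.findall(r"[a-z]{4,}", text.lower()))

def flag_unsupported_sentences(source: str, generated: str,
                               threshold: float = 0.5) -> list[str]:
    """Return generated sentences whose content-word overlap with the source
    falls below the threshold -- candidates for hallucination review."""
    source_words = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    source = "The contract requires delivery within thirty days and payment within sixty days."
    generated = ("The contract requires delivery within thirty days. "
                 "It also awards liquidated damages of one million dollars.")
    for sentence in flag_unsupported_sentences(source, generated):
        print("Review:", sentence)
```

Here the second generated sentence shares no content words with the source, so it is flagged for human review before anyone relies on it.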

C. Addressing unauthorized use of AI by your employees

You should update your Employee Handbook to include AI terms.

All companies should update their company policies and employee handbooks to address any unauthorized use of AI by their employees. At a minimum, you should require your employees to disclose any such use made in the course of their employment.

This will help mitigate the risk of the types of third-party claims described above (IP infringement, algorithmic discrimination, etc.). It will further help defend against claims of unlawful termination based on such conduct.

D. Will there be federal protections for providers?

Section 230 of the Communications Decency Act of 1996 has long shielded internet platforms from liability for content created by users. Title II (the “Online Copyright Infringement Liability Limitation Act”) of the Digital Millennium Copyright Act of 1998 limits the liability of online service providers for copyright infringement. Some credit such federal protections as being instrumental to the very success of the internet.

Will our government apply these same principles to protect providers from liability for content generated using their platforms? Should we want them to? Stay tuned.

E. Indemnification

The indemnification terms in technology contracts, which delineate which party is responsible for defending against third-party claims (e.g., for IP infringement), are typically heavily negotiated. This has been, and will continue to be, the case for contract negotiations between AI providers and implementers.

Will our government continue to allow parties the freedom to contract on indemnification terms, e.g., for intellectual property issues? Or will AI be the subject of more oversight on this and other issues?

The topic of indemnification has had a special place in my heart for decades. It is the zero-sum game of zero-sum games. More to come on this.


III. Join the dialogue on IP and AI issues!

The rather jarring sentiment-analysis AI in my headline analyzer up top probably shouldn’t be taken by itself as evidence that the end is nigh. In reality, it only shows that the training of this particular implementation is not quite there yet. Well…unless it just wants us to think that that’s the case….

But make no mistake about it, other AI platforms and implementations are already there. And generative AI is developing faster than most would have imagined last year, far outpacing the development of the laws and regulations to govern it.

Your business’s ability to set up effective company policies and negotiate the AI terms in your contracts in compliance with applicable laws and regulations (not as they currently are, but where they are headed) will define how much of an “AI tax” you have to pay. Much like the “IP tax” before it, this can be a significant drain on your profit margin and can make or break your business. And despite the inherent uncertainties, you have more control than you might think, in particular as a startup or SME.

My goal is for this blog to become your go-to forum and resource for understanding and staying current on IP and AI issues. IP law has been a moving target for decades and AI law promises even more of the same. Through my first two articles, we have now established frameworks for developing your business’s strategies for IP protection and mitigating against AI risks. Please subscribe below and like, comment, and repost on LinkedIn.

Come join the dialogue and enjoy the ride!

© 2023 Ko IP & AI Law PLLC


  1. For a humorous and informed, albeit one-sided, discussion, click here.
  2. We need consistent terminology defining the different ways that companies interface with AI in order to intelligently discuss roles, responsibilities, liabilities, etc. with respect to its usage.
  3. This article will focus exclusively on the legal and business implications of AI. Let’s leave discussion of the broader government surveillance and “big brother” / “ministry of truth” type issues for another day.
  4. Canas v. Bay Entertainment, 2019 WL 13084976 (Ariz. Super. Oct. 25, 2019); Canas v. Bay Entertainment, 252 Ariz. 117 (Ariz. Ct. App. 2021).
  5. President Biden’s Oct. 2023 Executive Order on AI, however, does direct the Directors of the U.S. Patent and Trademark Office and the U.S. Copyright Office to publish additional guidance addressing these issues and more in the coming year. See id., at § 5.2(c).
  6. The claimant in both cases is the same, apparently not-so-starving artist. See Thaler v. Perlmutter, 2023 WL 5333236 (D.D.C. Aug. 18, 2023) (affirming the U.S. Copyright Office’s denial of a copyright application on the grounds that the AI-generated work lacked human authorship and the AI could not properly be listed as the work’s “author”); see Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) (affirming the U.S. Patent and Trademark Office’s denial of a patent application on analogous grounds, concluding that under the Patent Act, an “inventor” must be a human being).
  7. The term “sensitive domain” appears to be an entirely new term coined by the executive branch specifically for this AI context, addressing a subset of heavily regulated industries with stronger civil liberties implications.