Ko IP & AI Law PLLC

Arizona patent lawyer focused on intellectual property & artificial intelligence law. Own your ideas, implement your AI, and mitigate the risks.

GenAI: The Ultimate in Diffusion-of-Responsibility Technology [Part 3 of the Parsing the Blame series]

I. When Software Fails: The Challenges of Proving Liability

A. Why strict product liability doesn’t compute for software

[see Part 1 here]

B. Negligence

[see Part 2 here]

C. Limitation of Liability Provisions Are More Enforceable for Software

[see Part 2 here]

II. GenAI: The Ultimate in Diffusion-of-Responsibility Technology

Then there is AI-as-a-Service (AIaaS), which promises transformative potential but also exponentially complicates the liability analysis.

A. Differences between Software and AI

1. The incomparable LLM provider

There simply is no parallel to the LLM provider in the history of third-party liability.

There are a handful of foundational LLMs with outsized influence today, including those of OpenAI, Anthropic, Meta, and Google. Most AIaaS platforms are built on or integrate with these general-purpose LLMs. Unlike on-premise software or SaaS, LLMs autonomously generate content based on user prompts, which can result in outputs that are unanticipated, inappropriate, or harmful. The unpredictability of this generative behavior complicates liability attribution. And LLMs can generate an infinite variety of outputs across countless domains, making it inherently even more challenging to establish that the particular use of an LLM that led to a given harm was “reasonably foreseeable.”

2. “Not my fault…”

AIaaS providers rely on external datasets, third-party Application Programming Interfaces (APIs) that facilitate interaction and data sharing between applications, and pretrained models from the providers of the foundational LLMs (the “LLM providers”). These dependencies create a precarious house of cards when things go wrong. And if everyone may be responsible, then perhaps no one is?
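To see how tangled this web gets, here is a minimal sketch in Python of a hypothetical AIaaS request path. Every endpoint and name below is invented purely for illustration; the point is how many independent parties (dataset host, API vendor, LLM provider, and the AIaaS platform itself) touch a single output before it reaches the end user.

```python
import requests  # standard HTTP client; all endpoints below are hypothetical

LLM_PROVIDER_URL = "https://api.example-llm.com/v1/generate"     # foundational LLM provider
ENRICHMENT_API_URL = "https://api.example-data.com/v1/enrich"    # third-party API vendor
RETRIEVAL_INDEX_URL = "https://cdn.example-datasets.org/search"  # external dataset host


def answer_customer_query(prompt: str) -> str:
    """One AIaaS response; four potentially responsible parties."""
    # 1. External dataset: retrieval context pulled from a third party's corpus.
    context = requests.get(RETRIEVAL_INDEX_URL, params={"q": prompt}, timeout=10).text

    # 2. Third-party API: an enrichment step owned by yet another vendor.
    enriched = requests.post(ENRICHMENT_API_URL, json={"text": context}, timeout=10).json()

    # 3. Foundational LLM: generation is delegated to the LLM provider's pretrained model.
    completion = requests.post(
        LLM_PROVIDER_URL,
        json={"prompt": f"{enriched['summary']}\n\n{prompt}"},
        timeout=30,
    ).json()

    # 4. AIaaS platform: light post-processing before the output reaches the user.
    return completion["text"].strip()
```

If the returned text turns out to be defamatory or discriminatory, where is the defect: in the third-party corpus, the enrichment vendor’s API, the LLM provider’s pretrained model, or the platform’s own post-processing? Each layer can plausibly point at the others.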

3. The conflation of design and “manufacturing” in AI

In traditional product manufacturing, design and production are distinct stages in the process. When the product fails, there is a clean(-er) pathway toward accountability, applying logic, science, and the immutable laws of the universe. Computer programming, however, especially with respect to AI, combines design and development in “a largely iterative process.”1 “Unlocked, dynamic AI systems pose a greater challenge because they are designed to learn continuously as they are being used, which means that design and manufacture (and the potential defects introduced) never end.”2

Litigation holds (directives issued to individuals or organizations to preserve all forms of relevant information when litigation is reasonably anticipated, pending, or ongoing) will never be more important—or more practically infeasible—to implement.
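To appreciate the difficulty, consider what a preservation hook might have to capture for every single interaction with an “unlocked” AI system. Here is a minimal sketch in Python (all names hypothetical): it records the model version and a fingerprint of the weights alongside each prompt and output, because for a continuously learning system the effective “design” may differ from one record to the next.

```python
import hashlib
import json
from datetime import datetime, timezone


def preserve_interaction(prompt: str, output: str, model_version: str,
                         weights_checksum: str,
                         log_path: str = "litigation_hold.jsonl") -> None:
    """Append one preservation record of a model interaction to a hold log.

    A static product has one design and discrete production runs to preserve;
    an unlocked AI system may present a different effective "design" in every record.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # which release served this request
        "weights_checksum": weights_checksum,  # fingerprint of the weights in use
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

And faithfully executing the hold would also mean snapshotting the weights themselves every time they change, a storage and operational burden of an entirely different order than preserving a static product’s design files.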



4. …and they create their own “laws”…

And the laws of the AI universes are decidedly “mutable.” They were defined—at least initially—by their programmers and even bear such proprietary names as the “Metaverse.”

There are going to be at least some cases where teasing out the source of a “defective GenAI output” that harmed an AI customer or a third party will require an analysis of the underlying LLMs and how they were programmed. And yet, somehow, even the purportedly most progressive AI law in the U.S., the Colorado AI Act, specifically exempts LLM and AIaaS providers from having to disclose any trade secrets in their disclosures to the attorney general regarding potential algorithmic discrimination or other issues.3 And the term “trade secret” is defined broadly in the law to cover anything “of economic value” that is kept secret, which reaches pretty much anything AI providers choose not to disclose to the public.

There are no principled grounds for such a blanket exemption that would be consistent with the goals of any regulatory regime geared toward protecting the public. It seems the Googles of the world can write the laws not only of their universes, but of ours too….

B. Theories of Third-Party Liability for AI

1. [Not-so-]Strict liability for AI?

a. From embedded software to embodied AI?

If you are run over by a driverless vehicle, the argument that it was the hunk of metal that killed you and not the failure of the AI is one that only a corporate attorney could love. So-called “Embodied AI,” such as that used in autonomous driving, “integrates artificial intelligence into physical entities like robots, endowing them with the ability to perceive, learn from, and dynamically interact with their environment.”4 Clearly the logic of Holbrook v. Prodomax Automation will be applied to extend strict product liability principles5 to this driverless vehicle context, right?

Not so fast. First, whether Holbrook will ultimately be viewed as a groundbreaking case or a mere outlier remains to be determined, and no other court has directly affirmed its principle in the three years since.

And the ongoing series of Tesla Autopilot cases, first filed in 2017, has put the brakes on any such direction. Plaintiffs contended that they had become beta testers for incomplete software that rendered Tesla vehicles hazardous when engaged. While some of these cases have settled, no jury has found for the plaintiffs to date.6 Juries have presumably been persuaded by the fact that Tesla marketed its “Autopilot” feature as a driver-assistance system, not a fully autonomous driving system, replete with warnings that drivers must keep their hands on the wheel, which the plaintiff-drivers did not do.

And strong policy arguments can be made both that a degree of “assumption of risk” should apply to any individual using such cutting-edge technologies and that applying strict product liability principles to such emergent technologies would discourage technological progress. This argument logically extends to “Embodied AI” and any resulting physical harms.

It would seemingly apply even more so to intangible harms such as algorithmic discrimination.


*Parting thought: How can we mitigate any intangible harms caused by AI?

On the flip side, perhaps strict product liability principles will make a comeback in the near future, as we have already moved past the “semi-autonomous” driving stage and into fully autonomous vehicles, with Waymo cars now a common sight in the Phoenix area where I live.7 There are, after all, limits to the efficacy of “informed consent” and “assumption of risk” defenses whenever potential direct physical harm is involved, e.g., in health care.

But it is nearly impossible to imagine more intangible harms such as algorithmic discrimination following a similar trajectory.8 Will negligence principles save the day? Let’s take a look at that next time.

© 2024 Ko IP & AI Law PLLC



  1. Charlotte A. Tschider, Humans Outside the Loop, 26 Yale J.L. & Tech. 324, 371 (2024). ↩︎
  2. Id. at 374. ↩︎
  3. See C.R.S. § 6-1-1702(6) (“Nothing in subsections (2) to (5) of this section requires a developer to disclose a trade secret….”). ↩︎
  4. Shaoshan Liu & Shuang Wu, A Brief History of Embodied Artificial Intelligence, and Its Outlook, Communications of the ACM (Apr. 29, 2024), https://cacm.acm.org/blogcacm/a-brief-history-of-embodied-artificial-intelligence-and-its-future-outlook/. ↩︎
  5. For discussion, see Part 1 here. ↩︎
  6. See Andrew J. Hawkins, Tesla Wins Another Court Case by Arguing Fatal Autopilot Crash Was Caused by Human Error, The Verge (Oct. 31, 2023), https://www.theverge.com/2023/10/31/23940693/tesla-jury-autopilot-win-liable-micah-lee (discussing Lee v. Tesla, Case No. 2:20-cv-00570 (C.D. Cal., filed Jan. 20, 2020), and Hsu v. Tesla, Case No. 20STCV18473 (Cal. Super. Ct. L.A. Cnty., filed May 14, 2020)). ↩︎
  7. See Waymo, Redefine How You Move Around Phoenix, https://waymo.com/waymo-one-phoenix/. ↩︎
  8. The only avenue would be through the passage of new laws or the implementation of new regulations that are ultimately upheld and enforced by the courts. We’ll take a look at this in Part 5 of the series. ↩︎