Who will be liable when an AI implementation harms either its users or unrelated third parties?
In Part 1 of this series last week, we explored how software is presumptively treated as a "service," not a "product," and thus is not subject to strict product liability law. So what theories of liability apply when software output causes harm to customers or third parties? Let's take a look.
[Note: In Parts 3 and 4 of this series in the weeks to come, we will discuss how AI (which can be thought of as "software 2.0") threatens to slip through the cracks of the current law on third-party liability.]
I. When Software Fails: The Challenges of Proving Liability
A. Why strict product liability doesn't compute for software
[see last week’s post here]
B. Negligence
Unlike product liability law, where the emphasis is on the product itself and whether it was defective or unsafe, negligence law focuses on a party's failure to act with reasonable care, causing harm to someone to whom they owe a duty of care. In the absence of specific laws or regulations governing the situation at hand (which exist for some subject areas, though the vast majority have not been updated to account for generative AI to date1 2), principles of negligence law and contract law are largely the only games in town.
Under a negligence theory of liability, a plaintiff needs to establish not just that the software was the actual or "but for" cause of the harm (essentially asking, "if this action hadn't happened, would the harm have occurred?"). The plaintiff also needs to prove that the software was the "proximate" (i.e., very simply stated, the "primary") cause of the harm. Generally speaking, this requires the plaintiff to establish that the harm was a "foreseeable consequence" of the software.
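For readers who think in code, the two causation hurdles can be stacked as a toy model. This is a purely illustrative sketch with hypothetical names; real causation analysis is a fact-intensive judgment call, not a pair of booleans:

```python
from dataclasses import dataclass


@dataclass
class NegligenceClaim:
    """Didactic sketch of a negligence claim's causation elements (not legal logic)."""
    harm_occurs_without_software: bool  # counterfactual: would the harm have happened anyway?
    harm_was_foreseeable: bool          # foreseeability, the usual proxy for proximate cause

    def but_for_causation(self) -> bool:
        # "But for" the software, would the harm have occurred?
        return not self.harm_occurs_without_software

    def proximate_causation(self) -> bool:
        # Proximate cause presupposes but-for cause, plus foreseeability.
        return self.but_for_causation() and self.harm_was_foreseeable


# The plaintiff must clear BOTH hurdles for the causation element to hold:
claim = NegligenceClaim(harm_occurs_without_software=False, harm_was_foreseeable=True)
assert claim.but_for_causation() and claim.proximate_causation()
```

The point of the sketch is the conjunction: failing either prong, as the muddier-waters discussion below explains, defeats the causation element entirely.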
The waters for establishing either but-for or proximate causation are inherently far muddier for software than for physical products. A single software program is rarely the sole cause of any harm; it typically interacts with numerous systems, devices, and user inputs, making it more difficult to assign or apportion blame.
Relative fault is often more difficult to determine for harm caused by software than it is for harm caused by tangible goods, because of the larger cast of characters and the more complicated nature of their contributions to the software in question.
And even if specific harm is apportioned to software, on-premises software programs themselves commonly rely on third-party components and are mosaics of proprietary code, open-source libraries, and plug-ins. It is impossible to test software in every conceivable environment or context, and the range of potential use cases and environments is far wider for software than for tangible goods, which complicates any foreseeable-harm analysis.
Moreover, Software-as-a-Service (SaaS) platforms rely on all of the above plus an even larger web of third-party services, including hosting providers, analytics engines, and payment processors.
Plaintiffs further need to overcome a software provider's defenses that other intervening factors were the proximate cause of the harm, including user modifications, misuse, or third-party software and hardware.
Self-schedule a free 20-min. video or phone consult with Jim W. Ko of Ko IP and AI Law PLLC here.
C. Limitation of Liability Provisions Are More Enforceable for Software
With strict product liability largely off the table for software, it is also far easier for software providers to limit their potential liability through contract law.
1. Software, still at your service…
Because software is presumptively treated as a "service," limitation of liability provisions for software harms are more readily enforceable. Were software treated as a "product," it would be subject to the Uniform Commercial Code and product liability doctrines, and courts are generally less willing to enforce clauses that limit or exclude strict liability, particularly when consumer safety is at stake.3
Under contract law, parties can generally include "exculpatory clauses" in contracts limiting or excluding liability for negligence so long as: (1) the limitations are clearly stated,4 (2) the terms are not unconscionable or against public policy, and (3) the harmed party had an opportunity to negotiate or reject the terms.5
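Reduced to a toy conjunction, the three-part test reads as follows. This is a purely illustrative sketch with hypothetical parameter names, not legal advice; courts weigh these factors on the facts rather than as booleans:

```python
def exculpatory_clause_enforceable(
    clearly_stated: bool,
    unconscionable_or_against_policy: bool,
    could_negotiate_or_reject: bool,
) -> bool:
    """Toy restatement of the three-prong enforceability test from the text."""
    return (
        clearly_stated
        and not unconscionable_or_against_policy
        and could_negotiate_or_reject
    )


# A buried, take-it-or-leave-it clause fails on two prongs at once:
print(exculpatory_clause_enforceable(False, False, False))  # -> False
```

As with causation, the structure is conjunctive: a clause that is crystal clear but unconscionable, or fair but buried in fine print, fails the test.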
2. Software’s highly scalable nature
There is a public policy argument for limiting liability for harms resulting from software relative to physical products. As noted technology attorney David Tollen observes, software "is an unusually scalable tool":
[I]t can be used to achieve goals geometrically more valuable than the tool itself. You can use a $10,000 software program to design a half-billion-dollar bridge or to manage a billion-dollar asset portfolio. And that same low-cost software program can ruin a half-billion-dollar bridge or a billion-dollar asset portfolio, if it doesn't work. The provider couldn't do business if every $10,000 sale generated meaningful odds of a billion dollar liability, or even a half-million dollars. So the provider needs limits of liability.6
Limitation of liability provisions help software providers mitigate risks associated with widespread adoption, such as unforeseen bugs or system failures affecting thousands of users simultaneously. This applies equally to software distributed under the traditional model and to software distributed via the cloud as a "service." It is common, for example, for SaaS providers to cap their liability at the cost of the subscription fee, reflecting this scalability issue. And courts will generally uphold such limitation of liability clauses so long as the requirements discussed in the previous section are met.
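To make the scalability math concrete, here is a rough expected-value sketch. The failure probability is an assumption invented purely for illustration; only the $10,000 fee and half-billion-dollar exposure echo Tollen's example:

```python
license_fee = 10_000            # price of the software, per Tollen's example
potential_damage = 500_000_000  # the half-billion-dollar bridge at risk
failure_probability = 1e-4      # ASSUMED odds of a catastrophic, provider-caused failure

# Without a cap, the provider bears the full tail risk on every sale:
expected_uncapped = failure_probability * potential_damage
print(f"Expected uncapped liability per sale: ${expected_uncapped:,.0f}")  # -> $50,000

# With a fees-paid cap, exposure on any single failure is bounded by the fee:
expected_capped = failure_probability * min(potential_damage, license_fee)
print(f"Expected liability with a fee cap:    ${expected_capped:,.0f}")    # -> $1
```

Under these assumed numbers, uncapped expected liability is five times the sale price, making every sale a losing proposition in expectation, while a fees-paid cap bounds it at a dollar. That is the economic logic behind subscription-fee caps.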
Playing this public policy argument out even further, assigning strict and unlimited liability to software developers for every issue would stifle innovation and development. But the flip side is that limiting software developers' liability for the harms their software causes inherently comes at the expense of making future harmed parties whole, in the zero-sum game of liability.
“In the great non zero sum games of history, if you’re part of the problem, then you’ll likely be a victim of the solution.” – Robert Wright
Parting thought: Should potential third-party liability for implementation of a technology be proportionate to how disruptive it is?
Has putting a thumb on the scale in favor of software providers by taking product liability off the table been the right choice from a public policy perspective? My view, FWIW, is yes. Probably.
But how, if at all, does this analysis change as we transition into the AI Age, where generative AI software increasingly makes decisions previously reserved exclusively for human beings, and companies and individuals increasingly rely on, and are harmed by, the outputs of AI? Let's explore this in depth in the next part of this article series.
© 2024 Ko IP & AI Law PLLC
- With the notable exception of a handful of states, of which only Colorado has to date passed a nominally comprehensive AI framework regulating the development, deployment, and use of certain AI systems. See Colo. Rev. Stat. § 6-1-1701 et seq. (2024). ↩︎
- We will walk through the legal and regulatory regimes governing some of these subject areas in Parts 3 and 4 of this article series. ↩︎
- See Henningsen v. Bloomfield Motors, Inc., 161 A.2d 69 (N.J. 1960) (invalidating an exculpatory clause in a standard-form car sales contract as contrary to public policy). ↩︎
- This is why limitation of liability language in contracts is often typed in ALL CAPS. Nope, not silly at all…. ↩︎
- See, e.g., Restatement (Second) of Torts §§ 496B–C (1965). ↩︎
- David W. Tollen, The Tech Contracts Handbook (3d ed. 2021), at 195. ↩︎