I. When Software Fails: The Challenges of Proving Liability
A. Why strict product liability doesn’t compute for software
[see Part 1 here]
B. Negligence
[see Part 2 here]
C. Limitation of Liability Provisions Are More Enforceable for Software
[see Part 2 here]
II. GenAI: The Ultimate in Diffusion-of-Responsibility Technology
A. Differences between Software and AI
[see Part 3 here]
B. Theories of Third-Party Liability for AI
1. [Not-so-]Strict liability for AI?
[see Part 3 here]
2. Negligence by AI?
As strict liability is simply not in the cards in the U.S.,1 2 particularly for intangible harms caused by the implementation of AI, the main remaining theories of liability against AI providers are the enforcement of existing or new laws and regulations and the application of negligence principles. Let’s look at the latter here in Part IV of this series on Parsing the Blame: How AI slips through the cracks of third-party liability law.
a. Are AI providers subject to liability for algorithms that recommend harmful content?
When an AI algorithm recommends harmful content, the lines of liability blur between the platform hosting the AI, the provider of the AI model, and the end user who may act on that recommendation. Historically, courts have treated platforms as intermediaries protected under Section 230 of the Communications Decency Act in the U.S., which shields them from liability for user-generated content. However, generative AI introduces a twist: the AI itself is often the author of the harmful output.
Take, for example, a generative AI system that creates a personalized health plan for a user but inadvertently recommends an unsafe combination of medications due to flawed training data. Courts would likely scrutinize whether the harm was foreseeable and whether the provider exercised due care in training and deploying the model. The foreseeability standard could become a contentious issue, given the expansive and unpredictable use cases of large language models (LLMs).
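To make the “due care in deploying the model” idea concrete, here is a minimal sketch of a deployment-side guardrail that screens a generated health plan against a list of known drug interactions before it reaches the user. Everything in it—the function names, the interaction data, the keyword-matching shortcut—is a hypothetical assumption for illustration, not medical guidance or any provider’s actual practice; a real system would rely on a vetted clinical database and human review.

```python
# Hypothetical guardrail sketch: screen an AI-generated health plan for known
# unsafe medication combinations before it reaches the user. The data and
# function names below are illustrative assumptions, not medical guidance.

from itertools import combinations

# Toy example data standing in for a vetted clinical interaction database.
KNOWN_UNSAFE_PAIRS = {
    frozenset({"warfarin", "aspirin"}),
    frozenset({"sildenafil", "nitroglycerin"}),
}

def extract_medications(plan_text: str) -> list[str]:
    """Naive keyword scan standing in for a real entity-extraction step."""
    known_terms = {med for pair in KNOWN_UNSAFE_PAIRS for med in pair}
    return [med for med in known_terms if med in plan_text.lower()]

def screen_health_plan(plan_text: str) -> tuple[bool, list[str]]:
    """Return (is_safe, warnings) for a generated plan."""
    meds = extract_medications(plan_text)
    warnings = [
        f"potentially unsafe combination: {a} + {b}"
        for a, b in combinations(meds, 2)
        if frozenset({a, b}) in KNOWN_UNSAFE_PAIRS
    ]
    return (not warnings, warnings)

if __name__ == "__main__":
    draft = "Take warfarin daily and aspirin as needed for pain."
    ok, issues = screen_health_plan(draft)
    if not ok:
        print("Plan withheld pending human review:", issues)
```

Whether maintaining (and documenting) a screening step along these lines would satisfy a “reasonable care” standard is precisely the kind of question courts would have to answer.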
Several cases are now pending against social media platform providers over their recommendation algorithms, now built primarily on AI and machine learning technologies, for pushing harmful content to their users. To bypass the social media companies’ defenses based on Section 230,3 plaintiffs have been framing their cases under product liability theories—both strict liability and “negligent design” liability.4
Providers will argue that their algorithms operate as tools, and ultimate responsibility rests with the user or intermediary integrating the AI. But as AI systems grow more autonomous and influential, this argument becomes less tenable. The judiciary may be forced to reexamine longstanding doctrines to address harms stemming directly from AI’s recommendations, potentially diluting the protections afforded by intermediary liability frameworks.
[Photo: Mark Walters, radio host]
b. Liability for AI output inaccuracies or “hallucinations” that cause harm?
AI systems are far from infallible. They generate responses based on probabilistic models trained on datasets that may contain inaccuracies, biases, or outdated information. When an LLM confidently provides false information—a phenomenon sometimes referred to as “hallucination”—and that causes harm, the question arises: who bears the blame?
Consider a scenario where an AI tool advises a financial planner, leading to significant client losses. The planner relied on the AI’s analysis without independently verifying its accuracy. Under negligence principles, establishing liability would depend on proving that the AI provider failed to exercise reasonable care in training or maintaining the system. Yet, the standard of “reasonable care” remains nebulous in the context of AI, where the sheer complexity of models may obscure flaws until after deployment.
Take the pending case of Walters v. OpenAI, in which the plaintiff filed a defamation lawsuit against AI provider OpenAI after its LLM product ChatGPT generated a false legal summary accusing him of embezzlement and fraud.5 What steps taken by OpenAI might constitute “reasonable care” absolving it of liability despite the inaccuracy of its output here?6
A further complicating factor is whether stock disclaimers absolve AI providers of liability. AI providers routinely include terms of service disclaiming responsibility for inaccuracies or misuse, and OpenAI has asserted this as a defense in Walters.7 While courts often uphold such clauses, they may not apply if the inaccuracies lead to substantial financial, reputational, or physical harm.
c. Can AI providers reasonably be made their users’ keepers?
AI’s unique position as both tool and actor raises the question: should AI providers bear responsibility for their users’ actions? For instance, if an AI system generates code that enables a cyberattack, is the provider liable?
Under existing negligence law, the answer depends on foreseeability and duty of care. Courts may ask whether the AI provider reasonably anticipated such misuse and took adequate steps to prevent it. Preventative measures might include restricting the scope of outputs, implementing safeguards, or training the model to avoid producing harmful content. However, over-restriction could stifle legitimate uses and innovation, creating a challenging balancing act.
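As a rough illustration of what “implementing safeguards” might look like in practice, the sketch below screens both the user’s prompt and the model’s draft output against a small misuse pattern list before release. The pattern list, threshold choices, and function names are assumptions made up for this example, not any provider’s actual policy layer.

```python
# Hypothetical safeguard sketch: screen a prompt and the model's draft output
# for misuse indicators before releasing it. Patterns and names are
# illustrative assumptions only.

import re

MISUSE_PATTERNS = [
    r"\bkeylogger\b",
    r"\bransomware\b",
    r"disable\s+antivirus",
]

def flag_misuse(text: str) -> list[str]:
    """Return the misuse patterns that appear in the given text."""
    return [p for p in MISUSE_PATTERNS if re.search(p, text, re.IGNORECASE)]

def release_output(prompt: str, draft_output: str) -> str:
    """Release the draft only if neither the prompt nor the draft is flagged."""
    hits = flag_misuse(prompt) + flag_misuse(draft_output)
    if hits:
        # Over-blocking risk: legitimate security research can trip the same
        # patterns, which is the balancing act described above.
        return "Request declined pending review (flagged: " + ", ".join(hits) + ")"
    return draft_output
```

Even a toy filter like this exposes the trade-off: patterns broad enough to catch misuse will inevitably block some legitimate requests.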
The scale and autonomy of AI systems exacerbate this issue. While traditional software is deterministic, AI can evolve dynamically, producing unforeseen results. Expecting providers to preemptively address every potential misuse is arguably unrealistic. A reckless prompt fed into a generative AI model can produce everything from copyright infringements to defamatory statements, potentially harming quite literally every individual or group in the world, the vast majority of which have no contractual relationship whatsoever with the AIaaS provider or any of its other associated third parties.
Yet the alternative—leaving third parties unprotected—is equally unsatisfactory.
*Parting thoughts: Perhaps existing statutory or regulatory requirements are enough?
In October 2024, 14 states filed suits against the social media platform TikTok, alleging that it employs a variety of addictive features (some presumably driven by AI algorithms) targeting teens and collects their data without consent.8 Over 40 states have joined a coalition behind a similar suit filed in October 2023 against Meta over its Facebook and Instagram social media platforms.9 Both attempt to bypass Section 230 defenses not by applying any strict product liability or negligence principles, but rather by focusing on state consumer protection laws and child safety laws.10
Do existing statutory or regulatory requirements suffice to mitigate the novel and unique risks posed by the AI Age? Let’s explore this next time.
© 2025 Ko IP & AI Law PLLC
- See Part 3 here. ↩︎
- The contrast between the U.S. approach and that taken by Europe on these same AI issues could not be more stark. The new Product Liability Directive that came into effect in December 2024 explicitly expands product liability concepts to include software and the newest technologies like artificial intelligence. See Liability for defective products, European Commission (8 December 2024), https://single-market-economy.ec.europa.eu/single-market/goods/free-movement-sectors/liability-defective-products_en#who-can-be-held-liable. ↩︎
- For some additional cases implicating the scope of Section 230 immunity, see Gonzalez v. Google LLC, 598 U.S. 617 (2023); Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023); and Lemmon v. Snap, 995 F.3d 1085 (9th Cir. 2021) (reversing grant of motion to dismiss and holding that the social media company was not entitled to § 230 immunity where it had itself designed its platform to include the features that allegedly caused the harm to the public). ↩︎
- See In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, 702 F. Supp. 3d 809 (N.D. Cal. Nov. 14, 2023) (denying social media defendants’ argument that their platforms are not “products” subject to state product liability law); but see, e.g., Rodgers v. Christie, 795 F. App’x 878, 880 (3d Cir. 2020) (not precedential) (holding that artificial intelligence software “is neither ‘tangible personal property’ nor remotely ‘analogous to’ it” to qualify as a product for product liability purposes); and Intellect Art Multimedia, Inc. v. Milewski, 2009 WL 2915273 (N.Y. Sup. Ct. Sept. 11, 2009) (unpublished) (holding that a website is not a “product” for strict liability purposes). ↩︎
- See Isaiah Poritz, OpenAI Fails to Escape First Defamation Suit From Radio Host, Bloomberg Law (Jan. 16, 2024), https://news.bloomberglaw.com/ip-law/openai-fails-to-escape-first-defamation-suit-from-radio-host. ↩︎
- It should also be noted that defamation cases like Walters v. OpenAI bear a higher burden of proof than negligence cases. Can GenAI be deemed to have acted with any specific intent in the first place, as required for a defamation claim brought by a public figure such as plaintiff Walters (a radio host)? ↩︎
- OpenAI has asserted there should be no liability because the ChatGPT terms of use alerts users that ChatGPT “is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors)…[and] care should be taken when using language model outputs, particularly in high-stakes contexts.” GPT-4, OpenAI (Mar. 14, 2023), https://openai.com/index/gpt-4-research/. ↩︎
- See Attorney General James Sues TikTok for Harming Children’s Mental Health, Office of the New York State Attorney General (Oct. 8, 2024), https://ag.ny.gov/press-release/2024/attorney-general-james-sues-tiktok-harming-childrens-mental-health. ↩︎
- See Complaint for Injunctive and Other Relief, Case No. 4:23-cv-05448 (N.D. Cal., filed Oct. 24, 2023), available at https://www.documentcloud.org/documents/24080032-state-ags-v-meta. ↩︎
- See Bobby Allyn, States sue Meta, claiming Instagram, Facebook fueled youth mental health crisis, NPR (Oct. 24, 2023), https://www.npr.org/2023/10/24/1208219216/states-sue-meta-claiming-instagram-facebook-fueled-youth-mental-health-crisis. ↩︎