Ko IP & AI Law PLLC

Arizona patent lawyer focused on intellectual property & artificial intelligence law. Own your ideas, implement your AI, and mitigate the risks.

In LLM Providers We Trust…? [Conclusion of the Parsing the Blame for AI series]

I. When Software Fails: The Challenges of Proving Liability

A. Why strict product liability doesn’t compute for software

[see Part 1 here]

B. Negligence

[see Part 2 here]

C. Limitation of Liability Provisions Are More Enforceable for Software

[see Part 2 here]

II. GenAI: The Ultimate in Diffusion-of-Responsibility Technology

A. Differences between Software and AI

[see Part 3 here]

B. Theories of Third-Party Liability for AI

1. [Not-so-]Strict liability for AI?

2. Negligence by AI?

[see Part 4 here]

3. Statutory or Regulatory Oversight of the Implementation of AI

The right to private action—the ability of individuals or groups to bring lawsuits in court to address the harms they have suffered—plays a vital role in protecting against many societal harms, particularly in the U.S. But given the unavailability of strict product liability principles and the heightened challenges of establishing negligence for AI-related harms, the role of private actions in holding AI providers accountable for harms their AI services cause the public is inherently diminished.

The patchwork of federal and state laws governing data privacy on a sector-by-sector basis does not grant a private right of action, save for a few exceptions.1

Perhaps this is as it should be—such harms, often diffuse and systemic, may be better addressed collectively by government agencies and officials tasked with enforcing the various federal laws that protect the general public from harms2 such as those implicated by the rise of AI. After all, private enforcement entails its own host of issues, including inconsistent outcomes, the potential for frivolous lawsuits, and a lack of focus on broader public policy objectives.

The nature and complexity of at least two types of AI-related harms—data privacy violations and algorithmic discrimination—are such that it is difficult to imagine anything but governmental action providing meaningful protections for the public.

a. Data privacy

“AI impacts privacy in ways that often do not introduce entirely new problems, but instead modify and intensify existing ones. Current privacy laws are quite inadequate in addressing the challenges brought by AI. The outdated approaches that persist in most privacy laws are particularly unsuitable for managing AI’s complexities.” – Daniel Solove3

The advent of AI renders data privacy almost impossible to protect, because AI is the perfect tool for collecting and analyzing data: it is virtually omniscient and tireless. Before AI, the primary data privacy concern was service providers unscrupulously selling the private data of their own customers. With the rise of AI bots scraping data from every corner of the internet and finding predictive value in seemingly random aspects of our personal existences, all of our personal data is effectively laid bare through a never-ending series of data breaches, postings of data troves, AI-bot scraping, training of AI models on this data, and so on. No personal or contractual relationship is involved or required.

b. Algorithmic discrimination re employment, access to services, etc.

Solove's above-quoted assessment applies not just to data privacy, but also to the other major categories of legal regulatory regimes impacted by the advent of AI, including those governing employment and access to services.4 But the nature of the AI risk is entirely different: these regimes all involve the implementation of AI to evaluate an individual's application for a job, loan, government benefit, and the like, and they inherently entail a risk of algorithmic discrimination and a corresponding need for transparency and defensibility.

AI tools such as automated employment decision tools by definition extract predictive value from information other than traditional data, with a job applicant's personal social media posts being exhibit A. As long as such information is in the public domain, this is presumptively legal. Should this remain unfettered in the AI age?

At least some of such "alternative data" may be correlated with a person's race, color, national origin, religion, sex or gender, disability, age, or any of the other classes protected under the Civil Rights Act of 1964 and related federal civil rights statutes.
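As a concrete illustration of that proxy effect, here is a minimal Python sketch using entirely hypothetical data (the "engagement score" feature and the group split are invented for this example): even though protected-class membership is never given to a screening model, a facially neutral feature that is distributed differently across groups will correlate strongly with that membership and can therefore stand in for it.

```python
# Purely illustrative sketch with made-up data (no real tool or dataset is
# referenced). It shows how a facially neutral "alternative data" feature can
# act as a proxy for protected-class membership that the model never sees.
import random
import statistics

random.seed(0)

# Hypothetical applicant pool: the screening model is only given the
# "engagement score", but that score is distributed differently across groups.
scores, membership = [], []
for _ in range(1000):
    in_protected_class = random.random() < 0.5
    scores.append(random.gauss(40 if in_protected_class else 60, 10))
    membership.append(1.0 if in_protected_class else 0.0)

# Pearson correlation between the "neutral" feature and class membership
# (statistics.correlation requires Python 3.10+).
r = statistics.correlation(scores, membership)
print(f"Correlation between 'neutral' feature and protected class: {r:.2f}")
```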

In recent years, we have started to see regulatory actions and lawsuits claiming discrimination involving AI in the hiring process,5 in consumer issues,6 and in housing/rental applications.7 It is not at all surprising to see the EEOC, the FTC, and a class-action suit, respectively, in those three arenas—it seems likely that these will be the principal avenues of seeking redress for such AI-related harms under the current U.S. legal regime.

1. AI and employment laws and regulations

“Disparate impact liability helps root out discrimination that is unintentional but unjustified—precisely the risk with AI.” – Chiraag Bains8

AI’s impact on hiring, firing, and workplace management introduces new legal complexities. Automated hiring tools risk perpetuating systemic biases, while AI-driven performance evaluations might lead to unfair terminations.

Consider an AI platform that evaluates job candidates. If its algorithms inadvertently perpetuate bias, who shoulders the blame? The AIaaS provider will say the source of any bias is the LLM. The LLM provider will point the finger at the original sources of its training data and present all of the technological "guardrails" it put in place to remove bias from those sources, which will turn out to be effectively the entire internet. Good luck untangling that knot!

In May 2023, the Equal Employment Opportunity Commission (EEOC) published updated guidance on automated employment decision tools using AI. New York City's Local Law 144 went into effect in July 2023, requiring NYC employers to disclose and audit any use and output of AI-based automated employment decision tools. Both reflect an increasing awareness of the inherent risk of inequities in the application of AI to the hiring process, but neither offers much in the way of concrete legal recourse for harmed individuals.9
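To give a sense of what such a bias audit actually measures, here is a minimal Python sketch using hypothetical group names and screening outcomes (a simplification, not the methodology required by Local Law 144 or any regulator): each group's selection rate is divided by the selection rate of the most-selected group, and, under the EEOC's traditional four-fifths rule of thumb, an impact ratio below 0.8 is a red flag for possible disparate impact.

```python
# Purely illustrative sketch of an impact-ratio calculation; group names and
# screening outcomes are hypothetical, and this is not the mandated
# methodology of any particular statute or auditor.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (group, selected) pairs, where selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    # Selection rate per group, then each rate divided by the highest rate.
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: 40% of group_a selected vs. 20% of group_b.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 20 + [("group_b", False)] * 80)

for group, ratio in impact_ratios(sample).items():
    flag = "possible disparate impact" if ratio < 0.8 else "within the 4/5 rule of thumb"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```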


2. AI and consumer issues

The role of the Federal Trade Commission with respect to protecting the public from AI harms is both emblematic and informative. It is arguably the most important patch in the sprawling quilt of laws and regulations purportedly governing data privacy issues in the U.S. And yet it has no direct authority to govern these issues.10

First, there is no private right of action under Section 5 of the FTC Act, so harmed individuals rely entirely on the FTC's discretion as to whether it initiates an "unfair or deceptive acts or practices" (UDAP) enforcement action under Section 5.

More importantly, the FTC's authority under Section 5 of the FTC Act is to prohibit "unfair or deceptive acts or practices in or affecting commerce." How does this apply to data privacy? If a company publicly represents that it protects the privacy of its customer data by doing X and fails to live up to that promise, then the FTC can and will step in, for example ultimately extracting a $5 billion settlement from Facebook for the Cambridge Analytica scandal in 2019.11

Would the FTC have the authority to interpret "unfair acts" to encompass, e.g., an AI provider collecting and selling private "alternative data" about individuals? Certainly not without the AI providers being able to challenge such a maneuver under the Supreme Court's recent decision in Loper Bright Enterprises v. Raimondo, which curtailed the deference historically afforded to regulatory agencies under the Chevron doctrine.

More likely, the FTC's ambit on these issues will remain limited to calling shenanigans on companies with respect to their own marketing statements regarding their data privacy measures, which generally amount to agreeing to industry-standard measures (and loopholes…) that they directly contributed to developing. In other words, the FTC can regulate companies with respect to their sale of any private data ("alternative" or otherwise) only if a company publicly and voluntarily stated that it would not do so in the first place.

As such, the FTC's authority with respect to AI and data privacy issues is fundamentally limited. Unless the collection and sale of private alternative data has already been defined by legislation or interpreted by the courts as "unfair," it seems likely that the FTC can, in effect, only police a company's self-regulation efforts and whether they are in line with the company's own marketing statements.

3. AI and access to services

Generative AI systems are increasingly used to determine eligibility for essential services such as loans, healthcare, and government benefits. When these systems produce erroneous or discriminatory outputs, individuals may be unfairly denied access.

Legislators could impose accountability frameworks requiring AI providers to explain decisions and provide mechanisms for redress. However, achieving genuine transparency in complex AI systems—where even developers may struggle to explain certain outputs—remains a significant challenge.


III. Conclusion

Regulators across the spectrum will inevitably seek to impose stricter oversight on AI systems, whether they implicate data privacy rights, employment, consumer protections, or access to services, including mandatory disclosures of decision-making criteria and avenues for human review.

But for better or for worse, the authority of federal agencies to address AI harms, whether proactively or after the fact, has unequivocally been diminished by the Supreme Court's 2024 decision in Loper Bright. Federal agencies are more constrained than ever to operate under the letter of existing law, almost all of which was drafted in the pre-AI age, before the disruptive impact of AI was even contemplated.

The inescapable conclusion of this article series is that AI slips through the cracks of our legal system. AI providers elude accountability in private actions for harms caused by their AI outputs, and they are simultaneously benefiting from the current wave of government deregulation.12 In the U.S., nearly all of our legal eggs for mitigating the harms caused by what may be the most disruptive technological force in history have been placed in one basket: that of future legislative action. Until progress is made there, is there really nothing for us to do but trust the LLM providers to self-regulate responsibly…?

© 2025 Ko IP & AI Law PLLC


  1. E.g., the federal laws governing protected health information (PHI) and nonpublic personal information (NPI) collected by healthcare and financial institutions about their customers under the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA) do not allow any private right of action. Only a handful of federal laws grant a private right of action, including the Fair Credit Reporting Act (FCRA) protecting the accuracy, fairness, and privacy of information in consumer credit reports and the Telephone Consumer Protection Act (TCPA) protecting privacy from unwanted telemarketing calls (and, e.g., providing statutory remedies of $500-$1,500 per violation).

    The past decade has seen a wave of states passing comprehensive data privacy laws, starting with the California Consumer Privacy Act (CCPA), enacted in 2018 and effective in 2020, and the California Privacy Rights Act (CPRA), enacted in 2020 and effective in 2023. To date, twenty states have passed such laws. But only California grants consumers a limited private right of action.

    As such, under the vast majority of statutory regimes impacting data privacy rights, enforcement authority is typically vested in state attorneys general or designated regulatory bodies. ↩︎
  2. Note: some of these same federal laws also provide an avenue for private enforcement. But many do not. And those that do may require filing a complaint with the relevant federal agency before filing any private action. For example, before filing a private lawsuit alleging workplace discrimination under federal law, individuals are generally statutorily required to file a charge with the Equal Employment Opportunity Commission (EEOC). ↩︎
  3. Daniel Solove, A regulatory roadmap to AI and privacy, International Association of Privacy Professionals (Apr. 24, 2024), https://iapp.org/news/a/a-regulatory-roadmap-to-ai-and-privacy/. ↩︎
  4. This series of articles addresses non-intellectual-property third-party liability for harms resulting from the implementation of AI by companies or individuals. Intellectual property liability for such harms, and third-party liability for harms resulting from the implementation of AI by the government, are issues that will be addressed in future blog articles. ↩︎
  5. See iTutorGroup to pay $365,000 to settle EEOC discriminatory hiring suit, U.S. Equal Employment Opportunity Commission, 2023 WL 5932895 (E.E.O.C.) (settling case where the EEOC alleged defendant’s AI-powered recruitment software automatically rejected older applicants in violation of the Age Discrimination in Employment Act); Mobley v. Workday, Inc., 2024 WL 3409146 (N.D. Cal. July 12, 2024) (alleging defendant’s AI-driven applicant screening tools discriminated based on race, age, and disability). ↩︎
  6. See Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards, Federal Trade Commission (Dec. 19, 2023) (prohibiting retailer from using facial recognition technology for surveillance purposes to settle charges that the retailer failed to implement reasonable procedures and prevent harm to consumers), https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without. ↩︎
  7. See Emma Roth, AI landlord screening tool will stop scoring low-income tenants after discrimination suit, The Verge (Nov. 20, 2024) (settling class-action suit for $2.3 million), https://www.theverge.com/2024/11/20/24297692/ai-landlord-tool-saferent-low-income-tenants-discrimination-settlement. ↩︎
  8. Chiraag Bains, The legal doctrine that will be key to preventing AI discrimination, Brookings (Sept. 13, 2024), www.brookings.edu/articles/the-legal-doctrine-that-will-be-key-to-preventing-ai-discrimination. ↩︎
  9. See my Jan. 2024 blog article on Bias in, bias out: The Algorithmic discrimination challenge, here. ↩︎
  10. Again, the circumstances are far different in Europe, where: (1) there is a fundamental individual right to one’s private data; and (2) under the General Data Protection Regulation (GDPR), each European Union member state establishes a Data Protection Authority (DPA) responsible for ensuring compliance with the GDPR and protecting the data privacy rights of individuals, and the DPAs work together under the guidance of the European Data Protection Board (EDPB) to ensure consistent application across the EU. ↩︎
  11. See my Dec. 2023 blog article on Your duties to your AI customers and their private data, here. ↩︎
  12. The interests of the litigious and deep-pocketed anti-Diversity, Equity, and Inclusion (DEI) movement are directly aligned here with those of the AI providers on these “algorithmic discrimination” issues. I’m not saying, I’m just saying…. ↩︎
