Category: Government use of AI
-
In LLM Providers We Trust…? [Conclusion of the Parsing the Blame for AI series]
AI disrupts traditional liability frameworks, making strict product liability ill-suited and negligence hard to establish. Data privacy laws are outdated and ineffective against AI’s pervasive data collection, while algorithmic discrimination in hiring and services introduces bias risks. Regulatory actions, like FTC oversight, remain limited, leaving significant gaps in accountability and relying heavily on future legislative… Read more
-
Software and product liability law [Part 1 of the Parsing the Blame for AI series]
Explore the complexities of AI liability in the evolving legal landscape. From product liability to on-premises software to software-as-a-service (SaaS), uncover how laws for tangible goods differ from those governing intangible services like AI. Delve into key cases, challenges, and the implications for accountability in our interconnected AI age. Read more
-
Regulating the regulators: Ensuring patent examiners use AI “responsibly”
The patent examination process—whereby the U.S. Patent and Trademark Office reviews patent applications and issues or grants those that meet the requirements for patentability—is tailor-made for the implementation of AI. But what are the risks to the quality and fairness of the patent examination process when the USPTO implements AI? And what policies and procedures… Read more