Ko IP & AI Law PLLC

Arizona patent lawyer focused on intellectual property & artificial intelligence law. Own your ideas, implement your AI, and mitigate the risks.

Sedona WG13 on AI

The Sedona Conference Launches Dialogue on Artificial Intelligence and the Law


The Sedona Conference, one of the nation's leading nonpartisan think tanks on issues of law and technology, will be launching its Working Group 13 on Artificial Intelligence and the Law in January, building on its long-standing reputation in the areas of eDiscovery, digital records management, patent litigation, trade secrets, cybersecurity, data privacy, and cross-border data transfers. Sedona's Executive Director Ken Withers sat down with Sedona veteran and former Sr. Program Attorney Jim Ko to talk about this new initiative.


Withers: Jim, I feel like I've been here before. 40 years ago, I was installing the first Westlaw terminals in my law school's library. 30 years ago, I was learning HTML in library school and developing my first web site. 20 years ago, I was in Sedona's Working Group 1 helping develop the first set of principles governing eDiscovery. Ten years ago, I was sitting at a Sedona conference in London discussing data protection regulation with European judges and lawyers. This year, Sedona has already hosted two conferences on AI and the Law, and several of our Sedona Working Groups have leapt ahead and started developing analyses and best practice guidance for courts and practitioners addressing AI. I like to think we're on the cutting edge, but we always seem to be trying to catch up.

Ko: I hear you, Ken. The law always lags behind technological change. Novel issues brought about by novel technologies have a way of slipping through the cracks of the laws as written, producing outcomes inconsistent with the principles underlying them. How can we in the legal community help the law keep pace with the rapid and sweeping societal changes initiated by the rise of AI and mitigate AI's excesses? How can we regulate the use of AI and encourage its safe and responsible usage while also mitigating the risks of government overreach or excessive litigation? Our federal and state legislative and judicial branches should be the primary sources of legal authority on all issues. But to the degree that getting ahead of the rapid developments in AI is important, the deliberative nature of both effectively precludes that. And the existing patchwork of guidelines and regulations from administrative agencies is spotty, a problem compounded by the Supreme Court's Loper Bright Enterprises v. Raimondo decision earlier this year, which calls into question the ability of regulatory agencies to expand their authority to meaningfully address AI.

Withers: This is the situation we faced in the early 2000s with eDiscovery, only AI is far more consequential. Judges, lawyers, and concerned citizens were searching for clarity and direction on complex issues, and there were no easy answers. That is when The Sedona Conference formed Working Group 1 and applied a nonpartisan, dialogue-based, consensus-building approach to develop The Sedona Principles on eDiscovery. Representation from all key stakeholders was required. Good-faith willingness on the part of those stakeholders to "check their hats at the door" and work toward collective solutions was required. And where we reached consensus (which was not everywhere, but substantial), the Principles were swiftly and almost universally adopted by the courts and leading practitioners, finding their way into more than 200 court decisions and even into the procedural rules in the U.S. and Canada. It was, as we say, "dialogue, not debate" that brought clarity.

Ko: And later we helped bring that same spirit into the world of patent litigation best practices, in the wake of the America Invents Act.

Withers: That took longer, but has been a rather stunning success, thanks in no small part to your skill in keeping a number of strong legal egos, with often conflicting client interests, focused on the big picture. But now you're in private practice, focusing on AI and Intellectual Property. What are some of the issues that you plan to address in your blog?

Ko: The first set of issues is probably what keeps most lawyers up at night, and that is the potential liability associated with the use of AI by their clients, and how they can advise them. When should a company be liable for alleged harm resulting from the outputs of an AI agent that it implements in either its products and services or its operations, such as human resources? What measures should companies take to mitigate these potential liabilities? For instance, what level of human oversight before, during, or after AI implementation should reduce company liability and either push it upstream to the provider of the foundation model or simply leave the affected third party downstream to absorb the harm? Would the availability of scientific validation of AI tools move the analysis toward a more traditional product liability theory? On the other hand, can we agree on benchmarks for validation when AI tools are, by their very nature, always evolving, with new data modifying the underlying algorithms? Should this push the analysis more toward a principal/agent theory of liability, leaving companies generally vicariously liable for the outputs of their AI agents but providing a safe harbor in some yet-to-be-defined circumstances where their AI agents act outside the scope of their intended implementation?

Withers: The question of validation is where Sedona might have significant impact. In the early days of Technology Assisted Review, which many argue was a precursor to today's AI in the eDiscovery space, Sedona was instrumental in setting up and running the National Institute of Standards and Technology's annual Text Retrieval Conference ("TREC") Legal Track competition.

Join the Sedona WG1/13 dialogue on this foundational issue.

At last month's Sedona Working Group 1 Annual Meeting in Phoenix, there was some discussion about collaborating again with the U.S. Department of Commerce to revive that program, given the incredible advances in the technology. Luckily, between Working Group 1 in the U.S. and Working Group 7 in Canada, we have some of the leading data scientists in the legal community already on board.

Ko: Another set of issues deals with data privacy and data security.

If you have data privacy/cybersecurity expertise, we need your help!

In the old days, which arenโ€™t that old, there was a level of anonymity provided by the sheer volume of unrelated data dispersed across the internet. But with AI foundational models being trained on the entire internet and more, a previously worthless bit of personally identifiable information may well be combined with other missing pieces to create a complete profile of an individual or group, unlocking tremendous value for targeted commercial marketing, political advertising, or identity theft. The affected individuals whose data is being trafficked often have little to no actual or practical legal recourse under applicable federal or state law.

Similarly, while data security vulnerabilities have existed since the advent of networked computers and the internet for all companies, the widespread availability of GenAI has dramatically reduced the cost of hacking, as GenAI tools can be trained to autonomously and tirelessly replicate the steps taken by a human hacker to breach any given network and then to exploit that breach. And any implementation of publicly facing GenAI by an organization, for instance, a customer service chatbot, can itself be a target of new types of cyberattack designed to manipulate the behavior of such AI systems.

Withers: Ouch. This is definitely in the wheelhouse of members of Sedona Working Group 11, which has in the past developed widely accepted guidelines for "reasonable security" analysis to address the standard for data incident liability, but which must update those standards as the tools available to bad actors become more sophisticated. And there is a significant cross-border aspect to this that Sedona Working Group 6 will be looking at, as both the data stores available to the bad actors and the bad actors themselves have little or no respect for international borders, and every nation develops independent regulations and enforcement mechanisms that global enterprises need to navigate. But your first love is intellectual property law, so tell us about the issues we need to address in that area.

Ko: The data privacy and data security issues raised by GenAI may require some stretching of existing law to address. GenAI, however, flips a foundational pillar of patent law and copyright law upside down. The scope of both patent and copyright protections has been significantly weakened with the advent of GenAI. The only question is to what degree. No longer can it be assumed that the sole inventive or creative force behind an invention or work of authorship is one or more human beings. Any use of GenAI as part of an inventive or creative process gives rise to a question that previously did not exist: What degree of human contribution is now required to confer patent or copyright rights? The U.S. Patent and Trademark Office and the U.S. Copyright Office have weighed in with publications stating their respective positions on these questions and providing some guidance to applicants on how to meet these requirements. But fundamental questions remain as to whether either Office adopted the appropriate standards for these issues, whether they have the authority to establish any standards to begin with, whether their representatives have the requisite time, information, or ability to analyze these issues in a valid and reliable manner during examination, and what information disclosures are required of applicants with respect to their AI use.

Withers: You recently co-authored an article on this issue with Judge Paul Michel, former Chief Judge of the Federal Circuit Court of Appeals, which we published in The Sedona Conference Journal. What are some of the specific questions raised by GenAI that the IP community needs to address?

Ko: There are so many. On the patent side, whether inventions directed at GenAI should continue to be presumptively ineligible for patentability under 35 U.S.C. § 101 and Supreme Court caselaw, because they are inherently "abstract ideas/mental processes" that can be done "within the human mind" (thus putting a thumb on the scales against all GenAI patent applications)? Whether the level of knowledge and skill of that hypothetical "Person Having Ordinary Skill In The Art" (PHOSITA) for 35 U.S.C. § 103 obviousness rejection purposes should expand commensurate with the recent widespread availability of GenAI, thus potentially rendering all future inventions, both AI-related and non-AI-related, invalid for obviousness? Whether GenAI inputs or outputs constitute public disclosures invalidating patent and trade secret rights under the current law, and if so, should the law be amended to provide some form of safe harbor?

Withers: And on the copyright side?

We need more copyright expertise, particularly re creatives!

Ko: Whether GenAI-assisted software code is protectable by copyright under the current law, and if not, should the law be amended to provide some form of IP protection for GenAI-assisted software coding? Whether works of authorship are protected against unauthorized use for training GenAI models under the current copyright law, and if not, should the law be amended to provide some form of protection for the creators?


Withers: Well, I see your point about upending established IP law, especially in regard to patents. GenAI raises fundamental questions as to whether a patent-based or a trade-secret-based IP protection system for inventions is better for society.

I'm hearing from patent practitioners, especially the patent prosecution bar, that GenAI inventions lend themselves better to protection as trade secrets, because while they may be difficult to patent, they are inherently difficult if not impossible to reverse engineer. And the training data itself used to fuel any GenAI model can only be protected as a trade secret. The major large language model providers themselves do not rely primarily on the patent system to protect their IP. But does this shift to trade secret protection undermine the societal benefits of a patent system, the reason the Founders wrote it into the Constitution?

We need more patent prosecution expertise!

Ko: That's a good question. The quid pro quo for patent protection is that a patent applicant must disclose its technology to the public. This public disclosure facilitates both broader technological development and any required regulation of the technology. The ability to protect GenAI as a trade secret, however, inherently limits any efforts to evaluate or regulate it. Furthermore, patents have historically played an important societal role in providing smaller companies with intellectual property rights in their innovations to help them compete with larger companies. In the recent past, patents were a "must have" for securing venture capital funding for technology startups. But today's startups, in particular AI-first startups, are generally not even exploring the possibility of developing a patent portfolio, due to the ever-increasing costs and uncertainties of filing for patents and of enforcing them.

Withers: Well, it appears that Sedona Working Group 10 on Patent Litigation and Working Group 12 on Trade Secrets have their work cut out for them, and they will need to coordinate. That will be the role of the new Working Group 13 on AI and the Law, which will bring together all the stakeholders, including those who haven't historically been involved in Sedona: patent prosecutors, inventors, copyright attorneys, writers and artists, and in-house counsel for creative companies and license holders that rely on patent and copyright protection. Let's close on a broader socioeconomic question that may permeate all of this discussion:

What is the proper balance between being "first" in AI and the development of "responsible" AI?

We need your voice here!

Ko: You might not think of that as a legal question, but the law needs to reflect the choices we make as a society. On the one hand, the law may prioritize establishing guardrails against, and maintaining accountability for, harms resulting from the outputs of AI agents, including data privacy and data security breaches; political or pornographic deepfakes; and biological, financial, or nuclear terrorism. On the other hand, the law may prioritize freeing foundation model providers and AI agent developers from excessive liability, so they can advance the field and support the U.S. AI industry in the global competition. So, for instance, should foundation model providers be able to contractually shift all liability downstream, leaving it entirely with the AI agent developer or the company implementing the AI agent? And should the AI agent developer and the company be able to do the same, ultimately again leaving the affected third party downstream to absorb the harm?

Withers: Food for thought, and a great starting point for what will be robust dialogue. Many of these articles will be authored by The Sedona Conference Working Group Series members. They won't, however, necessarily represent "consensus," but rather individual viewpoints meant to be starting points for dialogue.

© 2024 The Sedona Conference


Dear reader, as the ancient Chinese aphorism has it, we are living in interesting times. Please consider joining the inaugural meeting of The Sedona Conference Working Group 13 on AI and the Law on January 16-17, 2025, in Phoenix, Arizona. Membership in The Sedona Conference Working Group Series is required. And if you are interested in lending us your time, energy, and expertise and being considered for a Sedona WG13 commentary drafting team on any of these topics, please comment below or reach out to us at comments@sedonaconference.org.
