The Sedona Conference, one of the nation's leading nonpartisan think-tanks on issues of law and technology, will be launching its Working Group 13 on Artificial Intelligence and the Law in January, building on its long-standing reputation in the areas of eDiscovery, digital records management, patent litigation, trade secrets, cybersecurity, data privacy, and cross-border data transfers. Sedona's Executive Director Ken Withers sat down with Sedona veteran and former Sr. Program Attorney Jim Ko to talk about this new initiative.
Withers: Jim, I feel like I've been here before. 40 years ago, I was installing the first Westlaw terminals in my law school's library. 30 years ago, I was learning HTML in library school and developing my first web site. 20 years ago, I was in Sedona's Working Group 1 helping develop the first set of principles governing eDiscovery. Ten years ago, I was sitting at a Sedona conference in London discussing data protection regulation with European judges and lawyers. This year, Sedona has already hosted two conferences on AI and the Law, and several of our Sedona Working Groups have leapt ahead and started developing analyses and best practice guidance for courts and practitioners addressing AI. I like to think we're on the cutting edge, but we always seem to be trying to catch up.
Ko: I hear you, Ken. The law always lags behind technological change. Novel issues brought about by novel technologies have a way of slipping through the cracks of the laws as written, in ways inconsistent with the principles underlying them. How can we in the legal community help the law keep pace with the rapid and sweeping societal changes initiated by the rise of AI and mitigate AI's excesses? How can we regulate the use of AI and encourage its safe and responsible usage while also mitigating the risks of government overreach or excessive litigation? Our federal and state legislative and judicial branches should be the primary sources of legal authority on all issues. But to the degree that getting ahead of the rapid developments of AI is important, the deliberative nature of both effectively precludes that. And the existing patchwork of guidelines and regulations from administrative agencies is spotty, a problem compounded by the Supreme Court's Loper Bright Enterprises v. Raimondo decision earlier this year, which calls into question the ability of regulatory agencies to expand their authority to meaningfully address AI.
The WG1 Sedona Principles (Jan. 2004) that started it all in eDiscovery, now in their third edition.
Withers: This is the situation we faced in the early 2000s with eDiscovery, only AI is far more consequential. Judges, lawyers, and concerned citizens were searching for clarity and direction on complex issues, and there were no easy answers. That is when The Sedona Conference formed Working Group 1 and applied a nonpartisan, dialogue-based, consensus-building approach to develop The Sedona Principles on eDiscovery. Representation from all key stakeholders was required. Good-faith willingness on the part of those stakeholders to "check their hats at the door" and work toward collective solutions was required. And where we reached consensus (which was not everywhere, but substantial), the Principles were swiftly and almost universally adopted by the courts and leading practitioners, finding their way into more than 200 court decisions and even into the procedural rules in the U.S. and Canada. It was, as we say, "dialogue, not debate" that brought clarity.
Ko: And later we helped bring that same spirit into the world of patent litigation best practices, in the wake of the America Invents Act.
Withers: That took longer, but has been a rather stunning success, thanks in no small part to your skill in keeping a number of strong legal egos, with often conflicting client interests, focused on the big picture. But now you're in private practice, focusing on AI and Intellectual Property. What are some of the issues that you plan to address in your blog?
Kenneth J. Withers
The Sedona Conference
Jim W. Ko
Ko IP & AI Law PLLC
Ko: The first set of issues is probably what keeps most lawyers up at night, and that is potential liability associated with the use of AI by their clients, and how they can advise them. When should a company be liable for alleged harm resulting from the outputs of any AI agent that it implements in either its products and services or its operations, such as human resources? What measures should companies take to mitigate these potential liabilities? For instance, what level of human oversight before, during, or after AI implementation should reduce company liability and either push it upstream to the provider of the foundation model or simply leave the affected third party downstream to absorb the harm? Would the availability of scientific validation of AI tools move the analysis towards a more traditional product liability theory? On the other hand, can we agree on benchmarks for validation when AI tools are, by their very nature, always evolving with new data modifying underlying algorithms? Should this push the analysis more toward a principal/agent theory of liability, leaving companies generally vicariously liable for the outputs of their AI agents, but providing a safe harbor in some yet-to-be-defined circumstance when their AI agents act outside the scope of their intended implementation?
Withers: At last month's Sedona Working Group 1 Annual Meeting in Phoenix, there was some discussion about collaborating again with the U.S. Department of Commerce to revive that program, given the incredible advances in the technology. Luckily, between Working Group 1 in the U.S. and Working Group 7 in Canada, we have some of the leading data scientists in the legal community already on board.
AI exponentially increases data privacy and cybersecurity risks. Sedona WG11 has been dedicated to mitigating them since 2014.
Ko: In the old days, which aren't that old, there was a level of anonymity provided by the sheer volume of unrelated data dispersed across the internet. But with AI foundational models being trained on the entire internet and more, a previously worthless bit of personally identifiable information may well be combined with other missing pieces to create a complete profile of an individual or group, unlocking tremendous value for targeted commercial marketing, political advertising, or identity theft. The affected individuals whose data is being trafficked often have little to no actual or practical legal recourse under applicable federal or state law.
Similarly, while data security vulnerabilities have existed since the advent of networked computers and the internet for all companies, the widespread availability of GenAI has dramatically reduced the cost of hacking, as GenAI tools can be trained to autonomously and tirelessly replicate the steps taken by a human hacker to breach any given network and then to exploit that breach. And any implementation of publicly facing GenAI by an organization, for instance, a customer service chatbot, can itself be a target of new types of cyberattack designed to manipulate the behavior of such AI systems.
Withers: Ouch. This is definitely in the wheelhouse of members of Sedona Working Group 11, which has developed widely accepted guidelines for "reasonable security" analysis to address the standard for data incident liability, and which must now update those standards as the tools available to bad actors become more sophisticated. And there is a significant cross-border aspect to this that Sedona Working Group 6 will be looking at, as both the data stores available to the bad actors, and the bad actors themselves, have little or no respect for international borders, and every nation develops independent regulations and enforcement mechanisms that global enterprises need to navigate. But your first love is intellectual property law, so tell us about the issues we need to address in that area.
AI turns IP law upside-down. Sedona WG9/10 and 12 will help keep the interface of AI and patent and trade secret law on principled grounds as The Sedona Conference has done for well over a decade.
Ko: The issues raised by GenAI with respect to data privacy and data security may require some stretching of the existing law to address. GenAI, however, flips a foundational pillar of patent law and copyright law upside-down. The scope of both patent and copyright protections has been significantly weakened with the advent of GenAI. The only question is to what degree. No longer can it be assumed that the sole inventive or creative force for an invention or work of authorship is one or more human beings. Any use of GenAI as part of an inventive or creative process gives rise to a question that previously did not exist: What degree of human contribution is now required to confer patent or copyright rights? The U.S. Patent and Trademark Office and the U.S. Copyright Office have weighed in with publications stating their respective positions on these questions and providing some guidance to applicants on how to meet these requirements. But fundamental questions exist as to whether either Office adopted the appropriate standards for these issues, whether they have the authority to establish any standards to begin with, whether their representatives have the requisite time, information, or ability to analyze these issues in a valid and reliable manner during examination, and what information disclosures are required of applicants with respect to their AI use.
Withers: You recently co-authored an article on this issue with Judge Paul Michel, former Chief Judge of the Federal Circuit Court of Appeals, which we published in The Sedona Conference Journal. What are some of the specific questions raised by GenAI that the IP community needs to address?
Ko: There are so many. On the patent side, whether inventions directed at GenAI should continue to be presumptively ineligible for patentability under 35 U.S.C. § 101 and Supreme Court caselaw, because they are inherently "abstract ideas/mental processes" that can be done "within the human mind" (thus putting a thumb on the scales against all GenAI patent applications)? Whether the level of knowledge and skill of that hypothetical "Person Having Ordinary Skill In The Art" (PHOSITA) for 35 U.S.C. § 103 obviousness rejection purposes should expand commensurate with the recent widespread availability of GenAI, thus potentially rendering all future inventions, both AI-related and non-AI-related, invalid for obviousness? Whether GenAI inputs or outputs constitute public disclosures invalidating patent and trade secret rights under the current law, and if so, should the law be amended to provide some form of safe harbor?
Withers:
Ko: On the copyright side, whether GenAI-assisted software code is not protectable by copyright under the current law, and if so, should the law be amended to provide some form of IP protection for GenAI-assisted software coding? Whether the unauthorized use of works of authorship for training GenAI models is permissible under the current law, and if so, should the law be amended to provide some form of protection for the creators?
Withers: Well, I see your point about upending established IP law, especially in regard to patents. GenAI raises fundamental questions as to whether a patent-based or a trade-secret-based IP protection system for inventions is better for society.
Ko: That's a good question. The quid pro quo for patent protection is that a patent applicant must disclose its technology to the public. This public disclosure facilitates both broader technological development and also any required regulation of the technology. The ability to protect GenAI as a trade secret, however, inherently limits any efforts to evaluate or regulate it. Furthermore, patents have historically played an important societal role in providing smaller companies with intellectual property rights in their innovations to help compete with larger companies. In the recent past, patents were a "must have" for securing venture capital funding for technology startups. But today's startups, in particular AI-first startups, are generally not even exploring the possibility of developing a patent portfolio, due to the ever-increasing costs and uncertainties of filing for patents and of enforcing them.
Sedona WG13 will help move AI law forward in a reasoned and just way. Come join the dialogue!
Withers: Well, it appears that Sedona Working Group 10 on Patent Litigation and Working Group 12 on Trade Secrets have their work cut out for them, and they will need to coordinate. That will be the role of the new Working Group 13 on AI and the Law, which will bring together all the stakeholders, including those who haven't historically been involved in Sedona: patent prosecutors, inventors, copyright attorneys, writers and artists, and in-house counsel for creative companies and license holders that rely on patent and copyright protection. Let's close on a broader socioeconomic question that may permeate all of this discussion:
Ko: You might not think of that as a legal question, but the law needs to reflect the choices we make as a society. On the one hand, the law may prioritize establishing guardrails against, and maintaining accountability for, harms resulting from the outputs of AI agents, including data privacy and data security breaches; political or pornographic deepfakes; and biological, financial, or nuclear terrorism. On the other hand, the law may free foundation model providers and AI agent developers by protecting them from excessive liability, so they can advance the field and support the U.S. AI industry in the global competition. So, for instance, should foundation model providers be able to contractually indemnify themselves against any liability, leaving it entirely with the AI agent developer or the company implementing the AI agent? And should the AI agent developer and the company be able to do the same, ultimately again leaving the affected third party downstream to absorb the harm?
Withers: Food for thought, and a great starting point for what will be robust dialogue. Many of these articles will be authored by The Sedona Conference Working Group Series members. They won't, however, necessarily represent "consensus," but rather individual viewpoints meant to be starting points for dialogue.
© 2024 The Sedona Conference
Dear reader, as the ancient Chinese aphorism has it, we are living in interesting times. Please consider joining the inaugural meeting of The Sedona Conference Working Group 13 on AI and the Law on January 16-17, 2025, in Phoenix, Arizona. Membership in The Sedona Conference Working Group Series is required. And if you are interested in lending us your time, energy, and expertise and being considered for a Sedona WG13 commentary drafting team on any of these topics, please comment below or reach out to us at comments@sedonaconference.org.