[updated Nov. 2024 – I’m going to have to take a mulligan on this one….]
Trade secrets for AI will be compromised by regulations in the U.S., in the areas most important to society. The only question is to what degree.
President Biden’s Oct. 2023 Executive Order on AI called the U.S. Patent and Trademark Office to action re AI patents and called the U.S. Copyright Office to action re AI copyrights. Who did the Order call to action re AI trade secrets? No one. The Order makes no mention of trade secrets.
I. AI and Trade Secrets
A. Trade secrets are supposed to be secret….
The radio silence regarding trade secrets does make some sense. Trade secrets are the odd ducks of the big 4 in IP (patents, copyrights, trademarks, and trade secrets). There is no “U.S. Trade Secret Office” or any other federal or state registration office or procedure for trade secrets.
Trade secrets are fundamentally different. The quid pro quo for patents is that the federal government grants you exclusive rights for a finite number of years in exchange for the public disclosure of your technology. Such public dissemination encourages innovation long term by incentivizing R&D, teaching the public how to carry out your claimed invention, and facilitating a licensing marketplace for inventors. Copyright registration also provides value to the public, in part also by facilitating a licensing marketplace.
There’s no such exchange for trade secrets, which have until recently been exclusively creatures of state and common law.1 The policy grounds for trade secrets are completely different. The government recognizes trade secret rights to deter business espionage, allowing businesses to keep their business secrets and the economic value that they derive from them.
Having said this, there is another reason the Executive Order does not mention trade secrets: our government is appropriating AI trade secrets for regulatory purposes. And thus weakening them.
B. AI provider technology appears tailor made for trade secret protections
1. Trade secrets can be better than patents, when….
Deciding whether to protect an innovation by applying for a patent or by keeping it as a trade secret is the gating decision for any technology company’s IP protection strategy. In the U.S., you only get a year to get a patent application on file after you first disclose your inventive concept to the public, typically with an offer for sale. As such, the clock starts ticking on this decision very early for any startup, on what is presumably its core technology.
The key factor is the likelihood that a competitor will develop the same technology. How vulnerable are your products to reverse engineering? How likely is it that a competitor might develop it independently?
If you are confident that your developmental moat is strong, then trade secrets are probably the way to go. You don’t need to navigate the tortuous patenting process and system. And you can maintain a trade secret forever, unlike a patent which has a term of 20 years. All you need to do is issue a comprehensive Confidential Information and Trade Secret Protection Policy (easy to do) and implement it (not so easy, but doable).
2. AI providers use SaaS, which work well with trade secret protections
The standard way that AI providers provide their services to AI implementers is as software-as-a-service (SaaS).2 3 E.g., OpenAI operates and offers its ChatGPT language-model-based chatbot over the cloud, to be built upon by AI agent providers and incorporated into their own “AIaaS” offering, or implemented by businesses for incorporation into their own products or services.
The incentives for AI providers to keep innovations as a trade secret are stronger than for other industries. Like for all SaaS providers, the reverse engineering calculation is in their favor, because:
- Trade secrets work well for protecting all software. Software source code can’t readily be reverse engineered from the object code that is provided in a traditional software license.
- SaaS is even better. In a SaaS subscription, the software owner does not provide any code, neither source nor object, to the subscriber. The AI implementer only has access to the output of the AI, making reverse engineering effectively impossible.
- You can copyright your AI software and still maintain key portions as a trade secret. When you register your software source code for copyright protection, you have to “publish” it by submitting a deposit copy to the Copyright Office. But the Office has a procedure for you to redact the portions on which you claim trade secret protections in your copyright application. So you can have your copyright cake protections and eat your trade secret protections too!4
The above “featured image” is my first foray into using generative AI to create an image. It is Google Duet AI’s output from the prompt: “Defending Artificial Intelligence.”
So, so many questions…. Why the typo? What source material did Dewey draw from? And will I get sued by this dude…?
II. U.S. Regulation of AI
A. The U.S. will closely regulate AI (at least for us …)
President Biden issued his Executive Order on AI last month
AI will become one of the most heavily regulated technologies in history. And deservedly so. The breadth, depth, and speed of the societal impact of AI will be massive. The obvious recent point of comparison is with the rise of the internet in the 90s. Google’s CEO Sundar Pichai famously went so far as to predict in 2021 that AI’s impact will surpass the Internet’s. And also that of electricity and fire.
But while heavy regulation of AI in Europe and in China is a given, the U.S. is taking its customary decentralized approach. Under the Biden Executive Order, each existing agency is to report on and manage AI’s impacts.
Unlike other countries, there is no explicit talk of our federal government imposing any registration requirement for AI.5 Under the EU’s AI Act, a new EU office will be created to monitor enforcement, with penalties including fines of up to 6% of total worldwide revenue for larger companies and of up to 3% for SMEs, including startups. Not so in the U.S., where thus far we have only the Biden Order, which is heavy on aspirational statements regarding regulating AI. The Order is silent on enforcement, implicitly leaving it to existing mechanisms.
B. How can we regulate something we can’t fully understand?
The forthcoming regulations impacting AI providers and implementers, in the U.S. and around the world, will center on three issues:
- data privacy / security,
- equal protection / “algorithmic discrimination,” and
- trustworthiness and fraudulent / negligent misrepresentation.6
All three issues are fundamentally ones of technical accuracy, and require the definition of a standard against which an AI platform’s performance can be measured:
- To protect against each AI amassing personally identifiable information (PII) as part of its inexorable collection of data, AI providers will be required in some form to identify and either anonymize or discard such data, or better yet, somehow not collect it in the first place.
- Protecting against algorithmic discrimination is an entirely different animal. The reality is that there are demographic differences, and an AI’s output, if accurate, will reflect those differences. Having said this, there will be some sampling discrepancies (e.g., poor people will likely have less data associated with them than wealthier people), and AI should be quality controlled to account for such differences. But is it possible to do this without falling into the affirmative action trap that the Supreme Court ruled was in violation of the 14th Amendment earlier this year?
- “AI hallucinations” just sorta happen. Generative AI models generate inaccurate information and present it as fact, frequently and without explanation.
Even their programmers do not understand how or why AI hallucinations happen or the evolving workings of their AI. So how can standards be developed to monitor them?7
President Biden’s Executive Order delegates this challenge to NIST.
C. The National Institute of Standards and Technology (NIST) will play a key role in regulating AI
NIST has come a long way…
The National Institute of Standards and Technology (NIST) will become the most influential federal agency that no one has ever heard of.
The Executive Order directs NIST “to create guidance and benchmarks for evaluating and auditing capabilities” and to help “ensure the availability of testing environments.”8
In January 2023, NIST issued its AI Risk Management Framework. The Executive Order further directs NIST to publish, in the coming year, a companion resource focused on generative AI.9 10
III. Trade secrets for AI will be compromised by regulations
The Executive Order mandates the Secretary of Commerce to require companies “developing or demonstrating an intent to develop potential dual-use foundation models” to regularly provide reports to the government.11 The reports are to include:
- “the results of any developed dual-use foundation model’s performance12 in relevant AI red-team testing based on guidance developed by NIST,” and
- “a description of any associated measures the company has taken to meet safety objectives.”
This reporting process itself will inevitably entail the disclosure of the AI business’s trade secrets. Any follow-up investigation by the appropriate government agency into an AI business’s report that falls short of the forthcoming NIST standards will entail even more disclosure.
Effective regulation is impossible without a level of transparency that is antithetical to the concept of a trade secret. Cybertheft is an ever-present threat. Furthermore, individuals move back and forth between government service and the private sector all the time. Even presuming confidentiality terms are in place, the thing with trade secrets and good ideas in general is that once you learn them, you don’t forget them. And you will inevitably use them when needed.
The “responsible AI” movement will further compound this issue. The public’s trust in AI and big tech is not high. And as AI displaces large numbers of people from the workforce, the demands for transparency, including the further public disclosure of any underlying trade secrets, will only increase.
So for a host of reasons, trade secrets for AI will be compromised to some degree.
IV. Patents are compatible with the disclosure risks of AI regulations
All things being equal, patents are the strongest form of IP protection for technology, bar none. They are the only game in town for excluding others from using not just your product but your inventive concept, even if they subsequently develop it independently.
The U.S. patent system is, however, not for the faint of heart, particularly in the AI space. To navigate it successfully, you really have to understand the misaligned incentives built into the system, as I will illustrate in this blog in the coming weeks.
Many AI businesses, in particular startups, have rationally leaned toward trade secrets to date. The heightened risk of disclosure associated with the regulatory minefield to come, however, may well tip the balance toward seeking patent protection instead. You just never know how, when, or to what degree your trade secrets will be exposed to the public, rendering them less valuable if not worthless.
Patent applicants already have to disclose their technology to the public as part of the patenting process. As such, patents are entirely compatible with the calls for transparency in AI, whether by regulators or the public.
V. Conclusion
In sum, the scope of protections for trade secrets for AI will be compromised by the regulatory regime to come. This strengthens the business case for seeking patent protections from the start of your AI business.
The elephant in the room remains: How will laws, regulations, regulatory bodies, and the courts allocate liability between AI providers and AI implementers? This blog will examine every aspect of this issue in the weeks and months to come.
Most immediately, I will compare and contrast the terms of use and privacy policies for the major AI providers today and put them into context next week.
[Nov. 2024 update: Well, a lot has happened this past year, and almost all of it has strengthened the case for AI providers (both the behemoth LLM providers and, even more so, the smaller AI agent providers) to rely on trade secret protections and forgo any potential patent protections. I will post a full article updating these issues in the coming weeks.]
© 2023 Ko IP & AI Law PLLC
I’m taking one for the team this week….
To protect your IP and your AI business as a whole, you need to:
- stay up-to-date with developments in AI law and policy, and
- regularly update your AI business’s policies and contract negotiation strategy accordingly.
I will publish an article every Monday at 1 PM Pacific. Subscribe to this blog below so you do not miss a single article.
Come back next Monday for my next blog article:
Do AI providers really care about “responsible AI”? Their terms of use and privacy policies speak louder than their words.
- The federal government joined the party in 2016 with the passage of the Defend Trade Secrets Act. This made trade secret misappropriation enforceable in the federal courts for the first time. ↩︎
- For definitions of “AI providers” (including “LLM providers” and “AI-Agent providers”) and “AI implementers” (including those incorporating AI into one’s products and services and those implementing AI into one’s internal business processes) as used here and throughout this blog, see 11/13/23 blog article (“…and without implementing AI successfully, you will be replaced”). ↩︎
- This article focuses primarily on AI providers. The trade secrets v. patents analysis for AI chip manufacturers is different. ↩︎
- Practice note: If you decide to copyright your AI software and intend to maintain parts as a trade secret, please make sure you use this avenue. Based on a New York federal court’s recent ruling on this issue in Capricorn Mgmt Sys. v. GEICO, you use it or lose it. ↩︎
- But see Sect. III below discussing the required reporting. ↩︎
- There is also a fourth major category: AI issues implicating national security, e.g., protecting against the use of AI for nuclear, chemical, biological, or cyberwarfare. But national security issues fall outside the scope of this article. ↩︎
- The current Director of the Office of Science and Technology Policy (OSTP), President Biden’s Science Advisor Arati Prabhakar, addressed during an interview the challenge of making AI models explainable with the following insights:
Most of the risks we deal with as human beings come from things that are not explainable. As an example, I take a medicine every single day. While I can’t actually predict exactly how it’s going to interact with the cells in my body, we have found ways to make pharmaceuticals safe enough. Think about drugs before we had clinical trials. You could hawk some powder or syrup and it might make you better or it might kill you. But when we had clinical trials and a process in place, we started having the technical means to know enough to start harnessing the value of pharmaceuticals. This is the journey we have to be on now for artificial intelligence. We’re not going to have perfect measures, but I think we can get to the point where we know enough about the safety and effectiveness of these systems to really use them and to get the value that they can offer. ↩︎
- See President Biden’s Executive Order on AI at § 4.1(a)(i)(C) & 4.1(a)(ii)(B). ↩︎
- See id. at § 4.1(a)(i)(A). ↩︎
- NIST’s potential influence on our broader political issues is even greater. The combination of AI and social media has broken the marketplace of ideas model so fundamental to a healthy democracy. NIST will be the focal point of efforts to fix it and prevent groups from gaming the system by using generative AI to overwhelm it with “news” generated out of thin air.
And to boot, NIST also has an important part to play in addressing the rising threat of cyber crimes. It is responsible for developing the cybersecurity standards for the U.S. federal government. ↩︎
- See Executive Order at § 4.2(a)(i). Only AI models of a certain size are subject to this reporting requirement. See Executive Order at § 4.2(b). I currently have no understanding of how inclusive or exclusive the current requirements are or how easily AI providers could game the definitions to evade this reporting requirement. I will investigate and report on this later. ↩︎
- The term “dual-use foundation model” is defined as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety.” See Executive Order at § 3(k). ↩︎