Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, Oct. 30, 2023
I. Introduction
While more aspirational than specific, the Oct. 2023 Executive Order on AI was a step in the right direction for addressing both the promise and the risks of AI. But the Executive Order punts on, or omits entirely, some key issues, as depicted in the featured image above.
A. Seizing the promise v. managing the risks of AI
The Executive Order focuses almost exclusively on only one side of the coin: managing the various risks of AI. This makes sense on some levels. But aspirational statements such as “Artificial Intelligence must be safe and secure,” or directives such as the one to the DOJ Civil Rights Division to “prevent and address discrimination in the use of automated systems, including algorithmic discrimination,”1 are simply not fully attainable. Nor are they the real end game in the first place.
The simple reality is that the other side of the coin, competing for economic supremacy, is more important to the U.S. government and to most governments around the world. The only question is by how much.
Let’s:
- just add the following qualifier to all aspirational statements throughout the Order: “…but without unreasonably or unnecessarily impeding the ability of U.S. businesses to compete in the global market for AI,” and
- have discussions evaluating specifics presented on both sides of the scale.
This would help guard against the development of unrealistic proposals, and ultimately against the adoption of toothless regulations designed more for the appearance of safety than for anything else.
B. Punted IP issues to the US Patent and Trademark Office and the US Copyright Office
President Biden’s Executive Order identifies some key intellectual property issues but provides no hint of its position on any of them. Instead, it delegates the initial analysis of each as follows:
The Executive Order requires the US Patent and Trademark Office to:
- by March 2024, publish guidance to patent examiners and applicants addressing inventorship issues and the use of AI.
- by August 2024, publish guidance to patent examiners and applicants to address other considerations at the intersection of AI and IP, which could include updated guidance on patent eligibility to address innovation in AI and critical and emerging technologies.2
The Executive Order requires the US Copyright Office to:
- by August 2024, publish a study and issue recommendations to the President on potential executive actions relating to copyright and AI, including:
- the scope of protection for works produced using AI and
- the treatment of copyrighted works in AI training.3
The courts, however, will be the ultimate arbiters of these issues, regardless of whatever guidance the USPTO and USCO ultimately issue,4 subject to any additional legislation Congress might pass.
President Biden’s punting on these IP-specific issues is generally appropriate, as they are at least in part outside of the executive branch’s authority anyway.
The fact, however, that “creators” (whose protection is, broadly speaking, within the executive branch’s authority) merit no more than one mention in the entire Order speaks volumes.5 The rise of AI poses an existential crisis for creators, but the Order does not even acknowledge this.
C. Deepfakes? Let me get my microscope and telescope…
The term “deepfake” does not appear anywhere in the AI Executive Order. You have to dig deep to find references to the concept of deepfakes in Sec. 4.5 (Reducing the Risks Posed by Synthetic Content).6
1. What the Executive Order says on deepfakes
Sec. 4.5 directs the Secretary of Commerce to consult with other agencies and submit a report identifying existing and potential methods for:
- “authenticating content and tracking its provenance”, “labeling synthetic content, such as using watermarking,” and “detecting synthetic content”7
- “preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual)”8
The second bullet point is the sum total of the discussion concerning pornographic deepfakes.9
The discussion of political ad deepfakes begins with the first bullet point. Building on it, the Order directs the Office of Management and Budget to consult with other agencies and “issue guidance to agencies for labeling and authenticating such content that they produce or publish.” This is “for the purpose of strengthening public confidence in the integrity of official United States Government digital content.”10
Will digital watermarking ever instill confidence comparable to physical watermarking?
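The Order never specifies what “labeling and authenticating” digital content would look like in practice. Purely as an illustration (none of this comes from the Order), here is a minimal Python sketch of one possible approach: a publisher attaches a provenance record containing a content hash and a keyed authentication tag, and anyone holding the key can later verify that the content is unaltered and the label is genuine. The secret key, the record fields, and the `label_content`/`verify_content` helpers are all hypothetical; real-world provenance schemes (e.g., C2PA-style signed manifests) rely on public-key signatures rather than a shared secret.

```python
# Illustrative sketch only: a hash-plus-HMAC "provenance record" for published content.
# Hypothetical names and a shared secret key; real schemes use public-key signatures.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-held secret key (hypothetical)"

def label_content(content: bytes, generator: str) -> dict:
    """Build a provenance record for a piece of (possibly AI-generated) content."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "generator": generator, "auth_tag": tag}

def verify_content(content: bytes, record: dict) -> bool:
    """Check that the content matches its record and the record was made with our key."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["auth_tag"])

if __name__ == "__main__":
    content = b"Official statement text or image bytes"
    record = label_content(content, generator="human-authored")
    print(json.dumps(record, indent=2))
    print("verified:", verify_content(content, record))           # True
    print("tampered:", verify_content(content + b"!", record))    # False
```

Note that a record like this only establishes that labeled content has not been altered since it was labeled; it does nothing to detect synthetic content that was never labeled in the first place, which is why the Order treats detection as a separate problem.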
2. What the Executive Order ducks on deepfakes
The AI Executive Order addresses the issues only at the highly granular level of developing technological tools to detect deepfakes and at the highly aspirational level of saying we should prevent them and strengthen public confidence in the real stuff.
The Order leaves any mention of the individual or societal impacts of deepfakes, and of any actions to prevent them (including imposing criminal or civil liability on those who generate or distribute deepfakes), entirely for another day.
Some federal agencies have started dipping their toes into political campaign deepfakes, including the National Security Agency11 and the Federal Election Commission.12
None have stepped into the breach regarding pornographic deepfakes to date.
© 2024 Ko IP & AI Law PLLC
II. Coming Attractions
The Executive Order does discuss the following important AI issues more meaningfully:
- algorithmic discrimination
- developing privacy-enhancing technologies
- protecting against the misuse of AI for cyber, bio, nuclear attacks, etc.
I plan to focus my next two blog articles on the first two topics. After that, I will provide a substantive discussion of political ad deepfakes and the related topic of watermarking.
Come back on Monday, 1/22 for my next blog article:
Bias in, Bias Out: The Algorithmic Discrimination Challenge
- Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Oct. 30, 2023, at Sec. 2(a) and 7.1(a)(ii).
- Id. at Sec. 5.2(c)(i)-(ii).
- Id. at Sec. 5.2(c)(iii).
- cxLoyalty, Inc. v. Maritz Holdings, Inc., 986 F.3d 1367, 1375, n. 1 (Fed. Cir. 2021) (noting that USPTO guidance on patent eligibility “is not, itself, the law of patent eligibility, does not carry the force of law, and is not binding on [the court’s] patent eligibility analysis,” and to the extent the guidance “contradicts or does not fully accord with our caselaw, it is our caselaw, and the Supreme Court precedent it is based upon, that must control.”).
- “This effort requires investments in…while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators.” Executive Order at Sec. 2(b). And that’s all she wrote for “creators”….
- There’s one other reference to the concept of deepfakes in Sec. 10 (Advancing Federal Government Use of AI) of the AI Executive Order. This just repeats and applies the same concept, specifically in the context of the federal government’s internal operations.
- AI Executive Order at Sec. 4.5(a)(i)-(iii).
- Id. at Sec. 4.5(a)(iv).
- For a substantive discussion of pornographic deepfakes, see my earlier blog article Pornography deepfakes: My body and identity, my choice, available here.
- I will provide a substantive discussion of political ad deepfakes in a forthcoming blog article next month.
- See NSA, U.S. Federal Agencies Advise on Deepfake Threats, National Security Agency/Central Security Service, Sept. 12, 2023, available here.
- See Ali Swenson, FEC moves toward potentially regulating AI deepfakes in campaign ads, AP News, Aug. 10, 2023, available here.