On October 24, 2024, President Biden issued the first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI), fulfilling a directive (subsection 4.8) of the Administration’s Executive Order on AI and outlining how the federal government plans to approach AI national security policy. The NSM also contains a classified appendix, which addresses sensitive national security issues. The release of the NSM follows other recent national security-focused AI actions from the Biden administration, including the Commerce Department’s proposed rule to establish mandatory reporting requirements for developers of high-performance AI models (see our legal update on the proposal) and the interim final rule imposing new export controls on, among other things, advanced semiconductor manufacturing equipment (see our legal update on the final rule).
The development of the NSM is based on the fundamental premise that “advances in AI will have significant implications for national security and foreign policy in the near future.”1 With that in mind, the NSM directs several actions that the federal government must take to: (1) ensure that the United States leads the global development of safe, secure, and trustworthy AI; (2) leverage cutting-edge AI technologies to advance the national security mission of the United States; and (3) promote international consensus and governance around AI. While the NSM focuses on actions to be taken by the federal government, it promises to have significant implications for private sector entities as they develop and deploy powerful AI models.
In this Legal Update we summarize the most important provisions and guidelines of the NSM.
Summary of the National Security Memorandum
The NSM provides three main objectives and associated guidelines regarding AI and national security.
1. Lead the global development of safe, secure and trustworthy AI: To maintain and expand U.S. leadership in AI development, the NSM identifies key policy actions, including: promoting progress and competition in AI development; protecting industry, civil society, academia and related infrastructure from threats from foreign intelligence services; and developing technical and policy tools to address the potential security, safety and reliability risks of AI. Important guidelines in this area include:
- The Department of State (DOS), the Department of Defense (DOD), and the Department of Homeland Security (DHS) will use all available legal authorities to attract, and facilitate the entry of, foreign individuals with relevant technical expertise who would enhance U.S. competitiveness in AI and related fields.
- Various agencies – including the Department of Commerce (DOC), DOD, and the Department of Energy (DOE) – will coordinate their efforts, plans, investments, and policies to facilitate and support the development of advanced AI semiconductors, AI-dedicated computing infrastructure, and other AI-enabling infrastructure (e.g., clean energy generation, energy transmission, and fiber-optic data links).
- The Office of the Director of National Intelligence (ODNI), in coordination with other agencies, will identify critical nodes in the AI supply chain and develop a list of ways in which those nodes could be disrupted or compromised by foreign actors. These agencies will then take steps to mitigate such risks.
- The Committee on Foreign Investment in the United States (CFIUS) “will consider, as appropriate, whether a covered transaction would involve access by foreign actors to proprietary information about AI training techniques, algorithmic improvements, hardware advances, critical technical artifacts (CTAs) or other proprietary insights that shed light on how to create and effectively use powerful AI systems.”
- DOC, acting through the AI Safety Institute (AISI) and the National Institute of Standards and Technology (NIST), will serve as the federal government’s primary point of contact with private sector AI developers to facilitate voluntary testing of dual-use foundation models. DOC will establish the capacity to lead these tests and will issue guidance and benchmarks for AI developers on how to test, evaluate, and manage the risks arising from these models. AISI will submit a report to the President summarizing the findings of its voluntary tests and will share the results with the developers of such models.
- The National Security Agency (NSA) “will develop the ability to conduct rapid systematic covert testing of AI models’ ability to detect, generate, and/or aggravate offensive cyber threats[,]” and DOE will do the same regarding “nuclear and radiological risks.”
- DOE, DHS, and AISI will work together to develop a roadmap for assessing AI models’ ability to generate or exacerbate intentional chemical and biological threats. DOE will develop a pilot program to establish the capacity to conduct classified testing in this area, and other agencies will support efforts to use AI to improve biosafety and biosecurity.
- DOD, DHS, the Federal Bureau of Investigation and NSA “will issue unclassified guidance on known AI cybersecurity vulnerabilities and threats; best practices for avoiding, detecting, and mitigating such issues during model training and deployment; and the integration of AI into other software systems.”
2. Use AI responsibly to achieve national security objectives: To further integrate AI into U.S. national security functions, the NSM identifies key policy actions, including adapting partnerships, policies, and infrastructure to enable effective and responsible use of AI; and developing robust policies for AI governance and risk management. Important guidelines in this area include:
- DOD and ODNI will establish a working group to address issues related to the acquisition of AI by DOD and Intelligence Community (IC) elements. The working group will make recommendations to the Federal Acquisition Regulatory Council (FARC) regarding changes to existing regulations and guidelines, to accelerate and simplify the AI procurement process.
- DOD and ODNI will work with private sector stakeholders, including AI technology and defense companies, to identify and understand emerging AI capabilities.
- Agency heads will monitor, assess, and mitigate risks directly related to their agency’s development and use of AI, including risks related to physical security, privacy, discrimination and bias, transparency, accountability, and performance.
- Heads of agencies using AI as part of a national security system (NSS) will issue or update guidance on AI governance and risk management for NSS.
3. Promote a stable, responsible, and globally beneficial international AI governance landscape: U.S. international engagement on AI “will support and facilitate improvements to the safety, security, and reliability of AI systems worldwide; promote democratic values, including respect for human rights, civil rights, civil liberties, privacy, and security; prevent the misuse of AI in the context of national security; and promote equitable access to the benefits of AI.” To this end:
- The Department of State, in coordination with other agencies, will “develop a strategy to promote international AI governance standards consistent with safe, secure, and trustworthy AI and democratic values, including human rights, civil rights, civil liberties, and privacy.”
Conclusion
The scope of the NSM is not limited to the implementation of AI in the national security context. It also treats a comprehensive AI supply chain (which includes not only semiconductors and computing equipment but also energy and power generation) and the commercial use of AI as essential to U.S. national security. With that framework in mind, the NSM has significant implications not only for AI developers and defense contractors but also for other sectors such as energy and infrastructure. Additionally, the NSM makes clear that federal national security policy for AI in the coming years will likely involve a wide range of issues, including topics as diverse as immigration, foreign investment, federal research, public-private partnerships, government contracts, and supply chain security.
1 See the White House fact sheet on the NSM.