A new national security memorandum from President Biden aims to accelerate the Pentagon and intelligence community’s adoption of emerging artificial intelligence capabilities while addressing security concerns associated with the technology.
The document, released Thursday, includes provisions to accelerate the U.S. government’s use of AI to advance national security missions, including by leveraging fast-moving innovation in the private sector.
During the rollout of the directive, White House National Security Advisor Jake Sullivan told military officials and others at the National Defense University that the United States currently leads the way in “latent” capabilities that could be applied to these types of missions, but America risks squandering its lead if it does not move faster in deploying new tools for its armed forces.
“The core insight we have come to in recent years is that we are leading when it comes to latent capacity; the United States has the best latent AI capacity in the world. How do we translate that into actual application on the battlefield, in our logistics, in our intelligence enterprise?” Sullivan said.
“If you think about the … national security memorandum, it’s basically trying to lay out a roadmap that says: this is how the national security enterprise, the joint force, the intelligence community should work with private sector partners, and here’s how that can happen in a transparent, effective and, yes, legal way, so that we adopt private sector-developed technologies, capabilities and solutions into the force and into our intelligence community in a way that is also shared across the national security enterprise. So the whole design of this second pillar is about answering: how do we take applications that [a company like] IBM has developed for warfare or for logistics or for intelligence analysis, integrate those, and then make sure they are available on a consistent basis across the board, and that we’re not setting up multiple different, competing or inconsistent solutions?” he said.
Sullivan added: “Before this NSM was adopted, some of this work was done in a patchwork manner by enterprising people across the different services, but for the first time we now have a framework to say: here is a demand signal to industry. We want what you have to offer, and we want to integrate it quickly, effectively, comprehensively and in a way that reduces overlaps, gaps and conflicts.”
The Pentagon is pursuing new artificial intelligence tools in hopes of deploying new technologies across its vast enterprise, from back offices to the battlefield.
AI-powered applications will change the way the U.S. military trains and fights, but it’s not easy for government officials to predict exactly what form they will take and how soon they will arrive, Sullivan noted.
“The bottom line is: there are already opportunities and there will be more soon. So we must leverage them quickly and effectively or our competitors will do it first,” said Sullivan, adding that significant technical, organizational and policy changes are needed that facilitate collaboration with the innovators driving the technology’s development.
Agencies are instructed to look for ways to boost collaboration with non-traditional vendors, such as leading AI companies and cloud computing providers.
“In practice, this means quickly deploying the most advanced systems into our national security enterprise soon after they are developed, as many in the private sector are doing. We need to rapidly adopt these systems as they iterate and improve, which we see happening every few months,” Sullivan said.
The new memo highlights the need for more coordinated and effective acquisition and procurement systems across national security agencies, including a strengthened capacity to assess, define and formulate AI-related requirements, and greater accessibility for companies in this sector that do not have significant prior experience working with Uncle Sam.
The guidance directs the Department of Defense and the Office of the Director of National Intelligence, in coordination with the White House Office of Management and Budget and other agencies, to establish a working group within 30 days to address issues related to the acquisition of artificial intelligence technologies by DOD and the intelligence community, and to develop new recommendations for acquiring them for use in national security systems.
Within 210 days of the memo’s issuance, the working group is tasked with providing written recommendations to the Federal Acquisition Regulatory Council for changes to existing regulations.
“DOD and ODNI will continually seek to work with diverse U.S. private sector stakeholders – including AI technology and defense companies and members of the U.S. investment community – to identify and better understand emerging capabilities that would benefit or otherwise affect the United States’ national security mission,” the memo said.
However, officials also note that there are numerous risks associated with adopting artificial intelligence tools for national security missions.
The Pentagon previously laid out a plan for implementing “responsible AI” and updated its autonomous weapons policy, both of which are intended to provide safeguards to ensure that artificial intelligence systems do not go off the rails once they are developed and deployed.
The new White House memo outlines a number of concerns regarding the deployment of AI technology in national security systems, including risks to physical safety; privacy; discrimination and bias; inappropriate use; lack of transparency and accountability; data spills; poor performance; and deliberate manipulation and misuse.
Operators may not fully understand the capabilities and limitations of AI tools, even in wartime scenarios. That could hinder their ability to exercise the appropriate level of human judgment. Additionally, inadequate training programs and guidance could result in an overreliance on these types of systems, including so-called “automation bias,” the memo said.
There are also concerns that, without proper safeguards, use of the technology by U.S. national security agencies could ultimately benefit adversaries.
“AI systems can reveal aspects of their training data – unintentionally or through deliberate manipulation by malicious actors – and data leakage can result from AI systems trained on classified or controlled information when used on networks where such information is not permitted,” the memo said. Furthermore, “foreign state competitors and malicious actors may deliberately undermine the accuracy and effectiveness of AI systems, or attempt to extract sensitive information from such systems.”
Within 180 days of the memorandum’s issuance, the heads of the Department of Defense, ODNI and other relevant agencies are tasked with updating their components’ guidance on AI governance and risk management for national security systems, which will then be reviewed annually and updated as necessary.
A “Framework to Advance AI Governance and Risk Management in National Security” must be approved by the NSC Deputies Committee and reviewed “periodically” to determine if changes are needed to address the risks identified in the memo.