WASHINGTON — President Joe Biden’s directive to all U.S. national security agencies to embed artificial intelligence technologies into their systems sets ambitious goals in a volatile political environment.
That is the early assessment from technology experts after Biden on Oct. 24 ordered a wide range of agencies to use AI responsibly, even as the technology itself is developing rapidly.
“It’s like trying to put together an airplane while you’re in the middle of it,” said Josh Wallin, a defense program fellow at the Center for a New American Security. “It’s a heavy burden. This is a new area that a lot of agencies are having to look at, that they may not necessarily have paid attention to in the past, but I would also say that it is certainly a critical area.”
Federal agencies will need to quickly hire experts, grant them security clearances and get to work on the tasks Biden is imposing, even as private companies pour money and talent into advancing their AI models, Wallin said.
The memo, which stems from the president’s executive order last year, asks the Pentagon; spy agencies; the Departments of Justice, Homeland Security, Commerce, Energy, and Health and Human Services; and others to leverage AI technologies. The guidance emphasizes the importance of national security systems “while protecting human rights, civil rights, civil liberties, privacy and security in AI-enabled national security activities.”
Federal agencies have deadlines, some as early as 30 days, to complete tasks. Wallin and others said the deadlines will be determined by the pace of technological progress.
The memo asks that by April, the National Institute of Standards and Technology’s AI Safety Institute “continue voluntary preliminary testing of at least two frontier AI models before they are publicly deployed or released to evaluate capabilities that could pose a threat to national security.”
Frontier models refer to large AI models such as ChatGPT that can recognize speech and generate human-like text.
The testing is intended to ensure that the models do not inadvertently enable rogue actors and adversaries to launch offensive cyber operations, “accelerate the development of biological and/or chemical weapons, conduct autonomous malicious behavior,” or automate the development and deployment of other such models.
But the memo also adds an important caveat: the deadline for testing the AI models would be “subject to collaboration with the private sector.”
Meeting that testing deadline is realistic, said John Miller, senior vice president of policy at ITI, a trade group that represents top technology companies including Google, IBM, Intel, Meta and others.
Because the institute “already works with model developers on model testing and evaluation, it is feasible that the companies could complete or at least begin such testing within 180 days,” Miller said in an email. But the memo also asks the AI Safety Institute to provide guidance on model testing within 180 days, and so “it seems reasonable to wonder how exactly these two timelines will sync,” he said.
By February, the National Security Agency will “develop the ability to conduct rapid systematic covert testing of AI models’ ability to detect, generate, and/or exacerbate offensive cyber threats. Such tests will assess the extent to which AI systems, if misused, could accelerate offensive cyber operations,” the memo said.
‘Dangerous’ order
With the presidential election just a week away, the fate of the directive is far from certain.
The Republican Party platform says that, if elected, Donald Trump would revoke Biden’s “dangerous Executive Order,” which it says hinders AI innovation and imposes radical-left ideas on the development of the technology. Instead, the platform says, Republicans support AI development “rooted in free speech and human flourishing.”
Since Biden’s memo is the result of the executive order, it’s likely that if Trump wins, “they would just pull the plug” and go their own way on AI, Daniel Castro, vice president of the Information Technology and Innovation Foundation, said in an interview.
The leadership of the federal departments charged with compliance would also change significantly under Trump: as many as 4,000 positions in the federal government change hands with the arrival of a new administration.
However, people following the issue note a broad bipartisan consensus that the adoption of AI technologies for national security purposes is too critical to be derailed by partisan disputes.
The tasks and deadlines in the memo reflect deep interagency discussions going back several months, said Michael Horowitz, a professor at the University of Pennsylvania who until recently served as deputy assistant secretary of Defense with a portfolio that included military applications of AI and advanced technologies.
“I think the implementation of [the memo] regardless of who wins the election will be absolutely critical,” Horowitz said in an interview.
Wallin noted that the memo highlights the need for U.S. agencies to understand the risks of advanced generative AI models, including risks associated with chemical, biological and nuclear weapons. There is bipartisan agreement on national security threats of that kind, he said in an interview.
Senate Intelligence Chairman Mark Warner, D-Va., said in a statement that he supported Biden’s memo but that the administration “should work with Congress in the coming months to advance a clearer strategy to engage the private sector on national security risks focused on AI systems throughout the supply chain.”
Immigration policy
The memo recognizes the long-term need to attract talented people from around the world to the United States in areas such as semiconductor design, an issue that could become tied to broader questions about immigration. The departments of Defense, State and Homeland Security are instructed to use available legal authorities to bring such talent into the country.
“I think there is broad recognition of the unique importance of STEM talent in ensuring American technology leadership,” Horowitz said. “And AI is no exception.”
The memo also asks the State Department, the U.S. Mission to the United Nations and the U.S. Agency for International Development to develop a strategy within four months to advance international governance standards for the use of AI in national security.
The U.S. has already taken several steps to promote international cooperation on artificial intelligence, for both civilian and military uses, Horowitz said. He cited the example of the U.S.-led political declaration on responsible military use of artificial intelligence and autonomy, which has been endorsed by more than 50 countries.
“It shows how the United States is already leading the way in establishing strong standards for responsible behavior,” Horowitz said.
The push for responsible use of technology should be seen in the context of the broader global debate over whether countries are moving toward authoritarian systems or leaning toward democracy and respect for human rights, Castro said. He noted that China is increasing investment in Africa.
“If we want to get African countries to join the U.S. and Europe on AI policy instead of moving to China,” he said, “what are we actually doing to get them on our side?”
___
©2024 CQ-Roll Call, Inc., all rights reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.