The Pentagon’s nuclear command, control and communications enterprise is decades old and desperately in need of an upgrade, the head of U.S. Strategic Command says, and artificial intelligence could help strengthen NC3 for its no-fail mission.
STRATCOM “is exploring all possible technologies, techniques and methods to assist in the modernization of our NC3 capabilities,” said Air Force Gen. Anthony J. Cotton, who has led the command since December 2022.
“AI will enhance our decision-making capabilities,” Cotton said at the Defense Department’s 2024 Intelligence Information System Conference. “But we should never allow artificial intelligence to make these decisions for us.”
Growing threats, an overwhelming flood of sensor data, and mounting cybersecurity concerns mean the U.S. military must harness AI to stay one step ahead of those seeking to challenge the United States, Cotton said. “Advanced systems can inform us faster and more efficiently,” he said. “But we must always maintain a human in the decision loop to maximize the adoption of these capabilities and maintain our advantage over our adversaries.”
Cotton said AI can give leaders more “decision latitude” to ensure the entire nuclear enterprise remains safe. “Our adversaries must know that our nuclear command and control capabilities and other capabilities that provide decision advantage are ready 24 hours a day, 7 days a week, 365 days a year and cannot be compromised or defeated,” Cotton said.
Cotton’s predecessors at STRATCOM, Adm. Charles Richard and Gen. John E. Hyten, also discussed the modernization of NC3. But at a time when the Air Force is also trying to modernize its strategic bomber force with the B-21 Raider, its ICBM force with the Sentinel missile, and the Navy its ballistic missile submarines with the Columbia class, NC3 is getting little attention – even though none of these systems can be effective without it.
“Despite warnings from top national security officials, major improvements to NC3 have been fragmented,” wrote Peter L. Hays of the Center for Strategic and International Studies and consultant Sarah Mineiro in an Oct. 28 blog post for the Atlantic Council.
Heather Penney, senior resident fellow at AFA’s Mitchell Institute for Aerospace Studies, noted in a recent podcast that NC3 is often taken for granted, “because it is largely invisible… underground cables, computers, communications links and a very small number of specialized aircraft and satellites are the backbone of this mission function,” she said. “But it’s not like we see those things at air shows or on promotional posters.”
Chris Adams, general manager of Northrop Grumman’s Strategic Space Systems Division, said the real challenge with NC3 is that it has so many parts. “It’s not a single system that’s ever been deployed,” he said on the Mitchell Institute podcast. “It’s a system of systems. It actually involves hundreds of individual systems that are modernized and maintained over long periods of time in response to an ever-changing threat.”
Injecting AI into some of those systems offers the opportunity for greater speed and the ability to make sense of the vast amounts of information being pulled in by that system of systems. But there are risks associated with AI, as researchers have noted, including misplaced trust in AI, “poisoned” data being ingested into systems, inaccurate algorithms, and more.
Cotton acknowledged such pitfalls, but sees greater promise overall. “Advanced AI and robust data analytics capabilities provide decision advantage and improve our deterrent posture,” he said. “IT and AI superiority enables more effective integration of conventional and nuclear capabilities, strengthening deterrence.”
AI could be used to automate data collection and accelerate data sharing and integration with allies, he suggested. But he also said: “We must focus research efforts on understanding the risks of cascading effects of AI models, emergent and unexpected behavior, and the indirect integration of AI into nuclear decision-making processes.”
Implementing new NC3 systems – with or without artificial intelligence – will have to be a deliberate process, Adams said.
“We need to consider when, where and how we want to deploy the next generation of systems incrementally and carefully so that we don’t leave any vulnerabilities behind,” Adams said. “A good analogy is grabbing the next ring on the playground before letting go of the last one.”