One of the top priorities of the Army’s intelligence task force, if not the highest, is figuring out how AI can be applied against the vast amounts of data being collected across various platforms.
“We are drowning in data. I see data as a challenge, but also as an opportunity. That’s essentially what AI is for our military. It’s an opportunity. AI offers more opportunity for progress than any other technology we have seen in decades,” said Lt. Gen. Anthony Hale, the Army’s deputy chief of staff, G2, during a presentation at the annual AUSA conference Wednesday.
To quantify it, the world will have about 180 zettabytes of data by the end of 2025 (a zettabyte is 1,000,000,000,000,000,000,000 bytes), said Andrew Evans, director of the Army’s Intelligence, Surveillance and Reconnaissance Task Force, in an interview at the conference.
The task force, a temporary entity initially set up to act as a cross-cutting facilitator and slated to transition into a permanent directorate within the G2 once the newly created all-domain cross-functional team reaches full operating capability, has direction from the Army’s senior leadership to not just throw people at that problem, but to use AI to take the load off analysts.
“One of our key missions in the intel enterprise around transformation is figuring out how we can leverage artificial intelligence to attack that data in the right, impactful and ethical ways, things that you have to keep in mind as you build the data piece,” Evans said. “AI will be a big focus for us going forward. We could put a million people against that and the data is always going to grow at an astronomical rate, beyond what humans can do.”
There is also a sense of urgency behind this effort. According to Hale, the Army could be fighting tonight in three of the six geographic combatant commands: Indo-Pacific Command, Central Command and European Command.
“This is what’s driving the pace of transformation in our Army. It’s the urgency you hear the Army chief of staff talk about every day,” he said. “We must learn to use AI to organize the world’s information, reduce the need for manpower, make it useful, and position our people for speed and accuracy in delivering information to the commander for decision dominance.”
One area the Army is trying to improve is the Processing, Exploitation and Dissemination (PED) process.
The service wants to reimagine PED, moving away from the counterinsurgency era, when assets could loiter over a target for days, develop patterns of life and transmit data over a permissive network, toward new concepts such as multi-intelligence data fusion and analytics.
“When we talk about multi-INT data fusion and analysis, we’re talking about how we can apply realistic AI-based technologies or machine learning models to real-world operational requirements today,” Col. Brandon VanOrden, chief of intelligence operations in the Office of the Deputy Chief of Staff, G2, said at the conference. “We need help from this group to do things like layer those ML models vertically and then integrate them horizontally into platforms and programs to repurpose them against real, finite resources and the prioritized operational requirements the theaters have assigned to us.”
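To make the layering idea concrete, here is a minimal sketch of what vertically stacked models might look like in code: each stage consumes the previous stage’s output, and a final step integrates horizontally across sensors. The stages are trivial stand-ins written purely for illustration; none of the functions, labels or sensors come from an actual Army program.

```python
# A toy "vertical layering" pipeline: detect -> classify -> fuse.
# Every model here is a trivial stand-in; only the structure is the point.
from dataclasses import dataclass


@dataclass
class Detection:
    sensor: str
    position: tuple[float, float]
    confidence: float
    label: str = "unknown"


def detect(sensor: str, raw: list[tuple[float, float]]) -> list[Detection]:
    """Layer 1: turn raw sensor returns into candidate detections."""
    return [Detection(sensor, pos, confidence=0.6) for pos in raw]


def classify(d: Detection) -> Detection:
    """Layer 2: a stand-in classifier refines each detection's label."""
    d.label = "vehicle" if d.position[0] > 0 else "clutter"
    return d


def fuse(streams: list[list[Detection]]) -> list[Detection]:
    """Layer 3: integrate horizontally across sensor streams, drop clutter."""
    merged = [classify(d) for stream in streams for d in stream]
    return [d for d in merged if d.label != "clutter"]


radar = detect("radar", [(3.2, 1.1), (-2.0, 4.4)])
eo = detect("eo", [(3.1, 1.2)])
for d in fuse([radar, eo]):
    print(d)
```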
VanOrden also explained that the Army wants to be able to have a “conversation” with its data, meaning analysts can ask informed, operationally relevant questions instead of looking at data for data’s sake.
In a hypothetical example, he described an analyst asking a dataset where a unit is, what it is doing and where it might go. The data should also be able to tell the analyst what happens when conditions change and how that might alter what the unit is likely to do.
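As a rough illustration of that kind of exchange, the sketch below answers “where is the unit?” and “where might it go?” from a toy track dataset. Simple heuristics stand in for the ML models a fielded system would actually use, and all unit names, fields and coordinates are invented.

```python
# A minimal sketch of "conversing" with track data. Rule-based answers stand
# in for the language-model layer; all values here are hypothetical.
from dataclasses import dataclass


@dataclass
class TrackPoint:
    unit_id: str
    lat: float
    lon: float
    hour: int          # hours since observation start
    activity: str      # e.g., "refueling", "moving", "static"


def where_is(track: list[TrackPoint]) -> TrackPoint:
    """Answer 'where is the unit?' with its most recent observation."""
    return max(track, key=lambda p: p.hour)


def likely_heading(track: list[TrackPoint]) -> tuple[float, float]:
    """Answer 'where might it go?' by extrapolating the last leg of movement."""
    pts = sorted(track, key=lambda p: p.hour)
    a, b = pts[-2], pts[-1]
    return (b.lat + (b.lat - a.lat), b.lon + (b.lon - a.lon))


track = [
    TrackPoint("unit-7", 48.10, 37.50, 0, "static"),
    TrackPoint("unit-7", 48.15, 37.62, 6, "moving"),
    TrackPoint("unit-7", 48.21, 37.75, 12, "refueling"),
]
latest = where_is(track)
print(f"{latest.unit_id} last seen at ({latest.lat}, {latest.lon}), {latest.activity}")
print("projected next position:", likely_heading(track))
```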
Ultimately, it comes down to providing better context for that human analyst.
One of the biggest hurdles for the Army in applying AI for intelligence purposes is training the algorithms on data that resides in highly classified environments.
“One of the biggest challenges is that you have to train your algorithms on military-grade data. When you’re talking about top secret data, in most cases, some of the stuff you see around AUSA today, the algorithms are trained on commercially available data,” Evans said. “That doesn’t always represent military data. Your algorithm may be trained differently than how it will be used, if you think about it that way. One of the things we have to do as intel professionals [is] help bridge that gap. How do we ensure that an algorithm we might be interested in has been trained on data with military value? We’re trying to figure that out now.”
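One common way to bridge that kind of gap, sketched below purely as an illustration, is transfer learning: start from a model pretrained on commercial data (here, ImageNet) and fine-tune it on domain-representative data. The class count is a placeholder, and random tensors stand in for the classified imagery a real pipeline would draw on.

```python
# A minimal transfer-learning sketch: commercial pretraining, domain fine-tune.
import torch
import torch.nn as nn
from torchvision import models

NUM_DOMAIN_CLASSES = 12  # hypothetical label set for military-relevant objects

# Backbone pretrained on ImageNet (commercial, openly available data).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the classification head so it predicts the domain's own classes.
model.fc = nn.Linear(model.fc.in_features, NUM_DOMAIN_CLASSES)

# Freeze the commercial backbone; only the new head trains at first, a common
# choice when labeled domain data is scarce.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in tensors; a real run would
# iterate over a DataLoader of domain-representative imagery.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_DOMAIN_CLASSES, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tune step loss: {loss.item():.3f}")
```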
Hale noted that the Army needs industry as it looks to data analytics, security, generative AI and large language models to help with the service’s challenges.
While there are plenty of AI providers, Evans said, the question becomes which are the trusted ones that have trained on the right types of datasets, and whether the algorithm will continue to learn as it is used. Algorithms can’t be a one-time proposition, he added, noting that they need to learn as they go.
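Learning as it goes can be as simple as incremental updates. The sketch below shows a classifier updated batch by batch as new labeled data arrives, rather than trained once and frozen, using scikit-learn’s partial_fit interface; the data and the gradual shift in it are invented for illustration.

```python
# A minimal online-learning sketch: the model updates with each new batch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

for batch in range(5):
    # Each batch stands in for newly collected, newly labeled field data,
    # drifting slightly over time.
    X = rng.normal(size=(100, 4)) + batch * 0.1
    y = (X[:, 0] + X[:, 1] > batch * 0.2).astype(int)
    model.partial_fit(X, y, classes=classes)
    print(f"batch {batch}: accuracy on latest data {model.score(X, y):.2f}")
```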
According to Evans, one of the first efforts to open opportunities for industry is a governance process that does not hinder innovation or rapid deployment.
“How do we create governance that is fast and responsive, so that when we integrate AI, it is done in an ethical way, but still keeps pace with technology?” he said. “So how do we give vendors a space where they can come in, take their algorithms and models, test and validate them against military-grade data, and then deploy them and allow users to download them and run them against their own datasets?”
Program Executive Office Intelligence, Electronic Warfare and Sensors is building an AI and machine learning ecosystem to provide a trusted and secure environment where other program managers and Army elements can deploy their models against curated and trusted data. Models there will be trained and verified, with the right security wrappers in place to understand whether there is any kind of drift or anything out of tolerance in those models, said Brig. Gen. Ed Barker, the program executive officer.
“As we build that out and establish that trusted environment, we’re looking at ways to really achieve what we’ve been talking about here on PED,” he said. “The goal is really to do that hard work through AI and ML, and not put that burden on our analysts. Really give that analyst the opportunity to get to the higher-level analysis that we want from them, to provide that right context that only humans can really provide.”
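One minimal form of the drift check Barker alludes to is a statistical comparison between the data a model was validated on and the data it sees in operation. The sketch below uses a two-sample Kolmogorov–Smirnov test on synthetic score distributions; the tolerance threshold is a hypothetical setting, not a program-of-record value.

```python
# A minimal drift check: compare operational inputs against the validation
# baseline and flag when they diverge beyond tolerance.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
validation_scores = rng.normal(loc=0.0, scale=1.0, size=5000)   # baseline
operational_scores = rng.normal(loc=0.4, scale=1.1, size=5000)  # shifted input

stat, p_value = ks_2samp(validation_scores, operational_scores)
DRIFT_P_THRESHOLD = 0.01  # hypothetical out-of-tolerance line

if p_value < DRIFT_P_THRESHOLD:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2e}): re-validate model")
else:
    print(f"within tolerance (KS={stat:.3f}, p={p_value:.2e})")
```

In practice such checks would run continuously inside the kind of “security wrapper” described above, triggering re-validation rather than a console message.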
Barker noted that officials are already working with U.S. Army Pacific, the National Reconnaissance Office, the National Geospatial-Intelligence Agency and the Chief Digital and Artificial Intelligence Office to help solve PED problems in the Pacific region.
Ethical principles
Officials noted that AI will not completely replace military personnel, especially when it comes to targeting. They emphasized that the Army will apply ethical principles to these algorithms and ensure the technology is never responsible for pulling the trigger, but rather supports human decision-making.
“We must also conduct these efforts responsibly. As [Defense] Secretary [Lloyd] Austin says, responsible AI is where advanced technology meets our timeless values,” Hale said.
Evans noted that machines are very good at calculation, while humans exercise discretion and make value-based judgments.
Protecting against AI hallucinations and building trust in these algorithms will be a major challenge and focus for the Army.
“Hallucination, it’s a real thing right now. We’re probably technologically capable … of doing a lot of things quickly, but I know from the way the U.S. wants to fight, through the laws of armed conflict, that we’re pretty far removed at this point from relying on AI and the ability to automate targeting in that respect,” Brig. Gen. Rory Crooks, director of the long-range precision fires cross-functional team, said at the conference. “We have to build trust.”
Offering hypothetical examples and possible use cases of how AI can help analysts, Evans said machines can alert humans to look at a particular problem, to which the soldier can then apply discretion.
“Is this a problem? And if so, how do I want to respond to this problem?” he said. “What we need from humans is to look at what the machine has nominated as a potential hotspot, problem, target, threat, name your condition, and then assign a value to that and say, ‘Yes, it is, and yes, I’m going to take action.’”
An example might occur in the vast expanses of the Pacific Ocean, where an analyst might be tasked with tracking hostile or aggressive ships across 700,000 square miles of satellite imagery. Instead of having people stare at blue squares for hours, an algorithm can identify where the ships are almost instantly.
By setting certain rules for the AI, it can surface the most important areas for human interrogation based on the dataset. If analysts are not satisfied with the result or want more data, they can ask for more nominations.
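A toy version of that workflow might look like the sketch below: every tile of a synthetic “ocean” image is scored, and only tiles above a confidence cutoff are nominated for human review. The brightness heuristic stands in for a trained ship detector, and the threshold is an invented illustration, not an operational setting.

```python
# A minimal ship-nomination sketch: score every tile, queue high scorers
# for a human to adjudicate.
import numpy as np

rng = np.random.default_rng(7)
ocean = rng.normal(0.1, 0.02, size=(512, 512))   # dark open water
ocean[200:204, 300:310] = 0.9                    # bright blob standing in for a ship

TILE = 64
NOMINATION_THRESHOLD = 0.5  # hypothetical confidence cutoff


def score_tile(tile: np.ndarray) -> float:
    """Stand-in detector: peak brightness as a 'ship likelihood' score."""
    return float(tile.max())


nominations = []
for r in range(0, ocean.shape[0], TILE):
    for c in range(0, ocean.shape[1], TILE):
        s = score_tile(ocean[r:r + TILE, c:c + TILE])
        if s >= NOMINATION_THRESHOLD:
            nominations.append((s, r, c))

# Every tile was scored, so the analyst also knows what the machine cleared,
# which bears on the completeness question raised below.
for s, r, c in sorted(nominations, reverse=True):
    print(f"nominate tile at ({r},{c}) with score {s:.2f} for human review")
```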
A harder question going forward, however, is what happens when the AI does not make a nomination.
“[My] personal view is this is where we will discover the deepest learning about AI: not when it makes a nomination, but what happens when it doesn’t. How do you know, if it’s not nominated, whether there is still a threat you should look into?” Evans said. “That’s where we just have to learn as we continue to do this … Trust is about repeatability and then verifying that the information provided was complete. No one likes to talk about that, but a complete machine assessment is an essential part of building trust. If it’s incomplete, you can take action, you can do the right ethical things, but you’re still acting on an incomplete dataset. We need to make sure it’s looking at the whole [of] what you should look at.”