Less than one year after OpenAI quietly signaled its intention to do business with the Pentagon, a procurement document obtained by The Intercept shows that U.S. Africa Command, or AFRICOM, believes access to OpenAI’s technology is “essential” to its mission.
The September 30 document lays out AFRICOM’s rationale for purchasing cloud computing services directly from Microsoft as part of the $9 billion Joint Warfighting Cloud Capability contract, rather than seeking another provider on the open market. “The USAFRICOM operates in a dynamic and evolving environment where IT plays a critical role in achieving mission objectives,” the document reads, including “its essential mission in support of our African mission partners. [and] USAFRICOM Joint Exercises.”
The document, labeled Controlled Unclassified Information and marked FEDCON, indicating it is not intended for distribution outside of government employees and contractors, shows that AFRICOM’s request was approved by the Defense Information Systems Agency. While the price of the purchase is redacted, the approval document states its value is less than $15 million.
Like the rest of the Defense Department, AFRICOM, which oversees the Pentagon’s operations across Africa, including local military cooperation with U.S. allies there, has shown increasing interest in cloud computing. The Department of Defense already purchases cloud computing access from Microsoft through the Joint Warfighting Cloud Capability project. The new document reflects AFRICOM’s desire to bypass red tape and immediately purchase Microsoft Azure cloud services, including OpenAI software, without considering other vendors. AFRICOM states that “the ability to support advanced AI/ML workloads is critical. This includes services for search, natural language processing, [machine learning] and unified analytics for data processing.” And according to AFRICOM, Microsoft, whose Azure cloud platform includes a suite of tools from OpenAI, is the only cloud provider that can meet its needs.
Microsoft began selling OpenAI’s GPT-4 large language model to defense customers in June 2023. Following the revelation earlier this year that OpenAI had changed its position on military work, the company announced a cybersecurity partnership with DARPA in January and said its tools would be used for an unspecified veteran suicide prevention initiative. In April, Microsoft pitched the Pentagon on using DALL-E, OpenAI’s image generation tool, for command and control software. But the AFRICOM document marks the first confirmed purchase of OpenAI products by a U.S. combatant command whose mission is to kill.
OpenAI’s corporate mission remains “to ensure that artificial general intelligence benefits all of humanity.”
The document states that “OpenAI tools” are among the “unique features” offered by Microsoft “that are essential to ensure that the cloud services provided align with USAFRICOM’s mission and operational needs. … Without access to Microsoft’s integrated suite of AI tools and services, USAFRICOM would face significant challenges in analyzing and gaining actionable insights from massive amounts of data. … This could lead to delays in decision-making, compromised situational awareness and reduced agility in responding to dynamic and evolving threats on the African continent.” Defense and intelligence agencies around the world have shown great interest in using large language models to sift through large amounts of intelligence, or to quickly transcribe and analyze audio data from interrogations.
Microsoft invested $10 billion in OpenAI last year and now exerts considerable influence over the company, in addition to reselling its technology. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.
An OpenAI spokesperson told The Intercept that “OpenAI has no partnership with US Africa Command” and referred questions to Microsoft. Microsoft did not immediately respond to a request for comment; neither did an AFRICOM spokesperson.
“It is extremely alarming that they are explicit in using OpenAI tools for ‘unified analytics for data processing’ to align with USAFRICOM mission objectives,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, who has previously conducted safety evaluations for OpenAI. “Especially as they state that they believe these tools will improve efficiency, accuracy and scalability, when in fact these tools have been shown to be highly inaccurate and to consistently fabricate outputs. These claims demonstrate a concerning lack of awareness among those purchasing these technologies of the high risks these tools pose in mission-critical environments.”
Since OpenAI quietly dropped the part of its terms of service that prohibited military work in January, the company has steadily ingratiated itself with the U.S. national security establishment, which is eager to integrate impressive but often inaccurate tools like ChatGPT. In June, OpenAI added Paul Nakasone, the Trump-appointed former head of the National Security Agency, to its board. The current head of the company’s national security partnerships is Katrina Mulligan, a Pentagon alumna who previously worked in Special Operations and Irregular Warfare, according to her LinkedIn profile.
On Thursday, following a White House directive ordering the Pentagon to accelerate the adoption of tools like OpenAI’s, the company published a blog post outlining its “approach to AI and national security.” According to the post, the “values that guide our work in national security” include “democratic values,” “human rights” and “accountability,” which it explains as follows: “We believe that all AI applications, especially those involving government and national security must be subject to oversight, clear usage guidelines and ethical standards.” OpenAI’s language is a clear reflection of the White House order, which banned security and intelligence agencies from using artificial intelligence in ways that are “inconsistent with democratic values,” The Washington Post reported.
While the AFRICOM document provides few details on exactly how it might use OpenAI tools, the command’s regular implication in African coups, civilian killings, torture, and covert warfare appears incompatible with OpenAI’s professed national security framework. Last year, AFRICOM chief Gen. Michael Langley told the House Armed Services Committee that his command shares “core values” with Col. Mamady Doumbouya, an AFRICOM trainee who overthrew Guinea’s government and declared himself its leader in 2021.
Although U.S. military activity in Africa receives relatively little attention compared to that of U.S. Central Command, which oversees U.S. forces in the Middle East, AFRICOM’s presence is significant and often controversial. Despite claims of a “light footprint” on the continent, The Intercept in 2020 published a previously classified AFRICOM map showing “a network of 29 U.S. military bases stretching from one side of Africa to the other.” Much of AFRICOM’s work since its founding in 2007 has consisted of training and advising African forces, conducting secretive Special Operations missions and operating drone bases to counter militant groups in the Sahel, Chad Basin and Horn of Africa, all in the name of providing security and stability for the continent. The results have been dismal. Across Africa, the State Department counted a total of just nine terrorist attacks in 2002 and 2003, the first years of U.S. counterterrorism assistance on the continent. According to the Africa Center for Strategic Studies, a Pentagon research institute, the annual number of attacks by militant Islamist groups in Africa now tops 6,700, an increase of 74,344 percent.
As violence has increased, at least 15 officers who benefited from U.S. security assistance have been involved in 12 coups in West Africa and the greater Sahel during the war on terror, including in Niger last year. (At least five leaders of the July 2023 coup received U.S. assistance, according to a U.S. official.) U.S. allies have also been implicated in a range of alleged human rights abuses. In 2017, The Intercept reported that a Cameroonian military base used by AFRICOM for drone flights had also been used to torture prisoners.
Managing data has long been a challenge for AFRICOM. After The Intercept tallied the number of U.S.-trained coup leaders on the continent, for example, the command admitted that it did not know how many coups its trainees had attempted, nor did it even keep a list of how often such takeovers occurred. “AFRICOM does not maintain a database of this information,” spokesperson Kelly Cahalan told The Intercept last year.
AFRICOM’s mismanagement of information has also been deadly. Following a drone strike in Somalia in 2018, AFRICOM announced that it had killed “five terrorists” and destroyed one vehicle, and that “no civilians were killed in this airstrike.” A secret U.S. military investigation, obtained by The Intercept through the Freedom of Information Act, showed that despite months of “target development,” the attack on a pickup truck killed at least three and possibly five civilians, including Luul Dahir Mohamed and her 4-year-old daughter, Mariam Shilow Muse.