The White House has moved to restrict foreign access to U.S. artificial intelligence breakthroughs, setting new boundaries for the global AI market and placing fresh limits on American tech companies' international deals.
The directive, which instructs federal agencies to protect private-sector AI innovations as strategic assets akin to military technology, could force U.S. tech giants to obtain government approval before sharing those developments with foreign partners or selling to foreign customers. This could affect the global business of companies like OpenAI, Anthropic and Microsoft, according to industry analysts and executives familiar with the matter.
“AI is really becoming a national security issue, and there will be two sides to that,” Kristof Horompoly, vice president of AI risk management at ValidMind, told PYMNTS. “One is that the US, and other countries as well, want to maintain innovation in the country and ensure that the brightest minds in AI come and develop AI in the country. The other is to protect against the export of that technology.”
Export restrictions on advanced AI are likely to tighten, especially for non-allied countries, but domestic opportunities could expand as governments increase their support and resources to keep AI development within U.S. borders, Horompoly said.
The new restrictions on AI exports could change the way U.S. tech companies sell their most advanced systems abroad, with companies like OpenAI and Microsoft potentially requiring government approval for deals with foreign customers. Industry analysts estimate that billions in international AI revenues could be affected, as companies may have to create separate versions of their technology for domestic and foreign markets or limit certain export opportunities.
Memo intended to keep AI secrets
President Biden on Thursday (Oct. 24) issued the first National Security Memorandum on Artificial Intelligence, directing federal agencies to protect U.S. AI advances as strategic assets while promoting their safe development for national security. The memo establishes the AI Safety Institute as the government’s primary point of contact for the industry and prioritizes intelligence gathering on foreign efforts to steal U.S. AI technology.
The White House directive outlines three core objectives: maintaining U.S. leadership in the safe development of artificial intelligence, leveraging AI for national security while protecting democratic values, and building international consensus on AI governance. It follows recent U.S.-led efforts, including a G7 Code of Conduct on AI and agreements with more than 50 countries on military AI use.
“It will become more difficult for AI companies to sell their technology abroad, especially some of the sensitive AI and some of the advanced AI,” Horompoly said. “I am sure this will become increasingly limited. I see a future where AI is treated in the same category as weapons today. I think that’s where we’re going. AI can already be used as a very powerful weapon, and this will become even more so in the form of disinformation campaigns, deepfakes and even infiltrating organizations.”
While the AI NSM may limit U.S. artificial intelligence companies on many fronts, it simultaneously presents opportunities in the form of government contracts, other federal funding initiatives and the accelerated creation of an AI testing industry, Anthony Miyazaki, professor of marketing at Florida International University, told PYMNTS. He said U.S. AI companies would also have more opportunities to recruit tech-savvy workers globally, thanks to specific language in the memo addressing the immigration of AI-trained talent.
“The need to test AI systems for potential national security threats could prove to be the biggest delay in innovation timelines,” he said. “The fastest innovations are typically generated through open beta testing with built-in feedback mechanisms among different user groups. This enables rapid improvements in a continuous manner. Repeatedly halting beta testing feedback loops for potentially months at a time for government reviews could drastically limit AI growth for US developers. Meanwhile, developers in other countries can benefit from less restrictive government requirements.”