The 2024 presidential election will certainly have far-reaching consequences in many areas – and artificial intelligence is no exception.
EY’s latest technology pulse poll, published in October, found that 74% of 503 technology leaders expect the election to impact AI regulation and global competitiveness. Although technology leaders said they plan to significantly increase AI investments in the coming year, the future growth of AI could depend on the outcome of the election.
Respondents believe that the outcome of the election will mainly impact regulations on cybersecurity/data protection, AI and machine learning, and monitoring of user data and content.
“These are all, of course, closely linked to innovation, growth and global competitiveness,” James Brundage, technology sector leader at EY Global & Americas, told TechRepublic. “The U.S. is the world leader in technology innovation, so future technology policy must strike a balance that supports American innovation while putting in place guardrails where they are needed,” such as in data privacy, children’s online safety and national security.
SEE: Year-round IT budget template (TechRepublic Premium)
Greater investments in AI
According to the research, technology companies will continue to make significant investments in AI regardless of the outcome of the presidential election. However, the outcome could impact the direction of fiscal, tariff, antitrust and regulatory policies, as well as interest rates, mergers and acquisitions, IPOs and AI regulations, the study said.
“We were surprised that trade/tariffs were not higher on the minds of these executives,” Brundage noted.
After a sluggish tech market in 2024, he said that “the trajectory for 2025 is bullish as companies focus on raising capital to invest in growth and emerging technologies such as AI.”
The majority of technology leaders (82%) say their company plans to increase AI investments by 50% or more in the coming year. Over the next year, AI investments will focus on key areas including AI-specific talent (60%), cybersecurity (49%) and back-office functions (45%).
With innovation in mind, most technology leaders surveyed also plan to dedicate resources to AI investments over the next six to 12 months. Seventy-eight percent of technology leaders say their company is considering divesting non-core businesses or assets as part of their growth strategy during that period.
Large organizations are struggling with AI initiatives
Brundage also finds it surprising that 63% of technology leaders report that their organization’s AI initiatives have successfully moved to the implementation phase.
“That number seems high, but several factors could explain it,” he noted. “First, companies may be focusing on AI projects that are low-hanging fruit with short-term payoffs, which are easier to implement and have higher success rates but may not be the maximum-impact opportunities.”
Additionally, using “quick-buy solutions like ChatGPT or Copilot, which are relatively easy to implement and boost productivity, could increase this percentage.” Furthermore, successful implementation “will likely mean moving from proof of concept (POC) to deployment,” Brundage said, adding that “real challenges such as data quality, scaling, governance and infrastructure still lie ahead.”
Additionally, size matters: the report finds that organizations with more employees have less success in moving AI initiatives to the implementation phase.
Data quality issues (40%) and talent/skills shortages (34%) are the most common reasons why AI initiatives fail to reach the next stage, according to those who reported that less than half of their AI initiatives have been successfully implemented.
How the election’s impact on AI could be felt
Regardless of who takes office in 2025, current regulatory and enforcement trends regarding AI could continue, as the Federal Trade Commission and the Department of Justice have been, and may remain, very active, according to Brundage. Given that “some legislative proposals are bipartisan… we expect them to make progress in 2025 or 2026,” such as those addressing children’s online safety.
But he pointed out that state legislatures and attorneys general also influence policy, “so it’s a nuanced playing field. We expect these changes will be measured in years, not months.”
Technology leaders need to recognize that the U.S. is operating in a new geopolitical environment compared with five to 10 years ago, Brundage said.
“New government industrial policies in the U.S. and around the world are driving business action, both in the technology sector and in the industries and supply chains it relies on. Global tech companies in particular are at the forefront of geopolitics as countries seek to de-risk from one another.”
AI capabilities have also become highly competitive and geopolitically important around the world, he said. “There is a dual race here in the US and elsewhere to innovate and regulate. We see the need for business models that take into account the different regulatory approaches, such as sovereign border models.”
Wanted: The search for AI tech talent is intensifying
As organizations continue to integrate more AI functionality into their businesses, the need to hire AI-specific talent will increase, as will the need to restructure or reduce workforces in legacy roles, the research shows.
Eighty percent of technology leadership respondents expect a reduction or restructuring of the workforce from older roles to other in-demand roles, and 77% expect an increase in hiring of AI-specific talent, according to the survey. Additionally, 40% of technology leaders said human capital efforts, such as training, will be the focus of their company’s AI investments next year.
The impact of AI on national security and foreign policy
Meanwhile, the Biden administration on Thursday released the first-ever AI-focused National Security Memorandum (NSM) to ensure the US continues to lead the development and deployment of AI technologies. The memorandum also prioritizes how the country adopts and uses AI, while maintaining privacy, human rights, civil rights and civil liberties, so that the technology can be trusted.
The NSM also calls for creating a governance and risk management framework for how agencies implement AI and requiring them to monitor, assess and mitigate AI risks associated with these issues.