
OpenAI disbands another safety team, chief advisor resigns


OpenAI is disbanding its “AGI Readiness” team, which advised the company on its own capacity to handle increasingly powerful artificial intelligence and on the world’s readiness to manage that technology, the team’s head said.

On Wednesday, AGI Readiness senior advisor Miles Brundage announced his departure from the company via a Substack post. He wrote that his main reasons were that the opportunity cost had become too high, that he believed his research would have more impact externally, that he wanted to be less biased, and that he had accomplished what he set out to do at OpenAI.

Artificial general intelligence, or AGI, is a branch of AI that pursues technology that equals or exceeds human intellect on a wide range of tasks. AGI is a hotly debated topic. Some leaders say we are close to achieving this, while others say it is not possible at all.

In his post, Brundage also wrote: “Neither OpenAI nor any other frontier lab is ready for it, and the world isn’t ready for it either.”

Brundage said he plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy. “AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it happen,” he said.

Former AGI Readiness team members will be transferred to other teams, according to Brundage’s post.

“We fully support Miles’ decision to continue his policy research outside of industry and are deeply grateful for his contributions,” an OpenAI spokesperson told CNBC. “His plan to go all-in on independent AI policy research gives him the opportunity to have an impact on a larger scale, and we are excited to learn from his work and track its impact. We are confident that in his new role Miles will continue to raise the bar for the quality of policymaking in industry and government.”

In May, OpenAI disbanded its Superalignment team – which the company said was focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us” to prevent them from “going rogue” – just a year after announcing the group, a person familiar with the situation confirmed to CNBC at the time.

The news of the AGI Readiness team’s dissolution follows the OpenAI board’s potential plans to restructure the company into a for-profit business, and comes after three executives – CTO Mira Murati, research chief Bob McGrew and research VP Barret Zoph – announced their departures on the same day in September.

In early October, OpenAI closed its buzzy funding round at a valuation of $157 billion, including the $6.6 billion the company raised from a wide selection of investment firms and Big Tech companies. It also received a $4 billion revolving line of credit, bringing its total liquidity to more than $10 billion. The company expects about $5 billion in losses on $3.7 billion in revenue this year, a person familiar with the situation confirmed to CNBC in September.

In September, OpenAI announced that its Safety and Security Committee, which the company introduced in May amid controversy over its safety processes, would become an independent board oversight committee. It recently completed its 90-day review of OpenAI’s processes and safeguards and subsequently made recommendations to the board, also publishing the findings in a public blog post.

The news of executive departures and board changes also follows a summer of mounting safety concerns and controversies surrounding OpenAI, which along with Google, Microsoft, Meta and other companies is at the helm of a generative AI arms race – a market expected to reach $1 trillion in revenue within a decade – as companies in seemingly every industry rush to adopt AI-powered chatbots and agents to avoid being left behind by competitors.

In July, OpenAI reassigned Aleksander Madry, one of its top safety executives, to a job focused on AI reasoning, people familiar with the situation confirmed to CNBC at the time.

Madry was the head of OpenAI’s Preparedness team, which was “tasked with tracking, evaluating, forecasting and helping protect against catastrophic risks related to frontier AI models,” according to a bio for Madry on the website of a Princeton University AI initiative. Madry will still work on core AI safety work in his new role, OpenAI told CNBC at the time.

The decision to reassign Madry came around the same time that Democratic senators sent a letter to OpenAI CEO Sam Altman regarding “questions about how OpenAI addresses emerging safety concerns.”

The letter, seen by CNBC, also said: “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”

Microsoft gave up its observer seat on OpenAI’s board in July, writing in a letter seen by CNBC that it can now step aside because it is satisfied with the makeup of the startup’s board, which had been revamped since the uprising that led to the brief ouster of CEO Sam Altman and threatened Microsoft’s massive investment in the company.

In June, a group of current and former OpenAI employees published an open letter expressing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe that tailor-made corporate governance structures are sufficient to change this,” the staff wrote at the time.

Days after the letter was published, a person familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Justice Department would open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.

FTC Chair Lina Khan has described her agency’s action as a “market investigation into the investments and partnerships being formed between AI developers and major cloud service providers.”

The current and former employees wrote in the June letter that AI companies have “substantial non-public information” about what their technology can do, the extent of the safeguards they have put in place and the levels of risk the technology poses for different kinds of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

OpenAI’s Superalignment team, which was announced last year and disbanded in May, had focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would dedicate 20% of its computing power to the initiative over four years.

The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departure from the startup in May.

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote in a post on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Altman said on X at the time that he was sad to see Leike go and that OpenAI had more work to do. Shortly thereafter, co-founder Greg Brockman posted a statement on X.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote in his May post on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote at the time. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”


