
Exclusive: OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambition


  • OpenAI develops AI inference chip and scraps foundry network plans
  • Broadcom is helping OpenAI with chip design and securing TSMC for production
  • OpenAI diversifies its chip offering by adding AI chips from AMD
Oct 29 (Reuters) – OpenAI is working with Broadcom (AVGO.O) and TSMC (2330.TW) to build its first in-house chip designed to power its artificial intelligence systems, while adding AMD (AMD.O) chips alongside Nvidia (NVDA.O) chips to meet surging infrastructure demand, sources told Reuters.

OpenAI, the fast-growing company behind ChatGPT, has been exploring a range of options to diversify its chip supply and reduce costs. The company considered building everything in-house and raising capital for an expensive plan to construct a network of chip factories known as “foundries”.

The company has abandoned those ambitious foundry plans for now, owing to the cost and time required to build such a network, and will instead focus on in-house chip design efforts, said sources, who requested anonymity because they were not authorized to discuss private business matters.

The company’s strategy, detailed here for the first time, highlights how the Silicon Valley startup is using industry partnerships and a mix of internal and external approaches to secure chip supply and control costs, much as larger rivals Amazon, Meta, Google and Microsoft do. As one of the largest buyers of chips, OpenAI’s decision to source from a wide range of chipmakers while developing its custom chip could have broader implications for the technology sector.

Shares of Broadcom rose after the report, ending Tuesday’s trading more than 4.5% higher. AMD shares also extended their gains from the morning session, ending the day up 3.7%.

OpenAI, AMD and TSMC declined to comment. Broadcom did not immediately respond to a request for comment.

OpenAI, which has helped commercialize generative AI that produces human-like answers to questions, relies on significant computing power to train and run its systems. As one of the largest buyers of Nvidia’s graphics processing units (GPUs), OpenAI uses AI chips both to train models, in which the AI learns from data, and for inference, in which the model makes predictions or decisions based on new information.

Reuters previously reported on OpenAI’s chip design efforts. The Information reported on discussions with Broadcom and others.

According to sources, OpenAI has been working with Broadcom for months to build its first AI chip focused on inference. Demand for training chips is currently higher, but analysts predict that the need for inference chips could surpass it as more AI applications are deployed.

Broadcom helps companies, including Alphabet (GOOGL.O) unit Google, fine-tune chip designs for manufacturing, and also supplies parts of the design that move information on and off the chips quickly. This is important in AI systems, where tens of thousands of chips are strung together to work in concert.

OpenAI is still deciding whether to develop or acquire other elements for its chip design, and may bring in additional partners, two sources said.

The company has assembled a chip team of about 20 people, led by top engineers who previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho.

Sources said OpenAI has secured manufacturing capacity at Taiwan Semiconductor Manufacturing Company through Broadcom to make its first custom-designed chip in 2026. They said the timeline could change.

Currently, Nvidia GPUs have a market share of over 80%. But shortages and rising costs have led major customers like Microsoft, Meta and now OpenAI to explore internal or external alternatives.

OpenAI’s planned use of AMD chips via Microsoft’s Azure, first reported here, shows how AMD is trying to capture a slice of the Nvidia-dominated market with its new MI300X chips. AMD projects $4.5 billion in AI chip sales for 2024, following the chip’s launch in the fourth quarter of 2023.

Training AI models and operating services like ChatGPT is expensive. According to sources, OpenAI expects a loss of $5 billion this year on revenue of $3.7 billion. Computing costs, or expenses for the hardware, electricity and cloud services needed to process large data sets and develop models, are the company’s largest expense, prompting efforts to optimize usage and diversify suppliers.

OpenAI has been cautious about poaching talent from Nvidia as it wants to maintain a good relationship with the chipmaker it continues to work with, especially for access to next-generation Blackwell chips, sources said.

Nvidia declined to comment.


Reporting by Krystal Hu in New York, Fanny Potkin in Singapore, Stephen Nellis in San Francisco, additional reporting by Anna Tong and Max Cherney in San Francisco; Editing by Kenneth Li and David Gregorio

Our Standards: Thomson Reuters Trust Principles.


Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, technology investing and AI. She has previously covered mergers and acquisitions for Reuters, breaking stories about Trump’s SPAC and Elon Musk’s Twitter funding. Before that, she reported on Amazon for Yahoo Finance, and her investigation into the company’s retail practices was cited by lawmakers in Congress. Krystal began her career in journalism writing about technology and politics in China. She has a master’s degree from New York University and loves a scoop of matcha ice cream as much as she enjoys a scoop at work.


