OpenAI Builds First Chip With Broadcom and TSMC, Scales Back Foundry Ambition (reuters.com)
OpenAI is partnering with Broadcom and TSMC to design its first in-house AI chip while supplementing its infrastructure with AMD chips, aiming to diversify its chip supply and reduce its reliance on Nvidia GPUs. "The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design effort," adds Reuters. From the report: OpenAI has been working for months with Broadcom to build its first AI chip, focused on inference, according to sources. Demand right now is greater for training chips, but analysts have predicted that demand for inference chips could surpass it as more AI applications are deployed. Broadcom helps companies including Alphabet unit Google fine-tune chip designs for manufacturing, and also supplies parts of the design that help move information on and off the chips quickly. This is important in AI systems where tens of thousands of chips are strung together to work in tandem. OpenAI is still determining whether to develop or acquire other elements for its chip design, and may engage additional partners, said two of the sources.
The company has assembled a chip team of about 20 people, led by top engineers who have previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho. Sources said that through Broadcom, OpenAI has secured manufacturing capacity with Taiwan Semiconductor Manufacturing Company to make its first custom-designed chip in 2026, though they said the timeline could change. Currently, Nvidia's GPUs hold over 80% market share, but shortages and rising costs have led major customers like Microsoft, Meta and now OpenAI to explore in-house or external alternatives.
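(Editor's aside, not from the report: the training-versus-inference split the summary leans on comes down to how much work each pass does. Training runs a forward pass, a backward pass and a weight update; inference runs the forward pass alone, which is why silicon built only for serving models can be narrower in scope. The PyTorch-style sketch below is purely illustrative and assumes nothing about OpenAI's actual design.)

import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)                      # stand-in for a real network
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
x, target = torch.randn(32, 1024), torch.randn(32, 1024)

# Training step: forward pass, backward pass and optimizer update.
# This is the workload that "training chips" are built to accelerate.
loss = nn.functional.mse_loss(model(x), target)
opt.zero_grad()
loss.backward()
opt.step()

# Inference step: forward pass only, no gradients or optimizer state.
# This is the narrower workload an inference-focused chip has to serve.
with torch.no_grad():
    prediction = model(x)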
Another vapour ware they (Score:2)
Wut? (Score:2)
Re: (Score:2)
If they didn't want bleeding edge nodes, they could have licensed something from Samsung (for example). But even then they'd be competing for the equipment and talent to build and operate their own fab. It's probably for the best that they stuck with TSMC.
Translation (Score:2)
The company has dropped the ambitious foundry plans for now due to the costs and time needed to build a network, and plans instead to focus on in-house chip design effort,
Translation: OpenAI has fucked around and now they have found out.
Silicon is an expensive medium to revise, and getting it right the first time takes a different approach than a software project does.