
OpenAI is expected to introduce its own chip in the second half of next year, raising expectations of mid- to long-term benefits for Samsung Electronics' HBM business.

[Formalization of Collaboration with Broadcom]
Combining with ARM-based CPUs
Building a 10GW-class data center within four years

[Seoul Economy] OpenAI has formalized the development of its own artificial intelligence (AI) accelerator with Broadcom. By pairing the accelerator with ARM-based central processing units (CPUs), the company aims to establish a 10GW-scale data center and lay the foundation for an independent AI chipset. Samsung Electronics, which supplies high-bandwidth memory (HBM) to Broadcom, is also expected to benefit in the medium to long term.

On the 13th (local time), OpenAI announced its collaboration with Broadcom to develop custom AI accelerators totaling 10GW of capacity. It also laid out a schedule for introducing its own chipsets to data centers from the second half of 2026 through 2029. In addition to the chipsets, Broadcom plans to supply the network infrastructure needed to build the data centers. According to The Information, the AI chipsets developed by OpenAI and Broadcom will use ARM-based CPUs. ARM's largest shareholder is SoftBank, which is also a primary investor in OpenAI. OpenAI CEO Sam Altman emphasized that "the development of our own accelerators with Broadcom is a critical step in maximizing AI potential and building the necessary infrastructure."

Rumors that OpenAI was working with Broadcom to develop its own AI chipset have been circulating for 18 months. Broadcom has previously demonstrated its design and development capabilities through its work on Google's proprietary AI chip, the Tensor Processing Unit (TPU). OpenAI aims to reduce costs by developing its own chipsets; Google, for its part, has cut infrastructure costs by adopting the TPU early in its cloud services. Altman and Broadcom CEO Hock Tan appeared together on a podcast, emphasizing that "by optimizing the entire infrastructure, we can achieve tremendous efficiency, leading to better performance, faster models, and cheaper models." The move is also intended to broaden OpenAI's options beyond NVIDIA and AMD graphics processing units (GPUs). Tan's remark, "If you make your own chips, you can determine your own destiny," underscores this intent.

Following news of the collaboration, Broadcom's stock price surged 9.88% on the New York Stock Exchange. The domestic semiconductor industry is also expected to benefit significantly. Samsung Electronics, which already supplies HBM3E to Broadcom, is widely expected to provide HBM4 for the "ChatGPT-dedicated chipset." A recent meeting between Samsung Electronics Chairman Lee Jae-yong and OpenAI CEO Altman, at which the two sides agreed on memory supply for the "Stargate" data center project, has further raised those expectations. Concerns remain, however, about OpenAI's aggressive infrastructure investments: the AI infrastructure spending announced by OpenAI, Broadcom, Oracle, AMD, CoreWeave, and others has already surpassed 1 trillion dollars.

Meanwhile, American software company Oracle announced on the 14th that it has signed a contract with Advanced Micro Devices (AMD) for 50,000 AI chips. AMD will supply its next-generation AI chip, the MI450, unveiled earlier this year, to Oracle's data centers starting in the third quarter of next year. The deal underscores how AI software developers are racing to secure the latest AI chips amid rapidly growing demand for computing infrastructure.
