
Meta Expands $2.3B Broadcom AI Chip Deal Through 2029


Meta Platforms and Broadcom have extended their multi-year partnership to co-develop custom AI accelerator chips through 2029. The agreement outlines a roadmap spanning several silicon generations and commits more than one gigawatt of computing capacity, roughly enough to power 750,000 U.S. homes, which underscores the scale of the initiative.
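As a back-of-envelope check on the comparison above, dividing one gigawatt across 750,000 homes implies an average draw per home (the per-home figure below is derived from the article's numbers, not stated in it):

```python
# Sanity check: 1 GW of computing capacity vs. 750,000 U.S. homes
capacity_w = 1e9       # one gigawatt, in watts
homes = 750_000        # figure cited in the article
per_home_kw = capacity_w / homes / 1000
print(f"{per_home_kw:.2f} kW per home")  # ≈ 1.33 kW average draw
```

That works out to about 1.33 kW of continuous draw per household, in line with typical U.S. residential averages.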

The collaboration relies on Broadcom's XPU accelerator platform for chip design, packaging, and networking. The companies confirmed that upcoming MTIA chips will adopt a 2-nanometer manufacturing process, a notable industry milestone. Financial disclosures revealed that Meta paid Broadcom $2.3 billion in 2025, offering rare visibility into the economics of such partnerships.

Mark Zuckerberg emphasized the importance of the alliance. “Meta is collaborating with Broadcom on chip design, packaging, and networking to build out the massive computing foundation we need to deliver personal superintelligence to billions of people,” Zuckerberg said in a statement.

Separately, Broadcom CEO Hock Tan will step down from Meta's board when his term ends, transitioning into an advisory role that maintains his strategic involvement.

Rapid Chip Development and Performance Gains

The partnership builds on Meta's accelerated silicon roadmap, which plans four MTIA chip generations within roughly two years. The MTIA 300 is already in production, supporting ranking and recommendation systems across Meta's major platforms.

The MTIA 400, 450, and 500 will focus increasingly on generative AI inference workloads, rolling out through 2027. The designs use the open-source RISC-V architecture, with TSMC handling fabrication.

Each generation delivers measurable gains: from the MTIA 300 to the MTIA 500, HBM bandwidth is set to rise 4.5x and compute performance 25x, reflecting improvements in both scale and efficiency.
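If those end-to-end gains compound evenly across the three generation steps (300 to 400, 400 to 450, 450 to 500), the implied average per-generation multiplier can be sketched as follows. This even-compounding assumption is mine, not the article's:

```python
# Implied average per-generation gain from MTIA 300 to MTIA 500,
# assuming the stated end-to-end gains compound evenly across
# three generation steps (300 -> 400 -> 450 -> 500).
steps = 3
compute_step = 25 ** (1 / steps)     # ~2.92x compute per generation
bandwidth_step = 4.5 ** (1 / steps)  # ~1.65x HBM bandwidth per generation
print(f"compute: {compute_step:.2f}x, bandwidth: {bandwidth_step:.2f}x")
```

On that assumption, compute roughly triples each generation while memory bandwidth grows more slowly, consistent with the roadmap's inference-oriented focus.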


Broader Shift in AI Infrastructure Strategy

The agreement is part of Meta's broader effort to diversify its AI hardware. While the company continues to rely on GPUs, it is reducing its dependence on Nvidia: Meta has committed to six gigawatts of GPUs from AMD, alongside millions of Nvidia chips.

The company is also developing custom processors with Arm Holdings and leasing capacity from cloud providers such as CoreWeave and Nebius. Meta's capital expenditure could reach $115 billion to $135 billion in 2026.

Broadcom, meanwhile, continues to expand its influence across the industry. It has strengthened its partnership with Google through a long-term agreement covering future TPU generations, cementing its position as a key design partner for large-scale AI infrastructure.


© 2024 The Technology Express. All Rights Reserved.