
Meta Platforms has expanded its long-term partnership with Broadcom to co-develop custom artificial intelligence chips, extending the collaboration through 2029. The move reflects Meta’s aggressive push to build large-scale AI infrastructure and reduce reliance on third-party chip suppliers as demand for computing power continues to surge.
The extended agreement focuses on developing multiple generations of Meta’s custom AI processors under its Meta Training and Inference Accelerator (MTIA) program. These chips are designed to support both training and inference workloads, forming the backbone of Meta’s growing AI ecosystem across platforms such as WhatsApp, Instagram, and Threads.
As part of the deal, the companies have committed to an initial deployment exceeding one gigawatt of computing capacity, roughly enough to power 750,000 U.S. homes. This marks the first phase of a broader multi-gigawatt rollout, highlighting the scale at which Meta plans to expand its AI data center capabilities in the coming years.
The partnership also includes close collaboration on chip design, packaging, and networking technologies. Broadcom will use its XPU custom-accelerator platform and advanced Ethernet solutions to provide high-speed connectivity and efficient scaling of Meta’s AI clusters. This integrated approach is expected to improve performance while reducing overall infrastructure costs for large-scale AI operations.
In a notable leadership shift tied to the agreement, Broadcom CEO Hock Tan will step down from Meta’s board and transition into an advisory role focused on guiding the company’s custom silicon strategy. The change underscores the strategic importance of the partnership as Meta accelerates its investment in AI hardware.
The deal comes amid a broader industry trend where major technology companies are increasingly designing their own chips to optimize performance and control costs, rather than relying heavily on external suppliers like Nvidia. By extending its collaboration with Broadcom, Meta is positioning itself to build a more efficient and scalable AI infrastructure, supporting its long-term vision of delivering advanced AI capabilities to billions of users worldwide.
