
Meta Platforms is significantly expanding its partnership with Nvidia, striking a far-reaching agreement that will see millions of Nvidia AI chips deployed across Meta's growing network of artificial intelligence data centers. The deal includes Nvidia's new standalone Grace CPUs, next-generation GPUs, and Vera Rubin rack-scale systems, underscoring Meta's intensifying investment in advanced AI infrastructure.
In a statement, Meta CEO Mark Zuckerberg said the expanded collaboration supports the company’s ambition “to deliver personal superintelligence to everyone in the world,” a broader vision he first outlined in July. While financial details were not disclosed, industry analysts suggest the agreement represents a commitment worth tens of billions of dollars.
In January, Meta announced plans to spend up to $135 billion on AI in 2026. According to chip analyst Ben Bajarin of Creative Strategies, “The deal is certainly in the tens of billions of dollars. We do expect a good portion of Meta’s capex to go toward this Nvidia build-out.” Following the announcement, shares of both Meta and Nvidia rose in extended trading, while Advanced Micro Devices saw its stock decline by about 4%.
Although the two Silicon Valley companies have worked together for over a decade, primarily around Nvidia’s GPUs, this agreement represents a much broader technology partnership. A notable element of the deal is Meta becoming the first company to deploy Nvidia’s Grace central processing units as standalone chips in data centers, rather than pairing them with GPUs within servers. Nvidia described it as the first large-scale implementation of standalone Grace CPUs.
“They’re really designed to run those inference workloads, run those agentic workloads, as a companion to a Grace Blackwell/Vera Rubin rack,” Bajarin said. “Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia’s putting across both sets of infrastructure: CPU and GPU.” Meta is also expected to roll out Nvidia’s next-generation Vera CPUs starting in 2027.
The multiyear agreement aligns with Meta's broader commitment to invest $600 billion in U.S. data centers and supporting infrastructure by 2028. The company has outlined plans for 30 data centers, 26 of which will be located in the U.S. Two of its largest AI facilities are currently under construction: the 1-gigawatt Prometheus site in New Albany, Ohio, and the 5-gigawatt Hyperion site in Richland Parish, Louisiana.
Beyond processors, the deal also covers Nvidia’s Spectrum-X Ethernet switches, networking technology used to interconnect GPUs within hyperscale AI environments. Additionally, Meta will leverage Nvidia’s security technologies to support AI-powered features on WhatsApp.
Despite the deepening partnership, Meta continues to diversify its chip strategy. The company develops its own in-house silicon and also uses chips from AMD. Reports in November indicated that Meta had been considering deploying tensor processing units from Google in its data centers in 2027, a move that at the time contributed to a dip in Nvidia’s stock price. Meanwhile, AMD recently secured a significant AI-related agreement with OpenAI, highlighting growing competition as companies seek alternative suppliers amid tight chip availability.
Nvidia’s current Blackwell GPUs have remained on back-order for months, and its next-generation Rubin GPUs have recently entered production. Through this agreement, Meta has ensured a steady supply of both platforms. Engineering teams from Nvidia and Meta will work closely together “in deep codesign to optimize and accelerate state-of-the-art AI models” tailored to Meta’s expanding AI ecosystem, reinforcing the strategic depth of their collaboration.
