Amazon Balances AI Chip Strategy with NVIDIA Support and In-House Silicon Expansion

While Amazon Web Services (AWS) continues to sell NVIDIA's latest AI chips, including those built on the Blackwell architecture, it is also ramping up efforts to reduce its dependence on third-party silicon. Amazon's recent moves highlight a dual strategy: offering top-tier NVIDIA GPUs to customers while investing heavily in its own Trainium and Graviton chip lines to gradually challenge NVIDIA's dominance.

A notable milestone in this shift is that “Claude Opus 4 was trained entirely on AWS’ in-house chips,” demonstrating that cutting-edge AI development can now take place without NVIDIA hardware. This underlines Amazon’s goal to be both a provider of NVIDIA infrastructure and a self-reliant silicon powerhouse.

AWS's in-house chip development, led by Annapurna Labs, is gaining momentum. The upgraded Graviton4 CPU, offering up to 600 Gbps of network bandwidth, is described by AWS engineer Ali Saidi as "fast enough to read 100 music CDs per second." But the real focus is on Trainium2, the AI chip powering Project Rainier, a supercomputer designed for Anthropic's model training.

Trainium2 enabled Anthropic’s Claude Opus 4 to launch without NVIDIA hardware. According to AWS senior director Gadi Hutt, while Blackwell GPUs lead in raw performance, Trainium2 wins in “energy and cost efficiency.” A third-generation Trainium3 chip is in the works, promising double the performance and 50% less energy usage.

According to a Morgan Stanley report, Alchip has emerged as the sole foundry partner for Trainium3, and possibly Trainium4, with mass production of its 3nm chips expected by 2026. Future development could involve a 2nm chip, in collaboration with Astera Labs. With Marvell exiting the space, Alchip now controls nearly all turnkey production for AWS's AI chips.

Despite these ambitions, AWS isn't sidelining NVIDIA. The launch of P6e-GB200 UltraServers, powered by Grace Blackwell Superchips, underscores this. These systems combine 72 interconnected GPUs delivering 360 petaflops of compute and over 13 TB of memory, making them well suited to training and serving massive AI models. Lighter P6-B200 instances are also available for broader use cases.

AWS’s hybrid approach reflects both strategic independence and customer-centric pragmatism. By simultaneously building its silicon capabilities and delivering cutting-edge NVIDIA infrastructure, Amazon is laying the foundation for a future where AI chip choice is no longer monopolized.
