At its Cloud Next 2026 event, Google introduced its latest generation of Tensor Processing Units (TPUs), marking a major step in its effort to compete with Nvidia in the rapidly growing AI hardware market. The new chips are part of Google’s broader strategy to strengthen its cloud infrastructure and support the next wave of AI-driven applications.
The company unveiled two specialized versions of its eighth-generation TPUs: one optimized for training large AI models and another designed specifically for inference, the process of running trained models to serve real-time applications. This split reflects a wider industry shift, as demand increasingly moves toward inference workloads now that AI systems are being deployed at scale.
The training-focused chip, referred to as the TPU 8t, is built for large-scale model development and reportedly delivers significant performance gains over previous generations. The inference-focused TPU 8i, meanwhile, is designed for faster, more cost-efficient processing of real-time AI tasks, with improvements in both performance per dollar and energy efficiency.
Google’s move is aimed at reducing its reliance on Nvidia’s GPUs, which currently dominate the AI chip market. The company is not replacing Nvidia hardware outright, however. Instead, it is taking a hybrid approach: continuing to offer Nvidia’s latest GPUs on its cloud platform while expanding its own custom silicon capabilities.
The new TPUs are also integrated into Google’s broader AI ecosystem, supporting advanced workloads such as autonomous AI agents and enterprise applications. This aligns with the company’s push toward what it describes as the “agentic era,” where AI systems can independently perform complex tasks rather than simply respond to prompts.
Overall, the announcement highlights intensifying competition in the AI infrastructure space, as major tech companies increasingly develop custom chips to optimize performance, reduce costs, and gain greater control over their AI technology stack.