
SAN FRANCISCO — Cisco Systems on Tuesday introduced a new networking chip and router designed to accelerate data movement inside large-scale AI data centers, positioning the company to compete directly with Broadcom and Nvidia for a share of the rapidly growing AI infrastructure market, which analysts estimate could reach $600 billion.
At the center of the announcement is Cisco’s Silicon One G300 switch chip, which the company expects to make commercially available in the second half of the year. Cisco said the chip is designed to help processors that train and run artificial intelligence systems communicate efficiently across hundreds of thousands of network connections.
As AI deployments extend beyond hyperscale cloud operators, Cisco said its latest AI networking developments are aimed at the next phase of data center expansion, including improving energy efficiency, reducing operating costs, and simplifying network operations. The Silicon One G300 delivers 102.4 terabits per second of switching capacity and is designed to support gigawatt-scale AI clusters used for training, inference, and real-time agentic workloads. Cisco said the chip keeps GPUs more fully utilized, which it credits with a 28% improvement in job completion time.
Cisco is also launching new networking systems powered by the G300, including additions to its N9000 and Cisco 8000 product lines. These platforms offer 102.4 Tbps of switching capacity and are targeted at hyperscalers, neocloud providers, sovereign cloud operators, service providers, and enterprise customers. The company said the new systems will be available in a fully liquid-cooled configuration that, combined with new optical technologies, can improve energy efficiency by nearly 70%. Cisco also announced updates to Nexus One, which now provides a unified management plane that allows customers to deploy network fabrics more quickly, scale more predictably, and operate their infrastructure securely and efficiently.
As AI training and inference clusters continue to grow in size, networking requirements are rising accordingly, with demand for higher bandwidth and lower latency. With the introduction of the Silicon One G300, Cisco now offers a 102.4 Tbps switch that competes directly with Broadcom's Tomahawk 6 and Nvidia's Spectrum-X Ethernet Photonics platforms. Like its rivals, the G300 integrates 512 high-speed 200 Gbps serializers/deserializers, known as SerDes. This large radix enables Cisco to support AI deployments with as many as 128,000 GPUs using approximately 750 switches, compared with about 2,500 switches previously required. Alternatively, the SerDes can be aggregated, eight lanes to a port, to support port speeds of up to 1.6 Tbps.
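To see how radix drives those switch counts, here is a back-of-envelope sketch in Python. The topology assumptions are mine, not Cisco's published math: a non-blocking two-tier leaf-spine fabric for the 512-radix case, a classic three-tier fat tree for the 256-radix case, and one 200 Gbps link per GPU.

```python
import math

# Back-of-envelope switch counts for non-blocking Clos fabrics.
# Assumptions (mine, not Cisco's): 200 Gbps per GPU link; leaf switches
# split their ports evenly between GPU-facing links and uplinks.

def two_tier_switches(gpus: int, radix: int) -> int:
    """Leaf-spine: radix/2 ports down to GPUs, radix/2 up to spines."""
    down = radix // 2
    leaves = gpus // down             # 128,000 / 256 = 500 leaves
    spines = leaves * down // radix   # 128,000 uplinks / 512 = 250 spines
    return leaves + spines

def three_tier_switches(gpus: int, radix: int) -> int:
    """Classic fat tree: pods of radix/2 leaves plus radix/2 aggregation
    switches, with a core layer absorbing the aggregation uplinks."""
    half = radix // 2
    pods = math.ceil(gpus / (half * half))   # 16,384 GPUs per 256-radix pod
    pod_switches = pods * radix              # leaves + aggs in every pod
    core = pods * half * half // radix       # aggregation uplinks / radix
    return pod_switches + core

# G300-class chip: 512 SerDes at 200 Gbps = radix 512.
print(two_tier_switches(128_000, 512))    # -> 750

# A 51.2 Tbps chip (radix 256 at 200 Gbps) maxes out at (256/2) * 256 =
# 32,768 GPUs in two tiers, so 128,000 GPUs force a third tier:
print(three_tier_switches(128_000, 256))  # -> 2,560, near Cisco's ~2,500
```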
Cisco acknowledged that these bandwidth scaling characteristics are not unique to its design. Similar figures apply to Broadcom's and Nvidia's 102.4 Tbps silicon, since switch counts at a given cluster size are largely a function of chip radix rather than any one vendor's architecture.
According to Cisco fellow and senior vice president Rakesh Chopra, the distinguishing feature of the G300 is its collective networking engine, which incorporates a fully shared packet buffer and a path-based load-balancing system designed to reduce congestion, improve link utilization, lower latency, and shorten overall job completion times.
“There’s no sort of segmentation of packet buffers, allowing packets to come in [and] be absorbed irrespective of the port. That means that you can ride through bursts better in AI workflows or front-end workloads,” he said.
Chopra added that the load-balancing system continuously analyzes traffic across the network. The load-balancing agent “monitors the flows coming through the G300. It monitors congestion points and it communicates with all the other G300s in the network and builds sort of a global collective map of what is happening across the entire AI cluster,” he said.
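Cisco has not published the algorithm itself, but the behavior Chopra describes, steering traffic onto the least-congested paths using telemetry the switches share, rather than spraying packets blindly, can be sketched roughly as follows. The class names, the congestion metric, and the moving-average weighting are illustrative assumptions, not details of the G300.

```python
import random

# Illustrative sketch of path-aware load balancing vs. packet spraying.
# All names and the congestion heuristic are assumptions for illustration;
# Cisco has not published the G300's actual algorithm.

class PathAwareBalancer:
    def __init__(self, paths):
        # One congestion score per path, built from telemetry that
        # switches exchange (the "global collective map" Chopra describes).
        self.congestion = {p: 0.0 for p in paths}

    def update(self, path, queue_depth):
        # Fold new congestion reports into an exponential moving average.
        self.congestion[path] = 0.8 * self.congestion[path] + 0.2 * queue_depth

    def pick(self):
        # Steer new traffic onto the least-congested path.
        return min(self.congestion, key=self.congestion.get)

def spray(paths):
    # Packet spraying: pick a path at random, ignoring congestion.
    return random.choice(paths)

paths = ["spine-1", "spine-2", "spine-3"]
lb = PathAwareBalancer(paths)
lb.update("spine-1", queue_depth=40)   # spine-1 reports a deep queue
lb.update("spine-2", queue_depth=5)
print(lb.pick())                       # -> "spine-2"
print(spray(paths))                    # random, may land on the hot path
```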
While congestion management techniques are already used by competitors, including Broadcom and Nvidia, Cisco claims its implementation delivers a 33% improvement in link utilization, which it says can reduce AI training times by as much as 28% compared with packet-spraying-based approaches. Chopra did not specify which competing products Cisco used as a benchmark, though both Broadcom and Nvidia rely on packet spraying in their current designs. Industry observers note that vendor performance claims can vary depending on network architecture and deployment topology, and real-world results may differ across use cases.
Beyond congestion management, Cisco is also emphasizing the G300’s P4 programmability as a key differentiator. Chopra said this capability allows the chip to be reprogrammed to introduce new features and deploy the same hardware across multiple roles.
“It means that we can take our device, we can reprogram it to add new functionality, new capabilities, and deploy the same equipment in multiple different roles,” Chopra said. He added that programmability can extend the lifespan of networking hardware by enabling software-based upgrades rather than requiring new physical equipment.
Cisco is not alone in adopting P4 programmability. AMD’s Pensando network interface cards also use the programming language, enabling AMD to ship Ultra Ethernet-compatible products before final specifications were completed, with later updates delivered through software.
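P4 itself is a domain-specific language that describes a switch's match-action pipeline. The following toy Python sketch (not real P4, and not Cisco's firmware) illustrates why reloading match-action tables lets the same hardware take on different roles, as Chopra describes.

```python
# Toy model of the match-action abstraction that P4-programmable
# pipelines expose. Real P4 compiles table definitions into hardware;
# this stand-in only shows why swapping tables re-purposes one device.

class MatchActionTable:
    def __init__(self):
        self.rules = []  # (match_fn, action_fn) pairs in priority order

    def add_rule(self, match_fn, action_fn):
        self.rules.append((match_fn, action_fn))

    def process(self, packet):
        for match, action in self.rules:
            if match(packet):
                return action(packet)
        return "drop"  # default action when nothing matches

# Role 1: a simple L2 forwarder keyed on destination MAC.
switch = MatchActionTable()
switch.add_rule(lambda p: p["dst_mac"] == "aa:bb:cc:00:00:01",
                lambda p: "forward port 1")

# Role 2: reprogram the same "hardware" as an ACL filter. No new box,
# just new tables, which is the software upgrade path Chopra describes.
firewall = MatchActionTable()
firewall.add_rule(lambda p: p.get("dst_port") == 22, lambda p: "drop")
firewall.add_rule(lambda p: True, lambda p: "forward")

pkt = {"dst_mac": "aa:bb:cc:00:00:01", "dst_port": 443}
print(switch.process(pkt))    # -> forward port 1
print(firewall.process(pkt))  # -> forward
```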
As with previous Silicon One products, the G300 will compete in the merchant silicon market while also powering Cisco's own networking platforms. Cisco said the chip will be deployed in its N9000 and Cisco 8000 systems, both of which will feature 64 OSFP ports running at 1.6 Tbps each.