Cisco Silicon One G300 Targets AI’s Data Center Bottleneck
AI is running into a data center bottleneck, and Cisco wants to be the fix.
Cisco announced the Silicon One G300, a 102.4 Tbps switching chip, plus new systems and optics aimed at scaling AI data centers for what it calls the agentic era.
Cisco’s move is a reminder that AI buildouts are becoming an infrastructure problem, not just a model problem. The company is positioning G300-powered switching platforms as part of a coordinated stack meant to keep large AI environments fed with data as they scale.
What Cisco actually announced
According to Cisco, the Silicon One G300 is 102.4 Tbps switching silicon designed for massive AI cluster buildouts and will power new Cisco Nexus 9000 and Cisco 8000 systems.
On the specs side, Cisco’s G300 materials describe the processor as optimized for high-bandwidth AI networking and web-scale switching, enabling deterministic, low-latency, and power-efficient switches with 64 ports of 1600 Gigabit Ethernet (GE).
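For context on how those figures relate, the per-port math lines up with the headline number: 64 ports × 1,600 Gbps per port works out to 102,400 Gbps, which matches the chip’s 102.4 Tbps aggregate switching capacity.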
Cisco is also framing this as more than a chip release: silicon, systems, and optics meant to work together for AI back-end networking at scale.
Why agentic AI raises the bar
Cisco is explicitly tying this release to the agentic era, meaning AI systems that can take more autonomous actions rather than only respond to prompts. As that model spreads, pressure increases on the data center networks that move data between compute, memory, and storage.
For enterprise teams building larger AI environments, the practical takeaway is that the network can become the bottleneck before compute does. According to Cisco, the G300 and its related platforms are designed to scale AI networking without requiring constant redesign as clusters expand.