Hyve's Liquid-Cooled Network Switches Eliminate the DC Infrastructure Divide
Odds are that 80% to 85% of your modern data center is liquid-cooled compute infrastructure. The remaining 15% to 20%? Network switches still running on air cooling.
For years, this split posed no challenge. Switches generated manageable heat loads that made air cooling the cost-efficient choice. But the network performance demands of AI workloads now push switches past conventional thermal limits, forcing operators to maintain two parallel cooling ecosystems for an increasingly lopsided mix of racks.
The cost is considerable: you need hot aisle containment dedicated specifically to networking. Network racks can’t sit in the same rows as liquid-cooled compute racks. Some operators have adopted a modular, hybrid environment, deploying hot aisle containment systems and cooling infrastructure above the racks to support both air and liquid configurations. This requires significant capital investment to accommodate roughly one out of every eight racks.
The Air-Cooling Constraint
As switch performance increases, air cooling forces a choice: make switches physically larger to accommodate more cooling fins or accept performance constraints. Hot/cold aisle containment becomes necessary just for networking, and facilities teams must manage two distinct cooling systems.
Hyve is already laying the groundwork for direct liquid cooling (DLC) in our next-generation switches based on Broadcom’s Tomahawk 6 platform, where cold plates will cover not only the switch ASIC but also the optical transceivers. This maintains the compact 2RU form factor while delivering peak performance.
Unified Infrastructure
DLC switches can be deployed in the same rows as AI compute racks. There are no separate containment zones or special cooling runs for a minority of equipment. Data center planning simplifies when network infrastructure shares the same physical requirements as AI compute. A common DLC deployment lets you scale your cooling infrastructure once, with compute and networking on the same foundation.
Transitioning to a liquid-cooled platform also means planning for how to handle a leak if one occurs. These next-generation switches will implement internal leak detection and a baseboard management controller (BMC) compatible with the systems commonly used on compute platforms. While this entails some incremental cost, the outsized benefit is that the same telemetry tools now manage any DLC device in your data center: switch, server, or JBOD. One management platform replaces separate tools for network and compute infrastructure. Because a single switch connects hundreds of GPUs, the BMC logic assesses thermal events and minimizes the impact on running workloads.
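As an illustration only, here is a minimal sketch of what that unified telemetry can look like: a Python client polling a switch BMC over the standard Redfish sensor interface, exactly as it would poll a server or JBOD. The BMC address, credentials, and sensor names below are hypothetical assumptions, not a documented Hyve interface.

```python
# Minimal sketch: poll a BMC's Redfish sensor collection and flag leak- or
# coolant-related readings. Paths, sensor names, and thresholds vary by
# vendor; the values below are illustrative assumptions.
import requests

BMC = "https://10.0.0.42"      # hypothetical switch BMC address
AUTH = ("admin", "password")   # use a proper credential store in practice


def coolant_sensors(chassis_id: str = "1"):
    """Yield (name, reading, health) for sensors that look coolant-related."""
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Sensors"
    # verify=False only because many BMCs ship with self-signed certificates
    collection = requests.get(url, auth=AUTH, verify=False, timeout=10).json()
    for member in collection.get("Members", []):
        sensor = requests.get(BMC + member["@odata.id"],
                              auth=AUTH, verify=False, timeout=10).json()
        name = sensor.get("Name", "")
        if any(key in name.lower() for key in ("leak", "coolant", "flow")):
            yield name, sensor.get("Reading"), sensor.get("Status", {}).get("Health")


if __name__ == "__main__":
    for name, reading, health in coolant_sensors():
        flag = "ALERT" if health not in (None, "OK") else "ok"
        print(f"{flag:5s} {name}: {reading}")
```

Because the switch speaks the same schema as the rest of the DLC fleet, it becomes one more entry in the existing monitoring inventory rather than a special case.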
AI-Optimized in Practice
We refer to our forthcoming Hyve-designed switches (based on Broadcom’s Tomahawk 6) as “AI-optimized,” but what does this actually mean? AI training creates massive GPU-to-GPU traffic as models distribute computation across hundreds or thousands of processors. Lower network latency means GPUs spend less time waiting on data and more time computing. Cut network latency by more than half, from over 600 nanoseconds down to 250 nanoseconds, and training cycles complete faster while inference serves requests more efficiently.
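As a rough back-of-envelope sketch (the hop count is an illustrative assumption; the per-switch latencies are the figures cited above), the per-hop saving compounds across every switch a packet traverses between GPUs:

```python
# Back-of-envelope sketch: per-switch latency summed over a GPU-to-GPU path.
# The 3-hop path (leaf -> spine -> leaf) is an assumption for illustration.
hops = 3
previous_gen_ns = 600   # per-switch latency, prior generation (cited above)
tomahawk6_ns = 250      # per-switch latency, Tomahawk 6 class (cited above)

print(f"previous gen: {hops * previous_gen_ns} ns of switch latency per traversal")
print(f"Tomahawk 6:   {hops * tomahawk6_ns} ns of switch latency per traversal")
# previous gen: 1800 ns of switch latency per traversal
# Tomahawk 6:   750 ns of switch latency per traversal
```

Multiplied across the constant stream of synchronization messages a training job exchanges, that difference shows up in wall-clock time.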
Tomahawk 6 delivers that latency reduction through enhanced ASIC performance. Higher-radix switches flatten the network topology, reducing hops between GPUs. The network component of model execution shrinks.
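To make the radix point concrete, here is a minimal sketch of how radix bounds the size of a non-blocking two-tier leaf/spine fabric before a third switching tier (and its extra hops) becomes necessary. The radix values are generic examples, not claims about specific products.

```python
# Minimal sketch: maximum endpoints in a non-blocking two-tier leaf/spine
# fabric as a function of switch radix (ports per switch). Example radices
# only; not specific product specifications.
def max_endpoints_two_tier(radix: int) -> int:
    """Half of each leaf's ports face hosts, half face spines; with one
    uplink from every leaf to every spine, up to `radix` leaves fit."""
    host_ports_per_leaf = radix // 2
    max_leaves = radix
    return host_ports_per_leaf * max_leaves


for radix in (64, 128, 256):
    print(f"radix {radix:3d}: up to {max_endpoints_two_tier(radix):,} GPUs in two tiers")
# radix  64: up to 2,048 GPUs in two tiers
# radix 128: up to 8,192 GPUs in two tiers
# radix 256: up to 32,768 GPUs in two tiers
```

Doubling the radix quadruples the two-tier ceiling, which is the “flatter topology, fewer hops” effect described above.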
Ten years ago, the industry would have called this networking leap “HPC-optimized.” Of course, workloads have evolved and become more demanding, but the networking principles remain constant.
Ready for Today’s (and Tomorrow’s) Data Center
Hyve’s DLC switch development follows our Tomahawk 5 and Ultra launch trajectory. Those initial platforms will gain customer validation through 2026. Liquid-cooled solutions represent the next evolution.
Much of the networking industry retrofits air-cooling solutions to meet rising performance demands, but Hyve is building for DLC. We’re anticipating one cooling infrastructure with one management platform, because your data center doesn’t need two separate ecosystems.