Nvidia used its GTC DC event in Washington, D.C., to shift attention toward the plumbing that keeps modern AI running, unveiling new hardware aimed at speeding data movement and infrastructure management for large AI deployments.

SuperNIC arrives with 1.6 Tb/s of connectivity per GPU

The company’s biggest networking announcement was the debut of the ConnectX-9 SuperNIC, part of Nvidia’s broader Vera Rubin architecture. The adapter is built for extremely bandwidth-hungry AI environments, supplying 1.6 Tb/s of connectivity per GPU, enhanced RDMA capabilities, and support for PCIe Gen 6.

Unlike conventional NICs, the SuperNIC is purpose-engineered for workloads such as multi-trillion-parameter model training and dense inference clusters. Nvidia says the card incorporates line-rate security, hardware-accelerated cryptography, and programmable I/O pipelines that let operators tune behavior for evolving AI workloads instead of relying on fixed security templates.
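
For software, adapters in this class are driven through the standard RDMA verbs interface rather than the ordinary kernel socket path. The short C sketch below, which assumes a Linux host with the rdma-core (libibverbs) package and any RDMA-capable NIC installed, simply enumerates the adapters and prints a few of their advertised limits; it illustrates the generic verbs API, not anything specific to ConnectX-9.

    /* list_rdma_devices.c: enumerate RDMA adapters via the generic verbs API.
       Build: gcc list_rdma_devices.c -libverbs -o list_rdma_devices
       Generic libibverbs calls only; nothing here is ConnectX-9-specific.   */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num); /* all RDMA devices on this host */
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;
            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0) /* query per-device limits */
                printf("%s: max queue pairs %d, max memory regions %d\n",
                       ibv_get_device_name(devs[i]), attr.max_qp, attr.max_mr);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }

Run on a host with an RDMA NIC present, the program prints one line per adapter; communication libraries that use RDMA transports typically build on this same interface to discover the hardware beneath them.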

BlueField 4 doubles throughput over its predecessor

Alongside the SuperNIC, Nvidia introduced BlueField 4, its next major data-processing unit, designed to function as the control layer for “AI factories.” The new generation can push 800 Gb/s, double the 400 Gb/s of BlueField 3, while delivering roughly six times the compute resources.

BlueField 4 pairs the company’s Arm-based Grace CPU—a 64-core Neoverse-derived design—with the ConnectX-9 networking engine. This combination accelerates storage operations, networking flows, and security tasks across data-center-scale AI platforms. The previous generation relied on a far more modest 16-core CPU typically used in mobile-class hardware, marking a significant architectural shift.

According to Nvidia senior director Dion Harris, the new DPU is intended to serve as the “operating system” layer for high-density AI clusters by offloading resource-heavy services from host CPUs and optimizing data paths for enormous model workloads.

Deployment and availability

BlueField 4 will first appear inside Nvidia’s Vera Rubin rack-scale systems scheduled for release next year. The technology will extend across Nvidia’s own server lines, including DGX and NVL72 platforms, and will also be offered to third-party system builders.

With ConnectX-9 delivering ultra-high-performance RDMA and PCIe Gen 6 switching, and BlueField 4 handling compute-intensive infrastructure tasks, Nvidia aims to squeeze the maximum efficiency from modern AI stacks—from sprawling training supercomputers to large-volume inference deployments.
