NVIDIA GB200 NVL72


Ignite the New Era of Computing
We are now entering an era of trillion-parameter large language models, where AI is reaching unprecedented levels of complexity and capability. At the forefront of this revolution is the NVIDIA GB200 NVL72, a liquid-cooled, rack-scale solution that connects 36 Grace CPUs and 72 Blackwell GPUs through fifth-generation NVLink to function as a single massive GPU. With 30x faster inference and 4x faster training compared to previous-generation NVIDIA H100 GPU systems, it sets new performance benchmarks for real-time trillion-parameter inference and training. Its liquid-cooled architecture combines superior energy efficiency with high compute density, making the GB200 NVL72 the ultimate infrastructure for modern AI data centers and the foundation for the next wave of AI breakthroughs.

Highlights

  • 36 NVIDIA Grace CPU Superchips
  • 72 NVIDIA B200 Tensor Core GPUs
  • CPU and GPU Connected by 5th Gen NVIDIA NVLink Technology
  • Built for Real-Time Trillion-Parameter Inference and Training
  • Front Access IT Equipment for High Serviceability

Rack-scale Architecture

Design Details
1. 3 x Switches: 1 x TOR Switch and 2 x Management Switches
2. 4 x 1 RU Power Shelves: provide a maximum power output of 33kW
3. 10 x Compute Trays: each features 2 GB200 Grace Blackwell Superchips, for a total of 2 NVIDIA Grace CPUs and 4 NVIDIA Blackwell GPUs per tray
4. 9 x NVLink Switch Trays: connect up to 72 Blackwell GPUs in one giant NVLink domain
5. 8 x Compute Trays: each features 2 GB200 Grace Blackwell Superchips, for a total of 2 NVIDIA Grace CPUs and 4 NVIDIA Blackwell GPUs per tray
6. 4 x 1 RU Power Shelves: provide a maximum power output of 33kW
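
The tray counts above imply the rack-level totals quoted in the introduction. As a quick cross-check, the short Python sketch below tallies them from the Design Details; the names are illustrative labels for this page's numbers, not part of any vendor tooling.

```python
# Tally the GB200 NVL72 rack from the component counts listed above.
# The dictionary keys are illustrative labels, not a vendor API.

RACK_LAYOUT = {
    "compute_trays": 10 + 8,       # two banks of compute trays
    "nvlink_switch_trays": 9,
    "power_shelves": 4 + 4,        # 1 RU power shelves, top and bottom
    "switches": 1 + 2,             # 1 x TOR + 2 x management
}

SUPERCHIPS_PER_TRAY = 2            # GB200 Grace Blackwell Superchips per compute tray
CPUS_PER_SUPERCHIP = 1             # 1 Grace CPU per superchip
GPUS_PER_SUPERCHIP = 2             # 2 Blackwell GPUs per superchip

superchips = RACK_LAYOUT["compute_trays"] * SUPERCHIPS_PER_TRAY
grace_cpus = superchips * CPUS_PER_SUPERCHIP
blackwell_gpus = superchips * GPUS_PER_SUPERCHIP

print(f"Superchips:     {superchips}")      # 36
print(f"Grace CPUs:     {grace_cpus}")      # 36
print(f"Blackwell GPUs: {blackwell_gpus}")  # 72
```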

Key Components of NVIDIA GB200 NVL72

 

GB200 Grace Blackwell Superchip

As the powerful core of the NVIDIA GB200 NVL72, the GB200 Grace Blackwell Superchip combines two Blackwell GPUs with a Grace CPU over a high-speed 900GB/s NVLink chip-to-chip (C2C) interconnect to accelerate data processing and computation for actionable insights.
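
As a rough illustration of what the 900GB/s chip-to-chip link means in practice, the sketch below estimates the time to stream a block of data between Grace CPU memory and a Blackwell GPU. The payload size and link efficiency are assumptions chosen for the example, not published figures.

```python
# Back-of-the-envelope transfer-time estimate over NVLink-C2C.
# The 900 GB/s figure comes from the text above; the payload size and
# link efficiency are illustrative assumptions only.

NVLINK_C2C_BW_GBPS = 900      # NVLink-C2C bandwidth quoted above (GB/s)
LINK_EFFICIENCY = 0.8         # assumed achievable fraction of peak (illustrative)

def transfer_time_ms(payload_gb: float) -> float:
    """Estimated time in milliseconds to move payload_gb gigabytes over the link."""
    effective_bw = NVLINK_C2C_BW_GBPS * LINK_EFFICIENCY
    return payload_gb / effective_bw * 1000.0

# Example: streaming 100 GB staged in Grace CPU memory to a Blackwell GPU.
print(f"{transfer_time_ms(100):.0f} ms")  # ~139 ms at 80% of peak
```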


Blackwell Compute Node

The Blackwell compute node, designed with the liquid-cooled MGX architecture, is powered by two GB200 Grace Blackwell Superchips. Delivering 80 petaFLOPS of AI performance, it stands as the most powerful compute node ever created and can scale up to the full GB200 NVL72 for even greater performance.
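
Combining the figure quoted here with the Design Details above (18 compute trays per rack), a quick calculation shows the rack-level AI performance the GB200 NVL72 reaches. This is simple arithmetic on the numbers stated on this page, not a benchmark result.

```python
# Scale the quoted per-node AI performance up to a full GB200 NVL72 rack.
# Both inputs are taken from the text above; this is arithmetic, not a benchmark.

AI_PFLOPS_PER_NODE = 80        # petaFLOPS per Blackwell compute node (2 superchips)
COMPUTE_NODES_PER_RACK = 18    # 10 + 8 compute trays from the Design Details

rack_pflops = AI_PFLOPS_PER_NODE * COMPUTE_NODES_PER_RACK
print(f"Per-rack AI performance: {rack_pflops} petaFLOPS "
      f"(~{rack_pflops / 1000:.2f} exaFLOPS)")  # 1440 petaFLOPS ~= 1.44 exaFLOPS
```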

The Next-Gen AI Data Center Solution

 

 


Liquid-to-Air Solution

  • Support Superior Cooling Capacity up to 70kW
  • All Heat Rejected to Air by Fans
  • Ideal for Upgrading Existing Air-Cooled Data Centers

Liquid-to-Liquid Solution

  • Provide Extreme Cooling Capacity up to 1300kW, supporting 4 or more AI Racks*
  • Main Heat Dissipation through Facility Liquid
  • Enable High-Density Computing While Significantly Reducing Energy Consumption

    *Depending on the power consumption of IT racks
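
To put the liquid-to-liquid capacity in context, the sketch below divides the 1300kW cooling budget by an assumed per-rack heat load. The per-rack values are illustrative placeholders; as the footnote notes, actual support depends on the power consumption of the IT racks.

```python
# Rough sizing of a liquid-to-liquid cooling solution against a number of AI racks.
# Cooling capacity is taken from the text; the per-rack heat loads are
# illustrative assumptions -- real figures depend on the IT rack configuration.

COOLING_CAPACITY_KW = 1300    # liquid-to-liquid solution, from the text above

def racks_supported(rack_heat_load_kw: float) -> int:
    """Number of racks the solution can cool at the given per-rack heat load."""
    return int(COOLING_CAPACITY_KW // rack_heat_load_kw)

print(racks_supported(120))   # 10 racks at a lighter assumed load
print(racks_supported(325))   # 4 racks at a heavier assumed load ("4 or more AI Racks")
```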
