Applications: Hyperscale Data Center | High Performance Computing | AI and Machine Learning
The GPU platform packs 16 high-performance computing engines, NVIDIA GPUs, all connected via NVLink 3.0 technology, into one powerful GPU box. This delivers unparalleled AI acceleration and far faster GPU-to-GPU interconnect bandwidth than traditional PCIe-based solutions.
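To put the "faster than PCIe" claim above in perspective, the sketch below compares the commonly quoted peak per-GPU NVLink 3.0 bandwidth on the A100 against a single PCIe Gen4 x16 link. The figures are public peak numbers, not measured throughput; the script is just arithmetic over them:

```python
# Rough bandwidth comparison: NVLink 3.0 (A100) vs PCIe Gen4 x16.
# All figures are commonly quoted peak values, not measured throughput.

NVLINK3_LINKS_PER_GPU = 12      # A100 exposes 12 NVLink 3.0 links
NVLINK3_GBPS_PER_LINK = 50      # ~50 GB/s bidirectional per link

PCIE_GEN4_GTPS = 16             # 16 GT/s per lane
PCIE_LANES = 16
PCIE_ENCODING = 128 / 130       # 128b/130b encoding overhead

nvlink_total = NVLINK3_LINKS_PER_GPU * NVLINK3_GBPS_PER_LINK       # 600 GB/s
# 16 GT/s * 16 lanes / 8 bits ~= 32 GB/s raw, ~31.5 GB/s per direction
pcie_unidir = PCIE_GEN4_GTPS * PCIE_LANES / 8 * PCIE_ENCODING
pcie_bidir = 2 * pcie_unidir                                       # ~63 GB/s

print(f"NVLink 3.0 per GPU : {nvlink_total:.0f} GB/s")
print(f"PCIe Gen4 x16      : {pcie_bidir:.1f} GB/s (bidirectional)")
print(f"Ratio              : {nvlink_total / pcie_bidir:.1f}x")
```

On these peak figures, NVLink 3.0 offers roughly an order of magnitude more GPU-to-GPU bandwidth than a single PCIe Gen4 x16 link.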
The GPU box can be directly connected to one or two CPU head node servers to form a unified AI solution. This disaggregated GPU/CPU approach gives you the flexibility to choose CPU head node servers that suit specific workloads, and each CPU and GPU component can be scaled independently as demand grows.
The PCIe Gen4-enabled GPU accelerator supports up to 16 high-bandwidth network interface cards, allowing high-speed data movement without bottlenecks and delivering faster time to results for business insight.
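As a rough sense of what "high-speed data movement" means with 16 NIC slots populated, the sketch below totals the aggregate line rate. The 200 Gb/s per-NIC speed is an assumed example (e.g. an HDR-class adapter); the spec itself only says "high-bandwidth network interface cards":

```python
# Aggregate network bandwidth across the 16 NIC slots.
# NIC_GBITS is an assumption (HDR-class 200 Gb/s adapter), not from the spec.
NUM_NICS = 16
NIC_GBITS = 200                      # assumed per-NIC line rate, Gb/s

total_gbits = NUM_NICS * NIC_GBITS   # 3200 Gb/s
total_gbytes = total_gbits / 8       # 400 GB/s
print(f"Aggregate: {total_gbits} Gb/s = {total_gbytes:.0f} GB/s")
```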
Supported GPU | 16 x NVIDIA A100 SXM4
---|---
Expansion Slots | 16 x PCIe Gen4 x16 slots (8 x PCIe slots per 3U GPU sled)
Storage | 8 x U.2 NVMe SSDs (4 x U.2 NVMe SSDs per 3U GPU sled)
Front Panel |
TPM | 1 x TPM 2.0 Module
Form Factor | 8U Rackmount (2 x 3U GPU sleds with 2U power shelf)
Chassis Dimensions (H x W x D) | 13.8" x 17.5" x 37.3" / 352.5 x 447.0 x 948.1 mm
AC Input | 180-264 Vac, 50-60 Hz
Management | 1 x ASPEED AST2520
PSUs | 4+4 redundant 3000W power supplies (Titanium efficiency)
Fans | 24 x 60 x 56 mm fans, N+1 cooling redundancy
Certification | CE / FCC / RCM / BSMI / UL / IECEE CB
Operating Temperature | 10°C to 35°C (50°F to 95°F)
Non-operating Temperature | -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity | 8% to 85% RH
Non-operating Relative Humidity | 5% to 95% RH
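The 4+4 redundant PSU configuration in the table above follows an N+N scheme: N supplies carry the load while the other N are redundant, so the chassis keeps its full power budget even if one whole feed is lost. A minimal sketch of that arithmetic:

```python
# Power budget under the 4+4 redundant PSU configuration from the spec table.
# In an N+N scheme only N supplies need to carry the load; the other N are
# redundant, so full capacity survives the loss of an entire power feed.
ACTIVE_PSUS = 4
PSU_WATTS = 3000

capacity_w = ACTIVE_PSUS * PSU_WATTS   # 12000 W with full redundancy
print(f"Usable capacity: {capacity_w / 1000:.0f} kW "
      f"({ACTIVE_PSUS}+{ACTIVE_PSUS} redundant)")
```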