Unrivaled HGX H100 8-GPU AI Server to Deliver the Highest AI Performance
High Performance Computing
AI and Machine Learning
Large Language Model (LLM)
Generative AI
Integrating an NVIDIA HGX baseboard with eight H100 GPUs, the AI server combines the full power of NVIDIA accelerated GPUs, liquid cooling technology, and high-speed PCIe Gen5 connectivity to deliver breakthrough performance for the next generation of AI-enabled applications.
The AI server can be directly connected to a single CPU head node server, creating a unified AI solution. This disaggregated GPU and CPU approach enhances flexibility, enabling you to select the CPU head node server that best suits your needs. (Ingrasys head node solution: SV2121A/SV2121I)
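As a minimal illustration of this disaggregated setup (not part of the product documentation), the sketch below shows how an attached head node could confirm that all eight H100 GPUs are visible. It assumes a Linux head node with the NVIDIA driver and the nvidia-ml-py (pynvml) package installed.

```python
# Minimal sketch (assumptions: Linux head node, NVIDIA driver, and the
# nvidia-ml-py / pynvml package installed) that enumerates the GPUs the
# head node sees once the HGX H100 8-GPU sled is connected.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"Visible GPUs: {count} (expected 8 for the HGX H100 8-GPU baseboard)")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        # Older pynvml versions return bytes for the device name.
        if isinstance(name, bytes):
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"  GPU {i}: {name}, {mem.total / 2**30:.0f} GiB")
finally:
    pynvml.nvmlShutdown()
```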
The 6 OU server, compliant with Open Rack Spec v3, incorporates liquid cooling technology to provide the high-performance computing capabilities that generative AI and HPC demand, while minimizing energy consumption for a reduced environmental footprint and lower carbon emissions.
Supported GPU | NVIDIA HGX H100 8-GPU
---|---
Expansion Slots | 20 x PCIe 5.0 x16 Slots
Storage | 32 x Hot-swap U.2 NVMe Drive Bays
Front Panel |
Form Factor | 6 OU Rackmount (3 OU IOB Sled with 3 OU GPU Sled)
Chassis Dimensions (H x W x D) | 11.1" x 21.0" x 34.1" / 283.1 mm x 535.0 mm x 867.5 mm
Management | 1 x ASPEED AST2600
Power Supply | Centralized 48V Bus Bar with PDB
Fans | 16 x 60 x 56 mm for N+1 Cooling Redundancy
Cooling Solution | Liquid Cooling Solution
Certification | CE / FCC / RCM / BSMI / UL / IECEE CB
Operating Temperature | 10°C to 35°C (50°F to 95°F)
Non-operating Temperature | -40°C to 60°C (-40°F to 140°F)
Operating Relative Humidity | 8% to 85% RH
Non-operating Relative Humidity | 5% to 95% RH