Ingrasys is developing advanced cooling systems to reduce overall data center power consumption.
Energy conservation is a major topic across industries. By improving power usage efficiency, we can give the next generation peace of mind.
Advanced cooling solutions will become a key technology as chip power consumption increases. To meet higher power demands, Ingrasys offers rack-level liquid cooling and immersion cooling solutions that lower power usage effectiveness (PUE).
Rack-level liquid cooling can meet your requirements whether your facility uses traditional air cooling or liquid plumbing. Ingrasys integrates direct-contact liquid cooling modules, manifolds, a coolant distribution unit (CDU) / reservoir pumping unit (RPU), and a rear door heat exchanger (RDHx) with our high-performance computing servers to provide a customized total solution that satisfies your needs.
Chip power is limited by traditional air cooling due to the poor thermophysical properties of air. Our advanced cooling solutions are designed to support higher power demands.
PUE (Power Usage Effectiveness) is the ratio of the total energy used by a data center facility to the energy delivered to its IT equipment; an ideal data center approaches a PUE of 1.0. Our solutions deliver a markedly low PUE.
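A minimal sketch of the PUE calculation (the 1200/1000 kWh figures below are illustrative assumptions, not Ingrasys measurements):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.

    An ideal data center approaches PUE = 1.0; lower is better.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures only (not measured values): a facility drawing
# 1200 kWh total while its IT equipment consumes 1000 kWh has PUE 1.2.
print(pue(total_facility_kwh=1200.0, it_equipment_kwh=1000.0))  # 1.2
```

More efficient cooling shrinks the overhead term (fans, chillers, CRAC units) in the numerator, pushing the ratio toward 1.0.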
Blind-mate design enables front-side service of all equipment in the rack, including hot-swappable fans, pumps, and the PSUs of the RPU/CDU.
Advanced liquid cooling allows the facility to use existing data center space more efficiently, eliminating the need for new construction or expansion. It is also suitable for edge applications where physical space is limited.
Processor | One/Two* 2nd/3rd Generation AMD EPYC™ Processor(s) per Node, up to 32 Cores, 180W TDP (SKU 1: One Processor; SKU 2 & 3: Two Processors) |
---|---|
Memory | 16 x DDR4 3200MHz RDIMM/LRDIMM per Node, 1 x NVDIMM per Node |
Storage | Front: SKU 1: 12 x 2.5"/3.5" Hot-swap SATA Drives per Chassis (3 per Node); SKU 2: 24 x 2.5" Hot-swap U.2 NVMe Drives per Chassis (6 per Node); SKU 3: 12 x 3.5" Hot-swap SATA or 2.5" Hot-swap NVMe Drives per Chassis (3 per Node). Internal: 1 x SATA/NVMe M.2 (2280/22110) per Node, 1 x SATA M.2 (2280/22110) per Node. Rear: 1 x 2.5" Hot-swap 7/15mm U.2 NVMe SSD per Node (Optional for Single Processor Configuration; Occupies One PCIe LP Slot) |
Front Panel (per Node) | 1 x Power Button with LED, 1 x UID Button with LED, 1 x System Health LED |
Rear Panel (per Node) | 1 x RJ45 for Dedicated BMC Management, 1 x VGA, 2 x USB 3.0, 1 x UID Button with LED |
TPM | 1 x TPM 2.0 Module per Node |
Expansion Slots | Up to 2 x PCIe Gen4 x16 LP Slots per Node (by SKU), 1 x OCP 3.0 NIC (PCIe Gen4 x16) per Node |
Management | 1 x ASPEED AST2500 BMC per Node; Supports Intelligent Platform Management Interface (IPMI) v2.0 |
PSUs | 1+1 Redundant 3000W Platinum PSUs (2200W for Single Processor Configuration) |
Fans | 3 x 40x56mm per Node, 12 per Chassis |
Certification | FCC/CE/UL/CB/BSMI |
Chassis Dimensions (H x W x D) | 3.42" x 17.6" x 33.17" / 87.0mm x 447.0mm x 842.4mm |
Operating Temperature | 5°C to 35°C (41°F to 95°F) |
Non-operating Temperature | -40°C to 70°C (-40°F to 158°F) |
Operating Relative Humidity | 8% to 90% RH |
Non-operating Relative Humidity | 5% to 90% RH |
Liquid can be used to cool hot chips more efficiently than air alone. There are many types of rack-level liquid cooling, but they can be divided by where the heat is exhausted in the data center. The heat from the rack can be completely exhausted into the data center air, or completely into the facility water cooling system, or into a combination of those two. Ingrasys has products for all three scenarios.
More efficient cooling compared to traditional air-cooling solutions
High cooling capacity gives you more application choice
No facility change needed. Front-side service via blind-mate connection between the server and the manifold.
Monitors liquid/air flow rate, water/air temperature, pump speed, and RDHx fan speed, and provides reliable control.
HPC systems are packed with power-consuming components in dense configurations. As more power is consumed, more heat is generated. Our rack-level liquid cooling solutions are designed to handle these demanding requirements and trends.
Whether or not your facility has water support or a cold/hot-aisle design, we provide air-assisted liquid and hybrid liquid-cooled solutions to fulfill different data center needs.
ORv3 Liquid Cooling Rack | |
Product Number | LA0452 |
Cooling Mechanism | Air-Assisted Liquid Cooling |
Rack Size | 600mm x (1200+305)mm x 2295mm |
Cooling Capability | 45kW (38kW / 85% heat dissipated through liquid) |
Computing Node | RU/OU 21" server compatible |
Redundancy | 9+1 Fans, 1+1 Pumps, 2+1 PSUs |
Serviceability | Blind-mate mechanism; Front-access IT equipment; Slide-rail compatible; Hot-swappable fans, pumps, PSUs, control board |
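The liquid/air heat split quoted in the table above (45 kW total, roughly 85% dissipated through liquid for LA0452) is simple arithmetic; the helper below is a hypothetical illustration, not Ingrasys software:

```python
def heat_split(total_kw: float, liquid_fraction: float) -> tuple[float, float]:
    """Split a rack's heat load into (liquid_kw, air_kw) components."""
    if not 0.0 <= liquid_fraction <= 1.0:
        raise ValueError("liquid_fraction must be between 0 and 1")
    liquid_kw = total_kw * liquid_fraction
    return liquid_kw, total_kw - liquid_kw

# LA0452 spec figures: 45 kW total, ~85% dissipated through liquid;
# the remainder is rejected to the room air by the radiator fans.
liquid, air = heat_split(45.0, 0.85)
print(round(liquid, 2), round(air, 2))  # 38.25 6.75
```

The air-side remainder determines how much heat the data center's existing air handling must still absorb, which is why the liquid fraction matters when sizing a retrofit.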
ORv3 Liquid Cooling Rack | |
Product Number | LA0763 |
Cooling Mechanism | Air-Assisted Liquid Cooling |
Rack Size | 600mm x 1068mm x 2295mm |
Cooling Capability | 76kW |
Rack Deployment | Supports Multiple Racks |
Cooling Kits | Modular RPU, Radiator |
Redundancy | 15+1 Fans, 2+1 Pumps, 3+3 PSUs |
Serviceability | Front Access; Hot-swappable Fans, Pumps, PSUs, Control Board |
ORv3 Liquid Cooling Rack | |
Product Number | LL1001 |
Cooling Mechanism | Liquid-to-Liquid and Air-to-Liquid |
Rack Size | 600 mm x 1287 mm x 2295 mm |
Cooling Capability | 100kW (w/ RDHx: 100% heat dissipated through liquid; w/o RDHx: 80% heat dissipated through liquid) |
Computing Node | RU/OU 21" server compatible |
Redundancy | 1+1 Pumps, 2+1 PSUs |
Serviceability | Blind-mate mechanism; Front-access IT equipment; Slide-rail compatible; Hot-swappable pumps, PSUs, control board |
ORv3 Liquid Cooling Rack | |
Product Number | LL1000 |
Cooling Mechanism | Liquid-to-Liquid and Air-to-Air |
Rack Size | 600mm x 1068mm x 2295mm |
Cooling Capability | 100kW (80% heat dissipated through liquid) |
Computing Node | RU/OU 21" server compatible |
Redundancy | 1+1 Pumps, 2+1 PSUs |
Serviceability | Blind-mate mechanism; Front-access IT equipment; Slide-rail compatible; Hot-swappable pumps, PSUs, control board |
In our single-phase immersion cooling system, electronic components are submerged into a cooling tank that is filled with dielectric fluid. Heat is efficiently transferred from the IT equipment to the fluid circulating in the tank. Heat is transferred from the fluid to facility water through a heat exchanger.
Far more efficient cooling than any air-cooling solution.
High cooling capacity gives you more application choice.
The tank's modular design enables fast deployment, regardless of the data center's location.
All components are immersed in fluid, free of acoustic noise and vibration impact.
Edge data centers are located closer to the end-user and often in urban areas, with restricted amount of space. Ingrasys immersion cooling solution is an ideal option for edge computing deployment.
Compute-intensive and data-intensive ML/DL model training requires large data sets and high-performance GPUs. High-powered GPUs can be cooled more efficiently with liquid than with air. We provide solutions that reduce cooling power consumption, lowering data center OPEX.
Modular Oriented Single-Phase Tank | |
Cooling Mechanism | Single Phase |
Tank Size | 830mm x 4000mm x 1560mm (D x W x H, front height 1220mm) |
Cooling Capability | 120~180 kW |
Computing Node | RU / OU 21" server compatible |
Redundancy | 1+1 pumps |
Serviceability | Modular device maintenance, Top Access IT equipment, and Top-lid automation mechanism |