General AI Training Server - Nvidia H200

CPU: 2× Intel 8580 Xeon (2.1GHz, 48C) / 2× Intel 8558 Xeon (2.1GHz, 48C)

Memory: 32× 64GB 2RX4 DDR5-4800 RDIMM (Total 2TB Memory)

Storage: 1× 960GB NVMe

Data Disks: 1× 3.84TB NVMe

GPU: NVIDIA H200 8× 141GB HGX Hopper 8-GPU module

Network: 1× Mellanox ConnectX-7 2-Port NDR200, 8× Mellanox ConnectX-7 400Gb/s single-port, 1× Dual-port 25Gb SFP28 network card, fully equipped with 25G multimode optical modules

Power Supply: 16A national standard plug power cable

System: Configured with BIOS and other options set to high-performance mode

General AI Training Server - Nvidia H100

CPU: 2× Intel 8480+ Xeon (2.0GHz, 56C) / 2× Intel 8558 Xeon (2.1GHz, 48C)

Memory: 32× 64GB 2RX4 DDR5-4800 RDIMM

Storage: 2× 960GB M.2

Data Disks: 4× 3.84TB NVMe

GPU: NVIDIA H100 8× 80GB HGX Hopper 8-GPU module

Network: 4× HCA single-port NDR CX7 400G network cards, 1× Single-port 25G network card, 1× Dual-port Gigabit network card (IPMI), 1× Single-port 200Gb/s HDR high-speed network card (QSFP interface)

Power Supply: N+N redundant power supply / 16A national standard plug power cable (or 16A European standard plug power cable)

Domestic AI Training Server - Teco T100*8

CPU: 2× Intel 8480+ Xeon (2.0GHz, 56C) / 2× Intel 8358 Xeon (2.6GHz, 32C)

Memory: 64× 32GB 2RX4 DDR5-4800 RDIMM / 32× 32GB 2RX4 DDR5-4800 RDIMM

Storage: 2× 960GB NVMe

Data Disks: 4× 7.68TB NVMe / 4× 3.84TB NVMe

GPU: 8× Taichu Yuanji T100 AI accelerator cards (FP16 performance of at least 300 TFLOPS)

Network: 1× 12Gb dual-port SAS RAID card, 1× HCA single-port NDR network card, 1× Single-port 25G network card, 1× Dual-port Gigabit network card (IPMI)

Power Supply: N+N redundant power supply

Basic Environment Setup

Provides installation of the server operating system, the CANN toolkit, and related firmware and drivers, with the following default configuration:

  • Operating System: Ubuntu 22.04
  • Kernel Version: 5.15.0-43-generic
  • GPU Driver: 550.54.15
  • OFED: MLNX_OFED_LINUX-5.9-0.5.6.0 (OFED-5.9-0.5.6)
  • CUDA: 12.4
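The default software stack listed above can be verified after delivery. The sketch below is illustrative only: it encodes the listed versions as acceptance targets and compares them against values reported by the server; the `check_stack` helper and its report format are hypothetical, not part of the delivered tooling.

```python
# Illustrative acceptance check for the default software stack.
# EXPECTED mirrors the configuration listed above; check_stack()
# is a hypothetical helper, not part of the delivered tooling.

EXPECTED = {
    "os": "Ubuntu 22.04",
    "kernel": "5.15.0-43-generic",
    "gpu_driver": "550.54.15",
    "ofed": "MLNX_OFED_LINUX-5.9-0.5.6.0",
    "cuda": "12.4",
}

def check_stack(reported: dict) -> list[str]:
    """Compare reported component versions against the baseline.

    Returns a list of human-readable mismatch messages; an empty
    list means the stack matches the default configuration.
    """
    mismatches = []
    for component, expected in EXPECTED.items():
        actual = reported.get(component)
        if actual != expected:
            mismatches.append(
                f"{component}: expected {expected!r}, got {actual!r}"
            )
    return mismatches

# Example: a server reporting an older CUDA toolkit.
report = dict(EXPECTED, cuda="12.2")
for line in check_stack(report):
    print(line)  # cuda: expected '12.4', got '12.2'
```

In practice the `reported` values would come from commands such as `uname -r`, `nvidia-smi`, `ofed_info -s`, and `nvcc --version`.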

Rack Mounting Service

Provides server rack mounting and commissioning services to ensure smooth connection to the production network environment. Includes cabling services so the equipment is ready for use out of the box.

Basic Model Deployment Service

  • Server environment debugging
  • Deployment of DeepSeek-R1/V3 models
  • Implementation of a private security protection system
  • 24/7 remote technical team support
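Deployed DeepSeek-R1/V3 models are commonly served behind an OpenAI-compatible HTTP endpoint. A minimal client sketch follows, assuming such an endpoint; the URL, port, and model name `deepseek-r1` are placeholder assumptions to be replaced with the values from the actual deployment.

```python
import json
import urllib.request

# Minimal sketch of querying a locally deployed DeepSeek model through
# an OpenAI-compatible chat-completions endpoint. ENDPOINT and the
# default model name are assumptions; substitute your deployment's values.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build an OpenAI-style chat-completion request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the endpoint and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same payload shape works with any OpenAI-compatible serving stack; only the endpoint URL and model name change per deployment.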
AI Training Server

EvoCompute provides flexible AI computing solutions to meet diverse needs.

From the most powerful to the most affordable configurations, EvoCompute's integrated solutions accelerate AI innovation by balancing performance, flexibility, and reliability.