Eden Cluster in 2025

Current configuration of the Eden Cluster.

November 01, 2025
Krzysztof Kaczmarski
Faculty of Mathematics and Information Science, Warsaw University of Technology

Faculty of Mathematics and Information Science HPC Computing Center

Description of the infrastructure

1 × NVIDIA DGX A100 supercomputer node — dgx-1

  • CPU: 2 × AMD Rome 7742 (128 physical cores total)
  • GPU: 8 × NVIDIA A100 40 GB
  • RAM: 2 TiB
  • Storage: 3.8 TiB + 15 TiB SSD
  • GPU interconnect: 200 Gb/s
  • Network: 100 Gb/s

3 × NVIDIA DGX A100 supercomputer nodes — dgx-[2–4]

  • CPU: 2 × AMD Rome 7742 (128 physical cores total)
  • GPU: 8 × NVIDIA A100 40 GB
  • RAM: 1 TiB
  • Storage: 3.8 TiB + 15 TiB SSD
  • GPU interconnect: 200 Gb/s
  • Network: 100 Gb/s

(Manufacturer datasheet: https://images.nvidia.com/aem-dam/Solutions/Data-Center/nvidia-dgx-a100-datasheet.pdf)

3 × Lenovo ThinkSystem SR665 nodes — sr-[1–3]

  • CPU: 2 × AMD EPYC 7413 (48 physical cores total)
  • RAM: 3 TiB
  • Storage: 56 TiB HDD
  • Network: 100 Gb/s

(Manufacturer datasheet: https://lenovopress.lenovo.com/lp1269-thinksystem-sr665-server)

1 × Lenovo ThinkSystem SR675 node — hopper

  • CPU: 2 × AMD EPYC 9534 (128 physical cores total)
  • GPU: 8 × NVIDIA H100 PCIe 80 GB
  • RAM: 1 TiB
  • Storage: 14 TiB HDD
  • Network: 100 Gb/s

(Manufacturer datasheet: https://lenovopress.lenovo.com/lp1611-thinksystem-sr675-v3-server)

1 × Dell PowerEdge C4130 node — pascal

  • CPU: 2 × Intel Xeon E5-2695 v4 @ 2.10 GHz (36 physical cores total)
  • GPU: 4 × NVIDIA Tesla P100 PCIe 16 GB
  • RAM: 0.5 TiB

(Manufacturer datasheet: https://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/Dell-PowerEdge-C4130-Spec-Sheet.pdf)
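Taken together, the compute nodes listed above can be summed into cluster-wide totals. The sketch below is a back-of-envelope aggregation with the per-node figures copied from the lists above; the 128-core figure for hopper assumes two fully populated 64-core EPYC 9534 sockets.

```python
# Aggregate compute resources of the Eden cluster. Per-node figures are taken
# from the hardware lists above; treat the result as an illustration, not an
# authoritative inventory.
nodes = [
    # (name, node count, GPUs/node, GPU memory per GPU [GB], CPU cores/node, RAM/node [TiB])
    ("dgx-1",    1, 8, 40, 128, 2.0),
    ("dgx-2..4", 3, 8, 40, 128, 1.0),
    ("sr-1..3",  3, 0,  0,  48, 3.0),
    ("hopper",   1, 8, 80, 128, 1.0),  # assumes 2 × 64-core EPYC 9534
    ("pascal",   1, 4, 16,  36, 0.5),
]

total_gpus       = sum(n * g     for _, n, g, _, _, _ in nodes)
total_gpu_mem_gb = sum(n * g * m for _, n, g, m, _, _ in nodes)
total_cores      = sum(n * c     for _, n, _, _, c, _ in nodes)
total_ram_tib    = sum(n * r     for _, n, _, _, _, r in nodes)

print(f"{total_gpus} GPUs, {total_gpu_mem_gb} GB GPU memory, "
      f"{total_cores} CPU cores, {total_ram_tib} TiB RAM")
```

Under these assumptions the cluster totals 44 GPUs across three generations (Pascal, Ampere, Hopper), roughly 2 TB of aggregate GPU memory, and over 800 physical CPU cores.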

Storage arrays: DDN SS9012 and DDN AI400X

  • Capacity: 1.6 PiB
  • DDN AI400X cache: 256 TiB
    • Write speed: 34 GB/s
    • Read speed: 48 GB/s
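To put the quoted AI400X throughput in perspective, the sketch below estimates transfer times at the full sequential rates. These are idealized lower bounds; real mixed workloads rarely sustain peak bandwidth.

```python
# Back-of-envelope transfer times at the quoted DDN AI400X peak rates.
# Assumes full sequential bandwidth, which concurrent or random-access
# workloads will not reach in practice.
TIB = 1024**4      # bytes in one TiB
READ_BPS = 48e9    # 48 GB/s (decimal gigabytes, as quoted above)
WRITE_BPS = 34e9   # 34 GB/s

def read_seconds(size_bytes: float) -> float:
    return size_bytes / READ_BPS

def write_seconds(size_bytes: float) -> float:
    return size_bytes / WRITE_BPS

# Streaming a 1 TiB dataset end to end:
print(f"read 1 TiB:  {read_seconds(TIB):.1f} s")   # ~22.9 s
print(f"write 1 TiB: {write_seconds(TIB):.1f} s")  # ~32.3 s
```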

2 × virtualization servers — Lenovo ThinkSystem SR645

  • CPU: 2 × AMD EPYC 7413 (48 physical cores total)
  • RAM: 251 GiB
  • OS: Proxmox VE
  • Virtualization: KVM
  • Containerization: LXC

Switches

  • Mellanox QM8700
    • 40 ports
    • 200 Gb/s InfiniBand
  • Mellanox SN2700
    • 32 ports
    • 100 Gb/s Ethernet
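Note that the switch and NIC rates above are quoted in bits per second, while the storage rates are in bytes per second; a quick conversion makes them comparable. The figures below are raw line rates and ignore protocol framing overhead.

```python
# Link rates are quoted in bits per second; divide by 8 for bytes per second.
# Raw line rates only -- InfiniBand and Ethernet framing each shave a few
# percent off the usable throughput in practice.
def gbps_to_gbyte_per_s(gbps: float) -> float:
    return gbps / 8

print(gbps_to_gbyte_per_s(200))  # InfiniBand fabric link: 25.0 GB/s
print(gbps_to_gbyte_per_s(100))  # Ethernet link: 12.5 GB/s
```

So a single 200 Gb/s InfiniBand link (25 GB/s) moves data at roughly half the storage array's quoted peak read rate of 48 GB/s.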

The cluster was funded by:

  • Polish Ministry of Education and Science
  • POB Cybersecurity and Data Analysis of the Warsaw University of Technology, within the Excellence Initiative: Research University (IDUB) programme
  • Faculty of Mathematics and Information Science
  • Warsaw University of Technology
