[Header image: data center. Photo: C. Schmid / Hereon]

Equipment

The HPC cluster "Strand" consists of 185 computing nodes and 5 frontend servers, each with 48 cores. Additionally, 23 of the computing nodes are equipped with GPUs. The cluster's file system is connected to the central IT backup via an archive server. All nodes and servers are interconnected by an Intel® Omni-Path high-speed network with a bandwidth of up to 100 Gbit/s.

The combined maximum performance of all computing nodes was measured at 596 TFlops. Cluster utilization is 70% on a weekly average.
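For context, the quoted figure is consistent with the theoretical double-precision peak of the CPU nodes, assuming each Skylake core sustains 32 FLOPs per cycle (AVX-512 with two FMA units) at the 2.1 GHz base clock. This is a plausibility check, not a documented derivation:

\[
185 \,\text{nodes} \times 48 \,\frac{\text{cores}}{\text{node}} \times 2.1 \times 10^{9} \,\frac{\text{cycles}}{\text{s}} \times 32 \,\frac{\text{FLOPs}}{\text{cycle}} \approx 5.97 \times 10^{14} \,\frac{\text{FLOPs}}{\text{s}} \approx 596 \,\text{TFlops}
\]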

Computing nodes

The 185 available computing nodes are each equipped with two Intel® Xeon® Platinum 8160 processors (24 cores at 2.1 GHz). 153 of the nodes have 384 GB of RAM; the remaining 32 nodes each have 768 GB of RAM for particularly memory-intensive calculations. 23 of the computing nodes are additionally equipped with NVIDIA Tesla V100 GPUs with either 16 GB or 32 GB of GPU memory. Computing capacity is allocated via the SLURM workload manager; a submission sketch follows below.
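The following is a minimal sketch of how a user might submit work through SLURM from Python by calling the standard sbatch command line. The resource requests mirror the node specifications above; the application name, job settings, and the exact GRES label for GPUs are illustrative assumptions and depend on the cluster's actual SLURM configuration.

# Minimal sketch (not the cluster's documented workflow): submit a batch
# job to SLURM via the standard "sbatch" CLI. The binary "./my_simulation"
# and the GRES label "gpu" are illustrative assumptions.
import subprocess

def submit_job(command: str, nodes: int = 1, gpus_per_node: int = 0) -> str:
    """Submit `command` as a SLURM batch job; return sbatch's confirmation."""
    args = [
        "sbatch",
        "--nodes", str(nodes),
        "--ntasks-per-node", "48",  # one task per core on a 2 x 24-core node
        "--mem", "0",               # "--mem=0" requests all memory on each node
    ]
    if gpus_per_node:
        # GPU nodes carry NVIDIA Tesla V100 cards; sites may name the
        # GRES differently, e.g. "gpu:v100".
        args += ["--gres", f"gpu:{gpus_per_node}"]
    args += ["--wrap", command]     # wrap a single command instead of a script file
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout.strip()    # e.g. "Submitted batch job 123456"

if __name__ == "__main__":
    print(submit_job("srun ./my_simulation", nodes=4))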

Storage space

The file system is based on IBM Spectrum Scale (also known as GPFS). The total capacity currently comprises 2.5 PB, provided by 295 HDDs of 12 TB each and 20 SSDs of 3.2 TB each; the raw disk capacity of roughly 3.6 PB is presumably reduced to this figure by redundancy and metadata overhead.

The file system is divided into three areas:

/gpfs/home
/gpfs/work
/gpfs/scratch

114 TB are reserved for the /gpfs/home file system, and 171 TB serve as the /gpfs/scratch area for temporary data. The /gpfs/work area is used to store data for pre- and post-processing. All areas are connected to the servers with I/O speeds of up to 14 GB/s for processing computational data; a sketch of a typical staging pattern across these areas follows below.
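As an illustration of how these areas are typically used together, the sketch below checks the free space on each area and then moves finished results from scratch to the work area for post-processing. The per-user directory layout and the NetCDF file pattern are assumptions for illustration, not documented cluster policy.

# Illustrative sketch of a staging workflow across the three GPFS areas;
# the per-user subdirectories and "*.nc" result pattern are assumptions.
import shutil
from pathlib import Path

USER = "jdoe"  # hypothetical username
scratch = Path("/gpfs/scratch") / USER / "run_001"
work = Path("/gpfs/work") / USER / "run_001"

def report_free(mount: str) -> None:
    """Print free space on one of the GPFS areas."""
    usage = shutil.disk_usage(mount)
    print(f"{mount}: {usage.free / 1e12:.1f} TB free of {usage.total / 1e12:.1f} TB")

for area in ("/gpfs/home", "/gpfs/work", "/gpfs/scratch"):
    report_free(area)

# After a job finishes, keep only the results needed for post-processing
# in /gpfs/work; /gpfs/scratch holds temporary data only.
work.mkdir(parents=True, exist_ok=True)
for result in scratch.glob("*.nc"):  # e.g. NetCDF model output
    shutil.move(str(result), str(work / result.name))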

Archive

The entire file system is connected to the central IT backup via an archive server. Archived data is retained for 10 years in accordance with the rules of 'good scientific practice' and can be accessed at any time. In addition, incremental backups are made of the /gpfs/home area.