PARADOX – Serbian supercomputing cluster


The fourth major upgrade of the PARADOX installation (PARADOX IV) became operational in September 2013. This upgrade consists of 106 worker nodes and 3 service nodes. Worker nodes (HP ProLiant SL250s Gen8, 2U height) are configured with two Intel Xeon E5-2670 8-core Sandy Bridge processors running at 2.6 GHz and 32 GB of RAM (2 GB per CPU core). The total number of new processor cores in the cluster is 1,696. Each worker node also contains a GP-GPU card (NVIDIA Tesla M2090) with 6 GB of RAM. With a total of 106 NVIDIA Tesla M2090 graphics cards, PARADOX is a premier computing resource in the wider region, providing access to a large production GPU cluster and new technology. The peak computing power of PARADOX is 105.8 TFlops. Other technical information is provided in the table below.
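
As a simple illustration of the per-node GPU configuration, the following sketch queries the accelerator from a worker node through the CUDA runtime API (the file name deviceinfo.cu is our own; it assumes only that the CUDA toolkit is available on the node):

// deviceinfo.cu -- minimal sketch: report the GPU installed in a worker node.
// Possible build: nvcc deviceinfo.cu -o deviceinfo
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found on this node.\n");
        return 1;
    }
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);          // each worker node hosts a single GPU (device 0)
    std::printf("Device 0: %s\n", prop.name);   // expected: Tesla M2090
    std::printf("Global memory: %.1f GB\n",
                prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    std::printf("Multiprocessors: %d\n", prop.multiProcessorCount);
    return 0;
}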

One service node (HP DL380p Gen8), equipped with a 10 Gbps uplink, is dedicated to cluster management and user access (gateway machine). All cluster nodes are interconnected via InfiniBand QDR technology through a non-blocking 144-port Mellanox QDR InfiniBand switch. The communication speed of all nodes is 40 Gbps in both directions, a qualitative step forward over the previous (Gigabit Ethernet) PARADOX installation. Cluster administration is enabled by an independent network connection through the iLO (Integrated Lights-Out) interface integrated on the motherboards of all nodes.
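
To illustrate how the nodes are typically used together over the InfiniBand fabric, the sketch below starts one MPI rank per worker node and binds each rank to the node's single GPU. It is an assumption-based example (the file name mpi_gpu_hello.cpp and the build line are ours, not taken from PARADOX documentation), using the OpenMPI and CUDA stacks listed under Development tools:

// mpi_gpu_hello.cpp -- hypothetical sketch: one MPI rank per worker node,
// each bound to the node's single Tesla M2090 (device 0).
// Possible build (paths and flags depend on the local OpenMPI/CUDA installation):
//   mpicxx mpi_gpu_hello.cpp -o mpi_gpu_hello -I${CUDA_HOME}/include -L${CUDA_HOME}/lib64 -lcudart
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);

    cudaSetDevice(0);                    // one accelerator per node, so every rank uses device 0
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    std::printf("Rank %d of %d on %s uses GPU: %s\n", rank, size, host, prop.name);

    MPI_Finalize();
    return 0;
}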

The PARADOX cluster is installed in four water-cooled racks. The cooling system consists of 4 cooling modules (one within each rack), which are connected via a system of pipes to a large industrial chiller and configured so as to minimize power consumption.

Administrative data

Name: PARADOX
Short description: Serbian supercomputing cluster
Owner: Institute of Physics Belgrade (IPB)
Country: Serbia


Dedication to VI-SEEM

CPU (percent): 5%
Storage (percent): 10%
Accelerators (percent): 5%
CPU (core-hours per year): 742,848
Storage (TB): 10
Accelerators (hours per year): 47,542,272

Integration

System operational since: Sep 2013
Available to the project from: PM01
Expected date to be phased out: -
Interfaces: SSH, GridFTP


Computational Power

Number of servers: 106
Server specification: HP ProLiant SL250s Gen8
CPUs per server: 2
RAM per server: 32 GB
Total number of CPU cores: 1,696
Max number of parallel processes: 3,392
Interconnect type: QDR InfiniBand
Interconnect latency: 1.15 μs
Interconnect bandwidth: 40 Gbps
Local filesystem type: Lustre
Total storage: 96 TB
Operating system: Scientific Linux 6.4 (Carbon)
Batch system/scheduler: Torque/Maui
Peak performance: 105.8 TFlops
Peak performance, CPU: 35.3 TFlops
Peak performance, accelerators: 70.5 TFlops
Real performance: -
Accelerator type: NVIDIA Tesla M2090
Servers equipped with accelerators: 106
Accelerators per server: 1
Cores per accelerator: 512
Development tools: Intel Compilers (C/C++, Fortran), Portland Group Compilers (Fortran/C/C++ with accelerator support for CUDA, OpenACC, OpenCL), GNU Compilers, OpenMPI, CUDA, TotalView, Scalasca, TAU, gprof, gdb, pgdbg, Program Database Toolkit, ANTLR3 C
Libraries: Intel MKL, HDF5, FFTW, NetCDF, GSL, LAPACK, Boost, BLAS
Applications: Gromacs, NAMD, Desmond, VMD, AgroTagger
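
As an example of how the toolchain above can be exercised, the following SAXPY kernel is a minimal sketch of a user code built with the listed CUDA toolkit (the file name saxpy.cu and the build line are illustrative only; -arch=sm_20 matches the Fermi-class Tesla M2090):

// saxpy.cu -- minimal CUDA example: y = a*x + y on the node's GPU.
// Possible build: nvcc -arch=sm_20 saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);

    std::printf("y[0] = %.1f (expected 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

On PARADOX such a binary would normally be launched through the Torque/Maui batch system rather than run interactively; the exact queue names and submission options are outside the scope of this overview.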