MK-03-FINKI – Macedonia national HPC system
The MK-03-FINKI cluster consists of 64 compute nodes built on HP BladeSystem c7000 technology. The solution comprises two BladeSystem chassis, each with 32 HP BL2x220c G7 blade server nodes. Each node has two six-core Intel Xeon L5640 CPUs (clocked at 2.267 GHz) and 24 GB of RAM. Connectivity is delivered by (a) QDR InfiniBand for interconnect and storage and (b) Gigabit Ethernet for service networking. The cluster was initially set up in 2012. Full hardware details are given in the tables below.
The cluster is managed by six service nodes: one dedicated to user access (the head node), one dedicated exclusively to service actions and cluster management, and four dedicated to storage. Storage is shared via the Lustre file system over InfiniBand, with two dedicated nodes for MGS/MDS and two dedicated nodes for OSS, all in HA mode. Service management is conducted through HP BladeSystem Onboard Administration, which allows remote administration and monitoring of all installed hardware. MK-03-FINKI is cooled by an HP Modular Cooling System; the cooling unit is placed between the self-cooled racks.
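The aggregate figures in the tables below follow directly from the per-node specification. As a quick sanity check (assuming 64 compute nodes, two six-core CPUs and 24 GB of RAM per node, and two hardware threads per core):

```python
# Sanity check of aggregate resources from the per-node specification.
nodes = 64          # compute nodes (2 chassis x 32 BL2x220c G7 nodes)
cpus_per_node = 2   # dual-socket Intel Xeon L5640
cores_per_cpu = 6
ram_per_node_gb = 24

total_cores = nodes * cpus_per_node * cores_per_cpu
max_processes = total_cores * 2   # 2 hardware threads per core (Hyper-Threading)
total_ram_gb = nodes * ram_per_node_gb

print(total_cores)    # 768, matching "Total number of CPU-cores"
print(max_processes)  # 1536, matching "Max number of parallel processes"
print(total_ram_gb)   # 1536 GB of aggregate RAM
```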
Administrative data
| Name | MK-03-FINKI |
| Short Description | Macedonia national HPC system |
| Owner | Ss. Cyril and Methodius University in Skopje |
| Country | Macedonia |
Dedication to VI-SEEM
| CPU (percent) | 5% |
| Storage (percent) | 10% |
| Accelerators (percent) | - |
| CPU (core-hours per year) | 336,384 |
| Storage (TB) | 1+1 |
| Accelerators (hours per year) | - |
Integration
| System operational since | Feb 2012 |
| Available to the project from | PM04 |
| Expected date to be phased out | - |
| Interfaces | SSH |
Computational Power
| Number of servers | 64 | Interconnect type | QDR InfiniBand | Peak performance (Tflops) | - |
| Server specification | HP BL2x220c blade servers | Interconnect latency | 2 μs | Real performance (Tflops) | - |
| CPU per server | 2 × six-core Intel Xeon L5640, 2.267 GHz | Interconnect bandwidth | 40 Gbps | Operating system | Scientific Linux |
| RAM per server | 24 GB | Local filesystem type | Lustre, NFS | Version | 5 |
| Total number of CPU-cores | 768 | Total storage (TB) | 4 TB, 5 TB | Batch system/scheduler | Torque/Maui |
| Max number of parallel processes | 1,536 | Peak performance CPU (Tflops) | 6.9 | Accelerators type | - |
| Development tools | GNU Compiler, MPICH-2 | | | | |
| Libraries | - | | | | |
| Applications | - | | | | |
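Since access is over SSH and scheduling uses Torque/Maui with MPICH-2 among the development tools, a minimal MPI job script might look like the sketch below. The queue name, application name, and process count are illustrative assumptions, not taken from the factsheet:

```shell
#!/bin/bash
#PBS -N test_job
#PBS -l nodes=2:ppn=12       # 2 nodes x 12 cores (dual six-core L5640)
#PBS -l walltime=01:00:00
#PBS -q batch                # queue name is an assumption

cd "$PBS_O_WORKDIR"
# MPICH-2 is listed among the development tools; my_mpi_app is a placeholder
mpiexec -n 24 ./my_mpi_app
```

Such a script would typically be submitted from the head node with `qsub`.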
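The 6.9 Tflops peak CPU figure in the table is consistent with the core count and clock rate, assuming 4 double-precision floating-point operations per cycle per core (the SSE throughput of the Westmere-based L5640 — this assumption is not stated in the factsheet):

```python
# Theoretical peak = cores x clock (GHz) x flops per cycle, in Gflops.
cores = 768
clock_ghz = 2.267
flops_per_cycle = 4  # assumed: 2 DP adds + 2 DP multiplies per cycle (SSE)

peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(round(peak_tflops, 2))  # ~6.96 Tflops, quoted as 6.9 in the table
```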