The Bibliotheca Alexandrina (BA) has been operating a High-Performance Computing (HPC) cluster since August 2009. The goal of this initiative is to provide the computational resources needed for modern scientific research across various domains, offered as a merit-based service to researchers locally and regionally. In late 2016, the cluster was upgraded to accommodate rising demand. The current cluster consists of 82 compute nodes providing a total of 1,968 CPU cores, each with access to 5.33 GB of RAM, in addition to 16 accelerated compute nodes, each with an NVIDIA Tesla K80 dual GPU. Storage for input and output data is provided by a Lustre file system hosted on storage hardware with a total raw capacity of 288 TB. The cluster is wired with 40-Gbps QDR InfiniBand. The BA-HPC participated in the LinkSCEEM-2 project and continues to participate in joint calls with the Cy-Tera cluster operated by the Cyprus Institute. The majority of usage on the system comes from projects by researchers at Egyptian universities. In the VI-SEEM project, the BA is dedicating approximately 1.8 million core hours yearly to hosting applications granted access to HPC resources through VI-SEEM. In addition, 100 TB on the BA large-scale storage cluster are being dedicated to the VI-SEEM project.
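
The headline figures above are internally consistent; a quick back-of-the-envelope check, assuming each of the 82 compute nodes has 24 cores and 128 GB of RAM (the quoted totals imply this, though it is not stated explicitly):

```python
# Back-of-the-envelope check of the BA-HPC headline figures.
# Assumption: 24 cores and 128 GB RAM per compute node (implied by totals).
nodes = 82
total_cores = 1968
ram_per_node_gb = 128

cores_per_node = total_cores // nodes        # 1968 / 82 = 24 cores per node
ram_per_core_gb = ram_per_node_gb / cores_per_node

print(cores_per_node)              # 24
print(round(ram_per_core_gb, 2))   # 5.33 -- matches the "5.33 GB of RAM" per core
```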

Technical Specification

Administrative Data
Short description: The Bibliotheca Alexandrina High-Performance Computing Cluster
Owner: Bibliotheca Alexandrina
Country: Egypt
Computational Power
Number of servers: 98
Server specification: Huawei FusionServer X6800
CPUs per server: 2
RAM per server: 128 GB
Total number of CPU cores: 1,968
Max number of parallel processes: 3,926
Interconnect type: QDR InfiniBand
Interconnect latency (μs):
Interconnect bandwidth: 40 Gbps
Local filesystem type: Lustre
Total storage (TB): 288
Accelerator type: NVIDIA Tesla K80 (dual GPU)
Number of accelerator cores:
Accelerators per server: 2
Servers equipped with accelerators: 16
Peak performance, CPU (Tflops): 88.14
Peak performance, accelerators (Tflops): 29.92
Peak performance, total (Tflops): 118.06
Real performance (Tflops): 79.32 (approx. 90% of the 88.14 Tflops CPU peak)
Operating system: CentOS
Version: 6.8
Batch system/scheduler: Open Grid Scheduler
Development tools: GCC, Intel
Libraries: managed via EasyBuild
Applications: managed via EasyBuild
Dedication to VI-SEEM
CPU (percent): 20%
Storage (percent): 2.5%
Accelerators (percent): 10%
CPU (core-hours per year): 3,447,936 (up from 1.8M before the upgrade; approx. from 2.36 to 17.63 Tflops for one year)
Storage (TB): 7.2 (2.5% of the 288 TB raw capacity)
Accelerators (hours per year): 28,032 (up from 0 before the upgrade; approx. 2.99 Tflops for one year)
System operational since: Aug 2009
Available to the project from: PM04
Expected date system to be phased out: N/A
Interfaces: SSH
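
The dedication figures in the table follow directly from the quoted percentage shares, assuming round-the-clock availability (365 days of 24 hours, i.e. 8,760 hours per year); a minimal arithmetic check:

```python
# Derive the VI-SEEM dedication figures from the cluster totals.
# Assumes 24/7 availability: 365 * 24 = 8,760 hours per year.
HOURS_PER_YEAR = 365 * 24

cpu_core_hours = round(1968 * HOURS_PER_YEAR * 0.20)   # 20% of 1,968 cores
gpu_hours = round(16 * 2 * HOURS_PER_YEAR * 0.10)      # 10% of 32 K80 boards
storage_tb = round(288 * 0.025, 1)                     # 2.5% of 288 TB raw

print(cpu_core_hours)   # 3447936
print(gpu_hours)        # 28032
print(storage_tb)       # 7.2
```

The sustained-performance notes in the table follow the same shares: 20% of the 88.14 Tflops CPU peak is approx. 17.63 Tflops, and 10% of the 29.92 Tflops accelerator peak is approx. 2.99 Tflops.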