Computing Facilities

The following systems are administered by members of the HPCA group. If you would like to gain access to any of them, please contact the group leader.

NASP

Front-end running the SLURM resource manager, with queues dedicated to different kinds of HPC applications (see the batch-submission sketch after the queue listings):

Queue dedicated to multinode HPC applications (nodes n[00-07]):

  • 2 x Xeon 4210 (10 cores), 128 GB, 2 TB NVMe (FireCuda 510), 2 x 10 GbE, 1 x EDR InfiniBand (100 Gb/s)

Queue dedicated to GPU computing (nodes c[00-03]):

  • c00 => [ 2 x Xeon 4210R, 128 GB, 4 TB NVMe, 2 x 10 GbE, 1 x EDR ] + [ RTX 3090 + A100 (32 GB) ]
  • c01 => [ 2 x E5-2620 (8 cores), 32 GB, 1 TB, 1 x EDR (100 Gb/s) ] + P100
  • c02 => [ 2 x EPYC 7282 (16 cores), 256 GB, 2 TB SSD, 2 x 16 TB, 2 x 10 GbE, 1 x HDR (200 Gb/s) ] + 2 x A6000 (48 GB)
  • c03 => [ 2 x Xeon 6418H (24 cores), 2 TB, 2 x 1 TB SSD, 3 x 2 TB, 2 x 10 GbE ] + 2 x H100 (80 GB)

Queue dedicated to multicore HPC applications (nodes m[00-01]):


Older Systems

TINTORRUM

Front-end for the following systems dedicated to HPC applications:

  • hasw – 8 nodes with two Intel Xeon E5-2630 v3 octo-core processors (total of 16 cores @ 2.40 GHz)
  • hexa – 16 nodes with two Intel Xeon E5645 hexa-core processors (total of 12 cores @ 2.40 GHz), connected via Infiniband QDR (Mellanox MTS3600 switch)
  • xphi – 2 nodes with two Intel Xeon E5-2603 v3 hexa-core processors (total of 12 cores @ 1.60 GHz) and an Intel Xeon Phi
  • p100 – 1 node with two Intel Xeon E5-2620 v4 octo-core processors (total of 16 cores @ 2.10 GHz) and an NVIDIA Tesla P100 GPU with 16 GB

The cluster server (front-end) is a node with two Intel Xeon E5520 quad-core (total of 8 cores @ 2.27 GHz) and with a Tesla C2050 GPU.

There are also some old systems connected to TINTORRUM:

  • cuda – 9 nodes with two Intel Xeon E5520 quad-core processors (total of 8 cores @ 2.27 GHz) and a Tesla C2050 GPU, connected via Infiniband QDR (Mellanox MTS3600 switch)
  • s1070 – 1 node with two Intel Xeon E5405 quad-core processors (total of 8 cores @ 2.00 GHz) and a Tesla S1070 GPU computing node (2 Tesla T10 processors)
  • s2050 – 1 node with two Intel Xeon E5440 quad-core processors (total of 8 cores @ 2.83 GHz) and a Tesla S2050 GPU computing node (4 Tesla “Fermi” processors)
  • s870 – 1 node with two Intel Xeon E5410 quad-core processors (total of 8 cores @ 2.33 GHz) and a Tesla S870 GPU computing node (4 Tesla G80 processors)

JOULE

Front-end for the following systems dedicated to GPU and energy-aware computing:

  • wats-ivy – 1 node with one Intel Core i7-3770K quad-core processor (3.50 GHz) and an NVIDIA Titan X
  • wats-ivy2 – 1 node with one Intel Core i7-3930K hexa-core processor (3.20 GHz) and an NVIDIA Tesla K20c
  • wats-sandy – 1 node with two Intel Xeon E5-2620 hexa-core processors (total of 12 cores @ 2.10 GHz)
  • wats-sandy2 – 1 node with two Intel Xeon E5-2620 hexa-core processors (total of 12 cores @ 2.10 GHz) and an NVIDIA Tesla K20c

The cluster server (front-end) is a node with two AMD Opteron 6128 8-core processors (total of 16 cores @ 2.00 GHz).

There are also some old systems connected to JOULE:

  • watts1 – 1 node with two Intel Xeon E5504 quad-core processors (total of 8 cores @ 2.00 GHz)
  • watts2 – 1 node with two AMD Opteron 6128 8-core processors (total of 16 cores @ 2.00 GHz)
  • watts3 – 1 node with four AMD Opteron 6172 12-core processors (total of 48 cores @ 2.10 GHz)
  • watts4 – 1 node with eight AMD Opteron 6276 8-core processors (total of 64 cores @ 1.40 GHz)

SEB

Server for HPC computing:

  • seb – 1 node with two Intel Xeon E5-2695 v4 18-core processors (total of 36 cores @ 2.10 GHz)