
Hardware computing facilities

The following systems are administered by the members of the HPCA group. If you would like to gain access to any of these systems, please contact the group leader.

TINTORRUM

Front-end for the following systems dedicated to HPC applications:

  • hexa
    16 nodes with two Intel Xeon E5645 hexa-core processors (total of 12 cores @ 2.40 GHz), connected via Infiniband QDR (Mellanox MTS3600 switch)
  • cuda
    9 nodes with two Intel Xeon E5520 quad-core processors (total of 8 cores @ 2.27 GHz) and a Tesla C2050 GPU, connected via Infiniband QDR (Mellanox MTS3600 switch)
  • s1070
    1 node with two Intel Xeon E5405 quad-core processors (total of 8 cores @ 2.00 GHz) and a Tesla S1070 GPU computing node (2 Tesla T10 processors)
  • s2050
    1 node with two Intel Xeon E5440 quad-core processors (total of 8 cores @ 2.83 GHz) and a Tesla S2050 GPU computing node (4 Tesla "Fermi" processors)
  • s870
    1 node with two Intel Xeon E5410 quad-core processors (total of 8 cores @ 2.33 GHz) and a Tesla S870 GPU computing node (4 Tesla G80 processors)

The cluster server (front-end) is a node with four Intel Xeon E5520 quad-core processors (total of 16 cores @ 2.27 GHz) and a Tesla C2050 GPU.
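
Most of these partitions pair their CPUs with NVIDIA Tesla accelerators, so a first step after logging into a node is to check which GPUs it actually exposes. Below is a minimal sketch using the standard CUDA runtime API, assuming the CUDA toolkit is installed on the node (plausible for this hardware, though not stated here); the file name and build line are illustrative only:

    /* list_gpus.cu -- enumerate the CUDA devices visible on the current node.
     * Build (illustrative): nvcc list_gpus.cu -o list_gpus */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* On a "cuda" node this should report a single Tesla C2050;
             * on the s1070/s2050/s870 hosts, the attached Tesla processors. */
            printf("Device %d: %s, %d multiprocessors, %.1f GiB of global memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }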


JOULE

Front-end for the following systems dedicated to energy-aware computing:

  • watts1 - 1 node with two Intel Xeon E5504 quad-core processors (total of 8 cores @ 2.00 GHz)
  • watts2 - 1 node with two AMD Opteron 6128 8-core processors (total of 16 cores @ 2.00 GHz)
  • watts3 - 1 node with four AMD Opteron 6172 12-core processors (total of 48 cores @ 2.10 GHz)
  • watts4 - 1 node with eight AMD Opteron 6276 8-core processors (total of 64 cores @ 1.40 GHz)

The cluster server (front-end) is a node with two AMD Opteron 6128 8-core processors (total of 16 cores @ 2.00 GHz).
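
The watts nodes span a wide range of core counts (from 8 on watts1 to 64 on watts4), which matters when pinning threads or sweeping core counts in energy experiments. A small sanity-check sketch, assuming the nodes run a POSIX system such as Linux (not stated on this page):

    /* cores.c -- report the number of online cores on the current watts node.
     * Per this page: watts1 -> 8, watts2 -> 16, watts3 -> 48, watts4 -> 64. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);
        if (cores < 1) {
            perror("sysconf");
            return 1;
        }
        printf("online cores: %ld\n", cores);
        return 0;
    }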


OTHER EQUIPMENT

The group also has some other small clusters for different purposes:

  • ENERSIS is the front-end for a cluster composed of 3 nodes, each with an Intel Xeon X3430 quad-core processor (total of 4 cores @ 2.40 GHz), connected via Gigabit Ethernet,
    and dedicated to development. The cluster server (front-end) is a node with an Intel Pentium 4 @ 3.20 GHz.
  • NEON is a cluster composed of 5 nodes, each with an Intel Xeon X3430 quad-core processor (total of 4 cores @ 2.40 GHz), connected via Gigabit Ethernet, and dedicated to cloud/virtualization technologies.
  • AIC is a cluster composed of 4 nodes. Three of them have two Intel Xeon L5530 quad-core processors each (total of 8 cores @ 2.40 GHz per node) and are connected to a GeForce GTX 690 GPU (2 Kepler processors), a GeForce 9800 GTX GPU, and a GeForce GTX 480 GPU. The remaining node is equipped with two Intel hexa-core processors (total of 12 cores @ 2.40 GHz). All nodes are connected via 10 Gigabit Ethernet, and two of the nodes are also connected via Infiniband QDR. The system is dedicated to the development of rCUDA.

SET

SET is an SGI Altix 350 CC-NUMA multiprocessor with 16 Intel Itanium2 processors:

  • 16 Intel Itanium2 processors @ 1.5 GHz
  • 6 MB of L3 cache memory
  • 32 GB of shared RAM memory
  • 8 nodes connected via a SGI NUMAlink network, with 2 processors per node
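
Unlike the clusters above, SET is a single shared-memory machine, so its 16 processors are used with threads over the common 32 GB address space rather than with message passing. A minimal OpenMP sketch, assuming an OpenMP-capable compiler is available on the machine (e.g. gcc -fopenmp; not stated on this page):

    /* shared_sum.c -- fill and reduce an array held in SET's shared memory. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const long n = 1L << 26;                 /* ~64M doubles, ~512 MB */
        double *a = malloc(n * sizeof *a);
        if (!a) return 1;

        double sum = 0.0;
        int nthreads = 1;
        #pragma omp parallel                     /* e.g. OMP_NUM_THREADS=16 to use every processor */
        {
            #pragma omp single
            nthreads = omp_get_num_threads();

            #pragma omp for                      /* first-touch spreads the pages across the NUMA nodes */
            for (long i = 0; i < n; ++i)
                a[i] = 1.0;

            #pragma omp for reduction(+:sum)
            for (long i = 0; i < n; ++i)
                sum += a[i];
        }
        printf("threads: %d, sum = %.0f (expected %ld)\n", nthreads, sum, n);
        free(a);
        return 0;
    }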