Name/Address

Hydra is made up of 100 dual-processor nodes and is managed by one HPE ProLiant XL225n Gen10 Plus head node.

This head node is accessed at the address hydra.ift.uam-csic.es, or as hydra0 from within the cluster.
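
Before attempting a login it can be handy to confirm that the head node answers at all. The short Python sketch below only checks TCP reachability; it assumes logins go over SSH on the standard port 22, which is not stated on this page.

    # Minimal reachability check for the Hydra head node (sketch, not an official tool).
    # Assumption: logins go over SSH on the default port 22.
    import socket

    HEADNODE = "hydra.ift.uam-csic.es"   # use "hydra0" when already inside the cluster
    SSH_PORT = 22                        # assumed default SSH port

    def headnode_reachable(host: str = HEADNODE, port: int = SSH_PORT, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port can be opened within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print(f"{HEADNODE} reachable:", headnode_reachable())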

Hardware Specifications


General

 

  • 52 dual-processor computation nodes Bullx® B410 (see node description below), distributed along three Bullx® Blade chassis:
    • 34 InfiniBand 4X host channel adapters (HCA), PCI-Express: QDR 40 Gb/s InfiniBand
  • 18 dual-processor computation nodes Bullx® B510 (see node description below), distributed along one Bullx® Blade chassis:
    • InfiniBand FDR 56 Gb/s PCIe Gen3
  • 4 nodes Fujitsu PRIMERGY CX250 S2 (see node description below), distributed along one Fujitsu PRIMERGY CX400 S2 chassis
  • 12 nodes HPE ProLiant XL225n Gen10 Plus (see node description below), distributed along three HPE Apollo n2600 Gen10 Plus SFF chassis:
    • 1 InfiniBand HDR switch with 40 ports

Backend node

  • Dual-processor Intel® Xeon® E5540 Nehalem quad-core at 2.53 GHz
  • 24 GB DDR3 RAM at 1066 MHz, Dual Rank
  • 2 hot-swap SATA disks of 250 GB at 7.2k rpm, with SAS/SATA RAID in mirroring
  • IB QDR ConnectX™ IB HCA card, single port 40 Gb/s InfiniBand, QSFP, PCIe 2.0 x8 5.0 GT/s, MemFree, tall bracket, RoHS (R5) compliant (Gen2 Eagle QDR, 1-port)
  • Remote management card AOC-SIMSO+ (IPMI 2.0 + SoL)
  • Fibre HBA card for connection to the Optima® 1500 storage system

 

Storage System

  • Storeway® Optima® 1500
  • 12 Fibre Channel disks of 1 TB configured in RAID 5 with a hot spare
  • Approx. 10 TB of disk space (see the capacity sketch below)
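
As a rough sanity check on the quoted figure, the usual RAID 5 arithmetic is sketched below: one disk equivalent is consumed by parity and one disk is withheld as a hot spare, so 12 disks of 1 TB leave about 10 TB of raw usable space (formatting overhead reduces this slightly in practice).

    # Rough RAID 5 capacity estimate for the Optima 1500 array (sketch only).
    TOTAL_DISKS = 12   # disks installed in the array
    DISK_TB = 1        # capacity per disk in TB
    HOT_SPARES = 1     # disks kept aside as hot spares

    # RAID 5 stores the equivalent of one disk as parity, so N-1 data disks remain.
    usable_tb = (TOTAL_DISKS - HOT_SPARES - 1) * DISK_TB
    print(f"Approximate usable capacity: {usable_tb} TB")   # prints 10 TB, matching the figure above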

 

Storage and Backup system (fully supported by the Comunidad de Madrid through project HEPHACOS)


  • Server bullx R426-E2 4U storage node
    • 2 Intel® Xeon® E5620 EP 4c/8t (2.40 GHz)
    • 24 GB DDR3-1333 ECC SDRAM (1x4GB) DR
    • 30 hard disks of 2 TB (60 TB of storage in total)

     

Storage and Backup system

  • Server Supermicro SuperStorage 6049P-E1CR36L
    • 2 Intel® Xeon® Silver 4208 EP 8c/16t (2.10 GHz)
    • 256 GB DDR4-2933 ECC SDRAM (8x32GB) DR
    • 24 hard disks of 12 TB (288 TB of storage in total)

       

Head node

  • Server Bull Novascale R423-E3
    • 2 Intel® Xeon® E5-2620 (2.00 GHz)
    • 32 GB of DDR3 RAM
    • 2 disks of 500 GB

Parallel Storage LUSTRE system

  • 2 MDS and 4 OSS Fujitsu PRIMERGY RX200 S8 servers
    • 2 Intel® Xeon® E5-2620 v2 6c/12t (2.10 GHz)
    • 128 GB DDR3-1600 ECC SDRAM (8x16GB) DR
    • InfiniBand HDR ports
  • 3 Fujitsu Eternus DX90 S2 disk storage systems
    • 20 hard disks of 450 GB at 15k rpm in RAID 10 for the MDT
    • 36 hard disks of 1 TB in 3 RAID 6 arrays for each OST (in a mirror configuration); approx. 54 TB of disk space
    • Fibre HBA cards for connection with the MDS and OSS servers
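
From a compute node it is easy to confirm that the LUSTRE filesystem described above is actually mounted. The sketch below simply scans /proc/mounts for entries of type "lustre"; the expected mount point shown is purely illustrative and not taken from this page.

    # List mounted Lustre filesystems on a Linux node (sketch).
    EXPECTED_MOUNT = "/lustre"   # hypothetical mount point, check the site documentation

    def lustre_mounts(proc_mounts: str = "/proc/mounts"):
        """Yield mount points whose filesystem type is 'lustre'."""
        with open(proc_mounts) as fh:
            for line in fh:
                device, mount_point, fstype, *_ = line.split()
                if fstype == "lustre":
                    yield mount_point

    if __name__ == "__main__":
        found = list(lustre_mounts())
        print("Lustre mounts:", found or "none found")
        print("Expected mount present:", EXPECTED_MOUNT in found)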

         

Compute nodes detailed description

52 nodes Bullx® B410

  • 34 nodes with a dual-processor Intel® Xeon® E5540 at 2.53 GHz, 18 nodes with a dual-processor Intel® Xeon® E5645 at 2.4 GHz
  • 24 GB DDR3 RAM
  • 120 GB solid-state disk
  • Integrated QDR InfiniBand card

         

9 dual-node Bullx® B510 (partially supported by the Comunidad de Madrid through project HEPHACOS)

  • 18 nodes with a dual-processor Intel® Xeon® E5-2640 at 2.5 GHz
  • 64 GB DDR3 RAM
  • 128 GB solid-state disk
  • Integrated FDR InfiniBand card

         

4 nodes Fujitsu PRIMERGY CX250 S2

  • 4 nodes with a dual-processor Intel® Xeon® E5-2650 v2 at 2.60 GHz
  • 32 GB DDR4 RAM
  • 500 GB solid-state disk
  • Integrated FDR InfiniBand card

         

12 nodes HPE ProLiant XL225n Gen10 Plus

  • 12 nodes with a dual-processor AMD® EPYC® 7552 at 2.2 GHz
  • 512 GB DDR4 RAM
  • Integrated HDR InfiniBand card
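
When a job lands on a node it is sometimes useful to know which of the four families above it belongs to. The heuristic sketch below guesses from the installed RAM reported in /proc/meminfo, using the memory sizes listed in this section as thresholds; hostname-based identification would be more reliable if the site uses a per-family naming scheme.

    # Guess the Hydra compute-node family from installed RAM (Linux only, heuristic sketch).
    def total_ram_gb(meminfo: str = "/proc/meminfo") -> float:
        """Return total installed RAM in GB as reported by the kernel."""
        with open(meminfo) as fh:
            for line in fh:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024 * 1024)   # kB -> GB
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    def guess_family(ram_gb: float) -> str:
        """Map RAM size to the node families listed above (thresholds are approximate)."""
        if ram_gb > 256:
            return "HPE ProLiant XL225n Gen10 Plus (512 GB)"
        if ram_gb > 48:
            return "Bullx B510 (64 GB)"
        if ram_gb > 28:
            return "Fujitsu PRIMERGY CX250 S2 (32 GB)"
        return "Bullx B410 (24 GB)"

    if __name__ == "__main__":
        ram = total_ram_gb()
        print(f"Detected {ram:.0f} GB RAM -> probably {guess_family(ram)}")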