
  • Community Forum Data-Intensive Computing (DIC)

    Support Teams for bwForCluster MLS&WISO, SDS@hd, and bwVisu
    8 May 2019

  • Common User Meeting

    bwForTreff
    - for users of bwHPC systems
    - for students and scientists interested in HPC

    SDS@hd user meeting
    - for users of the Scientific Data Storage SDS@hd
    - for students and scientists interested in data storage

    Community Forum Data-Intensive Computing (DIC)
    - Get status and news of the HPC and storage systems
    - Learn how to use both systems for different workflows
    - Networking with the support team and other users

  • Community Forum DIC

    Agenda
    - Scientific Data Storage SDS@hd
    - bwForCluster MLS&WISO
      - News
      - Job Efficiency
      - Container Environment
    - Announcements
      - Archive Service SDA@hd
    - Demos
      - Singularity Containers
      - Remote Visualization Service bwVisu
    - Discussion

  • bwHPC and bwHPC-S5


  • Baden-Württemberg Framework Program 2018-2024

    - Concept of the Universities of Baden-Württemberg for High Performance
      Computing (HPC), Data Intensive Computing (DIC), and Large Scale
      Scientific Data Management (LS2DM) ⇒ provides compute resources
    - bwHPC-S5: Scientific Simulation and Storage Support Services
      ⇒ provides user support

    Information on the website: http://www.bwhpc.de
    Best Practices Guide: http://www.bwhpc.de/wiki

  • bwHPC-S5 Project


  • bwHPC-S5 Support

    - Information seminars, hands-on training, HPC and storage workshops:
      http://www.bwhpc.de/courses_a_tutorials.html
    - Documentation & best practices repository: http://www.bwhpc.de/wiki
    - Providing/maintaining/developing:
      - simplified access to all bwHPC resources
      - cluster-independent & unified user environment
      - software portfolio
      - tools for data transfer and management
      - interface to archive and repository services
    - Migration support (see the sketch below):
      - code adaptation, e.g. MPI or OpenMP parallelization
      - code porting (from desktop or old HPC clusters)
      - migration to higher HPC levels
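    A minimal sketch of the kind of migration step meant here, assuming a
    typical module-based HPC environment (the module names and the program
    simulation.c are illustrative, not taken from the slides):

        # load a compiler and an MPI stack from the module system
        module load compiler/gnu mpi/openmpi
        # rebuild the code against the cluster's MPI installation
        mpicc -O2 -o simulation simulation.c
        # short test run with 4 MPI ranks before submitting real jobs
        mpirun -np 4 ./simulation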

  • SDS@hd – Scientific Data Storage

    https://sds-hd.urz.uni-heidelberg.de


  • Scientific Data Storage SDS@hd: Service Features and Storage Hardware

    - High-performance data access for large scientific "hot data"
    - Hardware parts: LSDF2 & LSDF2b
      - High availability, no single point of failure
      - Hardware spatially divided across different fire compartments
    - IBM Spectrum Scale Advanced (GPFS)
      - Current usable capacity: 5.8 PB net
      - Data security, encryption
    - Protocols: SMB, NFSv4, SSHFS, SFTP
    - Flexible group and access management
      - Collaborations among universities are possible
    - State-wide service ('Landesdienst') for scientists of Baden-Württemberg
      universities
      - in Heidelberg: a valid Uni-ID is necessary (if needed, via an unpaid
        engagement, 'unentgeltliche Tätigkeit')
    - Connection to other services: bwHPC, bwVisu, heiCLOUD

  • Scientific Data Storage SDS@hd: News

    Extension of the LSDF2 hardware:
    - The DFG proposal for LSDF2 Phase 2 (end of 2017) was granted!
    - Extension of the storage capacity to 20 PB!
    - The extension of the LSDF2 storage systems with the granted funding is
      currently being prepared and will lead to a public procurement process
      in 2019.
    - A further proposal will follow if more storage capacity is needed
      ⇒ reporting

  • Scientific Data Storage SDS@hd: Reporting

    The use of SDS@hd must be acknowledged in publications!
    - Highly important for DFG reviews and future funding of LSDF2
    - Attests the scientific necessity and usefulness through publications

    List of scientific publications:
    - Send information about new publications to
      sds-hd-reporting@urz.uni-heidelberg.de
    - Submit the list when applying for renewal of a storage project
      (Speichervorhaben, SV)

    Text for the acknowledgement in publications:
    "The authors gratefully acknowledge the data storage service SDS@hd
    supported by the Ministry of Science, Research and the Arts
    Baden-Württemberg (MWK) and the German Research Foundation (DFG) through
    grant INST 35/1314-1 FUGG."

    See: https://sds-hd.urz.uni-heidelberg.de/reporting

  • Scientific Data Storage SDS@hd: Registration

    Entitlement
    - Authorization to use the service
    - Issued by your home organization

    SDS@hd management
    - Register a storage project (Speichervorhaben, SV) or join an existing SV
    - Signatures are needed on the contracts
    - https://sds-hd.urz.uni-heidelberg.de/management

    Registration server
    - Register and manage your service account, e.g. set a personal service
      password
    - https://bwservices.uni-heidelberg.de

  • Scientific Data Storage SDS@hd: Costs

    - The investment was funded by MWK (50%) and DFG (50%) (Art. 91b GG);
      only operating costs have to be paid
    - Price: 0.25 ct/GB/month (0.0025 EUR/GB/month = 30 EUR/TB/year)
      - Heidelberg users: 0.1 ct/GB/month (12 EUR/TB/year, subsidized by
        central funds)
    - Measurement: average of the used quota
    - Backup is already included
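    An illustrative billing example (numbers chosen for the example, assuming
    1 TB = 1000 GB): an SV whose used quota averages 10 TB over a month is
    charged 10,000 GB × 0.0025 EUR/GB = 25 EUR for that month, i.e. 300 EUR
    per year at the standard rate, matching 10 TB × 30 EUR/TB/year.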

  • Scientific Data Storage SDS@hd: Billing

    - Costs have to be charged
    - Measurement: average of the used quota per month
    - Billing begins in 2019
    - Some cost centers ("Kostenstellen") on existing contracts are incorrect!
      Affected parties will be contacted for correction in the coming weeks.
    - Medical faculties:
      - Service registration is only for scientists of the universities,
        not for usage as a member of the university hospital
      - Register as a scientific member of the medical faculty
      - If you have a Uni-ID, everything is OK!

  • Scientific Data Storage SDS@hd: Data Access Protocols

    NFSv4 with Kerberos
    - Operating systems: Linux
    - Installation effort: one-time setup (Kerberos, ID mapping)
    - Possible bandwidth: up to 40 Gbit/s
    - Fault tolerance: highly available
    - Encryption (transfer): possible
    - Firewall ports: 2049 (nfs)
    - Recommended for: workstations, servers, microscopes, ...

    SMB 2.x, 3.x
    - Operating systems: Windows (since Vista), OS X Mavericks (10.9)
    - Installation effort: minimal (mapping of a network drive)
    - Possible bandwidth: up to 40 Gbit/s
    - Fault tolerance: highly available
    - Encryption (transfer): possible
    - Firewall ports: 445 (smb), 139 (netbios), 135 (rpc)
    - Recommended for: workstations, servers, microscopes, ...

    SFTP
    - Operating systems: all
    - Installation effort: installation of an SFTP client, e.g. sshfs, WinSCP,
      FileZilla
    - Possible bandwidth: up to 40 Gbit/s
    - Fault tolerance: –
    - Encryption (transfer): possible
    - Firewall ports: 22 (ssh)
    - Recommended for: Linux PCs, machines in restrictive or external networks
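    As a concrete example for the SFTP option, a hedged sketch of mounting an
    SV on a Linux PC with sshfs; <sftp-host>, the SV name sv_example, and the
    user ID hd_ab123 are placeholders, so take the real values from the SDS@hd
    documentation:

        mkdir -p ~/sds-hd                                # local mount point
        sshfs hd_ab123@<sftp-host>:/sv_example ~/sds-hd  # mount over SSH (port 22)
        ls ~/sds-hd                                      # SV data is now accessible
        fusermount -u ~/sds-hd                           # unmount when done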

  • bwForCluster MLS&WISO


  • bwForCluster MLS&WISO System Layout

    [System diagram: bwForCluster MLS&WISO consists of a Development part
    (IWR) at Heidelberg and a Production part (URZ/RUM) spread over Heidelberg
    and Mannheim, each with its own login and admin nodes. Node types:
    standard, best, cop, and fat. The two sites are coupled by a long-distance
    (28 km) InfiniBand connection (160 Gbit/s); node interconnects are IB-QDR
    and IB-FDR. The WORK and HOME file systems attach via 4x FDR, with a
    40 GE link to LSDF2.]

  • bwForCluster MLS&WISO Production

    Access prerequisites
    - bwForCluster entitlement for your LocalUserID from your home university
    - Membership in an RV (Rechenvorhaben, compute project) for
      bwForCluster MLS&WISO (Production), registered at ZAS

    Registration
    - Register at https://bwservices.uni-heidelberg.de
    - Choose "bwForCluster MLS&WISO - Production"
    - Set a service password

    Login
    - Login servers: bwfor.cluster.uni-mannheim.de or
      bwforcluster.bwservices.uni-heidelberg.de
    - User ID: SitePrefix + LocalUserID (= hd_UniID for Heidelberg users)
    - Password: the service password
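    For example, a Heidelberg user with the (hypothetical) Uni-ID ab123 would
    log in as follows, using either login server from the list above:

        ssh hd_ab123@bwfor.cluster.uni-mannheim.de
        # or equivalently:
        ssh hd_ab123@bwforcluster.bwservices.uni-heidelberg.de
        # authenticate with the service password set at bwservices.uni-heidelberg.de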


  • bwForCluster MLS&WISO Production: File Systems

                   $TMPDIR             $HOME      Workspaces          SDS@hd
    Visibility     node local          global     global              global (1)
    Lifetime       batch job walltime  permanent  workspace lifetime  permanent
    Disk space     128 GB              36 TB      384 TB              5.7 PB
    Quotas         no                  100 GB     no                  yes (2)
    Backup         no                  no         no                  yes (3)
    File system    xfs                 BeeGFS     BeeGFS              GPFS

    (1) on all compute nodes except standard nodes
    (2) disk quota set by SDS@hd per SV
    (3) done by SDS@hd
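    A minimal job-script sketch of how these file systems are typically
    combined: stage input to the fast node-local $TMPDIR, compute there, and
    persist results to a workspace before the job ends (the workspace name
    myproject, the file names, and the program are illustrative):

        WS=$(ws_find myproject)        # absolute path of an existing workspace
        cp "$WS/input.dat" "$TMPDIR/"  # stage input to node-local scratch
        cd "$TMPDIR"
        ./simulation input.dat         # compute on the local disk
        cp result.dat "$WS/"           # save results before $TMPDIR is purged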

  • SDS@hd on bwHPC clusters

    Cluster        Location             Connection Mode                                  Status
    bwUniCluster   Karlsruhe            via data mover rdata                             in production
    bwForCluster   Heidelberg/Mannheim  via data mover & direct access on compute nodes  in production
    bwForCluster   Tübingen             via data mover                                   in production
    bwForCluster   Ulm                  not needed                                       in evaluation
    bwForCluster   Freiburg             not needed                                       in evaluation
    ForHLR         Karlsruhe            via data mover rdata                             in production
    HazelHen       Stuttgart            in evaluation                                    in preparation
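    On clusters with direct access (here: bwForCluster MLS&WISO), an SV can be
    used like a local path from the compute nodes. A hedged sketch, where the
    mount point /mnt/sds-hd/sv_example is a placeholder for the actual path of
    your SV:

        # copy job results from the cluster to the SDS@hd storage project
        rsync -av results/ /mnt/sds-hd/sv_example/results/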

  • Workspaces

    What are workspaces?
    Workspaces are allocated folders with a limited lifetime.

    Workspace tools
    ws_allocate foo 10   Allocate a workspace named foo for 10 days.
    ws_list -a           List all your workspaces.
    ws_find foo          Get the absolute path of workspace foo.
    ws_extend foo 5      Extend the lifetime of workspace foo by 5 days.
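    A short end-to-end sketch of the workspace workflow (the workspace name
    run42 and the staged file are illustrative):

        ws_allocate run42 10    # create workspace "run42" for 10 days
        WS=$(ws_find run42)     # resolve its absolute path
        cp input.dat "$WS/"     # place job data inside the workspace
        ws_extend run42 5       # later: extend its lifetime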