The NorduGrid Project
Oxana Smirnova, Lund University
November 3, 2003, Košice


Page 1: The NorduGrid Project. Oxana Smirnova, Lund University. November 3, 2003, Košice.


Page 2:

Some facts

NorduGrid is:
- A Globus-based Grid middleware solution for Linux clusters
- A large international 24/7 production-quality Grid facility
- A resource routinely used by researchers since summer 2002
- Freely available software
- A project in development

NorduGrid is NOT:
- Derived from other Grid solutions (e.g. EU DataGrid)
- An application-specific tool
- A testbed anymore
- A finalized solution

Page 3:

Some history

- Initiated by several Nordic universities: Copenhagen, Lund, Stockholm, Oslo, Bergen, Helsinki
- Started in January 2001
  - Initial budget: 2 years, 3 new positions
  - Initial goal: to deploy EU DataGrid middleware to run the "ATLAS Data Challenge"
- Cooperation with EU DataGrid
  - Common Certification Authority and Virtual Organization tools, Globus2 configuration
  - Common applications (high-energy physics research)
- Switched from deployment to R&D in February 2002
  - Forced by the necessity to execute the "ATLAS Data Challenges"
  - Deployed a light-weight yet reliable and robust Grid solution in time for the ATLAS DC tests in May 2002
- Will continue for another 4-5 years (and more?)
- Forms the "North European Grid Federation" together with the Dutch Grid, Belgium and Estonia
- Will provide middleware for the "Nordic Data Grid Facility", as well as for the Swedish Grid facility SWEGRID, the Danish Center for Grid Computing, Finnish Grid projects, etc.

Page 4:

The resources

- Almost everything the Nordic academics can provide (ca. 1000 CPUs in total):
  - 4 dedicated test clusters (3-4 CPUs)
  - Some junkyard-class second-hand clusters (4 to 80 CPUs)
  - A few university production-class facilities (20 to 60 CPUs)
  - Two world-class clusters in Sweden, listed in the Top500 (238 and 398 CPUs)
- Other resources come and go
  - Canada, Japan: test set-ups
  - CERN, Dubna: clients
  - It is open so far; anybody can join or leave
  - Number of other installations unknown
- People: the "core" team keeps growing; local sysadmins are only called upon when users need an upgrade

Page 5:

Who needs Grid

- NorduGrid relies on academic resources of various ownership: national HPC centers, universities, research groups
- All parts of this "spectrum" are interested in Grid development, though for different reasons
- At this stage, accounting is very vague, if done at all

[Figure: schematic of Grid technology linking Resources and Users; the resource supply/demand ratio stays around 1±ε]

Page 6:

Middleware

- In order to build a Grid from a set of geographically distributed clusters you need:
  - Secure authentication and authorization (see the example below)
  - Access to information about available resources
  - Fast and reliable file transfers
- These services are provided by the so-called middleware
- Most Grid projects have built their middleware using the Globus Toolkit 2 as a starting point
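As an illustration of the authentication step, a Globus-based Grid like NorduGrid uses short-lived proxy certificates derived from the user's personal certificate. A minimal sketch using the standard Globus Toolkit 2 client commands (no NorduGrid-specific options assumed):

    # Create a short-lived proxy certificate from the user's Grid certificate
    # (user certificates are issued by the common Certification Authority)
    grid-proxy-init
    # Inspect the proxy, e.g. to check its remaining lifetime
    grid-proxy-info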

Page 7:

Components

[Figure: overview diagram of the NorduGrid components, including the Information System]

Page 8:

NorduGrid specifics

1. It is stable by design:
   a) The nervous system: a distributed yet stable Information System (Globus MDS 2.2 + patches)
   b) The heart(s): the Grid Manager, the service to be installed at master nodes (based on Globus, replaces GRAM)
   c) The brain(s): the User Interface, the client/broker that can be installed anywhere as a standalone module (makes use of Globus)

2. It is light-weight, portable and non-invasive:
   a) Resource owners retain full control; the Grid Manager is effectively just another user (with many faces, though)
   b) Nothing has to be installed on worker nodes
   c) No requirements w.r.t. OS, resource configuration, etc.
   d) Clusters need not be dedicated
   e) Runs on top of an existing Globus installation (e.g. VDT)
   f) Works with any Linux flavor, Solaris, Tru64

3. Strategy: start with something simple that works for users and add functionality gradually

Page 9:

How does it work

- The Information System knows everything
  - A substantially re-worked and patched Globus MDS
  - Distributed and multi-rooted
  - Allows for a pseudo-mesh topology
  - No need for a centralized broker
- The server (the "Grid Manager") on each gatekeeper does most of the job
  - Pre- and post-stages files
  - Interacts with the LRMS
  - Keeps track of job status
  - Cleans up the mess
  - Sends mails to users
- The client (the "User Interface") does the brokering, Grid job submission, monitoring, termination, retrieval, cleaning, etc.
  - Interprets the user's job task
  - Gets the testbed status from the Information System
  - Forwards the task to the best Grid Manager
  - Does some file uploading, if requested

Page 10:

Information System

- Uses Globus MDS 2.2
- Soft-state registration allows creation of any dynamic structure
- Multi-rooted tree
- GIIS caching is not used by the clients
- Several patches and bug fixes are applied
- A new schema has been developed to describe clusters
- Clusters are expected to be fairly homogeneous
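Since the Information System is plain LDAP underneath, any LDAP client can query it. A hypothetical query against a front-end's local information tree (host name, base DN and object class below are illustrative; 2135 is the usual Globus MDS port):

    # List the cluster objects published by a front-end's local MDS server
    ldapsearch -x -h grid.example.org -p 2135 \
        -b 'mds-vo-name=local,o=grid' \
        '(objectClass=nordugrid-cluster)'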

Page 11:

Front-end and the Grid Manager

- The Grid Manager replaces Globus' GRAM, while still using Globus Toolkit 2 libraries
- All transfers are made via GridFTP
- Added the possibility to pre- and post-stage files, optionally using Replica Catalog information
- Caching of pre-staged files is enabled
- Runtime environment support (a hypothetical sketch follows below)
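As a sketch of what runtime environment support can look like (an assumption for illustration, not the verbatim NorduGrid mechanism): a site advertises an environment name such as ATLAS-6.0.2 in the Information System and backs it with a small shell script that the Grid Manager uses to set up the job's environment. All paths and variable names below are invented:

    #!/bin/sh
    # Hypothetical runtime-environment script for "ATLAS-6.0.2"
    export ATLAS_ROOT=/opt/atlas/6.0.2
    export PATH=$ATLAS_ROOT/bin:$PATH
    export LD_LIBRARY_PATH=$ATLAS_ROOT/lib:$LD_LIBRARY_PATH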

Page 12:

Summary of Grid services on the front-end machine

- GridFTP server
  - Plugin for job submission via a virtual directory
  - Conventional file access with Grid access control
- LDAP server for information services
- Grid Manager
  - Forks "downloaders" and "uploaders" for file transfer
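To make the front-end services concrete, a hypothetical front-end could expose endpoints like the following (the host name is invented; 2811 and 2135 are the usual Globus defaults for GridFTP and MDS, and are site-configurable):

    gsiftp://grid.example.org:2811/jobs/...   # job submission and job directories (virtual directory plugin)
    gsiftp://grid.example.org:2811/<path>     # conventional file access, under Grid access control
    ldap://grid.example.org:2135/             # information services (MDS)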

Page 13:

The User Interface

- Provides a set of utilities to be invoked from the command line (listed below)
- Contains a broker that polls the MDS and decides to which queue at which cluster a job should be submitted:
  - The user must be authorized to use the cluster and the queue
  - The cluster's and queue's characteristics must match the requirements specified in the xRSL string (max CPU time, required free disk space, installed software, etc.)
  - If the job requires a file that is registered in a Replica Catalog, the brokering gives priority to clusters where a copy of the file is already present
  - From all queues that fulfill the criteria, one is chosen randomly, with a weight proportional to the number of free CPUs available for the user in each queue (see the worked example below)
  - If there are no available CPUs in any of the queues, the job is submitted to the queue with the lowest number of queued jobs per processor
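For example, if three matching queues report 2, 3 and 5 free CPUs for the user, the broker would pick them with probabilities 2/10, 3/10 and 5/10, respectively.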

- ngsub: submit a task
- ngstat: obtain the status of jobs and clusters
- ngcat: display the stdout or stderr of a running job
- ngget: retrieve the result from a finished job
- ngkill: cancel a job request
- ngclean: delete a job from a remote cluster
- ngrenew: renew the user's proxy
- ngsync: synchronize the local job info with the MDS
- ngcopy: transfer files to, from and between clusters
- ngremove: remove files
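A minimal, hypothetical session built from these utilities (option names and the job identifier placeholder are illustrative, not taken from the slides):

    grid-proxy-init          # obtain a valid Grid proxy first
    ngsub -f myjob.xrsl      # submit; the built-in broker picks a cluster and queue
    ngstat <jobid>           # poll the job status
    ngcat <jobid>            # watch the stdout of the running job
    ngget <jobid>            # retrieve the results once the job has finished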

Page 14:

Job Description: extended Globus RSL

    (&(executable="recon.gen.v5.NG")
      (arguments="dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra"
                 "dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.ntuple"
                 "eg7.602.job" "999")
      (stdout="dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.log")
      (stdlog="gridlog.txt")
      (join="yes")
      (|(&(|(cluster="farm.hep.lu.se")
            (cluster="lscf.nbi.dk")
            (*cluster="seth.hpc2n.umu.se"*)
            (cluster="login-3.monolith.nsc.liu.se"))
          (inputfiles=
            ("dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra"
             "rc://grid.uio.no/lc=dc1.lumi02.002000,rc=NorduGrid,dc=nordugrid,dc=org/zebra/dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra")
            ("recon.gen.v5.NG" "http://www.nordugrid.org/applications/dc1/recon/recon.gen.v5.NG.db")
            ("eg7.602.job" "http://www.nordugrid.org/applications/dc1/recon/eg7.602.job.db")
            ("noisedb.tgz" "http://www.nordugrid.org/applications/dc1/recon/noisedb.tgz")))
        (inputfiles=
          ("dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra"
           "rc://grid.uio.no/lc=dc1.lumi02.002000,rc=NorduGrid,dc=nordugrid,dc=org/zebra/dc1.002000.lumi02.01101.hlt.pythia_jet_17.zebra")
          ("recon.gen.v5.NG" "http://www.nordugrid.org/applications/dc1/recon/recon.gen.v5.NG")
          ("eg7.602.job" "http://www.nordugrid.org/applications/dc1/recon/eg7.602.job")))
      (outputFiles=
        ("dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.log"
         "rc://grid.uio.no/lc=dc1.lumi02.recon.002000,rc=NorduGrid,dc=nordugrid,dc=org/log/dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.log")
        ("histo.hbook"
         "rc://grid.uio.no/lc=dc1.lumi02.recon.002000,rc=NorduGrid,dc=nordugrid,dc=org/histo/dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.histo")
        ("dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.ntuple"
         "rc://grid.uio.no/lc=dc1.lumi02.recon.002000,rc=NorduGrid,dc=nordugrid,dc=org/ntuple/dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602.ntuple"))
      (jobname="dc1.002000.lumi02.recon.007.01101.hlt.pythia_jet_17.eg7.602")
      (runTimeEnvironment="ATLAS-6.0.2")
      (CpuTime=1440)
      (Disk=3000)
      (ftpThreads=10))

Page 15:

Task flow

[Figure: task-flow diagram: the user's RSL job description goes to a cluster front-end (Gatekeeper/GridFTP), the Grid Manager on the front-end handles it, and the job is dispatched to the chosen cluster (cluster B in the picture)]

Page 16:

A snapshot

Page 17:

Performance

- The main load: the "ATLAS Data Challenge 1" (DC1), the major load from May 2002 to August 2003
  - DC1, phase 1 (detector simulation):
    - Total number of jobs: 1300, ca. 24 hours of processing and 2 GB of input each
    - Total output size: 762 GB
    - All files uploaded to Storage Elements and registered in the Replica Catalog
  - DC1, phase 2 (pile-up of data):
    - Piling up the events above with a background signal
    - 1300 jobs, ca. 4 hours each
  - DC1, phase 3 (reconstruction of signal):
    - 2150 jobs, 5-6 hours of processing and 1 GB of input each
- Other applications:
  - Calculations for string fragmentation models (Quantum Chromodynamics)
  - Quantum lattice model calculations (a sustained load of 150+ long jobs at any given moment, for several days)
  - Particle physics analysis and modeling
  - Biology applications
- At peak production, up to 500 jobs were managed by NorduGrid at the same time

Page 18:

What is needed for installation

- A cluster, or even a single machine
- For a server:
  - Any Linux flavor (binary RPMs exist for RedHat and Mandrake, possibly for Debian)
  - A local resource management system, e.g. PBS
  - A Globus installation (NorduGrid has its own distribution in a single RPM)
  - A host certificate (and user certificates)
  - Some open ports (how many depends on the cluster size)
  - One day to go through all the configuration details (see the sketch below)
- The owner always retains full control
  - Installing NorduGrid does not give automatic access to the resources, and the other way around
  - But with a bit of negotiation, one can get access to very considerable resources on a very good network
- The current stable release is 0.3.30; daily CVS snapshots are available
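For illustration, a hypothetical server installation on a RedHat-style machine might look as follows (package and file names are placeholders; the actual RPM set and configuration steps are documented with the release):

    rpm -Uvh globus-*.rpm        # the single-RPM Globus distribution shipped by NorduGrid
    rpm -Uvh nordugrid-*.rpm     # Grid Manager, GridFTP job-submission plugin, information providers, etc.
    # then: install the host certificate from the Certification Authority,
    # point the configuration at the local PBS installation,
    # and open the required ports in the firewall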

Page 19:

Summary

- The NorduGrid pre-release (currently 0.3.30) works reliably
- Release 1.0 is slowly but surely on its way; many fixes are still needed
- Developers are welcome: much functionality is still missing, such as:
  - Bookkeeping, accounting
  - Group- and role-based authorization
  - A scalable resource discovery and monitoring service
  - Interactive tasks
  - Integrated, scalable and reliable data management
  - Interfaces to other resource management systems
- New users and resources are welcome