The ALICE Framework at GSI Kilian Schwarz ALICE Meeting August 1, 2005.
The ALICE Framework at GSI
Kilian Schwarz
ALICE Meeting
August 1, 2005
Overview
- ALICE framework
- What parts of the ALICE framework are installed where at GSI, and how can they be accessed/used
- ALICE Computing model (Tier architecture)
- Resource consumption of individual tasks
- Resources at GSI and GridKa
ALICE Framework
(diagram of the AliRoot framework, after F. Carminati, CERN)
- Foundation: ROOT, with AliEn for Grid access
- AliRoot with the STEER module on top
- Virtual MC interfacing the transport codes G3, G4 and FLUKA
- Event generators: HIJING, MEVSIM, PYTHIA6, ISAJET, EVGEN, HBTP, HBTAN
- Detector modules: CRT, EMCAL, ZDC, FMD, ITS, MUON, PHOS, PMD, TRD, TPC, TOF, STRUCT, START, RICH
- RALICE
Software installed at GSI: AliRoot
- Installed at: /d/alice04/PPR/AliRoot
- Newest version: AliRoot v4-03-03
- Environment setup via:
  > . gcc32login
  > . alilogin dev/new/pro/version-number
- gcc295-04 is not supported anymore
- The corresponding ROOT version is initialized, too
- Responsible person: Kilian Schwarz
Software installed at GSI: ROOT
(AliRoot is heavily based on ROOT)
- Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr
- Newest version: 502-00
- Environment setup via: > . gcc32login, then alilogin or rootlogin
- Responsible persons: Joern Adamczewski / Kilian Schwarz
- See also: http://www-w2k.gsi.de/root
Software installed at GSI: geant3
(needed for simulation; accessed via VMC)
- Installed at: /d/alice04/alisoft/PPR/geant3
- Newest version: v1-3
- Environment setup via gcc32login/alilogin
- Responsible person: Kilian Schwarz
Software at GSI: geant4/Fluka
(simulation; accessed via VMC)
- Both so far not heavily used by ALICE
- Geant4: standalone versions up to G4.7.1
- Newest VMC version: geant4_vmc_1.3
- Fluka: not installed so far
- Environment setup via: > . gsisimlogin [-vmc] dev/new/prod/version
- See also: http://www-linux.gsi.de/~gsisim/g4vmc.html
- Responsible person: Kilian Schwarz
Software at GSI: event generators
(task: simulation)
- Installed at: /d/alice04/alisoft/PPR/evgen
- Available:
  - Pythia5
  - Pythia6
  - Venus
- Responsible person: Kilian Schwarz
Software at GSI: AliEn
The ALICE Grid Environment
- Currently being set up as version 2 (AliEn2)
- Installed at: /u/aliprod/alien
- Idea: global production and analysis
- Environment setup via: . .alienlogin
- Copy certificates from /u/aliprod/.globus or register your own
- Usage: /u/aliprod/bin/alien (proxy-init/login)
- Then: register files and submit Grid jobs
- Or: directly from ROOT!
- Status: global AliEn2 production testbed currently being set up; will be used for LCG SC3 in September
- Individual analysis of globally distributed Grid data at the latest during LCG SC4 (2006) via AliEn/LCG/PROOF
- Non-published analysis is possible already now:
  - create an AliEn-ROOT collection (an XML file readable via AliEn)
  - analyse via ROOT/PROOF (TFile::Open("alien://alice/cern.ch/production/…"))
  - web frontend being created via ROOT/QT
- Responsible person: Kilian Schwarz
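The collection step above can be sketched with a short stand-alone Python snippet that writes such an XML file. Note that the tag and attribute names below are illustrative assumptions, not the actual AliEn collection schema, and the logical file name is a placeholder.

```python
import xml.etree.ElementTree as ET

# Build a toy event-file collection as an XML document.
# NOTE: the element names ("collection", "event", "file", "lfn") are
# illustrative assumptions, not the real AliEn collection format.
def write_collection(name, lfns, path):
    root = ET.Element("collection", name=name)
    for i, lfn in enumerate(lfns, start=1):
        event = ET.SubElement(root, "event", name=str(i))
        ET.SubElement(event, "file", lfn=lfn)
    ET.ElementTree(root).write(path)

# Hypothetical logical file name, for illustration only.
write_collection(
    "pp-test",
    ["/alice/cern.ch/production/run1/galice.root"],
    "collection.xml",
)
```

A file written this way can then be fed to whatever tool expects the collection, or opened entry by entry from a ROOT session.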
AliEn2 services
(see http://alien.cern.ch)
(diagram: ALICE VO – site services integration)
- ALICE VO – central services: central task queue, job submission, file catalogue, configuration, accounting, user authentication, workload management, job monitoring, storage element(s) DB, data transfer
- AliEn site services: computing element, storage element, cluster monitor
- Existing site components: local scheduler, disk and MSS
Software at GSI: Globus
- Installed at: /usr/local/globus2.0 and /usr/local/grid/globus
- Versions: globus2.0 and 2.4
- Idea: can be used to send batch jobs to GridKa (far more resources available than at GSI)
- Environment setup via: . globuslogin
- Usage:
  > grid-proxy-init (Grid certificate needed!)
  > globus-job-run/submit alice.fzk.de Grid/Batch job
- Responsible persons: Victor Penso / Kilian Schwarz
GermanGrid CA
How to get a certificate in detail:
See http://wiki.gsi.de/Grid/DigitalCertificates
Software at GSI: LCG
- Installed at: /usr/local/grid/lcg
- Newest version: LCG2.5
- Idea: global batch farm
- Environment setup: . lcglogin
- Usage:
  > grid-proxy-init (Grid certificate needed!)
  > edg-job-submit batch/grid job (jdl-file)
- See also: http://wiki.gsi.de/Grid
- Responsible persons: Victor Penso, Anar Manafov, Kilian Schwarz
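The JDL file passed to edg-job-submit describes the job in the EDG Job Description Language. A minimal sketch might look like the following; the script name run_sim.sh is a placeholder, not a file that actually exists at GSI:

```
Executable    = "/bin/sh";
Arguments     = "run_sim.sh";
StdOutput     = "std.out";
StdError      = "std.err";
InputSandbox  = {"run_sim.sh"};
OutputSandbox = {"std.out", "std.err"};
```

The input sandbox ships the script to the worker node; the output sandbox brings stdout and stderr back with edg-job-get-output.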
LCG: the LHC Grid Computing project
(with ca. 11k CPUs the world's largest Grid testbed)
Software at GSI: PROOF
- Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr
- Newest version: ROOT 502-00
- Idea: parallel analysis of larger data sets for quick/interactive results
- Personal PROOF cluster at GSI, integrated in the batch farm, can be set up via
  > prooflogin <parameters> (e.g. number of slaves, data to be analysed, -h (help))
- See also: http://wiki.gsi.de/Grid/TheParallelRootFacility
- Later: personal PROOF cluster including GSI and GridKa via Globus possible
- Later: global PROOF cluster via AliEn/D-Grid possible
- Responsible persons: Carsten Preuss, Robert Manteufel, Kilian Schwarz
Parallel Analysis of Event Data
(diagram: a local PC running "$ root" with ana.C and stdout/obj connects via TFile/TNetFile to a remote PROOF cluster; a master server coordinates slave servers on node1-node4, each analysing its local *.root files)

Local session:
$ root
root [0] tree.Process("ana.C")

PROOF session:
$ root
root [0] tree.Process("ana.C")
root [1] gROOT->Proof("remote")
root [2] dset->Process("ana.C")

#proof.conf
slave node1
slave node2
slave node3
slave node4
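The PROOF idea sketched above, in which a master splits an event data set across slave servers and merges the partial results, can be illustrated with a small stand-alone Python toy. This is only a model of the master/slave pattern, not the ROOT/PROOF API; the threshold analysis stands in for whatever ana.C would do.

```python
from multiprocessing import Pool

# Toy "ana.C": analyse one packet of events and return a partial result
# (here: the number of events above an energy threshold).
def analyse(packet):
    return sum(1 for energy in packet if energy > 5.0)

def process_dataset(events, n_slaves=4):
    # The "master" splits the data set into one packet per slave ...
    packets = [events[i::n_slaves] for i in range(n_slaves)]
    with Pool(n_slaves) as pool:
        partial = pool.map(analyse, packets)  # slaves work in parallel
    # ... and merges the partial results, as the PROOF master does.
    return sum(partial)

if __name__ == "__main__":
    events = [float(i % 10) for i in range(1000)]
    print(process_dataset(events))  # → 400
```

Scaling out then amounts to adding slave lines to proof.conf rather than changing the analysis code.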
LHC Computing Model
(Monarc and Cloud)
- One Tier 0 site at CERN for data taking. ALICE (Tier 0+1) in 2008: 500 TB disk (8%), 2 PB tape, 5.6 MSI2k (26%)
- Multiple Tier 1 sites for reconstruction and scheduled analysis. ALICE: 3 PB disk (46%), 3.3 PB tape, 9.1 MSI2k (42%)
- Tier 2 sites for simulation and user analysis. ALICE: 3 PB disk (46%), 7.2 MSI2k (33%)
ALICE Computing model in more detail:
- T0 (CERN): long-term storage of raw data, calibration and first reconstruction
- T1 (5, in Germany GridKa): long-term storage of a second copy of the raw data, 2 subsequent reconstructions, scheduled analysis tasks, reconstruction of MC Pb-Pb data, long-term storage of data processed at T1s and T2s
- T2 (many, in Germany GSI): generate and reconstruct simulated MC data, and chaotic analysis
- T0/T1/T2: short-term storage of multiple copies of active data
- T3 (many, in Germany Münster, Frankfurt, Heidelberg, GSI): chaotic analysis
CPU requirements and event size

CPU per event                  p-p (kSI2k·s/ev.)   Heavy ion (kSI2k·s/ev.)
Reconstruction                        5.4              68
Scheduled analysis                     15             230
Chaotic analysis                      0.5             7.5
Simulation (ev. creation and rec.)    350           15000 (2-4 hours on a standard PC)

Event size (MB)      Raw     ESD     AOD      Raw MC   ESD MC
p-p                   1      0.04    0.004     0.4      0.04
Heavy ion            12.5    2.5     0.25     300       2.5
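The "2-4 hours on a standard PC" figure for heavy-ion simulation follows directly from the table. Assuming a standard 2005-era PC rated at roughly 1-2 kSI2k (an assumption; the slide does not state the machine's rating):

```python
# Heavy-ion simulation cost from the table: 15000 kSI2k·s per event.
cost_ksi2k_s = 15000.0

# Assumed rating of a "standard PC"; the slide does not give one,
# 1-2 kSI2k is a plausible range for a 2005-era machine.
for pc_ksi2k in (1.0, 2.0):
    hours = cost_ksi2k_s / pc_ksi2k / 3600.0
    print(f"{pc_ksi2k} kSI2k PC: {hours:.1f} h per event")
```

A 2 kSI2k machine needs about 2.1 h per event and a 1 kSI2k machine about 4.2 h, matching the quoted 2-4 hour range.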
ALICE Tier resources

                        Tier0   Tier1s   Tier2s   Total
CPU (MSI2k)              7.5     13.8     13.7     35.0
Disk (PB)                0.1      7.5      2.6     10.2
Tape (PB)                2.3      7.5       -       9.8
Bandwidth in (Gb/s)     10        2        0.01
Bandwidth out (Gb/s)     1.2      0.02     0.6
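As a quick consistency check on the table above, the Total column equals the sum over the three tiers (with Tier2s holding no tape); a short Python snippet verifies this from the table's numbers:

```python
# Rows of the ALICE Tier resources table: (Tier0, Tier1s, Tier2s, Total)
rows = {
    "CPU (MSI2k)": (7.5, 13.8, 13.7, 35.0),
    "Disk (PB)":   (0.1,  7.5,  2.6, 10.2),
    "Tape (PB)":   (2.3,  7.5,  0.0,  9.8),  # Tier2s have no tape
}
for name, (t0, t1s, t2s, total) in rows.items():
    assert abs(t0 + t1s + t2s - total) < 1e-9, name
print("all totals consistent")
```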
GridKa (1 of 5 T1s)
T1 sites: IN2P3, CNAF, GridKa, NIKHEF, (RAL), Nordic, USA (effectively ~5)
Ramp-up: due to shorter runs and reduced luminosity at the beginning, the full resources are not needed at once: 20% in 2007, 40% in 2008, 100% by the end of 2008

              Status 2005     +2006   +2007   +2008   +2009   Total 2009
CPU (kSI2k)       243           57     300     600    1800       3000
Disk (TB)          28 (50%      12     160     200     600       1000
                   used)
Tape (TB)          56           24     220     300     900       1500
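The Total 2009 column is simply the 2005 status plus the yearly additions, which a short snippet can check against the table:

```python
# GridKa ALICE resources: 2005 status plus yearly additions,
# and the quoted 2009 total, per resource type.
plan = {
    "CPU (kSI2k)": ([243, 57, 300, 600, 1800], 3000),
    "Disk (TB)":   ([28, 12, 160, 200, 600],   1000),
    "Tape (TB)":   ([56, 24, 220, 300, 900],   1500),
}
for name, (parts, total) in plan.items():
    assert sum(parts) == total, name
print("ramp-up plan adds up")
```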
GSI + T3
(support for the 10% German ALICE members)

              Status 2005                  +2006  +2007  +2008  +2009  Total 2009
CPU (kSI2k)   64 Dual P4, 20 Dual P3        --     --    400    130   530 (800) + 500 T3
              (80 Dual Opteron newly bought)
Disk (TB)     2.23 (0.3 free); 15 TB new    --     --    200     30   230 + 100 T3
Tape (TB)     190 (100 used)                --     --    500    500   1000
T3: Münster, Frankfurt, Heidelberg, GSI