Detector Description Framework in LHCb – Sébastien Ponce, CERN
CERN – June 2007
View of the ATLAS detector (under construction)
150 million sensors deliver data …
… 40 million times per second
The ATLAS full trigger rate is 780 MB/s, shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year.
The 'Tier-0 exercise' of the ATLAS Distributed Data Management project started in June 2007.
6 August 2007: the first PetaByte of simulated data was copied to Tier-1s worldwide.
(*) ASGC in Taiwan, BNL in the USA, CNAF in Italy, FZK in Germany, CC-IN2P3 in France, NDGF in Scandinavia, PIC in Spain, RAL in the UK, SARA in the Netherlands and TRIUMF in Canada.
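A quick back-of-the-envelope check makes these figures concrete. The sketch below assumes roughly 10^7 seconds of effective data-taking per year, a common rule of thumb that is not stated on the slide:

```python
# Rough check of the yearly data volume implied by the trigger rate above.
# Assumes ~1e7 s of effective data-taking per year (our assumption).
trigger_rate_mb_s = 780        # ATLAS full trigger rate, MB/s
live_seconds_per_year = 1e7    # assumed effective data-taking time per year

total_pb = trigger_rate_mb_s * live_seconds_per_year / 1e9   # 1 PB = 1e9 MB
print(f"~{total_pb:.1f} PB/year")               # ~7.8 PB, consistent with 'around 8 PB'
print(f"~{total_pb / 10:.2f} PB/year per site") # shared among the 10 Tier-1 sites
```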
Computing Model: central operations
• Tier-0:
  - Copy RAW data to Castor tape for archival
  - Copy RAW data to Tier-1s for storage and reprocessing
  - Run first-pass calibration/alignment (within 24 hrs)
  - Run first-pass reconstruction (within 48 hrs)
  - Distribute reconstruction output (ESDs, AODs & TAGS) to Tier-1s
  - Keep current versions of ESDs and AODs on disk for analysis
• Tier-1s:
  - Store and take care of a fraction of RAW data
  - Run “slow” calibration/alignment procedures
  - Rerun reconstruction with better calib/align and/or algorithms
  - Distribute reconstruction output to Tier-2s
• Tier-2s:
  - Run simulation
  - Run calibration/alignment procedures
  - Keep current versions of AODs on disk for analysis
  - Run user analysis jobs
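For quick reference, the same division of labour can be written down as a simple mapping; this is only an illustrative restatement of the list above, not part of any ATLAS software:

```python
# Illustrative restatement of the Tier-0/1/2 responsibilities listed above;
# the structure and names are ours and do not belong to any ATLAS tool.
TIER_RESPONSIBILITIES = {
    "Tier-0": [
        "archive RAW data on Castor tape",
        "copy RAW data to Tier-1s",
        "first-pass calibration/alignment (within 24 h)",
        "first-pass reconstruction (within 48 h)",
        "distribute ESDs, AODs and TAGS to Tier-1s",
        "keep current ESDs and AODs on disk",
    ],
    "Tier-1": [
        "store a fraction of the RAW data",
        "run slow calibration/alignment procedures",
        "rerun reconstruction with improved calib/align or algorithms",
        "distribute reconstruction output to Tier-2s",
    ],
    "Tier-2": [
        "run simulation",
        "run calibration/alignment procedures",
        "keep current AODs on disk",
        "run user analysis jobs",
    ],
}

for tier, tasks in TIER_RESPONSIBILITIES.items():
    print(tier, "->", "; ".join(tasks))
```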
Dario Barberis: ATLAS Activities at Tier-2s
Tier-2 Workshop - 12-14 June 2006
Computing Model and Resources
• The ATLAS Computing Model is still the same as in the Computing TDR (June 2005) and basically the same as in the Computing Model document (Dec. 2004) submitted for the LHCC review in January 2005.
• The sum of 30-35 Tier-2s will provide ~40% of the total ATLAS computing and disk storage capacity:
  - CPUs for full simulation productions and user analysis jobs (on average 1:2 for central simulation and analysis jobs)
  - Disk for AODs, samples of ESDs and RAW data, and most importantly for selected event samples for physics analysis
• We do not ask Tier-2s to run any particular service for ATLAS beyond providing the Grid infrastructure (CE, SE, etc.); all data management services (catalogues and transfers) are run from Tier-1s.
• Some “larger” Tier-2s may choose to run their own services instead of depending on a Tier-1; in this case, they should contact us directly.
• Depending on local expertise, some Tier-2s will specialise in one particular task, such as calibrating a very complex detector that needs special access to particular datasets.
ATLAS Analysis Work Model
1. Job preparation: Local system (shell) – prepare JobOptions, run Athena (interactive or batch), get output.
2. Medium-scale testing: Local system (Ganga) – prepare JobOptions, find dataset from DDM, generate & submit jobs; Grid – run Athena; Local system (Ganga) – job book-keeping, access output from the Grid, merge results.
3. Large-scale running: Local system (Ganga) – prepare JobOptions, find dataset from DDM, generate & submit jobs; ProdSys – run Athena on the Grid, store output on the Grid; Local system (Ganga) – job book-keeping, get output.
Analysis jobs must run where the input data files are, as transferring data files from other sites may take longer than actually running the job.
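The data-locality argument is easy to quantify. The numbers below are purely illustrative assumptions (a 500 GB input dataset, a 20 MB/s sustained wide-area transfer rate, a 10-million-event sample processed at 1000 events per second); none of them come from the slides:

```python
# Illustrative comparison: moving the data to the job vs. running the job
# where the data already sits. All numbers are assumptions, not from the slides.
dataset_gb = 500          # size of the input dataset
wan_rate_mb_s = 20        # sustained wide-area transfer rate
events = 10_000_000       # events in the sample
events_per_s = 1_000      # processing rate of one job slot

transfer_h = dataset_gb * 1024 / wan_rate_mb_s / 3600
processing_h = events / events_per_s / 3600
print(f"transfer ~{transfer_h:.1f} h, processing ~{processing_h:.1f} h")
# With these assumptions the copy alone takes ~7 h against ~2.8 h of actual
# processing, which is why jobs are sent to the data rather than the reverse.
```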
Annex 3.3. Tier-2 Services
… The following services shall be provided by each of the Tier-2 Centres in respect of the LHC Experiments that they serve …
i. provision of managed disk storage providing permanent and/or temporary data storage for files and databases;
ii. provision of access to the stored data by other centres of the WLCG and by named AFs as defined in paragraph 1.4 of this MoU;
iii. operation of an end-user analysis facility;
iv. provision of other services, e.g. simulation, according to agreed Experiment requirements;
v. ensure network bandwidth and services for data exchange with Tier-1 Centres, as part of an overall plan agreed between the Experiments and the Tier-1 Centres concerned.
All storage and computational services shall be “grid enabled” according to standards agreed between the LHC Experiments and the regional centres.
The following parameters define the minimum levels of service. They will be reviewed by the operational boards of the WLCG Collaboration.
Austrian Grid: Grid Computing Infrastructure Initiative for Austria
Business Plan (Phase 2)
Jens Volkert, Bruno Buchberger (Universität Linz), Dietmar Kuhn (Universität Innsbruck)
March 2007
Austrian Grid II = supported project: 5.4 M€
Contribution by groups from other sources: 5.1 M€
Total: 10.5 M€
Structure: Research Center, Development Center, Service Center
+ 19 work packages: 1 administration, 10 basic research, 8 integrated applications
[Organisation chart: extended project management; project management (coordinator J. Volkert, deputies B. Buchberger and D. Kuhn); Austrian Grid Research, Development and Service Centers, each with its own leader; PAK; project office; EU; Ministry; project coordination committee; integrated applications; other applications. PAK and PMB representatives: D. Kranzlmüller (VR G. Kotsis), W. Schreiner, Th. Fahringer.]
                         2007   2008   2009   2010   Total
CPU (kSI2k), Vienna       400    100    100    100     700
CPU (kSI2k), Innsbruck     20     20     20      0      60
Disk (TB), Vienna          80     10     10      0     100
Disk (TB), Innsbruck       10     10      0      0      20
Bandwidth (Gb/s)            1      -      -      -       1
Cost estimate: 1.060 M€
Infrastructure and manpower to be provided by CIS of participating Institutions
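The totals in the pledge table above can be re-derived directly from the yearly figures; a minimal check:

```python
# Re-adding the yearly pledges from the table above (values as listed there).
pledges = {
    "CPU Vienna (kSI2k)":    [400, 100, 100, 100],
    "CPU Innsbruck (kSI2k)": [20, 20, 20, 0],
    "Disk Vienna (TB)":      [80, 10, 10, 0],
    "Disk Innsbruck (TB)":   [10, 10, 0, 0],
}
for name, per_year in pledges.items():
    print(f"{name}: {sum(per_year)} in total")
# 700 kSI2k, 60 kSI2k, 100 TB and 20 TB, matching the 'Total' column.
```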
The C-MoU is still not signed by Austria, but there is light at the end of the tunnel: a proposal for a national federated Tier-2 (ATLAS+CMS) in Vienna was accepted in 2008.
Austrian Grid Phase II (2007 – 2009)
Launching project! Expected to be sustainable after 2010
Personnel: 70 man-years in total, 15 for the Service Center and 4.5 for the federated Tier-2, i.e. 5 FTE for the Service Center and 1.5 FTE for the federated Tier-2 (Innsbruck). Vienna is expected to use presently vacant positions (estimate 1.5 FTE, too).
34 k€/FTE/yr (51 k€/yr for the Tier-2 effort), i.e.:
Hardware: 1,053 k€
Personnel: 153 k€
Total: 1,206 k€
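These figures are internally consistent; a quick cross-check (our arithmetic, assuming the 1.5 federated-Tier-2 FTE are funded for the three project years 2007-2009):

```python
# Cross-check of the personnel cost quoted above (our arithmetic).
cost_per_fte_keur_per_year = 34   # 34 k€/FTE/yr as stated
tier2_fte = 1.5                   # federated Tier-2 effort (Innsbruck)
years = 3                         # assumed: 2007-2009 project duration

per_year = tier2_fte * cost_per_fte_keur_per_year   # 51 k€/yr, as quoted
personnel_total = per_year * years                  # 153 k€, as quoted
grand_total = 1053 + personnel_total                # hardware + personnel
print(per_year, personnel_total, grand_total)       # 51.0 153.0 1206.0 (k€)
```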
Formalities:
Fördervertrag (funding contract): signed January 2008 at the Ministry
Konsortialvertrag (consortium agreement): to be signed March 6?
C-MoU: to be signed soon …
This graph shows a snapshot of data throughput at peak operation to the Tier-1 centres. Each bar represents the average throughput over 10 minutes, and the colours represent the 10 Tier-1 centres. The average throughput is fairly stable at around 600 MB/s, the equivalent of around 1 CD of data being shipped out of CERN every second. The rates we currently observe on average are equivalent to around 1 PetaByte per month, close to the final data rates needed for data taking.
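The equivalences quoted in this paragraph are easy to verify; the sketch below assumes a 700 MB CD and a continuous 30-day month, and shows that the ~1 PB/month figure corresponds to average rates somewhat below the 600 MB/s peak snapshot:

```python
# Checking the CD-per-second and PB-per-month equivalences quoted above.
# Assumes a 700 MB CD and continuous transfer over a 30-day month.
rate_mb_s = 600
cd_mb = 700
seconds_per_month = 30 * 24 * 3600

print(f"{rate_mb_s / cd_mb:.2f} CDs per second")          # ~0.86, i.e. about 1 CD/s
pb_per_month = rate_mb_s * seconds_per_month / 1e9
print(f"{pb_per_month:.2f} PB/month at sustained peak")   # ~1.56 PB
# The quoted ~1 PB/month therefore corresponds to average rates below the
# 600 MB/s peak shown in the snapshot.
```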