IET visits, 15 & 19 April 2010
Transcript of IET visits, 15 & 19 April 2010
From Zettabytes to Knowledge
Wolfgang von Rüden, CERN IT Department, Head of CERN openlab
From the International System of Units *
| 1000^m | 10^n | Prefix | Symbol | Since | Short scale | Long scale | Decimal |
|--------|------|--------|--------|-------|-------------|------------|---------|
| 1000^8 | 10^24 | yotta | Y | 1991 | Septillion | Quadrillion | 1 000 000 000 000 000 000 000 000 |
| 1000^7 | 10^21 | zetta | Z | 1991 | Sextillion | Trilliard | 1 000 000 000 000 000 000 000 |
| 1000^6 | 10^18 | exa | E | 1975 | Quintillion | Trillion | 1 000 000 000 000 000 000 |
| 1000^5 | 10^15 | peta | P | 1975 | Quadrillion | Billiard | 1 000 000 000 000 000 |
| 1000^4 | 10^12 | tera | T | 1960 | Trillion | Billion | 1 000 000 000 000 |
| 1000^3 | 10^9 | giga | G | 1960 | Billion | Milliard | 1 000 000 000 |
| 1000^2 | 10^6 | mega | M | 1960 | Million | Million | 1 000 000 |
| 1000^1 | 10^3 | kilo | k | 1795 | Thousand | Thousand | 1 000 |
| 1000^0 | 10^0 | (none) | (none) | NA | One | One | 1 |
* http://en.wikipedia.org/wiki/Yotta-

April 2010, Wolfgang von Rüden, CERN
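The prefixes in the table map directly to code. As a minimal sketch (the function and table names are ours, not from the talk), a byte count can be formatted with the largest SI prefix that fits:

```python
# Format a byte count with the largest fitting SI prefix from the table
# above (binary prefixes like KiB are a separate story). Names here are
# illustrative, not from the presentation.

_PREFIXES = [
    (10**24, "YB"), (10**21, "ZB"), (10**18, "EB"), (10**15, "PB"),
    (10**12, "TB"), (10**9, "GB"), (10**6, "MB"), (10**3, "kB"),
]

def si_bytes(n: int) -> str:
    """Return n formatted with the largest SI prefix not exceeding it."""
    for factor, symbol in _PREFIXES:
        if n >= factor:
            return f"{n / factor:g} {symbol}"
    return f"{n} B"

print(si_bytes(15 * 10**15))   # the 15 PB of new LHC data per year -> "15 PB"
```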
CERN’s Tools
• The world’s most powerful accelerator: LHC
  – A 27 km long tunnel filled with high-tech instruments
  – Equipped with thousands of superconducting magnets
  – Accelerates particles to energies never before obtained
  – Produces particle collisions creating microscopic “big bangs”
• Very large sophisticated detectors
  – Four experiments, each the size of a cathedral
  – Hundred million measurement channels each
  – Data acquisition systems treating Petabytes per second
• Top-level computing to distribute and analyse the data
  – A Computing Grid linking ~200 computer centres around the globe
  – Sufficient computing power and storage to handle the data, making them available to thousands of physicists for analysis
LHC
The Large Hadron Collider (LHC) tunnel
LHC experiments
The “ATLAS” experiment during construction
7000 tons, 150 million sensors, >1 petabyte/s
3 Sept 2008
CMS Closed & Ready for First Beam
About Zettabytes of raw data…
150 million detector elements, ~2 bytes each … 300'000'000
4 experiments produce roughly 1 GB per collision … 1'000'000'000
40 MHz interaction rate … 40 PB/s … 40'000'000'000'000'000
150 days × 24 h × 3600 s … 0.5 ZB/year … 500'000'000'000'000'000'000

1 Zettabyte: 1'000'000'000'000'000'000'000
Massive on-line data reduction required to bring the rates down to an acceptable level before storing the data on disk and tape.
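The per-second figure above can be checked in a few lines; the input numbers are the slide's, only the variable names are ours:

```python
# Back-of-the-envelope rates from the slide: 150 million detector
# elements at ~2 bytes each, four experiments, 40 MHz interaction rate.

channels = 150_000_000
bytes_per_experiment = channels * 2          # 300'000'000 bytes per experiment
bytes_per_collision = 10**9                  # slide rounds 4 experiments to ~1 GB

interaction_rate = 40_000_000                # 40 MHz
raw_rate = bytes_per_collision * interaction_rate
print(raw_rate)                              # 40'000'000'000'000'000 B/s = 40 PB/s
```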
The LHC Computing Challenge
Signal/Noise: 10⁻⁹
• Data volume: high rate × large number of channels × 4 experiments → 15 PetaBytes of new data each year
• Compute power: event complexity × number of events × thousands of users → 100k of (today's) fastest CPUs, 45 PB of disk storage
• Worldwide analysis & funding: computing funded locally in major regions & countries; efficient analysis everywhere → GRID technology
Data Handling and Computation for Physics Analysis

[Diagram: detector → event filter (selection & reconstruction) → raw data → reconstruction → event summary data → batch physics analysis → analysis objects (extracted by physics topic) → interactive physics analysis; event reprocessing and event simulation feed processed data back into the chain. Diagram credit: les.robertson@cern.ch]
How does it work?
Proton acceleration and collision
• Protons are accelerated by several machines up to their final energy (7+7 TeV*)
• Head-on collisions are produced right in the centre of a detector, which records the new particles being produced
• Such collisions take place 40 million times per second, day and night, for about 150 days per year
* In 2010-11 only 3.5 + 3.5 TeV
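Spelled out, the collision count implied by that schedule (the numbers are the slide's; the arithmetic is ours):

```python
# 40 million collisions per second, for ~150 running days per year.
rate_hz = 40_000_000
seconds_per_run_year = 150 * 24 * 3600       # 12'960'000 s of beam time

collisions_per_year = rate_hz * seconds_per_run_year
print(collisions_per_year)                   # 518'400'000'000'000, about 5 x 10^14
```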
Particle collisions in the centre of a detector
Massive Online Data Reduction
Tier 0 at CERN: Acquisition, First-pass Processing, Storage & Distribution
Tier 0 – Tier 1 – Tier 2
Tier-0 (CERN):
• Data recording
• Initial data reconstruction
• Data distribution

Tier-1 (11 centres):
• Permanent storage
• Re-processing
• Analysis

Tier-2 (~130 centres):
• Simulation
• End-user analysis
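The division of labour above can be sketched as a simple lookup table (the structure is ours; the roles are the slide's):

```python
# Tier responsibilities in the LHC computing grid, as listed on the slide.
TIER_ROLES = {
    "Tier-0 (CERN)": ["data recording", "initial data reconstruction",
                      "data distribution"],
    "Tier-1 (11 centres)": ["permanent storage", "re-processing", "analysis"],
    "Tier-2 (~130 centres)": ["simulation", "end-user analysis"],
}

for tier, roles in TIER_ROLES.items():
    print(f"{tier}: {', '.join(roles)}")
```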
Data transfer
Tier-2s and Tier-1s are inter-connected by the general-purpose research networks. Any Tier-2 may access data at any Tier-1.

[Map of Tier-1 centres, each serving multiple Tier-2s: TRIUMF, ASCC, FNAL, BNL, Nordic, CNAF, SARA, PIC, RAL, GridKa, IN2P3]

30 March 2010
• Full experiment rate needed is 650 MB/s
• Desired capability: sustain twice that, to allow Tier 1 sites to shut down and recover
• Have demonstrated far in excess of that
• All experiments exceeded required rates for extended periods, & simultaneously
• All Tier 1s have exceeded their target acceptance rates
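In daily volume, the 650 MB/s figure and its 2x headroom target work out as follows (simple arithmetic; the variable names are ours):

```python
# Full experiment rate from the slide, plus the 2x headroom target.
required = 650 * 10**6           # bytes per second
target = 2 * required            # headroom so a Tier 1 can shut down and recover

per_day = required * 24 * 3600
print(per_day)                   # 56'160'000'000'000 bytes, roughly 56 TB/day
```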
Fibre cut near Basel
The Worldwide LHC Computing Grid
• The LHC Grid Service is a worldwide collaboration between:
  – the 4 LHC experiments,
  – ~200 computer centres that contribute resources, and
  – international grid projects providing software and services
• The collaboration is brought together by an MoU that:
  – commits resources for the coming years
  – agrees a certain level of service availability and reliability
• As of today, 33 countries have signed the MoU:
  – CERN (Tier 0) + 11 large Tier 1 sites
  – 132 Tier 2 sites in 64 “federations”
• Other sites are expected to participate, but without formal commitment
The very first beam-splash event from the LHC in ATLAS at 10:19, 10th September 2008
30 March 2010, first high energy collisions
Capacity of CERN’s data centre (Tier0)
• Compute nodes:
  – ~7000 systems
  – 41'000 cores
• Disk storage:
  – 14 Petabytes (>20 soon)
  – 60'000 disk drives
• Tape storage:
  – Capacity: 48 Petabytes
  – In use: 24 Petabytes
• Corresponds to ~15% of the total capacity in WLCG
Thank you!