21 February 2005, John Huth, Harvard University, Annual AAAS Meeting 05
Grids at the High Energy Frontier
John Huth, Harvard University
The High Energy Frontier
• High energy = short distance (10^-19 cm)
• Outstanding questions in particle physics:
  – Origins of mass, unification of fundamental forces (Higgs particle? other?)
  – Quantum nature of gravity (extra dimensions?)
  – Composition of the universe (dark matter, dark energy)
• Large Hadron Collider
  – Collaborations of unprecedented size
• A vanguard in the evolution of science
  – Pushes the limits of scientific computing
• Use of grids to enable democratic access to data and computing resources
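The "high energy = short distance" equivalence is the de Broglie/uncertainty relation λ ~ ħc/E. A minimal sketch of the conversion (constants from standard tables; the energies chosen here are illustrative):

```python
# Resolving power of a high-energy probe: wavelength ~ hbar*c / E.
# hbar*c ≈ 0.19733 GeV·fm, and 1 fm = 1e-13 cm.

HBARC_GEV_FM = 0.19733

def resolved_distance_cm(energy_gev):
    """Distance scale probed by a collision of the given energy, in cm."""
    return HBARC_GEV_FM / energy_gev * 1e-13  # fm -> cm

for e in [100.0, 1000.0, 14000.0]:  # 100 GeV, 1 TeV, 14 TeV
    print(f"{e:>8.0f} GeV -> {resolved_distance_cm(e):.1e} cm")
```

A 1 TeV probe resolves structure at roughly 2 × 10^-17 cm; the shorter scales quoted on the slide correspond to the highest effective collision energies.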
Periodic Table of Fundamental Particles
Families reflect increasing mass and a theoretical organization. u, d, and e are "normal matter". Because of their charge, quarks, electrons, muons, and taus participate in electromagnetic interactions.

[Figure: particle chart, with electric charges 0, -1, +2/3, and -1/3, and mass increasing across families.]
What holds them together?
• Fundamental forces
  – Gravity
  – Electromagnetism
  – The strong force (holds the nucleus together)
  – The weak force (regulates the burning of hydrogen into heavier elements in stars)
Unified Theories
At very high energies all interactions merge to a single strength.
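This statement can be made quantitative with the one-loop renormalization-group running of the inverse couplings, 1/α_i(Q) = 1/α_i(M_Z) − (b_i/2π) ln(Q/M_Z). A sketch with standard one-loop Standard Model coefficients (input values approximate):

```python
import math

# One-loop running of the inverse gauge couplings in the Standard Model.
# Inputs at M_Z (GUT-normalized U(1)); values are approximate.
M_Z = 91.19                       # GeV
inv_alpha_mz = [59.0, 29.6, 8.5]  # 1/alpha for U(1), SU(2), SU(3)
b = [41 / 10, -19 / 6, -7]        # one-loop beta coefficients

def inv_alpha(q_gev):
    """Inverse couplings evolved from M_Z to scale q_gev."""
    t = math.log(q_gev / M_Z)
    return [a0 - bi / (2 * math.pi) * t for a0, bi in zip(inv_alpha_mz, b)]

for q in [1e3, 1e9, 1e15]:
    print(f"Q = {q:.0e} GeV: " + ", ".join(f"{v:5.1f}" for v in inv_alpha(q)))
```

In the Standard Model the three lines approach each other near 10^13–10^17 GeV without quite crossing at a single point; supersymmetric extensions sharpen the convergence.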
We are on the threshold of a fundamental energy scale
• The structure of the theory only works if "symmetry breaking" (the generation of mass) is apparent at energy scales we are about to probe
• The Large Hadron Collider at CERN
  – Expected to begin collisions of protons on protons in 2007
Large Hadron Collider at CERN
pp collider, √s = 14 TeV
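For a symmetric proton–proton collider at √s = 14 TeV, each beam carries 7 TeV, so the circulating protons have a Lorentz factor γ = E/m_p of about 7,500. A quick check:

```python
# Beam energy and Lorentz factor for sqrt(s) = 14 TeV pp collisions.
SQRT_S_GEV = 14000.0
M_PROTON_GEV = 0.93827  # proton mass in GeV/c^2

beam_energy_gev = SQRT_S_GEV / 2           # symmetric collider: E per beam
gamma = beam_energy_gev / M_PROTON_GEV     # relativistic Lorentz factor

print(f"beam energy = {beam_energy_gev / 1000:.0f} TeV, gamma = {gamma:.0f}")
```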
The CMS Detector
• TRACKER: silicon microstrips and pixels
• CALORIMETERS
  – ECAL: scintillating PbWO4 crystals
  – HCAL: plastic scintillator/brass sandwich
• SUPERCONDUCTING COIL and IRON YOKE
• MUON BARREL and MUON ENDCAPS

Total weight: 12,500 t; overall diameter: 15 m; overall length: 21.6 m; magnetic field: 4 Tesla
Challenges at the LHC
• Petabytes/year of data logged
• 2000+ collaborators
• 37 countries
• 140 institutions (universities, national laboratories)
• CPU intensive
• Global distribution of data
• Tested with "Data Challenges"
CPU vs. Collaboration Size

[Figure: scatter plot of CPU (log scale, 10 to 100,000) against collaboration size (0 to 2,500), with points for the Earth Simulator, an atmospheric chemistry group, astronomy, gravitational-wave and nuclear experiments, current accelerator experiments, and the LHC experiments.]
Problem of data and resource sharing
• Large scientific collaborations need to:
  – Share data globally
  – Share computing resources
  – Run applications seamlessly in a heterogeneous environment
  – Guarantee reproducibility of analyses
  – Schedule intelligently across a wide-area network
• Grids are the solution
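The matchmaking problem behind these bullets can be illustrated with a toy sketch. All site names, datasets, and numbers below are hypothetical, and real grid middleware (e.g. Condor-style matchmaking) weighs many more attributes; this greedy version simply sends each job to the least-loaded site that already caches its input dataset:

```python
# Toy grid scheduler: assign each job to the site with the most free CPUs
# among the sites that already host the job's dataset (hypothetical data).

def schedule(jobs, sites):
    """Greedy data-aware assignment; a job gets None if no site fits."""
    plan = {}
    for job, dataset in jobs:
        candidates = [s for s in sites
                      if dataset in sites[s]["datasets"]
                      and sites[s]["free_cpus"] > 0]
        if not candidates:
            plan[job] = None
            continue
        best = max(candidates, key=lambda s: sites[s]["free_cpus"])
        sites[best]["free_cpus"] -= 1  # reserve one CPU slot
        plan[job] = best
    return plan

sites = {
    "cern":    {"free_cpus": 2, "datasets": {"dc04"}},
    "fnal":    {"free_cpus": 1, "datasets": {"dc04", "dc2"}},
    "caltech": {"free_cpus": 3, "datasets": {"dc2"}},
}
jobs = [("j1", "dc04"), ("j2", "dc2"), ("j3", "dc04"), ("j4", "dc04")]
plan = schedule(jobs, sites)
print(plan)
```

Co-locating jobs with cached data is the point: moving a job is far cheaper than moving a multi-terabyte dataset across the WAN.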
Grids for High Energy Physics

Image courtesy Harvey Newman, Caltech

[Figure: the tiered LHC computing model]
• There is a "bunch crossing" every 25 nsec; there are 100 "triggers" per second; each triggered event is ~1 MByte in size. Raw data leaves the detector at ~PBytes/sec.
• Tier 0: the CERN Computer Centre. The Online System feeds it at ~100 MBytes/sec; an Offline Processor Farm of ~20 TIPS performs reconstruction.
• Tier 1: regional centres (FermiLab, ~4 TIPS; France, Italy, and Germany regional centres), linked at ~622 Mbits/sec.
• Tier 2: centres of ~1 TIPS each (e.g. Caltech, ~1 TIPS), also linked at ~622 Mbits/sec.
• Institutes (~0.25 TIPS) hold a physics data cache. Physicists work on analysis "channels"; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server.
• Tier 4: physicist workstations, fed at ~1 MBytes/sec.

1 TIPS is approximately 25,000 SpecInt95 equivalents.
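The trigger numbers above imply the petabyte scale directly. A back-of-envelope check, assuming ~10^7 live seconds per accelerator year (a common rule of thumb, not stated on the slide):

```python
# Back-of-envelope LHC data volume from the trigger numbers above.
# Assumption: ~1e7 live seconds per accelerator year (rule of thumb).

bunch_crossing_ns = 25
crossing_rate_hz = 1e9 / bunch_crossing_ns  # 40 MHz of bunch crossings
trigger_rate_hz = 100                       # events kept per second
event_size_mb = 1.0                         # ~1 MByte per triggered event

stream_mb_per_s = trigger_rate_hz * event_size_mb  # matches ~100 MBytes/sec
live_seconds = 1e7
annual_pb = stream_mb_per_s * live_seconds / 1e9   # MB -> PB

print(f"{crossing_rate_hz:.0e} crossings/s, {stream_mb_per_s:.0f} MB/s, "
      f"{annual_pb:.1f} PB/year")
```

The 100 MB/s stream reproduces the Tier 0 link bandwidth in the diagram, and one experiment alone reaches the petabyte-per-year scale quoted earlier.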
Grid3
• Virtual data research
• End-to-end HENP applications
• CERN LHC: US ATLAS testbeds & data challenges
• CERN LHC: US CMS testbeds & data challenges
• Virtual data grid laboratory
Grid3 – a snapshot of sites (Sep 04)
• 30 sites, multi-VO
• Shared resources
• ~3,000 CPUs (shared)
• Service monitoring
Shared infrastructure

[Figure: CPU usage on the shared Grid3 infrastructure by the CMS DC04 and ATLAS DC2 data challenges.]
Three active grids: NorduGrid, Grid3 and LCG

[Figure: number of validated jobs per day from 23 June to 18 September 2004, broken out by LCG, NorduGrid, and Grid3, with the total (y-axis up to ~140,000).]
Scaling and the Future
• Need to reduce the amount of human intervention
• Establishment of an economic model for the grid
  – What are the real prices of services?
  – Get beyond the "good-will" stage
• Open Science Grid
  – Next step beyond Grid3
• Security
• Data storage and access
• Quality of service
• Interoperability among grids
  – Standards
Summary
• Challenges of the high energy frontier
  – Scale of experiments
  – Democracy of access to data and resources
• The grid solution is well matched to the needs of the collaborations
  – Many applications are now using this infrastructure: biology, earth sciences, etc.
• Future developments
  – Scaling, robustness, interoperability, standards