Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft
Institut für Wissenschaftliches Rechnen (IWR)

LinuxTag 2006, Wiesbaden, 6 May 2006

Virtualisation and its Application on the Grid

Volker Büge, Yves Kemp, Marcel Kunze, Günter Quast, Armin Scheurer
Institut für Wissenschaftliches Rechnen, Forschungszentrum Karlsruhe
Institut für Experimentelle Kernphysik, Universität Karlsruhe
Outline
• What is Particle Physics / High Energy Physics?
  – Introduction
  – Computing and Storage
  – Current Experiments
• Introduction to Grid Technologies
  – What is a Grid?
  – The Worldwide LHC Computing Grid
  – Live Demo
• Virtualisation in the Grid Environment
  – Hardware Consolidation
  – Virtualisation and Batch Systems
  – Gathering Resources from idle Desktop PCs
• Conclusion & Outlook
What is Particle Physics? - Dimensions
[Figure: length scales, from the macrocosm (10^1 m) down to crystal (10^-2 m), molecule (10^-9 m), atom (10^-10 m), nucleus (10^-14 m), proton (10^-15 m) and quark/electron (< 10^-18 m)]
What is Particle Physics? - Big Bang
[Figure: timeline from the Big Bang to today, with the energy decreasing as the timescale grows: elementary particles → nucleons → nuclei → light atoms → ... → heavy atoms → today]
Our Instruments – Accelerators - LHC
Large Hadron Collider
• Circumference: 27 km
• Beam energy: 7 TeV
• Below surface: 100 m
• Temperature: -271 °C
• Energy use: 1 TWh/a
4 large experiments: CMS, ATLAS, LHCb, ALICE
[Aerial photograph: the CERN site between Lake Geneva and the airport]
Our Instruments – Detectors - CMS
Compact Muon Solenoid (CMS) specifications:
• total weight: 12 500 t
• overall diameter: 15 m
• overall length: 21.5 m
• magnetic field: 4 Tesla
Our Instruments – Detectors - Event Display
The collision of two high-energy hydrogen nuclei (protons) produces several thousand particles.
Our Instruments – Detectors - Data Rates
Collision rate: ~40 MHz; event size: ~1.5 MB → ~60 TB/s of raw data
• Level 1 Trigger: reduction with ASICs → ~150 GB/s
• High Level Trigger: software data reduction on a PC farm → ~225 MB/s
• tape & HDD storage of the remaining data for offline analysis
Our Instruments – Detectors - Storage & CPU
[Charts: resource expectations of the 4 LHC experiments (CMS, ATLAS, LHCb, ALICE) for 2007-2010 — CPU capacity in MSI2000 (growing to ~160), disc space in TB (to ~80 000) and mass storage (MSS) space in TB (to ~70 000)]
CMS High Energy Physics Collaboration
CMS: 38 nations, 182 institutions, 2000 scientists & engineers
CMS Software Framework & OS
• long-term experiments with large fluctuations of collaborators
• transparency for analyses
• huge purpose-designed software framework (~1 GB)
• some open-source projects (ROOT analysis framework, GEANT for particle interactions in matter, ...)
• we build our own read-out hardware drivers, ...
The only reasonable answer: open source and Linux!
Peculiarities of HEP Data Processing
Static program and dynamic data:
e.g. meteorology, geography, finance
• set of fixed programs used to analyse
new data in short intervals
• same approach for SETI@home
Static data and “dynamic code”:
e.g. in High Energy Physics
• data acquisition very expensive
• data is analysed repeatedly with iteratively
improved code
• e.g. ~1000 publications from 500 people
with ~1TB preprocessed data!
Data Storage and Access
Constraints and approaches:
• HEP experiments are very expensive → redundant storage of 1.5 PetaByte per year for CMS alone!
• not possible at one single computing centre (funding constraints) → distribution of the data to participating computing centres all over the world
• huge datasets (~TeraByte) cannot be transferred to each user → the analysis job goes "where the data set is"
• ensure access to these data for more than 2000 physicists from 182 institutes in 38 countries (in CMS) → access to data and computing resources without local login for the user
The LHC experiments cope with these challenges using grid technologies – the Worldwide LHC Computing Grid.
What is Grid Computing?
Definition of Grid Computing by Ian Foster:
– coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations
– the ability to negotiate resource-sharing arrangements among a set of participating parties (providers and consumers) and then to use the resulting resource pool for some purpose
Today, grids are used in science to ...
... enable research and collaborations independent of geographical location
... share distributed resources, like a single computing cluster or access to storage resources
... balance the load on resources – opportunistic use!
The Worldwide LHC Computing Grid (WLCG)
The WLCG Computing Model:
– computing centres are organised in a hierarchical structure
– different policies concerning computing power and data storage
The tiered architecture:
1 Tier 0 (at CERN):
– accepts raw data from the detectors
– data transfer to the Tier 1 centres
8 Tier 1 centres:
– secure data archiving
– reconstruction & reprocessing
– coordination of associated Tier 2s
Several Tier 2 centres:
– capacity for data-intensive analysis
– calibration & Monte Carlo simulation
[Diagram: Tier 0 at CERN; Tier 1 centres, e.g. GridKa (Germany), Lyon (France), Fermilab (USA), Rutherford Lab (Great Britain), Italy, Spain; Tier 2/3 sites at universities and labs, e.g. Uni Karlsruhe, RWTH Aachen, Uni Hamburg]
The WLCG - Members
Currently 189 sites are participating.
The WLCG - Basics
The basic concepts:
Authorization: What may I do?
• certain permissions, duties etc.
• "equivalent" to a visa or an access list
• Virtual Organisation Membership Service (VOMS)
• a user can assume several different roles, e.g. software manager, normal user, ...
Authentication: Who am I?
• concept of certificates
• "equivalent" to a passport, ID card etc.
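In practice, both concepts meet in the user's short-lived grid proxy. As an illustration (assuming the Globus and VOMS command-line clients; exact options vary with the middleware release), a session could look like:

grid-proxy-init                # authenticate: derive a proxy from the user certificate
voms-proxy-init -voms cms      # additionally request VO membership/role attributes for "cms"
voms-proxy-info -all           # inspect identity, VO attributes and remaining lifetime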
The WLCG - LCG Middleware I
Grid-wide services:
• VO Server
  – registry office of a VO
  – contains all users and their roles within a VO
• LCG File Catalogue
  – global file index for a Virtual Organisation
• Information Service
  – collects and publishes information on resources connected to the LCG
• Resource Broker
  – the "intelligence" of the grid
  – distributes incoming job requests to matching resources
The WLCG – LCG Middleware II
Site-wide services:
• User Interface
  – access point for the user to the grid
• Computing Element
  – portal to the local batch system of a site
• Storage Element
  – offers disk space to a VO
  – portal to the local storage
• Monitoring Box
  – collects and publishes information on grid jobs executed at a site
The LHC Computing Grid – Access
A job's way through the grid:
[Diagram, summarised: after grid-proxy-init (authentication & authorisation), the user describes the job in JDL and submits it from the User Interface together with the input "sandbox". The Resource Broker matches the job against SE & CE information published by the Information Service and data set information from the Replica Catalogue, expands the JDL and passes the job (as Globus RSL) to the Job Submission Service, which runs it on a suitable Computing Element close to the required Storage Element. The job status can be queried at any time through the Logging & Bookkeeping service, and after completion the output "sandbox" is returned to the User Interface.]
The WLCG – Live Demo I
The job is described in one configuration file, containing:
• a specification of the required resources, for example
  – names of special sites to prefer/ignore
  – a special software release that must be installed
  – a minimum CPU time of a queue
• the names of all files which
  – the job needs in order to be executed on the site (InputSandbox)
  – the user wants to get back when the job has finished (OutputSandbox)
Submission of the following script:
#!/bin/tcsh -f
hostname -f
whoami
uname -a
The WLCG – Live Demo II
The job configuration file: JobDescription.jdl
Executable          = "MyScript.csh";
StdOutput           = "std.out";
StdError            = "std.err";
InputSandbox        = {"MyScript.csh"};
OutputSandbox       = {"std.out","std.err"};
VirtualOrganisation = "cms";
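Resource requirements like those listed on the previous slide would go into an additional Requirements attribute. A hypothetical example (the Glue attribute names come from the LCG information schema; the software tag is made up):

Requirements = other.GlueCEPolicyMaxCPUTime > 1440 &&
               Member("VO-cms-ORCA_8_7_1", other.GlueHostApplicationSoftwareRunTimeEnvironment);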
255 submissions of this job have been executed at these sites:
13 a01-004-128.gridka.de    2 fal-pygrid-18.lancs.ac.uk    6 heplnx201.pp.rl.ac.uk
9 beagle14.ba.itb.cnr.it 2 fornax-ce.itwm.fhg.de 29 lcgce01.gridpp.rl.ac.uk
16 ce01.esc.qmul.ac.uk 5 grid002.ca.infn.it 1 lcg-ce0.ifh.de
4 ce1.pp.rhul.ac.uk 1 grid012.ct.infn.it 9 lpnce.in2p3.fr
6 cmslcgce.fnal.gov 9 gridba2.ba.infn.it 67 mars-ce.mars.lesc.doc.ic.ac.uk
4 dgc-grid-35.brunel.ac.uk 6 gridit-ce-001.cnaf.infn.it 7 spaci01.na.infn.it
16 dgc-grid-40.brunel.ac.uk 14 griditce01.na.infn.it 1 t2-ce-02.lnl.infn.it
21 epgce1.ph.bham.ac.uk 6 gw39.hep.ph.ic.ac.uk 1 wipp-ce.weizmann.ac.il
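Each submission follows the same round trip with the LCG-2 command-line tools; sketched here (tool names as in LCG 2.x, the job identifier abbreviated):

grid-proxy-init                       # authenticate once per session
edg-job-submit JobDescription.jdl     # prints the job identifier (a URL)
edg-job-status <job-id>               # poll the state: Submitted, Ready, Running, Done
edg-job-get-output <job-id>           # retrieve the OutputSandbox (std.out, std.err)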
Scientific Linux – The Science Linux Flavour!
What is Scientific Linux?
• large computing facilities in HEP have long run adapted RedHat distributions
• a change in the RH licensing policy led to expensive licences
CERN, other labs (also non-HEP) and universities therefore use a recompiled RH Enterprise Server as their base distribution:
• current release: Scientific Linux 3 with a 2.6 kernel
• recompiled RedHat Enterprise Server 3
• support and updates provided by CERN
• optimised for the HEP environment
• no licence fees
• will change to SLC4 in autumn
In our case: Scientific Linux CERN Edition
Virtualisation I
Para-virtualisation, e.g. Xen:
– the hardware components are not fully emulated by the host OS, which only arbitrates their usage → small loss of performance
– layout of a Xen-based system: a privileged host system (Dom0) and unprivileged guest systems (DomUs)
– the DomUs work cooperatively!
– the guest OS has to be adapted to Xen (kernel patch), but not the applications!
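To make this concrete, a minimal DomU configuration of that era could look as follows (a sketch only: kernel path, image file and domain name are placeholders, not our production setup):

# /etc/xen/slc3-wn.cfg -- hypothetical guest configuration
kernel = "/boot/vmlinuz-2.6-xenU"           # Xen-patched guest kernel
memory = 256                                # MB of RAM for the DomU
name   = "slc3-wn"                          # domain name shown by "xm list"
disk   = ["file:/srv/xen/slc3.img,sda1,w"]  # file-backed root filesystem
root   = "/dev/sda1 ro"
vif    = [""]                               # one network interface, default settings

The guest is then booted from Dom0 with "xm create -c /etc/xen/slc3-wn.cfg".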
Virtualisation II
Standard application benchmark: Linux kernel compilation (4 in parallel; make -j4)
[Chart: relative performance index (2 = Opteron SMP) versus the number of parallel benchmarks (1-16) for the native Opteron SMP, Xen 3.0.1, Xen 2.0.7, a commercial product and UML. Both CPUs are used for one compilation in the native OS; only one CPU is available in the VM.]
The Xen-based VM performs only slightly worse than the native OS.
Hardware Consolidation at a WLCG Tier3 Centre I
Typical situation at a university's Tier 2/3 centre:
• for reasons of stability, we recommend running each service in an isolated OS instance
• varying load on the different machines → no full usage of the resources
• "recycling" of older machines leads to a heterogeneous hardware structure
→ high administrative effort for the installation and maintenance of the systems
[Diagram: CE, SE and MON on three separate machines vs. CE, SE and MON as guests on one Xen host]
Virtualising these machines leads to one single machine to be maintained and to homogeneous OS installations.
Hardware Consolidation at a WLCG Tier3 Centre II
Advantages through virtualisation:
• reduced hardware overhead: only one single high-performance machine is needed for the complete LCG installation, including a test WN → cheaper and easier to maintain
• easy and fast setup of a basic OS by copying VM image files
• possibility of migrating VMs to other machines, and of backups
• cloning of VMs before LCG upgrades enables tests → fewer service interruptions and a more effective administration
• balanced load and efficient use of the server machine → interception of CPU peaks
Hardware Consolidation at a WLCG Tier3 Centre III
Realisation of a full LCG environment in Virtual Machines:
• host system with the Virtual Machine Monitor (VMM) Xen
  – OS: Scientific Linux 4 (native, with a 2.6 kernel)
  – CPU: Intel(R) Pentium(R) 4 CPU, 3.0 GHz
  – Memory: 1 GB
• VMs: CE, SE and MON now run on SLC3
• a second LCG installation is available for testing purposes
• both environments for LCG 2.7.0 are fully integrated into our batch and storage systems
→ a complete Tier 3 infrastructure on one machine works
Virtualisation of Batch Queues
Basic ideas:
• different groups at the same computing centre need different operating systems
• so far: agreement on one OS, or no resource sharing
• virtualisation allows a cluster to be partitioned dynamically with different OSs
• each queue is linked to one type of Virtual Machine
Such an approach offers all the advantages of a normal batch system, combined with a free choice of the OS for the computing centre administration and the user groups!
Dynamic Partitioning of a Cluster I
• Requirements:
  – should be independent of the batch system server and scheduler
  – no modifications of existing products
  – flexibility through a modular structure
• Implementation (a sketch follows below):
  – a daemon observes the queue and keeps track of which jobs will start next according to their priority
  – it starts a VM with the desired OS and registers it with the batch system
  – it keeps track of used and unused host nodes
• Peculiarities:
  – optimise the number of shutdowns and restarts of VMs
  – the concept must not affect the prioritisation of the scheduler!
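For illustration only (this is not our actual implementation): the daemon's main loop, sketched in shell for a Torque/Xen setup. The queue names, config paths and the VM limit are hypothetical, and a real daemon would also have to give each VM a unique name and shut idle VMs down again.

#!/bin/bash
# Hypothetical sketch: repartition a Xen cluster according to batch demand.
# Assumes Torque's qstat, Xen's xm, and one guest config per OS flavour,
# whose init scripts start pbs_mom so the VM registers with the batch server.
MAX_VMS=10
while true; do
    for os in slc3 suse10; do
        # jobs still waiting ("Q") in the queue bound to this OS flavour
        queued=$(qstat -a $os 2>/dev/null | awk '$10 == "Q"' | wc -l)
        # worker-node VMs of this flavour that are already running
        running=$(xm list | grep -c "^wn-$os")
        if [ "$queued" -gt 0 ] && [ "$running" -lt "$MAX_VMS" ]; then
            xm create "/etc/xen/wn-$os.cfg"   # boot one more worker-node VM
        fi
    done
    sleep 60    # re-examine the queues once a minute
done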
Dynamic Partitioning of a Cluster II
Test system: simulation of a cluster with 19 nodes
– 2 dual Opteron machines with 2 GB RAM each, each hosting 10 VMs
– 1 Torque server with the MAUI scheduler, running the daemon
– 19 virtual computing nodes
[Diagram: (1) the daemon asks the batch server which OS is required next, (2) starts the requested VM on a free worker node, and (3) the VM is connected to the batch system; nodes run e.g. SUSE 10 or SLC3, or stay empty]
What about idle desktop PCs?
High-performance desktop PCs are often unused for hours, for example:
• computing pools at universities outside lecture times
• office PCs at night
• many more!
Use this power for analyses – a Condor cluster for jobs that are independent of the OS (see the sketch below).
VMs on desktops would additionally offer:
• a dedicated development system for different groups of users
• an environment for OS-dependent analysis tasks
→ start the required OS in VMs on idle desktop PCs to harvest this computing power
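For the OS-independent case, a plain Condor submit description is all a user needs; a minimal sketch (file names are placeholders), reusing the script from the live demo:

# analysis.sub -- hypothetical Condor submit description
universe   = vanilla
executable = MyScript.csh
output     = std.out
error      = std.err
log        = job.log
queue

It is handed to the pool with "condor_submit analysis.sub"; Condor then matches the job to an idle desktop.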
Conclusion & Outlook
• High Energy Physics experiments
  – collaborations are international
  – open-source principles are indispensable in HEP collaborations
  – need large storage and computing resources
  – cope with these challenges using grid technologies and Linux
• Virtualisation
  – hardware consolidation at a Tier 2/3 centre
  – dynamic partitioning of a shared batch system with different OSs
  – opportunistic use of idle desktop PCs
Linux with Xen introduces a new layer of abstraction → the Grid becomes more flexible and stable!
Links
CERN: http://www.cern.ch/
The Large Hadron Collider: http://lhc.web.cern.ch/lhc/
CMS: http://cmsinfo.cern.ch/Welcome.html
ROOT: http://root.cern.ch
GEANT: http://wwwasd.web.cern.ch/wwwasd/geant/
Worldwide LHC Computing Grid: http://lcg.web.cern.ch/LCG/
Enabling Grids for E-sciencE (EGEE): http://public.eu-egee.org/
Links
GOC Grid Monitoring: http://goc02.grid-support.ac.uk/googlemaps/lcg.html
Scientific Linux: https://www.scientificlinux.org/
Scientific Linux CERN: http://linux.web.cern.ch/linux/
Global Grid User Support: https://ggus.fzk.de/
The Xen virtual machine monitor: http://www.cl.cam.ac.uk/Research/SRG/netos/xen/
Institut für Experimentelle Kernphysik – University of Karlsruhe: http://www-ekp.physik.uni-karlsruhe.de/
Institut für Wissenschaftliches Rechnen – Forschungszentrum Karlsruhe: http://www.fzk.de/
Our Instruments – Accelerators - LINAC
SLAC – Stanford Linear Accelerator Center
• Length: 3.2 km
• Beam energy: 50 GeV
Some experiments: SLC, GLAST, B Factory (BaBar)
[Aerial photograph: the SLAC LINAC and the PEP II rings (Positron Electron Project)]
The WLCG - CMS Resources I
#CPUs Site Name #CPUs Site Name
3805 ce101.cern.ch 60 gw39.hep.ph.ic.ac.uk
973 cmslcgce.fnal.gov 50 t2-ce-01.to.infn.it
890 a01-004-128.gridka.de 48 beagle14.ba.itb.cnr.it
838 lcgce01.gridpp.rl.ac.uk 47 wipp-ce.weizmann.ac.il
636 ce01.esc.qmul.ac.uk 46 grid-ce.physik.rwth-aachen.de
482 bigmac-lcg-ce.physics.utoronto.ca 40 lcg-ce0.ifh.de
356 fal-pygrid-18.lancs.ac.uk 38 gridce.pi.infn.it
290 cclcgceli01.in2p3.fr 36 grid0.fe.infn.it
260 gw-2.ccc.ucl.ac.uk 34 griditce01.na.infn.it
224 zeus02.cyf-kr.edu.pl 30 ce01-lcg.projects.cscs.ch
194 mars-ce.mars.lesc.doc.ic.ac.uk 28 ce2.egee.unile.it
166 t2-ce0.desy.de 26 epgce1.ph.bham.ac.uk
156 t2-ce-02.lnl.infn.it 22 fornax-ce.itwm.fhg.de
138 ce1.pp.rhul.ac.uk 22 grid002.ca.infn.it
134 gridba2.ba.infn.it 16 node07.datagrid.cea.fr
120 spaci01.na.infn.it 14 lpnce.in2p3.fr
116 dgc-grid-40.brunel.ac.uk 12 bogrid5.bo.infn.it
104 helmsley.dur.scotgrid.ac.uk 12 polgrid1.in2p3.fr
94 prod-ce-01.pd.infn.it 10 cmsboce.bo.infn.it
87 ekp-lcg-ce.physik.uni-karlsruhe.de 8 grid001.ts.infn.it
84 grid-ce0.desy.de 8 gridit-ce-001.cnaf.infn.it
68 grid001.fi.infn.it 5 ce.epcc.ed.ac.uk
62 heplnx201.pp.rl.ac.uk 4 ce-a.ccc.ucl.ac.uk
62 t2-ce-01.mi.infn.it 2 dgc-grid-35.brunel.ac.uk
60 grid10.lal.in2p3.fr 2 pccmsgrid08.pi.infn.it
The WLCG - CMS Resources II
Total Space [GB] Free Space [GB] Name of Storage Element    Total Space [GB] Free Space [GB] Name of Storage Element
1500000,0 500000,0 castorgrid.cern.ch 1382,3 2,2 cclcgseli01.in2p3.fr
179459,6 32771,9 cmssrm.fnal.gov 1347,0 1,8 polgrid2.in2p3.fr
56220,5 2246,5 fal-pygrid-20.lancs.ac.uk 1073,7 186,1 se-a.ccc.ucl.ac.uk
54085,8 4690,8 gridka-dCache.fzk.de 1070,0 10,0 dgc-grid-34.brunel.ac.uk
42949,7 25854,1 dcache-tape.gridpp.rl.ac.uk 1054,8 28,8 grid2.fe.infn.it
18733,9 11721,3 dcache.gridpp.rl.ac.uk 912,6 805,3 griditse01.na.infn.it
10780,4 6807,8 srm-dcache.desy.de 757,1 228,8 grid002.fi.infn.it
9118,4 3443,0 grid-se002.physik.rwth-aachen.de 732,4 458,3 gridba6.ba.infn.it
5580,3 2594,8 pccms2.cmsfarm1.ba.infn.it 709,1 406,7 gridit002.pd.infn.it
5016,7 3790,7 lcg-gridka-se.fzk.de 491,8 36,0 grid003.ca.infn.it
4394,2 249,2 prod-se-01.pd.infn.it 310,9 34,2 gw38.hep.ph.ic.ac.uk
3820,0 10,0 node12.datagrid.cea.fr 270,0 3,6 ekp-lcg-se.physik.uni-karlsruhe.de
3751,0 797,6 grid005.ct.infn.it 268,4 35,2 grid-se0.desy.de
3673,2 50,3 globe-door.ifh.de 148,9 33,5 se01-lcg.projects.cscs.ch
3310,0 2338,8 t2-srm-01.lnl.infn.it 134,2 3,3 grid-se2.desy.de
3150,0 1360,0 grid006.mi.infn.it 100,0 11,9 gw-3.ccc.ucl.ac.uk
2730,0 260,0 se1.pp.rhul.ac.uk 76,1 1,6 pccms5.cmsfarm1.ba.infn.it
2342,7 46,8 grid002.ts.infn.it 74,8 45,3 boalice1.bo.infn.it
2147,3 1650,0 zeus03.cyf-kr.edu.pl 71,9 2,0 srm.epcc.ed.ac.uk
2105,5 1338,8 t2-se-03.lnl.infn.it 67,1 14,5 grid008.to.infn.it
2092,2 1635,1 grid009.to.infn.it 66,5 25,0 spaci02.na.infn.it
1996,6 1565,0 fal-pygrid-03.lancs.ac.uk 61,6 8,2 beagle.ba.itb.cnr.it
1880,0 590,0 gallows.dur.scotgrid.ac.uk 34,4 34,4 t2-se-01.mi.infn.it
1832,1 839,3 cmsbose2.bo.infn.it 30,8 13,3 ce1.egee.unile.it
1809,3 868,6 grid007g.cnaf.infn.it 20,1 13,6 mars-se.mars.lesc.doc.ic.ac.uk
1760,0 430,0 epgse1.ph.bham.ac.uk 18,2 6,5 node05.datagrid.cea.fr
1682,1 985,4 grid-se001.physik.rwth-aachen.de 14,6 3,8 fornax-se.itwm.fhg.de
1605,1 442,5 bigmac-lcg-se.physics.utoronto.ca 13,8 3,4 gridse.pi.infn.it
Virtualisation I
[Diagram: Xen architecture. The Xen Virtual Machine Monitor (hypervisor) sits directly on the hardware and provides the hardware interface, a control interface, virtual CPUs and a virtual MMU. Domain 0 runs the Xen host OS with the device manager and holds the native device drivers; the unprivileged guest domains (Domain 1, Domain 2) run unmodified applications on an adapted guest OS and reach the devices through back-end drivers that communicate with Dom0.]