WLCG after 1 year with data: Prospects for the future


Page 1: WLCG after 1 year with data: Prospects for the future

WLCG after 1 year with data: Prospects for the future

Ian Bird; WLCG Project Leader

openlab BoS meeting, CERN, 4th May 2011

Page 2: WLCG after 1 year with data: Prospects for the future


Overview

• Quick review of WLCG
• Summary of 1st year with data
  – Achievements, successes, lessons
• Outlook for the next 3 years
  – What are our challenges?

Page 3: WLCG after 1 year with data: Prospects for the future


The LHC Computing Challenge

• Signal/Noise: 10^-13 (10^-9 offline)
• Data volume: high rate × large number of channels × 4 experiments
  → 15 PetaBytes of new data each year
• Compute power: event complexity × number of events × thousands of users
  → 200k of (today's) fastest CPUs and 45 PB of disk storage
• Worldwide analysis & funding: computing funded locally in major regions & countries; efficient analysis everywhere → GRID technology

Today: >250k cores and 100 PB of disk!
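A quick back-of-envelope check of the headline rate (a sketch, not from the slides; it assumes the common rule of thumb of ~10^7 seconds of LHC running per year):

```python
# Rough arithmetic behind "15 PB of new data each year" (assumed inputs).
PB = 1e15                     # bytes per petabyte
live_seconds_per_year = 1e7   # typical assumption for LHC live time in a year

avg_rate_gb_s = 15 * PB / live_seconds_per_year / 1e9
print(f"15 PB/year over ~1e7 s of running ~ {avg_rate_gb_s:.1f} GB/s")
# ~1.5 GB/s sustained, consistent with the Tier-0 ingest rates quoted later.
```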

Page 4: WLCG after 1 year with data: Prospects for the future

WLCG – what and why?

• A distributed computing infrastructure to provide the production and analysis environments for the LHC experiments
• Managed and operated by a worldwide collaboration between the experiments and the participating computer centres
• The resources are distributed – for funding and sociological reasons
• Our task was to make use of the resources available to us – no matter where they are located

Tier-0 (CERN): data recording; initial data reconstruction; data distribution
Tier-1 (11 centres): permanent storage; re-processing; analysis
Tier-2 (~130 centres): simulation; end-user analysis

Page 5: WLCG after 1 year with data: Prospects for the future

Worldwide resources


• Today: >140 sites
• >250k CPU cores
• >100 PB disk

Today we have 49 MoU signatories, representing 34 countries:

Australia, Austria, Belgium, Brazil, Canada, China, Czech Rep, Denmark, Estonia, Finland, France, Germany, Hungary, Italy, India, Israel, Japan, Rep. Korea, Netherlands, Norway, Pakistan, Poland, Portugal, Romania, Russia, Slovenia, Spain, Sweden, Switzerland, Taipei, Turkey, UK, Ukraine, USA.

WLCG Collaboration status: Tier 0; 11 Tier 1s; 68 Tier 2 federations

Page 6: WLCG after 1 year with data: Prospects for the future

1st year of LHC data

Writing up to 70 TB/day to tape (~70 tapes per day)

[Charts: data written to tape (GB/day); disk-server traffic (GB/s)]

Tier 0 storage:
• Accepts data at an average of 2.6 GB/s; peaks > 11 GB/s
• Serves data at an average of 7 GB/s; peaks > 25 GB/s
• CERN Tier 0 moves > 1 PB of data per day

Stored ~15 PB in 2010:
• ~2 PB/month to tape during pp running
• ~4 PB to tape during heavy-ion (HI) running, at >5 GB/s
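These rates are easy to sanity-check (a sketch; the inputs are just the averages quoted above):

```python
# Convert the quoted Tier-0 average rates into daily volumes.
SECONDS_PER_DAY = 86_400

ingest_tb = 2.6 * SECONDS_PER_DAY / 1000   # 2.6 GB/s in  -> ~225 TB/day
serve_tb = 7.0 * SECONDS_PER_DAY / 1000    # 7 GB/s out   -> ~605 TB/day
print(f"ingest ~{ingest_tb:.0f} TB/day, serving ~{serve_tb:.0f} TB/day")
# Together ~0.8 PB/day on average; with peaks, > 1 PB moved per day, as quoted.
```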

Page 7: WLCG after 1 year with data: Prospects for the future

Grid Usage

• Large numbers of analysis users: ATLAS, CMS ~800; LHCb, ALICE ~250
• Use remains consistently high: >1M jobs/day; ~150k CPU
• As well as LHC data, large simulation productions are always ongoing
• At the end of 2010 we saw all Tier 1 and Tier 2 job slots being filled
• CPU usage is now well over double that of mid-2010

[Chart: CPU used at Tier 1s + Tier 2s (HS06·hours/month) over the last 12 months, with annotations at 100k CPU-days/day and 1M jobs/day; the inset shows the build-up over previous years]

In 2010 WLCG delivered ~80-100 CPU-millennia!
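The CPU-millennia figure follows directly from the sustained usage (a quick check using the numbers above):

```python
# From ~100k CPU-days/day to CPU-millennia delivered in 2010.
cpu_days_per_day = 100_000               # sustained usage from the chart
total_cpu_days = cpu_days_per_day * 365  # ~36.5M CPU-days over the year
total_cpu_years = total_cpu_days / 365
print(f"~{total_cpu_years / 1000:.0f} CPU-millennia")   # ~100
# The quoted 80-100 range reflects usage ramping up during the year.
```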

Page 8: WLCG after 1 year with data: Prospects for the future

CPU – around the Tiers

• The grid really works
• All sites, large and small, can contribute
  – And their contributions are needed!
• Significant use of Tier 2s for analysis
• Tier 0 usage peaks when the LHC is running – the average is much less

Jan 2011 was the highest-use month ever … so far

Page 9: WLCG after 1 year with data: Prospects for the future


Data transfers

[Transfer-rate plots; annotations:]
• LHC running: April – September 2010
• … & the academic/research networks for Tier 1/Tier 2 traffic!
• CMS HI data zero suppression → FNAL
• 2011 data → Tier 1s
• Re-processing of 2010 data
• ALICE HI data → Tier 1s

Page 10: WLCG after 1 year with data: Prospects for the future


Successes:

• We have a working grid infrastructure
• Experiments have truly distributed models
• This has enabled physics output in a very short time
• Network traffic is close to that planned – and the network is extremely reliable
• Significant numbers of people are doing analysis (at Tier 2s)
• Today resources are plentiful, and no contention is seen ... yet
• Support levels are manageable ... just

Page 11: WLCG after 1 year with data: Prospects for the future


2011+2012 running

• The LHC schedule now has continuous running in 2011 + 2012, with high integrated luminosity expected (== lots of interesting data)
• Impacts:
  – Resources – funding agencies have been asked to fund more resources in 2012 (we had previously expected an "off" year)
  – Push back upgrades, or upgrade during running
    • Oracle 11g, network switches, online clusters, OS versions, etc.
    • Mostly an issue for accelerator- and experiment-control systems; for WLCG there is NO downtime, ever
• … and
  – The number of events per collision is much higher than anticipated for now
    • Larger event sizes (hence more data volume) and more processing time

Page 12: WLCG after 1 year with data: Prospects for the future


Evolution of requirements

Page 13: WLCG after 1 year with data: Prospects for the future

Some areas where openlab partners have contributed to this success …

(in no particular order)


Page 14: WLCG after 1 year with data: Prospects for the future


Databases

Databases everywhere (LHC, experiments, offline, remote) – large-scale deployment and distributed databases: e.g. Streams for data replication

Page 15: WLCG after 1 year with data: Prospects for the future


CPU & performance

CPU/machines: evaluation of new generations
Performance optimisation – how to use many-core machines (see the sketch below)
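As one illustration of the many-core question (a minimal sketch, not openlab's actual work; it assumes an embarrassingly parallel per-event workload):

```python
# Spread independent event processing across all cores of a many-core node.
from multiprocessing import Pool

def process_event(event_id: int) -> float:
    # Stand-in for real per-event reconstruction work.
    return sum(i * i for i in range(event_id % 1000)) * 1e-9

if __name__ == "__main__":
    with Pool() as pool:                  # defaults to one worker per core
        results = pool.map(process_event, range(100_000))
    print(f"processed {len(results)} events")
```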

Page 16: WLCG after 1 year with data: Prospects for the future


Monitoring

New ways to view monitoring data

Gridmaps now appear everywhere

This was a good example of tapping into expertise and experience within the company

Page 17: WLCG after 1 year with data: Prospects for the future


Networking

Technology evaluations (e.g. 10 Gb networking)
Campus networking and security – essential for physics analysis at CERN

Page 18: WLCG after 1 year with data: Prospects for the future

and some challenges for the future …


Page 19: WLCG after 1 year with data: Prospects for the future

Challenges:

• Resource efficiency
  – Behaviour with resource contention
  – Efficient use – experiments struggle to live within resource expectations; physics is potentially limited by resources now!
• Changing models – to more effectively use what we have
  – Evolving data management
  – Evolving network model
  – Integrating other federated identity management schemes
• Sustainability
  – Grid middleware – has it a future?
  – Sustainability of operations
  – Is (commodity) hardware reliable enough?
• Changing technology
  – Using "clouds"
  – Other things – NoSQL, etc.
  – Move away from "special" solutions

Page 20: WLCG after 1 year with data: Prospects for the future


Grids → clouds??

• We have a grid because:
  – We need to collaborate and share resources
  – Thus we will always have a "grid"
  – Our network of trust is of enormous value for us and for (e-)science in general
• We also need distributed data management
  – That supports very high data rates and throughputs
  – We will continually work on these tools
• But the rest can be more mainstream (open source, commercial, …)
  – We use message brokers more and more for inter-process communication (see the sketch below)
  – Virtualisation of our grid sites is happening
    • Many drivers: power, dependencies, provisioning, …
  – Remote job submission … could be cloud-like
  – Interest in making use of commercial cloud resources, especially for peak demand

We should invest effort only where we need to
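For the message-broker point above, a minimal sketch of the pattern (hypothetical host and topic names; it assumes an ActiveMQ-style broker reachable over STOMP and the stomp.py client):

```python
# Publish a site-status message to a broker topic over STOMP.
import json
import stomp  # pip install stomp.py

conn = stomp.Connection([("broker.example.org", 61613)])  # hypothetical broker
conn.connect("user", "password", wait=True)

message = {"site": "CERN-PROD", "service": "CE", "status": "ok"}
conn.send(destination="/topic/grid.monitoring.status",    # hypothetical topic
          body=json.dumps(message),
          headers={"persistent": "true"})
conn.disconnect()
```

Consumers subscribe to the same topic, so producers need not know who is listening – exactly the decoupling that makes brokers attractive between services.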

Page 21: WLCG after 1 year with data: Prospects for the future


Virtualisation and clouds

• This is clearly of great interest
• CERN has several threads:
  – Service consolidation of "VO-managed services"
  – "Kiosk" – request a VM via a web interface
  – Batch service:
    • Tested Platform ISF and OpenNebula
    • Did very large scaling tests
• Very interested in OpenStack
  – Both for cluster management and storage systems
  – Potentially a large community behind it
  – Could be leading towards (de-facto) standards for clouds
• Questions:
  – Is S3 a possible alternative as a storage interface? (see the sketch below)
  – Can we virtualise (most of) our computing infrastructure?
    • Have far fewer types of hardware purchase?
    • Remove the distinction between CPU and disk servers?
  – Do we still need a traditional batch scheduler?
  – How easy is it to burst out to commercial clouds?
  – How feasible is it to use cloud interfaces for distributed job management between grid (cloud) sites?
  – How much grid middleware can we obsolete?
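On the S3 question, the interface itself is simple (a sketch using the modern boto3 client with hypothetical endpoint, bucket, and key names; whether it scales to our access patterns is exactly the open question):

```python
# Write and read back a file through an S3-style storage interface.
import boto3  # pip install boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.org")  # hypothetical

s3.put_object(Bucket="atlas-data", Key="run183456/raw/file001.root",
              Body=open("file001.root", "rb"))
obj = s3.get_object(Bucket="atlas-data", Key="run183456/raw/file001.root")
print(f"read back {len(obj['Body'].read())} bytes")
```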

Page 22: WLCG after 1 year with data: Prospects for the future

Resource efficiency

• Resource contention (see also sustainable operations)
  – Need better "monitoring"; we have lots of information, but:
    • We really need the ability to mine and analyse monitoring data, within and across services: trends, correlations
    • We need warnings of problems before they happen (see the sketch after this list)
  – Can this lead to automated actions/reactions/recovery?
• Efficiency of use
  – Many-core CPUs & other architectures
  – CPU efficiency – jobs wait for data? How important is it? (CPU is cheap…)
  – Does a virtualised infrastructure help?
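The "warnings before problems happen" idea can start as simple trend detection on a service metric (a sketch on made-up data, not an existing WLCG tool):

```python
# Warn when a metric drifts well outside its recent behaviour.
from statistics import mean, stdev

def check_metric(samples: list[float], window: int = 20, n_sigma: float = 3.0) -> bool:
    """Return True (warn) if the latest sample deviates from the
    rolling baseline by more than n_sigma standard deviations."""
    history, latest = samples[-window - 1:-1], samples[-1]
    return abs(latest - mean(history)) > n_sigma * stdev(history)

# e.g. transfer latency in seconds, sampled every minute (made-up numbers)
latencies = [1.1, 0.9, 1.0, 1.2, 1.0] * 4 + [9.5]
if check_metric(latencies):
    print("WARNING: transfer latency trending away from baseline")
```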

Page 23: WLCG after 1 year with data: Prospects for the future


Evolution of computing models

• Recognise the network as a resource
• Data on-demand will augment data pre-placement
• Storage systems will become more dynamic caches
• Allow remote data access
  – Fetch files when needed
  – I/O over the WAN
• Network usage will (eventually) increase & become more dynamic (less predictable)

Page 24: WLCG after 1 year with data: Prospects for the future


Evolution of data management

• A consequence of the computing model evolution
• Data caching rather than organised data placement
• Distinguish between data archives and data caches
  – Only allow organised access to archives
  – Simplifies interfaces – no need for full SRM
  – Potential to replace archives with commercial back-up solutions (that scale sufficiently!)
• Tools to support:
  – Remote data access (all aspects)
  – Reliable transfer (we have this, but it clearly needs reworking)
  – Cache management (see the sketch below)
  – Low-latency, high-throughput file access (for reading)
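To make the cache idea concrete, a minimal sketch of a site-local read cache that falls back to remote access on a miss (the fetch function, file names, and capacity are hypothetical):

```python
# A least-recently-used file cache: serve locally if present, else fetch remotely.
from collections import OrderedDict

class FileCache:
    def __init__(self, capacity_files: int, fetch_remote):
        self.capacity = capacity_files
        self.fetch_remote = fetch_remote      # callable: name -> bytes (WAN read)
        self.files: OrderedDict[str, bytes] = OrderedDict()

    def read(self, name: str) -> bytes:
        if name in self.files:                # cache hit: refresh recency
            self.files.move_to_end(name)
            return self.files[name]
        data = self.fetch_remote(name)        # cache miss: I/O over the WAN
        self.files[name] = data
        if len(self.files) > self.capacity:   # evict least-recently-used file
            self.files.popitem(last=False)
        return data

cache = FileCache(capacity_files=2, fetch_remote=lambda n: f"contents of {n}".encode())
cache.read("AOD.001.root"); cache.read("AOD.002.root"); cache.read("AOD.001.root")
cache.read("AOD.003.root")                    # evicts AOD.002.root
print(list(cache.files))                      # ['AOD.001.root', 'AOD.003.root']
```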

Page 25: WLCG after 1 year with data: Prospects for the future


Network evolution

Evolution of the computing models also requires evolution of the network infrastructure:

• Open exchange points built in carrier-neutral facilities: any participant can connect with their own fiber or via circuits provided by any telecom provider
• Enables T2s and T3s to obtain their data from any T1 or T2
• Use of LHCONE will relieve the load on the general R&E IP infrastructure
• LHCONE provides connectivity directly to T1s, T2s, and T3s, and to various aggregation networks, such as the European NRENs, GÉANT, etc.

Page 26: WLCG after 1 year with data: Prospects for the future

Sustainability: service incidents (outage/degradation)

• Service incidents over the last 2 quarters – any service degradation generates a Service Incident Report (SIR == post-mortem)
  – This illustrates quite strongly that the majority (>~75%) of the problems experienced are not related to the distributed nature of WLCG at all (or to grid middleware)
• How can we make the effect of outages less intrusive?
• Can we automate recovery (or management)?
• Does the user community have reasonable expectations? (no…)
• Not unique to WLCG!!!

Incidents  Type
11         Infrastructure related
 6         Database problems (some also infrastructure-caused)
 4         Storage related (~all infrastructure-caused)
 2         Network problems

Page 27: WLCG after 1 year with data: Prospects for the future


Everyone has service failures…

• Some inform their customers … and some don't!
• Failures can and do happen … but these incidents raise many questions for cloud services:
  – How safe is my data? … Where is it? Privacy? Who checks? Dependencies?
• Some forgot the part about keeping their customers informed … where are the SIRs???

Page 28: WLCG after 1 year with data: Prospects for the future


Summary

• WLCG has been a great success and has been key to the rapid delivery of physics from the LHC
• The challenge now is to be more effective and efficient – computing should limit physics as little as possible