CERN : Facts

Transcript of CERN : Facts

Page 1: CERN : Facts

2 May 2004 David Foster CERN IT-CS 1

LHC, Networking and Grids

David Foster

Networks and Communications Systems Group Head

[email protected]

APAN 2004, CAIRNS

Page 2: CERN : Facts

CERN : Facts

Geneva could be contained within the LHC (Large Hadron Collider) ring.

The CERN site:

• > 60 km²

• Spans the Swiss/French border

CERN:

• European organisation

• 20 member states

• Founded in 1954 by 12 countries

• A real example of international collaboration

• A world lab

Primary Objective:

Understand the structure of matter

Instruments:

Accelerators and Detectors

Page 3: CERN : Facts

[Photo: the CERN site next to Lake Geneva, with Mont Blanc (4810 m) and downtown Geneva]

Page 4: CERN : Facts

LHC Accelerator

LHC:

• 27 km circumference

• Depth varies from 50 to 175 m

• Energy: 450 GeV to 7 TeV

• >1200 superconducting magnets, max 8.36 tesla!

• 24 km of cryostats at 1.9 K

• 100 t of liquid helium, recycled daily

• 60 t of liquid nitrogen daily

Page 5: CERN : Facts

• level 1 - special hardware: 40 MHz (40 TB/sec)

• level 2 - embedded processors: 75 kHz (75 GB/sec)

• level 3 - PCs: 5 kHz (5 GB/sec)

• data recording & offline analysis: 100 Hz (100 MB/sec)

[Illustration: a CD stack holding one year of LHC data (~20 km) compared with a balloon (30 km), Concorde (15 km) and Mt. Blanc (4.8 km)]

~15 PetaBytes of data each year. Analysis will need the computing power of ~100,000 of today's fastest PC processors!
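The quoted rates are all consistent with a single assumption: an average event size of about 1 MB. A quick sketch of that arithmetic (the event size is my assumption, not stated on the slide; the event rates are from the slide):

```python
# The ~1 MB event size is an assumption; the Hz figures are from the slide.
EVENT_SIZE_MB = 1.0

levels = {
    "level 1 - special hardware":    40e6,  # 40 MHz
    "level 2 - embedded processors": 75e3,  # 75 kHz
    "level 3 - PCs":                  5e3,  # 5 kHz
    "data recording":                 100,  # 100 Hz
}

# Data rate = event rate x event size.
for name, rate_hz in levels.items():
    print(f"{name}: {rate_hz * EVENT_SIZE_MB:,.0f} MB/sec")
# 40,000,000 MB/sec is the slide's 40 TB/sec; 75,000 MB/sec is 75 GB/sec, etc.
```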

Page 6: CERN : Facts

The Large Hadron Collider (LHC) has 4 Detectors:

ATLAS, CMS, ALICE and LHCb

Accumulating data at 5-8 Petabytes/year (plus copies)

Requirements for world-wide data analysis:

Storage – Raw recording rate 0.1 – 1 GB/s

10 Petabytes of disk

Processing – 100,000 of today’s fastest processors

Page 7: CERN : Facts

Main Internet connections at CERN

[Diagram: CERN's external network links]

• SWITCH: Swiss National Research Network

• CIXP: commercial Internet connections

• WHO: mission oriented & World Health Org.

• GEANT (2.5/10 Gbps): general purpose A&R and commodity Internet connections (Europe/USA/World)

• USLIC (10 Gbps): USA

• IN2P3

• NetherLight and ATRIUM/VTHD: network research (2.5 Gbps / 10 Gbps)

Individual link speeds range from 45 Mbps to 10 Gbps; total external capacity is growing from ~25G (2003) to ~40G (2004).

Page 8: CERN : Facts

CERN’s Distributed Internet Exchange Point (CIXP)

Telecom Operators & dark fibre providers:

Cablecom, COLT, France Telecom, FibreLac/Intelcom, Global Crossing, LDCom, Deutsche Telekom/T-Systems, Interoute(*), KPN, MCI/Worldcom, SIG, Sunrise, Swisscom (Switzerland), Swisscom (France), Thermelec, VTX.

Internet Service Providers include: Infonet, AT&T Global Network Services, Cablecom, Callahan, Colt, DFI, Deckpoint, Deutsche Telekom, Easynet, FibreLac, France Telecom/OpenTransit, Global-One, InterNeXt, IS-Productions, LDcom, Nexlink, PSI Networks (IProlink), MCI/Worldcom, Petrel, SIG, Sunrise, IP-Plus, VTX/Smartphone, UUnet, Vianetworks.

Others: SWITCH, Swiss Confederation, Conseil General de Haute Savoie (*)

[Diagram: telecom operators and ISPs interconnecting at the CIXP, with the CERN LAN behind the CERN firewall]

Page 9: CERN : Facts

Virtual Computing Centre

The resources:

• spread throughout the world at collaborating centres

• made available through grid technologies

The user:

• sees the image of a single cluster of CPU and disk

• does not need to know:
  - where the data is
  - where the processing capacity is
  - how things are interconnected
  - the details of the different hardware

• is not concerned by the local policies of the equipment owners and managers
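A minimal sketch of this abstraction (all site names, numbers and the `submit` broker below are hypothetical, not a real grid middleware API): a resource broker, rather than the physicist, decides where a job runs, based on where the data and the free capacity are.

```python
# All names here are hypothetical; this illustrates the idea only.
SITES = {
    "CERN":      {"free_cpus": 120, "datasets": {"raw-2004"}},
    "Karlsruhe": {"free_cpus": 300, "datasets": {"raw-2004", "reco-2004"}},
    "RAL":       {"free_cpus": 80,  "datasets": {"reco-2004"}},
}

def submit(dataset: str) -> str:
    """Broker: run the job at a site that already holds the dataset,
    preferring the one with the most free capacity. The user never
    chooses, or even sees, the site."""
    candidates = [s for s, info in SITES.items() if dataset in info["datasets"]]
    if not candidates:
        raise LookupError(f"no site holds {dataset}")
    return max(candidates, key=lambda s: SITES[s]["free_cpus"])

print(submit("reco-2004"))  # the broker picks Karlsruhe (most free CPUs)
```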

Page 10: CERN : Facts

Collaborating Computer Centres

[Diagram: collaborating computer centres joined by the Grid into the virtual LHC Computing Centre, serving the ATLAS and CMS virtual organisations (VOs)]

Page 11: CERN : Facts

Deploying the LHC Grid

[Diagram: the LHC Computing Centre at CERN is Tier 0; Tier 1 centres are in Germany, the USA, the UK, France, Italy, Japan, possibly Taipei, and at CERN itself; Tier 2 sites are labs (Lab a, Lab b, Lab c, Lab m) and universities (Uni a, Uni b, Uni n, Uni x, Uni y); Tier 3 is physics department resources and desktops. Grids can be composed for a physics study group or for a regional group.]

[email protected]

Page 12: CERN : Facts

The Goal of the LHC Computing Grid Project (LCG)

To help the experiments’ computing projects prepare, build and operate the computing environment needed to manage and analyse the data coming from the detectors

Phase 1 – 2002-05: prepare and deploy a prototype of the environment for LHC computing

Phase 2 – 2006-08: acquire, build and operate the LHC computing service

[email protected]

Page 13: CERN : Facts

Modes of Use

• Connectivity requirements are subdivided by usage pattern:

– “Buffered real-time” for the T0 to T1 raw data transfer.

– “Peer Services” between the T1-T1 and T1-T2 for the background distribution of data products.

– “Chaotic”:

• submission of analysis jobs to T1 and T2 centers

• “on-demand” data transfer.

Page 14: CERN : Facts

T0 – T1 Buffered Real Time Estimates

MB/sec              RAL     Fermilab  Brookhaven  Karlsruhe  IN2P3   CNAF   PIC (Barcelona)  T0 Total
ATLAS              106.87     0.00      173.53     106.87    106.87  106.87      106.87       707.87
CMS                 71.67    71.67        0.00      71.67     71.67   71.67       71.67       430.00
ALICE              101.41     0.00        0.00     101.41    101.41  101.41        0.00       405.63
LHCb                 6.80     0.00        0.00       6.80      6.80    6.80        6.80        34.00
T1 totals MB/sec   286.74    71.67      173.53     286.74    286.74  286.74      185.33      1577.49
T1 totals Gb/sec     2.29     0.57        1.39       2.29      2.29    2.29        1.48        12.62

Estimated T1 bandwidth needed, (totals * 1.5 headroom) * 2 capacity:
                     6.88     1.72        4.16       6.88      6.88    6.88        4.45        37.86

Assumed bandwidth provisioned: 10.00 Gb/sec per T1 (70.00 total)
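The "estimated bandwidth needed" row follows directly from the slide's own formula, (T1 total * 1.5 headroom) * 2 capacity, after converting MB/sec to Gb/sec at 8 bits per byte. A small sketch (illustrative only) reproducing those numbers from the per-site totals:

```python
# T1 totals (MB/sec) taken from the table above; site names as on the slide.
t1_totals_mb_s = {
    "RAL": 286.74, "Fermilab": 71.67, "Brookhaven": 173.53,
    "Karlsruhe": 286.74, "IN2P3": 286.74, "CNAF": 286.74,
    "PIC (Barcelona)": 185.33,
}

def needed_gb_s(mb_per_sec: float) -> float:
    """(total * 1.5 headroom) * 2 capacity, in Gb/sec (8 bits per byte)."""
    return (mb_per_sec * 8 / 1000) * 1.5 * 2

for site, mb_s in t1_totals_mb_s.items():
    print(f"{site}: {needed_gb_s(mb_s):.2f} Gb/sec")
# e.g. RAL -> 6.88 Gb/sec, matching the table's estimate row
```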

Page 15: CERN : Facts

Peer Services

• Will be largely bulk data transfers.

  – Scheduled data “redistribution”

• Need a very good, reliable, efficient file transfer service.

  – Much work going on with GridFTP

  – Maybe a candidate for a non-IP service (Fibre Channel over SONET)

• Could be provided by a switched infrastructure.

  – Circuit-based optical switching, on demand or static.

  – “Well known” and “trusted” peer end points (hardware and software), and an opportunity to bypass firewall issues.
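At its core, a reliable bulk transfer service is a verify-and-retry copy loop. A deliberately simple sketch of that idea (a plain local file copy with checksum verification; this is not GridFTP, and all names are mine):

```python
# Illustrative only: verify-and-retry copy, the kernel of reliable bulk transfer.
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    """MD5 of a file, read in 1 MB chunks so large files fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer(src: Path, dst: Path, retries: int = 3) -> bool:
    """Copy src to dst; verify by checksum, retrying on mismatch."""
    for _ in range(retries):
        shutil.copyfile(src, dst)
        if checksum(src) == checksum(dst):
            return True
    return False
```

A real service adds what the slide hints at: scheduling, parallel streams, and trusted end points that can bypass firewalls.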

Page 16: CERN : Facts

Some Challenges

• Real bandwidth estimates, given the chaotic nature of the requirements.

• End-to-end performance, given the whole chain involved

  – (disk-bus-memory-bus-network-bus-memory-bus-disk)

• Provisioning over complex network infrastructures (GEANT, NRENs etc.)

• Cost model for options (packet + SLAs, circuit switched etc.)

• Consistent performance (dealing with firewalls)

• Merging leading-edge research with production networking

Page 17: CERN : Facts

Thank You!