The Beijing Tier 2: status and plans
Xiaomei Zhang
CMS Tier-1 visit to IN2P3, Lyon, November 30, 2007
2007/11/30 IHEP Computing Center
Outline
• Introduction
• Manpower
• Site overview: hardware, software, network
• Storage status
• Data transfer status
• Ongoing upgrade
• Future plan
Introduction
• T2_Beijing is the only CMS Tier-2 site in mainland China
• The T2_Beijing site is set up and maintained by the Computing Center of IHEP in Beijing
– approved in 2006
– no direct financial support from the government yet
– trying to obtain financial support soon
• T2_Beijing is shared (mainly) by CMS and ATLAS
– common LCG infrastructure
– no dedicated worker nodes
– a dedicated Storage Element for each experiment
Manpower
• 1 FTE for CMS T2_Beijing, 1 FTE for ATLAS T2_Beijing
– Xiaomei Zhang ([email protected]) is responsible for CMS
– Erming Pei ([email protected]) is responsible for ATLAS
• 1 FTE for technical support
– Xiaofei Yan ([email protected])
Site overview
• Computing infrastructure
– renovation of our computing hall has just been completed
– the cooling and power systems are in good condition
• Hardware
– LCG service nodes: IBM x3650 (2 Xeon 5130 CPUs, 4 GB RAM)
– 14 worker nodes (2 × 3.2 GHz Xeon CPUs, 2 GB RAM)
• Software
– middleware recently upgraded to gLite 3.0
– worker node OS upgraded to SLC 4
– CMSSW 1.6.4 has been installed and tested
• External network to the T1s
– 1 Gbps to the CNNIC center, which connects at 1 Gbps through CERNET to the European GEANT network and at 1 Gbps to the US Internet
– ~15 MB/s aggregate with about 80 streams in current tests to IN2P3 and FNAL
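As a rough sanity check, the quoted test rate can be compared against the 1 Gbps external link. This is illustrative arithmetic using the figures on the slide, not a measurement:

```python
# Back-of-the-envelope check: how much of the 1 Gbps external link
# does the observed ~15 MB/s aggregate test rate use?
LINK_CAPACITY_GBPS = 1.0      # external link to CNNIC (from the slide)
AGGREGATE_RATE_MB_S = 15.0    # observed aggregate rate in tests

rate_mbps = AGGREGATE_RATE_MB_S * 8                  # MB/s -> Mbit/s
utilization = rate_mbps / (LINK_CAPACITY_GBPS * 1000)

print(f"{rate_mbps:.0f} Mbit/s ~ {utilization:.0%} of the 1 Gbps link")
# -> 120 Mbit/s ~ 12% of the 1 Gbps link
```

So the tests were using only a small fraction of the nominal link capacity, consistent with the SE, not the network, being the bottleneck described on the storage slide.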
Storage Status
• Resources are very limited now, but will be improved step by step soon
• We use a dCache system with 1.1 TB of storage
– 1.1 TB is our first step
– a 10 TB disk array has arrived
• Head nodes and pool nodes run on a single server: IBM x346 (2 × 3.2 GHz Xeon CPUs, 4 GB RAM)
– one pool node, one pool with 1.1 TB
– this causes much trouble in the current link debugging
– it also affects jobs running at our site: no space for data output and slow response from the SE during link debugging
Data Transfers
• We are trying to commission two link pairs (upload/download)
– IN2P3 and FNAL (FNAL is required by our local CMS group)
• Status: the two link pairs are not yet good enough, but should become so once resources increase soon
– IN2P3: a good rate of about 15 MB/s on some days, although the download and upload links are still being commissioned
– FNAL: download link commissioned, upload link commissioning ongoing
• Main reasons
– only one link at a time is possible with our limited SE
– it is hard to keep switching links and to commission four links, or even two
– sometimes the T1s are also unstable and have problems, making things even worse
PhEDEx
• Despite the bad situation, we have still transferred 30 TB from FNAL and 10 TB from IN2P3
• We also made a start this month on the upload links to FNAL and IN2P3
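For scale, a minimal estimate of how long the 30 TB from FNAL would take at the quoted ~15 MB/s. This is illustrative arithmetic only; real PhEDEx transfer rates fluctuate and the links were not sustained at this rate continuously:

```python
# Rough wall-clock estimate for moving 30 TB at a sustained 15 MB/s.
# Figures come from the slides; decimal units (1 TB = 1e6 MB) assumed.
volume_tb = 30
rate_mb_s = 15

seconds = volume_tb * 1e6 / rate_mb_s
days = seconds / 86400
print(f"{days:.0f} days")   # -> 23 days
```

Even at the best observed rate, a transfer of this volume occupies the link for weeks, which helps explain why commissioning all four links with a single-pool SE was so painful.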
Ongoing Upgrade
• 12 new worker nodes (2 quad-core Xeon 5345 CPUs, 16 GB RAM each) will be added next month
• 5 machines will be used to set up the new SE head nodes and pool nodes
• 5 pools and 10 TB of disk will be added soon for each experiment (10 TB for CMS, 10 TB for ATLAS)
• Each pool node connects via 4 Gb Fibre Channel to one disk array box with RAID 5+1 and an XFS file system
Future Plan
• Try to maintain stable links with IN2P3 and FNAL
• Meet the data demands of the local CMS physics group in the production instance and provide good service for physics analysis
• Try to support MC production after the resource situation improves
• “Everything is getting better and better”, as our computing center manager said