TeraGrid National Cyberinfrastructure for Terascale Science Dane Skow Deputy Director, TeraGrid
Sergiu Sanielevici ([email protected]), April 2006 / June 2006. Overview of TeraGrid Resources and Services.
Overview of TeraGrid Resources and Services
Sergiu Sanielevici, TeraGrid Area Director for User Services Coordination
Pittsburgh Supercomputing Center
TeraGrid: Integrating NSF Cyberinfrastructure
Sites: SDSC, NCAR (mid-2006), TACC, UC/ANL, NCSA, ORNL, Purdue (PU), Indiana (IU), PSC
NSF TeraGrid In Context
[Timeline, 1985–2010: NSF Supercomputer Centers Program, then PACI, then TeraGrid construction followed by operation & enhancement]
The TeraGrid Facility
• Grid Infrastructure Group (GIG)
  – University of Chicago
  – TeraGrid integration, planning, management, coordination
• Resource Providers (RPs)
  – Currently NCSA, SDSC, PSC, Indiana, Purdue, ORNL, TACC, UC/ANL; additional RPs in discussion
  – Systems (resources, services) support and user support
  – Provide access to resources via policies, software, and mechanisms coordinated by and provided through the GIG
• The Facility
  – An integrated set of HPC resources providing NSF scientists with access to resources and collections of resources through unified user support, coordinated software and services, and extensive documentation and training
• The Federation
  – Interdependent partners working together under the direction of an overall project director, the GIG PI
It’s all in These URLs
• TG home page: www.teragrid.org
• TG User Portal: https://portal.teragrid.org
TeraGrid Resources by Site (ANL/UC, IU, NCSA, ORNL, PSC, Purdue, SDSC, TACC)

Computational resources, online storage, and network connectivity:
• ANL/UC: Itanium 2 (0.5 TF); IA-32 (0.5 TF). Online storage: 20 TB. Net: 30 Gb/s to CHI.
• IU: Itanium 2 (0.2 TF); IA-32 (2.0 TF). Online storage: 32 TB. Net: 10 Gb/s to CHI.
• NCSA: Itanium 2 (10.7 TF); SGI SMP (7.0 TF); Dell Xeon (17.2 TF); IBM p690 (2 TF); Condor Flock (1.1 TF). Online storage: 1140 TB. Net: 30 Gb/s to CHI.
• ORNL: IA-32 (0.3 TF). Online storage: 1 TB. Net: 10 Gb/s to ATL.
• PSC: XT3 (10 TF); TCS (6 TF); Marvel SMP (0.3 TF). Online storage: 300 TB. Net: 30 Gb/s to CHI.
• Purdue: Hetero (1.7 TF); IA-32 (11 TF, opportunistic). Online storage: 26 TB. Net: 10 Gb/s to CHI.
• SDSC: Itanium 2 (4.4 TF); Power4+ (15.6 TF); Blue Gene (5.7 TF). Online storage: 1400 TB. Net: 40 Gb/s to LA.
• TACC: IA-32 (6.3 TF). Online storage: 50 TB. Net: 10 Gb/s to CHI.

Mass storage (six sites): 1.2 PB, 5 PB, 2.4 PB, 1.3 PB, 6 PB, 2 PB.

Data collections (number, approximate total size, access methods):
• 5 collections, >3.7 TB: URL/DB/GridFTP
• >30 collections: URL/SRB/DB/GridFTP
• 4 collections, 7 TB: SRB/Portal/OPeNDAP
• >70 collections, >1 PB: GFS/SRB/DB/GridFTP
• 4 collections, 2.35 TB: SRB/Web Services/URL

Instruments: proteomics and X-ray crystallography; the SNS and HFIR facilities.

Visualization resources (RI: remote interactive; RB: remote batch; RC: remote interactive/collaborative):
• IA-32, 96 GeForce 6600GT cards (RI, RC, RB)
• SGI Prism, 32 graphics pipes; IA-32 (RB)
• IA-32 + Quadro4 980 XGL (RI, RB)
• IA-32, 48 nodes (RB)
• UltraSPARC IV, 512 GB SMP, 16 graphics cards (RI, RC, RB)

Totals: 100+ TF across 8 distinct architectures; ~3 PB of online disk; >100 data collections.
Updated by Kelly Gaither ([email protected])
TeraGrid Facility Today
• Heterogeneous resources at autonomous Resource Provider sites
• Local value-added user environments
• A common TeraGrid computing environment:
  – A single point of contact for help
  – Integrated documentation and training
  – A common allocation process
  – A common baseline user environment
  – Services to assist users in harnessing the right TeraGrid platforms for each part of their work
  – Enhancements driven by users
  – Science Gateways to engage broader communities
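On TeraGrid systems the common baseline environment was selected per user with SoftEnv; a ~/.soft file along the following lines would pull in the shared stack plus site extras (the specific key names here are illustrative; list the real ones with the `softenv` command on each system):

```
# ~/.soft  (SoftEnv selection file; key names below are illustrative)
@teragrid-basic        # common TeraGrid baseline: Globus clients, GridFTP, paths
+intel-compilers       # a site-specific extra, if offered in `softenv` output
@default               # the site's default environment
```

After editing the file, running `resoft` applies the new selection to the current shell.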
Current Menu of Compute Resources
• Cross-site IA-64 cluster (DTF): IBM Itanium 2/Myrinet at NCSA, SDSC, ANL: ~15.6 TF, 5.2 TB memory combined
• Single-site IA-32 and IA-64 clusters: NCSA, TACC, Purdue, IU, ANL, ORNL: ~32 TF in all!
• Tightly coupled MPP systems: PSC XT3 (10 TF) + TCS (6 TF); SDSC Blue Gene (5.7 TF)
• SMP systems: SDSC Power4+ (15.6 TF); NCSA Altix (7 TF) + p690 (2 TF); PSC Marvel (0.3 TF)
Mix and match with data and visualization!
Exploring the TeraGrid
• Get started with a Development Grant (TG-DAC): up to 30K SUs, Roaming*
  – Experiment with code(s) and task(s) on the various resources for ~1 year
  – Find the best mapping of your research task flow to the resources: the scenarios we'll present today may suggest possible answers
  – Document this mapping to write a production proposal: define a science goal and discuss how many SUs you will need on which systems to accomplish it over 1 or 2 CYs
*Roaming means never having to say which system.
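Sizing a development year against the 30K-SU ceiling is simple arithmetic; the sketch below assumes the common 1 SU ≈ 1 CPU-hour equivalence (actual charge factors varied by system), and the plan numbers are hypothetical:

```python
# Sketch: check whether a planned development year fits a 30K-SU DAC grant.
# Assumption: 1 SU = 1 CPU-hour; real per-system charge factors differed.

def su_estimate(cpus: int, wall_hours: float, runs: int) -> float:
    """SUs consumed by `runs` jobs, each holding `cpus` CPUs for `wall_hours`."""
    return cpus * wall_hours * runs

# Hypothetical plan: 100 benchmark runs on 64 CPUs, 4 wall-clock hours each.
total = su_estimate(cpus=64, wall_hours=4, runs=100)
print(total)            # 25600
print(total <= 30_000)  # True: fits under the 30K-SU DAC ceiling
```

Because a DAC is Roaming, the same budget can be spread across any mix of TeraGrid systems while you look for the best fit.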
Peer Reviewed Production Grants
• Large (>200K SUs, start 4/1 or 10/1) or Medium (start 1/1, 4/1, 7/1, or 10/1). Supersize it!
• Specific or Roaming? It depends on the outcome of your task-flow mapping:
  – If a task works best on a specific system, ask for it by name. Extrapolate DAC benchmarks to justify your request.
  – You can ask for specific allocations on several systems.
  – But if it's best for you to use a large number of TG systems (e.g. any/all clusters/SMPs/MPPs), a roaming allocation will free you from the need to predict what task you will do on which machine!
  – Roaming jobs may get lower priority on machines that have been assigned many specific allocations.
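Extrapolating DAC benchmarks into a production request is again a matter of scaling measured costs; in this sketch every number is hypothetical, and the safety margin for reruns and parameter exploration is a suggested convention, not a TeraGrid rule:

```python
# Sketch: scale a measured DAC per-run cost up to a production SU request.
# All figures are hypothetical illustrations, not TeraGrid data.

def production_su_request(su_per_run: float, runs_per_study: int,
                          studies_per_year: int, years: int = 1,
                          safety_margin: float = 1.2) -> float:
    """Multi-year SU request, padded by a margin for reruns and exploration."""
    return su_per_run * runs_per_study * studies_per_year * years * safety_margin

# Suppose the DAC benchmark measured ~500 SUs per run on the target system,
# and the science goal needs 40 runs per study, 10 studies per year, 2 years.
request = production_su_request(500, 40, 10, years=2)
print(round(request))  # 480000  (>200K SUs, so this would be a Large request)
```

Repeating the calculation per system tells you whether to name specific machines or to fold everything into one roaming figure.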
We’re here to work with you!
• Personal consultant contact upon receiving a production grant
• Documentation at www.teragrid.org/userinfo/index.php
• [email protected] for any problems or questions
• ASTA Program: intensive help from our consultants for a focused effort to optimize the effectiveness of your application's use of TeraGrid resources to achieve a scientific breakthrough. Tough to get into, but worth it! Talk to us: [email protected]
• Science Gateways: enable entire communities of users associated with a common scientific goal to use TeraGrid resources through a common interface. Contact: [email protected]
And now: A word about Safety and Security on the TeraGrid!
Over to Jim, our much-feared Chief of Security.
Thank you, and please enjoy this Tutorial and this first annual TeraGrid conference!