Transcript of Grid1
Milestone 2
• Include the names of the papers
• You only have a page – be selective about what you include
• Be specific; summarize the authors’ contributions, not just what the paper is “about”.
• You might be able to reuse this text in the final paper if you’re specific and thorough.
Introduction to Grid Computing
Overview
• Background: What is the Grid?
• Related technologies
• Grid applications
• Communities
• Grid Tools
• Case Studies
What is a Grid?
• Many definitions exist in the literature
• Early definition (Foster and Kesselman, 1998): “A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational facilities”
• Kleinrock, 1969: “We will probably see the spread of ‘computer utilities’, which, like present electric and telephone utilities, will service individual homes and offices across the country.”
3-point checklist (Foster, 2002)
1. Coordinates resources not subject to centralized control
2. Uses standard, open, general-purpose protocols and interfaces
3. Delivers nontrivial qualities of service
• e.g., response time, throughput, availability, security
Grid Architecture
Autonomous, globally distributed computers/clusters
Why do we need Grids?
• Many large-scale problems cannot be solved by a single computer
• Globally distributed data and resources
Background: Related technologies
• Cluster computing
• Peer-to-peer computing
• Internet computing
Cluster computing
• Idea: put some PCs together and get them to communicate
• Cheaper to build than a mainframe supercomputer
• Different sizes of clusters
• Scalable – can grow a cluster by adding more PCs
Cluster Architecture
Peer-to-Peer computing
• Connect to other computers
• Can access files from any computer on the network
• Allows data sharing without going through central server
• Decentralized approach also useful for Grid
Peer to Peer architecture
Internet computing
• Idea: many idle PCs on the Internet
• Can perform other computations while not being used
• “Cycle scavenging” – rely on getting free time on other people’s computers
• Example: SETI@home
• What are advantages/disadvantages of cycle scavenging?
Some Grid Applications
• Distributed supercomputing
• High-throughput computing
• On-demand computing
• Data-intensive computing
• Collaborative computing
Distributed Supercomputing
• Idea: aggregate computational resources to tackle problems that cannot be solved by a single system
• Examples: climate modeling, computational chemistry
• Challenges include:
– Scheduling scarce and expensive resources
– Scalability of protocols and algorithms
– Maintaining high levels of performance across heterogeneous systems
High-throughput computing
• Schedule large numbers of independent tasks
• Goal: exploit unused CPU cycles (e.g., from idle workstations)
• Unlike distributed supercomputing, tasks are loosely coupled
• Examples: parameter studies, cryptographic problems
On-demand computing
• Use Grid capabilities to meet short-term requirements for resources that cannot conveniently be located locally
• Unlike distributed supercomputing, driven by cost-performance concerns rather than absolute performance
• Dispatch expensive or specialized computations to remote servers
Data-intensive computing
• Synthesize data in geographically distributed repositories
• Synthesis may be computationally and communication intensive
• Examples:
– High-energy physics experiments generate terabytes of distributed data and need complex queries to detect “interesting” events
– Distributed analysis of Sloan Digital Sky Survey data
Collaborative computing
• Enable shared use of data archives and simulations
• Examples:
– Collaborative exploration of large geophysical data sets
• Challenges:
– Real-time demands of interactive applications
– Rich variety of interactions
Grid Communities
• Who will use Grids?
• Broad view
– Benefits of sharing outweigh costs
– Universal, like a power grid
• Narrow view
– Cost of sharing across institutional boundaries is too high
– Resources are only shared when there is an incentive to do so
– Grids will be specialized to support specific communities with specific goals
Government
• Small number of users
• Couple small numbers of high-end resources
• Goals:
– Provide a “strategic computing reserve” for crisis management
– Support collaborative investigations of scientific and engineering problems
• Need to integrate diverse resources and balance diversity of competing interests
Health Maintenance Organization
• Share high-end computers, workstations, administrative databases, medical image archives, instruments, etc. across hospitals in a metropolitan area
• Enable new computationally enhanced applications
• Private grid
– Small scale, central management, common purpose
– Diversity of applications and complexity of integration
Materials Science Collaboratory
• Scientists operating a variety of instruments (electron microscopes, particle accelerators, X-ray sources) for characterization of materials
• Highly distributed and fluid community
• Sharing of instruments, archives, software, computers
• Virtual Grid – strong focus and narrow goals
– Dynamic membership, decentralized, sharing resources
Computational Market Economy
• Combine:
– Consumers with diverse needs and interests
– Providers of specialized services
– Providers of compute resources and network providers
• Public Grid
– Needs applications that can exploit loosely coupled resources
– Needs contributors of resources
Grid Users
• Many levels of users:
– Grid developers
– Tool developers
– Application developers
– End users
– System administrators
Some Grid challenges
• Data movement
• Data replication
• Resource management
• Job submission
Some Grid-Related Projects
• Globus
• Condor
• Nimrod-G
Globus Grid Toolkit
• Open-source toolkit for building Grid systems and applications
• Enabling technology for the Grid
• Share computing power, databases, and other tools securely online
• Facilities for:
– Resource monitoring
– Resource discovery
– Resource management
– Security
– File management
Data Management in Globus Toolkit
• Data movement
– GridFTP
– Reliable File Transfer (RFT)
• Data replication
– Replica Location Service (RLS)
– Data Replication Service (DRS)
GridFTP
• High-performance, secure, reliable data transfer protocol
• Optimized for wide-area networks
• Superset of the Internet FTP protocol
• Features:
– Multiple data channels for parallel transfers
– Partial file transfers
– Third-party transfers
– Reusable data channels
– Command pipelining
More GridFTP features
• Auto-tuning of parameters
• Striping
– Transfer data in parallel among multiple senders and receivers instead of just one
• Extended block mode
– Send data in blocks
– Each block carries its size and offset
– Data can arrive out of order
– Allows multiple streams
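The extended block mode idea can be sketched in a few lines: because every block carries its own offset, a receiver can write blocks from parallel streams straight into place regardless of arrival order. This is a toy illustration of the concept only, not the real GridFTP wire format; all names are invented.

```python
# Toy illustration of extended block mode: each received block is an
# (offset, data) pair, so blocks can be written into the output buffer
# in whatever order they arrive from parallel streams.

def reassemble(blocks: list, total_size: int) -> bytes:
    """Rebuild a file from (offset, data) blocks received out of order."""
    buf = bytearray(total_size)
    for offset, data in blocks:
        buf[offset:offset + len(data)] = data
    return bytes(buf)

# Blocks arriving out of order, as if from three parallel streams:
received = [(6, b"world"), (0, b"hello "), (11, b"!")]
result = reassemble(received, 12)
```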
Striping Architecture
• Use “Striped” servers
Limitations of GridFTP
• Not a web service protocol (does not employ SOAP, WSDL, etc.)
• Requires client to maintain an open socket connection throughout the transfer
– Inconvenient for long transfers
• Cannot recover from client failures
GridFTP
Reliable File Transfer (RFT)
• Web service with “job-scheduler” functionality for data movement
• User provides source and destination URLs
• Service writes job description to a database and moves files
• Service methods for querying transfer status
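The “job-scheduler” flavor of RFT can be sketched as a service that persists a job record at submission and answers status queries afterward, so the client need not stay connected during the transfer. This is a hypothetical sketch: the real RFT is a web service backed by a relational database, and none of these method names come from its interface.

```python
# Sketch of an RFT-style service: client submits source/destination URLs,
# the service stores a job record (standing in for the database) and
# exposes a status-query method.
import itertools

class TransferService:
    def __init__(self):
        self._jobs = {}                      # job id -> job record ("database")
        self._ids = itertools.count(1)

    def submit(self, source: str, dest: str) -> int:
        job_id = next(self._ids)
        self._jobs[job_id] = {"source": source, "dest": dest, "status": "queued"}
        return job_id                        # client may now disconnect

    def status(self, job_id: int) -> str:    # status-query method
        return self._jobs[job_id]["status"]

    def _run(self, job_id: int):             # server-side worker (stub)
        self._jobs[job_id]["status"] = "done"
```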
RFT
Replica Location Service (RLS)
• Registry to keep track of where replicas exist on physical storage system
• Users or services register files in RLS when files created
• Distributed registry
– May consist of multiple servers at different sites
– Increases scale
– Improves fault tolerance
Replica Location Service (RLS)
• Logical file name – unique identifier for the contents of a file
• Physical file name – location of a copy of the file on a storage system
• User can provide a logical name and ask for replicas
• Or query to find the logical name associated with a physical file location
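The logical/physical mapping above amounts to a two-way index, which a minimal sketch makes concrete. This is an illustration of the idea only; it is not the Globus RLS API.

```python
# Toy replica catalog in the spirit of RLS: a two-way mapping between a
# logical file name (lfn) and the physical names (pfn) of its replicas.
from collections import defaultdict

class ReplicaCatalog:
    def __init__(self):
        self._replicas = defaultdict(set)    # logical name -> physical names
        self._logical = {}                   # physical name -> logical name

    def register(self, lfn: str, pfn: str):
        self._replicas[lfn].add(pfn)
        self._logical[pfn] = lfn

    def lookup(self, lfn: str) -> set:
        """Logical name -> all known replicas."""
        return set(self._replicas[lfn])

    def logical_name(self, pfn: str) -> str:
        """Physical location -> logical name."""
        return self._logical[pfn]
```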
Data Replication Service (DRS)
• Pull-based replication capability
• Implemented as a web service
• Higher-level data management service built on top of RFT and RLS
• Goal: ensure that a specified set of files exists on a storage site
• First, queries RLS to locate the desired files
• Next, creates a transfer request using RFT
• Finally, registers the new replicas with RLS
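The three steps above can be sketched end to end. Here `catalog` stands in for RLS and `transfer` for RFT; both names, and the details of the flow, are assumptions made for illustration, not the DRS interface.

```python
# Sketch of the DRS flow: for each wanted logical file, locate a source
# replica, request a transfer to the destination site, then register the
# new copy back in the catalog.

def replicate(catalog: dict, transfer, site: str, wanted: set) -> dict:
    """Ensure every logical name in `wanted` has a replica at `site`."""
    new_replicas = {}
    for lfn in sorted(wanted):
        source = catalog[lfn][0]             # 1. query the catalog (RLS)
        dest = f"{site}/{lfn}"
        transfer(source, dest)               # 2. transfer request (RFT)
        catalog[lfn].append(dest)            # 3. register new replica (RLS)
        new_replicas[lfn] = dest
    return new_replicas
```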
Condor
• Original goal: high-throughput computing
• Harvest wasted CPU power from other machines
• Can also be used on a dedicated cluster
• Condor-G – Condor interface to Globus resources
Condor
• Provides many features of batch systems:
– job queueing
– scheduling policy
– priority scheme
– resource monitoring
– resource management
• Users submit their serial or parallel jobs
• Condor places them into a queue
• Condor schedules and monitors the jobs
• Informs the user upon completion
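The queueing-plus-priority behavior can be illustrated with a toy scheduler that always releases the highest-priority job first, breaking ties in submission order. This is purely a sketch; real Condor matches jobs to machines via its ClassAd mechanism rather than a simple heap.

```python
# Toy batch queue: jobs enter with a priority; the scheduler dispatches the
# highest-priority job first, FIFO among equal priorities.
import heapq
import itertools

class BatchQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()        # tie-breaker: submission order

    def submit(self, job: str, priority: int = 0):
        # heapq is a min-heap, so negate priority to pop the largest first
        heapq.heappush(self._heap, (-priority, next(self._seq), job))

    def next_job(self) -> str:
        return heapq.heappop(self._heap)[2]
```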
Nimrod-G
• Tool to manage execution of parametric studies across distributed computers
• Manages the experiment:
– Distributing files to remote systems
– Performing the remote computation
– Gathering results
• User submits a declarative plan file
– Parameters, default values, and commands necessary for performing the work
• Nimrod-G takes advantage of Globus toolkit features
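What a parametric engine does with a plan file can be sketched as a cross-product expansion: one task per combination of parameter values. The plan structure below is invented for illustration and is not Nimrod-G’s actual plan-file syntax.

```python
# Sketch of parametric-study expansion: turn a set of parameters (each with
# a list of values) plus a command template into one concrete task per
# combination of values.
from itertools import product

def expand_plan(parameters: dict, command: str) -> list:
    """One command line per point in the parameter cross-product."""
    names = list(parameters)
    tasks = []
    for values in product(*(parameters[n] for n in names)):
        subs = dict(zip(names, values))
        tasks.append(command.format(**subs))
    return tasks
```

For example, two temperatures and two seeds yield four independent tasks, which a tool like Nimrod-G would then farm out to remote resources.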
Nimrod-G Architecture
Grid Case Studies
• Earth System Grid
• LIGO
• TeraGrid
Earth System Grid
• Provide climate studies scientists with access to large datasets
• Data generated by computational models – requires massive computational power
• Most scientists work with subsets of the data
• Requires access to local copies of data
ESG Infrastructure
• Archival storage systems and disk storage systems at several sites
• Storage resource managers and GridFTP servers to provide access to storage systems
• Metadata catalog services
• Replica location services
• Web portal user interface
Earth System Grid
Earth System Grid Interface
Laser Interferometer Gravitational Wave Observatory (LIGO)
• Instruments at two sites to detect gravitational waves
• Each experiment run produces millions of files
• Scientists at other sites want these datasets on local storage
• LIGO deploys RLS servers at each site to register local mappings and collect info about mappings at other sites
Large Scale Data Replication for LIGO
• Goal: detection of gravitational waves
• Three interferometers at two sites
• Generate 1 TB of data daily
• Need to replicate this data across 9 sites to make it available to scientists
• Scientists need to learn where data items are, and how to access them
LIGO
LIGO Solution
• Lightweight data replicator (LDR)
• Uses parallel data streams, tunable TCP windows, and tunable write/read buffers
• Tracks where copies of specific files can be found
• Stores descriptive information (metadata) in a database
– Can select files based on description rather than filename
TeraGrid
• NSF high-performance computing facility
• Nine distributed sites, each with different capabilities, e.g., computational power, archiving facilities, visualization software
• Applications may require more than one site
• Data sizes on the order of gigabytes or terabytes
TeraGrid
TeraGrid
• Solution: use GridFTP and RFT with a front-end command-line tool (tgcp)
• Benefits of the system:
– Simple user interface
– High-performance data transfer capability
– Ability to recover from both client and server software failures
– Extensible configuration
TGCP Details
• Idea: hide low-level GridFTP commands from users
• Copy file smallfile.dat in a working directory to another system:
tgcp smallfile.dat tg-login.sdsc.teragrid.org:/users/ux454332
• Equivalent GridFTP command:
globus-url-copy -p 8 -tcp-bs 1198372 \
gsiftp://tg-gridftprr.uc.teragrid.org:2811/home/navarro/smallfile.dat \
gsiftp://tg-login.sdsc.teragrid.org:2811/users/ux454332/smallfile.dat
The reality
• We have spent a lot of time talking about “The Grid”
• There is “the Web” and “the Internet”
• Is there a single Grid?
The reality
• Many types of Grids exist
• Private vs. public
• Regional vs. global
• All-purpose vs. particular scientific problem