Parallel System Interconnections and Communications
Abdullah Algarni
Parallel Architectures: SISD, SIMD, MIMD; shared memory systems; distributed memory machines
Physical Organization of Parallel Platforms: the ideal parallel computer
Interconnection Networks for Parallel Computers: static and dynamic interconnection networks; switches; network interfaces
Outline
Network Topologies: buses, crossbars, multistage networks, the multistage Omega network, completely connected networks, linear arrays, meshes, hypercubes, tree-based networks, fat trees; evaluating interconnection networks
Grid Computing
Outline (cont.)
SISD: single instruction, single data – the classical von Neumann architecture
SIMD: single instruction, multiple data
MIMD: multiple instruction, multiple data – the most common and most general class of parallel machine
Classification of Parallel Architectures
• Also known as array processors
• A single instruction stream is broadcast to multiple processors, each having its own data stream
– Still used in graphics cards today
Single Instruction Multiple Data
• Each processor has its own instruction stream and input data
Further breakdown of MIMD is usually based on the memory organization:
– Shared memory systems
– Distributed memory systems
Multiple Instructions Multiple Data
All processes have access to the same address space
– e.g., a PC with more than one processor
Data exchange between processes by writing/reading shared variables
Advantage: Shared memory systems are easy to program
– Current standard in scientific programming: OpenMP
Shared memory systems
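Since OpenMP is the standard named above, a minimal sketch may help; it is illustrative only (the array size N and the gcc -fopenmp compile flag are assumptions, not from the slides). All threads read and write the same shared array, and a reduction combines their partial sums:

    /* Minimal OpenMP sketch: every thread sees the shared array a.
       Compile with, e.g., gcc -fopenmp (flags vary by compiler). */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        /* Each thread handles a slice of the shared array; the
           reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 1.0;
            sum += a[i];
        }
        printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

No explicit data movement appears anywhere: the shared address space makes every element of a visible to every thread, which is why these systems are considered easy to program.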
• Two versions of shared memory systems available today:
– Symmetric multiprocessors (SMP)
– Non-uniform memory access (NUMA)
Shared memory systems
• All processors share the same physical main memory
• Disadvantage: Memory bandwidth per processor is limited
• Typical size: 2-32 processors
Symmetric multi-processors (SMPs)
• There is more than one memory, and some memory is closer to a certain processor than other memory
◦ The whole memory is still addressable from all processors
NUMA architectures (1) (Non-uniform memory access)
• Advantage: reduces the memory limitation compared to SMPs
• Disadvantage: more difficult to program efficiently
• To reduce effects of non-uniform memory access, caches are often used
• Largest example of this type: SGI Origin with 10,240 processors
NUMA architectures (cont.)
Columbia Supercomputer
Each processor has its own address space
Communication between processes happens by explicit data exchange
Some of the protocols used:
– Sockets
– Message passing
– Remote procedure call / remote method invocation
Distributed memory machines
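Message passing is listed above as one of the protocols; a minimal MPI sketch (MPI is just one possible choice, and the value sent is arbitrary) shows the explicit data exchange between two private address spaces:

    /* Minimal MPI sketch: rank 0 sends one integer to rank 1.
       Each process owns a private address space; data moves only
       through explicit send/receive calls. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, value = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }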
• The performance of a distributed memory machine depends strongly on the quality and the topology of the network interconnect
Two classes of distributed memory machines:
1) Massively parallel processing systems (MPPs)
2) Clusters
Distributed memory machines (cont.)
Physical Organization of Parallel Platforms
A natural extension of the Random Access Machine (RAM) serial architecture is the Parallel Random Access Machine, or PRAM.
PRAMs consist of p processors and a global memory of unbounded size that is uniformly accessible to all processors.
Processors share a common clock but may execute different instructions in each cycle.
Ideal Parallel Computer
Depending on how simultaneous memory accesses are handled, PRAMs can be divided into four subclasses:
◦ Exclusive-read, exclusive-write (EREW) PRAM
◦ Concurrent-read, exclusive-write (CREW) PRAM
◦ Exclusive-read, concurrent-write (ERCW) PRAM
◦ Concurrent-read, concurrent-write (CRCW) PRAM
Ideal Parallel Computer
What does concurrent write mean, anyway?
◦ Common: write only if all values are identical
◦ Arbitrary: write the data from a randomly selected processor
◦ Priority: follow a pre-determined priority order
◦ Sum: write the sum of all data items
Ideal Parallel Computer
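A small sketch, assuming three hypothetical processors that all write to the same cell in one cycle, of how each policy resolves the conflict:

    /* Sketch: resolving simultaneous writes to one PRAM cell under the
       four concurrent-write policies. The written values are made up. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int writes[] = {3, 5, 3};   /* value written by each processor */
        int p = 3;

        /* Common: the write succeeds only if all values are identical. */
        int common_ok = 1;
        for (int i = 1; i < p; i++)
            if (writes[i] != writes[0]) common_ok = 0;

        /* Arbitrary: keep the value of a randomly selected processor. */
        int arbitrary = writes[rand() % p];

        /* Priority: the highest-priority (here, lowest-numbered) one wins. */
        int priority = writes[0];

        /* Sum: store the sum of all written values. */
        int sum = 0;
        for (int i = 0; i < p; i++) sum += writes[i];

        printf("common: %s, arbitrary: %d, priority: %d, sum: %d\n",
               common_ok ? "write" : "no write", arbitrary, priority, sum);
        return 0;
    }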
Processors and memories are connected via switches.
Since these switches must operate in O(1) time at the level of words, for a system of p processors and m words, the switch complexity is O(mp).
Physical Complexity of an Ideal Parallel Computer
Imagine how long it would take to complete a brain simulation.
The human brain contains about 10^11 neurons, and each neuron receives input from about 1,000 others.
Computing one change of brain "state" therefore requires about 10^11 × 10^3 = 10^14 calculations.
If each calculation could be done in 1 μs, one state update would take 10^14 × 10^-6 s = 10^8 s, roughly 3 years.
Clearly, with O(mp) switch complexity for large values of p and m, a true PRAM is not realizable.
Brain simulation
Important metrics:
– Latency: the minimal time to send a message from one processor to another (units: ms, μs)
– Bandwidth: the amount of data that can be transferred from one processor to another in a certain time frame (units: Bytes/s, KB/s, MB/s, GB/s; bits/s, Kb/s, Mb/s, Gb/s)
Interconnection Networks for Parallel Computers
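A common first-order way to combine the two metrics is the linear model t(m) = latency + m / bandwidth for an m-byte message; the sketch below evaluates it with illustrative numbers (the 2 μs latency and 1 GB/s bandwidth are assumptions, not measurements):

    /* First-order message-time model: t(m) = L + m/B.
       Latency dominates small messages, bandwidth dominates large ones. */
    #include <stdio.h>

    int main(void) {
        double latency = 2e-6;    /* 2 microseconds startup cost    */
        double bandwidth = 1e9;   /* 1 GB/s sustained transfer rate */
        for (double m = 1; m <= 1e6; m *= 1000) {
            double t = latency + m / bandwidth;
            printf("%8.0f bytes -> %.2e s\n", m, t);
        }
        return 0;
    }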
Important terms
Static and Dynamic Interconnection Networks
Classification of interconnection networks: (a) a static network; and (b) a dynamic network.
Switches map a fixed number of inputs to outputs.
Degree of a switch: the total number of ports on the switch.
Cost of a switch: grows as the square of the degree of the switch.
Switches
Processors talk to the network via a network interface.
The network interface may hang off the I/O bus or the memory bus.
In a physical sense, this distinguishes a cluster from a tightly coupled multicomputer.
The relative speeds of the I/O and memory buses impact the performance of the network.
Network Interfaces
Network Topologies
Single campus network: 538 nodes, 543 links
10 campus networks connected in a ring
- A variety of network topologies have been proposed and implemented.
- These topologies trade off performance for cost.
- Commercial machines often implement hybrids of multiple topologies for reasons of packaging, cost, and available components.
Some of the simplest and earliest parallel machines used buses.
All processors access a common bus for exchanging data.
The distance between any two nodes is O(1) in a bus. The bus also provides a convenient broadcast medium.
However, the bandwidth of the shared bus is a major bottleneck.
Typical bus-based machines are limited to dozens of nodes. Sun Enterprise servers and Intel Pentium-based shared-bus multiprocessors are examples of such architectures.
Buses
Buses (First type)
The bounded bandwidth of a bus limits the overall performance of the network as the number of nodes increases!
The execution time is lower bounded by T × K × P seconds, where
P: number of processors
K: data items accessed by each processor
T: time for each data access
Buses (Second type, with cache memory)
If we assume that 50% of the memory accesses (0.5K) are made to local cached data, then:
The execution time is lower bounded by 0.5 × T × K × P seconds, a 50% improvement compared to the first type.
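A quick numeric check of the two lower bounds, with illustrative values for T, K, and P (these numbers are assumptions for the example, not from the slides):

    /* Worked check of the bus lower bounds: the shared bus serializes
       every access, so time >= T*K*P; with 50% of accesses served from
       local cache, only half of them use the bus. */
    #include <stdio.h>

    int main(void) {
        double T = 100e-9;   /* 100 ns per bus access      */
        double K = 1e6;      /* data items per processor   */
        double P = 16;       /* processors sharing the bus */
        printf("no cache:  %.2f s\n", T * K * P);         /* 1.60 s */
        printf("50%% local: %.2f s\n", 0.5 * T * K * P);  /* 0.80 s */
        return 0;
    }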
Crossbars
A crossbar network uses a p×m grid of switches to connect p inputs to m outputs in a non-blocking manner.
The cost of a crossbar of p processors grows as O(p^2).
This is generally difficult to scale for large values of p.
Examples of machines that employ crossbars include the Sun Ultra HPC 10000 and the Fujitsu VPP500.
Crossbars
Crossbars have excellent performance scalability but poor cost scalability.
Buses have excellent cost scalability, but poor performance scalability.
Multistage interconnects strike a compromise between these extremes.
Multistage Networks
Multistage Networks
The schematic of a typical multistage interconnection network
One of the most commonly used multistage interconnects is the Omega network.
This network consists of log p stages, where p is the number of inputs/outputs.
So, for 8 processors and 8 memory banks, we need 3 stages.
Multistage Omega Network
Each stage of the Omega network implements a perfect shuffle: input i is connected to output 2i for 0 ≤ i ≤ p/2 − 1 and to output 2i + 1 − p for p/2 ≤ i ≤ p − 1, i.e., a one-bit left rotation of the binary representation of i.
Multistage Omega Network
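A minimal sketch of the perfect-shuffle wiring, using the 8-input case from the slides (the closed form below matches the one-bit left rotation):

    /* Perfect-shuffle stage of an Omega network: output j for input i
       is a one-bit left rotation of i's (log2 p)-bit representation. */
    #include <stdio.h>

    int shuffle(int i, int p) {
        /* closed form: j = 2i for i < p/2, else j = 2i + 1 - p */
        return (i < p / 2) ? 2 * i : 2 * i + 1 - p;
    }

    int main(void) {
        int p = 8;   /* 8 inputs/outputs, as in the slides' example */
        for (int i = 0; i < p; i++)
            printf("%d -> %d\n", i, shuffle(i, p));
        return 0;
    }

For p = 8 this prints 0->0, 1->2, 2->4, 3->6, 4->1, 5->3, 6->5, 7->7.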
The perfect shuffle patterns are connected using 2×2 switches. The switches operate in two modes: cross-over or pass-through.
Multistage Omega Network
Two switching configurations of the 2 × 2 switch:
(a) Pass-through; (b) Cross-over.
A complete Omega network with the perfect shuffle interconnects and switches can now be illustrated:
Multistage Omega Network
An Omega network has (p/2) × log p switching nodes, so the cost of such a network grows as Θ(p log p).
Let s be the binary representation of the source and d be that of the destination.
The data traverses the link to the first switching node. If the most significant bits of s and d are the same, the data is routed in pass-through mode by the switch; otherwise, it switches to crossover.
This process is repeated for each of the log p switching stages using the next significant bit.
Multistage Omega Network – Routing
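A small sketch of this routing rule, applied to the slides' example s = 010, d = 111 (it prints the switch setting chosen at each of the log p stages):

    /* Omega-network routing: at each of the log2(p) stages, compare the
       current most significant bit of s and d; equal bits mean
       pass-through, different bits mean crossover. */
    #include <stdio.h>

    void route(int s, int d, int logp) {
        for (int k = logp - 1; k >= 0; k--) {
            int sbit = (s >> k) & 1, dbit = (d >> k) & 1;
            printf("stage %d: %s\n", logp - 1 - k,
                   sbit == dbit ? "pass-through" : "crossover");
        }
    }

    int main(void) {
        route(2, 7, 3);   /* s = 010, d = 111 */
        return 0;
    }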
Multistage Omega Network – Routing
Routing from s = 010 to d = 111
Routing from s = 110 to d = 101
Each processor is connected to every other processor.
The number of links in the network scales as O(p^2).
While the performance scales very well, the hardware complexity is not realizable for large values of p.
In this sense, these networks are static counterparts of crossbars.
Completely Connected Network
Every node is connected only to a common node at the center.
The distance between any pair of nodes is O(1). However, the central node becomes a bottleneck.
In this sense, star connected networks are static counterparts of buses.
Star Connected Networks
In a linear array, each node has two neighbors, one to its left and one to its right.
If the nodes at either end are connected, we refer to it as a 1-D torus or a ring.
Linear Arrays
Linear arrays: (a) with no wraparound links; (b) with wraparound link.
Meshes
Two and three dimensional meshes: (a) 2-D mesh with no wraparound; (b) 2-D mesh with wraparound link (2-D torus); and (c) a 3-D mesh with no wraparound.
Two- and Three Dimensional Meshes
Hypercubes: The Construction
Properties:
- The distance between any two nodes is at most log p.
- Each node has log p neighbors.
Hypercubes
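Both properties follow from labeling the p = 2^d nodes with d-bit strings: two nodes are adjacent exactly when their labels differ in one bit, so the distance between any pair is the Hamming distance of their labels. A small sketch (the 8-node example is illustrative):

    /* Hypercube distances are Hamming distances of node labels. */
    #include <stdio.h>

    int hamming(unsigned a, unsigned b) {
        int dist = 0;
        for (unsigned x = a ^ b; x; x >>= 1)
            dist += x & 1;
        return dist;
    }

    int main(void) {
        int d = 3;   /* 3-dimensional hypercube, p = 8 nodes */
        printf("distance(000, 111) = %d\n", hamming(0, 7)); /* 3 = log p */
        printf("neighbors of node 5:");
        for (int k = 0; k < d; k++)
            printf(" %d", 5 ^ (1 << k));  /* flip one bit: 4, 7, 1 */
        printf("\n");
        return 0;
    }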
Tree-Based Networks
Complete binary tree networks: (a) a static tree network; and (b) a dynamic tree network.
Properties:
The distance between any two nodes is no more than 2 log p.
Links higher up the tree potentially carry more traffic than those at the lower levels.
For this reason, a variant called a fat tree fattens the links as we go up the tree.
Tree-Based Networks
Fat Trees
A fat tree network of 16 processing nodes.
Diameter: The distance between the farthest two nodes in the network.
Bisection Width: The minimum number of wires you must cut to divide the network into two equal parts.
Cost: the number of links or switches.
Degree: the number of links that connect to a processor.
Evaluating Interconnection Networks
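As a concrete instance, the sketch below evaluates all four metrics for a hypercube using the standard closed forms (compare the table on the next slide):

    /* The four evaluation metrics for a d-dimensional hypercube. */
    #include <stdio.h>

    int main(void) {
        int d = 4;           /* dimension; p = 2^d nodes */
        int p = 1 << d;
        printf("diameter        = %d\n", d);          /* log p       */
        printf("bisection width = %d\n", p / 2);      /* p/2         */
        printf("degree          = %d\n", d);          /* log p links */
        printf("cost (links)    = %d\n", p * d / 2);  /* (p log p)/2 */
        return 0;
    }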
Evaluating Static Interconnection Networks
Network                     Diameter             Bisection width   Degree         Cost (links)
Completely-connected        1                    p^2/4             p-1            p(p-1)/2
Star                        2                    1                 p-1 (center)   p-1
Complete binary tree        2 log((p+1)/2)       1                 3              p-1
Linear array                p-1                  1                 2              p-1
2-D mesh, no wraparound     2(√p - 1)            √p                4              2(p - √p)
2-D wraparound mesh         2⌊√p/2⌋              2√p               4              2p
Hypercube                   log p                p/2               log p          (p log p)/2
Wraparound k-ary d-cube     d⌊k/2⌋               2k^(d-1)          2d             dp
(Entries follow the standard analysis in Grama et al. [1].)
Evaluating Dynamic Interconnection Networks
Network          Diameter    Bisection width   Arc connectivity   Cost (no. of links)
Crossbar         1           p                 1                  p^2
Omega network    log p       p/2               2                  (p/2) log p
Dynamic tree     2 log p     1                 2                  p-1
(Entries follow the standard analysis in Grama et al. [1].)
Can we share resources between different organizations?
How? By using grid computing we can share computational resources across the world.
What is the relationship between parallel computing and grid computing?
Grid computing is a special case of parallel computing.
Grid Computing
Can we tie all of these components together tightly with software?
Figure: a high-speed network tying together PCs, SMPs, clusters, disks, RAID storage, and a visual data server, with a problem-solving environment on top (menus, templates, solvers, pre- and post-processing, meshing).
Figure: a user submits work at a user access point; a resource broker dispatches it to grid resources and returns the result.
GRID CONCEPT
Goals of Grid Computing
Reduce computing costs
Increase computing resources
Reduce job turnaround time
Reduce Complexity to Users
Increase Productivity
Are Grids a Solution?
What is needed?
Figure: clients (Matlab, Mathematica, C, Fortran, Java, Perl, a Java GUI) issue RPC-like requests; a request broker, scheduler, and database dispatch the work through a gatekeeper and ISP to computational resources (clusters, MPPs, workstations) running MPI, PVM, Condor, and similar systems.
You submit your work
And the Grid:
◦ Finds convenient places for it to be run
◦ Organises efficient access to your data
Caching, migration, replication
◦ Deals with authentication to the different sites that you will be using
◦ Interfaces to local site resource allocation mechanisms, policies
◦ Runs your jobs, monitors progress, recovers from problems, and tells you when your work is complete
What does the Grid do for you?
Virtual organisations negotiate with sites to agree on access to resources
Grid middleware runs on each shared resource to provide:
◦ Data services
◦ Computation services
◦ Single sign-on
Distributed services (both people and middleware) enable the grid
Typical current grid
E-infrastructure is the key!
TeraGrid (www.teragrid.org)
◦ A USA distributed terascale facility at 4 sites for open scientific research
Information Power Grid (www.ipg.nasa.gov)
◦ NASA's high-performance computing grid
GARUDA
◦ A grid initiative of the Department of Information Technology (Government of India)
◦ It connects 45 institutes in 17 cities in the country at 10/100 Mbps bandwidth
Examples of Grids
[1] Introduction to Parallel Computing, by Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar.
[2] Parallel System Interconnections and Communications, by M. D. Grammatikakis, D. Frank Hsu, and Miro Kraetzl.
[3] Wikipedia, the free encyclopedia.
[4] Introduction to Grid Computing with Globus (ibm.com/redbooks).
[5] Network and Parallel Computing: IFIP International Conference NPC 2008, Shanghai, China, October 2008, edited by Jian Cao et al.
[6] Network and Parallel Computing, edited by Jian Cao, Minglu Li, Min-You Wu, and Jinjun Chen.
References:
Any Questions?
List three types of dynamic interconnection networks that are used in parallel computing and evaluate each of them.
The answer:
My Question
Network          Diameter    Bisection width   Arc connectivity   Cost (no. of links)
Crossbar         1           p                 1                  p^2
Omega network    log p       p/2               2                  (p/2) log p
Dynamic tree     2 log p     1                 2                  p-1
(Entries repeat the dynamic-network table above, following Grama et al. [1].)
Abdullah Algarni
THANK YOU