Lecture 2: Parallel Programming Platforms (Parallel Computing, Fall 2008)


Page 1:

Lecture 2 Parallel Programming Platforms

Parallel Computing, Fall 2008

Page 2:

Implicit Parallelism: Trends in Microprocessor Architectures

Microprocessor clock speeds have posted impressive gains over the past two decades (two to three orders of magnitude).

Higher levels of device integration have made available a large number of transistors.

The question of how best to utilize these resources is an important one.

Current processors use these resources in multiple functional units and execute multiple instructions in the same cycle.

The precise manner in which these instructions are selected and executed provides impressive diversity in architectures.

Page 3:

Implicit Parallelism

Pipelining and Superscalar execution

Very Long Instruction Word Processors

Page 4:

Pipelining and Superscalar Execution

Pipelining overlaps various stages of instruction execution to achieve performance.

At a high level of abstraction, an instruction can be executed while the next one is being decoded and the next one is being fetched.

This is akin to an assembly line for the manufacture of cars.
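
As a rough illustration with made-up numbers (not from the slides): a k-stage pipeline that completes one instruction per cycle finishes n instructions in about k + n − 1 cycles, versus roughly k × n cycles if each instruction had to occupy all stages serially. A minimal C sketch:

    #include <stdio.h>

    int main(void) {
        int k = 5;          /* hypothetical pipeline depth (stages) */
        long n = 1000000;   /* hypothetical instruction count */

        long unpipelined = (long)k * n; /* each instruction uses all k stages in turn */
        long pipelined   = k + n - 1;   /* fill the pipe once, then one completion per cycle */

        printf("speedup = %.2f\n", (double)unpipelined / pipelined);
        return 0;
    }

For large n the speedup approaches k, the pipeline depth, which is why deeper pipelines are attractive until hazards intervene.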

Page 5:

Pipelining and Superscalar Execution

Limitations:

The speed of a pipeline is eventually limited by the slowest stage. For this reason, conventional processors rely on very deep pipelines (20-stage pipelines in state-of-the-art Pentium processors).

However, in typical program traces, every fifth or sixth instruction is a conditional jump! This requires very accurate branch prediction.

The penalty of a misprediction grows with the depth of the pipeline, since a larger number of instructions will have to be flushed.

Page 6:

Pipelining and Superscalar Execution

One simple way of alleviating these bottlenecks is to use multiple pipelines.

The question then becomes one of selecting these instructions.

Page 7:

Superscalar Execution: An Example

Example of a two-way superscalar execution of instructions.

Page 8:

Superscalar Execution

Scheduling of instructions is determined by a number of factors:

True Data Dependency: The result of one operation is an input to the next.

Resource Dependency: Two operations require the same resource.

Branch Dependency: Scheduling instructions across conditional branch statements cannot be done deterministically a priori.

The scheduler, a piece of hardware, looks at a large number of instructions in an instruction queue and selects an appropriate number of instructions to execute concurrently based on these factors.

The complexity of this hardware is an important constraint on superscalar processors.
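
To make the first two dependency types concrete, here is a small hand-written C fragment (my own illustration, not from the slides), annotated with where a superscalar scheduler can and cannot find independent work:

    /* Illustrative only: which statements can issue in the same cycle? */
    int example(int x, int y, int u, int v, int w, int z) {
        int a = x + y;  /* (1) */
        int b = a * 2;  /* (2) true data dependency on (1): needs a first      */
        int c = u + v;  /* (3) independent of (1): could issue alongside it    */
        int d = w + z;  /* (4) also independent, but if the core has only one  */
                        /*     adder, (3) and (4) have a resource dependency   */
        return b + c + d;
    }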

Page 9:

Very Long Instruction Word (VLIW) Processors

The hardware cost and complexity of the superscalar scheduler is a major consideration in processor design.

To address these issues, VLIW processors rely on compile-time analysis to identify and bundle together instructions that can be executed concurrently.

These instructions are packed and dispatched together, hence the name very long instruction word.

This concept was used with some commercial success in the Multiflow Trace machine (circa 1984).

Variants of this concept are employed in the Intel IA64 processors.

Page 10:

Very Long Instruction Word (VLIW) Processors: Considerations

Issue hardware is simpler.

The compiler has a bigger context from which to select co-scheduled instructions.

Compilers, however, do not have runtime information such as cache misses. Scheduling is, therefore, inherently conservative.

Branch and memory prediction is more difficult.

VLIW performance is highly dependent on the compiler. A number of techniques such as loop unrolling, speculative execution, and branch prediction are critical.

Typical VLIW processors are limited to 4-way to 8-way parallelism.
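
Loop unrolling, one of the compiler techniques named above, exposes independent operations that a VLIW compiler can bundle into one wide instruction. A minimal hand-unrolled C sketch (illustrative, not from the slides; assumes n is a multiple of 4):

    /* Sum an array using four independent accumulators. There is no
       dependency chain between s0..s3 within an iteration, so a VLIW
       compiler can schedule the four adds in the same long instruction. */
    double sum_unrolled(const double *a, int n) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        for (int i = 0; i < n; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        return (s0 + s1) + (s2 + s3);
    }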

Page 11:

Limitations of Memory System Performance

The memory system, and not processor speed, is often the bottleneck for many applications.

Memory system performance is largely captured by two parameters, latency and bandwidth.

Latency is the time from the issue of a memory request to the time the data is available at the processor.

Bandwidth is the rate at which data can be pumped to the processor by the memory system.

Page 12:

Improving Effective Memory Latency Using Caches

Caches are small and fast memory elements between the processor and DRAM.

This memory acts as a low-latency high-bandwidth storage.

If a piece of data is repeatedly used, the effective latency of this memory system can be reduced by the cache.

The fraction of data references satisfied by the cache is called the cache hit ratio of the computation on the system.

The cache hit ratio achieved by a code on a memory system often determines its performance.
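
As a worked illustration with hypothetical numbers (not from the slides): if a cache hit costs t_c = 1 ns, a miss costs t_m = 100 ns, and the hit ratio is h, the effective access time is roughly t_eff = h·t_c + (1 − h)·t_m:

    #include <stdio.h>

    int main(void) {
        double tc = 1.0, tm = 100.0;   /* assumed hit/miss costs in ns */
        for (double h = 0.80; h <= 1.0001; h += 0.05) {
            double teff = h * tc + (1.0 - h) * tm; /* effective access time */
            printf("hit ratio %.2f -> %.1f ns per access\n", h, teff);
        }
        return 0;
    }

Note how the last few percent of hit ratio dominate: going from 95% to 100% cuts the effective latency by roughly a factor of six under these assumptions.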

Page 13:

Memory System Performance: Summary

Exploiting spatial and temporal locality in applications is critical for amortizing memory latency and increasing effective memory bandwidth.

The ratio of the number of operations to number of memory accesses is a good indicator of anticipated tolerance to memory bandwidth.

Memory layouts and organizing computation appropriately can make a significant impact on the spatial and temporal locality.
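
One standard example of organizing computation for spatial locality (a common textbook illustration, not specific to these slides): C stores matrices in row-major order, so a row-by-row traversal touches consecutive memory words, while a column-by-column traversal strides across rows and misses in cache far more often.

    #include <stdio.h>

    #define N 1024
    double a[N][N];

    /* Good spatial locality: the inner loop walks consecutive addresses. */
    double sum_rowwise(void) {
        double s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Poor spatial locality: the inner loop strides N doubles per access. */
    double sum_columnwise(void) {
        double s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        printf("%f %f\n", sum_rowwise(), sum_columnwise());
        return 0;
    }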

Page 14:

Alternate Approaches for Hiding Memory Latency

Prefetching

Multithreading

Spatial locality in accessing memory words.

Page 15:

Tradeoffs of Multithreading and Prefetching

Bandwidth requirements of a multithreaded system may increase very significantly because of the smaller cache residency of each thread.

Multithreaded systems become bandwidth bound instead of latency bound.

Multithreading and prefetching only address the latency problem and may often exacerbate the bandwidth problem.

Multithreading and prefetching also require significantly more hardware resources in the form of storage.

Page 16:

Explicitly Parallel Platforms

Page 17:

Flynn’s taxonomy of computer architectures (control mechanism)

Depending on the instruction and data streams, computer architectures can be distinguished into the following groups:

(1) SISD (Single Instruction Single Data): This is a sequential computer.

(2) SIMD (Single Instruction Multiple Data): This is a parallel machine like the TM CM-200. SIMD machines are suited for data-parallel programs where the same set of instructions is executed on a large data set. Some of the earliest parallel computers, such as the Illiac IV, MPP, DAP, CM-2, and MasPar MP-1, belonged to this class of machines.

(3) MISD (Multiple Instruction Single Data): Some consider a systolic array a member of this group.

(4) MIMD (Multiple Instruction Multiple Data): All other parallel machines. A MIMD architecture can be MPMD or SPMD. In a Multiple Program Multiple Data organization, each processor executes its own program, as opposed to a single program that is executed by all processors in a Single Program Multiple Data organization. Examples of such platforms include current-generation Sun Ultra Servers, SGI Origin Servers, multiprocessor PCs, workstation clusters, and the IBM SP.

Note: Some consider the CM-5 a combination of MIMD and SIMD, as it contains control hardware that allows it to operate in SIMD mode.
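
Most MIMD machines today are programmed in SPMD style: one program runs on every processor and branches on its rank. A minimal sketch using MPI (the slide names no particular library; MPI is assumed here as a representative message-passing interface):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* who am I?       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many of us? */

        /* Single program, multiple data: behavior branches on rank. */
        if (rank == 0)
            printf("coordinator among %d processes\n", size);
        else
            printf("worker %d\n", rank);

        MPI_Finalize();
        return 0;
    }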

Page 18:

SIMD and MIMD Processors

A typical SIMD architecture (a) and a typical MIMD architecture (b).

Page 19:

Taxonomy based on Address-Space Organization (memory distribution)

Message-Passing Architecture

In a distributed-memory machine, each processor has its own memory. Each processor can access its own memory faster than it can access the memory of a remote processor (NUMA, for Non-Uniform Memory Access). This architecture is also known as a message-passing architecture, and such machines are commonly referred to as multicomputers. Examples: Cray T3D/T3E, IBM SP1/SP2, workstation clusters.

Shared-Address-Space Architecture

Provides hardware support for read/write access to a shared address space. Machines built this way are often called multiprocessors.

(1) A shared-memory machine has a single address space shared by all processors (UMA, for Uniform Memory Access). The time taken by a processor to access any memory word in the system is identical. Examples: SGI Power Challenge, SMP machines.

(2) A distributed shared-memory system is a hybrid between the two previous ones: a global address space is shared among the processors but is physically distributed among them. Example: SGI Origin 2000.

Note: The existence of caches in shared-memory parallel machines causes cache coherence problems when a cached variable is modified by one processor and the shared variable is requested by another processor. Hence cc-NUMA, for cache-coherent NUMA architectures (Origin 2000).

Page 20:

NUMA and UMA Shared-Address-Space Platforms

Typical shared-address-space architectures: (a) uniform-memory-access shared-address-space computer; (b) uniform-memory-access shared-address-space computer with caches and memories; (c) non-uniform-memory-access shared-address-space computer with local memory only.

Page 21:

Message Passing vs. Shared Address Space Platforms

Message passing requires little hardware support, other than a network.

Shared address space platforms can easily emulate message passing. The reverse is more difficult to do (in an efficient manner).

Page 22:

Taxonomy based on processor granularity

Granularity sometimes refers to the power of the individual processors; sometimes it is also used to denote the degree of parallelism.

(1) A coarse-grained architecture consists of (usually few) powerful processors (e.g., old Cray machines).

(2) A fine-grained architecture consists of (usually many) inexpensive processors (e.g., TM CM-200, CM-2).

(3) A medium-grained architecture lies between the two (e.g., CM-5).

Process granularity refers to the amount of computation assigned to a particular processor of a parallel machine for a given parallel program. It also refers, within a single program, to the amount of computation performed before communication is issued. If the amount of computation is small, a process is fine-grained; otherwise, granularity is coarse.

Page 23:

Taxonomy based on processor synchronization

(1) In a fully synchronous system a global clock is used to synchronize all operations performed by the processors.

(2) An asynchronous system lacks any synchronization facilities. Processor synchronization needs to be explicit in a user’s program.

(3) A bulk-synchronous system falls between a fully synchronous and an asynchronous system. Synchronization of the processors is required only at certain points in the execution of a parallel program.
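
A bulk-synchronous program alternates independent local computation with global barriers. A minimal sketch using POSIX threads (pthreads are an assumption here; the slide names no particular API):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define SUPERSTEPS 3

    static pthread_barrier_t barrier;

    static void *worker(void *arg) {
        long id = (long)arg;
        for (int step = 0; step < SUPERSTEPS; step++) {
            /* local computation phase: no synchronization needed here */
            printf("thread %ld computing in superstep %d\n", id, step);
            /* threads synchronize only at the end of each superstep */
            pthread_barrier_wait(&barrier);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }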

Page 24:

Physical Organization of Parallel Platforms – ideal architecture(PRAM)

The Parallel Random Access Machine (PRAM) is one of the simplest ways to model a parallel computer.

A PRAM consists of a collection of (sequential) processors that can synchronously access a global shared memory in unit time. Each processor can thus access its shared memory as fast (and efficiently) as it can access its own local memory.

The main advantage of the PRAM is its simplicity in capturing parallelism and abstracting away the communication and synchronization issues of parallel computing.

Processors are considered to be in abundance and unlimited in number. The resulting PRAM algorithms thus exhibit unlimited parallelism (number of processors used is a function of problem size).

The abstraction thus offered by the PRAM is a fully synchronous collection of processors and a shared memory which makes it popular for parallel algorithm design.

It is, however, this abstraction that also makes the PRAM unrealistic from a practical point of view.

Full synchronization offered by the PRAM is too expensive and time demanding in parallel machines currently in use.

Remote memory (i.e., shared memory) access is considerably more expensive in real machines than local memory access.

UMA machines with unlimited parallelism are difficult to build.

Page 25:

Four Subclasses of PRAM

Depending on how concurrent access to a single memory cell (of the shared memory) is resolved, there are various PRAM variants. ER (Exclusive Read) or EW (Exclusive Write) PRAMs do not allow concurrent access to the shared memory; it is allowed, however, in CR (Concurrent Read) or CW (Concurrent Write) PRAMs. Combining the rules for read and write access, there are four PRAM variants:

EREW: Access to a memory location is exclusive. No concurrent read or write operations are allowed. Weakest PRAM model.

CREW: Multiple read accesses to a memory location are allowed. Multiple write accesses to a memory location are serialized.

ERCW: Multiple write accesses to a memory location are allowed. Multiple read accesses to a memory location are serialized. Can simulate an EREW PRAM.

CRCW: Allows multiple read and write accesses to a common memory location. Most powerful PRAM model. Can simulate both the EREW PRAM and the CREW PRAM.
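
For a concrete feel of synchronous PRAM steps (my own illustration, not from the slides), consider the classic EREW sum of n values: in each of the lg n rounds, every active processor reads two distinct cells and writes one distinct cell, so all accesses remain exclusive. A sequential C simulation of the rounds:

    #include <stdio.h>

    /* Simulate the EREW tree reduction: round k adds pairs 2^k apart.   */
    /* Within a round, all reads/writes touch distinct cells (exclusive), */
    /* and each round corresponds to one synchronous PRAM step.           */
    int main(void) {
        int a[8] = {3, 1, 4, 1, 5, 9, 2, 6};
        int n = 8;
        for (int stride = 1; stride < n; stride *= 2)      /* lg n rounds */
            for (int i = 0; i + stride < n; i += 2 * stride)
                a[i] += a[i + stride];                     /* one "processor" */
        printf("sum = %d\n", a[0]);                        /* prints 31 */
        return 0;
    }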

Page 26:

Resolving concurrent write access

(1) In the arbitrary PRAM, if multiple processors write into a single shared memory cell, an arbitrary processor succeeds in writing into this cell.

(2) In the common PRAM, processors must write the same value into the shared memory cell.

(3) In the priority PRAM, the processor with the highest priority (smallest or largest indexed processor) succeeds in writing.

(4) In the combining PRAM, if more than one processor writes into the same memory cell, the result written into it depends on the combining operator. If it is the sum operator, the sum of the values is written; if it is the maximum operator, the maximum is written.

Note: An algorithm designed for the common PRAM can be executed on a priority or arbitrary PRAM and exhibit similar complexity. The same holds for an arbitrary PRAM algorithm when run on a priority PRAM.

Page 27:

Interconnection Networks for Parallel Computers

Interconnection networks carry data between processors and to memory.

Interconnects are made of switches and links (wires, fiber).

Interconnects are classified as static or dynamic.

Static networks: consist of point-to-point communication links among processing nodes; also referred to as direct networks.

Dynamic networks: built using switches (switching elements) and links. Communication links are connected to one another dynamically by the switches to establish paths among processing nodes and memory banks.

Page 28:

Static and Dynamic Interconnection Networks

Classification of interconnection networks: (a) a static network; and (b) a dynamic network.

Page 29:

Network Topologies: Bus-Based Networks

The simplest network: a shared medium (bus) that is common to all the nodes.

The distance between any two nodes in the network is constant (O(1)).

Ideal for broadcasting information among nodes.

Scalable in terms of cost, but not scalable in terms of performance: the bounded bandwidth of a bus places limitations on the overall performance as the number of nodes increases.

Typical bus-based machines are limited to dozens of nodes. Sun Enterprise servers and Intel Pentium based shared-bus multiprocessors are examples of such architectures.

Page 30:

Network Topologies: Crossbar Networks

Employs a grid of switches or switching nodes to connect p processors to b memory banks.

Nonblocking network: the connection of a processing node to a memory bank does not block the connection of any other processing node to any other memory bank.

The total number of switching nodes required is Θ(pb). (It is reasonable to assume b ≥ p.)

Scalable in terms of performance; not scalable in terms of cost.

Examples of machines that employ crossbars include the Sun Ultra HPC 10000 and the Fujitsu VPP500.

Page 31:

Network Topologies: Multistage Networks

An intermediate class of networks between bus-based networks and crossbar networks.

Blocking networks: access to a memory bank by a processor may disallow access to another memory bank by another processor.

More scalable than the bus-based network in terms of performance; more scalable than the crossbar network in terms of cost.

The schematic of a typical multistage interconnection network.

Page 32:

Network Topologies: Multistage Omega Network

Omega network:

Consists of log p stages, where p is the number of inputs (processing nodes) and also the number of outputs (memory banks).

Each stage consists of an interconnection pattern that connects p inputs and p outputs: a perfect shuffle (left rotation), which maps input i to output j as follows:

j = 2i            for 0 ≤ i ≤ p/2 − 1
j = 2i + 1 − p    for p/2 ≤ i ≤ p − 1

Each switch has two connection modes:

Pass-through connection: the inputs are sent straight through to the outputs.

Cross-over connection: the inputs to the switching node are crossed over and then sent out.

An omega network has p/2 × log p switching nodes.
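
Equivalently (a standard way to state the perfect shuffle, offered as an illustration; the helper name is mine), the mapping is a one-bit left rotation of the lg p-bit index:

    #include <stdio.h>

    /* Perfect shuffle on p = 2^k inputs: left-rotate the k-bit index i. */
    unsigned shuffle(unsigned i, unsigned k) {
        unsigned msb = (i >> (k - 1)) & 1u;          /* bit that wraps around */
        return ((i << 1) | msb) & ((1u << k) - 1u);  /* rotate left by one    */
    }

    int main(void) {
        for (unsigned i = 0; i < 8; i++)             /* p = 8, so k = 3 */
            printf("%u -> %u\n", i, shuffle(i, 3));  /* 0..7 -> 0,2,4,6,1,3,5,7 */
        return 0;
    }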

Page 33:

Network Topologies: Multistage Omega Network

A complete omega network connecting eight inputs and eight outputs.

An omega network has p/2 × log p switching nodes, and the cost of such a network grows as Θ(p log p).

A complete Omega network with the perfect shuffle interconnects and switches can now be illustrated:

Page 34:

Network Topologies: Multistage Omega Network – Routing

Let s be the binary representation of the source and d be that of the destination processor.

The data traverses the link to the first switching node. If the most significant bit of s and the most significant bit of d are the same, then the data is routed in pass-through mode by the switch; otherwise, it is routed in cross-over mode.

This process is repeated, using the next bit, for each of the log p switching stages.

Note that this is not a non-blocking network.
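
A small simulation of this routing rule (a sketch following the slide's description: log p stages, one bit compared per stage, most significant bit first):

    #include <stdio.h>

    /* Route from source s to destination d through an omega network on     */
    /* p = 2^k terminals, printing the switch setting chosen at each stage. */
    void omega_route(unsigned s, unsigned d, unsigned k) {
        for (unsigned stage = 0; stage < k; stage++) {
            unsigned bit = k - 1 - stage;               /* msb first */
            unsigned sb = (s >> bit) & 1u;
            unsigned db = (d >> bit) & 1u;
            printf("stage %u: %s\n", stage,
                   sb == db ? "pass-through" : "cross-over");
        }
    }

    int main(void) {
        omega_route(2u /* 010 */, 7u /* 111 */, 3); /* hypothetical example */
        return 0;
    }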

Page 35:

Network Topologies: Multistage Omega Network – Routing

An example of blocking in an omega network: one of the messages (010 to 111 or 110 to 100) is blocked at link AB.

Page 36:

Network Topologies - Fixed Connection Networks (static)

Completely-connected Network

Star-Connected Network

Linear array

2d-array or 2d-mesh or mesh

3d-mesh

Complete Binary Tree (CBT)

2d-Mesh of Trees

Hypercube

Butterfly

Page 37:

Evaluating Static Interconnection Networks

One can view an interconnection network as a graph whose nodes correspond to processors and whose edges correspond to links connecting neighboring processors. The properties of these interconnection networks can be described in terms of a number of criteria.

(1) Set of processor nodes V. The cardinality of V is the number of processors p (also denoted by n).

(2) Set of edges E linking the processors. An edge e = (u, v) is represented by a pair (u, v) of nodes. If the graph G = (V, E) is directed, this means that there is a unidirectional link from processor u to v. If the graph is undirected, the link is bidirectional. In almost all networks that will be considered in this course, communication links will be bidirectional. The exceptions will be clearly distinguished.

(3) The degree d_u of node u is the number of links containing u as an endpoint. If graph G is directed, we distinguish between the out-degree of u (the number of pairs (u, v) ∈ E, for any v ∈ V) and, similarly, the in-degree of u. The degree d of graph G is the maximum of the degrees of its nodes, i.e., d = max_u d_u.

Page 38:

Evaluating Static Interconnection Networks (cont.)

(4) The diameter D of graph G is the maximum of the lengths of the shortest paths linking any two nodes of G. A shortest path between u and v is the path of minimal length linking u and v. We denote the length of this shortest path by d_uv. Then, D = max_{u,v} d_uv. The diameter of a graph G denotes the maximum delay (in terms of number of links traversed) that will be incurred when a packet is transmitted from one node to the other of the pair that contributes to D (i.e., from u to v or the other way around, if D = d_uv). Of course, such a delay holds only if messages follow shortest paths (the case for most routing algorithms).

(5) Latency is the total time to send a message, including software overhead. Message latency is the time to send a zero-length message.

(6) Bandwidth is the number of bits transmitted in unit time.

(7) Bisection width is the number of links that need to be removed from G to split the nodes into two sets of about the same size (±1).

Page 39:

Network Topologies - Fixed Connection Networks (static)

Completely-connected Network

Each node has a direct communication link to every other node in the network.

Ideal in the sense that a node can send a message to another node in a single step.

Static counterpart of crossbar switching networks; nonblocking.

Star-Connected Network

One processor acts as the central processor. Every other processor has a communication link connecting it to this central processor.

Similar to a bus-based network: the central processor is the bottleneck.

Page 40:

Network Topologies: Completely Connected and Star Connected Networks

(a) A completely-connected network of eight nodes; (b) a star-connected network of nine nodes.

Page 41:

Network Topologies - Fixed Connection Networks (static)

Linear array

In a linear array, each node (except the two nodes at the ends) has two neighbors, one each to its left and right.

Extension: ring or 1-D torus (linear array with wraparound).

2d-array or 2d-mesh or mesh

The processors are ordered to form a 2-dimensional structure (square), so that each processor is connected to its four neighbors (north, south, east, west), except perhaps for the processors on the boundary.

Extension of the linear array to two dimensions: each dimension has √p nodes, with a node identified by a two-tuple (i, j).

3d-mesh

A generalization of the 2d-mesh in three dimensions. Exercise: find the characteristics of this network and its generalization in k dimensions (k > 2).

Complete Binary Tree (CBT)

There is only one path between any pair of nodes.

Static tree network: has a processing element at each node of the tree.

Dynamic tree network: nodes at intermediate levels are switching nodes and the leaf nodes are processing elements.

Communication bottleneck at the higher levels of the tree. Solution: increase the number of communication links and switching nodes closer to the root (as in the fat tree, below).

Page 42:

Network Topologies: Linear Arrays

Linear arrays: (a) with no wraparound links; (b) with wraparound link.

Page 43:

Network Topologies: Two- and Three Dimensional Meshes

Two and three dimensional meshes: (a) 2-D mesh with no wraparound; (b) 2-D mesh with wraparound link (2-D torus); and

(c) a 3-D mesh with no wraparound.

Page 44:

Network Topologies: Tree-Based Networks

Complete binary tree networks: (a) a static tree network; and (b) a dynamic tree network.

Page 45:

Network Topologies: Fat Trees

A fat tree network of 16 processing nodes.

Page 46:

Evaluating Network Topologies

Linear array

|V| = N, |E| = N − 1, d = 2, D = N − 1, bw = 1 (bisection width).

2d-array or 2d-mesh or mesh

For |V| = N, we have a √N × √N mesh structure, with |E| ≤ 2N = O(N), d = 4, D = 2√N − 2, bw = √N.

3d-mesh

Exercise: find the characteristics of this network and its generalization in k dimensions (k > 2).

Complete Binary Tree (CBT) on N = 2^n leaves

For a complete binary tree on N leaves, we define the level of a node to be its distance from the root. The root is at level 0, and the number of nodes at level i is 2^i. Then, |V| = 2N − 1, |E| = 2N − 2 = O(N), d = 3, D = 2 lg N, bw = 1.

Page 47:

Network Topologies - Fixed Connection Networks (static)

2d-Mesh of Trees

An N²-leaf 2d-MOT consists of N² nodes ordered as in a 2d-array of N × N (but without the array links). The N rows and N columns of the 2d-MOT form N row CBTs and N column CBTs, respectively.

For such a network, |V| = N² + 2N(N − 1), |E| = O(N²), d = 3, D = 4 lg N, bw = N.

The 2d-MOT possesses an interesting decomposition property: if the 2N roots of the CBTs are removed, we get four N/2 × N/2 2d-MOTs.

The 2d-Mesh of Trees (2d-MOT) combines the advantages of 2d-meshes and binary trees: a 2d-mesh has large bisection width but large diameter (√N), while a binary tree on N leaves has small diameter but small bisection width. The 2d-MOT has both small diameter and large bisection width.

A 3d-MOT can be defined similarly.

Page 48:

Network Topologies - Fixed Connection Networks (static)

Hypercube

The hypercube is the major representative of a class of networks called hypercubic networks. Other such networks are the butterfly, the shuffle-exchange graph, the de Bruijn graph, cube-connected cycles, etc.

Each vertex of an n-dimensional hypercube is represented by a binary string of length n. Therefore there are |V| = 2^n = N vertices in such a hypercube. Two vertices are connected by an edge if their strings differ in exactly one bit position.

Let u = u_1 u_2 ... u_i ... u_n. An edge is a dimension-i edge if it links two nodes that differ in the i-th bit position. This way, vertex u is connected to vertex u^(i) = u_1 u_2 ... ū_i ... u_n by a dimension-i edge. Therefore |E| = N lg N / 2 and d = lg N = n.

The hypercube is the first network examined so far whose degree is not a constant but a (very slowly growing) function of N. The diameter of the hypercube is D = lg N. A path from node u to node v can be determined by correcting the bits of u to agree with those of v, starting from dimension 1 in a “left-to-right” fashion.

The bisection width of the hypercube is bw = N/2. This is a result of the following property of the hypercube: if all N/2 edges of dimension i are removed from an n-dimensional hypercube, we get two hypercubes, each of dimension n − 1.
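
The bit-correcting path described above is easy to state in code (a sketch of the standard dimension-ordered, or E-cube, routing idea; the function name is mine):

    #include <stdio.h>

    /* Print the dimension-ordered path from u to v in an n-cube:     */
    /* correct differing bits one dimension at a time, left to right. */
    void ecube_path(unsigned u, unsigned v, unsigned n) {
        printf("%u", u);
        for (unsigned i = 0; i < n; i++) {
            unsigned mask = 1u << (n - 1 - i);   /* dimension i, msb first */
            if ((u ^ v) & mask) {
                u ^= mask;                       /* traverse a dimension-i edge */
                printf(" -> %u", u);
            }
        }
        printf("\n");
    }

    int main(void) {
        ecube_path(0u, 5u, 3); /* prints 0 -> 4 -> 5, i.e. 000 -> 100 -> 101 */
        return 0;
    }

Since at most n = lg N bits differ, every route has length at most lg N, matching the diameter above.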

Page 49:

Network Topologies: Hypercubes and their Construction

Construction of hypercubes from hypercubes of lower dimension.

Page 50:

Network Topologies - Fixed Connection Networks (static)

Butterfly

The vertices of a butterfly are represented by (w, i), where w is a binary string of length n and 0 ≤ i ≤ n. Therefore |V| = (n + 1)2^n = (lg N + 1)N.

Two vertices (w, i) and (w', i') are connected by an edge if i' = i + 1 and either (a) w = w', or (b) w and w' differ in the i'-th bit. As a result, |E| = O(N lg N), d = 4, D = 2 lg N = 2n, and bw = N.

Nodes with i = j are called level-j nodes. If we remove the nodes of level 0, we get two butterflies of size N/2. If we collapse all levels of an n-dimensional butterfly into one, we get a hypercube.

Page 51:

Network Topologies - Fixed Connection Networks (static)

Other:

Another hypercubic network is the cube-connected cycles (CCC), which is a hypercube whose nodes are replaced by rings/cycles of length n, so that the resulting network has constant degree. This network can also be viewed as a butterfly whose first and last levels are collapsed into one. For such a network, |V| = N lg N = n2^n, |E| = 3n2^(n−1), d = 3, D = 2 lg N, bw = N/2.

Page 52:

Evaluating Static Interconnection Networks

(p denotes the number of processing nodes; for the wraparound k-ary d-cube, p = k^d.)

Network                    Diameter           Bisection width   Arc connectivity   Cost (no. of links)
Completely-connected       1                  p²/4              p − 1              p(p − 1)/2
Star                       2                  1                 1                  p − 1
Complete binary tree       2 lg((p + 1)/2)    1                 1                  p − 1
Linear array               p − 1              1                 1                  p − 1
2-D mesh, no wraparound    2(√p − 1)          √p                2                  2(p − √p)
2-D wraparound mesh        2⌊√p/2⌋            2√p               4                  2p
Hypercube                  lg p               p/2               lg p               (p lg p)/2
Wraparound k-ary d-cube    d⌊k/2⌋             2k^(d−1)          2d                 dp

Page 53:

End

Thank you!