CCR Multicore Performance


Transcript of CCR Multicore Performance

Page 1: CCR Multicore Performance


CCR Multicore Performance
ECMS Multiconference HPCS 2008

Nicosia, Cyprus, June 5, 2008

Geoffrey Fox, Seung-Hee Bae, Neil Devadasan, Rajarshi Guha, Marlon Pierce, Xiaohong Qiu, David Wild, Huapeng Yuan

Community Grids Laboratory, Research Computing UITS, School of Informatics, and POLIS Center, Indiana University

George Chrysanthakopoulos, Henrik Frystyk Nielsen
Microsoft Research, Redmond WA

[email protected]
http://grids.ucs.indiana.edu/ptliupages/presentations/

Page 2: CCR Multicore Performance

Motivation

• Exploring possible applications for tomorrow’s multicore chips (especially clients) with 64 or more cores (about 5 years out)
• One plausible set of applications is data-mining of Internet and local sensors
• Developing a library of efficient data-mining algorithms – Clustering (GIS, Cheminformatics, Bioinformatics) and Hidden Markov Methods (Speech Recognition)
• Choose algorithms that can be parallelized well

Page 3: CCR Multicore Performance

Approach

• Need three forms of parallelism:
  – MPI style
  – Dynamic threads as in pruned search
  – Coarse-grain functional parallelism
• Do not use an integrated language approach as in Darpa HPCS
• Rather, use “mash-ups” or “workflow” to link together modules in optimized parallel libraries
• Use Microsoft CCR/DSS, where DSS is the mash-up/workflow model built from CCR, and CCR supports MPI-style or dynamic threads

Page 4: CCR Multicore Performance

Parallel Programming Model

• If multicore technology is to succeed, mere mortals must be able to build effective parallel programs on commodity machines
• There are interesting new developments – especially the new Darpa HPCS languages X10, Chapel and Fortress
• However, if mortals are to program the 64-256 core chips expected in 5-7 years, then we must use near-term technology and we must make it easy
  – This rules out radical new approaches such as new languages
• Remember that the important applications are not scientific computing, but most of the algorithms needed are similar to those explored in scientific parallel computing
• We can divide the problem into two parts:
  – “Micro-parallelism”: high-performance parallel kernels or libraries that scale in the number of cores
  – “Macro-parallelism”: composition of kernels into complete applications
• We currently assume that the kernels of the scalable parallel algorithms/applications/libraries will be built by experts, with a broader group of programmers (mere mortals) composing library members into complete applications.

Page 5: CCR Multicore Performance

Multicore SALSA at CGL

SALSA: Service Aggregated Linked Sequential Activities

• Aims to link parallel and distributed (Grid) computing by developing parallel applications as services and not as programs or libraries
  – Improve traditionally poor parallel-programming development environments
• Developing a set of services (library) of multicore parallel data-mining algorithms
• Looking at the Intel list of algorithms (and all previous experience), we find there are two styles of “micro-parallelism”:
  – Dynamic search as in integer programming, Hidden Markov Methods (and computer chess); irregular synchronization with dynamic threads
  – “MPI style”, i.e. several threads running typically in SPMD (Single Program Multiple Data); collective synchronization of all threads together
• Most Intel RMS applications are “MPI style” and very close to scientific algorithms, even if the applications are not science

Page 6: CCR Multicore Performance

Scalable Parallel Components

• How do we implement micro-parallelism?
• There are no agreed high-level programming environments for building library members that are broadly applicable.
• However, lower-level approaches where experts define parallelism explicitly are available and have clear performance models.
• These include MPI for messaging, or just locks within a single shared memory.
• There are several patterns to support here, including the collective synchronization of MPI, the dynamic irregular thread parallelism needed in search algorithms, and more specialized cases like discrete-event simulation.
• We use Microsoft CCR (http://msdn.microsoft.com/robotics/) as it supports both the MPI and dynamic-threading styles of parallelism

Page 7: CCR Multicore Performance

There is MPI-style messaging and ...

• OpenMP annotation or automatic parallelism of existing software is a practical way to use those pesky cores with existing code
  – As parallelism is typically not expressed precisely, one needs luck to get good performance
  – Remember that writing in Fortran, C, C#, Java … throws away information about parallelism
• HPCS languages should be able to properly express parallelism, but we do not know how efficient and reliable the compilers will be
  – High Performance Fortran failed because the language expressed only a subset of parallelism and compilers did not give predictable performance
• PGAS (Partitioned Global Address Space) languages like UPC, Co-array Fortran, Titanium, HPJava
  – One decomposes the application into parts and writes the code for each component, but uses some form of global index
  – The compiler generates synchronization and messaging
  – The PGAS approach should work but has never been widely used – presumably because the compilers are not mature

Page 8: CCR Multicore Performance

Summary of micro-parallelism

• On new applications, use MPI/locks with explicit user decomposition
• A subset of applications can use “data parallel” compilers which follow in HPF’s footsteps
  – Graphics chips and the Cell processor motivate such special compilers, but it is not clear how many applications can be done this way
• OpenMP and/or compiler-based automatic parallelism for existing codes in conventional languages

Page 9: CCR Multicore Performance

Composition of Parallel Components

• The composition (macro-parallelism) step has many excellent solutions, as it does not have the same drastic synchronization and correctness constraints as one has for scalable kernels
  – Unlike the micro-parallelism step, which has no very good solutions
• Task parallelism in languages such as C++, C#, Java and Fortran90; general scripting languages like PHP, Perl, Python
• Domain-specific environments like Matlab and Mathematica
• Functional languages like MapReduce, F#
• HeNCE, AVS and Khoros from the past, and CCA from DoE
• Web Service/Grid workflow like Taverna, Kepler, InforSense KDE, Pipeline Pilot (from SciTegic) and the LEAD environment built at Indiana University
• Web solutions like mash-ups and DSS
• Many scientific applications use MPI for the coarse-grain composition as well as fine-grain parallelism, but this doesn’t seem elegant
• The new languages from Darpa’s HPCS program support task parallelism (composition of parallel components); decoupling composition and scalable parallelism will remain popular and must be supported

Page 10: CCR Multicore Performance

Integration of Services and “MPI”/Threads

• Kernels and composition must be supported both inside chips (the multicore problem) and between machines in clusters (the traditional parallel computing problem) or Grids.
• The scalable parallelism (kernel) problem is typically only interesting on true parallel computers (rather than grids), as the algorithms require low communication latency.
• However, composition is similar in both parallel and distributed scenarios, and it seems useful to allow the use of Grid and Web composition tools for the parallel problem.
  – This should allow parallel computing to exploit the large investment in service programming environments
• Thus in SALSA we express parallel kernels not as traditional libraries but as (some variant of) services, so they can be used by non-expert programmers
• Bottom line: we need a runtime that supports both inter-service linkage and micro-parallelism linkage; CCR and DSS have this property
  – Does it work, and what are the performance costs of the universality of the runtime?
  – Messaging need not be explicit for large data sets inside a multicore node; however, we still use small messages to synchronize

Page 11: CCR Multicore Performance

Mashups v. Workflow?

• Mashup tools are reviewed at http://blogs.zdnet.com/Hinchcliffe/?p=63
• Workflow tools are reviewed by Gannon and Fox: http://grids.ucs.indiana.edu/ptliupages/publications/Workflow-overview.pdf
• Both include scripting in PHP, Python, sh etc., as both implement distributed programming at the level of services
• Mashups use all types of service interfaces and perhaps do not have the potential robustness (security) of the Grid service approach
• Mashups typically use “pure” HTTP (REST)

Page 12: CCR Multicore Performance

“Service Aggregation” in SALSA

• Kernels and composition must be supported both inside chips (the multicore problem) and between machines in clusters (the traditional parallel computing problem) or Grids.
• The scalable parallelism (kernel) problem is typically only interesting on true parallel computers, as the algorithms require low communication latency.
• However, composition is similar in both parallel and distributed scenarios, and it seems useful to allow the use of Grid and Web composition tools for the parallel problem.
  – This should allow parallel computing to exploit the large investment in service programming environments
• Thus in SALSA we express parallel kernels not as traditional libraries but as (some variant of) services, so they can be used by non-expert programmers
• For parallelism expressed in CCR, DSS represents the natural service (composition) model.

Page 13: CCR Multicore Performance

Parallel Programming 2.0

• Web 2.0 Mashups (by definition the largest market) will drive composition tools for Grid, web and parallel programming
• Parallel Programming 2.0 will build on mashup tools like Yahoo Pipes and Microsoft Popfly

[Figure: Yahoo Pipes screenshot]

Page 14: CCR Multicore Performance

Inter-Service Communication

• Note that we are not assuming a uniform implementation of service composition, even if the user sees the same interface for multicore and a Grid
  – Good service composition inside a multicore chip can require highly optimized communication mechanisms between the services that minimize memory-bandwidth use.
  – Between systems, interoperability could motivate very different mechanisms to integrate services.
  – Need both MPI/CCR-level and Service/DSS-level communication optimization
• Note that bandwidth and latency requirements reduce as one increases the grain size of services
  – This suggests the smaller services inside closely coupled cores and machines will have stringent communication requirements.

Page 15: CCR Multicore Performance

Inside the SALSA Services

• We generalize the well-known CSP (Communicating Sequential Processes) of Hoare to describe the low-level approaches to fine-grain parallelism as “Linked Sequential Activities” in SALSA.
• We use the term “activities” in SALSA to allow one to build services from either threads, processes (the usual MPI choice) or even just other services.
• We choose the term “linkage” in SALSA to denote the different ways of synchronizing the parallel activities, which may involve shared memory rather than some form of messaging or communication.
• There are several engineering and research issues for SALSA:
  – The critical communication optimization problem area for communication inside chips, clusters and Grids.
  – We need to discuss what we mean by services
  – The requirements of multi-language support
• Further, it seems useful to re-examine MPI and define a simpler model that naturally supports threads or processes and the full set of communication patterns needed in SALSA (including dynamic threads).
  – Should we start a new standards effort in OGF, perhaps?

Page 16: CCR Multicore Performance

General Problem Classes

N data points X(x) in D-dimensional space, OR points with dissimilarity δ(i,j) defined between them

Unsupervised Modeling
• Find clusters without prejudice
• Model the distribution as clusters formed from Gaussian distributions with general shape
• Both can use multi-resolution annealing

Dimensional Reduction/Embedding
• Given vectors, map into a lower-dimension space “preserving topology” for visualization: SOM and GTM
• Given δ(i,j), associate data points with vectors in a Euclidean space with Euclidean distance approximately δ(i,j): MDS (can anneal) and Random Projection

Data parallel over the N data points X(x)

SALSA

Page 17: CCR Multicore Performance

Machines Used

AMD4: HP xw9300 workstation, 2 AMD Opteron 275 CPUs at 2.19 GHz, 4 cores total
  L2 cache 4x1 MB (summing both chips), memory 4 GB; XP Pro 64-bit, Windows Server, Red Hat
  C# benchmark computational unit: 1.388 µs

Intel4: Dell Precision PWS670, 2 Intel Xeon Paxville CPUs at 2.80 GHz, 4 cores total
  L2 cache 4x2 MB, memory 4 GB; XP Pro 64-bit
  C# benchmark computational unit: 1.475 µs

Intel8a: Dell Precision PWS690, 2 Intel Xeon E5320 CPUs at 1.86 GHz, 8 cores total
  L2 cache 4x4 MB, memory 8 GB; XP Pro 64-bit
  C# benchmark computational unit: 1.696 µs

Intel8b: Dell Precision PWS690, 2 Intel Xeon E5355 CPUs at 2.66 GHz, 8 cores total
  L2 cache 4x4 MB, memory 4 GB; Vista Ultimate 64-bit, Fedora 7
  C# benchmark computational unit: 1.188 µs

Intel8c: Dell Precision PWS690, 2 Intel Xeon E5345 CPUs at 2.33 GHz, 8 cores total
  L2 cache 4x4 MB, memory 8 GB; Red Hat 5.0, Fedora 7

Page 18: CCR Multicore Performance

Runtime System Used

• We implement micro-parallelism using the Microsoft CCR (Concurrency and Coordination Runtime), as it supports both MPI-style rendezvous and dynamic (spawned) threading styles of parallelism: http://msdn.microsoft.com/robotics/
• CCR supports exchange of messages between threads using named ports and has primitives such as:
  – FromHandler: spawn threads without reading ports
  – Receive: each handler reads one item from a single port
  – MultipleItemReceive: each handler reads a prescribed number of items of a given type from a given port. Note that items in a port can be general structures, but all must have the same type.
  – MultiplePortReceive: each handler reads one item of a given type from multiple ports.
• CCR has fewer primitives than MPI but can implement MPI collectives efficiently
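As a rough illustration of the port/receiver style these primitives imply, here is a minimal sketch assuming the Microsoft.Ccr.Core namespace from the CCR download; the names and workload are ours, not the benchmark code from these slides:

```csharp
// Minimal CCR sketch: Receive pulls one item per handler activation from a port;
// MultipleItemReceive waits for a fixed count of items, which is how a simple
// collective/barrier over spawned tasks can be expressed.
using System;
using System.Threading;
using Microsoft.Ccr.Core;

class CcrPrimitivesSketch
{
    static void Main()
    {
        using (var dispatcher = new Dispatcher(0, "worker pool"))     // 0 => one CCR thread per core
        {
            var queue = new DispatcherQueue("queue", dispatcher);
            var work = new Port<int>();
            var results = new Port<double>();
            var finished = new ManualResetEvent(false);

            // Receive: each activation reads one item from the 'work' port.
            Arbiter.Activate(queue,
                Arbiter.Receive(true, work, i => results.Post(Math.Sqrt(i))));

            // MultipleItemReceive: fires once 8 items of the same type are available.
            Arbiter.Activate(queue,
                Arbiter.MultipleItemReceive(false, results, 8,
                    items => { Console.WriteLine("combined {0} results", items.Length); finished.Set(); }));

            for (int i = 0; i < 8; i++)
                work.Post(i);                                         // post messages to the named port

            finished.WaitOne();
        }
    }
}
```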

• We use DSS (Decentralized System Services), built in terms of CCR, for the service model
• DSS has ~35 µs overhead and CCR a few µs

Page 19: CCR Multicore Performance

Parallel Multicore Deterministic Annealing Clustering

[Figure: parallel overhead on 8 threads (Intel 8b) versus 10000/(grain size n = points per core), from 0 to 4, for 10 clusters and 20 clusters; overhead axis 0 to 0.45.]

Speedup = 8/(1 + Overhead)

Overhead = Constant1 + Constant2/n

Constant1 = 0.05 to 0.1 (client Windows) due to thread runtime fluctuations

Page 20: CCR Multicore Performance

Parallel Multicore Deterministic Annealing Clustering

[Figure: overhead (“Constant1”) versus number of clusters (0 to 35) for the large (2M points) Indiana Census clustering on 8 threads, Intel 8b; vertical axis 0 to 0.25.]

Increasing the number of clusters decreases communication/memory bandwidth overheads.

This fluctuating overhead is due to 5-10% runtime fluctuations between threads.

Page 21: CCR Multicore Performance

Parallel Multicore Deterministic Annealing Clustering

[Figure: overhead (“Constant1”) versus number of clusters (0 to 18) for a subset of PubChem clustering on 8 threads, Intel 8b; vertical axis 0 to 0.20.]

Increasing the number of clusters decreases communication/memory bandwidth overheads.

The fluctuating overhead is reduced to 2% (as the data are bits, not doubles): 40,000 points with 1052 binary properties (the Census data have 2 real-valued properties).

Page 22: CCR Multicore Performance

Multicore Matrix Multiplication (dominant linear algebra in GTM)

[Figure: execution time in seconds (10 to 10,000) for 4096x4096 matrices versus block size (1 to 10,000), on 1 core and on 8 cores; the 8-core parallel overhead is about 1%.]

Speedup = Number of cores / (1 + f), where f = (sum of overheads) / (computation per core)

Computation grows like grain size n x number of clusters K. The overheads are:
• Synchronization: small with CCR
• Load balance: good
• Memory bandwidth limit: tends to 0 as K increases
• Cache use/interference: important
• Runtime fluctuations: dominant for large n, K

All our “real” problems have f ≤ 0.05 and hence speedups on 8-core systems greater than 7.6 (a worked example follows below).
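As a quick worked example using only the numbers quoted on this slide (f = 0.05 is the slide's bound, not a new measurement):

```csharp
// Speedup model from this slide: Speedup = cores / (1 + f).
int cores = 8;
double f = 0.05;                       // (sum of overheads) / (computation per core)
double speedup = cores / (1.0 + f);    // 8 / 1.05 ≈ 7.62, consistent with the ">7.6" above
```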

Parallel GTM Performance

[Figure: fractional overhead f versus 1/(grain size n), horizontal axis 0 to 0.02 and vertical axis 0 to 0.14, with 4096 interpolating clusters; labelled grain sizes include n = 500 (other labels garbled in extraction).]

SALSA

Page 23: CCR Multicore Performance

GTM projection of 2 clusters of 335 compounds in 155 dimensions

GTM projection of PubChem: 10,926,94 compounds in a 166-dimension binary property space takes 4 days on 8 cores. A 64x64 mesh of GTM clusters interpolates PubChem. Could usefully use 1024 cores! David Wild will use this for a GIS-style 2D browsing interface to chemistry.

[Figure panels: PCA (left) and GTM (right)]

Linear PCA v. nonlinear GTM on 6 Gaussians in 3D (PCA is Principal Component Analysis)

Parallel Generative Topographic Mapping (GTM): reduce dimensionality preserving topology and perhaps distances; here we project to 2D.

SALSA

Page 24: CCR Multicore Performance

• Use data decomposition as in classic distributed memory, but use shared memory for read variables. Each thread uses a “local” array for written variables to get good cache performance (sketched below).
• Multicore and cluster use the same parallel algorithms but different runtime implementations; the algorithms:
  – Accumulate matrix and vector elements in each process/thread
  – At an iteration barrier, combine contributions (MPI_Reduce)
  – Linear algebra (multiplication, equation solving, SVD)
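A hedged sketch of this decomposition, written with plain .NET threads rather than the authors' CCR code; the names and the summed quantity are illustrative only:

```csharp
// Each thread owns a "local" array for the variables it writes (good cache behaviour,
// no false sharing); shared data is only read. At the barrier the per-thread partial
// sums are combined, playing the role of MPI_Reduce.
using System.Threading;

class AccumulateThenReduce
{
    public static double[] Run(double[] points, int numClusters, int numThreads)
    {
        var partials = new double[numThreads][];
        var workers = new Thread[numThreads];

        for (int t = 0; t < numThreads; t++)
        {
            int me = t;
            workers[t] = new Thread(() =>
            {
                var local = new double[numClusters];              // written only by this thread
                for (int i = me; i < points.Length; i += numThreads)
                    local[i % numClusters] += points[i];          // stand-in for the real clustering sums
                partials[me] = local;
            });
            workers[t].Start();
        }
        foreach (var w in workers) w.Join();                      // iteration barrier

        var total = new double[numClusters];                      // combine step (MPI_Reduce analogue)
        foreach (var local in partials)
            for (int k = 0; k < numClusters; k++)
                total[k] += local[k];
        return total;
    }
}
```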

Parallel Programming Strategy

[Figure: a “Main Thread” with memory M and eight subsidiary threads t = 0 to 7, each with its own memory m0 to m7; MPI/CCR/DSS messages arrive from other nodes on both sides.]

Page 25: CCR Multicore Performance

MPI Exchange Latency in µs (20-30 µs computation between messaging)

Machine                               OS       Runtime        Grains     Parallelism   MPI Latency
Intel8c:gf12 (8 core 2.33 GHz,        Redhat   MPJE (Java)    Process    8             181
  in 2 chips)                                  MPICH2 (C)     Process    8             40.0
                                               MPICH2:Fast    Process    8             39.3
                                               Nemesis        Process    8             4.21
Intel8c:gf20 (8 core 2.33 GHz)        Fedora   MPJE           Process    8             157
                                               mpiJava        Process    8             111
                                               MPICH2         Process    8             64.2
Intel8b (8 core 2.66 GHz)             Vista    MPJE           Process    8             170
                                      Fedora   MPJE           Process    8             142
                                      Fedora   mpiJava        Process    8             100
                                      Vista    CCR (C#)       Thread     8             20.2
AMD4 (4 core 2.19 GHz)                XP       MPJE           Process    4             185
                                      Redhat   MPJE           Process    4             152
                                               mpiJava        Process    4             99.4
                                               MPICH2         Process    4             39.3
                                      XP       CCR            Thread     4             16.3
Intel4 (4 core)                       XP       CCR            Thread     4             25.8

SALSA
Messaging: CCR versus MPI; C# v. C v. Java

Page 26: CCR Multicore Performance

Shift Overhead on the dual-CPU AMD machine

[Figure: maximum stage overhead (µs, 0 to 120) versus stages (0 to 10 million) for Windows XP (MPJE), Red Hat (MPJE), Red Hat (mpiJava) and Red Hat (MPICH2).]

MPI Shift Latency on AMD4 (MPICH, mpiJava, MPJE)

Page 27: CCR Multicore Performance

Exchange Overhead on the dual-CPU AMD machine

[Figure: maximum stage overhead (µs, 0 to 250) versus stages (0 to 10 million) for Windows XP (MPJE), Red Hat (MPJE), Red Hat (mpiJava) and Red Hat (MPICH2).]

MPI Exchange Latency on AMD4 (MPICH, mpiJava, MPJE)

Page 28: CCR Multicore Performance

Exchange Overhead on the gf12 (Red Hat) machine

[Figure: maximum stage overhead (µs, 0 to 250) versus stages (0 to 10 million) for MPJE, MPICH2, MPICH2:Nemesis and MPICH2:enable-fast.]

MPI Exchange Latency on Intel8c Red Hat (MPICH, Nemesis, MPJE)

Page 29: CCR Multicore Performance

[Figure: one stage and the next stage of a pipeline; each of four threads (Thread0 to Thread3) reads messages from its own port (Port0 to Port3) and posts messages forward to the next stage.]

Pipeline is the simplest loosely synchronous execution in CCR. Note that CCR supports a thread-spawning model, whereas MPI usually uses fixed threads with message rendezvous.

Page 30: CCR Multicore Performance

[Figure: threads 0 to 3 each read from their own port and then post to a shared EndPort.]

Idealized loosely synchronous endpoint (broadcast) in CCR: an example of an MPI collective in CCR.

Page 31: CCR Multicore Performance

[Figure: threads 0 to 3 write exchanged messages to ports 0 to 3 and then read messages back from them.]

Exchanging messages with a 1D torus Exchange topology for loosely synchronous execution in CCR.

Page 32: CCR Multicore Performance

[Figure: four thread/port diagrams labelled (a) Pipeline, (b) Shift, (c) Two Shifts, (d) Exchange.]

Four communication patterns used in the CCR tests: (a) and (b) use CCR Receive, while (c) and (d) use CCR MultipleItemReceive.
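A hedged sketch of pattern (b), the shift, using CCR ports; the port-per-thread layout follows the figure, but the specific calls and names are our illustration, not the tests' code:

```csharp
// Shift on a ring of N slots: each slot posts its value to the port of its
// right-hand neighbour; a Receive on its own port completes the stage, and a
// MultipleItemReceive over a "stage done" port acts as the end-of-stage barrier.
using System;
using System.Threading;
using Microsoft.Ccr.Core;

class ShiftPatternSketch
{
    const int N = 4;

    static void Main()
    {
        using (var dispatcher = new Dispatcher(N, "shift"))
        {
            var queue = new DispatcherQueue("q", dispatcher);
            var ports = new Port<double>[N];
            for (int t = 0; t < N; t++) ports[t] = new Port<double>();

            var stageDone = new Port<int>();
            var finished = new ManualResetEvent(false);

            for (int t = 0; t < N; t++)
            {
                int me = t;
                // One Receive per slot: fires when the neighbour's message arrives.
                Arbiter.Activate(queue, Arbiter.Receive(false, ports[me], v =>
                {
                    Console.WriteLine("slot {0} received {1}", me, v);
                    stageDone.Post(me);
                }));
            }

            Arbiter.Activate(queue,
                Arbiter.MultipleItemReceive(false, stageDone, N, items => finished.Set()));

            for (int t = 0; t < N; t++)
                ports[(t + 1) % N].Post((double)t);   // the shift: send my value one slot to the right

            finished.WaitOne();
        }
    }
}
```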

Page 33: CCR Multicore Performance

CCR overhead (µs) for a computation of 27.76 µs between messaging; AMD4, 4 cores

                                      Number of Parallel Computations
                                      1       2       3       4       7       8
Spawned     Pipeline                  1.76    4.52    4.4     4.84    1.42    8.54
            Shift                     –       4.48    4.62    4.8     0.84    8.94
            Two Shifts                –       7.44    8.9     10.18   12.74   23.92
Rendezvous  Pipeline                  3.7     5.88    6.52    6.74    8.54    14.98
(MPI)       Shift                     –       6.8     8.42    9.36    2.74    11.16
            Exchange as Two Shifts    –       14.1    15.9    19.14   11.78   22.6
            Exchange                  –       10.32   15.5    16.3    11.3    21.38

(Rows with only five values on the original slide have no entry in the single-computation column, shown here as –.)

Page 34: CCR Multicore Performance

CCR overhead (µs) for a computation of 29.5 µs between messaging; Intel4, 4 cores

                                      Number of Parallel Computations
                                      1       2       3       4       7       8
Spawned     Pipeline                  3.32    8.3     9.38    10.18   3.02    12.12
            Shift                     –       8.3     9.34    10.08   4.38    13.52
            Two Shifts                –       17.64   19.32   21      28.74   44.02
Rendezvous  Pipeline                  9.36    12.08   13.02   13.58   16.68   25.68
(MPI)       Shift                     –       12.56   13.7    14.4    4.72    15.94
            Exchange as Two Shifts    –       23.76   27.48   30.64   22.14   36.16
            Exchange                  –       18.48   24.02   25.76   20      34.56

(Rows with only five values on the original slide have no entry in the single-computation column, shown here as –.)

Page 35: CCR Multicore Performance

CCR overhead (µs) for a computation of 23.76 µs between messaging; Intel8b, 8 cores

                                      Number of Parallel Computations
                                      1       2       3       4       7       8
Dynamic     Pipeline                  1.58    2.44    3       2.94    4.5     5.06
Spawned     Shift                     –       2.42    3.2     3.38    5.26    5.14
Threads     Two Shifts                –       4.94    5.9     6.84    14.32   19.44
Rendezvous  Pipeline                  2.48    3.96    4.52    5.78    6.82    7.18
MPI style   Shift                     –       4.46    6.42    5.86    10.86   11.74
            Exchange as Two Shifts    –       7.4     11.64   14.16   31.86   35.62
            CCR Custom Exchange       –       6.94    11.22   13.3    18.78   20.16

(Rows with only five values on the original slide have no entry in the single-computation column, shown here as –.)

Page 36: CCR Multicore Performance

Overhead (latency) of the AMD4 PC with 4 execution threads on MPI-style rendezvous messaging, for Shift and for Exchange implemented either as two shifts or as a custom CCR pattern.

[Figure: time in microseconds (0 to 30) versus stages (0 to 10 million) for AMD Exchange, AMD Exchange as 2 Shifts, and AMD Shift.]

Page 37: CCR Multicore Performance

Overhead (latency) of the Intel8b PC with 8 execution threads on MPI-style rendezvous messaging, for Shift and for Exchange implemented either as two shifts or as a custom CCR pattern.

[Figure: time in microseconds (0 to 70) versus stages (0 to 10 million) for Intel Exchange, Intel Exchange as 2 Shifts, and Intel Shift.]

Page 38: CCR Multicore Performance

Scaled Speed-up Tests

• The full clustering algorithm involves different values of the number of clusters NC as the computation progresses
• The amount of computation per data point is proportional to NC, so the overhead due to memory bandwidth (cache misses) declines as NC increases
• We did a set of tests on the clustering kernel with fixed NC
• Further, we adopted the scaled speed-up approach, looking at performance as a function of the number of parallel threads with a constant number of data points assigned to each thread
  – This contrasts with the fixed-problem-size scenario, where the number of data points per thread is inversely proportional to the number of threads
• We plot the run time for the same workload per thread, divided by the number of data points multiplied by the number of clusters, scaled relative to the time at the smallest data set (10,000 data points per thread); one reading of this normalization is sketched below
• Expect this normalized run time to be independent of the number of threads if not for parallel and memory-bandwidth overheads
  – It will decrease as NC increases, as the number of computations per point fetched from memory increases proportionally to NC
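One plausible reading of this normalization (our interpretation of the slide's wording, not code from the benchmarks):

```csharp
// Normalized ("scaled") run time: time per point per cluster, relative to the
// same quantity for the smallest case of 10,000 data points per thread.
static class ScaledRuntimeModel
{
    public static double Normalize(double runTime, long pointsPerThread, int clusters,
                                   double runTimeAt10000)
    {
        double perUnit        = runTime        / (pointsPerThread * (double)clusters);
        double perUnitAt10000 = runTimeAt10000 / (10000.0 * clusters);
        return perUnit / perUnitAt10000;   // ≈ 1 absent parallel and memory-bandwidth overheads
    }
}
```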

Page 39: CCR Multicore Performance

Scaled Runtime

[Figure a: Intel 8b, Vista, C#, CCR, 1 cluster: scaled runtime (1.0 to 1.6) versus number of threads (one per core, 1 to 8) for 10,000, 50,000 and 500,000 data points per thread.]

[Figure b: Intel 8b, Vista, C#, CCR, 80 clusters: scaled runtime (0.8 to 1.0) versus number of threads (1 to 8) for the same three data-point counts.]

Divide runtime by grain size n x number of clusters K.

8 cores (threads) and 1 cluster show the memory-bandwidth effect; 80 clusters show the cache/memory-bandwidth effect.

Page 40: CCR Multicore Performance

Intel 8b, C, 1 Cluster: Vista Scaled Run Time for Clustering Kernel

[Figure: scaled run time (0.9 to 1.3) versus number of threads (1 to 8) for 10,000, 50,000 and 500,000 data points.]

• Note the smallest dataset has the highest overheads as we increase the number of threads
  – Not clear why this is

Page 41: CCR Multicore Performance

Intel 8b, C, 80 Clusters: Vista Scaled Run Time for Clustering Kernel

[Figure: scaled run time (0.8 to 0.9) versus number of threads (1 to 8) for 10,000, 50,000 and 500,000 data points.]

• As we increase the number of clusters, the effects at 10,000 data points decrease

Page 42: CCR Multicore Performance

Intel 8c, C, 1 Cluster: Red Hat Scaled Run Time for Clustering Kernel

[Figure: scaled run time (1.0 to 1.15) versus number of threads (1 to 8) for 10,000, 50,000 and 500,000 data points.]

• Deviations from “perfect” scaled speed-up are much less for Red Hat than for Windows

Page 43: CCR Multicore Performance

Intel 8c, C, 80 Clusters: Red Hat Scaled Run Time for Clustering Kernel

[Figure: scaled run time (0.98 to 1.0) versus number of threads (1 to 8) for 10,000, 50,000 and 500,000 data points.]

• Deviations from “perfect” scaled speed-up are much less for Red Hat

Page 44: CCR Multicore Performance

AMD4, C, 1 Cluster: XP Scaled Run Time for Clustering Kernel

[Figure: scaled run time (1.0 to 1.06) versus number of threads (1 to 4) for 10,000, 50,000 and 500,000 data points.]

• This is significantly more stable than the Intel runs and shows little or no memory-bandwidth effect

Page 45: CCR Multicore Performance

AMD4, C#, 1 Cluster: XP Scaled Run Time for Clustering Kernel

[Figure: scaled run time (0.95 to 1.1) versus number of threads (1 to 4) for 10,000, 50,000 and 500,000 data points.]

• This is significantly more stable than the Intel C# 1-cluster runs

Page 46: CCR Multicore Performance

AMD4, C#, 80 Clusters: XP Scaled Run Time for Clustering Kernel

[Figure: scaled run time (0.75 to 0.85) versus number of threads (1 to 4) for 10,000, 50,000 and 500,000 data points.]

• This is broadly similar to the 80-cluster Intel C# runs, unlike the one-cluster case, which was very different

Page 47: CCR Multicore Performance

AMD4, C#, 1 Cluster: Windows Server Scaled Run Time for Clustering Kernel

[Figure: scaled run time (0.9 to 1.05) versus number of threads (1 to 4) for 10,000, 50,000 and 500,000 data points.]

• This is significantly more stable than the Intel C# runs

Page 48: CCR Multicore Performance

Run Time Fluctuations


Page 49: CCR Multicore Performance

Intel 8b, C#, 1 Cluster: Vista Run Time Fluctuations for Clustering Kernel

• This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points

[Figure: standard deviation / run time (0 to 0.2) versus number of threads (1 to 8) for 10,000, 50,000 and 500,000 data points.]

Page 50: CCR Multicore Performance

Intel 8-core (2 quad-core processors), C#, 80 Clusters: Vista Run Time Fluctuations for Clustering Kernel

• This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points

[Figure: standard deviation / run time (0 to 0.1) versus number of threads (1 to 8) for 10,000, 50,000 and 500,000 data points.]

Page 51: CCR Multicore Performance

Run Time Fluctuations for Clustering Kernel

[Figure a: Intel 8c, Red Hat, C, locks, 80 clusters: standard deviation / runtime (0 to 0.006) versus number of threads (one per core, 1 to 8) for 10,000, 50,000 and 500,000 data points per thread.]

[Figure b: Intel 8a, XP, C#, CCR, 80 clusters: standard deviation / runtime (0 to 0.1) versus number of threads (1 to 8) for the same three data-point counts.]

This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points.

Page 52: CCR Multicore Performance

AMD4, 1 Cluster: Windows Server Run Time Fluctuations for Clustering Kernel

[Figure: standard deviation / run time (0 to 0.2) versus number of threads (1 to 4) for 10,000, 50,000 and 500,000 data points.]

• This is the average of the standard deviation of the run time of the threads between messaging synchronization points
• XP (not shown) is similar

Page 53: CCR Multicore Performance

Cache Line Interference

• Early implementations of our clustering algorithm showed large fluctuations due to the cache-line interference effect (false sharing)
• We have one thread on each core, each calculating a sum of the same complexity and storing the result in a common array A, with different cores using different array locations
• Thread i storing its sum in A(i) is separation 1 – no memory-access interference, but cache-line interference
• Thread i storing its sum in A(X*i) is separation X
• Serious degradation if X < 8 (64 bytes) with Windows; note A is an array of doubles (8 bytes each)
• Less interference effect with Linux – especially Red Hat
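A hedged sketch of the experiment described above, using plain .NET threads; the per-iteration work is illustrative, not the clustering kernel:

```csharp
// Each of four threads repeatedly updates A[separation * i]. With separation = 1
// the four doubles share one 64-byte cache line (false sharing); with
// separation >= 8 each thread's slot sits on its own cache line.
using System;
using System.Diagnostics;
using System.Threading;

class CacheLineInterferenceTest
{
    static void Run(int separation)
    {
        const int threads = 4;
        const int iterations = 10000000;
        var a = new double[threads * separation + 8];      // the common array A
        var workers = new Thread[threads];
        var sw = Stopwatch.StartNew();

        for (int t = 0; t < threads; t++)
        {
            int slot = t * separation;
            workers[t] = new Thread(() =>
            {
                for (int i = 0; i < iterations; i++)
                    a[slot] += Math.Sqrt(i);               // same-complexity sum per thread, different slot
            });
            workers[t].Start();
        }
        foreach (var w in workers) w.Join();
        Console.WriteLine("separation {0,4}: {1} ms", separation, sw.ElapsedMilliseconds);
    }

    static void Main() { Run(1); Run(4); Run(8); Run(1024); }
}
```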

Page 54: CCR Multicore Performance

Cache Line Interference

Time (µs) versus thread array separation; the separation unit is 8 bytes (one double). Each cell gives the mean run time with (std/mean) in parentheses.

Machine   OS        Run Time     Sep = 1          Sep = 4          Sep = 8           Sep = 1024
Intel8b   Vista     C# CCR       8.03 (.029)      3.04 (.059)      0.884 (.0051)     0.884 (.0069)
Intel8b   Vista     C# Locks     13.0 (.0095)     3.08 (.0028)     0.883 (.0043)     0.883 (.0036)
Intel8b   Vista     C            13.4 (.0047)     1.69 (.0026)     0.66 (.029)       0.659 (.0057)
Intel8b   Fedora    C            1.50 (.01)       0.69 (.21)       0.307 (.0045)     0.307 (.016)
Intel8a   XP        CCR C#       10.6 (.033)      4.16 (.041)      1.27 (.051)       1.43 (.049)
Intel8a   XP        Locks C#     16.6 (.016)      4.31 (.0067)     1.27 (.066)       1.27 (.054)
Intel8a   XP        C            16.9 (.0016)     2.27 (.0042)     0.946 (.056)      0.946 (.058)
Intel8c   Red Hat   C            0.441 (.0035)    0.423 (.0031)    0.423 (.0030)     0.423 (.032)
AMD4      WinSrvr   C# CCR       8.58 (.0080)     2.62 (.081)      0.839 (.0031)     0.838 (.0031)
AMD4      WinSrvr   C# Locks     8.72 (.0036)     2.42 (.01)       0.836 (.0016)     0.836 (.0013)
AMD4      WinSrvr   C            5.65 (.020)      2.69 (.0060)     1.05 (.0013)      1.05 (.0014)
AMD4      XP        C# CCR       8.05 (.010)      2.84 (.077)      0.84 (.040)       0.840 (.022)
AMD4      XP        C# Locks     8.21 (.006)      2.57 (.016)      0.84 (.007)       0.84 (.007)
AMD4      XP        C            6.10 (.026)      2.95 (.017)      1.05 (.019)       1.05 (.017)

• Measurements at a separation X of 8 and of 1024 (and values between 8 and 1024, not shown) are essentially identical
• Measurements at 7 (not shown) are higher than those at 8, except for Red Hat, which shows essentially no enhancement at X < 8
• As the effects are due to co-location of thread variables in a 64-byte cache line, align the array with cache boundaries

Page 55: CCR Multicore Performance

Services v. Micro-parallelism

• Micro-parallelism uses low-latency CCR threads or MPI processes
• Services can be used where loose coupling is natural:
  – Input data
  – Algorithms
    • PCA
    • DAC, GTM, GM, DAGM, DAGTM – both for the complete algorithm and for each iteration
    • Linear algebra used inside or outside the above
    • Metric embedding: MDS, Bourgain, Quadratic Programming, ...
    • HMM, SVM, ...
  – User interface: GIS (Web Map Service) or equivalent

SALSA

Page 56: CCR Multicore Performance

DSS Service Measurements

[Figure: average run time (microseconds, 0 to 350) versus number of round trips (1 to 10,000).]

Timing of the HP Opteron multicore as a function of the number of simultaneous two-way service messages processed (November 2006 DSS release).

Measurements of Axis 2 show about 500 microseconds – DSS is about 10 times better.
