
01/26/05 CS267 Lecture 2a 1

High Performance Programming on a Single Processor:
Memory Hierarchies, Matrix Multiplication, Automatic Performance Tuning

James Demmel
www.cs.berkeley.edu/~demmel/cs267_Spr05

01/26/05 CS267 Lecture 2a 2

Outline

• Idealized and actual costs in modern processors
• Memory hierarchies
• Case Study: Matrix Multiplication
• Automatic Performance Tuning

01/26/05 CS267 Lecture 2a 3

Matrix-multiply, optimized several ways

[Figure: speed of n-by-n matrix multiply on a Sun Ultra-1/170, optimized several ways; peak = 330 MFlops.]

01/26/05 CS267 Lecture 2a 4

Automatic Performance Tuning

• Why can’t compilers choose the fastest code?
  • Difficulty of accurate performance modeling at compile time
    • Remember the complexity of the memory benchmark!
  • Large set of possible code variations to consider, with different
    • Data structures
    • Algorithms (i.e., perhaps (slightly) different answers)
  • May not know everything needed to tune until run-time
  • Architectures change faster than compilers
• Consequences
  • Can’t replace expertise of the user (yet)
  • You need to know the basics of writing efficient code
  • You need to recognize previously well-solved problems
  • Some important kernels (e.g., matmul) are automatically tuned
• Homework #1 (on web page)
  • Optimizing matrix multiply
  • Work in (assigned) interdisciplinary teams
  • Race against one another and against “best practice”

01/26/05 CS267 Lecture 2a 5

Search Over Block Sizes

• Performance models are useful for high-level algorithms
  • Helps in developing a blocked algorithm
  • Models have not proven very useful for block size selection
    • too complicated to be useful
      – See work by Sid Chatterjee for detailed model
    • too simple to be accurate
      – Multiple multidimensional arrays, virtual memory, etc.
    • Speed depends on matrix dimensions, details of code, compiler, processor
• Some systems use search over the “design space” of possible implementations
  • PHiPAC: www.icsi.berkeley.edu/~bilmes/phipac, in particular tr-98-035.ps.gz
  • ATLAS: www.netlib.org/atlas
  • Part of Matlab, some vendor libraries

01/26/05 CS267 Lecture 2a 6

What part of the Matmul Search Space Looks Like

A 2-D slice of a 3-D register-tile search space. The dark blue region was pruned.(Platform: Sun Ultra-IIi, 333 MHz, 667 Mflop/s peak, Sun cc v5.0 compiler)

[Figure: register-tile performance over a 2-D slice of the search space; horizontal axis is the number of rows in the register block, vertical axis is the number of columns in the register block.]

01/26/05 CS267 Lecture 2a 7

ATLAS (DGEMM n = 500)

• ATLAS is faster than all other portable BLAS implementations and it is comparable with machine-specific libraries provided by the vendor.

[Bar chart: DGEMM MFLOPS (vertical axis, 0-900) across architectures for Vendor BLAS, ATLAS BLAS, and the reference F77 BLAS.]

Source: Jack Dongarra

01/26/05 CS267 Lecture 2a 8

Tiling Alone Might Not Be Enough

• Naïve and a “naïvely tiled” code on Itanium 2
  • Searched all block sizes to find the best, b=56
  • Starting point for next homework

[Plot: MFlop/s (vertical axis, 0-1600) versus matrix dimension (0-800) for the naïve 3-loop code and the blocked code with b=56.]
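As a rough illustration of the kind of code behind the blocked curve (a minimal sketch, not the homework kernel; the function name, row-major layout, and edge handling are assumptions), a tiled matrix multiply parameterized by block size b looks like the following; an automatic tuner would time it over many candidate values of b and keep the best:

    #include <stddef.h>

    /* Minimal blocked matrix multiply: C = C + A*B for n-by-n row-major
       matrices. b is the block (tile) size; a tuner would search over b. */
    void matmul_blocked(size_t n, size_t b,
                        const double *A, const double *B, double *C)
    {
        for (size_t ii = 0; ii < n; ii += b)
            for (size_t jj = 0; jj < n; jj += b)
                for (size_t kk = 0; kk < n; kk += b) {
                    size_t i_max = ii + b < n ? ii + b : n;
                    size_t j_max = jj + b < n ? jj + b : n;
                    size_t k_max = kk + b < n ? kk + b : n;
                    /* Multiply one block of A by one block of B. */
                    for (size_t i = ii; i < i_max; ++i)
                        for (size_t k = kk; k < k_max; ++k) {
                            double a_ik = A[i*n + k];   /* reused across j */
                            for (size_t j = jj; j < j_max; ++j)
                                C[i*n + j] += a_ik * B[k*n + j];
                        }
                }
    }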

01/26/05 CS267 Lecture 2a 9

Optimizing in Practice

• Tiling for registers
  • loop unrolling, use of named “register” variables
• Tiling for multiple levels of cache and TLB
• Exploiting fine-grained parallelism in the processor
  • superscalar; pipelining
• Complicated compiler interactions
• Hard to do by hand (but you’ll try in homework #1)

01/26/05 CS267 Lecture 2a 10

Removing False Dependencies

• Using local variables, reorder operations to remove false dependencies

  Original code (the compiler must assume a[i] and b[i+1] may alias, creating a false read-after-write hazard between a[i] and b[i+1]):

    a[i]   = b[i]   + c;
    a[i+1] = b[i+1] * d;

  Rewritten with local variables:

    float f1 = b[i];
    float f2 = b[i+1];
    a[i]   = f1 + c;
    a[i+1] = f2 * d;

• With some compilers, you can declare a and b unaliased
  • Done via “restrict pointers,” a compiler flag, or a pragma
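A minimal sketch of the unaliased version using the C99 restrict qualifier (the function and parameter names are illustrative; a vendor compiler flag or pragma can achieve the same effect):

    /* With restrict, the programmer promises a and b do not alias,
       so the compiler may load b[i+1] before the store to a[i]. */
    void update(float *restrict a, const float *restrict b,
                float c, float d, int i)
    {
        float f1 = b[i];
        float f2 = b[i + 1];
        a[i]     = f1 + c;
        a[i + 1] = f2 * d;
    }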

01/26/05 CS267 Lecture 2a 11

Exploit Multiple Registers

• Reduce demands on memory bandwidth by pre-loading into local variables

  Original code:

    while( … ) {
      *res++ = filter[0]*signal[0] + filter[1]*signal[1] + filter[2]*signal[2];
      signal++;
    }

  With pre-loaded local variables:

    float f0 = filter[0];
    float f1 = filter[1];
    float f2 = filter[2];
    while( … ) {
      *res++ = f0*signal[0] + f1*signal[1] + f2*signal[2];
      signal++;
    }

  also: register float f0 = …;

The example is a convolution.

01/26/05 CS267 Lecture 2a 13

Loop Unrolling

• Expose instruction-level parallelism

    float f0 = filter[0], f1 = filter[1], f2 = filter[2];
    float s0 = signal[0], s1 = signal[1], s2 = signal[2];
    *res++ = f0*s0 + f1*s1 + f2*s2;
    do {
      signal += 3;
      s0 = signal[0];  res[0] = f0*s1 + f1*s2 + f2*s0;
      s1 = signal[1];  res[1] = f0*s2 + f1*s0 + f2*s1;
      s2 = signal[2];  res[2] = f0*s0 + f1*s1 + f2*s2;
      res += 3;
    } while( … );
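A complete, compilable version of the same unrolled convolution, as a hedged sketch (the function name, the explicit length parameter, and the requirement that the output count be 1 plus a multiple of 3 are assumptions added to make it self-contained):

    /* 3-tap filter, unrolled by 3 with rotating scalar "registers".
       n is the number of outputs, assumed to be 1 plus a multiple of 3;
       signal must hold n + 2 samples. All names are illustrative. */
    void conv3_unrolled(const float *filter, const float *signal,
                        float *res, int n)
    {
        float f0 = filter[0], f1 = filter[1], f2 = filter[2];
        float s0 = signal[0], s1 = signal[1], s2 = signal[2];
        *res++ = f0*s0 + f1*s1 + f2*s2;            /* output 0 */
        for (int i = 1; i < n; i += 3) {
            signal += 3;
            s0 = signal[0];  res[0] = f0*s1 + f1*s2 + f2*s0;
            s1 = signal[1];  res[1] = f0*s2 + f1*s0 + f2*s1;
            s2 = signal[2];  res[2] = f0*s0 + f1*s1 + f2*s2;
            res += 3;
        }
    }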

01/26/05 CS267 Lecture 2a 14

Expose Independent Operations

• Hide instruction latency
• Use local variables to expose independent operations that can execute in parallel or in a pipelined fashion
• Balance the instruction mix (what functional units are available?)

    f1 = f5 * f9;
    f2 = f6 + f10;
    f3 = f7 * f11;
    f4 = f8 + f12;
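One common way to apply this idea (a sketch, not taken from the slide) is to split a reduction into several independent accumulators so that successive multiply-adds do not wait on each other:

    /* Dot product with four independent accumulators; the four updates
       per iteration have no dependences on each other, so they can be
       pipelined or issued to multiple functional units.
       n is assumed to be a multiple of 4 for brevity. */
    double dot4(const double *x, const double *y, int n)
    {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        for (int i = 0; i < n; i += 4) {
            s0 += x[i]   * y[i];
            s1 += x[i+1] * y[i+1];
            s2 += x[i+2] * y[i+2];
            s3 += x[i+3] * y[i+3];
        }
        return (s0 + s1) + (s2 + s3);
    }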

01/26/05 CS267 Lecture 2a 15

Copy optimization

• Copy input operands or blocks
  • Reduce cache conflicts
  • Constant array offsets for fixed-size blocks
  • Expose page-level locality

  Original matrix (numbers are addresses):

     0   1   2   3
     4   5   6   7
     8   9  10  11
    12  13  14  15

  Reorganized into 2x2 blocks (each block stored contiguously):

     0   1 |  4   5
     2   3 |  6   7
    -------+-------
     8   9 | 12  13
    10  11 | 14  15
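A minimal sketch of the copy step (assuming a row-major n-by-n matrix and a block size b that divides n; names are illustrative). Applied with n = 4 and b = 2 it produces exactly the reorganized layout shown above:

    #include <stddef.h>

    /* Copy a row-major n-by-n matrix into block-contiguous storage:
       each b-by-b block is stored contiguously, blocks taken in
       row-major block order. */
    void copy_to_blocks(size_t n, size_t b, const double *A, double *Ablk)
    {
        size_t k = 0;
        for (size_t ii = 0; ii < n; ii += b)            /* block row    */
            for (size_t jj = 0; jj < n; jj += b)        /* block column */
                for (size_t i = ii; i < ii + b; ++i)    /* within block */
                    for (size_t j = jj; j < jj + b; ++j)
                        Ablk[k++] = A[i*n + j];
    }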

01/26/05 CS267 Lecture 2a 16

Locality in Other Algorithms

• The performance of any algorithm is limited by q
  • q = # flops / # memory refs = “computational intensity”
• In matrix multiply, we increase q by changing the computation order
  • increased temporal locality
• For other algorithms and data structures, tuning is still an open problem
  • sparse matrices (reordering, blocking); see the CSR sketch after this list
    • Weekly research meetings
    • bebop.cs.berkeley.edu
    • About to release OSKI – tuning sequential sparse-matrix-vector multiply and related operations
  • trees (B-trees are for the disk level of the hierarchy)
  • linked lists (some work done here)
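For the sparse case, a plain compressed-sparse-row (CSR) matrix-vector multiply shows why tuning is hard: roughly two flops per matrix entry, with indirect, hard-to-predict accesses to x. This is a generic sketch, not OSKI's tuned code:

    /* y = y + A*x for an m-row sparse matrix in CSR format:
       rowptr[i]..rowptr[i+1] indexes the nonzeros of row i in val/colind. */
    void spmv_csr(int m, const int *rowptr, const int *colind,
                  const double *val, const double *x, double *y)
    {
        for (int i = 0; i < m; ++i) {
            double sum = y[i];
            for (int k = rowptr[i]; k < rowptr[i+1]; ++k)
                sum += val[k] * x[colind[k]];   /* 2 flops, 2-3 memory refs */
            y[i] = sum;
        }
    }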

01/26/05 CS267 Lecture 2a 17

Other Automatic Tuning Efforts

• FFTW (MIT)
  • “Fastest Fourier Transform in the West”
  • Sequential and parallel
  • Many variants (real/complex, sine/cosine, multidimensional)
  • 1999 Wilkinson Prize
  • www.fftw.org
• SPIRAL (CMU)
  • Digital signal processing transforms
  • FFT and beyond
  • www.spiral.net
• BeBOP (UCB)
  • bebop.cs.berkeley.edu
  • Sparse matrix kernels
• Interprocessor communication kernels
  • BeBOP, Dongarra (UTK)

01/26/05 CS267 Lecture 2a 18

Summary of Performance Tuning

• Performance programming on uniprocessors requires
  • understanding of the memory system
  • understanding of fine-grained parallelism in the processor
• Simple performance models can aid in understanding
  • Two ratios are key to efficiency (relative to peak)
    1. Computational intensity of the algorithm:
       • q = f/m = # floating point operations / # slow memory references
    2. Machine balance of the memory system:
       • tm/tf = time for slow memory reference / time for floating point operation
  • Want q > tm/tf to get half of machine peak (see the short derivation after this list)
• Blocking (tiling) is a basic approach to increase q
  • Techniques apply generally, but the details (e.g., block size) are architecture dependent
  • Similar techniques are possible on other data structures and algorithms
• Now it’s your turn: Homework 1 (in assigned teams)
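A quick check of the half-peak rule, using the timing model from earlier in the lecture (f flops at tf seconds each, m slow memory references at tm seconds each):

    time = f*tf + m*tm = f*tf * (1 + (tm/tf)/q),   where q = f/m

So when q = tm/tf the total time is exactly 2*f*tf, i.e., half of the peak floating point rate; larger q moves performance closer to peak.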

01/26/05 CS267 Lecture 2a 19

Extra Slides

01/26/05 CS267 Lecture 2a 20

Reading for Today

• Sourcebook Chapter 3 (note that chapters 2 and 3 cover the material of lectures 2 and 3, but not in the same order).

• "Performance Optimization of Numerically Intensive Codes", by Stefan Goedecker and Adolfy Hoisie, SIAM 2001.

• Web pages for reference:
  • BeBOP homepage
  • ATLAS homepage
  • BLAS (Basic Linear Algebra Subroutines): reference for (unoptimized) implementations of the BLAS, with documentation.
  • LAPACK (Linear Algebra PACKage): a standard linear algebra library optimized to use the BLAS effectively on uniprocessors and shared memory machines (software, documentation, and reports).
  • ScaLAPACK (Scalable LAPACK): a parallel version of LAPACK for distributed memory machines (software, documentation, and reports).

• "Tuning Strassen's Matrix Multiplication for Memory Efficiency", Mithuna S. Thottethodi, Siddhartha Chatterjee, and Alvin R. Lebeck, in Proceedings of Supercomputing '98, November 1998 (postscript).

• "Recursive Array Layouts and Fast Parallel Matrix Multiplication", by Chatterjee et al., IEEE TPDS, November 2002.

01/26/05 CS267 Lecture 2a 21

Questions You Should Be Able to Answer

1. What is the key to understanding algorithm efficiency in our simple memory model?
2. What is the key to understanding machine efficiency in our simple memory model?
3. What is tiling?
4. Why does block matrix multiply reduce the number of memory references?
5. What are the BLAS?
6. Why does loop unrolling improve uniprocessor performance?

01/26/05 CS267 Lecture 2a 22

Reading Assignment

• Next week: current high performance architectures
  • Cray X1
    • http://www.sc-conference.org/sc2003/paperpdfs/pap183.pdf
  • Blue Gene/L
    • http://sc-2002.org/paperpdfs/pap.pap207.pdf
  • Clusters
    • http://www.mirror.ac.uk/sites/www.beowulf.org/papers/ICPP95/
• Optional
  • Chapters 1 and 2 of the “Sourcebook of Parallel Computing”
  • Alternative to the Beowulf paper:
    • http://now.cs.berkeley.edu/Case/case.html

01/26/05 CS267 Lecture 2a 23

Outline

• Goal of parallel computing
  • Solve a problem on a parallel machine that is impractical on a serial one
  • How long does/will the problem take on P processors?
• Quick look at parallel machines
• Understanding parallel performance
  • Speedup: the effectiveness of parallelism
  • Limits to parallel performance
• Understanding serial performance
  • Parallelism in modern processors
  • Memory hierarchies

01/26/05 CS267 Lecture 2a 24

Microprocessor revolution

• Moore’s law in microprocessor performance made desktop computing in 2000 what supercomputing was in 1990
• Massive parallelism has changed the high end
  • From a small number of very fast (vector) processors to
  • A large number (hundreds or thousands) of desktop processors
• Use the fastest “commodity” workstations as building blocks
  • Sold in enough quantity to make them inexpensive
  • Start with the best performance available (at a reasonable price)
• Today, most parallel machines are clusters of SMPs:
  • An SMP is a tightly coupled shared memory multiprocessor
  • A cluster is a group of these connected by a high-speed network

01/26/05 CS267 Lecture 2a 25

A Parallel Computer Today: NERSC-3 Vital Statistics

• 5 Teraflop/s peak performance – 3.05 Teraflop/s with Linpack
• 208 nodes, 16 CPUs per node at 1.5 Gflop/s per CPU
• 4.5 TB of main memory
  • 140 nodes with 16 GB each, 64 nodes with 32 GB, and 4 nodes with 64 GB
• 40 TB total disk space
  • 20 TB formatted shared, global, parallel file space; 15 TB local disk for system usage
• Unique 512-way Double/Single switch configuration

01/26/05 CS267 Lecture 2a 26

Performance Levels (for example on NERSC-3)

• Peak advertised performance (PAP): 5 Tflop/s
• LINPACK: 3.05 Tflop/s
• Gordon Bell Prize application performance: 2.46 Tflop/s
  • Materials science application at SC01
• Average sustained application performance: ~0.4 Tflop/s
  • Less than 10% of peak!

01/26/05 CS267 Lecture 2a 27

Millennium and CITRIS

• Millennium Central Cluster
  • 99 Dell 2300/6350/6450 Xeon Dual/Quad: 332 processors total
  • Total: 211 GB memory, 3 TB disk
  • Myrinet 2000 + 1000 Mb fiber ethernet
• CITRIS Cluster 1: 3/2002 deployment
  • 4 Dell Precision 730 Itanium Duals: 8 processors
  • Total: 20 GB memory, 128 GB disk
  • Myrinet 2000 + 1000 Mb copper ethernet
• CITRIS Cluster 2: 2002-2004 deployment
  • ~128 Dell McKinley-class Duals: 256 processors
  • Total: ~512 GB memory, ~8 TB disk
  • Myrinet 2000 (subcluster) + 1000 Mb copper ethernet
  • ~32 nodes available now

01/26/05 CS267 Lecture 2a 28

Outline

• Goal of parallel computing
  • Solve a problem on a parallel machine that is impractical on a serial one
  • How long does/will the problem take on P processors?
• Quick look at parallel machines
• Understanding parallel performance
  • Speedup: the effectiveness of parallelism
  • Limits to parallel performance
• Understanding serial performance
  • Parallelism in modern processors
  • Memory hierarchies

01/26/05 CS267 Lecture 2a 29

Speedup

• The speedup of a parallel application is

      Speedup(p) = Time(1) / Time(p)

• Where
  • Time(1) = execution time for a single processor, and
  • Time(p) = execution time using p parallel processors
• If Speedup(p) = p we have perfect speedup (also called linear scaling)
• As defined, speedup compares an application with itself on one and on p processors, but it is more useful to compare
  • the execution time of the best serial application on 1 processor, versus
  • the execution time of the best parallel algorithm on p processors

01/26/05 CS267 Lecture 2a 30

Efficiency

• The parallel efficiency of an application is defined as

      Efficiency(p) = Speedup(p) / p

• Efficiency(p) <= 1
  • For perfect speedup, Efficiency(p) = 1
• We will rarely have perfect speedup, because of
  • Lack of perfect parallelism in the application or algorithm
  • Imperfect load balancing (some processors have more work)
  • Cost of communication
  • Cost of contention for resources, e.g., memory bus, I/O
  • Synchronization time
• Understanding why an application is not scaling linearly will help in finding ways to improve the application’s performance on parallel computers.
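For example (illustrative numbers): if Time(1) = 100 s and Time(16) = 8 s, then Speedup(16) = 100/8 = 12.5 and Efficiency(16) = 12.5/16 ≈ 0.78, so roughly 22% of the processors’ time is lost to the effects listed above.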

01/26/05 CS267 Lecture 2a 31

Superlinear Speedup

Question: can we find “superlinear” speedup, that is, Speedup(p) > p? It can happen, usually for one of these reasons:

• Choosing a bad “baseline” for T(1)
  • Old serial code has not been updated with optimizations
  • Avoid this, and always specify what your baseline is
• Shrinking the problem size per processor
  • May allow it to fit in small fast memory (cache)
• Application is not deterministic
  • Amount of work varies depending on execution order
  • Search algorithms have this characteristic

01/26/05 CS267 Lecture 2a 32

Amdahl’s Law

• Suppose only part of an application runs in parallel
• Amdahl’s law
  • Let s be the fraction of work done serially,
  • so (1-s) is the fraction done in parallel
  • What is the maximum speedup for p processors?

      Speedup(p) = T(1)/T(p)

      T(p) = (1-s)*T(1)/p + s*T(1)
           = T(1)*((1-s) + p*s)/p

      Speedup(p) = p/((1-s) + p*s) = p/(1 + (p-1)*s)

  (assumes perfect speedup for the parallel part)

• Even if the parallel part speeds up perfectly, we may be limited by the sequential portion of the code.
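For example, with s = 0.01 and p = 1024, Speedup(1024) = 1024/(1 + 1023*0.01) ≈ 91, less than 9% of the ideal 1024; the next slide plots this curve as s varies.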

01/26/05 CS267 Lecture 2a 33

Amdahl’s Law (for 1024 processors)

[Plot: speedup on 1024 processors (vertical axis, 0-1024) versus the serial fraction s (horizontal axis, 0-0.04); the curve falls off sharply even for very small s.]

Does this mean parallel computing is a hopeless enterprise?

See: Gustafson, Montry, Benner, “Development of Parallel Methods for a 1024 Processor Hypercube”, SIAM J. Sci. Stat. Comp. 9, No. 4, 1988, pp.609.

01/26/05 CS267 Lecture 2a 34

Scaled Speedup

• Speedup improves as the problem size grows
  • Among other things, the Amdahl effect is smaller
• Consider
  • scaling the problem size with the number of processors (add a problem size parameter, n)
  • a problem in which running time scales linearly with the problem size: T(1,n) = T(1,1)*n
  • let n = p (problem size on p processors increases by a factor of p)

      ScaledSpeedup(p,n) = T(1,n)/T(p,n)

      T(p,n) = (1-s)*n*T(1,1)/p + s*T(1,1)
             = (1-s)*T(1,1) + s*T(1,1) = T(1,1)       (using n = p)

      ScaledSpeedup(p,n) = n*T(1,1)/T(1,1) = n = p

  (assumes the serial work does not grow with n)

01/26/05 CS267 Lecture 2a 35

Scaled Efficiency

• The previous definition of parallel efficiency was

      Efficiency(p) = Speedup(p)/p

• We often want to scale the problem size with the number of processors, but scaled speedup can be tricky
  • The previous definition depended on work being linear in the problem size
• May use an alternate definition of efficiency that depends on a notion of throughput or rate, R(p):
  • Floating point operations per second
  • Transactions per second
  • String matches per second
• Then

      Efficiency(p) = R(p)/(R(1)*p)

• May use a different problem size for R(1) and R(p)

01/26/05 CS267 Lecture 2a 36

Three Definitions of Efficiency: Summary

• People use the word “efficiency” in many ways
• Performance relative to advertised machine peak:

      Flop/s in application / max Flop/s on the machine

  • Integer, string, logical, or other operations could be used, but they should correspond to a machine-level instruction
• Efficiency for a fixed problem size:

      Efficiency(p) = Speedup(p)/p

• Efficiency for a scaled problem size:

      Efficiency(p) = R(p)/(R(1)*p)

• All of these may be useful in some context
  • Always make it clear what you are measuring

01/26/05 CS267 Lecture 2a 38

Limits of Scaling – an example of a current debate

• A test run of a global climate model on the Earth Simulator reported sustained performance of about 28 TFLOPS on 640 nodes. The model was an atmospheric global climate model (T1279L96) developed originally by CCSR/NIES and tuned by ESS.
• This corresponds to scaling down to a 10 km^2 grid
  • Many physical modeling assumptions from a 200 km^2 grid no longer hold
• The climate modeling community is debating the significance of these results

01/26/05 CS267 Lecture 2a 39

Performance Limits

[Figure: log of problem size (vertical axis) versus log of number of processors (horizontal axis), showing the regions of insufficient memory and insufficient parallelism, the scaled speedup path, and the fixed-size speedup path.]

01/26/05 CS267 Lecture 2a 40

Outline

• Goal of parallel computing
  • Solve a problem on a parallel machine that is impractical on a serial one
  • How long does/will the problem take on P processors?
• Quick look at parallel machines
• Understanding parallel performance
  • Speedup: the effectiveness of parallelism
  • Limits to parallel performance
• Understanding serial performance
  • Parallelism in modern processors
  • Memory hierarchies

01/26/05 CS267 Lecture 2a 41

Principles of Parallel Computing

• Speedup, efficiency, and Amdahl’s Law
• Finding and exploiting parallelism
• Finding and exploiting data locality
• Load balancing
• Coordination and synchronization
• Performance modeling

All of these things make parallel programming more difficult than sequential programming.

01/26/05 CS267 Lecture 2a 42

Overhead of Parallelism

• Given enough parallel work, this is the most significant barrier to getting desired speedup.

• Parallelism overheads include:
  • cost of starting a thread or process
  • cost of communicating shared data
  • cost of synchronizing
  • extra (redundant) computation

• Each of these can be in the range of milliseconds (= millions of arithmetic ops) on some systems

• Tradeoff: Algorithm needs sufficiently large units of work to run fast in parallel (i.e. large granularity), but not so large that there is not enough parallel work.

01/26/05 CS267 Lecture 2a 43

Locality and Parallelism

• Large memories are slower; fast memories are small.
• Storage hierarchies are designed to be fast on average.
• Parallel processors, collectively, have large, fast memories -- but the slow accesses to “remote” data we call “communication”.
• The algorithm should do most of its work on local data.

[Figure: conventional storage hierarchy repeated for each processor (Proc, cache, L2 cache, L3 cache, memory), with potential interconnects linking the processors.]

01/26/05 CS267 Lecture 2a 44

Load Imbalance

• Load imbalance is the time that some processors in the system are idle due to
  • insufficient parallelism (during that phase), or
  • unequal-size tasks.
• Examples of the latter:
  • adapting to “interesting parts of a domain”
  • tree-structured computations
  • fundamentally unstructured problems
• The algorithm needs to balance load
  • but techniques that balance load often reduce locality

01/26/05 CS267 Lecture 2a 45

Performance Programming is Challenging

• Speedup(P) = Time(1) / Time(P)
• Applications have “learning curves”

[Plot: speedup of Amber (chemical modeling) versus number of processors (0-140) for three successive versions of the code (8/94, 9/94, 12/94); speedup axis 0-70.]

01/26/05 CS267 Lecture 2a 46

Limits to Instruction Level Parallelism (ILP)

• Limits to pipelining: Hazards prevent next instruction from executing during its designated clock cycle

• Structural hazards: HW cannot support this combination of instructions (single person to fold and put clothes away)

• Data hazards: Instruction depends on result of prior instruction still in the pipeline (missing sock)

• Control hazards: Caused by delay between the fetching of instructions and decisions about changes in control flow (branches and jumps).

• The hardware and compiler will try to reduce these:
  • Reordering instructions, multiple issue, dynamic branch prediction, speculative execution…

• You can also enable parallelism by careful coding

01/26/05 CS267 Lecture 2a 47

Dependences (Data Hazards) Limit Parallelism

• A dependence or data hazard is one of the following:
  • true or flow dependence:
    • a writes a location that b later reads
    • (read-after-write or RAW hazard)
  • anti-dependence:
    • a reads a location that b later writes
    • (write-after-read or WAR hazard)
  • output dependence:
    • a writes a location that b later writes
    • (write-after-write or WAW hazard)

      true:    a =        anti:    = a        output:    a =
               = a                 a =                    a =
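The three cases written out as a small illustrative C fragment (the function and variable names are arbitrary):

    void dependence_examples(double x, double y, double z)
    {
        double a, w;

        a = x + y;       /* true (flow) dependence, RAW:             */
        w = a * 2.0;     /*   the second statement reads what the    */
                         /*   first one wrote                        */

        w = a * 2.0;     /* anti-dependence, WAR:                    */
        a = x + z;       /*   a is overwritten after being read      */

        a = x + y;       /* output dependence, WAW:                  */
        a = z * 2.0;     /*   both statements write a; order matters */

        (void)a; (void)w;
    }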

01/26/05 CS267 Lecture 2a 48

Outline

• Goal of parallel computing
  • Solve a problem on a parallel machine that is impractical on a serial one
  • How long does/will the problem take on P processors?
• Quick look at parallel machines
• Understanding parallel performance
  • Speedup: the effectiveness of parallelism
  • Limits to parallel performance
• Understanding serial performance
  • Parallelism in modern processors
  • Memory hierarchies

01/26/05 CS267 Lecture 2a 49

Little’s Law

Principle (Little's Law): for a production system in steady state,

    Inventory = Throughput × Flow Time

For parallel computing, this means:

    Required concurrency = Bandwidth × Latency

Example: 1000-processor system, 1 GHz clock (1 ns), 100 ns memory latency, 100 words of memory in data paths between CPU and memory at any given time.

• Main memory bandwidth is ~ 1000 × 100 words × 10^9/s = 10^14 words/sec.
• To achieve full performance, an application needs ~ 10^-7 s × 10^14 words/s = 10^7-way concurrency.
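Restated as a single application of Little's Law with the numbers above: concurrency = bandwidth × latency = 10^14 words/s × 10^-7 s = 10^7 independent words in flight.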

01/26/05 CS267 Lecture 2a 50

“Naïve” Matrix Multiply

    {implements C = C + A*B}
    for i = 1 to n
      for j = 1 to n
        for k = 1 to n
          C(i,j) = C(i,j) + A(i,k) * B(k,j)

Access patterns (annotations from the figure): C(i,j) reuses a value kept in a register; A(i,k) makes stride-N accesses to one row*; B(k,j) makes sequential accesses through the entire matrix.

• When cache (or TLB or memory) can’t hold entire B matrix, there will be a miss on every line.

• When cache (or TLB or memory) can’t hold a row of A, there will be a miss on each access

*Assumes column-major order.
Slide source: Larry Carter, UCSD
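The same triple loop written out in C, as a hedged sketch assuming row-major storage (with C's row-major layout it is B, not A, that is accessed with large stride in the inner loop, so the roles in the notes above swap):

    /* Naive matrix multiply, C = C + A*B, for n-by-n matrices stored
       row-major; a C-language transcription of the pseudocode above. */
    void matmul_naive(int n, const double *A, const double *B, double *C)
    {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                double cij = C[i*n + j];            /* kept in a register   */
                for (int k = 0; k < n; ++k)
                    cij += A[i*n + k] * B[k*n + j]; /* B access is stride n */
                C[i*n + j] = cij;
            }
    }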