Parallel Programming with Message Passing Interface (MPI) - McGill - Introduction to MPI

Transcript of the slide deck presented on October 6, 2016 by Bart Oldeman.

Page 1:

Parallel Programming with Message Passing Interface (MPI)

October 6, 2016


By: Bart Oldeman

Page 2:

Financial Partners

Page 3:

Setup for the workshop

1. Get a user ID and password paper (provided in class):

##:**********

2. Access to the local computer (replace "##" and "___" with appropriate values; "___" is provided in class):
   a. User name: csuser## (ex.: csuser99)
   b. Password: ___@[S## (ex.: sec@[S99)

3. SSH connection to Guillimin (replace **********):
   a. Host name: guillimin.calculquebec.ca
   b. User name: class## (ex.: class99)
   c. Password: **********

Page 4:

Import Examples and Exercises

● On Guillimin:

  cp -a /software/workshop/cq-formation-intro-mpi ~/
  cd cq-formation-intro-mpi

● From GitHub:

  module load git   # If on Guillimin
  git clone -b mcgill \
      https://github.com/calculquebec/cq-formation-intro-mpi.git
  cd cq-formation-intro-mpi

Page 5:

Outline

● General Concepts
● A First Code
● MPI Data Types
● Point-to-point Communications
● Synchronization Between Processes
● Collective Communications
● MPI for Python (mpi4py)
● Conclusion

Page 6:

General Concepts

Page 7:

Parallel Algorithm - Why?

● When do you need to parallelize your algorithm?
  ○ The algorithm takes too much time on a single processor, or even on a full compute node.
  ○ The algorithm uses too much memory.
    ■ The algorithm may fit in cache memory, but only if the data is split across multiple processors.
  ○ The algorithm does a lot of read and write operations (I/O) on the network storage.
    ■ The data does not fit on a single node.
    ■ The bandwidth and latency of a single node are not sufficient.

Page 8:

Vocabulary and Concepts

● Serial tasks
  ○ Any task that cannot be split into two simultaneous sequences of actions
  ○ Examples: starting a process, reading a file, any communication between two processes
● Parallel tasks
  ○ Data parallelism: the same action applied to different data. These could be serial tasks done in parallel.
  ○ Process parallelism: one action on one set of data, but the action is split across multiple processes or threads.
    ■ Data partitioning: rectangles or blocks

Page 9:

Serial Code Parallelization

● Implicit parallelization | minimum work for you
  ○ Threaded libraries (MKL, ACML, GOTO, etc.)
  ○ Compiler directives (OpenMP), as in the sketch below
  ○ Good for desktops and shared-memory machines
● Explicit parallelization | work is required!
  ○ You specify what should be done on which CPU
  ○ The solution for distributed clusters (shared nothing!)
● Hybrid parallelization | work is required!
  ○ A mix of implicit and explicit parallelization
    ■ Vectorization and parallel CPU instructions
  ○ Good for accelerators (CUDA, OpenCL, etc.)
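For illustration only (not part of the original slides): this is roughly what "compiler directives (OpenMP)" looks like in practice; a single directive asks the compiler to split one loop over the threads of a shared-memory node.

/* Hypothetical sketch: implicit parallelization with an OpenMP directive. */
#include <omp.h>

void scale(double *a, int n, double factor)
{
    #pragma omp parallel for      /* iterations are divided among threads */
    for (int i = 0; i < n; i++)
        a[i] *= factor;
}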

Page 10:

Distributed Memory Model

[Figure: two processes connected by a network; each process holds its own array A(10). Same name, different variables!]

Page 11:

In Practice on a Cluster

● The scheduler provides a list of worker node names
● The job spawns processes on each worker node
  ○ How to manage all these processes?
● The processes must communicate with each other
  ○ How to manage the communications? By using "sockets" each time? No!

Page 12:

Solution - MPI, or Message Passing Interface!

● A model for the distributed memory approach
  ○ Each process is identified by a unique integer
  ○ The programmer manages memory by placing data in a particular process
  ○ The programmer sends data between processes (point-to-point communications)
  ○ The programmer performs collective operations on sets of processes
● MPI is a specification for a standardized library: subroutines linked with your code
  ○ History: MPI-1 (1994), MPI-2 (1997), MPI-3 (2012)

Page 13:

Different MPI Implementations, Different MPI Modules

● OpenMPI, MVAPICH2, Intel MPI, …
  ○ Each comes with a different compilation wrapper
    ■ Compilation arguments may differ
  ○ Execution arguments may also differ
    ■ mpiexec or mpirun, -n or -np
● When working on a cluster, please use the provided MVAPICH2 or OpenMPI libraries: they have been compiled with InfiniBand support (20-40 Gb/s).
● The MPI library must match the compiler used (Intel, PGI, or GCC), both at compile time and at run time.
  ○ {GNU, Intel, PGI} × {OpenMPI, MVAPICH2}

Page 14:

A First Code

Page 15:

Basic Features of an MPI Program

● Include the basic definitions
  ○ C: #include <mpi.h>
  ○ Fortran: USE mpi (or INCLUDE 'mpif.h')
● Initialize the MPI environment
● Get information about the processes
● Send information between two specific processes (point-to-point communications)
● Send information between groups of processes (collective communications)
● Terminate the MPI environment

Page 16:

Six Basic MPI Functions

● You need to know these six functions:
  ○ MPI_Init: initialize the MPI environment
  ○ MPI_Comm_size: number of processes in a group
  ○ MPI_Comm_rank: unique integer value identifying each process in a group (0 <= rank < size)
  ○ MPI_Send: send data to another process
  ○ MPI_Recv: receive data from another process
  ○ MPI_Finalize: close the MPI environment
● MPI_COMM_WORLD is the default communicator (defined in mpi.h); it refers to the group of all processes in the job
● Each statement executes independently in each process

Page 17:

Example: A Smiley from N Processors

● smiley.c

#include <stdio.h>
#include <mpi.h>

int main (int argc, char * argv[])
{
    int rank, size;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    printf( "%3d/%-3d :-D\n", rank, size );

    MPI_Finalize();
    return 0;
}

● smiley.f90

PROGRAM smiley
   USE mpi
   IMPLICIT NONE

   INTEGER ierr, rank, size

   CALL MPI_Init(ierr)

   CALL MPI_Comm_rank(MPI_COMM_WORLD, &
                      rank, ierr)
   CALL MPI_Comm_size(MPI_COMM_WORLD, &
                      size, ierr)

   WRITE(*,*) rank, '/', size, ' :-D'

   CALL MPI_Finalize(ierr)
END PROGRAM smiley

Page 18:

Compiling your MPI Code

● Not defined by the standard
● More or less similar for all implementations:
  ○ You need to specify the include directory and the MPI library
  ○ But usually a compiler wrapper (mpicc, mpif90) does it for you automatically

● On the Guillimin cluster using iomkl toolchain (Intel + OpenMPI + MKL):

module load iomkl/2015b

mpicc smiley.c -o smiley

mpif90 smiley.f90 -o smiley

Page 19:

Running your MPI Code

● Not defined by the standard
● A launcher program (mpirun, mpiexec, mpirun_rsh, ...) is used to start your MPI program
● The particular choice of launcher depends on the MPI implementation and on the machine used
● A hosts file can be used to specify on which nodes to run the MPI processes (--hostfile nodes.txt); by default the processes run on localhost, or on the host file provided by the batch system

● On a worker node of Guillimin:

mpiexec -n 4 ./smiley

Page 20:

Exercises - Part 1

1. If not done already, log in to Guillimin:
   ssh class##@guillimin.hpc.mcgill.ca
2. Check for loaded software modules:
   module list
3. See all available modules:
   module av
4. Load the necessary modules:
   module load iomkl/2015b
5. Check the loaded modules again
6. Verify that you have access to the correct MPI package:
   which mpicc   # which mpif90
   /software/CentOS-6/eb/software/Toolchain/iccifort/2015.3.187-GNU-4.9.3-2.25/OpenMPI/1.8.8/bin/mpicc

Page 21:

Exercises - Part 2

1. Make sure to get the workshop files
   a. See page 4: from /software or from GitHub
   cd ~/cq-formation-intro-mpi
2. Compile the code:
   mpicc smiley.c -o smiley
   mpif90 smiley.f90 -o smiley
3. Edit smiley.pbs: reserve 4 processors (ppn=4) for 5 minutes (00:05:00). Edit:
   mpiexec -n 4 ./smiley > smiley.out

Page 22:

Exercises - Part 3

1. Verify your smiley.pbs:

#!/bin/bash

#PBS -l nodes=1:ppn=4

#PBS -l walltime=00:05:00

#PBS -N smiley

cd $PBS_O_WORKDIR

module load iomkl/2015b

mpiexec -n 4 ./smiley > smiley.out

2. Submit your job:
   qsub smiley.pbs
3. Check the job status:
   qstat -u $USER
4. Check the output (smiley.out)

Page 23:

Exercises - Part 4

● Copy smiley.{c,f90} to smileys.{c,f90}
● Modify smileys.c or smileys.f90 such that each process prints a different smiley (a possible sketch follows this list):
  ○ If rank is 0, print :-|
  ○ If rank is 1, print :-)
  ○ If rank is 2, print :-D
  ○ If rank is 3, print :-P
● Compile the new code
● Copy smiley.pbs to smileys.pbs and modify the copied version accordingly
  ○ Submit a new job and check the output
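One possible sketch of smileys.c (an assumption, not the official solution file): index an array of smiley strings by the rank.

/* Hypothetical smileys.c: each rank prints its own smiley. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    const char *smileys[] = { ":-|", ":-)", ":-D", ":-P" };

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    /* Ranks 0 to 3 get their own smiley; any extra rank reuses the last one. */
    printf( "%3d/%-3d %s\n", rank, size, smileys[rank < 4 ? rank : 3] );

    MPI_Finalize();
    return 0;
}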

Page 24:

Another Example - hello.{c,f90}

#include <math.h>
// [...]

float a, b;

if (rank == 0) {
    a = sqrt(2.0);
    b = 0.0;
}
if (rank == 1) {
    a = 0.0;
    b = sqrt(3.0);
}

printf("On proc %d: a, b = \t%f\t%f\n",
       rank, a, b);

Page 25:

MPI Data Types

Portable data types for heterogeneous compute nodes

Page 26:

MPI Basic Data Types (C)

MPI Data type          C Data type
MPI_CHAR               char
MPI_SHORT              short
MPI_INT                int
MPI_LONG               long int
MPI_UNSIGNED_CHAR      unsigned char
MPI_UNSIGNED_SHORT     unsigned short
MPI_UNSIGNED           unsigned int
MPI_UNSIGNED_LONG      unsigned long int
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_LONG_DOUBLE        long double

Page 27:

MPI Basic Datatypes (Fortran)

MPI Data type            Fortran Data type
MPI_INTEGER              INTEGER
MPI_INTEGER8             INTEGER(selected_int_kind(18))
MPI_REAL                 REAL
MPI_DOUBLE_PRECISION     DOUBLE PRECISION
MPI_COMPLEX              COMPLEX
MPI_DOUBLE_COMPLEX       DOUBLE COMPLEX
MPI_LOGICAL              LOGICAL
MPI_CHARACTER            CHARACTER(1)

Page 28:

MPI Advanced Datatypes

MPI Data type    C or Fortran
MPI_PACKED       Data packed with MPI_Pack
MPI_BYTE         8 binary digits

Page 29:

Point-to-point Communications

qsub -I -l nodes=1:ppn=2 -l walltime=5:00:00
module load iomkl/2015b

Page 30:

Six Basic MPI Functions

● MPI_Init: initialize the MPI environment
● MPI_Comm_size: number of processes in a group
● MPI_Comm_rank: unique integer value identifying each process in a group (0 <= rank < size)
● MPI_Send: send data to another process
● MPI_Recv: receive data from another process
● MPI_Finalize: close the MPI environment

Page 31:

MPI_Send / MPI_Recv

● Passing a message between two different MPI processes (point-to-point communication)
● If one process sends, another must initiate the matching receive
● The exchanged data types are predefined for portability
● MPI_Send / MPI_Recv are blocking! (There are also non-blocking versions)

Page 32:

MPI: Sending a Message

C:       MPI_Send(&data, count, data_type, dest, tag, comm)
Fortran: MPI_Send(data, count, data_type, dest, tag, comm, ierr)

● data: variable to send
● count: number of data elements to send
● data_type: type of the data to send
● dest: rank of the receiving process
● tag: the label of the message
● comm: communicator - the set of involved processes
● ierr: error code (return value in C)

Page 33:

MPI: Receiving a Message

C:       MPI_Recv(&data, count, data_type, source, tag, comm, &status)
Fortran: MPI_Recv(data, count, data_type, source, tag, comm, status, ierr)

● source: rank of the sending process (or can be set to MPI_ANY_SOURCE)
● tag: must match the label used by the sender (or can be set to MPI_ANY_TAG)
● status: a C structure (MPI_Status) or an integer array with information in case of an error (source, tag, actual number of bytes received)
● MPI_Send and MPI_Recv are blocking!
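For illustration only (not from the slides): when the receiver uses MPI_ANY_SOURCE or MPI_ANY_TAG, the status object reports what actually arrived, and MPI_Get_count converts the received size into a number of elements.

// Fragment (assumes MPI_Init etc. as in the other examples).
MPI_Status status;
int data[100], nreceived;

MPI_Recv( data, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
          MPI_COMM_WORLD, &status );

MPI_Get_count( &status, MPI_INT, &nreceived );  // elements actually received
printf( "Got %d ints from rank %d with tag %d\n",
        nreceived, status.MPI_SOURCE, status.MPI_TAG );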

Page 34:

MPI_Send / MPI_Recv - Example 1

// examples/sendRecvEx1.c
int rank, size, buffer = -1, tag = 10;
MPI_Status status;

MPI_Init( &argc, &argv );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
MPI_Comm_size( MPI_COMM_WORLD, &size );

if (size >= 2 && rank == 0) {
    buffer = 33;
    MPI_Send( &buffer, 1, MPI_INT, 1, tag, MPI_COMM_WORLD );
}
if (size >= 2 && rank == 1) {
    MPI_Recv( &buffer, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status );
    printf("Rank %d\tbuffer= %d\n", rank, buffer);
    if (buffer != 33) printf("fail\n");
}

MPI_Finalize();

Page 35:

Exercise: Sending a Matrix

● Goal: send a 4x4 matrix from process 0 to process 1
● Edit the file send_matrix.{c,f90}
● Compile your modified code
● Run it with 2 processes (a possible sketch of the calls involved follows)
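A minimal sketch of the calls involved, assuming a contiguous 4x4 int matrix (the actual send_matrix.{c,f90} file may differ):

// Fragment: send a 4x4 matrix from rank 0 to rank 1.
int matrix[4][4];        // contiguous in memory: 16 ints
MPI_Status status;

if (rank == 0) {
    // ... fill matrix ...
    MPI_Send( &matrix[0][0], 16, MPI_INT, 1, 0, MPI_COMM_WORLD );
} else if (rank == 1) {
    MPI_Recv( &matrix[0][0], 16, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
}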

Page 36:

Synchronization Between Processes

Page 37:

Processes Waiting for Communications

● When using blocking communications, an unbalanced workload may leave processes waiting
● Worst case: everyone is waiting for everyone else
  ○ Cases of deadlock must be avoided
  ○ Some bugs may lead to non-matching send/receive pairs

Page 38:

MPI_Send / MPI_Recv - Example 2

/////////////////////
// Should not work //
// Why?            //
/////////////////////

if (size >= 2 && rank == 0) {
    MPI_Send( &buffer1, 1, MPI_INT, 1, 10, MPI_COMM_WORLD );
    MPI_Recv( &buffer2, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status );
}

if (size >= 2 && rank == 1) {
    MPI_Send( &buffer2, 1, MPI_INT, 0, 20, MPI_COMM_WORLD );
    MPI_Recv( &buffer1, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status );
}

Page 39:

MPI_Send / MPI_Recv - Example 2, Solution A

//////////////////////////////
// Exchange Send/Recv order //
// on one processor         //
//////////////////////////////

if (size >= 2 && rank == 0) {
    MPI_Send( &buffer1, 1, MPI_INT, 1, 10, MPI_COMM_WORLD );
    MPI_Recv( &buffer2, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status );
}

if (size >= 2 && rank == 1) {
    MPI_Recv( &buffer1, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status );
    MPI_Send( &buffer2, 1, MPI_INT, 0, 20, MPI_COMM_WORLD );
}

Page 40:

MPI_Send / MPI_Recv - Example 2, Solution B

//////////////////////////////////
// Use non-blocking send: Isend //
//////////////////////////////////

MPI_Request request;

if (size >= 2 && rank == 0) {
    MPI_Isend( &buffer1, 1, MPI_INT, 1, 10, MPI_COMM_WORLD, &request );
    MPI_Recv ( &buffer2, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status );
}

if (size >= 2 && rank == 1) {
    MPI_Isend( &buffer2, 1, MPI_INT, 0, 20, MPI_COMM_WORLD, &request );
    MPI_Recv ( &buffer1, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status );
}

MPI_Wait( &request, &status );  // Wait until the send is complete

Page 41:

Non-blocking Communications

● MPI_Isend() starts the transfer and returns control immediately
  ○ Advantage: between the start and the end of the transfer, the code can do other things (see the sketch below)
  ○ Warning: during the transfer, the buffer cannot be reused or deallocated
  ○ MPI_Request: an additional structure for the communication
● You need to check for completion!
  ○ MPI_Wait(&request, &status): the code waits for the completion of the transfer
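A sketch of the pattern described above (buffer, count, dest and tag are placeholder names, not from the slides):

// Fragment: overlap computation with a non-blocking send.
MPI_Request request;
MPI_Status  status;

MPI_Isend( buffer, count, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &request );

// ... do other work that does not touch 'buffer' ...

MPI_Wait( &request, &status );   // after this, 'buffer' may be reused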

Page 42:

Exercise: Exchanging Vectors

● Goal: exchange a small vector of data between two processes
● Edit the file exchange.{c,f90}
● Compile your modified code
● Run it with 2 processes (one possible pattern is sketched below)
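One possible pattern, reusing the non-blocking send from the previous slides (the vector names and length N are illustrative, not the contents of exchange.{c,f90}):

// Fragment: ranks 0 and 1 exchange a small vector without deadlocking.
double mine[N], theirs[N];
int other = 1 - rank;              // partner rank: 0 <-> 1
MPI_Request request;
MPI_Status  status;

MPI_Isend( mine,   N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &request );
MPI_Recv ( theirs, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &status );
MPI_Wait ( &request, &status );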

Page 43:

Collective Communications

One step beyond

Page 44:

Collective Communications

● Involve ALL processes in the communicator (MPI_COMM_WORLD)
● MPI_Bcast: the same data is sent from the "root" process to all the others
● MPI_Reduce: the "root" process collects data from the others and performs an operation (min, max, add, multiply, etc.)
● MPI_Scatter: distributes variables from the "root" process to each of the others
● MPI_Gather: collects variables from all processes to the "root" one

Page 45:

MPI_Bcast - Example

int rank, size, root = 0;
float a[2];
// …

if (rank == root) {
    a[0] = 2.0f;
    a[1] = 4.0f;
}

MPI_Bcast( a, 2, MPI_FLOAT, root, MPI_COMM_WORLD );

// Print result

[Figure: before the broadcast, only P0 holds a = {2, 4}; after MPI_Bcast, P0, P1, P2 and P3 all hold a = {2, 4}.]

Page 46:

MPI_Reduce - Example

int rank, size, root = 0;
float a[2], res[2];
// …
a[0] = 2 * rank + 0;
a[1] = 2 * rank + 1;

MPI_Reduce( a, res, 2, MPI_FLOAT, MPI_SUM, root, MPI_COMM_WORLD );

if (rank == root) {
    // Print result
}

[Figure: each process Pi holds a pair (ai, bi); after MPI_Reduce with MPI_SUM, the root P0 holds Sa = a1+a2+a3+a4 and Sb = b1+b2+b3+b4.]

Page 47:

Exercise: Approximation of Pi

● Edit pi_collect.{c,f90}
  ○ Use MPI_Bcast to broadcast the number of iterations
  ○ Use MPI_Reduce to compute the final sum (a possible sketch follows)
● Run:

  mpiexec -n 2 ./pi_collect < pi_collect.in
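A possible sketch of the pattern (the midpoint-rule partial sum is an assumption; the actual pi_collect.{c,f90} may compute it differently):

// Fragment: broadcast the iteration count, reduce the partial sums.
int i, n;                          // number of iterations
double x, local_sum = 0.0, pi;

if (rank == 0) {
    // ... read n from the input ...
}
MPI_Bcast( &n, 1, MPI_INT, 0, MPI_COMM_WORLD );

// Each rank handles iterations rank, rank+size, rank+2*size, ...
for (i = rank; i < n; i += size) {
    x = (i + 0.5) / n;
    local_sum += 4.0 / (1.0 + x * x);
}
local_sum /= n;

MPI_Reduce( &local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );
if (rank == 0) {
    printf( "pi is approximately %.16f\n", pi );
}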

Page 48:

Exercise: MPI_Wtime

● Edit pi_collect.{c,f90} in order to measure the elapsed time for computing Pi
  ○ Only process 0 will compute the elapsed time

double t1, t2;
if (rank == 0) { t1 = MPI_Wtime(); }
// Computing Pi…
if (rank == 0) {
    t2 = MPI_Wtime();
    printf("Time = %.16f sec\n", t2 - t1);
}

Page 49:

MPI_Scatter and MPI_Gather

● MPI_Scatter: one-to-all communication - different data is sent from the root process to all others in the communicator, following the rank order
● MPI_Gather: data is collected by the root process; it is the opposite of Scatter

[Figure: Scatter splits a1,...,a8 held by P0 into chunks of two: P0 gets a1,a2; P1 gets a3,a4; P2 gets a5,a6; P3 gets a7,a8. Gather is the reverse.]

Page 50:

Example: MPI_Scatter, MPI_Gather

// Scatter
int i, sc = 2;  // sendcount
float a[16], b[2];

if (rank == root) {
    for (i = 0; i < 16; i++) {
        a[i] = i;
    }
}

MPI_Scatter(a, sc, MPI_FLOAT, b, sc, MPI_FLOAT, root, MPI_COMM_WORLD);

// Print rank, b[0] and b[1]

// Gather
int i, sc = 2;  // sendcount
float a[16], b[2];

b[0] = rank;
b[1] = rank;

MPI_Gather(b, sc, MPI_FLOAT, a, sc, MPI_FLOAT, root, MPI_COMM_WORLD);

if (rank == root) {
    // Print a[0] through a[15]
}

Page 51:

Exercise: Dot Product of Two Vectors

● Edit dot_prod.{c,f90}
  ○ Dot product = A^T B = a1*b1 + a2*b2 + ...
  ○ 2*8400 integers: dp = 1*8400 + 2*8399 + ... + 8400*1
  ○ The root process initializes the vectors A and B
  ○ Use MPI_Scatter to split A and B
  ○ Use MPI_Reduce to compute the final result (a possible sketch follows)
● (Optional) Replace MPI_Reduce with MPI_Gather and let the root process compute the final result
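A possible sketch, under a few assumptions (root = 0, the number of processes divides 8400, and long is used for the sum, which exceeds the 32-bit integer range); the actual dot_prod.{c,f90} may be organized differently:

// Fragment: scatter both vectors, then reduce the partial dot products.
#define NTOTAL 8400
int a[NTOTAL], b[NTOTAL];           // significant only on the root
int la[NTOTAL], lb[NTOTAL];         // local chunks (only the first 'chunk' entries are used)
int i, root = 0;
int chunk = NTOTAL / size;          // assumes size divides 8400
long local_dp = 0, dp;

if (rank == root) {
    for (i = 0; i < NTOTAL; i++) {
        a[i] = i + 1;               // 1, 2, ..., 8400
        b[i] = NTOTAL - i;          // 8400, 8399, ..., 1
    }
}

MPI_Scatter( a, chunk, MPI_INT, la, chunk, MPI_INT, root, MPI_COMM_WORLD );
MPI_Scatter( b, chunk, MPI_INT, lb, chunk, MPI_INT, root, MPI_COMM_WORLD );

for (i = 0; i < chunk; i++)
    local_dp += (long) la[i] * lb[i];

MPI_Reduce( &local_dp, &dp, 1, MPI_LONG, MPI_SUM, root, MPI_COMM_WORLD );
// On the root, dp now holds 1*8400 + 2*8399 + ... + 8400*1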

Page 52:


MPI for Python (mpi4py)

Page 53:

MPI for Python

● The mpi4py package provides bindings from Python to MPI (the Message Passing Interface).
● The MPI functions are then available in Python, but with some simplifications:
  ○ MPI_Init() and MPI_Finalize() are called automatically
  ○ The bindings can auto-detect many values that need to be specified as explicit parameters in the C and Fortran bindings.
  ○ Example:
      dest = 1; tag = 54321;
      MPI_Send( &matrix, count, MPI_INT, dest, tag, MPI_COMM_WORLD )
    becomes
      MPI.COMM_WORLD.Send(matrix, dest=1, tag=54321)

Page 54:

MPI for Python

● Import with: from mpi4py import MPI
● Then often use: comm = MPI.COMM_WORLD
● Two variations exist for most functions:
  a. all lowercase, e.g. comm.recv()
     ■ works on general Python objects, using pickle (can be slow)
     ■ the received object (value) is returned:
       matrix = comm.recv(source=0, tag=MPI.ANY_TAG)
  b. capitalized, e.g. comm.Recv()
     ■ works fast on numpy arrays and other buffers
     ■ the received object is given as a parameter:
       comm.Recv(matrix, source=0, tag=MPI.ANY_TAG)
     ■ Specify [matrix, MPI.INT], or [data, count, MPI.INT], if auto-detection fails.

Page 55:

Exercise: smileys

● Basic example: python/smiley.py (first do cd python):

  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()
  size = comm.Get_size()

  print("%3d/%-3d :-D" % (rank, size))

● Edit smiley.pbs, which contains:

  mpiexec -n 4 python smiley.py > smiley.out

  to run for a walltime of 5 minutes on 4 cores, then submit using qsub smiley.pbs

Page 56:

Exercise: smileys

● Copy smiley.py to smileys.py
● Modify smileys.py such that each process prints a different smiley:
  a. If rank is 0, print :-|
  b. If rank is 1, print :-)
  c. If rank is 2, print :-D
  d. If rank is 3, print :-P
● Run the new code (there is no compile step for Python)
● Copy smiley.pbs to smileys.pbs and modify the copied version accordingly
  a. Submit a new job and check the output

Page 57:

Exercise: dot product

● Edit dot_product.py
  a. Dot product = A^T B = a1*b1 + a2*b2 + ...
  b. 2*8400 integers: dp = 1*8400 + 2*8399 + ... + 8400*1
  c. The root process initializes the vectors A and B
  d. Use comm.Scatter(sendbuf, recvbuf, root=...) to split A and B
  e. Use comm.reduce to compute the final result
● (Optional) Replace comm.reduce with comm.Gather and let the root process compute the final result

Page 58:

Conclusion

Page 59:

MPI routines we know ...

● MPI environment
  ○ MPI_Init, MPI_Finalize
● Information on processes
  ○ MPI_Comm_rank, MPI_Comm_size
● Point-to-point communications
  ○ MPI_Send, MPI_Recv
  ○ MPI_Isend, MPI_Wait
● Collective communications
  ○ MPI_Bcast, MPI_Reduce
  ○ MPI_Scatter, MPI_Gather

Page 60:

Further readings

● The standard itself, news, development:
  ○ http://www.mpi-forum.org
● Online reference book:
  ○ http://www.netlib.org/utk/papers/mpi-book/mpi-book.html
● Calcul Québec's wiki:
  ○ https://wiki.calculquebec.ca/w/MPI/en
● Detailed MPI tutorials:
  ○ http://people.ds.cam.ac.uk/nmm1/MPI/
  ○ http://www.mcs.anl.gov/research/projects/mpi/tutorial/
