MPI+X – The Right Programming Paradigm for Exascale?

Transcript of "MPI+X – The Right Programming Paradigm for Exascale?" (43 pages)

Page 1:

MPI+X – The Right Programming Paradigm for Exascale?

Dhabaleswar K. (DK) Panda

The Ohio State University

E-mail: [email protected]

http://www.cse.ohio-state.edu/~panda

Talk at HP-CAST 27

Page 2:

High-End Computing (HEC): ExaFlop & ExaByte

• ExaFlop & HPC: 100-200 PFlops in 2016-2018; 1 EFlops in 2023-2024?

• ExaByte & BigData: 10K-20K EBytes in 2016-2018; 40K EBytes in 2020?

Page 3:

Trends for Commodity Computing Clusters in the Top 500 List (http://www.top500.org)

[Chart: number of clusters and percentage of clusters in the Top500 over time; clusters now account for roughly 85% of Top500 systems.]

Page 4:

Drivers of Modern HPC Cluster Architectures

Tianhe-2, Titan, Stampede, Tianhe-1A

• Multi-core/many-core technologies

• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)

• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSD

• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)

Figure callouts: Multi-core Processors; Accelerators / Coprocessors (high compute density, high performance/watt, >1 TFlop DP on a chip); High Performance Interconnects – InfiniBand (<1 usec latency, 100 Gbps bandwidth); SSD, NVMe-SSD, NVRAM

Page 5:

Designing Communication Libraries for Multi-Petaflop and Exaflop Systems: Challenges

[Layered diagram]

Application Kernels/Applications

Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.

Middleware – Communication Library or Runtime for Programming Models: Point-to-point Communication, Collective Communication, Energy-Awareness, Synchronization and Locks, I/O and File Systems, Fault Tolerance

Networking Technologies (InfiniBand, 40/100GigE, Aries, and OmniPath) | Multi/Many-core Architectures | Accelerators (NVIDIA and MIC)

Co-Design Opportunities and Challenges across Various Layers: Performance, Scalability, Fault-Resilience

Page 6:

Exascale Programming Models

Next-Generation Programming Models: MPI+X, X = ? (OpenMP, OpenACC, CUDA, PGAS, Tasks, ...)

Highly-Threaded Systems (KNL) | Irregular Communications | Heterogeneous Computing (GPU)

• The community believes the exascale programming model will be MPI+X

• But what is X?

– Can it be just OpenMP?

• Many different environments and systems are emerging

– Different `X' will satisfy the respective needs

Page 7:

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Scalable job start-up

• Scalable Collective communication

– Offload

– Non-blocking

– Topology-aware

• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)

– Multiple end-points per node

• Support for efficient multi-threading (OpenMP)

• Integrated Support for GPGPUs and Accelerators (CUDA)

• Fault-tolerance/resiliency

• QoS support for communication and I/O

• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, CAF, …)

• Virtualization

• Energy-Awareness

MPI+X Programming model: Broad Challenges at Exascale

Page 8:

• Extreme Low Memory Footprint

– Memory per core continues to decrease

• D-L-A Framework

– Discover

• Overall network topology (fat-tree, 3D, …), Network topology for processes for a given job

• Node architecture, Health of network and node

– Learn

• Impact on performance and scalability

• Potential for failure

– Adapt

• Internal protocols and algorithms

• Process mapping

• Fault-tolerance solutions

– Low overhead techniques while delivering performance, scalability and fault-tolerance

Additional Challenges for Designing Exascale Software Libraries

Page 9:

Overview of the MVAPICH2 Project

• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002

– MVAPICH2-X (MPI + PGAS), Available since 2011

– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014

– Support for Virtualization (MVAPICH2-Virt), Available since 2015

– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015

– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015

– Used by more than 2,690 organizations in 83 countries

– More than 402,000 (> 0.4 million) downloads from the OSU site directly

– Empowering many TOP500 clusters (Nov ‘16 ranking)

• 1st ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China

• 13th ranked 241,108-core cluster (Pleiades) at NASA

• 17th ranked 519,640-core cluster (Stampede) at TACC

• 40th ranked 76,032-core cluster (Tsubame 2.5) at Tokyo Institute of Technology and many others

– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)

– http://mvapich.cse.ohio-state.edu

• Empowering Top500 systems for over a decade

– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) -> Sunway TaihuLight at NSC, Wuxi, China (1st in Nov '16, 10,649,640 cores, 93 PFlops)

Page 10:

• Hybrid MPI+OpenMP Models for Highly-threaded Systems

• Hybrid MPI+PGAS Models for Irregular Applications

• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing

Outline

Page 11:

• Systems like KNL

• MPI+OpenMP is seen as the best fit

– 1 MPI process per socket for Multi-core

– 4-8 MPI processes per KNL

– Each MPI process will launch OpenMP threads

• However, current MPI runtimes are not “efficiently” handling the hybrid

– Most applications use Funneled mode: only the MPI processes perform communication (see the sketch below)

– Communication phases are the bottleneck

• Multi-endpoint based designs

– Transparently use threads inside MPI runtime

– Increase the concurrency

Highly Threaded Systems
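To make the funneled-mode pattern above concrete, here is a minimal hedged sketch (the loop body and counts are illustrative, not from the talk) of a hybrid MPI+OpenMP code in which all threads compute but only the main thread communicates:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Funneled mode: threads compute, only the main thread calls MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0, global = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0;                 /* illustrative compute phase */

    /* The communication phase is serialized on one thread per process --
       this is the bottleneck that the multi-endpoint designs target */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0) printf("global = %f\n", global);
    MPI_Finalize();
    return 0;
}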

Page 12:

• MPI-4 will enhance the thread support

– Endpoint proposal in the Forum

– Application threads will be able to efficiently perform communication

– Endpoint is the communication entity that maps to a thread

• Idea is to have multiple addressable communication entities within a single process

• No context switch between application and runtime => better performance

• OpenMP 4.5 is more powerful than just traditional data parallelism

– Supports task parallelism since OpenMP 3.0

– Supports heterogeneous computing with accelerator targets since OpenMP 4.0

– Supports explicit SIMD and thread affinity pragmas since OpenMP 4.0 (illustrated below)

MPI and OpenMP
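As a hedged illustration of the OpenMP features listed above (tasking since 3.0, accelerator targets since 4.0, explicit SIMD since 4.0), the fragment below combines the corresponding pragmas; it is a generic sketch assuming an OpenMP 4.5-capable compiler, not MVAPICH2 code:

#include <omp.h>

/* Accelerator offload (target) combined with explicit SIMD.
   Thread affinity can additionally be requested with the proc_bind
   clause or the OMP_PROC_BIND environment variable. */
void axpy(int n, float a, float *x, float *y)
{
    #pragma omp target teams distribute parallel for simd map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Task parallelism (available since OpenMP 3.0) */
void tree_work(int depth)
{
    if (depth == 0) return;
    #pragma omp task
    tree_work(depth - 1);
    #pragma omp task
    tree_work(depth - 1);
    #pragma omp taskwait
}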

Page 13:

• Lock-free Communication

– Threads have their own resources

• Dynamically adapt the number of threads

– Avoid resource contention

– Depends on application pattern and system performance

• Both intra- and inter-node communication

– Threads boost both channels

• New MEP-Aware collectives

• Applicable to the endpoint proposal in MPI-4

MEP-based design: MVAPICH2 Approach

[Design diagram: an MPI/OpenMP program issues MPI collectives (MPI_Alltoallv, MPI_Allgatherv, ... – transparent support) and MPI point-to-point calls (MPI_Isend, MPI_Irecv, MPI_Waitall, ... – OpenMP pragma needed) to a Multi-endpoint Runtime consisting of an Endpoint Controller, lock-free communication components (request handling, progress engine, communication resources management), and optimized point-to-point and collective algorithms.]

M. Luo, X. Lu, K. Hamidouche, K. Kandalla and D. K. Panda, Initial Study of Multi-Endpoint Runtime for MPI+OpenMP Hybrid Applications on Multi-Core Systems. International Symposium on Principles and Practice of Parallel Programming (PPoPP '14).

Page 14:

Performance Benefits: OSU Micro-Benchmarks (OMB) level

[Charts: latency (us) vs. message size for three benchmarks – Multi-pairs, Bcast, and Alltoallv – comparing the original multi-threaded runtime, the proposed Multi-Endpoint runtime, and a process-based runtime.]

• Reduces the latency from 40us to 1.85 us (21X)

• Achieves the same latency as the process-based runtime

• 40% improvement on latency for Bcast on 4,096 cores

• 30% improvement on latency for Alltoall on 4,096 cores

Page 15:

Performance Benefits: Application Kernel level

[Charts: NAS benchmark (CG, LU, MG) time (s) with 256 nodes (4,096 cores), CLASS E, and P3DFFT execution time at 2K and 4K cores, comparing Orig and MEP.]

• 6.3% improvement for MG, 11.7% improvement for CG, and 12.6% improvement for LU on 4,096 cores.

• With P3DFFT, we are able to observe a 30% improvement in communication time and 13.5% improvement in the total execution time.

Page 16:

• On-load approach

– Takes advantage of the idle cores

– Dynamically configurable

– Takes advantage of highly multithreaded cores

– Takes advantage of MCDRAM of KNL processors

• Applicable to other programming models such as PGAS, Task-based, etc.

• Provides portability, performance, and applicability to the runtime as well as to applications in a transparent manner

Enhanced Designs for KNL: MVAPICH2 Approach
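The on-load runtime uses MCDRAM internally; purely as an illustration of how a buffer can be placed in KNL MCDRAM at user level, here is a hedged sketch using the memkind library's hbwmalloc interface (the helper name is hypothetical, and this is not how MVAPICH2 implements it internally):

#include <hbwmalloc.h>   /* high-bandwidth (MCDRAM) allocator from memkind */
#include <stdlib.h>

/* Hypothetical helper: prefer MCDRAM, fall back to DDR if unavailable */
void *alloc_comm_buffer(size_t bytes)
{
    if (hbw_check_available() == 0) {   /* 0 => high-bandwidth memory is present */
        void *p = hbw_malloc(bytes);
        if (p) return p;
    }
    return malloc(bytes);               /* caller must free with the matching allocator */
}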

Page 17:

Performance Benefits of the Enhanced Designs

• New designs to exploit high concurrency and MCDRAM of KNL

• Significant improvements for large message sizes

• Benefits seen in varying message size as well as varying MPI processes

[Charts: Very Large Message Bi-directional Bandwidth (MB/s) for 2M-64M byte messages, Intra-node Broadcast with a 64MB message (latency vs. 4-16 processes), and 16-process Intra-node All-to-All (latency vs. 1M-32M byte messages), each comparing MVAPICH2 and MVAPICH2-Optimized; improvements of 27%, 17.2%, and 52% are highlighted.]

Page 18:

Performance Benefits of the Enhanced Designs

[Charts: (a) Multi-Bandwidth (MB/s) using 32 MPI processes for 1M-64M byte messages, comparing MV2_Def_DRAM, MV2_Def_MCDRAM, MV2_Opt_DRAM, and MV2_Opt_MCDRAM, with a 30% improvement highlighted; (b) CNTK MLP training time (s) using MNIST (batch size 64) for MPI-process : OMP-thread configurations 4:268, 4:204, and 4:64, comparing MV2_Def_DRAM and MV2_Opt_DRAM, with a 15% improvement highlighted.]

• Benefits observed on the training time of a Multi-Layer Perceptron (MLP) model on the MNIST dataset using the CNTK Deep Learning framework

Enhanced Designs will be available in upcoming MVAPICH2 releases

Page 19:

• Hybrid MPI+OpenMP Models for Highly-threaded Systems

• Hybrid MPI+PGAS Models for Irregular Applications

• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing

Outline

Page 20:

Maturity of Runtimes and Application Requirements

• MPI has been the most popular model for a long time

- Available on every major machine

- Portability, performance and scaling

- Most parallel HPC code is designed using MPI

- Simplicity - structured and iterative communication patterns

• PGAS Models

- Increasing interest in community

- Simple shared memory abstractions and one-sided communication

- Easier to express irregular communication

• Need for hybrid MPI + PGAS

- Application can have kernels with different communication characteristics

- Porting only part of the applications to reduce programming effort

Page 21:

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS based on their communication characteristics (see the sketch below)

• Benefits:

– Best of Distributed Computing Model

– Best of Shared Memory Computing Model

[Diagram: an HPC Application composed of Kernel 1 (MPI), Kernel 2 (MPI), Kernel 3 (MPI), ..., Kernel N (MPI), with Kernel 2 and Kernel N re-written in PGAS.]
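A minimal hedged sketch of such a hybrid program (assuming a unified runtime such as MVAPICH2-X, and using OpenSHMEM 1.0-style initialization; the kernels shown are placeholders):

#include <mpi.h>
#include <shmem.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    start_pes(0);                        /* OpenSHMEM 1.0-style start-up */

    int me = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Kernel with regular, collective communication: keep it in MPI */
    long local = me, sum = 0;
    MPI_Allreduce(&local, &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    /* Kernel with irregular communication: one-sided OpenSHMEM put */
    static long mailbox = 0;             /* symmetric: same address on every PE */
    long value = sum + me;
    shmem_long_put(&mailbox, &value, 1, (me + 1) % npes);
    shmem_barrier_all();

    MPI_Finalize();
    return 0;
}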

Page 22:

MVAPICH2-X for Hybrid MPI + PGAS Applications

[Diagram: MPI, OpenSHMEM, UPC, CAF, UPC++ or Hybrid (MPI + PGAS) Applications issue OpenSHMEM Calls, MPI Calls, UPC Calls, CAF Calls, and UPC++ Calls into the Unified MVAPICH2-X Runtime, which runs over InfiniBand, RoCE, and iWARP.]

• Unified communication runtime for MPI, UPC, OpenSHMEM, CAF, UPC++ available with MVAPICH2-X 1.9 onwards! (since 2012)

– http://mvapich.cse.ohio-state.edu

• Feature Highlights

– Supports MPI(+OpenMP), OpenSHMEM, UPC, CAF, UPC++, MPI(+OpenMP) + OpenSHMEM, MPI(+OpenMP) + UPC

– MPI-3 compliant, OpenSHMEM v1.0 standard compliant, UPC v1.2 standard compliant (with initial support for UPC 1.3), CAF 2008 standard (OpenUH), UPC++

– Scalable inter-node and intra-node communication – point-to-point and collectives

Page 23:

OpenSHMEM Atomic Operations: Performance

[Chart: Time (us) for OpenSHMEM atomic operations fadd, finc, add, inc, cswap, and swap, comparing UH-SHMEM, MV2X-SHMEM, Scalable-SHMEM, and OMPI-SHMEM.]

• OSU OpenSHMEM micro-benchmarks (OMB v4.1)

• MV2-X SHMEM performs up to 40% better compared to UH-SHMEM
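For reference, the benchmarked operations correspond to OpenSHMEM calls of the following form; this is a hedged sketch of the calls only (the symmetric counter and peer choice are illustrative, and the timing loops of the OMB tests are omitted):

#include <shmem.h>
#include <stdio.h>

int main(void)
{
    start_pes(0);                        /* OpenSHMEM 1.0-style start-up */

    static int counter = 0;              /* symmetric target on every PE */
    int me   = shmem_my_pe();
    int peer = (me + 1) % shmem_n_pes();

    int old1 = shmem_int_fadd(&counter, 2, peer);     /* fadd: fetch-and-add        */
    int old2 = shmem_int_finc(&counter, peer);        /* finc: fetch-and-increment  */
    shmem_int_add(&counter, 5, peer);                 /* add  (non-fetching)        */
    shmem_int_inc(&counter, peer);                    /* inc  (non-fetching)        */
    int old3 = shmem_int_cswap(&counter, 9, 0, peer); /* cswap: compare-and-swap    */
    int old4 = shmem_int_swap(&counter, 42, peer);    /* swap: unconditional swap   */

    shmem_barrier_all();
    if (me == 0)
        printf("returned values: %d %d %d %d\n", old1, old2, old3, old4);
    return 0;
}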

Page 24:

UPC Collectives Performance

[Charts: time vs. message size for Broadcast, Scatter, Gather, and Exchange at 2,048 processes, comparing UPC-GASNet and UPC-OSU; improvements of 2X, 25X, 2X, and 35% are highlighted.]

J. Jose, K. Hamidouche, J. Zhang, A. Venkatesh, and D. K. Panda, Optimizing Collective Communication in UPC (HiPS'14, in association with IPDPS'14)

Page 25:

Performance Evaluations for CAF model

[Charts: Put and Get bandwidth (MB/s) vs. message size and Put and Get latency (ms) for GASNet-IBV, GASNet-MPI, and MV2X; NAS-CAF execution time (sec) for bt.D.256, cg.D.256, ep.D.256, ft.D.256, mg.D.256, and sp.D.256.]

• Micro-benchmark improvement (MV2X vs. GASNet-IBV, UH CAF test-suite)

– Put bandwidth: 3.5X improvement on 4KB; Put latency: reduced by 29% on 4B

• Application performance improvement (NAS-CAF one-sided implementation)

– Execution time reduced by 12% (SP.D.256) and 18% (BT.D.256)

J. Lin, K. Hamidouche, X. Lu, M. Li and D. K. Panda, High-performance Co-array Fortran support with MVAPICH2-X: Initial experience and evaluation, HIPS'15

Page 26:

UPC++ Collectives Performance

[Diagram: an MPI + {UPC++} application running on the UPC++ Runtime over GASNet Interfaces with a Network Conduit (MPI), contrasted with an MPI + {UPC++} application running on the UPC++ Runtime and MPI Interfaces over the MVAPICH2-X Unified Communication Runtime (UCR).]

• Full and native support for hybrid MPI + UPC++ applications

• Better performance compared to IBV and MPI conduits

• OSU Micro-benchmarks (OMB) support for UPC++

• Available since MVAPICH2-X 2.2RC1

[Chart: Inter-node Broadcast time (us) vs. message size on 64 nodes (1 process per node), comparing GASNet_MPI, GASNET_IBV, and MV2-X; MV2-X is up to 14x faster.]

J. M. Hashmi, K. Hamidouche, and D. K. Panda, Enabling Performance Efficient Runtime Support for Hybrid MPI+UPC++ Programming Models, IEEE International Conference on High Performance Computing and Communications (HPCC 2016)

Page 27:

Application Level Performance with Graph500 and Sort

Graph500 Execution Time

[Chart: Graph500 execution time (s) at 4K, 8K, and 16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM) designs.]

• Performance of Hybrid (MPI+OpenSHMEM) Graph500 Design

• 8,192 processes:

- 2.4X improvement over MPI-CSR

- 7.6X improvement over MPI-Simple

• 16,384 processes:

- 1.5X improvement over MPI-CSR

- 13X improvement over MPI-Simple

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC'13), June 2013

Sort Execution Time

[Chart: Sort execution time (seconds) for input data / process counts of 500GB-512, 1TB-1K, 2TB-2K, and 4TB-4K, comparing MPI and Hybrid designs; 51% improvement highlighted.]

• Performance of Hybrid (MPI+OpenSHMEM) Sort Application

• 4,096 processes, 4 TB input size:

- MPI – 2408 sec; 0.16 TB/min

- Hybrid – 1172 sec; 0.36 TB/min

- 51% improvement over the MPI design

J. Jose, S. Potluri, H. Subramoni, X. Lu, K. Hamidouche, K. Schulz, H. Sundar and D. Panda, Designing Scalable Out-of-core Sorting with Hybrid MPI+PGAS Programming Models, PGAS'14

Page 28:

Accelerating MaTEx k-NN with Hybrid MPI and OpenSHMEM

[Charts: execution time for KDD (2.5GB) on 512 cores (9.0% improvement) and truncated KDD (30MB) on 256 cores (27.6% improvement).]

• MaTEx: MPI-based machine learning algorithm library

• k-NN: a popular supervised algorithm for classification

• Hybrid designs:

– Overlapped Data Flow; One-sided Data Transfer; Circular-buffer Structure

• Benchmark: KDD Cup 2010 (8,407,752 records, 2 classes, k=5)

• For the truncated KDD workload on 256 cores, execution time is reduced by 27.6%

• For the full KDD workload on 512 cores, execution time is reduced by 9.0%

J. Lin, K. Hamidouche, J. Zhang, X. Lu, A. Vishnu, D. Panda, Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM, OpenSHMEM 2015

Page 29:

• Hybrid MPI+OpenMP Models for Highly-threaded Systems

• Hybrid MPI+PGAS Models for Irregular Applications

• Hybrid MPI+GPGPUs and OpenSHMEM for Heterogeneous Computing

Outline

Page 30:

At Sender: MPI_Send(s_devbuf, size, …);

At Receiver: MPI_Recv(r_devbuf, size, …);

(data movement handled inside MVAPICH2)

• Standard MPI interfaces used for unified data movement

• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)

• Overlaps data movement from GPU with RDMA transfers

High Performance and High Productivity

GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
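Expanding the fragment above into a complete (hedged) example: with a CUDA-aware MPI library such as MVAPICH2-GPU, a cudaMalloc'ed pointer can be passed straight to MPI_Send/MPI_Recv (the buffer size and tag below are illustrative; run with at least two processes):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *devbuf;
    cudaMalloc((void **)&devbuf, n * sizeof(float));   /* GPU device memory */

    /* With a CUDA-aware MPI library, device pointers are passed directly;
       the library performs the data movement (pipelining / GDR) internally. */
    if (rank == 0)
        MPI_Send(devbuf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(devbuf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}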

Page 31:

CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases

• Support for MPI communication from NVIDIA GPU device memory

• High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)

• High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)

• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node

• Optimized and tuned collectives for GPU device buffers

• MPI datatype support for point-to-point and collective communication from GPU device buffers

• Unified memory

Page 32:

Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)

[Charts: GPU-GPU inter-node latency (us) for 0-2K byte messages, bi-directional bandwidth, and bandwidth (MB/s) for 1-4K byte messages, comparing MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR; callouts indicate 2.18 us small-message latency and improvements of roughly 10x-11x over MV2 without GDR and 2x-3x over MV2-GDR 2.0b.]

Platform: MVAPICH2-GDR 2.2, Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox ConnectX-4 EDR HCA, CUDA 8.0, Mellanox OFED 3.0 with GPU-Direct RDMA

Page 33:

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)

• HoomdBlue Version 1.0.5

• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

Application-Level Evaluation (HOOMD-blue)

[Charts: average time steps per second (TPS) vs. number of processes (4-32) for 64K-particle and 256K-particle runs, comparing MV2 and MV2+GDR; roughly 2X improvement in both cases.]

Page 34:

Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland

[Charts: normalized execution time vs. number of GPUs on the CSCS GPU cluster (16-96 GPUs) and the Wilkes GPU cluster (4-32 GPUs), comparing Default, Callback-based, and Event-based designs.]

• 2X improvement on 32 GPU nodes

• 30% improvement on 96 GPU nodes (8 GPUs/node)

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS'16

On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the Cosmo application

Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/

Page 35:

• GDR for small/medium message sizes

• Host-staging for large messages to avoid PCIe bottlenecks

• Hybrid design brings best of both

• 3.13 us Put latency for 4B (7X improvement) and 4.7 us latency for 4KB

• 9X improvement for Get latency of 4B

Exploiting GDR: OpenSHMEM: Inter-node Evaluation

[Charts: latency (us) vs. message size for small-message shmem_put D-D (1-4K bytes), large-message shmem_put D-D (8K-2M bytes), and small-message shmem_get D-D (1-4K bytes), comparing Host-Pipeline and GDR; 7X and 9X improvements highlighted.]

Page 36:

OpenSHMEM: Intra-node Evaluation

[Charts: latency (us) vs. message size (1-4K bytes) for small-message shmem_put D-H and shmem_get D-H, comparing IPC and GDR; 2.6X and 3.6X improvements highlighted.]

• GDR for small and medium message sizes

• IPC for large messages to avoid PCIe bottlenecks

• Hybrid design brings best of both

• 2.42 us Put D-H latency for 4 Bytes (2.6X improvement) and 3.92 us latency for 4 KBytes

• 3.6X improvement for Get operation

• Similar results with other configurations (D-D, H-D and D-H)

Page 37:

Application Evaluation: GPULBM and 2DStencil

[Charts: execution time (sec) and evolution time (msec), weak scaling, vs. number of GPU nodes (8-64) for GPULBM (64x64x64) and 2DStencil (2Kx2K), comparing Host-Pipeline and Enhanced-GDR; 19% and 45% improvements highlighted.]

- Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)

- New designs achieve 20% and 19% improvements on 32 and 64 GPU nodes

• Redesign of the application:

– CUDA-Aware MPI Send/Recv => hybrid CUDA-Aware MPI+OpenSHMEM

– cudaMalloc => shmalloc(size)

– MPI_Send/Recv => shmem_put + fence

• 53% and 45% improvements; the degradation seen at some points is due to the small input size

• Will be available in a future MVAPICH2-GDR release
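A hedged sketch of the put + fence pattern that this redesign substitutes for MPI_Send/MPI_Recv; only standard OpenSHMEM calls are shown, and the function, buffer names, and the assumption that the symmetric buffer resides in GPU memory (per the CUDA-aware OpenSHMEM extensions cited below) are illustrative:

#include <shmem.h>

/* Replaces an MPI_Send of n floats to `peer` with a one-sided put.
   dest_sym is assumed to be symmetric (allocated with shmalloc on every PE)
   and, with CUDA-aware OpenSHMEM, may point to GPU device memory. */
void exchange(float *dest_sym, const float *src, int n, int peer)
{
    shmem_putmem(dest_sym, src, n * sizeof(float), peer); /* one-sided transfer */
    shmem_fence();                                        /* order the put to `peer` */
}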

1. K. Hamidouche, A. Venkatesh, A. Awan, H. Subramoni, C. Ching and D. K. Panda, Exploiting GPUDirect RDMA in Designing High Performance OpenSHMEM for GPU Clusters. IEEE Cluster 2015.

2. K. Hamidouche, A. Venkatesh, A. Awan, H. Subramoni, C. Ching and D. K. Panda, CUDA-Aware OpenSHMEM: Extensions and Designs for High Performance OpenSHMEM on GPU Clusters. To appear in PARCO.

Page 38:

OpenACC-Aware MVAPICH2

• acc_malloc to allocate device memory

– No changes to MPI calls

– MVAPICH2 detects the device pointer and optimizes data movement

• acc_deviceptr to get device pointer (in OpenACC 2.0)

– Enables MPI communication from memory allocated by compiler when it is available in OpenACC 2.0 implementations

– MVAPICH2 will detect the device pointer and optimize communication

• Delivers the same performance as with CUDA

/* Case 1: buffer allocated with acc_malloc -- the device pointer is passed directly to MPI */
A = acc_malloc(sizeof(int) * N);
...
#pragma acc parallel loop deviceptr(A)
/* compute loop */
MPI_Send(A, N, MPI_INT, 0, 1, MPI_COMM_WORLD);
...
acc_free(A);

/* Case 2: host buffer managed by an OpenACC data region -- acc_deviceptr(A) yields the device copy */
A = malloc(sizeof(int) * N);
...
#pragma acc data copyin(A[0:N])
{
    #pragma acc parallel loop
    /* compute loop */
    MPI_Send(acc_deviceptr(A), N, MPI_INT, 0, 1, MPI_COMM_WORLD);
}
...
free(A);

Page 39:

• Architectures for Exascale systems are evolving

• Exascale systems will be constrained by

– Power

– Memory per core

– Data movement cost

– Faults

• Programming Models, Runtimes and Middleware need to be designed for

– Scalability

– Performance

– Fault-resilience

– Energy-awareness

– Programmability

– Productivity

• MPI+OpenMP may not be `the only’ paradigm

• MPI+X (with different options for X) needs to be explored

• Highlighted some of these options, associated issues, and challenges

• Need continuous innovation on all these fronts to have the right paradigm

Looking into the Future ….

Page 40:

Funding Acknowledgments

Funding Support by

Equipment Support by

Page 41:

Personnel Acknowledgments

Current Students

– A. Awan (Ph.D.)

– M. Bayatpour (Ph.D.)

– S. Chakraborthy (Ph.D.)

– C.-H. Chu (Ph.D.)

Past Students

– A. Augustine (M.S.)

– P. Balaji (Ph.D.)

– S. Bhagvat (M.S.)

– A. Bhat (M.S.)

– D. Buntinas (Ph.D.)

– L. Chai (Ph.D.)

– B. Chandrasekharan (M.S.)

– N. Dandapanthula (M.S.)

– V. Dhanraj (M.S.)

– T. Gangadharappa (M.S.)

– K. Gopalakrishnan (M.S.)

– R. Rajachandrasekar (Ph.D.)

– G. Santhanaraman (Ph.D.)

– A. Singh (Ph.D.)

– J. Sridhar (M.S.)

– S. Sur (Ph.D.)

– H. Subramoni (Ph.D.)

– K. Vaidyanathan (Ph.D.)

– A. Vishnu (Ph.D.)

– J. Wu (Ph.D.)

– W. Yu (Ph.D.)

Past Research Scientist

– S. Sur

Past Post-Docs

– D. Banerjee

– X. Besseron

– H.-W. Jin

– W. Huang (Ph.D.)

– W. Jiang (M.S.)

– J. Jose (Ph.D.)

– S. Kini (M.S.)

– M. Koop (Ph.D.)

– K. Kulkarni (M.S.)

– R. Kumar (M.S.)

– S. Krishnamoorthy (M.S.)

– K. Kandalla (Ph.D.)

– P. Lai (M.S.)

– J. Liu (Ph.D.)

– M. Luo (Ph.D.)

– A. Mamidala (Ph.D.)

– G. Marsh (M.S.)

– V. Meshram (M.S.)

– A. Moody (M.S.)

– S. Naravula (Ph.D.)

– R. Noronha (Ph.D.)

– X. Ouyang (Ph.D.)

– S. Pai (M.S.)

– S. Potluri (Ph.D.)

– S. Guganani (Ph.D.)

– J. Hashmi (Ph.D.)

– N. Islam (Ph.D.)

– M. Li (Ph.D.)

– J. Lin

– M. Luo

– E. Mancini

Current Research Scientists

– K. Hamidouche

– X. Lu

– H. Subramoni

Past Programmers

– D. Bureddy

– M. Arnold

– J. Perkins

Current Research Specialist

– J. Smith

– M. Rahman (Ph.D.)

– D. Shankar (Ph.D.)

– A. Venkatesh (Ph.D.)

– J. Zhang (Ph.D.)

– S. Marcarelli

– J. Vienne

– H. Wang

Page 42:

• Three Conference Tutorials (IB+HSE, IB+HSE Advanced, Big Data)

• Intel HPC Developers Conference

• Technical Papers (SC main conference; Doctoral Showcase; Poster; PDSW-DISC, PAW, COMHPC, and ESPM2 Workshops)

• Booth Presentations (Mellanox, NVIDIA, NRL, PGAS)

• HPC Connection Workshop

• Will be stationed at the Ohio Supercomputer Center/OH-TECH Booth (#1107)

– Multiple presentations and demos

• More Details from http://mvapich.cse.ohio-state.edu/talks/

OSU Team Will be Participating in Multiple Events at SC ‘16

Page 43:

[email protected]

Thank You!

Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/

The MVAPICH Project: http://mvapich.cse.ohio-state.edu/