
Transcript of: High-Performance and Scalable Designs of Programming Models for Exascale Systems

High-Performance and Scalable Designs of Programming Models for Exascale Systems

Dhabaleswar K. (DK) Panda

The Ohio State University

E-mail: [email protected]

http://www.cse.ohio-state.edu/~panda

Keynote Talk at HPCAC, Switzerland (April 2017)


HPCAC-Switzerland (April '17) | Slide 2 | Network Based Computing Laboratory

High-End Computing (HEC): Towards Exascale-Era

Expected to have an ExaFlop system in 2021!

[Figure: projected performance timeline: 150-300 PFlops in 2017-18? 1 EFlops in 2021?]

HPCAC-Switzerland (April '17) | Slide 3 | Network Based Computing Laboratory

• Scientific Computing

– Message Passing Interface (MPI), including MPI + OpenMP, is the dominant programming model (see the sketch after this list)

– Many discussions towards Partitioned Global Address Space (PGAS)

• UPC, OpenSHMEM, CAF, UPC++ etc.

– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)

• Deep Learning

– Caffe, CNTK, TensorFlow, and many more

• Big Data/Enterprise/Commercial Computing

– Focuses on large data and data analysis

– Spark and Hadoop (HDFS, HBase, MapReduce)

– Memcached is also used for Web 2.0

Three Major Computing Categories
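A minimal sketch of the MPI + OpenMP hybrid style mentioned above, assuming an MPI library with MPI_THREAD_FUNNELED support; the work loop and names are illustrative, not from the slides.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* MPI + OpenMP hybrid sketch: MPI between processes (typically one or a
     * few per node), OpenMP threads inside each process. Illustrative only. */
    int main(int argc, char **argv) {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local_sum = 0.0;
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < 1000000; i++)
            local_sum += 1.0 / (i + 1.0);      /* on-node multi-threaded work */

        double global_sum = 0.0;               /* inter-process step uses MPI */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("sum = %f\n", global_sum);
        MPI_Finalize();
        return 0;
    }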

HPCAC-Switzerland (April '17) | Slide 4 | Network Based Computing Laboratory

Parallel Programming Models Overview

[Figure: three abstract machine models, each with processes P1, P2, P3]

- Shared Memory Model (SHMEM, DSM): all processes share one memory
- Distributed Memory Model (MPI, the Message Passing Interface): each process has its own memory
- Partitioned Global Address Space (PGAS) Model (Global Arrays, UPC, Chapel, X10, CAF, ...): per-process memories combined into a logical shared memory

• Programming models provide abstract machine models

• Models can be mapped on different types of systems

– e.g. Distributed Shared Memory (DSM), MPI within a node, etc.

• PGAS models and hybrid MPI+PGAS models are gradually gaining importance

HPCAC-Switzerland (April '17) | Slide 5 | Network Based Computing Laboratory

Partitioned Global Address Space (PGAS) Models

• Key features

- Simple shared memory abstractions

- Lightweight one-sided communication (see the sketch after this list)

- Easier to express irregular communication

• Different approaches to PGAS

- Languages

• Unified Parallel C (UPC)

• Co-Array Fortran (CAF)

• X10

• Chapel

- Libraries

• OpenSHMEM

• UPC++

• Global Arrays
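A minimal OpenSHMEM sketch of the lightweight one-sided communication listed above, using standard OpenSHMEM calls; the neighbour exchange itself is illustrative.

    #include <shmem.h>
    #include <stdio.h>

    /* One-sided PGAS sketch: each PE writes its rank into a symmetric
     * variable on its right neighbour with a single put. */
    int main(void) {
        static long dest = -1;                 /* symmetric (remotely accessible) */
        shmem_init();
        int me = shmem_my_pe();
        int npes = shmem_n_pes();
        long src = me;

        shmem_long_put(&dest, &src, 1, (me + 1) % npes);  /* one-sided put */
        shmem_barrier_all();                   /* make all puts visible */

        printf("PE %d received %ld\n", me, dest);
        shmem_finalize();
        return 0;
    }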

HPCAC-Switzerland (April '17) | Slide 6 | Network Based Computing Laboratory

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI/PGAS based on their communication characteristics (a sketch follows the figure below)

• Benefits:

– Best of Distributed Computing Model

– Best of Shared Memory Computing Model

[Figure: an HPC application composed of kernels; each kernel (Kernel 1 ... Kernel N) uses MPI, with selected kernels (e.g. Kernel 2 and Kernel N) re-written in PGAS]
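A minimal sketch of the MPI + OpenSHMEM hybrid pattern shown above, assuming a unified runtime (as in MVAPICH2-X) that lets both libraries coexist in one job and map ranks to PEs; the two toy kernels are illustrative.

    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    /* Hybrid sketch: a regular kernel uses an MPI collective, an
     * irregular-access kernel uses a one-sided OpenSHMEM put. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        shmem_init();

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Kernel 1 (MPI): regular collective communication */
        int local = rank, sum = 0;
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        /* Kernel 2 (PGAS): one-sided update on a neighbour's symmetric data */
        static int flag = 0;                    /* symmetric variable */
        int one = 1;
        shmem_int_put(&flag, &one, 1, (rank + 1) % size);
        shmem_barrier_all();

        if (rank == 0) printf("sum=%d flag=%d\n", sum, flag);

        shmem_finalize();
        MPI_Finalize();
        return 0;
    }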

HPCAC-Switzerland (April '17) | Slide 7 | Network Based Computing Laboratory

Designing Communication Middleware for Multi-Petaflop and Exaflop Systems: Challenges

[Figure: co-design stack]
- Application Kernels/Applications
- Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
- Middleware: Communication Library or Runtime for Programming Models, covering point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, and fault tolerance
- Networking Technologies (InfiniBand, 40/100 GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, Accelerators (NVIDIA and MIC)
- Co-design opportunities and challenges across these layers: performance, scalability, fault-resilience

HPCAC-Switzerland (April '17) | Slide 8 | Network Based Computing Laboratory

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

– Scalable job start-up

• Scalable Collective communication (see the sketch after this list)

– Offload

– Non-blocking

– Topology-aware

• Balancing intra-node and inter-node communication for next generation nodes (128-1024 cores)

– Multiple end-points per node

• Support for efficient multi-threading

• Integrated Support for GPGPUs and Accelerators

• Fault-tolerance/resiliency

• QoS support for communication and I/O

• Support for Hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI+UPC++, CAF, …)

• Virtualization

• Energy-Awareness

Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale
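A minimal sketch of the non-blocking collective usage pattern that the offloaded/non-blocking collective designs above target; the overlap loop is illustrative.

    #include <mpi.h>
    #include <stdio.h>

    /* Non-blocking collective sketch: start an allreduce, overlap it with
     * independent computation, then wait for completion. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = rank, global = 0.0;
        MPI_Request req;
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

        /* Independent computation proceeds while the allreduce is in flight
         * (and can be progressed by the network when offloaded). */
        double work = 0.0;
        for (int i = 0; i < 1000000; i++) work += i * 1e-9;

        MPI_Wait(&req, MPI_STATUS_IGNORE);     /* complete the collective */
        if (rank == 0) printf("global=%f work=%f\n", global, work);
        MPI_Finalize();
        return 0;
    }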

HPCAC-Switzerland (April '17) | Slide 9 | Network Based Computing Laboratory

• Extreme Low Memory Footprint

– Memory per core continues to decrease

• D-L-A Framework

– Discover

• Overall network topology (fat-tree, 3D, …), Network topology for processes for a given job

• Node architecture, Health of network and node

– Learn

• Impact on performance and scalability

• Potential for failure

– Adapt

• Internal protocols and algorithms

• Process mapping

• Fault-tolerance solutions

– Low overhead techniques while delivering performance, scalability and fault-tolerance

Additional Challenges for Designing Exascale Software Libraries

HPCAC-Switzerland (April '17) | Slide 10 | Network Based Computing Laboratory

Overview of the MVAPICH2 Project

• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002

– MVAPICH2-X (MPI + PGAS), Available since 2011

– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014

– Support for Virtualization (MVAPICH2-Virt), Available since 2015

– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015

– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015

– Used by more than 2,750 organizations in 83 countries

– More than 413,000 (> 0.4 million) downloads from the OSU site directly

– Empowering many TOP500 clusters (Nov ‘16 ranking)

• 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China

• 13th, 241,108-core (Pleiades) at NASA

• 17th, 462,462-core (Stampede) at TACC

• 40th, 74,520-core (Tsubame 2.5) at Tokyo Institute of Technology

– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)

– http://mvapich.cse.ohio-state.edu

• Empowering Top500 systems for over a decade

– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) -> Sunway TaihuLight (1st in Jun '16, 10M cores, 100 PFlops)

HPCAC-Switzerland (April '17) | Slide 11 | Network Based Computing Laboratory

MVAPICH2 Release Timeline and Downloads

[Figure: cumulative number of downloads from the OSU site, Sep 2004 to Mar 2017 (0 to roughly 450,000), annotated with release milestones: MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV2 1.0, MV 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2 2.1, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-Virt 2.1rc2, MV2-GDR 2.2rc1, MV2-X 2.2, MV2 2.3a]

HPCAC-Switzerland (April '17) | Slide 12 | Network Based Computing Laboratory

Architecture of MVAPICH2 Software Family

- High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk)
- High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
- Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPower, Xeon Phi (MIC, KNL), NVIDIA GPGPU)
- Transport protocols: RC, XRC, UD, DC; modern features: UMR, ODP, SR-IOV, multi-rail; transport mechanisms: shared memory, CMA, IVSHMEM; modern features: MCDRAM*, NVLink*, CAPI* (* upcoming)

HPCAC-Switzerland (April '17) | Slide 13 | Network Based Computing Laboratory

MVAPICH2 Software Family

High-Performance Parallel Programming Libraries

MVAPICH2 Support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE

MVAPICH2-X Advanced MPI features, OSU INAM, PGAS (OpenSHMEM, UPC, UPC++, and CAF), and MPI+PGAS programming models with unified communication runtime

MVAPICH2-GDR Optimized MPI for clusters with NVIDIA GPUs

MVAPICH2-Virt High-performance and scalable MPI for hypervisor and container based HPC cloud

MVAPICH2-EA Energy aware and High-performance MPI

MVAPICH2-MIC Optimized MPI for clusters with Intel KNC

Microbenchmarks

OMB Microbenchmarks suite to evaluate MPI and PGAS (OpenSHMEM, UPC, and UPC++) libraries for CPUs and GPUs

Tools

OSU INAM Network monitoring, profiling, and analysis for clusters with MPI and scheduler integration

OEMT Utility to measure the energy consumption of MPI applications

HPCAC-Switzerland (April '17) | Slide 14 | Network Based Computing Laboratory

• Released on 03/29/2017

• Major Features and Enhancements

– Based on and ABI compatible with MPICH 3.2

– Support collective offload using Mellanox's SHArP for Allreduce

• Enhance tuning framework for Allreduce using SHArP

– Introduce capability to run MPI jobs across multiple InfiniBand subnets

– Introduce basic support for executing MPI jobs in Singularity

– Enhance collective tuning for Intel Knights Landing and Intel Omni-Path

– Enhance process mapping support for multi-threaded MPI applications

• Introduce MV2_CPU_BINDING_POLICY=hybrid

• Introduce MV2_THREADS_PER_PROCESS (see the sketch after this list)

– On-demand connection management for PSM-CH3 and PSM2-CH3 channels

– Enhance PSM-CH3 and PSM2-CH3 job startup to use non-blocking PMI calls

– Enhance debugging support for PSM-CH3 and PSM2-CH3 channels

– Improve performance of architecture detection

– Introduce run time parameter MV2_SHOW_HCA_BINDING to show process to HCA binding

• Enhance MV2_SHOW_CPU_BINDING to enable display of CPU bindings on all nodes

– Deprecate OFA-IB-Nemesis channel

– Update to hwloc version 1.11.6

MVAPICH2 2.3a
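The process-mapping controls above target multi-threaded MPI applications. A minimal sketch of such an application, assuming MPI_THREAD_MULTIPLE support; it would be launched with the per-process thread count matching MV2_THREADS_PER_PROCESS and MV2_CPU_BINDING_POLICY=hybrid, and the exchange pattern is illustrative.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Multi-threaded MPI sketch: several OpenMP threads per rank issue MPI
     * calls concurrently, which requires MPI_THREAD_MULTIPLE. */
    int main(int argc, char **argv) {
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        #pragma omp parallel
        {
            int tid = omp_get_thread_num();
            int msg = rank * 100 + tid, recvd = -1;
            /* Each thread exchanges with the same thread id on the next rank;
             * distinct tags keep the per-thread exchanges separate. */
            MPI_Sendrecv(&msg, 1, MPI_INT, (rank + 1) % size, tid,
                         &recvd, 1, MPI_INT, (rank - 1 + size) % size, tid,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }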

HPCAC-Switzerland (April '17) | Slide 15 | Network Based Computing Laboratory

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication

– Support for advanced IB mechanisms (ODP)

– Extremely minimal memory footprint (DCT)

– Scalable Job Start-up

– Dynamic and Adaptive Tag Matching

– Collective Support with SHArP

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• InfiniBand Network Analysis and Monitoring (INAM)

• Virtualization and HPC Cloud

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

HPCAC-Switzerland (April '17) | Slide 16 | Network Based Computing Laboratory

One-way Latency: MPI over IB with MVAPICH2

[Figure: small-message and large-message one-way latency (us) vs. message size; small-message latencies of 1.01-1.19 us across TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, and Omni-Path]

Platforms: TrueScale-QDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-3-FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch; ConnectIB-Dual FDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-4-EDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; Omni-Path - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch
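A minimal ping-pong sketch of the kind of measurement behind these latency numbers; the actual OSU osu_latency benchmark is more careful about warm-up and message sizes, so this is illustrative only.

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define ITERS 1000
    #define SIZE  8                              /* small-message size in bytes */

    /* Minimal ping-pong between ranks 0 and 1 (run with two ranks);
     * reports average one-way latency. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        char buf[SIZE]; memset(buf, 0, SIZE);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)                           /* one-way latency in microseconds */
            printf("%.2f us\n", (t1 - t0) * 1e6 / (2.0 * ITERS));

        MPI_Finalize();
        return 0;
    }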

HPCAC-Switzerland (April '17) | Slide 17 | Network Based Computing Laboratory

Bandwidth: MPI over IB with MVAPICH2

[Figure: unidirectional and bidirectional bandwidth (MBytes/sec) vs. message size; peak unidirectional bandwidths of 3,373 / 6,356 / 12,083 / 12,366 / 12,590 MBytes/sec and peak bidirectional bandwidths of 6,228 / 12,161 / 21,227 / 21,983 / 24,136 MBytes/sec across the five adapters]

Platforms: TrueScale-QDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-3-FDR - 2.8 GHz Deca-core (IvyBridge) Intel PCI Gen3 with IB switch; ConnectIB-Dual FDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; ConnectX-4-EDR - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with IB switch; Omni-Path - 3.1 GHz Deca-core (Haswell) Intel PCI Gen3 with Omni-Path switch

HPCAC-Switzerland (April '17) | Slide 18 | Network Based Computing Laboratory

MVAPICH2 Two-Sided Intra-Node Performance (Shared memory and Kernel-based Zero-copy Support (LiMIC and CMA))

Platform: Intel Ivy-bridge

[Figure: intra-node latency (0.18 us intra-socket, 0.45 us inter-socket) and intra-/inter-socket bandwidth vs. message size for the CMA, Shmem, and LiMIC channels, peaking at 14,250 MB/s and 13,749 MB/s]

HPCAC-Switzerland (April '17) | Slide 19 | Network Based Computing Laboratory

Minimizing Memory Footprint by Direct Connect (DC) Transport

[Figure: processes P0-P7 on Node 0 through Node 3 connected through the IB network]

• Constant connection cost (One QP for any peer)

• Full Feature Set (RDMA, Atomics etc)

• Separate objects for send (DC Initiator) and receive (DC Target)

– DC Target identified by “DCT Number”

– Messages routed with (DCT Number, LID)

– Requires same “DC Key” to enable communication

• Available since MVAPICH2-X 2.2a

[Figure: NAMD apoa1 (large data set) normalized execution time at 160, 320, and 620 processes, and connection memory (KB) for Alltoall at 80-640 processes, comparing RC, DC-Pool, UD, and XRC transports]

H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand: Early Experiences. IEEE International Supercomputing Conference (ISC '14)

HPCAC-Switzerland (April '17) | Slide 20 | Network Based Computing Laboratory

• Near-constant MPI and OpenSHMEM initialization time at any process count

• 10x and 30x improvement in startup time of MPI and OpenSHMEM respectively at 16,384 processes

• Memory consumption reduced for remote endpoint information by O(processes per node)

• 1 GB memory saved per node with 1M processes and 16 processes per node

Towards High Performance and Scalable Startup at Exascale

[Figure: job startup performance and memory required to store endpoint information; P = PGAS state of the art, M = MPI state of the art, O = PGAS/MPI optimized; optimizations: (a) on-demand connection, (b) PMIX_Ring, (c) PMIX_Ibarrier, (d) PMIX_Iallgather, (e) shmem-based PMI]

On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D. K. Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS '15)

PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D. K. Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia '14)

Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D. K. Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '15)

SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D. K. Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '16)

HPCAC-Switzerland (April '17) | Slide 21 | Network Based Computing Laboratory

Scalable Job Startup with Non-blocking PMI

[Figure: time taken for MPI_Init and Hello World (seconds) vs. number of processes, comparing MV2-2.0 + default SLURM with MV2-2.1 + optimized SLURM (up to 16K processes), and MPI_Init / Hello World scaling up to 64K processes]

• Near-constant MPI_Init at any scale

• MPI_Init is 59 times faster; Hello World (MPI_Init + MPI_Finalize) takes 5.7 times less time with 8,192 processes (512 nodes)

• 16,384 processes, 1,024 nodes (Sandy Bridge + FDR)

• Non-blocking PMI implementation in mpirun_rsh

• On-demand connection management for PSM and Omni-Path

• Efficient intra-node startup for Knights Landing

• MPI_Init takes 5.8 seconds; Hello World takes 21 seconds

• 65,536 processes, 1,024 nodes (KNL + Omni-Path)

Co-designing Resource Manager and MPI Library; Resource Manager Independent Design

New designs available in MVAPICH2-2.3a and as patch for SLURM-15.08.8 and SLURM-16.05.1

HPCAC-Switzerland (April '17) | Slide 22 | Network Based Computing Laboratory

• Introduced by Mellanox to avoid pinning the pages of registered memory regions

• ODP-aware runtime could reduce the size of pin-down buffers while maintaining performance

• Available in MVAPICH2-X 2.2

On-Demand Paging (ODP)

M. Li, K. Hamidouche, X. Lu, H. Subramoni, J. Zhang, and D. K. Panda, "Designing MPI Library with On-Demand Paging (ODP) of InfiniBand: Challenges and Benefits", SC, 2016

[Figure: application execution time (s, log scale) with 64 processes, comparing pin-down and ODP]

HPCAC-Switzerland (April '17) | Slide 23 | Network Based Computing Laboratory

Dynamic and Adaptive Tag Matching

[Figure: normalized total tag matching time at 512 processes (normalized to default, lower is better) and normalized memory overhead per process at 512 processes (compared to default, lower is better)]

Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]

Challenge: Tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.

Solution: A new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases its decisions on the number of request objects that must be traversed before hitting the required one.

Results: Better performance than other state-of-the-art tag-matching schemes with minimum memory consumption. Will be available in future MVAPICH2 releases.

HPCAC-Switzerland (April '17) | Slide 24 | Network Based Computing Laboratory

Advanced Allreduce Collective Designs Using SHArP: osu_allreduce (OSU Micro Benchmark) using MVAPICH2 2.3a

[Figure: osu_allreduce latency (us) vs. message size (4-256 bytes) on 16 nodes with 4 and 28 PPN, and vs. (number of nodes, PPN) for 32-byte and 128-byte messages, comparing MVAPICH2 with MVAPICH2-SHArP; improvements of up to 2.3x (4 PPN) and 1.4x-1.5x (28 PPN). *PPN: Processes Per Node]
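A minimal sketch of the osu_allreduce-style measurement behind these numbers; whether the reduction is offloaded to the switch (SHArP) is decided inside the MPI library, not in application code, and the timing loop here is simplified.

    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 1000
    #define COUNT 8                              /* 8 floats = 32-byte allreduce */

    /* Small-message MPI_Allreduce timing loop; the library may offload the
     * reduction to SHArP transparently when that support is enabled. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        float in[COUNT], out[COUNT];
        for (int i = 0; i < COUNT; i++) in[i] = (float)rank;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++)
            MPI_Allreduce(in, out, COUNT, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg allreduce latency: %.2f us\n", (t1 - t0) * 1e6 / ITERS);

        MPI_Finalize();
        return 0;
    }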

HPCAC-Switzerland (April '17) | Slide 25 | Network Based Computing Laboratory

Advanced Allreduce Collective Designs Using SHArP (based on MVAPICH2 2.3a)

[Figure: average DDOT Allreduce time of HPCG and mesh refinement time of MiniAMR, latency (seconds) at (4,28), (8,28), and (16,28) (nodes, PPN), comparing MVAPICH2 and MVAPICH2-SHArP; about 12% improvement for HPCG and 13% for MiniAMR]

HPCAC-Switzerland (April '17) | Slide 26 | Network Based Computing Laboratory

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• InfiniBand Network Analysis and Monitoring (INAM)

• Virtualization and HPC Cloud

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

HPCAC-Switzerland (April '17) | Slide 27 | Network Based Computing Laboratory

MVAPICH2-X for Hybrid MPI + PGAS Applications

• Current Model – Separate Runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI

– Possible deadlock if both runtimes are not progressed

– Consumes more network resource

• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF

– Available since 2012 (starting with MVAPICH2-X 1.9)

– http://mvapich.cse.ohio-state.edu

HPCAC-Switzerland (April '17) | Slide 28 | Network Based Computing Laboratory

UPC++ Support in MVAPICH2-X

[Figure: an MPI + {UPC++} application can run either over the standard UPC++ runtime with GASNet interfaces and an MPI conduit, or over the MVAPICH2-X Unified Communication Runtime (UCR) through native UPC++ and MPI interfaces]

• Full and native support for hybrid MPI + UPC++ applications

• Better performance compared to IBV and MPI conduits

• OSU Micro-benchmarks (OMB) support for UPC++

• Available since MVAPICH2-X (2.2rc1)

[Figure: inter-node broadcast time (ms) vs. message size (1K-1M bytes) on 64 nodes (1 process per node), comparing GASNet_MPI, GASNet_IBV, and MV2-X; MV2-X is up to 14x faster than the MPI conduit]

HPCAC-Switzerland (April '17) | Slide 29 | Network Based Computing Laboratory

Application Level Performance with Graph500 and Sort

Graph500 Execution Time

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013

J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012

[Figure: Graph500 execution time (s) at 4K, 8K, and 16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); up to 7.6X and 13X improvement over MPI-Simple]

• Performance of Hybrid (MPI+OpenSHMEM) Graph500 Design

• 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple

• 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple

J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013

Sort Execution Time

[Figure: sort execution time (seconds) for input data / process counts 500GB-512, 1TB-1K, 2TB-2K, and 4TB-4K, comparing MPI and Hybrid; 51% improvement at 4TB-4K]

• Performance of Hybrid (MPI+OpenSHMEM) Sort Application

• 4,096 processes, 4 TB input size: MPI takes 2408 sec (0.16 TB/min), Hybrid takes 1172 sec (0.36 TB/min), a 51% improvement over the MPI design

HPCAC-Switzerland (April '17) | Slide 30 | Network Based Computing Laboratory

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

– CUDA-aware MPI

– GPUDirect RDMA (GDR) Support

– Preliminary Evaluation on OpenPower

• InfiniBand Network Analysis and Monitoring (INAM)

• Virtualization and HPC Cloud

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

HPCAC-Switzerland (April '17) | Slide 31 | Network Based Computing Laboratory

GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

At Sender: MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(data movement is handled inside MVAPICH2)

• Standard MPI interfaces used for unified data movement

• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)

• Overlaps data movement from GPU with RDMA transfers

High Performance and High Productivity
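A minimal CUDA-aware MPI sketch expanding the MPI_Send/MPI_Recv calls above: device pointers are passed directly to MPI and the library moves the data. Buffer contents and sizes are illustrative; error checking is omitted.

    #include <mpi.h>
    #include <cuda_runtime.h>

    #define N (1 << 20)

    /* CUDA-aware MPI sketch (run with two ranks): rank 0 sends a GPU buffer
     * to rank 1 by passing device pointers straight to MPI_Send/MPI_Recv. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        float *devbuf;
        cudaMalloc((void **)&devbuf, N * sizeof(float));

        if (rank == 0) {
            cudaMemset(devbuf, 0, N * sizeof(float));       /* fill s_devbuf */
            MPI_Send(devbuf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(devbuf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* r_devbuf now holds the data; kernels can consume it directly. */
        }

        cudaFree(devbuf);
        MPI_Finalize();
        return 0;
    }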

HPCAC-Switzerland (April '17) | Slide 32 | Network Based Computing Laboratory

CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases

• Support for MPI communication from NVIDIA GPU device memory

• High performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)

• High performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host and Host-GPU)

• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node

• Optimized and tuned collectives for GPU device buffers

• MPI datatype support for point-to-point and collective communication from GPU device buffers

• Unified memory

HPCAC-Switzerland (April '17) | Slide 33 | Network Based Computing Laboratory

Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)

[Figure: GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth vs. message size, comparing MV2-GDR 2.2, MV2-GDR 2.0b, and MV2 without GDR; MV2-GDR 2.2 reaches 2.18 us GPU-GPU inter-node latency and improvements of roughly 2x to 11x over MV2-GDR 2.0b and MV2 without GDR across the three metrics]

Platform: MVAPICH2-GDR-2.2, Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox ConnectX-4 EDR HCA, CUDA 8.0, Mellanox OFED 3.0 with GPU-Direct-RDMA

HPCAC-Switzerland (April '17) | Slide 34 | Network Based Computing Laboratory

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)

• HoomdBlue Version 1.0.5

• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768

MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768

MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

Application-Level Evaluation (HOOMD-blue)

[Figure: HOOMD-blue average time steps per second (TPS) vs. number of processes (4-32) for 64K and 256K particles, comparing MV2 and MV2+GDR; about 2X improvement]

HPCAC-Switzerland (April '17) | Slide 35 | Network Based Computing Laboratory

Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland

[Figure: normalized execution time vs. number of GPUs on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs), comparing Default, Callback-based, and Event-based designs]

• 2X improvement on 32 GPU nodes

• 30% improvement on 96 GPU nodes (8 GPUs/node)

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16

On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the Cosmo application

Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/

HPCAC-Switzerland (April '17) | Slide 36 | Network Based Computing Laboratory

MVAPICH2-GDR on OpenPower (Preliminary Evaluation)

Intra-node Numbers (Basic Configuration)

[Figure: intra-node osu_latency [D-D], osu_latency [H-D], and osu_bw [H-D] vs. message size, comparing GDR=1 GDRCOPY=0 with GDR=1 GDRCOPY=1]

Platform: POWER8NVL-ppc64le @ 4.023 GHz, MVAPICH2-GDR 2.2, MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=32768

HPCAC-Switzerland (April '17) | Slide 37 | Network Based Computing Laboratory

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• Energy-Awareness

• InfiniBand Network Analysis and Monitoring (INAM)

• Virtualization and HPC Cloud

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

HPCAC-Switzerland (April '17) | Slide 38 | Network Based Computing Laboratory

Overview of OSU INAM

• A network monitoring and analysis tool that analyzes traffic on the InfiniBand network with inputs from the MPI runtime

– http://mvapich.cse.ohio-state.edu/tools/osu-inam/

• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes

• OSU INAM v0.9.1 released on 05/13/16

• Significant enhancements to user interface to enable scaling to clusters with thousands of nodes

• Improve database insert times by using 'bulk inserts'

• Capability to look up list of nodes communicating through a network link

• Capability to classify data flowing over a network link at job level and process level granularity in conjunction with MVAPICH2-X 2.2rc1

• "Best practices" guidelines for deploying OSU INAM on different clusters

• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication

– Point-to-Point, Collectives and RMA

• Ability to filter data based on type of counters using “drop down” list

• Remotely monitor various metrics of MPI processes at user specified granularity

• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X

• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes

HPCAC-Switzerland (April '17) | Slide 39 | Network Based Computing Laboratory

OSU INAM Features

• Show network topology of large clusters

• Visualize traffic pattern on different links

• Quickly identify congested links/links in error state

• See the history unfold – play back historical state of the network

Comet@SDSC --- Clustered View

(1,879 nodes, 212 switches, 4,377 network links)

Finding Routes Between Nodes

HPCAC-Switzerland (April '17) | Slide 40 | Network Based Computing Laboratory

OSU INAM Features (Cont.)

Visualizing a Job (5 Nodes)

• Job level view

• Show different network metrics (load, error, etc.) for any live job

• Play back historical data for completed jobs to identify bottlenecks

• Node level view - details per process or per node

• CPU utilization for each rank/node

• Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)

• Network metrics (e.g. XmitDiscard, RcvError) per rank/node

Estimated Process Level Link Utilization

• Estimated Link Utilization view

• Classify data flowing over a network link at different granularity (job level and process level) in conjunction with MVAPICH2-X 2.2rc1

HPCAC-Switzerland (April '17) | Slide 41 | Network Based Computing Laboratory

• Scalability for million to billion processors

• Unified Runtime for Hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, …)

• Integrated Support for GPGPUs

• InfiniBand Network Analysis and Monitoring (INAM)

• Virtualization and HPC Cloud

Overview of A Few Challenges being Addressed by the MVAPICH2 Project for Exascale

HPCAC-Switzerland (April '17) | Slide 42 | Network Based Computing Laboratory

• Virtualization has many benefits

– Fault-tolerance

– Job migration

– Compaction

• It has not been very popular in HPC due to the overhead associated with virtualization

• New SR-IOV (Single Root – IO Virtualization) support available with Mellanox InfiniBand adapters changes the field

• Enhanced MVAPICH2 support for SR-IOV

• MVAPICH2-Virt 2.1 (with and without OpenStack) is publicly available

Can HPC and Virtualization be Combined?

J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar '14

J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14

J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid '15

HPCAC-Switzerland (April '17) | Slide 43 | Network Based Computing Laboratory

• Redesign MVAPICH2 to make it virtual machine aware

– SR-IOV shows near-native performance for inter-node point-to-point communication

– IVSHMEM offers zero-copy access to data on shared memory of co-resident VMs

– Locality Detector: maintains the locality information of co-resident virtual machines

– Communication Coordinator: selects the communication channel (SR-IOV, IVSHMEM) adaptively

Overview of MVAPICH2-Virt with SR-IOV and IVSHMEM

[Figure: host environment with hypervisor and PF driver over the InfiniBand adapter (physical function); two guest VMs, each with an MPI process and a VF driver bound to a virtual function (SR-IOV channel), plus a /dev/shm-backed IVSHMEM device (IV-Shmem channel)]

J. Zhang, X. Lu, J. Jose, R. Shi, D. K. Panda. Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? Euro-Par, 2014.

J. Zhang, X. Lu, J. Jose, R. Shi, M. Li, D. K. Panda. High Performance MPI Library over SR-IOV Enabled InfiniBand Clusters. HiPC, 2014.

HPCAC-Switzerland (April '17) | Slide 44 | Network Based Computing Laboratory

• OpenStack is one of the most popular open-source solutions to build clouds and manage virtual machines

• Deployment with OpenStack

– Supporting SR-IOV configuration

– Supporting IVSHMEM configuration

– Virtual machine aware design of MVAPICH2 with SR-IOV

• An efficient approach to build HPC clouds with MVAPICH2-Virt and OpenStack

MVAPICH2-Virt with SR-IOV and IVSHMEM over OpenStack

J. Zhang, X. Lu, M. Arnold, D. K. Panda. MVAPICH2 over OpenStack with SR-IOV: An Efficient Approach to Build HPC Clouds. CCGrid, 2015.

HPCAC-Switzerland (April '17) | Slide 45 | Network Based Computing Laboratory

[Figure: SPEC MPI2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, and Graph500 execution time (ms) for problem sizes (22,20) through (26,16), comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native]

• 32 VMs, 6 Core/VM

• Compared to Native, 2-5% overhead for Graph500 with 128 Procs

• Compared to Native, 1-9.5% overhead for SPEC MPI2007 with 128 Procs

Application-Level Performance on Chameleon


HPCAC-Switzerland (April '17) | Slide 46 | Network Based Computing Laboratory

[Figure: Graph 500 execution time (ms) for problem sizes (22,16) through (26,20) and NAS class D (MG, FT, EP, LU, CG) execution time (s), comparing Container-Def, Container-Opt, and Native]

• 64 Containers across 16 nodes, pinning 4 Cores per Container

• Compared to Container-Def, up to 11% and 16% of execution time reduction for NAS and Graph 500

• Compared to Native, less than 9 % and 4% overhead for NAS and Graph 500

• Optimized Container support is available in the latest MVAPICH2-Virt

Containers Support: Application-Level Performance on Chameleon


HPCAC-Switzerland (April '17) | Slide 47 | Network Based Computing Laboratory

MVAPICH2 Point-to-Point and Collectives Performance with Singularity

[Figure: point-to-point latency (us) and bandwidth (MB/s) vs. message size for intra-node and inter-node communication, and Bcast / Allreduce latency vs. message size, comparing Singularity and Native]

Performance Very Close to Native

HPCAC-Switzerland (April '17) | Slide 48 | Network Based Computing Laboratory

High Performance VM Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters

J. Zhang, X. Lu, D. K. Panda. High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters. IPDPS, 2017 (support will be available in upcoming MVAPICH2-Virt)

• Migration with an SR-IOV device has to handle the challenges of detachment/re-attachment of the virtualized IB device and the IB connection

• Consists of an SR-IOV enabled IB cluster and an external migration controller

• Multiple parallel libraries to notify MPI applications during migration (detach/reattach SR-IOV/IVShmem, migrate VMs, migration status)

• Handles IB connection suspending and reactivating

• Proposes progress engine (PE) and migration thread based (MT) designs to optimize VM migration and MPI application performance

HPCAC-Switzerland (April '17) | Slide 49 | Network Based Computing Laboratory

• Scientific Computing

– Message Passing Interface (MPI), including MPI + OpenMP, is the dominant programming model

– Many discussions towards Partitioned Global Address Space (PGAS)

• UPC, OpenSHMEM, CAF, UPC++ etc.

– Hybrid Programming: MPI + PGAS (OpenSHMEM, UPC)

• Deep Learning

– Caffe, CNTK, TensorFlow, and many more

Major Computing Categories

HPCAC-Switzerland (April '17) | Slide 50 | Network Based Computing Laboratory

• Deep Learning frameworks are a different game altogether

– Unusually large message sizes (order of megabytes)

– Most communication based on GPU buffers

• How to address these newer requirements?

– GPU-specific communication libraries: NVIDIA's NCCL library provides inter-GPU communication

– CUDA-Aware MPI (MVAPICH2-GDR): provides support for GPU-based communication

• Can we exploit CUDA-Aware MPI and NCCL to support Deep Learning applications?

Deep Learning: New Challenges for MPI Runtimes

[Figure: Hierarchical Communication (Knomial + NCCL ring): inter-node communication uses a k-nomial tree across nodes, intra-node communication uses an NCCL ring across GPUs 1-4 through PLX switches and the CPU]

HPCAC-Switzerland (April '17) | Slide 51 | Network Based Computing Laboratory

• NCCL has some limitations

– Only works for a single node, thus no scale-out on multiple nodes

– Degradation across IOH (socket) for scale-up (within a node)

• We propose an optimized MPI_Bcast

– Communication of very large GPU buffers (order of megabytes)

– Scale-out on large numbers of dense multi-GPU nodes

• Hierarchical communication that efficiently exploits:

– CUDA-Aware MPI_Bcast in MV2-GDR

– the NCCL Broadcast primitive

Efficient Broadcast: MVAPICH2-GDR and NCCL

[Figure: performance benefits on OSU micro-benchmarks: broadcast latency (us, log scale) vs. message size (1 byte to 128 MB), MV2-GDR vs. MV2-GDR-Opt, up to 100x improvement; and on the Microsoft CNTK DL framework: training time (seconds) vs. number of GPUs (2-64), about 25% average improvement]

Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning, A. Awan, K. Hamidouche, A. Venkatesh, and D. K. Panda, The 23rd European MPI Users' Group Meeting (EuroMPI 16), Sep 2016 [Best Paper Runner-Up]
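A minimal sketch of the large-GPU-buffer broadcast that these designs optimize, assuming a CUDA-aware MPI such as MVAPICH2-GDR; the hierarchical k-nomial + NCCL path is internal to the library, and the buffer size and names here are illustrative.

    #include <mpi.h>
    #include <cuda_runtime.h>

    #define NELEMS (32 * 1024 * 1024)            /* 128 MB of floats */

    /* Broadcast of model parameters held in GPU memory, as used when
     * replicating weights across GPUs in data-parallel training. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        float *d_params;
        cudaMalloc((void **)&d_params, NELEMS * sizeof(float));
        if (rank == 0)
            cudaMemset(d_params, 0, NELEMS * sizeof(float));  /* root's weights */

        /* Device pointer passed directly; the CUDA-aware library chooses the
         * (possibly hierarchical) broadcast algorithm underneath. */
        MPI_Bcast(d_params, NELEMS, MPI_FLOAT, 0, MPI_COMM_WORLD);

        cudaFree(d_params);
        MPI_Finalize();
        return 0;
    }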

HPCAC-Switzerland (April '17) | Slide 52 | Network Based Computing Laboratory

Large Message Optimized Collectives for Deep Learning

[Figure: Reduce, Allreduce, and Bcast latency (ms) vs. message size (2-128 MB) at 64-192 GPUs, and vs. number of GPUs at fixed 64 MB / 128 MB message sizes]

• MV2-GDR provides optimized collectives for large message sizes

• Optimized Reduce, Allreduce, and Bcast

• Good scaling with large numbers of GPUs

• Available with MVAPICH2-GDR 2.2 GA

HPCAC-Switzerland (April '17) | Slide 53 | Network Based Computing Laboratory

• Caffe: a flexible and layered Deep Learning framework

• Benefits and Weaknesses

– Multi-GPU training within a single node

– Performance degradation for GPUs across different sockets

– Limited scale-out

• OSU-Caffe: MPI-based parallel training

– Enables scale-up (within a node) and scale-out (across multi-GPU nodes)

– Trains networks such as GoogLeNet on the ImageNet dataset

OSU-Caffe: Scalable Deep Learning

[Figure: GoogLeNet (ImageNet) training time (seconds) on up to 128 GPUs, comparing Caffe, OSU-Caffe (1024), and OSU-Caffe (2048); plain Caffe at 128 GPUs is an invalid use case]

OSU-Caffe is publicly available from: http://hidl.cse.ohio-state.edu

A. Awan, K. Hamidouche, J. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters, PPoPP 2017

HPCAC-Switzerland (April '17) | Slide 54 | Network Based Computing Laboratory

MVAPICH2 – Plans for Exascale

• Performance and Memory scalability toward 1M cores

• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF …)

• MPI + Task*

• Enhanced Optimization for GPU Support and Accelerators

• Enhanced communication schemes for upcoming architectures

• Knights Landing with MCDRAM*

• NVLINK*

• CAPI*

• Extended topology-aware collectives

• Extended Energy-aware designs and Virtualization Support

• Extended Support for MPI Tools Interface (as in MPI 3.1)

• Extended Checkpoint-Restart and migration support with SCR

• Support for * features will be available in future MVAPICH2 Releases

HPCAC-Switzerland (April '17) | Slide 55 | Network Based Computing Laboratory

Funding Acknowledgments

Funding Support by

Equipment Support by

HPCAC-Switzerland (April '17) | Slide 56 | Network Based Computing Laboratory

Personnel Acknowledgments

Current Students

– A. Awan (Ph.D.)

– R. Biswas (M.S.)

– M. Bayatpour (Ph.D.)

– S. Chakraborthy (Ph.D.)

Past Students

– A. Augustine (M.S.)

– P. Balaji (Ph.D.)

– S. Bhagvat (M.S.)

– A. Bhat (M.S.)

– D. Buntinas (Ph.D.)

– L. Chai (Ph.D.)

– B. Chandrasekharan (M.S.)

– N. Dandapanthula (M.S.)

– V. Dhanraj (M.S.)

– T. Gangadharappa (M.S.)

– K. Gopalakrishnan (M.S.)

– R. Rajachandrasekar (Ph.D.)

– G. Santhanaraman (Ph.D.)

– A. Singh (Ph.D.)

– J. Sridhar (M.S.)

– S. Sur (Ph.D.)

– H. Subramoni (Ph.D.)

– K. Vaidyanathan (Ph.D.)

– A. Venkatesh (Ph.D.)

– A. Vishnu (Ph.D.)

– J. Wu (Ph.D.)

– W. Yu (Ph.D.)

Past Research Scientist

– K. Hamidouche

– S. Sur

Past Post-Docs

– D. Banerjee

– X. Besseron

– H.-W. Jin

– W. Huang (Ph.D.)

– N. Islam (Ph.D.)

– W. Jiang (M.S.)

– J. Jose (Ph.D.)

– S. Kini (M.S.)

– M. Koop (Ph.D.)

– K. Kulkarni (M.S.)

– R. Kumar (M.S.)

– S. Krishnamoorthy (M.S.)

– K. Kandalla (Ph.D.)

– P. Lai (M.S.)

– J. Liu (Ph.D.)

– M. Luo (Ph.D.)

– A. Mamidala (Ph.D.)

– G. Marsh (M.S.)

– V. Meshram (M.S.)

– A. Moody (M.S.)

– S. Naravula (Ph.D.)

– R. Noronha (Ph.D.)

– X. Ouyang (Ph.D.)

– S. Pai (M.S.)

– S. Potluri (Ph.D.)

– M.-W. Rahman (Ph.D.)

– C.-H. Chu (Ph.D.)

– S. Guganani (Ph.D.)

– J. Hashmi (Ph.D.)

– H. Javed (Ph.D.)

– J. Lin

– M. Luo

– E. Mancini

Current Research Scientists

– X. Lu

– H. Subramoni

Past Programmers

– D. Bureddy

– M. Arnold

– J. Perkins

Current Research Specialist

– J. Smith

– M. Li (Ph.D.)

– D. Shankar (Ph.D.)

– H. Shi (Ph.D.)

– J. Zhang (Ph.D.)

– S. Marcarelli

– J. Vienne

– H. Wang

HPCAC-Switzerland (April '17) | Slide 57 | Network Based Computing Laboratory

International Workshop on Communication Architectures at Extreme Scale (ExaComm)

The ExaComm series has been held with the Int'l Supercomputing Conference (ISC) since 2015

ExaComm 2017 will be held in conjunction with ISC ’17

http://nowlab.cse.ohio-state.edu/exacomm/

Keynote Talk: Jeffrey S. Vetter, ORNL

Best Paper Award

Technical Paper Submission Deadline: Friday, April 14, 2017

HPCAC-Switzerland (April '17) | Slide 58 | Network Based Computing Laboratory

[email protected]

Thank You!

The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/

Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/

The MVAPICH2 Project: http://mvapich.cse.ohio-state.edu/