Transcript of Fault Tolerant Programming Abstractions and Failure Recovery Models for MPI Applications

Page 1

Fault Tolerant Programming Abstractions and Failure Recovery Models for MPI Applications

Ignacio Laguna Center for Applied Scientific Computing

Salishan Conference on High-speed Computing, Apr 27-30, 2015

LLNL-PRES-670002. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC

Page 2

< 2 >

We use MPI workloads to design future machines
MPI IS WIDELY USED, AND WILL CONTINUE TO BE…

75% of CORAL tier-1 benchmarks use MPI. CORAL is the recent DOE procurement to deliver next-generation (petaflops) supercomputers.

46,600 Hits are returned by Google Scholar for the term “message passing interface”

Many implementations are available C/C++, Java, Matlab, Python, R, …

MPI is widely cited

MPI+X will remain a common programming model

Page 3

< 3 >

MPI is the dominant “glue” for HPC applications
MOST NODE/PROCESS FAILURES SHOW UP IN MPI

[Diagram: MPI processes running on multiple nodes, all connected through MPI; a failure in any node or process surfaces in the MPI layer.]

Examples:
• Application error (bug)
• Hardware error (soft error)

Page 4

From the MPI standard:
• “... after an error is detected, the state of MPI is undefined”
• “MPI itself provides no mechanisms for handling processor failures.”

MPI doesn’t provide guarantees about failure detection and/or notification. By default, the resource manager kills the job.

< 4 >

Failures are not an option in MPI
MPI DOES NOT PROVIDE FAULT TOLERANCE
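By default, errors in most MPI implementations are fatal (MPI_ERRORS_ARE_FATAL on MPI_COMM_WORLD), which matches the behavior above: the job dies. Below is a minimal sketch of the standard error-handler mechanism that at least turns aborts into returned error codes; per the quotes above, the state of MPI after a returned error is still undefined, so this enables detection but not recovery:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Ask MPI to return error codes instead of aborting the job. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        int rc = MPI_Barrier(MPI_COMM_WORLD);
        if (rc != MPI_SUCCESS) {
            char msg[MPI_MAX_ERROR_STRING];
            int len = 0;
            MPI_Error_string(rc, msg, &len);
            fprintf(stderr, "MPI error: %s\n", msg);
            /* The state of MPI is now undefined; the only portable
               option is usually to abort (after saving what we can). */
            MPI_Abort(MPI_COMM_WORLD, rc);
        }

        MPI_Finalize();
        return 0;
    }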

Page 5

< 5 >

WHY INVEST IN FAULT TOLERANCE IN MPI?

1. MPI will continue to be used
2. Nice layer to detect failures
3. No resilience abstractions in the standard

1 + 2 + 3 = Solution?

Page 6

< 6 >

Roadmap of the talk
PUZZLE PIECES OF THE PROBLEM

1. Problem Description: Why is adding FT to MPI difficult? Challenges & areas of concern
2. Approaches: current solutions to the problem; proposals in the MPI Forum
3. Experimental Evaluation: modeling & simulation; early evaluation results
4. Lessons Learned: where do we go from here? Summary

Page 7

< 7 >

The devil is in the details…
FIXING A FAILED MPI RANK TRANSPARENTLY IS HARD

Ideal fault-tolerance strategy: transparently replace a failed process.

This is difficult to implement correctly and efficiently in MPI libraries.

Some implementation questions / considerations:
1. How to bring a new MPI process up-to-date?
2. How to handle in-transit messages and operations?
3. Where to re-inject control in the application?

Page 8

MPI programs don’t check for errors; fault detection that relies on error codes would be hard to use.

< 8 >

Reasoning about error propagation in a complex code is hard
MOST CODES ASSUME NO ERROR CHECKING

Ideal world:

    for (...)
        err = MPI_Isend();  if (err) recover();
    for (...)
        err = MPI_Irecv();  if (err) recover();
    err = MPI_Waitall();    if (err) recover();
    err = MPI_Barrier();    if (err) recover();

Real world:

    for (...)
        MPI_Isend();
    for (...)
        MPI_Irecv();
    MPI_Waitall();
    MPI_Barrier();

Most codes will recover from failures via checkpoint/restart
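A hedged sketch of how the “ideal world” pattern above could be made less intrusive with a checking macro; MPI_CHECK and the recover() hook are illustrative names, not part of the MPI standard or of any particular code discussed here:

    #include <mpi.h>
    #include <stdio.h>

    /* Illustrative only: wrap MPI calls so error codes are not silently
       dropped.  recover() is a hypothetical application hook; here it
       just reports and aborts. */
    #define MPI_CHECK(call)                        \
        do {                                       \
            int rc_ = (call);                      \
            if (rc_ != MPI_SUCCESS) recover(rc_);  \
        } while (0)

    static void recover(int rc)
    {
        fprintf(stderr, "MPI call failed (code %d)\n", rc);
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    int main(int argc, char **argv)
    {
        MPI_CHECK(MPI_Init(&argc, &argv));
        MPI_CHECK(MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN));
        MPI_CHECK(MPI_Barrier(MPI_COMM_WORLD));
        MPI_CHECK(MPI_Finalize());
        return 0;
    }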

Page 9

What failures should the MPI standard consider?
• Node / process failures?
• Communication errors?
• Silent errors?

Should the application continue executing after a failure? How?
• Forward vs. backward recovery

Fault-tolerant APIs that don’t require many code changes: should fault tolerance be provided as a library?

< 9 >

OPEN CHALLENGES AND QUESTIONS

Page 10

< 10 >

Roadmap of the talk
PUZZLE PIECES OF THE PROBLEM

1. Problem Description: Why is adding FT to MPI difficult? Challenges & areas of concern
2. Approaches: current solutions to the problem; proposals in the MPI Forum
3. Experimental Evaluation: modeling & simulation; early evaluation results
4. Lessons Learned: where do we go from here? Summary

Page 11

< 11 >

Resilient programming abstractions for MPI
POSSIBLE SOLUTIONS TO THE PROBLEM

1. ULFM: user-level failure mitigation (local, shrinking recovery strategy)
2. Reinit interface (global, non-shrinking recovery strategy)
3. Fault tolerant libraries, e.g., Local Failure Local Recovery (LFLR)
4. ?

Page 12

Shrinking recovery: the available resources after a failure are shrunk or reduced.

Focus on process failures:
• Communication that involves a failed process would fail

Communicators can be revoked:
• Enables fault propagation

Communicators can be shrunk:
• Code must create new communicators with fewer processes

< 12 >

Current proposal for MPI 4.0
ULFM: USER LEVEL FAILURE MITIGATION

New error codes: MPI_ERR_PROC_FAILED
New MPI calls: MPI_COMM_REVOKE, MPI_COMM_SHRINK, MPI_COMM_AGREE, MPI_COMM_FAILURE_ACK

Shrinking recovery strategy
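A minimal sketch of the shrinking-recovery path these calls enable. The slide lists the proposed names with an MPI_ prefix; the Open MPI research prototype exposes them as MPIX_Comm_revoke / MPIX_Comm_shrink (declared in mpi-ext.h) and reports failures with the MPIX_ERR_PROC_FAILED error class, which is what this sketch assumes:

    #include <mpi.h>
    #include <mpi-ext.h>   /* ULFM extensions (MPIX_*) in the Open MPI prototype */

    /* Sketch of a shrinking-recovery step, called after an operation on
       'comm' reports a process failure.  Repairing the application's own
       state (re-decomposition, load balancing) is the hard part and is
       not shown. */
    static MPI_Comm shrink_after_failure(MPI_Comm comm)
    {
        MPI_Comm newcomm;

        MPIX_Comm_revoke(comm);            /* propagate the failure to all ranks */
        MPIX_Comm_shrink(comm, &newcomm);  /* exclude the failed process(es)     */

        return newcomm;  /* the revoked communicator should be freed by the caller */
    }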

Page 13

Works well for master-slave codes:
• Only a few processes need to know of a failure

Difficult to use in bulk-synchronous codes:
• All processes need to know of failures (global recovery)
• Codes must roll back to a previous checkpoint

Most codes cannot handle shrinking recovery:
• Cannot re-decompose the problem onto fewer processes
• Requires load balancing

< 13 >

PROS AND CONS OF ULFM

[Diagram: in a bulk-synchronous code, everyone must roll back after a failure; in a master-slave code, only some processes may roll back.]

Page 14

With ULFM, faults are “eventually” delivered to the application

Global recovery avoids this issue—all processes roll back to a known safe state

< 14 >

DELAYED DETECTION IS DIFFICULT TO USE FOR ALGORITHMS THAT USE NON-BLOCKING OPERATIONS

Data exchange pattern:

    for (i = 0; i < nsends; ++i) {
        /* computation */
        MPI_Isend(...);
    }
    for (i = 0; i < nrecvs; ++i) {
        /* computation */
        MPI_Irecv(...);
    }
    MPI_Waitall(...);
    /* computation */
    MPI_Barrier(...);

Failure? A failure could occur at any of these operations.

Where in the loop do we re-inject control?

Delayed detection?
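To make the re-injection question concrete, here is a hedged sketch of the exchange above with error checking added (function and variable names are illustrative), assuming MPI_ERRORS_RETURN is set on the communicator. With non-blocking operations the failure typically surfaces only at MPI_Waitall, as MPI_ERR_IN_STATUS with per-request codes in the status array, long after the posting loops have finished:

    #include <mpi.h>

    static int exchange(MPI_Comm comm, const int *peers, int npeers,
                        double *sendbuf, double *recvbuf, int count)
    {
        MPI_Request reqs[2 * npeers];   /* C99 VLAs, for brevity */
        MPI_Status  stats[2 * npeers];

        for (int i = 0; i < npeers; ++i)
            MPI_Irecv(recvbuf + i * count, count, MPI_DOUBLE,
                      peers[i], 0, comm, &reqs[i]);
        for (int i = 0; i < npeers; ++i)
            MPI_Isend(sendbuf + i * count, count, MPI_DOUBLE,
                      peers[i], 0, comm, &reqs[npeers + i]);

        int rc = MPI_Waitall(2 * npeers, reqs, stats);
        if (rc == MPI_ERR_IN_STATUS) {
            /* The failure becomes visible only here, after both posting
               loops are done: which iteration do we re-inject control into? */
            for (int i = 0; i < 2 * npeers; ++i)
                if (stats[i].MPI_ERROR != MPI_SUCCESS)
                    return stats[i].MPI_ERROR;
        }
        return rc;
    }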

Page 15

< 15 >

Global non-shrinking recovery strategy

REINIT INTERFACE

    MPI_Init();

    MPI_Reinit();
    MPI_Error_handlers();

    for (...)
        MPI_Isend();
    for (...)
        MPI_Irecv();

    MPI_Waitall();
    MPI_Barrier();

    MPI_Finalize();

The MPI library performs:
• Failure detection
• Failure notification

The code specifies cleanup functions; this emulates exception handling.

[Diagram: stack of error handlers (error handler 1, error handler 2, error handler 3).]

Disadvantages:
• Difficult to clean up state of multithreaded code (OpenMP)
• Won’t work if the application’s initialization takes too much time

Advantages:
• Job is not killed
• Faster checkpoint/restart
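A rough sketch of how a bulk-synchronous code might be structured under a Reinit-style interface. The signature of MPI_Reinit below is an assumption for illustration only (the actual proposal defines the entry point and the registration of cleanup/error handlers differently in detail), and it is stubbed out here so the sketch compiles without a fault-tolerant MPI library:

    #include <mpi.h>

    /* HYPOTHETICAL Reinit-style interface, for illustration only. */
    typedef int (*resilient_main_t)(void);

    static int MPI_Reinit(resilient_main_t entry)
    {
        /* A real implementation would detect failures, repair MPI state,
           and re-enter 'entry' on every recovered failure; this stub
           simply calls it once. */
        return entry();
    }

    static void load_checkpoint(void) { /* restore application state */ }

    static int resilient_main(void)
    {
        /* After a failure, surviving and replacement ranks re-enter here
           with a "fresh" MPI state and roll back to the last checkpoint. */
        load_checkpoint();
        for (int step = 0; step < 100; ++step) {
            MPI_Barrier(MPI_COMM_WORLD);   /* stands in for the data exchange */
            /* compute; periodically write a checkpoint */
        }
        return 0;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rc = MPI_Reinit(resilient_main);   /* hypothetical call */
        MPI_Finalize();
        return rc;
    }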

Page 16

Approach: use ULFM’s functionality to provide fault tolerance as a library

Example: Local Failure Local Recovery (LFLR)

< 16 >

FAULT TOLERANT LIBRARIES

[Diagram: ranks 0 through N run the application; a spare rank N+1 stands by, joins after a fault while the other ranks wait, and then all ranks continue running.]

Reference: Keita Teranishi and Michael A. Heroux. Toward Local Failure Local Recovery Resilience Model using MPI-ULFM, EuroMPI/ASIA '14.

Disadvantages:
• Applications cannot use other tools / libraries
• Inherits any performance issues and/or bottlenecks from ULFM

Advantages:
• Handles fault tolerance transparently
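LFLR’s actual interface is described in the reference above; the following is only an illustrative sketch of the spare-rank idea from the diagram, written with plain MPI calls: the job is launched with a few extra ranks that stay outside the working communicator until a recovery step would pull one of them in.

    #include <mpi.h>

    #define NSPARES 2   /* illustrative: number of stand-by ranks */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Comm work;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Last NSPARES ranks act as spares; the rest form the working
           communicator that the application actually uses. */
        int is_spare = (rank >= size - NSPARES);
        MPI_Comm_split(MPI_COMM_WORLD, is_spare ? 1 : 0, rank, &work);

        if (is_spare) {
            /* Stand by: a real library would block here until a failure,
               then take over the failed rank's role and data. */
        } else {
            /* Active ranks run the application on 'work'. */
            MPI_Barrier(work);
        }

        MPI_Comm_free(&work);
        MPI_Finalize();
        return 0;
    }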

Page 17

< 17 >

Resilient programming abstractions for MPI
POSSIBLE SOLUTIONS TO THE PROBLEM

1. ULFM: user-level failure mitigation (local, shrinking recovery strategy)
2. Reinit interface (global, non-shrinking recovery strategy)
3. Fault tolerant libraries, e.g., Local Failure Local Recovery (LFLR)
4. Don’t integrate fault tolerance into MPI: rely on checkpoint/restart

Page 18

< 18 >

Roadmap of the talk
PUZZLE PIECES OF THE PROBLEM

1. Problem Description: Why is adding FT to MPI difficult? Challenges & areas of concern
2. Approaches: current solutions to the problem; proposals in the MPI Forum
3. Experimental Evaluation: modeling & simulation; early evaluation results
4. Lessons Learned: where do we go from here? Summary

Page 19

Scalable molecular dynamics application:
• Not a proxy / mini / benchmark code
• Problem can be decomposed onto any number of processes
• Includes load balancing
• Uses a few communicators

This simplifies implementing shrinking recovery: we have to shrink only one communicator (with MPI_COMM_SHRINK).

< 19 >

TESTBED APPLICATION: ddcMD

Page 20

Open MPI 1.7, Sierra cluster at LLNL (InfiniBand)

< 20 >

ELIMINATING A PROCESS FROM A COMMUNICATOR TAKES TOO MUCH TIME

[Figure: Time to shrink MPI_COMM_WORLD when a process fails. X-axis: MPI processes (0 to 300); Y-axis: time in seconds (0 to 12).]

Page 21

Shrinking recovery only works when:
• The application can balance loads quickly after failures
• The system experiences high failure rates
• The application can re-decompose the problem onto fewer processes/nodes

Most codes/systems don’t have these capabilities

< 21 >

Most codes will use non-shrinking recovery at large scale
SHRINKING RECOVERY IS ONLY USEFUL IN SOME CASES

[Figure: Penalty factor (log scale, 0.1 to 10) vs. mean time between failures (0 to 40 hours), comparing non-shrinking recovery and shrinking recovery.]

Page 22

Prototype Reinit in Open MPI

Tests on Cray XC30 system (BTL network)

Applications:
• Lattice Boltzmann transport code (LBMv3)
• Molecular dynamics code (ddcMD)

< 22 >

Recovery time is reduced compared to traditional job restarts
REINIT PERFORMANCE MEASUREMENTS ARE PROMISING

Time to recover from a failure using Reinit versus a standard job restart

Insight: with Reinit, we believe that data from recent checkpoints is likely cached in the filesystem buffers, since the job is not killed.

0"5"

10"15"20"25"30"35"40"45"

64 128 200

Tim

e (s

ec)

MPI processes

Job restart

Using Reinit

Insight

Page 23

< 23 >

Roadmap of the talk
PUZZLE PIECES OF THE PROBLEM

1. Problem Description: Why is adding FT to MPI difficult? Challenges & areas of concern
2. Approaches: current solutions to the problem; proposals in the MPI Forum
3. Experimental Evaluation: modeling & simulation; early evaluation results
4. Lessons Learned: where do we go from here? Summary

Page 24

#  The MPI community should carefully evaluate the pros and cons of current fault-tolerance proposals

#  It is important to consider a broad range of applications

#  Pay special attention to legacy scalable codes (e.g., BSP)

#  Viewing the problem only from the system perspective doesn’t work

#  We must design interfaces after consulting with several users

< 24 >

SOME LESSONS LEARNED

Page 25

< 25 >

How do we solve this problem?
FUTURE DIRECTIONS

1. Evaluate multiple resilient programming abstractions (other than ULFM and Reinit)
2. Test models on a broad range of applications
3. Evaluate not only performance, but also programmability

…and only then do we propose modifications to the MPI standard.

Page 26

< 26 >

Smart people who contribute to this effort
ACKNOWLEDGMENTS

Martin Schulz, LLNL

Todd Gamblin, LLNL

Kathryn Mohror, LLNL

David Richards, LLNL

Adam Moody, LLNL

Howard Pritchard, LANL

Bronis R. de Supinski, LLNL

Page 27

Thank you!

< 27 >

Page 28

< 28 >

It is hard to use ULFM in bulk-synchronous codes
ULFM IS SUITABLE ONLY FOR A SUBSET OF APPLICATIONS

[Diagram: recovery-model dimensions (shrinking vs. non-shrinking, local vs. global, backward vs. forward recovery) mapped against bulk-synchronous and master-slave applications. Cells marked ULFM indicate models that are suitable for ULFM (easy to implement with few changes in the application); cells marked APP indicate models the application can “naturally” support.]

Reference: Ignacio Laguna, David F. Richards, Todd Gamblin, Martin Schulz, Bronis R. de Supinski, “Evaluating User-Level Fault Tolerance for MPI Applications”, EuroMPI/ASIA, Kyoto, Japan, Sep 9-12, 2014.

Page 29

< 29 >

In contrast, the focus of ULFM is forward recovery
REINIT SUPPORTS BACKWARD RECOVERY

Backward recovery: attempts to restart the application from a previously saved state.

Forward recovery: attempts to find a new state from which the application can continue.

ULFM:
• Fix communicators and continue
• Attempt to “fix” MPI state

Reinit interface:
• Restart from a checkpoint
• Get “fresh” MPI state