Coordinating Metadata Replication: Survival Strategy for Distributed Systems


Description

Hadoop Summit, April 2014, Amsterdam, Netherlands.

Just as the survival of living species depends on the transfer of essential knowledge within the community and between generations, the availability and reliability of a distributed computer system relies upon consistent replication of core metadata between its components. This presentation will highlight the implementation of a replication technique for the namespace of the Hadoop Distributed File System (HDFS). In HDFS, the namespace represented by the NameNode is decoupled from the data storage layer. While the data layer is conventionally replicated via block replication, the namespace remains a performance and availability bottleneck. Our replication technique relies on quorum-based consensus algorithms and provides an active-active model of high availability for HDFS, where metadata requests (reads and writes) can be load-balanced between multiple instances of the NameNode. This session will also cover how the same techniques are extended to provide replication of metadata and data between geographically distributed data centers, providing global disaster recovery and continuous availability. Finally, we will review how consistent replication can be applied to advance other systems in the Apache Hadoop stack; e.g., how in HBase coordinated updates of regions selectively replicated on multiple RegionServers improve availability and overall cluster throughput.

Transcript of Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Page 1: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Coordinating Metadata Replication

Survival Strategy for Distributed Systems

Konstantin V. Shvachko

April 3, 2014

Amsterdam, Netherlands

Page 2: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Introduction to Survival

Clusters of servers as communities of machines

- United by a common mission

Communication

- Act as a whole towards a single goal

Coordination is the key for survival

- Got a problem – propose a solution

- Make a decision – come to an agreement

- Execute the decision

As is typical for their species

- Computers propose Numbers, and

- Use Algorithms to reach an agreement

In the world of computers


Page 3: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Contents

Coordination Engine – the survival basics

Replicated Namespace for HDFS

Geographically-distributed HDFS

Replicated Regions for HBase

Potential Benefits – more survival tools


Page 4: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Coordination Engine: The Survival Basics


Page 5: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Coordination Engine

A Coordination Engine allows multiple proposers to agree on the order of events submitted to the engine

- Anybody can Propose

- Single Agreement every time

- Learners observe the same agreements in the same order

Sequencer: Determines the order in which a number of operations occur
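In code, the three roles might be expressed roughly as follows. This is a minimal Java sketch; all interface and method names are illustrative assumptions, not an actual engine API.

```java
// Minimal sketch of the Coordination Engine roles (illustrative names).

interface Proposal {
    byte[] payload();              // the event a proposer wants ordered
}

interface Agreement {
    long sequenceNumber();         // position in the global order of events
    Proposal proposal();           // the event everyone agreed on
}

interface Proposer {
    void propose(Proposal p);      // anybody can propose
}

interface Learner {
    // Invoked once per agreement, in the same order on every learner.
    void learn(Agreement a);
}
```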


Page 6: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Crash course in Coordination Engines: the Proposer submits a Proposal


Page 7: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Crash course in Coordination Engines: the Acceptor produces an Agreement


Page 8: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Crash course in Coordination Engines: the Learner takes action based on the Agreement


Page 9: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Single-Node Coordination Engine

Easy to Coordinate

- Single NameNode is an example of a simple Coordination Engine

- Performance and availability bottleneck

- Single point of failure

Simple but lacks reliability


Page 10: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Distributed Coordination Engine

Not so easy to Coordinate

A reliable approach is to use multiple nodes


Page 11: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Challenges of Distributed Coordination

Coordinating a single value (the battle start time)

- Design for failures

Two Generals’ Problem


Page 12: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Challenges of Distributed Coordination

Failures in distributed systems

- Any node can fail at any time

- Any failed node can recover at any time

- Messages can be lost, duplicated, reordered, or delayed arbitrarily long

Anything that can go wrong will go wrong: Murphy's law (the law of entropy)


Page 13: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Distributed Coordination Engine

Distributed Coordination Engine consists of nodes

- Node Roles: Proposer, Learner, and Acceptor

- Each node can combine multiple roles

Distributed coordination

- Multiple nodes submit events as proposals to a quorum of acceptors

- Acceptors agree on the order of each event in the global sequence of events

- Learners learn agreements in the same deterministic order

A reliable approach is to use multiple nodes


Page 14: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Consensus Algorithms

Coordination Engine guarantees the same state of the learners at a given GSN

- Each agreement is assigned a unique Global Sequence Number (GSN)

- GSNs form a monotonically increasing number series – the order of agreements

- Learners start from the same initial state

- Learners apply the same deterministic agreements in the same deterministic order

- GSN represents “logical” time in the coordinated system

PAXOS is a consensus algorithm proven to tolerate a variety of failures

- Quorum-based Consensus

- Deterministic State Machine

- Leslie Lamport: The Part-Time Parliament (1990)

Consensus is the process of agreeing on one result among a group of participants
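A minimal sketch of how a learner could apply agreements in GSN order. The names are illustrative, and it assumes the transport may hand agreements to the learner slightly out of order, so gaps are buffered until the missing GSN arrives.

```java
// Sketch of a deterministic replicated state machine driven by agreements.
// Every learner starts from the same initial state and applies the same
// deterministic updates in the same GSN order, so all replicas converge.

import java.util.SortedMap;
import java.util.TreeMap;

class ReplicatedStateMachine {
    private long lastAppliedGsn = 0;                 // "logical" time
    private final SortedMap<Long, Runnable> pending = new TreeMap<>();

    // Buffer agreements and apply them strictly in GSN order.
    synchronized void onAgreement(long gsn, Runnable deterministicUpdate) {
        pending.put(gsn, deterministicUpdate);
        while (pending.containsKey(lastAppliedGsn + 1)) {
            pending.remove(++lastAppliedGsn).run();  // apply the update
        }
    }
}
```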


Page 15: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

HDFS: State of the Art


Page 16: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

HDFS Architecture

HDFS metadata is decoupled from data

- Namespace is a hierarchy of files and directories represented by INodes

- INodes record attributes: permissions, quotas, timestamps, replication

NameNode keeps its entire state in RAM

- Memory state: the namespace tree and the mapping of blocks to DataNodes

- Persistent state: recent checkpoint of the namespace and journal log

File data is divided into blocks (default 128MB)

- Each block is independently replicated on multiple DataNodes (default 3)

- Block replicas stored on DataNodes as local files on local drives

Reliable distributed file system for storing very large data sets


Page 17: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

HDFS Cluster

Single active NameNode

Thousands of DataNodes

Tens of thousands of HDFS clients

Active-Standby Architecture


Page 18: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Standard HDFS operations

Active NameNode workflow

1. Receive request from a client,

2. Apply the update to its memory state,

3. Record the update as a journal transaction in persistent storage,

4. Return result to the client

HDFS Client (read or write to a file)

- Send request to the NameNode, receive replica locations

- Read or write data from or to DataNodes

DataNode

- Data transfer to / from clients and between DataNodes

- Report replica state change to NameNode(s): new, deleted, corrupt

- Report its state to NameNode(s): heartbeats, block reports
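For reference, the client-side view of this workflow uses the standard org.apache.hadoop.fs API: the NameNode is consulted only for metadata, while the data itself flows between the client and the DataNodes. The path below is just an example.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Write: the NameNode allocates blocks and returns DataNode
        // locations; the bytes stream directly to the DataNode pipeline.
        try (FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"))) {
            out.writeUTF("hello");
        }

        // Read: the NameNode returns replica locations; the client then
        // reads the data directly from the DataNodes.
        try (FSDataInputStream in = fs.open(new Path("/tmp/example.txt"))) {
            System.out.println(in.readUTF());
        }
    }
}
```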


Page 19: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Consensus Node: Coordinated Replication of the HDFS Namespace


Page 20: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Replicated Namespace

Replicated NameNode is called a ConsensusNode or CNode

ConsensusNodes play an equal, active role in the cluster

- Provide write and read access to the namespace

The namespace replicas are consistent with each other

- Each CNode maintains a copy of the same namespace

- Namespace updates applied to one CNode are propagated to the others

Coordination Engine establishes the global order of namespace updates

- All CNodes apply the same deterministic updates in the same deterministic order

- Starting from the same initial state and applying the same updates = consistency

Coordination Engine provides consistency of multiple namespace replicas


Page 21: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Coordinated HDFS Cluster

[Diagram: independent CNodes sharing the same namespace; client requests are load-balanced across them; namespace updates flow through the Coordination Engine as Proposals and Agreements]

Multiple active Consensus Nodes share namespace via Coordination Engine


Page 22: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Coordinated HDFS operations

ConsensusNode workflow

1. Receive request from a client

2. Submit a proposal for the update to the Coordination Engine and wait for the agreement

3. Apply the agreed update to its memory state,

4. Record the update as a journal transaction in persistent storage (optional)

5. Return result to the client

HDFS Client and DataNode operations remain the same

Updates to the namespace when a file or a directory is created are coordinated
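A hypothetical sketch of this coordinated write path, using mkdirs as the example operation. The engine interface and all names are illustrative, not the real ConsensusNode code.

```java
interface Agreement {
    long gsn();   // Global Sequence Number assigned by the engine
}

interface CoordinationEngine {
    // Submit a proposal and block until it comes back as an agreement.
    Agreement proposeAndAwait(Object proposal) throws InterruptedException;
}

class ConsensusNodeSketch {
    private final CoordinationEngine engine;
    private long lastAppliedGsn;   // namespace "logical" time

    ConsensusNodeSketch(CoordinationEngine engine) { this.engine = engine; }

    boolean mkdirs(String path) throws InterruptedException {
        // 1-2. Submit the update as a proposal and wait for the agreement.
        Agreement a = engine.proposeAndAwait("mkdirs:" + path);
        // 3. Apply the agreed update to the in-memory namespace. Every CNode
        //    performs this same step when it learns the same agreement.
        lastAppliedGsn = a.gsn();
        // ... update the in-memory directory tree here ...
        // 4. Journaling to persistent storage is optional: the engine itself
        //    makes agreements durable.
        // 5. Return the result to the client.
        return true;
    }
}
```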


Page 23: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Strict Consistency Model

Coordination Engine transforms namespace modification proposals into the global sequence of agreements

- Applied to namespace replicas in the order of their Global Sequence Number

ConsensusNodes may have different states at a given moment of “clock” time

- As the rate of consuming agreements may vary

CNodes have the same namespace state when they reach the same GSN

One-copy-equivalence

- Each replica is presented to the client as if the system stores only one copy

One-Copy-Equivalence as known in replicated databases


Page 24: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Consensus Node Proxy

CNodeProxyProvider – a pluggable substitute for FailoverProxyProvider

- Defined via Configuration

Main features

- Randomly chooses CNode when client is instantiated

- Sticky until a timeout occurs

- Fails over to another CNode

- Smart enough to avoid SafeMode

Further improvements

- Take into account network proximity

Reads do not modify the namespace and can be directed to any ConsensusNode


Page 25: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Stale Read Problem

1. Same client fails over to a CNode, which has an older namespace state (GSN)

- CNode1 at GSN 900: mkdir(p) –> ls(p) –> failover to CNode2

- CNode2 at GSN 890: ls(p) –> directory not found

2. One client modifies namespace, which needs to be seen by other clients

MapReduce use case:

- JobClient to CNode1: create job.xml

- MapTask to CNode2: read job.xml –> FileNotFoundException

Solutions:

1) Client connects to a CNode only if the CNode's GSN >= the client's last seen GSN

- May need to wait until CNode catches up

2) Must coordinate file read

- Special files only: configuration-defined regexp

- CNode coordinates read only once per file, then marks it as coordinated

- Coordinated read: submit proposal, wait for agreement

Only a few read requests must be coordinated
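The first remedy can be sketched as client-side GSN tracking: the client remembers the highest GSN it has observed and refuses to talk to a CNode that has not caught up to it. All names here are illustrative.

```java
class GsnAwareClient {
    private long lastSeenGsn = 0;

    // True if this CNode is safe to use for the next request.
    boolean acceptable(ConsensusNodeHandle node) {
        return node.currentGsn() >= lastSeenGsn;   // no stale reads
    }

    void onResponse(long responseGsn) {
        // Remember the most recent namespace state observed, so a
        // failover never goes "back in time".
        lastSeenGsn = Math.max(lastSeenGsn, responseGsn);
    }
}

interface ConsensusNodeHandle {
    long currentGsn();   // GSN this CNode has applied up to
}
```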


Page 26: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Block Management

Block Manager keeps information about file blocks and DataNodes

- Blocks Map, DataNode Manager, Replication Queues

- New block generation: <blockId, generationStamp, locations>

- Replication and deletion of replicas that are under- or over-replicated

Consistent block management problem

- Collision of BlockIDs if the same ID is generated by different CNodes

- Over-replicated block: if CNodes decide to replicate the same block simultaneously

- Missing block data: if CNodes decide to delete different replicas of the block

New block generation – all CNodes

- Assign nextID and nextGenerationStamp while processing the agreement

Replication and deletion of replicas – a designated CNode (Block Replicator)

- Block Replicator is elected using the Coordination Engine and re-elected if it fails

- Only Block Replicator sends DataNode commands to transfer or delete replicas
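A sketch of why assigning IDs while processing the agreement avoids collisions: since agreement processing runs in the same GSN order on every CNode, each replica computes the identical <blockId, generationStamp> pair for the same agreement. Names are illustrative.

```java
class CoordinatedBlockIdGenerator {
    private long nextBlockId = 1L;
    private long nextGenerationStamp = 1L;

    // Called only from the agreement-processing path, which executes in the
    // same deterministic GSN order on every CNode, so the counters advance
    // identically on all replicas.
    long[] allocateOnAgreement() {
        long id = nextBlockId++;
        long gs = nextGenerationStamp++;
        return new long[] { id, gs };   // identical on all CNodes
    }
}
```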


Page 27: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

GeoNode: Geographically Distributed HDFS


Page 28: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Scaling HDFS Across Data Centers

The system should appear, act, and be operated as a single cluster

- Instant and automatic replication of data and metadata

Parts of the cluster in different data centers should have equal roles

- Data could be ingested or accessed through any of the centers

Data creation and access should typically be at the LAN speed

- Running time of a job executed in one data center should be as if there were no other centers

Failure scenarios: the system should provide service and remain consistent

- Any GeoNode can fail

- GeoNodes can fail simultaneously on two or more data centers

- An entire data center can fail, e.g., due to WAN partitioning

- Tolerate the DataNode failures traditional for HDFS:

• Simultaneous failure of two DataNodes

• Failure of an entire rack

Continuous Availability, and Disaster Recovery over WAN


Page 29: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

WAN HDFS Cluster

Wide Area Network replication

Metadata – online

Data – offline

Multiple active GeoNodes share namespace via Coordination Engine over WAN


Page 30: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Architecture Principles

Foreign vs Native notations for nodes, block replicas, clients

- Foreign means in a different data center; Native means in the same data center

Metadata between GeoNodes is coordinated instantaneously via agreements

Data between data centers is replicated asynchronously in the background

- Replicas of a block can be stored on DataNodes of one or multiple data centers

- A GeoNode may learn about a file and its blocks before native replicas are created

DataNodes do not communicate with foreign GeoNodes

- Heartbeats, block reports, etc. sent to native GeoNodes only

- GeoNodes cannot send direct commands to foreign DataNodes

DataNodes copy blocks over the WAN to provide foreign block replication

- Copy a single replica over the WAN, then allow native replication of that replica

Clients optimized to access or create data on native DataNodes

- But can read data from another data center if needed


Page 31: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

File Create

Client: choose a native GeoNode, send request

GeoNode: propose, wait for agreement

- All GeoNodes execute the agreement

- addBlock: choose native locations only

Client: create pipeline, write data

DataNodes report replicas to native GeoNodes

Client can continue adding the next block or closing the file

- Clients do not wait until the over-the-WAN transfer is completed

Foreign block replication handled asynchronously

Create file entry, add blocks, write data, close file


Page 32: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Foreign Block Replication

[Diagram: five numbered steps showing a block created in one data center and then replicated to the other, ending with a Foreign Replica Report (FRR)]

Block is created in one data center and replicated to the other asynchronously

Page 33: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Foreign Replica Report

Foreign Replica Report is submitted in two cases

1. The count of native block replicas reaches full replication for the data center, or

2. The count of native block replicas is reduced to 0

If a block replica is lost, it is replicated natively first

- New locations are reported to other data centers after full replication is reached

If no native replicas are left, one should be transferred from another data center

Replication factor of a given block can vary on different data centers
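The two triggers reduce to a simple predicate: report to foreign data centers only when the native state is stable. A sketch with illustrative names:

```java
class ForeignReplicaReportPolicy {
    // Report when either full native replication has been reached, or all
    // native replicas are gone and a foreign transfer is needed.
    boolean shouldReport(int nativeReplicaCount, int nativeReplicationFactor) {
        return nativeReplicaCount >= nativeReplicationFactor
            || nativeReplicaCount == 0;
    }
}
```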


Page 34: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Foreign Block Management

Designated BlockReplicator per data center

- Elected among GeoNodes of its data center

- Handles deletion and replication of blocks among native DataNodes

- Schedules foreign replica transfers

The BlockMonitor analyzes foreign block replication in addition to native replicas

- Periodically scans blocks

- If a block with no foreign replicas is found, schedules WAN replication


Page 35: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Selective Data Replication

Requirements / use cases:

- Directory is replicated to and accessible from all data centers

- Directory is readable from all DCs but is writeable only at a given site

- Directory is replicated to some data centers (possibly a single one), but never to the others

Asymmetric Block Replication

- Allow different replications for files on different data centers

- Files are created with the data center's default replication

- setReplication() is applied to a single data center

- Special case of replication = 0: data is not replicated on that data center

Datacenter-specific attributes

- Default replication: per directory

- Permissions: selective visibility of files and directories
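A sketch of how such per-data-center replication attributes could be represented, with replication factor 0 meaning the data is never stored in that data center. All names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

class FileReplicationPolicy {
    // Replication factor per data center, e.g. {"dc1": 3, "dc2": 0}.
    private final Map<String, Integer> replicationByDc = new HashMap<>();

    // setReplication() applies to a single data center only.
    void setReplication(String dataCenter, int factor) {
        replicationByDc.put(dataCenter, factor);
    }

    // replication == 0: this file's data is not replicated in that DC.
    boolean replicatedOn(String dataCenter) {
        return replicationByDc.getOrDefault(dataCenter, 0) > 0;
    }
}
```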


Page 36: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

HBase Replicated Regions


Page 37: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Apache HBase

Table: big, sparse, loosely structured

- Collection of rows, sorted by row keys

- Rows can have an arbitrary number of columns

- Columns grouped into Column families

Table is split into Regions by row ranges

- Dynamic Table partitioning

- Region Servers serve regions to applications

Distributed Cache:

- Regions are loaded in nodes’ RAM

- Real-time access to data

A distributed key-value store for real-time access to semi-structured data

[Diagram: a Hadoop cluster where each worker node runs a DataNode, a NodeManager, and a RegionServer, coordinated by the NameNode, ResourceManager, and HBase Master]

Page 38: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

HBase Operations

Administrative functions

- Create, delete, list tables

- Create, update, delete columns, column families

- Split, compact, flush

Access table data

- Get – retrieve cells of a row

- Put – update a row

- Delete – delete cells/row

Mass operations

- Scanner – scan a column family

- Variety of Filters

HBase client connects to a server, which serves the region
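In client code, these operations look as follows with the HBase 1.x+ client API; the table, family, and column names are just examples.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseOpsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("mytable"))) {

            // Put – update a row
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"),
                          Bytes.toBytes("value"));
            table.put(put);

            // Get – retrieve cells of a row
            Result r = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println(r);

            // Scanner – scan a column family
            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("cf"));
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    System.out.println(row);
                }
            }

            // Delete – delete cells/row
            table.delete(new Delete(Bytes.toBytes("row1")));
        }
    }
}
```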


Page 39: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

HBase Challenge

Failure of a Region Server requires region failover

- Regions reassigned to other Region Servers

- Clients failover and reconnect to new servers

Regions in high demand

- Many client connections to one server introduce a bottleneck

Good idea to replicate regions on multiple Region Servers

Open Problem: consistent updates of region replicas

- Solution: Coordinated updates


Page 40: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Coordinated Put

Client sends Put request to one of the Region Servers hosting the region

The Region Server submits Put proposal to update the row

Coordination Engine decides the order of the update

Agreement delivered to all Region Servers hosting the region

- The row is deterministically updated on multiple Region Servers
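A hypothetical sketch of this coordinated Put path; the coordination engine and all names here are illustrative, not actual HBase code.

```java
interface RegionCoordinationEngine {
    // Submit the Put as a proposal; returns the agreement's GSN once the
    // engine has decided the update's position in the global order.
    long proposeAndAwait(byte[] row, byte[] value) throws InterruptedException;
}

class CoordinatedRegion {
    private final RegionCoordinationEngine engine;
    private long regionGsn;   // GSN of the last applied agreement

    CoordinatedRegion(RegionCoordinationEngine engine) { this.engine = engine; }

    // Runs on the Region Server that received the client's Put.
    void put(byte[] row, byte[] value) throws InterruptedException {
        long gsn = engine.proposeAndAwait(row, value);
        apply(gsn, row, value);
    }

    // Invoked on every Region Server hosting a replica of the region when
    // the agreement is learned, in the same GSN order everywhere.
    synchronized void apply(long gsn, byte[] row, byte[] value) {
        regionGsn = gsn;
        // ... deterministically update the row in the region's store ...
    }
}
```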


Page 41: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Future Benefits

Atomic update of multiple rows

- Proposal to Update <row1, row2, row3>

- Agreement is delivered to all regions containing any of the rows

- Consistent state of regions at the same GSN

Secondary indexes

- Agreement to update is delivered to both the primary and secondary tables

Distributed Transactions

- ACID: atomicity, consistency, isolation, durability

- Atomic multi-row and multi-table updates are transactions

Coordination Engine as a tool for the evolution of distributed systems


Page 42: Coordinating Metadata Replication: Survival Strategy for Distributed Systems

Konstantin V Shvachko

Thank You

Plamen Jeliazkov

Tao Luo

Keith Pak

Jagane Sundar

Henry Wang

Byron Wong

Dasha Boudnik

Sergey Melnykov

Virginia Wang

Yeturu Aahlad

Naeem Akhtar

Ben Horowitz

Michael Parkin

Mikhail Antonov

Konstantin Boudnik

Sergey Soldatov

Siva G

Durai Ezhilarasan

Gordon Hamilton

Trevor Lorimer

David McCauley

Guru Yeleswarapu

Robert Budas

Brett Rudenstein

Warren Harper