Page 1:

ARIES, Concluded and Distributed Databases: R*

Zachary G. Ives, University of Pennsylvania

CIS 650 – Implementing Data Management Systems

October 2, 2008

Some content on 2-phase commit courtesy of Ramakrishnan & Gehrke

Page 2:

Administrivia

Next reading assignment: Principles of Data Integration, Chapter 3.3

No review required for this

Review Mariposa & R* for Tuesday

Page 3:

ARIES: Basic Data Structures

[Figure: the ARIES data structures]
In RAM: the Xact table (per-Xact lastLSN and status), the dirty page table (per-page recLSN), and the flushedLSN.
In the DB on disk: data pages, each with a pageLSN.
The LOG: LogRecords with fields prevLSN, XID, type, length, pageID, offset, before-image, and after-image, plus the master record.
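As a rough illustration (mine, not the slides'), these structures can be sketched in Python; the field names follow the figure, while everything else is hypothetical:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LogRecord:
        lsn: int
        prev_lsn: Optional[int]               # backward chain per transaction
        xid: int                              # transaction ID
        type: str                             # 'update', 'commit', 'abort', 'CLR', 'end', ...
        page_id: Optional[int] = None
        offset: Optional[int] = None
        length: Optional[int] = None
        before_image: Optional[bytes] = None  # used by UNDO
        after_image: Optional[bytes] = None   # used by REDO
        undo_next_lsn: Optional[int] = None   # CLRs only

    @dataclass
    class XactEntry:
        last_lsn: int                         # most recent log record of this Xact
        status: str                           # e.g., 'running', 'committing'

    # In-RAM state
    xact_table: dict[int, XactEntry] = {}
    dirty_page_table: dict[int, int] = {}     # pageID -> recLSN (LSN that first dirtied it)
    flushed_lsn: int = 0                      # all log records up to here are on disk

The later recovery sketches in these notes build on these definitions.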

Page 4:

Normal Execution of a Transaction

A transaction is a series of reads & writes, followed by a commit or abort
We will assume that a write is atomic on disk; in practice, additional details are needed to deal with non-atomic writes
Strict 2PL; STEAL, NO-FORCE buffer management, with Write-Ahead Logging

Page 5:

Checkpointing

Periodically, the DBMS creates a checkpoint to minimize recovery time in the event of a system crash. Write to log:

begin_checkpoint record: when the checkpoint began
end_checkpoint record: the current Xact table and dirty page table

This is a "fuzzy checkpoint":

Other Xacts continue to run, so these tables are accurate only as of the time of the begin_checkpoint record
No attempt is made to force dirty pages to disk; the effectiveness of a checkpoint is limited by the oldest unwritten change to a dirty page. (So it's a good idea to periodically flush dirty pages to disk!)

Store LSN of checkpoint record in a safe place (master record)
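A minimal sketch of a fuzzy checkpoint under the assumptions above; next_lsn, write_log, and store_master_record are hypothetical helpers:

    import copy

    def checkpoint() -> None:
        begin_lsn = next_lsn()
        write_log(LogRecord(begin_lsn, None, 0, 'begin_checkpoint'))
        # Fuzzy: other Xacts keep running, so these snapshots are accurate
        # only as of the begin_checkpoint record; they ride along in the
        # end_checkpoint record's payload (not modeled in LogRecord above)
        tables = (copy.deepcopy(xact_table), copy.deepcopy(dirty_page_table))
        write_log(LogRecord(next_lsn(), begin_lsn, 0, 'end_checkpoint'))
        store_master_record(begin_lsn)   # keep the checkpoint LSN in a safe place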

Page 6:

Simple Transaction Abort, 1/2

For now, consider an explicit abort of a Xact (no crash involved). We want to "play back" the log in reverse order, UNDOing updates:

Get the lastLSN of the Xact from the Xact table
Follow the chain of log records backward via the prevLSN field
When do we quit?

Before starting UNDO, write an Abort log record (for recovering from a crash during UNDO!)

Page 7:

Abort, 2/2

To perform UNDO, we must have a lock on the data! No problem: no one else can be locking it.

Before restoring the old value of a page, write a CLR: you continue logging while you UNDO!! The CLR has one extra field, undoNextLSN:

Points to the next LSN to undo (i.e., the prevLSN of the record we're currently undoing)

CLRs are never Undone (but they might be Redone when repeating history: this guarantees Atomicity!)

At the end of UNDO, write an "end" log record
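Putting the two abort slides together, here is a rough sketch (not the slides' code) using the structures defined earlier; write_log, read_log, restore, and next_lsn are hypothetical helpers, and the prevLSN bookkeeping for newly written records is simplified:

    def abort(xid: int) -> None:
        # Write an Abort record first, for recovering from a crash during UNDO
        write_log(LogRecord(next_lsn(), xact_table[xid].last_lsn, xid, 'abort'))
        lsn = xact_table[xid].last_lsn    # start at lastLSN, walk the prevLSN chain
        while lsn is not None:            # quit at the Xact's first log record
            rec = read_log(lsn)
            if rec.type == 'update':
                # Log the compensation BEFORE restoring the before-image
                write_log(LogRecord(next_lsn(), lsn, xid, 'CLR',
                                    page_id=rec.page_id, offset=rec.offset,
                                    after_image=rec.before_image,
                                    undo_next_lsn=rec.prev_lsn))
                restore(rec.page_id, rec.offset, rec.before_image)
            lsn = rec.prev_lsn
        write_log(LogRecord(next_lsn(), None, xid, 'end'))   # end of UNDO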

Page 8:

Transaction Commit

Write a commit record to the log
All log records up to the Xact's lastLSN are flushed; this guarantees that flushedLSN ≥ lastLSN
Note that log flushes are sequential, synchronous writes to disk, with many log records per log page
Commit() returns
Write an end record to the log
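A minimal sketch of the commit rule (flush_log is a hypothetical helper that performs the sequential, synchronous write):

    def commit(xid: int) -> None:
        global flushed_lsn
        rec = LogRecord(next_lsn(), xact_table[xid].last_lsn, xid, 'commit')
        write_log(rec)
        flush_log(up_to=rec.lsn)      # force the log: the WAL commit rule
        flushed_lsn = rec.lsn         # now flushedLSN >= the Xact's lastLSN
        # Commit() returns to the caller here; the end record may follow lazily
        write_log(LogRecord(next_lsn(), rec.lsn, xid, 'end'))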

Page 9:

Crash Recovery: Big Picture

Start from a checkpoint (found via the master record).

Three phases:
1. Figure out which Xacts committed since the checkpoint, and which failed (Analysis)
2. REDO all actions (repeat history)
3. UNDO the effects of failed Xacts

[Figure: a log timeline ending at the CRASH, marking the last checkpoint, the smallest recLSN in the dirty page table after Analysis, and the oldest log record of a Xact active at the crash; arrows labeled A, R, and U show the spans covered by Analysis, REDO, and UNDO]

Page 10:

Recovery: The Analysis Phase

Reconstruct the state as of the checkpoint via the end_checkpoint record, then scan the log forward from the checkpoint:

End record: remove the Xact from the Xact table (it is no longer active)
Other records: add the Xact to the Xact table, set lastLSN=LSN, and change the Xact's status on commit
Update record: if page P is not in the Dirty Page Table, add P to the D.P.T. and set its recLSN=LSN
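A sketch of this forward scan (scan_log_forward and load_end_checkpoint_tables are hypothetical helpers; I also treat CLRs like updates for the dirty-page rule, which the slide's wording elides):

    def analysis(checkpoint_lsn: int) -> None:
        load_end_checkpoint_tables(checkpoint_lsn)   # state as of the checkpoint
        for rec in scan_log_forward(checkpoint_lsn):
            if rec.type == 'end':
                xact_table.pop(rec.xid, None)        # Xact no longer active
                continue
            entry = xact_table.setdefault(rec.xid, XactEntry(rec.lsn, 'running'))
            entry.last_lsn = rec.lsn
            if rec.type == 'commit':
                entry.status = 'committing'
            if rec.type in ('update', 'CLR') and rec.page_id not in dirty_page_table:
                dirty_page_table[rec.page_id] = rec.lsn   # recLSN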

Page 11:

Recovery: The REDO Phase

Repeat history to reconstruct the state at the crash: reapply all updates (even those of aborted Xacts!) and redo CLRs. This puts us in a state where we know UNDO can do the right thing.

Scan forward from the log record containing the smallest recLSN in the D.P.T. For each CLR or update log record LSN, REDO the action unless:

The affected page is not in the Dirty Page Table, or
The affected page is in the D.P.T. but has recLSN > LSN, or
pageLSN (in the DB) ≥ LSN

To REDO an action: reapply the logged action and set the pageLSN to LSN. Don't log this!
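The three skip conditions, as a sketch (fetch_page and apply are hypothetical; fetch_page is assumed to return an object with a page_lsn field):

    def redo() -> None:
        start = min(dirty_page_table.values())       # smallest recLSN in the D.P.T.
        for rec in scan_log_forward(start):
            if rec.type not in ('update', 'CLR'):
                continue
            rec_lsn = dirty_page_table.get(rec.page_id)
            if rec_lsn is None or rec_lsn > rec.lsn:
                continue                             # change can't be missing from disk
            page = fetch_page(rec.page_id)
            if page.page_lsn >= rec.lsn:
                continue                             # change is already on the page
            apply(page, rec.offset, rec.after_image) # repeat history
            page.page_lsn = rec.lsn                  # don't log this!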

Page 12:

Recovery: The UNDO Phase

ToUndo = { l | l is the lastLSN of a "loser" Xact }
Repeat until ToUndo is empty:

Choose the largest LSN in ToUndo
If this LSN is a CLR and undoNextLSN == NULL: write an End record for this Xact
If this LSN is a CLR and undoNextLSN != NULL: add undoNextLSN to ToUndo
Else this LSN is an update: undo the update, write a CLR, and add prevLSN to ToUndo
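The same loop as a sketch, reusing the helpers from the abort example (with the same simplified bookkeeping); losers maps each loser XID to its lastLSN, as determined by Analysis:

    def undo(losers: dict[int, int]) -> None:
        to_undo = set(losers.values())
        while to_undo:
            lsn = max(to_undo)                 # always the largest remaining LSN
            to_undo.remove(lsn)
            rec = read_log(lsn)
            if rec.type == 'CLR' and rec.undo_next_lsn is None:
                write_log(LogRecord(next_lsn(), None, rec.xid, 'end'))
            elif rec.type == 'CLR':
                to_undo.add(rec.undo_next_lsn)
            elif rec.type == 'update':
                write_log(LogRecord(next_lsn(), lsn, rec.xid, 'CLR',
                                    page_id=rec.page_id, offset=rec.offset,
                                    after_image=rec.before_image,
                                    undo_next_lsn=rec.prev_lsn))
                restore(rec.page_id, rec.offset, rec.before_image)
                if rec.prev_lsn is not None:
                    to_undo.add(rec.prev_lsn)
                else:
                    write_log(LogRecord(next_lsn(), None, rec.xid, 'end'))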

Page 13:

Example of Recovery

LSN  LOG
00   begin_checkpoint
05   end_checkpoint
10   update: T1 writes P5
20   update: T2 writes P3
30   T1 abort
40   CLR: Undo T1 LSN 10
45   T1 End
50   update: T3 writes P1
60   update: T2 writes P5
     CRASH, RESTART

[Worksheet: Xact table (lastLSN, status), dirty page table (recLSN), flushedLSN, ToUndo, and prevLSN chains, to be filled in while working the example]

Page 14:

Example: Crash During Restart

LSN    LOG
00,05  begin_checkpoint, end_checkpoint
10     update: T1 writes P5
20     update: T2 writes P3
30     T1 abort
40,45  CLR: Undo T1 LSN 10, T1 End
50     update: T3 writes P1
60     update: T2 writes P5
       CRASH, RESTART
70     CLR: Undo T2 LSN 60
80,85  CLR: Undo T3 LSN 50, T3 end
       CRASH, RESTART
90     CLR: Undo T2 LSN 20, T2 end

[Worksheet: Xact table (lastLSN, status), dirty page table (recLSN), flushedLSN, ToUndo, and undoNextLSN chains]

Page 15:

Additional Crash Issues

What happens if the system crashes during Analysis?
How do you limit the amount of work in REDO? Flush dirty pages asynchronously in the background, and watch "hot spots"!
How do you limit the amount of work in UNDO? Avoid long-running Xacts

Page 16:

Summary of Logging/Recovery

The Recovery Manager guarantees Atomicity & Durability
WAL allows STEAL/NO-FORCE buffer management without sacrificing correctness
LSNs identify log records, which are linked into backward chains per transaction (via prevLSN)
pageLSN allows comparison of data pages and log records

Page 17:

Summary, Continued

Checkpointing: a quick way to limit the amount of log to scan on recovery
Recovery works in 3 phases:
Analysis: forward from the checkpoint
Redo: forward from the oldest recLSN
Undo: backward from the end to the first LSN of the oldest Xact alive at the crash
Upon Undo, write CLRs
Redo "repeats history": this simplifies the logic!

Page 18:

Distributed Databases

Goal: provide the abstraction of a single database, but with the data distributed across different sites. This poses a new set of challenges:

Source tables (or subsets of tables) may be located on different machines
There is a data transfer cost over the network
Different CPUs have different amounts of resources
Available resources change during optimization-time and run-time

Today:

R*: the first "real" distributed DBMS prototype (Distributed INGRES never actually ran); its focus was a LAN with 10-12 sites
Mariposa: an attempt to distribute across the wide area

Page 19:

Issues in Distributed Databases

How do we place the data?
R*: this is done by humans
Mariposa: this is done via an economic model

What new capabilities do we have?
R*: SHIP, 2-phase dependent join, bloomjoin, ...
Mariposa: ship processing to another node

Challenges in optimization
R*: a more complex cost model, more execution options
Mariposa: bidding on computation and other resources

Page 20:

System-R* Optimization

Focus: distributed joins

1. Can ship a table and then join it ("ship whole")
2. Can probe the inner table and return matches ("fetch matches")

Their measurements favored #1; why?

Why do they require:
Cardinality of the outer < ½ the number of messages required to ship the inner
Join cardinality < inner cardinality

How can the 2nd case be improved, in the spirit of block NLJ?

Page 21:

Parallelism in Joins

They assert it's better to ship from the outer relation to the site of the inner relation because of the potential for parallelism. Where?
What changes if the inner relation is horizontally partitioned across multiple sites (i.e., relations are "striped")?
They can also exploit the possibility of parallelism in sorting for a merge join. How?

Page 22:

Other Joins

Ship the inner relation and then index it

Two-phase semijoin
Very similar to "fetch matches"
Take S and T, sort them, and remove duplicates
Ship each to the opposite site, and use it to fetch the tuples that match
Merge-join the matching tuples

Bloomjoin
Generate a Bloom filter from S
Send it to the site of T, and find matches
Return them to S's site, and join

Page 23:

Bloom Filters (Bloom 1970)

Use k hash functions and an m-bit vector
For each tuple, hash its key k with each function h_i, and set bit h_i(k) in the bit vector
Probe the Bloom filter for k′ by testing whether all of the bits h_i(k′) are set
After n values have been inserted, the probability that any given bit is still 0 is (1 − 1/m)^(kn), so the probability of a false positive on a probe is approximately (1 − (1 − 1/m)^(kn))^k

[Figure: an m-bit vector in which bits h1(k), h2(k), h3(k), and h4(k) have been set to 1]
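A small, self-contained Python sketch of a Bloom filter and of how a bloomjoin would use it (the salted-hash construction and all names here are illustrative, not R*'s implementation):

    import hashlib

    class BloomFilter:
        def __init__(self, m: int, k: int):
            self.m, self.k = m, k
            self.bits = bytearray((m + 7) // 8)

        def _positions(self, key: str):
            # Derive k hash functions h_i by salting a single base hash
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(h[:8], 'big') % self.m

        def insert(self, key: str) -> None:
            for pos in self._positions(key):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def probe(self, key: str) -> bool:
            # True may be a false positive; False is always correct
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(key))

    # Bloomjoin sketch: build a filter on S's join keys, ship only the compact
    # bit vector to T's site, and send back just the T tuples that probe true
    bf = BloomFilter(m=1 << 16, k=4)
    for s_key in ('alice', 'bob'):
        bf.insert(s_key)
    t_matches = [t for t in ('bob', 'carol') if bf.probe(t)]
    # 'carol' is filtered out at T's site (barring a false positive)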

Page 24:

Joins – “High-Speed” Network

[Chart: join performance over the "high-speed" network, comparing Semijoin, R* + Temp Indices, R* (Distributed), R* (Local), and Bloomjoin]

Page 25:

R* Optimization Assessment

Distributed optimization is hard!
They ignore load-balance issues, and messaging overhead is probably ~10%, but still...
Shipping costs are difficult to assess, since they depend on precise cardinality results
Optimizing the plan locally, and using that as a model for distributed processing, doesn't provide any optimality guarantees either, since it doesn't account for parallelism

Page 26:

Updates in R* Require Two-Phase Commit (2PC)

The site at which a transaction originates is the coordinator; the other sites at which it executes are subordinates
Two rounds of communication, initiated by the coordinator:
Voting: the coordinator sends prepare messages and waits for yes or no votes
Decision or termination: the coordinator sends commit or rollback messages and waits for acks
Any site can decide to abort a transaction!

Page 27:

Steps in 2PC

When a transaction wants to commit:

The coordinator sends a prepare message to each subordinate
Each subordinate force-writes an abort or prepare log record, then sends a no (abort) or yes (prepare) message to the coordinator
The coordinator considers the votes:
If the votes are unanimously yes, it force-writes a commit log record and sends a commit message to all subordinates
Else, it force-writes an abort log record and sends an abort message
Subordinates force-write abort/commit log records based on the message they get, then send an ack message to the coordinator
The coordinator writes an end log record after getting all acks
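The coordinator's side of these steps, as a sketch (send, recv_vote, recv_ack, force_write_log, and write_log are hypothetical messaging and logging helpers):

    def coordinator_2pc(tid: str, subordinates: list[str]) -> bool:
        # Round 1: voting
        for site in subordinates:
            send(site, ('prepare', tid))
        votes = [recv_vote(site, tid) for site in subordinates]   # 'yes' or 'no'

        # Force the decision record BEFORE telling anyone, so it survives a crash
        decision = 'commit' if all(v == 'yes' for v in votes) else 'abort'
        force_write_log(tid, decision, subordinates)   # records subordinate IDs too

        # Round 2: propagate the decision, then collect acks
        for site in subordinates:
            send(site, (decision, tid))
        for site in subordinates:
            recv_ack(site, tid)
        write_log(tid, 'end')          # need not be forced
        return decision == 'commit'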

Page 28:

Illustration of 2PC (coordinator, subordinate 1, subordinate 2):

1. The coordinator force-writes a begin log entry and sends "prepare" to each subordinate
2. Each subordinate force-writes a prepared log entry and sends "yes"
3. The coordinator force-writes a commit log entry and sends "commit" to each subordinate
4. Each subordinate force-writes a commit log entry and sends "ack"
5. The coordinator writes an end log entry

Page 29:

Comments on 2PC

Every message reflects a decision by the sender; to ensure that this decision survives failures, it is first recorded in the local log
All log records for a transaction contain its ID and the coordinator's ID; the coordinator's abort/commit record also includes the IDs of all subordinates
Theorem: there exists no distributed commit protocol that can recover without communicating with other processes, in the presence of multiple failures!

Page 30:

What if a Site Fails in the Middle?

If we have a commit or abort log record for transaction T, but not an end record, we must redo/undo T
If this site is the coordinator for T, keep sending commit/abort messages to the subordinates until acks have been received

If we have a prepare log record for transaction T, but no commit/abort record, this site is a subordinate for T
Repeatedly contact the coordinator to find the status of T; then write a commit/abort log record, redo/undo T, and write an end log record

If we don't have even a prepare log record for T, unilaterally abort and undo T
This site may be the coordinator! If so, subordinates may send messages and also need to be undone
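These three cases as a restart sketch (all helpers here, such as local_log_records and ask_coordinator, are hypothetical):

    def on_restart(tid: str) -> None:
        types = {r.type for r in local_log_records(tid)}
        if ('commit' in types or 'abort' in types) and 'end' not in types:
            redo_or_undo(tid)                    # finish applying the known decision
            if is_coordinator(tid):
                resend_decision_until_acked(tid)
        elif 'prepare' in types:
            decision = ask_coordinator(tid)      # may block until it answers
            force_write_log(tid, decision)
            redo_or_undo(tid)
            write_log(tid, 'end')
        else:
            abort_and_undo(tid)                  # no prepare record: unilateral abort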

Page 31:

Blocking for the Coordinator

If the coordinator for transaction T fails, subordinates who have voted yes cannot decide whether to commit or abort T until the coordinator recovers: T is blocked
Even if all the subordinates know each other (extra overhead in the prepare message), they are blocked unless one of them voted no

Page 32:

Link and Remote Site Failures

If a remote site does not respond during the commit protocol for transaction T, whether because the site failed or the link failed:

If the current site is the coordinator for T, it should abort T
If the current site is a subordinate that has not yet voted yes, it should abort T
If the current site is a subordinate that has voted yes, it is blocked until the coordinator responds!

Page 33:

Observations on 2PC

Ack messages are used to let the coordinator know when it's done with a transaction; until it receives all acks, it must keep T in the transaction-pending table
If the coordinator fails after sending prepare messages but before writing a commit/abort log record, it aborts the transaction when it comes back up

Page 34:

R* Wrap-Up

One of the first systems to address both distributed query processing and distributed updates
The focus was on local-area networks and a small number of sites
The next system, Mariposa, focuses on environments more like the Internet...