
Accountable systems, or how to catch a liar?

Jinyang Li (with slides from the authors of SUNDR and PeerReview)

What we have learnt so far

• Use BFT replication to handle (a few) malicious servers
• Strengths:
  – Transparent (service goes on as if no faults)
  – Minimal client modifications
• Weaknesses:
  – Breaks down completely if >1/3 of the servers are bad

Hold (bad) servers accountable

• What is possible when >1/3 of the servers fail?
  – E.g. 1 out of 1 servers fail?
• Let us consider a bad file server:
  – Can it delete a client’s data?
  – Can it leak the content of a client’s data?
  – Can it change a client’s data to arbitrary values?
  – Can it show client A garbage and claim it’s from B?
  – Can it show client A old data of client B?
  – Can it show A (op1, op2) and B (op2, op1)?

Hold (bad) servers accountable

• Lessons:
  – Cannot prevent a bad server from misbehaving at least once
  – We can do many checks to detect past misbehaviors!
• Useful?


Fool me once, shame on you; fool me twice, shame on …

Case study I: SUNDR

• What’s SUNDR?
  – A (single-server) network file system
  – Handles potential Byzantine server behavior
  – Can run on an untrusted server
• Useful properties:
  – Tamper-evident
    • Unauthorized operations will be immediately detected
  – Can detect past misbehavior
    • If the server drops operations, it can be caught eventually

Ideal file system semantics

• Represent FS calls as fetch/modify operations
  – Fetch: client downloads new data
  – Modify: client makes a new change visible to others
• Ideal (sequential) consistency:
  – A fetch reflects the sequence of modifications that happen before it
  – Impossible when the server is malicious
    • A is only aware of B’s latest modification via the server
• Goal: get as close to ideal consistency as possible

Strawman File System

• A: echo “A was here” >> /share/aaa
• B: cat /share/aaa

[Figure: userA and userB send operations to the file server, which appends each one to a log of signed entries: A Modify f1 (sig1), B Fetch f4 (sig2), A Modify f2 (sig3), B Fetch f2 (sig4)]

Log specifies the total order

• The total order: LogA ≤ LogB iff LogA is a prefix of LogB (see the sketch below)

[Figure: A’s latest log is (A Modify f1 sig1, B Fetch f4 sig2, A Modify f2 sig3); B’s latest log is the same three entries followed by (B Fetch f2 sig4), so LogA ≤ LogB]
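A minimal sketch of this prefix check in Python; the log-entry layout and field names are illustrative assumptions, not SUNDR’s actual format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogEntry:
    user: str     # e.g. "A"
    op: str       # "Modify" or "Fetch"
    target: str   # e.g. "f1"
    sig: str      # signature over this entry and all prior entries

def is_prefix(log_a: list, log_b: list) -> bool:
    """LogA <= LogB iff LogA is a prefix of LogB."""
    return len(log_a) <= len(log_b) and log_b[:len(log_a)] == log_a

def ordered(log_a: list, log_b: list) -> bool:
    """Two honest views of the same server's log must be ordered one way or the other."""
    return is_prefix(log_a, log_b) or is_prefix(log_b, log_a)
```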

Detecting attacks by the server

• A: echo “A was here” >> /share/aaa
• B: cat /share/aaa (stale result!)

[Figure: to hide A’s modify from B, the server forks the log:
  A’s latest log: A Modify f1 (sig1), B Fetch f4 (sig2), A Modify f2 (sig3a)
  B’s latest log: A Modify f1 (sig1), B Fetch f4 (sig2), B Fetch f2 (sig3b)]

• A’s log and B’s log can no longer be ordered: LogA ≰ LogB and LogB ≰ LogA

What Strawman has achieved

• High overhead, no concurrency
• Tamper-evident
  – A bad server can’t make up ops users didn’t do
• Achieves fork consistency
  – A bad server can conceal users’ ops from each other, but such misbehavior could be detected later

Fork Consistency: A tale of two worlds

[Figure: once the file server forks the log, A’s view and B’s view evolve as two separate worlds]

Fork consistency is useful

• Best possible alternative to sequential consistency
  – If the server lies only once, it has to lie forever
• Enables detection of misbehavior (see the sketch below):
  – users periodically gossip to check for violations
  – or deploy a trusted online “timestamp” box
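A minimal sketch of the gossip cross-check, under assumptions (this is the idea, not SUNDR’s actual gossip protocol): two users exchange their latest signed logs out of band, and if neither is a prefix of the other, the server must have forked them.

```python
def gossip_check(my_log: list, peer_log: list) -> None:
    """Raise if the two views cannot be ordered, i.e. the server forked us."""
    shorter, longer = sorted((my_log, peer_log), key=len)
    if longer[:len(shorter)] != shorter:
        raise RuntimeError("fork detected: server showed us divergent histories")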

SUNDR’s tricks to make the strawman practical

1. Store the FS as a hash tree
   – No need to reconstruct the FS image from the log
2. Use version vectors to totally order ops
   – No need to fetch the entire log to check for misbehavior

Trick #1: Hash tree

[Figure: a hash tree with root h0 over interior hashes h1–h12 and data blocks D0–D12]

• Key property: h0 verifies the entire tree of data (sketched below)
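A minimal sketch of that key property. SHA-1 is assumed because the slides use 20-byte handles, and the binary pairing here is illustrative; it need not match the exact tree shape in the figure.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()   # 20-byte digest, as in the slides

def merkle_root(blocks: list) -> bytes:
    """Fold data blocks into one root hash; changing any block changes the root."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        level = [h(b"".join(level[i:i + 2])) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"D%d" % i for i in range(13)]     # D0 ... D12, as in the figure
root = merkle_root(blocks)                   # this plays the role of h0
blocks[5] = b"tampered"
assert merkle_root(blocks) != root           # any modification is detected via the root
```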

Trick #2: version vector

• Each client keeps track of its version #
• The server stores the (freshest) version vector
  – The version vector orders all operations
    • E.g. (0, 1, 2) ≤ (1, 1, 2)
• A client remembers the latest version vector given to it by the server
  – If the new vector does not succeed the old one, detect an order violation! (see the sketch below)
    • E.g. (0,1,2) and (0,2,1) are incomparable
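A minimal sketch of the componentwise comparison; representing a vector as a user-to-version dict is an assumption for illustration:

```python
def vv_leq(a: dict, b: dict) -> bool:
    """a <= b iff every component of a is <= the matching component of b."""
    return all(a.get(u, 0) <= b.get(u, 0) for u in set(a) | set(b))

# (0,1,2) <= (1,1,2): properly ordered
assert vv_leq({"A": 0, "B": 1, "G": 2}, {"A": 1, "B": 1, "G": 2})

# (0,1,2) vs (0,2,1): incomparable in both directions -> order violation
old, new = {"A": 0, "B": 1, "G": 2}, {"A": 0, "B": 2, "G": 1}
assert not vv_leq(old, new) and not vv_leq(new, old)
```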

SUNDR architecture

• The block server stores blocks retrievable by content hash
• The consistency server orders all events

[Figure: userA and userB each run a SUNDR client; the clients talk over an untrusted network to the SUNDR server side, which consists of the consistency server and the block server]

SUNDR data structures: hash tree

• Each file is writable by one user or group
• Each user/group manages its own pool of i-nodes
  – A pool of i-nodes is represented as a hash tree

Hash files

• Blocks are stored and indexed by hash on the block server

[Figure: an i-node holds metadata plus H(data1) and H(data2) for direct blocks and H(iblk1) for an indirect block iblk1, which in turn holds H(data3) and H(data4); the 20-byte file handle names the i-node (see the sketch below)]
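A minimal sketch of the layout in the figure; the serialization and field names are assumptions, but the idea is that the 20-byte handle covers the i-node and, transitively, every data block.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class INode:
    metadata: bytes
    block_hashes: list        # H(data1), H(data2), ..., plus hashes of indirect blocks

    def file_handle(self) -> bytes:
        """The 20-byte file handle is the hash of the serialized i-node."""
        return hashlib.sha1(self.metadata + b"".join(self.block_hashes)).digest()

# Hypothetical usage: hash the data blocks, build the i-node, derive the handle.
data1, data2 = b"hello", b"world"
inode = INode(metadata=b"mode=0644",
              block_hashes=[hashlib.sha1(data1).digest(),
                            hashlib.sha1(data2).digest()])
handle = inode.file_handle()  # anyone holding this 20 bytes can verify the whole file
```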

Hash a pool of i-nodes

• Hash all files writable by each user/group
• From this digest, a client can retrieve and verify any block of any file

[Figure: the i-table maps i-numbers (2, 3, 4, …) to the 20-byte hashes of i-node 2, i-node 3, i-node 4; hashing the i-table yields the user’s 20-byte digest]

SUNDR FS

[Figure: SUNDR state: one digest per principal (Superuser, UserA, UserB, GroupG, …), each covering an i-table mapping i-numbers 2, 3, 4, … to 20-byte hashes]

How to fetch “/share/aaa”? (see the sketch below)
1. Look up “/”: its directory entry is (share, Superuser, 3)
2. Look up “/share” at Superuser’s i-node 3: its directory entry is (aaa, UserA, 4)
3. Fetch “/share/aaa” via UserA’s i-node 4
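A minimal sketch of the lookup above; all structures and the helper fetch_and_verify() are hypothetical stand-ins for illustration, not SUNDR’s actual interfaces.

```python
def fetch_and_verify(hash20: bytes) -> dict:
    """Hypothetical helper: fetch the block named by hash20 from the block
    server and check that its SHA-1 digest equals hash20 before returning it."""
    raise NotImplementedError

def resolve(digests: dict, user: str, inum: int, name: str):
    """Follow one directory entry; returns the (owner, i-number) it names."""
    itable = fetch_and_verify(digests[user])     # the user's i-table, via digest
    directory = fetch_and_verify(itable[inum])   # directory contents at that i-node
    return directory[name]

# Walking /share/aaa per the figure:
#   resolve(digests, "Superuser", ROOT_INUM, "share") -> ("Superuser", 3)
#   resolve(digests, "Superuser", 3, "aaa")           -> ("UserA", 4)
# then fetch the file through i-node 4 in UserA's pool, verifying every block.
```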

SUNDR data structures: version vector

• The server orders users’ fetch/modify ops
• A client signs a version vector along with its digest
• Version vectors will expose ordering failures

Version structure

• Each user has its own version structure (VST)
• The consistency server keeps the latest VSTs of all users
• Clients fetch all other users’ VSTs from the server before each operation and cache them
• We order VSTA ≤ VSTB iff all the version numbers in VSTA are less than or equal to those in VSTB

[Figure: VSTA = (Digest A; A-1, B-1, G-1; Signature A) ≤ VSTB = (Digest B; A-1, B-2, G-2; Signature B)]

Update VST: An example

• A: echo “A was here” >> /share/aaa
• B: cat /share/aaa

[Figure: the consistency server holds the latest VSTs. A’s write advances VSTA from (DigA; A-0, B-0) to (DigA; A-1, B-1); B’s read then advances VSTB from (DigB; A-0, B-1) to (DigB; A-1, B-2). The result satisfies VSTA ≤ VSTB]

Detect attacks

• A: echo “A was here” >> /share/aaa
• B: cat /share/aaa (stale!)

[Figure: A’s write advances VSTA from (DigA; A-0, B-0) to (DigA; A-1, B-1), but the server hides the write from B, whose VST advances from (DigB; A-0, B-1) to (DigB; A-0, B-2)]

• A’s latest VST and B’s can no longer be ordered: VSTA ≰ VSTB and VSTB ≰ VSTA

Support concurrent operations

• Clients may issue operations concurrently
• How does the second client know what vector to sign?
  – If the operations don’t conflict, it can just include the first user’s forthcoming version number in its VST
  – But how do you know whether concurrent operations conflict?
• Solution: pre-declare operations in signed updates (see the sketch below)
  – The server returns the latest VSTs and all pending updates, thereby ordering them before the current operation
  – The user computes a new VST including the pending updates
  – The user signs and commits the new VST
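A minimal sketch, under assumed data shapes, of how a client might fold the server’s pending updates into the vector it signs; this is the idea behind the protocol step, not SUNDR’s wire format:

```python
def next_vst(my_user: str, my_version: int,
             latest_vsts: dict, pending_updates: dict) -> dict:
    """Compute the version vector to sign for my next operation.
    latest_vsts:     {user: vector} -- committed VSTs returned by the server
    pending_updates: {user: version} -- pre-declared, not-yet-committed ops"""
    vector = {}
    for vst in latest_vsts.values():             # take the max over all committed VSTs
        for user, version in vst.items():
            vector[user] = max(vector.get(user, 0), version)
    for user, version in pending_updates.items():
        # Pending updates are ordered before our operation, so include them too
        vector[user] = max(vector.get(user, 0), version)
    vector[my_user] = my_version                 # bump our own entry
    return vector
```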

Concurrent update of VSTs

• A: echo “A was here” >> /share/aaa
• B: cat /share/bbb

[Figure: A pre-declares its operation with the signed update A-1, and B with B-2; the consistency server returns the latest VSTs plus these pending updates. A signs VSTA = (DigA; A-1, B-1) and B signs VSTB = (DigB; A-1, B-2), each accounting for the other’s pending update, so VSTA ≤ VSTB still holds]

SUNDR is practical

[Figure: benchmark results in seconds (y-axis 0–12) for Create (1K), Read (1K), and Unlink, comparing NFSv2, NFSv3, SUNDR, and SUNDR/NVRAM]

Case study II: PeerReview [SOSP07]

Motivations for PeerReview

• Large distributed systems consist of many nodes
• Some nodes become Byzantine
  – Software compromise
  – Malicious/careless administrator
• Goal: detect past misbehavior of nodes
  – Apply to more general apps than FS

Challenges of general fault detection

• How to detect faults?
• How to convince others that a node is (not) faulty?

Overall approach

• Fault := node deviates from expected behavior
• Obtain a signature for every action from each node
• If a node misbehaves, the signature works as a proof of misbehavior against the node

Can we detect all faults?

• No
  – e.g. faults affecting a node’s internal state
• Detect observable faults
  – E.g. bad nodes send a message that correct nodes would not send

[Figure: node A sends a bit string to node C]

Can we always get a proof?

• No
  – A said it sent X; B said A didn’t; C asks: did A send X?
• Generate verifiable evidence:
  – a proof of misbehavior (A sent a wrong X)
  – a challenge (C asks A to send X again)
• Nodes not answering challenges are suspects

[Figure: A insists “I sent X!” while B insists “I never received X!?!”; C cannot tell which of them is lying]

PeerReview overview

• Treat each node as a deterministic state machine
• Nodes sign every output message
• A witness checks that another node outputs correct messages
  – using a reference implementation and the signed inputs

PeerReview architecture

• All nodes keep a log of their inputs & outputs
  – Including all messages
• Each node has a set of witnesses, who audit its log periodically
• If the witnesses detect misbehavior, they
  – generate evidence
  – make the evidence available to other nodes
• Other nodes check the evidence and report the fault

[Figure: node A sends message M to node B; M appears in both A’s log and B’s log, and A’s witnesses (C, D, E) audit A’s log]

PeerReview detects tampering

• What if a node modifies its log entries?
• Log entries form a hash chain (sketched below)
• A signed hash is included with every message and ACK
  – The node thereby commits to having received all prior messages

[Figure: B’s log entries Send(X), Recv(Y), Send(Z), Recv(M) are chained through hashes H0–H4; each message from A to B and B’s ACK carry a signed Hash(log)]
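A minimal sketch of the hash chain; the genesis value and entry encoding are assumptions. Because each hash covers the previous one, rewriting any entry breaks every signed head issued afterwards.

```python
import hashlib

def chain_hash(prev_hash: bytes, entry: bytes) -> bytes:
    return hashlib.sha1(prev_hash + entry).digest()

def log_head(entries: list) -> bytes:
    """H0 -> H1 -> ... -> Hn; the head commits to the entire log."""
    h_cur = b"\x00" * 20                  # H0: assumed fixed genesis value
    for entry in entries:
        h_cur = chain_hash(h_cur, entry)
    return h_cur

log = [b"Send(X)", b"Recv(Y)", b"Send(Z)", b"Recv(M)"]
head = log_head(log)                      # this is what gets signed and sent
log[1] = b"Recv(Y')"                      # tamper with one entry...
assert log_head(log) != head              # ...and the signed head no longer matches
```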

PeerReview detects inconsistency

• What if a node
  – keeps multiple logs?
  – forks its log?
• The witness checks that the signed hashes form a single chain

[Figure: the node forks its log after H2 (Create X → OK): “View #1” continues with H3, H4 and “View #2” with H3', H4', answering reads (Read X, Read Z) inconsistently (OK vs. Not found); the signed hashes no longer form a single chain]

PeerReview detects faults

• A witness audits a node (sketched below):
  – Replay the logged inputs on a reference implementation of the state machine
  – Check the outputs against the log

[Figure: the node’s state machine (Module A, Module B) logs its network inputs and outputs; the witness replays the inputs through its own copy of the modules and compares outputs (=?); if ≠, the witness has caught a fault]
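A minimal sketch of the audit loop, under assumptions: the reference state machine is deterministic, and its handle() method returns the list of messages the node should have sent in response to an input.

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    kind: str       # "recv" or "send"
    message: bytes

def audit(log: list, reference_state_machine) -> object:
    """Replay logged inputs through a reference implementation; a logged
    output the reference machine would not have produced is evidence."""
    sm = reference_state_machine()   # deterministic: same inputs => same outputs
    expected = []                    # outputs the reference machine emits
    for entry in log:
        if entry.kind == "recv":
            expected.extend(sm.handle(entry.message))
        elif entry.kind == "send":
            if not expected or entry.message != expected.pop(0):
                return ("proof of misbehavior", entry)   # output diverges from log
    return None                      # log checks out
```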

PeerReview’s guarantees

• Faults will be detected (eventually)
  – If a node is faulty:
    • Its witness has a proof of misbehavior, or
    • Its witness generates a challenge that the node cannot answer
  – If a node is correct:
    • There is no proof of misbehavior against it
    • It can answer any challenge

PeerReview applications

• App #1: NFS server
  – Tampering with files
  – Lost updates
• App #2: Overlay multicast
  – Freeloading
  – Tampering with content
• App #3: P2P email
  – Denial of service
  – Dropping emails
  – Metadata corruption
  – Incorrect access control

PeerReview’s performance penalty

• Cost increases with the number of witnesses per node (W)

[Figure: average traffic in Kbps/node (y-axis 0–100) vs. number of witnesses (baseline, then W = 1–5); the baseline traffic is shown for comparison]

What have we learnt?

• Put constraints on what faulty servers can do
  – Clients sign data, so a bad SUNDR server cannot fake data
  – Clients sign version vectors, so a bad server cannot hide past inconsistency
• Fault detection
  – Need a proof of misbehavior (obtained by signing actions)
  – Use challenges to distinguish slow nodes from bad ones