Asynchronous Assertions
Transcript of Asynchronous Assertions
Asynchronous Assertions
Eddie Aftandilian and Sam Guyer, Tufts University
Martin Vechev, ETH Zurich and IBM Research
Eran Yahav, Technion
OOPSLA, October 25-27, 2011
Motivation
Assertions are great, but assertions are not cheap: the cost is paid at runtime, which limits the kinds of assertions programmers write.
Our goal: support more costly assertions (e.g., invariants), support more frequent assertions, and do it efficiently.
Idea
Asynchronous assertions: evaluate assertions concurrently with the program. Lots of cores are available (with more coming), which enables more expensive assertions.
Two problems:
- What if the program modifies the data being checked?
- When do we find out about failed assertions?
Solution
Run assertions on a snapshot of program state.
Guarantee: an asynchronous check gets the same result as synchronous evaluation, with no race conditions.
How can we do this efficiently?
Our implementation
- Incremental construction of snapshots (copy-on-write)
- Integrated into the runtime system (Jikes RVM 3.1.1)
- Supports multiple in-flight assertions
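The copy-on-write idea can be sketched in plain Java. This is an illustrative model, not the Jikes RVM implementation: Node, Snapshot, beforeWrite, and read are hypothetical names. The program thread preserves an object's old state just before its first mutation, and the checker thread prefers that preserved copy.

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical object with one mutable field.
class Node {
    int value;
    Node(int value) { this.value = value; }
}

// Copy-on-write snapshot for one in-flight assertion (sketch).
class Snapshot {
    // object -> copy made before the program's first write to it
    private final Map<Node, Node> copies = new IdentityHashMap<>();

    // Write barrier: called by the program thread before mutating n.
    synchronized void beforeWrite(Node n) {
        copies.putIfAbsent(n, new Node(n.value));
    }

    // Read barrier: called by the checker thread; prefers the old copy,
    // falling back to the live object if it was never mutated.
    synchronized Node read(Node n) {
        return copies.getOrDefault(n, n);
    }
}

public class CowDemo {
    public static void main(String[] args) {
        Node o = new Node(1);
        Snapshot snap = new Snapshot(); // assertion starts: empty snapshot

        snap.beforeWrite(o);            // program about to write o
        o.value = 2;                    // program mutates o

        // Checker still observes the value at assertion start.
        System.out.println(snap.read(o).value); // prints 1
        System.out.println(o.value);            // prints 2
    }
}
```

The snapshot is built incrementally: only objects the program actually mutates are ever copied, which is what keeps the scheme cheap.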
Overview
Program thread:
    a = assert(OK());   // launch the check asynchronously
    o.f = p;            // program continues, mutating state
    ...
    b = a.get();        // wait for the assertion result
    if (b) { ...
Checker thread (runs OK() concurrently):
    OK() {
        ...
        if (o.f == ...) ...   // needs to see the old o.f
        ...
        return res;
    }
The checker must see the old value of o.f, even though the program thread has already overwritten it.
Key mechanism
Write barrier (main thread), on o.f = p:
    for each active assertion A:
        if o not already in A's snapshot:
            o' = copy o
            mapping(A, o) := o'
    modify o.f
Read barrier (checker thread), on reading o.f:
    if o in my snapshot:
        o' = mapping(me, o)
        return o'.f
    else:
        return o.f
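These two barriers can be sketched for multiple in-flight assertions as follows. Obj, Checker, and Barriers are hypothetical names; the real system implements mapping(A, o) with per-object forwarding arrays inside the VM rather than hash maps.

```java
import java.util.HashSet;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical object with one mutable field.
class Obj {
    int f;
    Obj(int f) { this.f = f; }
}

// One in-flight assertion, with its private object-to-copy mapping.
class Checker {
    final Map<Obj, Obj> mapping = new IdentityHashMap<>();
}

class Barriers {
    static final Set<Checker> active = new HashSet<>();

    // Write barrier: before modifying o.f, give every active assertion
    // that has not yet copied o a private copy of its old state.
    static synchronized void writeBarrier(Obj o, int newF) {
        for (Checker a : active) {
            a.mapping.putIfAbsent(o, new Obj(o.f));
        }
        o.f = newF;
    }

    // Read barrier (run on checker thread "me"): prefer my snapshot copy.
    static synchronized int readBarrier(Checker me, Obj o) {
        Obj copy = me.mapping.get(o);
        return (copy != null) ? copy.f : o.f;
    }
}
```

A checker started before a write sees the old field value through its mapping; a checker started after the write sees the live value, which is exactly the synchronous-evaluation guarantee from the Solution slide.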
Implementation
How do we know when a copy is needed? An epoch counter E:
- E is ticked each time a new assertion starts
- Each object carries a copied-at timestamp: the last epoch in which a copy of it was made
- Write barrier fast path: if o.copied-at < E, then some assertion needs a copy
How do we implement the mapping? A per-object forwarding array, with one slot for each active assertion.
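The copied-at fast path amounts to a single comparison. A minimal sketch, with hypothetical names (Epochs, assertionStarted, needsCopy); the real test sits on the write-barrier fast path inside the VM:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the epoch-based fast-path test. A global epoch counter E
// ticks each time a new assertion starts; every object records the last
// epoch in which it was copied.
class Epochs {
    static final AtomicInteger E = new AtomicInteger(0);

    // Called when a new asynchronous assertion begins.
    static void assertionStarted() { E.incrementAndGet(); }

    // Write-barrier fast path: a copy is needed only if the object has
    // not been copied since the newest assertion started.
    static boolean needsCopy(int copiedAt) {
        return copiedAt < E.get();
    }
}
```

Note that initializing a new object's copied-at field to the current E makes needsCopy return false until the next assertion starts, which is the new-object optimization the talk describes.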
Tricky bits
Synchronization: a potential race condition:
1. Checker looks in the mapping; o is not there
2. Application writes o.f
3. Checker reads o.f (and sees the new value instead of the snapshot)
Solution: lock on the copied-at timestamp.
Cleanup: zero out the slot in the forwarding array of every copied object (so we need to keep a list of them); the copies themselves are reclaimed during GC.
Optimizations
Snapshot sharing: all assertions that need a copy of an object can share one. Store the copy in each empty slot of the forwarding array (for the active assertions). This reduces performance overhead by 25-30%.
Avoid snapshotting new objects: a new object is not in any snapshot, so initialize its copied-at timestamp to E; the fast-path test then fails until a new assertion starts.
Interface
Waiting for the assertion result:
- Traditional assertion model: the assertion triggers whenever the check completes
- Futures model: wait for the assertion result before proceeding
- Another option, "Unfire the missiles!": roll back side-effecting computations after receiving an assertion failure
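The futures model can be sketched with plain java.util.concurrent primitives. AsyncAssert, check, and isSorted are illustrative names, and unlike the real system this sketch takes no snapshot, so the check races with subsequent mutations; the paper's snapshot machinery is what makes get() match synchronous evaluation.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncAssert {
    private static final ExecutorService checkers =
            Executors.newFixedThreadPool(2);

    // Launch the check concurrently and return a handle to its result.
    static Future<Boolean> check(Callable<Boolean> c) {
        return checkers.submit(c);
    }

    // Example invariant: the array is sorted in ascending order.
    static boolean isSorted(int[] xs) {
        for (int i = 1; i < xs.length; i++)
            if (xs[i - 1] > xs[i]) return false;
        return true;
    }

    public static void main(String[] args) throws Exception {
        int[] data = {1, 2, 3};
        Future<Boolean> a = check(() -> isSorted(data)); // a = assert(OK());

        data[0] = 0;        // program keeps running and mutating state

        boolean b = a.get();    // wait for the assertion result
        System.out.println(b ? "OK" : "assertion failed");
        checkers.shutdown();
    }
}
```

Here the mutation happens to preserve the invariant, so the outcome is stable; in general, without the snapshot guarantee, the result would depend on the interleaving.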
Evaluation
Idea: we know this can catch bugs. Goal: understand the performance space.
Two kinds of experiments:
- Microbenchmarks: various data structures with invariants, pulled from the runtime invariant checking literature
- pseudojbb, instrumented with a synthetic assertion check that performs a bounded DFS on the database, letting us systematically control the frequency and size of checks
Key results
Performance:
- When there is enough work: 7.2-7.5x speedup vs. synchronous evaluation with 10 checker threads (e.g., a 12x synchronous slowdown becomes a 1.65x asynchronous one)
- When there is less work: 0-60% overhead
Extra memory usage for snapshots: 30-70 MB for JBB (out of 210 MB allocated). In steady state almost all mutated objects have been copied, so the cost plateaus.
[Figure: pseudojbb graph schema. X-axis: assertion workload; y-axis: normalized runtime (1.0 = baseline). Sync and Async bars are broken down into application time, waiting, and snapshot overhead; "OK" vs. "Overloaded" regimes are marked.]
Fixed frequency, increasing cost
Fixed frequency, increasing cost, zoomed
Fixed frequency, increasing cost
Related work
Concurrent GC, futures, runtime invariant checking.
FAQ: why didn't you use STM? It is the wrong model:
- We don't want to abort
- In STM, the transaction sees the new state while other threads see the snapshot
- That is weird for us: the entire main thread would have to be a transaction
Conclusions
- Execute data structure checks in separate threads
- Checks are written in standard Java
- All synchronization is handled automatically
- Enables more expensive assertions than traditional synchronous evaluation
Thank you!
Snapshot volume
Sharing snapshots
Snapshot overhead
Fixed cost, increasing frequency
Fixed cost, increasing frequency, zoomed
Copy-on-write implementation
Related work
- Safe Futures for Java [Welc 05]
- Future Contracts [Dimoulas 09]
- Ditto [Shankar 07]
- SuperPin [Wallace 07]
- Speculative execution [Nightingale 08, Kelsey 09, Susskraut 09]
- Concurrent GC
- Transactional memory