Programming-Language Approaches to Improving Shared-Memory Multithreading: Work-In-Progress
Dan Grossman, University of Washington
Microsoft Research, RiSE
July 28, 2009
July 28, 2009 Dan Grossman: Multithreading Work-In-Progress 2
Today
• A little history / organization: how I got here
• Informal, broad-not-deep overview of 4 ongoing projects
– Better semantics / languages for transactional memory x 2
• Dynamic separation for Haskell
• Semantics / abstraction for “escape actions”
– Deterministic Multiprocessing
– Code-centric communication graphs
• Hopefully time for discussion
Biography / group names
Me:
• PLDI, ICFP, POPL "feel like home", 1998-
• PhD for Cyclone; UW faculty, 2003-
  – Type system and compiler for a memory-safe C dialect
• 30% → 80% focus on multithreading, 2005-
• Co-advising 3-4 students with computer architect Luis Ceze, 2007-

Two groups for "marketing purposes":
• WASP, wasp.cs.washington.edu
• SAMPA, sampa.cs.washington.edu
People / other projects
Ask me later about:
• Progress estimation for PigLatin Hadoop queries [Kristi]
• Composable browser extensions [Ben L.]
Today
• A little history / organization: how I got here
• Informal, broad-not-deep overview of 4 ongoing projects
– Better semantics / languages for transactional memory x 2
• Dynamic separation for Haskell
• Semantics / abstraction for “escape actions”
– Deterministic Multiprocessing
– Code-centric communication graphs
• Hopefully time for discussion
Atomic blocks
An easier-to-use and harder-to-implement synchronization primitive
void transferFrom(int amt, Acct other) {
  atomic {
    other.withdraw(amt);
    this.deposit(amt);
  }
}
"Transactions are to shared-memory concurrency as garbage collection is to memory management" [OOPSLA 07]

GC also has key semantic questions most programmers can ignore
– Resurrection, serialization, dead assignments, etc.
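For contrast with the atomic block above, here is a minimal Java sketch of the lock-based alternative, using one global lock to get the same all-at-once behavior. The class and method names are hypothetical, and a single global lock of course sacrifices all parallelism; fine-grained per-account locks would scale better but reintroduce deadlock risk in transferFrom:

```java
// Sketch: one global lock trivially gives atomic-block semantics, at the
// cost of all parallelism. The atomic primitive hides this trade-off
// from the programmer; the lock version must make it by hand.
class Acct {
    private static final Object GLOBAL = new Object();
    private int balance;

    Acct(int balance) { this.balance = balance; }

    // Callers must hold GLOBAL, as transferFrom does.
    void deposit(int amt)  { balance += amt; }
    void withdraw(int amt) { balance -= amt; }

    int balance() { synchronized (GLOBAL) { return balance; } }

    void transferFrom(int amt, Acct other) {
        synchronized (GLOBAL) {   // stands in for atomic { ... }
            other.withdraw(amt);
            this.deposit(amt);
        }
    }
}
```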
“Weak” isolation
Widespread misconception:
"Weak" isolation violates the "all-at-once" property only if the corresponding lock-based code has a race.

(May still be a bad thing, but smart people disagree.)
initially y == 0

// Thread 1:
atomic {
  y = 1;
  x = 3;
  y = x;
}

// Thread 2:
x = 2;
print(y); // 1? 2? 666?
It’s worse
Privatization: One of several examples where lock code works and weak-isolation transactions do not
(Example adapted from [Rajwar/Larus] and [Hudson et al])
[diagram: ptr points to an object with fields f and g]

initially ptr.f == ptr.g

// Thread 1:
atomic {
  r = ptr;
  ptr = new C();
}
assert(r.f == r.g);

// Thread 2:
atomic {
  ++ptr.f;
  ++ptr.g;
}
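Under locks the idiom is correct: once the critical section swaps ptr out, the old object is reachable only from the privatizing thread, so the unprotected assert is race-free. A minimal Java sketch of the lock version (class and method names are hypothetical):

```java
// Sketch of the lock-based privatization idiom. After the critical
// section swaps ptr out, the old object r is reachable only by this
// thread, so reading r.f and r.g without the lock is race-free.
class C { int f, g; }

class Privatization {
    static final Object LOCK = new Object();
    static C ptr = new C();            // initially ptr.f == ptr.g == 0

    // "Privatizing" thread: detach the object, then use it unlocked.
    static boolean privatizeAndCheck() {
        C r;
        synchronized (LOCK) {          // stands in for atomic { ... }
            r = ptr;
            ptr = new C();
        }
        return r.f == r.g;             // safe: r is now thread-private
    }

    // Updating thread: keeps the invariant ptr.f == ptr.g under the lock.
    static void bump() {
        synchronized (LOCK) {
            ++ptr.f;
            ++ptr.g;
        }
    }
}
```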
It's worse

Most weak-isolation systems let the assertion fail!
• Eager-update or lazy-update

[diagram: ptr points to an object with fields f and g]

initially ptr.f == ptr.g

// Thread 1:
atomic {
  r = ptr;
  ptr = new C();
}
assert(r.f == r.g);

// Thread 2:
atomic {
  ++ptr.f;
  ++ptr.g;
}
The need for semantics
• Which is wrong: the privatization code or the language implementation?
• What other “gotchas” exist?
• Can programmers correctly use transactions without understanding their implementation?
Only rigorous programming-language semantics can answer these questions
Separation
Static separation: Each thread-shared, mutable object is accessed-inside-transactions xor accessed-outside-transactions throughout its lifetime
– Natural in STM Haskell (but not other settings)
– Proved sound for eager update [POPL 08 x 2]

Dynamic separation: Each thread-shared, mutable object has dynamic metastate explicitly set by programmers to determine "side of the partition"
– Designed, proven, implemented by Abadi et al for Bartok
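One way to picture dynamic separation's metastate: each shared object carries a flag naming its side of the partition, set explicitly via protect/unprotect, and accesses check it. A toy Java sketch of the idea (an illustration with invented names, not Abadi et al's Bartok implementation):

```java
// Toy dynamic separation: the object records which side of the
// partition it is on, and each access asserts it is on the right side.
class Separated<T> {
    private T value;
    private boolean inTxSide = true;   // true: transactional side

    Separated(T v) { value = v; }

    void unprotect() { inTxSide = false; } // move to non-transactional side
    void protect()   { inTxSide = true;  } // move back

    T readOutsideTx() {
        if (inTxSide) throw new IllegalStateException("object is tx-protected");
        return value;
    }
    T readInTx() {
        if (!inTxSide) throw new IllegalStateException("object is unprotected");
        return value;
    }
}
```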
Example redux

[diagram: ptr points to an object with fields f and g]

initially ptr.f == ptr.g

// Thread 1:
atomic {
  r = ptr;
  ptr = new C();
}
unprotect(r);
assert(r.f == r.g);

// Thread 2:
atomic {
  ++ptr.f;
  ++ptr.g;
}
Laura’s work
Design, semantics, implementation, and benchmarks for dynamic separation in Haskell

Primary contributions:
1. Regions: Change protection state of entire data structures in O(1) time
   – Cool idioms/benchmarks where this gives 2-6x speedup
2. Lazy-update implementation
   – Allows protection-state changes from within transactions
3. Interface allowing composable libraries that can be used inside or outside transactions, without breaking Haskell's types
4. Formal semantics in the style of STM Haskell
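Contribution 1 can be pictured concretely: every node of a data structure shares one region handle, so flipping the protection state of the whole structure is a single write. A hypothetical Java sketch of the idea (Laura's work is in Haskell; all names here are invented):

```java
// Sketch: many objects share one Region, so changing the protection
// state of an entire data structure is one field write, O(1), rather
// than a traversal touching every object.
class Region {
    private boolean inTxSide = true;
    void unprotect()   { inTxSide = false; }
    void protect()     { inTxSide = true; }
    boolean inTxSide() { return inTxSide; }
}

class Node {
    final Region region;   // every node of the structure shares this
    int value;
    Node next;
    Node(Region r, int v) { region = r; value = v; }
}
```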
Today
• A little history / organization: how I got here
• Informal, broad-not-deep overview of 4 ongoing projects
– Better semantics / languages for transactional memory x 2
• Dynamic separation for Haskell
• Semantics / abstraction for “escape actions”
– Deterministic Multiprocessing
– Code-centric communication graphs
• Hopefully time for discussion
Escape actions
Escape actions:
– Do not count for memory conflicts
– Are not undone if the transaction aborts
– Possible "strange results" if they race with transactional accesses

Essentially an unchecked back-door

Note: Open nesting is just escape { atomic { s } }
– So escaping is the essential primitive

atomic {
  s1;
  escape { s2; } // perhaps in a callee
  s3;
}
Canonical example
If escape actions are hidden behind strong abstractions, we can improve parallelism without affecting program behavior
– Clients cannot observe the escaping
Unique-id generation:

type id;
id   new_id();
bool compare_ids(id, id);
Transactions generating ids need not conflict with each other
If transaction aborts, no need to undo the id-generation
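The interface above can be fleshed out in Java with an atomic counter; the point is that the counter bump commutes with every transaction, so escaping it is invisible to clients as long as ids never leak out of aborted transactions. A sketch with hypothetical names, not the actual implementation from the work:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: a unique-id ADT whose internal counter bump could run as an
// escape action. Advancing the counter commutes with all transactions
// and needs no undo on abort: an id allocated by an aborted transaction
// is simply never observed.
class UniqueId {
    private static final AtomicLong counter = new AtomicLong();
    private final long bits;

    private UniqueId(long bits) { this.bits = bits; }

    static UniqueId newId() {             // body would sit in escape { ... }
        return new UniqueId(counter.incrementAndGet());
    }
    static boolean compareIds(UniqueId a, UniqueId b) {
        return a.bits == b.bits;
    }
}
```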
Matt’s work
• Formal semantics for escape actions
• Use it to prove the unique-id example correct
– Two implementations, one using escape
– Show no client is affected by the choice of implementation
– Fundamentally similar to proving ADTs actually work
• Gotcha: The theorem is false if a client abuses other escape actions to "leak ids"
– Discovered by attempting the proof!

atomic {
  id x = new_id();
  if (compare_ids(x, glbl)) …
  escape { glbl = x; }
  …
}
Today
• A little history / organization: how I got here
• Informal, broad-not-deep overview of 4 ongoing projects
– Better semantics / languages for transactional memory x 2
• Dynamic separation for Haskell
• Semantics / abstraction for “escape actions”
– Deterministic Multiprocessing
– Code-centric communication graphs
• Hopefully time for discussion
Deterministic C
Take arbitrary C + POSIX Threads and make behavior dependent only on inputs (not nondeterministic scheduling)
– Helps testing, debugging, reproducibility, replication

It's easy!
– Run one thread at a time with a deterministic context-switch
  • Example: run for N instructions or until blocking

It's hard!
– Need to recover scalability with reasonable overhead
  • Amdahl's Law is one tough cookie!
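The "easy" version can be sketched as a token handed around round-robin: each thread waits for the token, runs a bounded quantum, and passes it on. A toy Java sketch, not the actual DMP compiler/runtime; here a quantum ends wherever the thread chooses to call pass:

```java
// Toy deterministic scheduler: threads execute one at a time and hand
// the token around in a fixed round-robin order, so the interleaving
// depends only on the program, never on the OS scheduler.
class RoundRobin {
    private final int nThreads;
    private int turn = 0;

    RoundRobin(int nThreads) { this.nThreads = nThreads; }

    // Block until it is thread `id`'s turn to run a quantum.
    synchronized void await(int id) {
        while (turn != id) {
            try { wait(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // End of quantum: deterministically hand the token to the next thread.
    synchronized void pass(int id) {
        turn = (id + 1) % nThreads;
        notifyAll();
    }

    synchronized int whoseTurn() { return turn; }
}
```

Each worker thread wraps its quanta as await(id); …work…; pass(id), so every run produces the same schedule.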
How to do it
It's a long and interesting compiler, run-time, and correctness story
– Invite Luis over for an hour

Key techniques:
– Dynamic ownership of memory (run in parallel while threads access what they own)
– Buffering (publish buffers deterministically while not violating the language's memory-consistency model)
– No promise which deterministic execution the programmer will get (a tiny change to source code can affect behavior)

Performance:
– Depends on application
– Buffering has better scalability but worse per-thread overhead, so hybrid approaches are sometimes needed
Today
• A little history / organization: how I got here
• Informal, broad-not-deep overview of 4 ongoing projects
– Better semantics / languages for transactional memory x 2
• Dynamic separation for Haskell
• Semantics / abstraction for “escape actions”
– Deterministic Multiprocessing
– Code-centric communication graphs
• Hopefully time for discussion
Code-centric
In a shared-memory C/C#/Java program, any heap access might be inter-thread communication
– But very few actually are

Most prior work to detect/exploit this sparseness is data-centric:
– What objects are thread-local?
– What locks protect what memory?
Answers can find bugs, optimize programs, define code metrics, etc.
We provide a complementary code-centric view…
Graph
Nodes: Code units (e.g., functions)
Directed edges:
– Source did a write in thread T1
– Target read that write in thread T2
– T1 != T2

Current tool:
– Automatically build the graph of a (slower) dynamic execution
– Manual, easy clean-up by the programmer
– Rely heavily on state-of-the-art dynamic instrumentation (PIN) and graph visualization (Prefuse)
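The core of the edge construction can be sketched as: remember, for each memory location, the last writer's (thread, code unit); on a read by a different thread, add an edge from the writer's code unit to the reader's. A toy in-memory Java version of that logic (the real tool does this via PIN instrumentation):

```java
import java.util.*;

// Toy code-centric communication graph: nodes are code units (function
// names); an edge f -> g is added when a write made in f by thread T1
// is read in g by a different thread T2.
class CommGraph {
    private static final class Writer {
        final long thread; final String func;
        Writer(long thread, String func) { this.thread = thread; this.func = func; }
    }
    private final Map<Long, Writer> lastWriter = new HashMap<>(); // address -> last writer
    private final Set<String> edges = new LinkedHashSet<>();      // "f -> g"

    synchronized void onWrite(long thread, String func, long address) {
        lastWriter.put(address, new Writer(thread, func));
    }
    synchronized void onRead(long thread, String func, long address) {
        Writer w = lastWriter.get(address);
        if (w != null && w.thread != thread)   // inter-thread edges only
            edges.add(w.func + " -> " + func);
    }
    synchronized Set<String> edges() { return new LinkedHashSet<>(edges); }
}
```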
A toy example
queue q; // global, mutable

void enqueue(T* obj) { … }
T* dequeue() { … }

void consumer() { … T* t = dequeue(); … }
void producer() { … T* t = …; t->f = …; enqueue(t); … }

Program: multiple threads call producer and consumer

[communication graph over nodes: enqueue, dequeue, producer, consumer]

Tool supports "conceptual inlining" to allow multiple abstraction levels
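The toy can be rendered runnable in Java with a library queue standing in for the hand-written one; non-blocking offer/poll keep the sketch simple, and the class name is invented:

```java
import java.util.concurrent.*;

// Runnable rendering of the toy: producer writes the payload, then
// enqueues; consumer dequeues and reads it. The only cross-thread
// communication goes through the queue, so the communication graph's
// edges concentrate in enqueue/dequeue.
class Toy {
    static final BlockingQueue<int[]> q = new ArrayBlockingQueue<>(16);

    static void producer(int v) {
        int[] t = new int[1];
        t[0] = v;            // t->f = ...
        q.offer(t);          // enqueue(t); this toy drops on a full queue
    }
    static Integer consumer() {
        int[] t = q.poll();  // dequeue(); null on an empty queue
        return t == null ? null : t[0];
    }
}
```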
Not just for toys
• Small and large applications
– Example: MySQL (940 KLOC); graph clean-up by one grad student in < 1 day without prior source-code knowledge

• I truly believe:
– Great "first day of internship" tool
  • Interactive graph essential, and not our contribution
– Useful way to measure multithreaded behavior
  • Example: Graphs are, thankfully, very sparse
    MySQL: >11,000 functions, 423 nodes, 802 edges
  • Example: Graph diff across runs with the same input measures the nondeterminism of the program
But this is hard-to-evaluate tool work – your thoughts?
• Future work: Specification of graphs checked during execution
Summary
– Better semantics / languages for transactional memory x 2
  • Dynamic separation for Haskell
  • Semantics / abstraction for "escape actions"
– Deterministic Multiprocessing
– Code-centric communication graphs
Very little published yet, but all Real Soon Now
Microsoft has been essential– Transactions (Harris, Abadi, Peyton Jones, many more)– Funding (Scalable Multicore RFP, New Faculty Fellows)
Hopefully opportunities to collaborate– Particularly on the (unproven) SE applications of this work