Automatic Generation of Code-Centric Graphs for Understanding Shared-Memory Communication


Page 1: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication

Automatic Generation of Code-Centric Graphs for Understanding Shared-Memory Communication

Dan Grossman

University of Washington

February 25, 2010

Page 2: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Joint work

sampa.cs.washington.edu

“safe multiprocessing architectures”

Page 3: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Key idea

• Build a “communication graph”
  – Nodes: units of code (e.g., functions)
  – Directed edges: shared-memory communication
  – The source node writes data that the destination node reads

[Figure: two nodes, foo and bar, with a directed edge foo → bar]

• This is code-centric, not data-centric
  – No indication of which addresses or how much data
  – No indication of locking protocol
  – Fundamentally complementary to data-centric views
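For intuition, here is a minimal illustration (mine, not from the talk) of the smallest program that would produce the foo → bar edge above: foo writes a shared location on one thread, and bar reads it on another.

    // Minimal illustration: foo writes shared x, bar reads it on another
    // thread, so the communication graph gets a directed edge foo -> bar.
    #include <atomic>
    #include <thread>

    static std::atomic<int> x{0};

    void foo() { x.store(1); }        // writer: source of the edge
    void bar() { (void)x.load(); }    // reader: destination of the edge

    int main() {
        std::thread t1(foo), t2(bar);
        t1.join(); t2.join();         // edge recorded only if bar's read sees foo's write
    }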

Page 4: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


First idea: a dynamic program-monitoring tool that outputs a graph

1. Run program with instrumentation (100x+ slowdown for now)
   – For a write to address addr by thread T1 running function f:
     table[addr] := (T1, f)
   – For a read of address addr by thread T2 running function g:
     if table[addr] == (T1, f) and T1 != T2, then include an edge from f to g
     (note that we can have f == g)

2. Show graph to developers off-line
   – Program understanding
   – Concurrency metrics

Execution graphs
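A minimal sketch of that table logic, assuming hypothetical hooks on_write/on_read in place of the real instrumentation callbacks (the actual tool does this inside PIN):

    // Last-writer table keyed by address; a cross-thread read emits an
    // edge (writer function -> reader function). Illustrative only.
    #include <cstdint>
    #include <mutex>
    #include <set>
    #include <string>
    #include <unordered_map>
    #include <utility>

    struct LastWrite { int tid; std::string fn; };

    static std::mutex mu;
    static std::unordered_map<uintptr_t, LastWrite> table;       // addr -> (T1, f)
    static std::set<std::pair<std::string, std::string>> edges;  // f -> g

    void on_write(uintptr_t addr, int tid, const std::string& fn) {
        std::lock_guard<std::mutex> lk(mu);
        table[addr] = {tid, fn};                         // table[addr] := (T1, f)
    }

    void on_read(uintptr_t addr, int tid, const std::string& fn) {
        std::lock_guard<std::mutex> lk(mu);
        auto it = table.find(addr);
        if (it != table.end() && it->second.tid != tid)  // cross-thread (T1 != T2)
            edges.insert({it->second.fn, fn});           // note: f may equal g
    }

Note that even with the table itself locked, the table update and the actual memory access are not atomic together; that remaining race is exactly what the soundness discussion later in the talk analyzes.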

Page 5: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Simple!

“It’s mostly that easy” thanks to two modern technologies:

1. PIN dynamic binary instrumentation
   – Essential: run real C/C++ apps without re-building/installing
   – Drawback: x86 only

2. Prefuse visualization framework
   – Essential: layout and navigation of large graphs
   – Drawback: hard for reviewers to appreciate interactivity

But of course there’s a bit more to the story…

Page 6: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


From an idea to a project

• Kinds of graphs: 1 dynamic execution isn’t always what you want

• Nodes: Function “inlining” is essential

• Semantics: Why our graphs are “unsound” but “close enough”

• Empirical Evaluation– Case studies: Useful graphs for real applications– Metrics: Using graphs to characterize program structure

• Ongoing work

Page 7: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Graphs

• Execution graph: Graph from one program execution

• Testing graph: union of graphs from runs over a test suite
  – Multiple runs can catch edges arising from nondeterminism

• Program graph: exactly the edges from all possible interleavings with all possible inputs
  – Undecidable, of course

• Specification graph: edges that the designer wishes to allow
  – Ongoing work: concise and modular specifications
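As a sketch (the edge-set representation is mine, for illustration), building a testing graph is just a set union over per-run execution graphs:

    // Sketch: a testing graph is the union of the edge sets recorded by
    // several instrumented runs over a test suite. Types are illustrative.
    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    using Edge  = std::pair<std::string, std::string>;  // writer fn -> reader fn
    using Graph = std::set<Edge>;

    Graph testing_graph(const std::vector<Graph>& runs) {
        Graph out;
        for (const Graph& g : runs)
            out.insert(g.begin(), g.end());             // set union
        return out;
    }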

Page 8: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Inclusions

Execution Graph ⊆ Testing Graph ⊆ Program Graph

Comparing the testing graph against the specification graph:
– Tests don’t cover spec: write more tests and/or restrict the spec
– Spec doesn’t cover tests: find the bug and/or relax the spec

Page 9: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Diffs

(Automated) graph difference is also valuable:

• Across runs with the same input
  – Reveals communication nondeterminism

• Across runs with different inputs
  – Reveals dependence of communication on input

• Versus a precomputed testing graph
  – Reveals anomalies with respect to behavior already seen

• …
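The diff itself is a set difference under the same illustrative edge-set representation: edges present in a new run but absent from a baseline (such as the precomputed testing graph) flag previously unseen communication.

    // Sketch of an automated graph diff; types are illustrative.
    #include <set>
    #include <string>
    #include <utility>

    using Edge  = std::pair<std::string, std::string>;
    using Graph = std::set<Edge>;

    // Edges in `run` that the baseline has never exhibited.
    Graph diff(const Graph& run, const Graph& baseline) {
        Graph out;
        for (const Edge& e : run)
            if (!baseline.count(e))
                out.insert(e);          // new, possibly anomalous edge
        return out;
    }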

Page 10: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


From an idea to a project

• Kinds of graphs: 1 dynamic execution isn’t always what you want

• Nodes: Function “inlining” is essential

• Semantics: Why our graphs are “unsound” but “close enough”

• Empirical Evaluation– Case studies: Useful graphs for real applications– Metrics: Using graphs to characterize program structure

• Ongoing work

Page 11: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


A toy program (skeleton)

queue q; // global, mutable

void enqueue(T* obj) { … }

T* dequeue() { … }

void consumer(){ … T* t = dequeue(); … t->f … }

void producer(){ … T* t = …; t->f=…; enqueue(t) … }

Program: multiple threads call producer and consumer

[Figure: communication graph over the nodes producer, consumer, enqueue, dequeue]
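A hypothetical completion of the skeleton (the slides deliberately elide the bodies), assuming a mutex-and-condition-variable queue; the communicating accesses are the write and read of t->f plus the queue’s internal state.

    // Hypothetical bodies for the toy program; not from the talk.
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct T { int f; };

    static std::queue<T*> q;   // global, mutable
    static std::mutex mu;
    static std::condition_variable cv;

    void enqueue(T* obj) {
        { std::lock_guard<std::mutex> lk(mu); q.push(obj); }
        cv.notify_one();
    }

    T* dequeue() {
        std::unique_lock<std::mutex> lk(mu);
        cv.wait(lk, []{ return !q.empty(); });
        T* t = q.front(); q.pop();
        return t;
    }

    void producer() { T* t = new T; t->f = 42; enqueue(t); }     // writes t->f
    void consumer() { T* t = dequeue(); (void)t->f; delete t; }  // reads t->f

    int main() {
        std::thread p(producer), c(consumer);
        p.join(); c.join();
    }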

Page 12: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Multiple abstraction levels

// use q as a task queue with

// multiple enqueuers and dequeuers
queue q; // global, mutable

void enqueue(int i) { … }

int dequeue() { … }

void f1(){ … enqueue(i) … }

void f2(){ … enqueue(j) … }

void g1(){ … dequeue() … }

void g2(){ … dequeue() … }

[Figure: graph with a single edge enqueue → dequeue]

Page 13: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Multiple abstraction levels

// use q1, q2, q3 to set up a pipeline

queue q1, q2, q3; // global, mutable

void enqueue(queue q, int i) { … }

int dequeue(queue q) { … }

void f1(){ … enqueue(q1,i) … }

void f2(){ … dequeue(q1) … enqueue(q2,j) … }

void g1(){ … dequeue(q2) … enqueue(q3,k) … }

void g2(){ … dequeue(q3) … }

[Figure: still a single edge enqueue → dequeue]

Page 14: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


“Inlining” enqueue & dequeue

// use q as a task queue with

// multiple enqueuers and dequeuers
queue q; // global, mutable

void enqueue(int i) { … }

int dequeue() { … }

void f1(){ … enqueue(i) … }

void f2(){ … enqueue(j) … }

void g1(){ … dequeue() … }

void g2(){ … dequeue() … }

[Figure: after inlining, edges from each of f1, f2 to each of g1, g2]

Page 15: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


“Inlining” enqueue & dequeue

// use q1, q2, q3 to set up a pipeline
queue q1, q2, q3; // global, mutable

void enqueue(queue q, int i) { … }

int dequeue(queue q) { … }

void f1(){ … enqueue(q1,i) … }

void f2(){ … dequeue(q1) … enqueue(q2,j) … }

void g1(){ … dequeue(q2) … enqueue(q3,k) … }

void g2(){ … dequeue(q3) … }

[Figure: after inlining, the chain f1 → f2 → g1 → g2, revealing the pipeline]

Page 16: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Inlining Moral

• Different abstraction levels view communication differently

• All layers are important to someone
  – Queue layer: pipeline stages communicate only via queues
  – Higher layer: stages actually form a pipeline

• Current tool: programmer specifies which functions to inline
  – To control instrumentation overhead; changing the set requires a re-run
  – Ongoing work: maintaining full call stacks efficiently

• In our experience: inline most “util” functions and DLL calls
  – Also “prune” custom allocators and one-time initializations from the graph
  – Pruning is different from inlining; it can be done offline
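One plausible way to implement the attribution (a sketch under my own assumptions, not necessarily the tool’s mechanism): keep a per-thread shadow call stack and charge each memory access to the nearest enclosing function that is not marked for inlining.

    // Illustrative inlining attribution; kInlined and the shadow stack
    // are assumptions, not the actual tool's data structures.
    #include <set>
    #include <string>
    #include <vector>

    static const std::set<std::string> kInlined = {"enqueue", "dequeue"};
    thread_local std::vector<std::string> call_stack;  // maintained by entry/exit hooks

    // Node name to attribute the current memory access to.
    std::string attribution() {
        for (auto it = call_stack.rbegin(); it != call_stack.rend(); ++it)
            if (!kInlined.count(*it))
                return *it;                            // nearest non-inlined caller
        return call_stack.empty() ? "<unknown>" : call_stack.back();
    }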

Page 17: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


From an idea to a project

• Kinds of graphs: 1 dynamic execution isn’t always what you want

• Nodes: Function “inlining” is essential

• Semantics: Why our graphs are “unsound” but “close enough”

• Empirical Evaluation– Case studies: Useful graphs for real applications– Metrics: Using graphs to characterize program structure

• Ongoing work

Page 18: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Allowing memory-table races

One interleaving, with time flowing downward (update and lookup are the instrumentation’s memory-table operations):

  Thread 1 (in f): update(1,&f,&x);                    // table[&x] := (1,f)
  Thread 2 (in g): update(2,&g,&x); x = false;         // table[&x] := (2,g)
  Thread 1 (in f): x = true;                           // actual write; table still says (2,g)
  Thread 3 (in h): lookup(3,&h,x);                     // reads true, emits edge g → h
  Thread 3 (in h): if(x) { update(3,&h,&y); y=42; }
  Thread 1 (in f): lookup(1,&f,y); return y;           // emits edge h → f

Generated graph is impossible! But each edge is possible!

[Figure: generated graph g → h → f]

Page 19: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Soundness moral

• Graph is correct in the absence of data races

• Data races can break the atomicity of our instrumentation with respect to the actual memory access
  – Hence a wrong edge may be emitted
  – But a different schedule would emit that edge
  – We do not change program semantics (possible behaviors)

• As we saw in the example, the set of edges produced may be wrong not just for that execution but for any execution

• There are other ways to resolve these trade-offs

Page 20: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


From an idea to a project

• Kinds of graphs: 1 dynamic execution isn’t always what you want

• Nodes: Function “inlining” is essential

• Semantics: Why our graphs are “unsound” but “close enough”

• Empirical Evaluation– Case studies: Useful graphs for real applications– Metrics: Using graphs to characterize program structure

• Ongoing work

Page 21: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


How big are the graphs? (With “appropriate” inlining and pruning)
– Inlining/pruning chosen by grad students unfamiliar with the underlying apps
– Raw numbers are in a similar ballpark (see the paper)

Benchmark    LOC        Functions   Nodes (% of fns)   Edges
bodytrack    11,804     233         68  (29%)          279
canneal      4,085      42          10  (24%)          23
dedup        3,683      122         23  (19%)          95
facesim      29,355     1,454       58  (4%)           226
ferret       15,035     1,861       34  (2%)           60
raytrace     12,878     5,589       6   (<1%)          7
vips         174,151    5,064       100 (2%)           548
x264         37,413     408         123 (30%)          435

Page 22: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


What about huge programs?

Communication does not grow nearly as quickly as program size
– The resulting graphs are big, but interactive visualization makes them surprisingly interesting

Program   LOC       Functions   Nodes (% of fns)   Edges
mysqld    941,021   11,215      423 (4%)           802

You really need to see the graphs…
– Physics-based layout takes a minute or two (pre-done)

Page 23: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Node degree

The graphs are very sparse
– Even most nodes have low degree

Page 24: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Changes across runs

• Little graph difference with the same inputs
  – 5 of 15 programs were still “finding” new edges by the fifth run
  – A way to measure “observed nondeterminism”

• More difference for different inputs
  – Depends on the application (0%-50% new edges)
  – Edges in the intersection of all inputs are an interesting subset

Page 25: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Ongoing work

• A Java tool, rather than C/C++ via binary instrumentation

• A real programming model for specifying allowed communication
  – Conciseness and modularity are the key points

• Performance
  – Currently a heavyweight debugging tool (like Valgrind)
  – Just performant enough to avoid time-outs!

Page 26: Automatic Generation of  Code-Centric Graphs for Understanding Shared-Memory Communication


Thanks!

http://www.cs.washington.edu/homes/bpw/osha/

http://sampa.cs.washington.edu