Parallelism in Algorithms- Bernstein's Conditions



CS-421 Parallel Processing BE (CIS) Batch 2004-05 Handout_12



One way to detect parallelism in a sequential algorithm is to look for operations that can be carried out independently of each other.

Let I(s) = the input set of statement¹ s, i.e. the set of all variables read by s, and O(s) = the output set of statement s, i.e. the set of all variables written by s.

Two statements Si and Sj are (data) independent if ALL of the following conditions hold:

I(Sj) ∩ O(Si) = Φ (flow independence)

O(Si) ∩ O(Sj) = Φ (output independence)

I(Si) ∩ O(Sj) = Φ (anti-independence)

That is, two statements Si and Sj can be executed in parallel (denoted Si || Sj) if

[I(Sj) ∩ O(Si)] ∪ [O(Si) ∩ O(Sj)] ∪ [I(Si) ∩ O(Sj)] = Φ

These are called Bernstein's conditions and may be applied at different levels of granularity: for example, each S may be an entire procedure, a single statement, or just a machine instruction.

In general, a set of statements (processes) S1, S2, …, Sk can execute in parallel if Bernstein's conditions are satisfied on a pair-wise basis, i.e. S1 // S2 // … // Sk iff Si // Sj ∀ i ≠ j.
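The pairwise test above can be sketched in Python. This is a minimal sketch: the encoding of each statement as a pair of read/write sets, and the function names, are illustrative assumptions, not part of the handout.

```python
def independent(s_i, s_j):
    """Bernstein's conditions: Si and Sj are data independent iff
    I(Sj) ∩ O(Si), O(Si) ∩ O(Sj), and I(Si) ∩ O(Sj) are all empty."""
    I_i, O_i = s_i
    I_j, O_j = s_j
    return not (I_j & O_i) and not (O_i & O_j) and not (I_i & O_j)

def all_parallel(statements):
    """S1 // S2 // ... // Sk iff Si // Sj for every pair i != j."""
    return all(independent(statements[i], statements[j])
               for i in range(len(statements))
               for j in range(i + 1, len(statements)))

# s1: A := x + B   reads {x, B}, writes {A}
# s2: C := D * 3   reads {D},    writes {C}
s1 = ({'x', 'B'}, {'A'})
s2 = ({'D'}, {'C'})
print(independent(s1, s2))  # True: none of the three intersections is non-empty
```

Note that `all_parallel` checks every pair, reflecting the handout's point that the conditions must hold pair-wise, not just for adjacent statements.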

Properties of Parallelism // Operator

• // is commutative i.e., Si // Sj ⇒ Sj // Si

• // is NOT transitive i.e., Si // Sj & Sj // Sk doesn’t imply Si // Sk

These dependences are usually portrayed in the form of a data-dependence graph. This is a directed graph having as many nodes as there are procedures (or statements, depending upon the granularity of the data-dependence analysis).

e.g. 1

s1 : A := x + B;
s2 : C := A * 3;

s2 is flow dependent on s1, indicated by an arrow directed from s1 to s2.

e.g. 2

s1 : A := x + B;
s2 : A := 3;

s2 is output dependent on s1, indicated by an arrow directed from s1 to s2 with a small circle anywhere on the arrow.

e.g. 3

s1 : B := A + 3;
s2 : A := 3;

s2 is anti-dependent on s1, indicated by an arrow directed from s1 to s2 with a small line across the arrow.
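The three example pairs can be classified mechanically from their read/write sets. The sketch below assumes the same set-pair encoding of statements as before; it is an illustration, not part of the handout.

```python
def dependences(s1, s2):
    """Return the dependence types of s2 on s1, from Bernstein's conditions."""
    I1, O1 = s1
    I2, O2 = s2
    deps = []
    if I2 & O1: deps.append('flow')    # s2 reads what s1 writes
    if O1 & O2: deps.append('output')  # both write the same variable
    if I1 & O2: deps.append('anti')    # s2 writes what s1 reads
    return deps

# e.g. 1: s1: A := x + B;  s2: C := A * 3
print(dependences(({'x', 'B'}, {'A'}), ({'A'}, {'C'})))   # ['flow']
# e.g. 2: s1: A := x + B;  s2: A := 3
print(dependences(({'x', 'B'}, {'A'}), (set(), {'A'})))   # ['output']
# e.g. 3: s1: B := A + 3;  s2: A := 3
print(dependences(({'A'}, {'B'}), (set(), {'A'})))        # ['anti']
```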

¹ Here "statement" is used in quite a general sense in this handout; it can stand for a procedure, process, instruction, etc., whatever the level of granularity of the data-dependence analysis, as explained in class.


Compilers can use Bernstein's conditions to generate parallel code from sequential programs. However, this approach alone is not very effective, because it cannot expose the hidden parallelism that is usually exploited by restructuring operations. As an example of code restructuring, recall how loop unrolling did the trick while scheduling code for a VLIW machine.
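To make the restructuring point concrete, here is a minimal Python sketch (the function names and the unroll factor of 2 are illustrative assumptions, not from the handout):

```python
# Original loop: every iteration is flow dependent on the previous one
# through the single accumulator `total`, so Bernstein's conditions fail
# for any two iterations.
def rolled_sum(a):
    total = 0
    for x in a:
        total += x
    return total

# Unrolled by 2 with two accumulators: the two statements in the loop
# body read and write disjoint variables, so they satisfy Bernstein's
# conditions and a VLIW scheduler could issue them in the same cycle.
def unrolled_sum(a):
    t0 = t1 = 0
    for i in range(0, len(a) - 1, 2):
        t0 += a[i]      # independent of ...
        t1 += a[i + 1]  # ... this statement (disjoint accumulators)
    if len(a) % 2:      # leftover element when the length is odd
        t0 += a[-1]
    return t0 + t1      # combine the partial sums at the end
```

The parallelism was always present in the summation; it only becomes visible to a pairwise Bernstein check after the accumulator is split.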

In addition to the data dependence discussed above, we must also examine resource and control dependences, as explained below:

Resource Independence

There's a resource dependence between two statements (or instructions) if they need the same resource for execution. For instance, if there's only one adder in a machine, then two statements requiring addition cannot be executed in parallel, though they may be data independent.

Control Independence

If execution of statement S2 depends on some condition tested in statement S1 (for example, S1 is a branch), then S2 is said to be control dependent on S1.
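A minimal illustration (the function is a hypothetical example, not from the handout): the assignment below cannot be moved ahead of the test, because whether it executes at all is decided by the branch.

```python
def clamp_negative_to_zero(x):
    negative = x < 0   # S1: tests a condition (the branch)
    if negative:       # S2 is control dependent on S1:
        x = 0          # it executes only when S1's test is true
    return x
```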

For parallel execution, two (or more) statements must be independent in every regard, i.e. they must be control independent, data independent, and resource independent as well.

******