Scheduling of parallel processes


Page 1: Scheduling of parallel processes

Scheduling of parallel processes

Antal Iványi and Rudolf Szendrei
Department of Computer Algebra, Eötvös Loránd University

[email protected] and [email protected]

Page 2: Scheduling of parallel processes

2

Introduction 3
Basic concepts 4
Aims of the investigations 10
Algorithms 11
Mathematical results 24
Results of simulation 32
Summary 39
References 40

Scheduling of parallel processes

Page 3: Scheduling of parallel processes

3

Our problem originates in physics: percolation is the physical process in which a liquid flows through a porous medium (cf. the percolator used to brew coffee in the US).

Our mathematical model also describes parallel processes using a resource that requires mutual exclusion.

A simple interpretation: two philosophers (A and B) have a conversation. Both have a plan (A a sequence a and B a sequence b) of zeros and ones, where ai = 1 means that A wishes to speak in the i-th time unit and ai = 0 means that A does not wish to speak in the i-th time unit.

An important property of our philosophers is that they are willing to postpone their silent periods to the end of the conversation.

Introduction

Page 4: Scheduling of parallel processes

4

A binary matrix A of size m×n is called

r-good, if it contains at most r 1's in each column;

r-safe, if for every k (k = 1, 2, …, n) the first k columns of the matrix contain at most kr 1's;

r-schedulable (or Winkler-schedulable, or compatible), if it can be transformed into an r-good matrix by repeating the following operation: we remove an arbitrary zero element aij = 0, shift the elements ai,j+1, …, ai,n one position to the left, and append a new element ain = 0 to the i-th row of the matrix.

Basic concepts/1

Page 5: Scheduling of parallel processes

5

Example of a good matrix: [m = 2, n = 4, r = 1]

1 0 0 1

0 0 1 0

Example of a safe but not good matrix:

1 0 1 0

0 0 1 1

Basic concepts/2

Page 6: Scheduling of parallel processes

6

Basic concepts/3

The previous matrix is not only safe, but at the same time also schedulable:

1 0 1 0

0 0 1 1

Shifting out the second element of the first row results in the following good matrix:

1 1 0 0

0 0 1 1
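
The shifting operation behind schedulability is easy to make concrete. The following minimal Python sketch (our illustration; the function name is not from the slides) performs exactly the shift shown on this slide:

    def shift_out_zero(row, j):
        """Remove the zero at position j (0-based), shift the tail one place
        to the left, and append a new zero at the end of the row."""
        assert row[j] == 0
        return row[:j] + row[j + 1:] + [0]

    # The safe-but-not-good example matrix from this slide:
    A = [[1, 0, 1, 0],
         [0, 0, 1, 1]]
    A[0] = shift_out_zero(A[0], 1)   # shift out the second element of the first row
    print(A)                         # [[1, 1, 0, 0], [0, 0, 1, 1]] -- a 1-good matrix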

Page 7: Scheduling of parallel processes

7

Let m and n be positive integers, let r (0 ≤ r ≤ m) be a real number, and let

$$Z = \begin{pmatrix} z_{11} & z_{12} & \cdots & z_{1n} \\ z_{21} & z_{22} & \cdots & z_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ z_{m1} & z_{m2} & \cdots & z_{mn} \end{pmatrix}$$

be a matrix containing independent random variables zij having the common distribution

$$P(z_{ij} = k) = \begin{cases} p, & \text{if } k = 1,\\ q = 1 - p, & \text{if } k = 0 \end{cases} \qquad (1 \le i \le m,\ 1 \le j \le n).$$

Basic concepts/4

Page 8: Scheduling of parallel processes

8

Let matrix A be a concrete realization of the matrix Z.

Let the number of different m×n sized r-good matrices be denoted by Gr(m, n), and the probability that Z is r-good by gr(m, n, p).

Let the number of different m×n sized r-safe matrices be denoted by Sr(m, n), and the probability that Z is r-safe by sr(m, n, p).

Let the number of different m×n sized r-schedulable matrices be denoted by Wr(m, n), and the probability that Z is r-schedulable by wr(m, n, p). The function wr(m, n, p) is called the r-schedulability function.

Basic concepts/5

Page 9: Scheduling of parallel processes

9

The functions gr(m, n, p), wr(m, n, p) and sr(m, n, p) are called density functions. The asymptotic density functions of the good, safe and schedulable matrices are defined as

$$g_r(m, p) = \lim_{n\to\infty} g_r(m, n, p), \qquad s_r(m, p) = \lim_{n\to\infty} s_r(m, n, p), \qquad w_r(m, p) = \lim_{n\to\infty} w_r(m, n, p).$$

The following suprema are called critical probabilities:

$$g_{crit,r}(m) = \sup\{p : g_r(m, p) > 0\}, \qquad w_{crit,r}(m) = \sup\{p : w_r(m, p) > 0\}, \qquad s_{crit,r}(m) = \sup\{p : s_r(m, p) > 0\}.$$

Basic concepts/6

Page 10: Scheduling of parallel processes

10

The starting point of our investigations is a paper of Péter Gács [7], containing the proof of the assertion that if p is small enough, then w1(2, p) > 0. According to the proof of the theorem, wcrit,1(2) ≥ 10^(-400). The aims of our work are as follows:

To propose quick algorithms to decide whether a given matrix A is r-good, r-safe or r-schedulable;

To determine the number of the different types of matrices for different parameters;

To determine (or at least to estimate) the different important probabilities, among others the critical probability wcrit,1(2) for two processes.

Remark. The special case m = 1 and r = 1 is the well-known ticket problem (ballot problem), and the special case m = 2 and r = 1 represents the Winkler model of percolation.

In the case m ≥ 1, the good matrices yield lower bounds and the safe matrices yield upper bounds on the asymptotic probability that a concrete realization of the matrix Z is 1-schedulable, and one can determine the critical probability below which Z is 1-safe with positive probability.

We propose an algorithm that is faster than Gács's algorithm in the two-dimensional case, and we have extended this algorithm to the investigation of matrices of arbitrary dimension.

AIMS OF THE INVESTIGATIONS

Page 11: Scheduling of parallel processes

11

We only have to check whether the sum of the elements in each column is smaller than two, and we only have to proceed up to the first column for which this condition fails.

GOOD(n)
  for j = 1 to n
      s := 0
      for i = 1 to m
          s := s + a[i, j]
          if s > 1
              then return "column j is black"
  return "the matrix is good"

For fixed m this algorithm runs in Θ(n) time in the worst case, and in Θ(1) time in the best and in the expected case.
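
A direct transcription of GOOD into Python (a minimal sketch; the function name and the boolean return value are our choices, not part of the slides):

    def is_good(matrix, r=1):
        """Return True if every column of the 0/1 matrix contains at most r ones."""
        for j, column in enumerate(zip(*matrix), start=1):
            if sum(column) > r:
                return False            # column j is "black"
        return True

    print(is_good([[1, 0, 0, 1], [0, 0, 1, 0]]))   # True  (the good example matrix)
    print(is_good([[1, 0, 1, 0], [0, 0, 1, 1]]))   # False (the third column has two 1's)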

Algorithms/1 Good matrices

Page 12: Scheduling of parallel processes

12

We count the 1's column by column and, after each column, compare their number with the index of that column.

SAFE(n)
  s := 0
  for j = 1 to n
      for i = 1 to m
          s := s + a[i, j]
      if s > j
          then return "there are too many 1's in the first j columns"
  return "the matrix is safe"

This algorithm runs in Θ(n) time in the worst case, and the best-case running time is Θ(1).
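
The safety test can be sketched in Python in the same way (again with our own illustrative naming):

    def is_safe(matrix, r=1):
        """Return True if, for every k, the first k columns contain at most k*r ones."""
        ones_so_far = 0
        for k, column in enumerate(zip(*matrix), start=1):
            ones_so_far += sum(column)
            if ones_so_far > k * r:
                return False
        return True

    print(is_safe([[1, 0, 1, 0], [0, 0, 1, 1]]))   # True  (the safe example matrix)
    print(is_safe([[1, 1, 0, 0], [1, 0, 0, 0]]))   # False (the first column has two 1's)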

Algorithms/2 Safe matrices

Page 13: Scheduling of parallel processes

13

Let x = (x1,…,xn) be the first, and y = (y1,…,yn) be the second row of a binary matrix A.

At first we define a directed graph G(A) = (V, E) as follows. Let V be the set of points (i, j) (0 ≤ i ≤ n, 0 ≤ j ≤ n); the edges of G(A) are defined as follows:

1. if xi = yj = 1, then the vertex (i - 1, j - 1) is not the starting vertex of any edge;

2. if xi = yj = 0, then two edges start from the vertex (i - 1, j - 1), one to (i, j - 1) and the other to (i - 1, j);

3. if xi = 0 and yj = 1, then two edges start from the vertex (i - 1, j - 1), one to (i, j) and the other to (i, j - 1);

4. if xi = 1 and yj = 0, then two edges start from the vertex (i - 1, j - 1), one to (i, j) and the other to (i - 1, j).

Algorithms/3 Schedulability/1

Page 14: Scheduling of parallel processes

14

[Figure: the grid graph G(A) for the example matrix with rows x = (0, 1, 0, 0) and y = (0, 1, 1, 0).]

Algorithms/4 Schedulability/2

Page 15: Scheduling of parallel processes

15

It is known [7] that a matrix A is schedulable if and only if the graph G(A) contains a directed path from the vertex (0, 0) to a border vertex (n, i) or (i, n).

A simple method to decide whether such a directed path exists is the following. We use a queue Q and investigate the vertex (0, 0). If vertex (0, 0) is not a starting point of an edge, then matrix A is not schedulable. Otherwise we put the endpoints of the edges starting at (0, 0) into Q.

Then we repeatedly investigate the head element of Q: we remove it from Q and put the endpoints of the edges starting at it (if there are any) into Q, until we put a border vertex (n, i) or (i, n) into Q or Q becomes empty.

We propose a quicker algorithm having the same quadratic worst-case running time, but a better average running time than the previous method. The difference between the graph G(A) and our graph H(A) is that if xi ≠ yj, then we put only the vertex (i, j), the endpoint of the diagonal edge, into Q.
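
For two rows the queue-based search can be written compactly. The following Python sketch (our illustration; the names are not from the slides) encodes the edge rules of G(A) given on the previous slide together with the border test:

    from collections import deque

    def is_schedulable(x, y):
        """Breadth-first search in G(A): the matrix with rows x and y is
        schedulable iff a directed path leads from (0, 0) to the border i = n or j = n."""
        n = len(x)
        seen, queue = {(0, 0)}, deque([(0, 0)])
        while queue:
            i, j = queue.popleft()
            if i == n or j == n:
                return True
            if x[i] == 1 and y[j] == 1:              # blocked vertex: no outgoing edges
                continue
            if x[i] == 0 and y[j] == 0:
                nexts = [(i + 1, j), (i, j + 1)]
            else:                                     # exactly one of x[i], y[j] is 1
                nexts = [(i + 1, j + 1), (i + 1, j) if x[i] == 0 else (i, j + 1)]
            for v in nexts:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    print(is_schedulable([1, 0, 1, 0], [0, 0, 1, 1]))   # True: the safe example matrix is schedulable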

Algorithms/5 Schedulability/3

Page 16: Scheduling of parallel processes

16

Using the NATURAL algorithm, implemented on the basis of the lemma, we determined for n = 1, 2, …, 17 the number G1(2, n) of good, S1(2, n) of safe and W1(2, n) of schedulable matrices, as well as, for p = 0.5, p = 0.3 and p = 0.25, the probabilities G1(2, n, p), S1(2, n, p) and W1(2, n, p).

Algorithms/6 Schedulability

Page 17: Scheduling of parallel processes

17

NATURAL(m, n, p)
  OSZLOPOSAN-ELLENŐRÍZ(m, n)
  G1(m, n) := jo_matrix
  S1(m, n) := biztos_matrix
  W1(m, n) := komp_matrix
  G1(m, n, p) := 0
  S1(m, n, p) := 0
  W1(m, n, p) := 0
  for i = 0 to n*m
      do G1(m, n, p) := G1(m, n, p) + p^i * (1 - p)^(n*m - i) * jo_szam[i]
         S1(m, n, p) := S1(m, n, p) + p^i * (1 - p)^(n*m - i) * bizt_szam[i]
         W1(m, n, p) := W1(m, n, p) + p^i * (1 - p)^(n*m - i) * komp_szam[i]
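
The weighting performed in the final loop of NATURAL can be checked by brute force for small matrices. The sketch below (our own illustration, not the authors' implementation) enumerates all m×n binary matrices and sums p^i * (1 - p)^(nm - i) over the good ones:

    from itertools import product

    def probability_good(m, n, p):
        """P(the random m-by-n matrix Z is 1-good), computed by brute force."""
        total = 0.0
        for cells in product((0, 1), repeat=m * n):   # rows laid out one after another
            columns = [cells[j::n] for j in range(n)]
            if all(sum(col) <= 1 for col in columns):
                ones = sum(cells)
                total += p**ones * (1 - p)**(m * n - ones)
        return total

    print(round(probability_good(2, 4, 0.5), 3))   # 0.316, as in Table 1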

Algorithms/7 Schedulability - NATURAL

Page 18: Scheduling of parallel processes

18

OSZLOPOSAN-ELLENŐRÍZ(m, n)
  jo_matrix := 0
  biztos_matrix := 0
  komp_matrix := 0
  for i = 0 to n*m
      do jo_szam[i] := 0
         bizt_szam[i] := 0
         komp_szam[i] := 0
  for i = 0 to n - 1
      do oszlopok[i] := 0
         egyesek_az_oszlopban[i] := 0
         egyesek_idaig[i] := 0
  STOP := 0
  for i = 0 to m - 1
      do STOP := (STOP << 1) + 1
  ...
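
(Our glosses of the Hungarian identifiers, for readability: jo_matrix, biztos_matrix and komp_matrix count the good, safe and schedulable (compatible) matrices; jo_szam[i], bizt_szam[i] and komp_szam[i] count them according to the number i of 1's; oszlopok stores the columns as bit masks; egyesek_az_oszlopban[i] is the number of 1's in column i; egyesek_idaig[i] is the number of 1's up to column i; STOP is the bit mask of m ones.)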

Algorithms/8 Schedulability - OSZLOPOSAN-ELLENŐRÍZ

Page 19: Scheduling of parallel processes

19

...

  while oszlopok[0] < STOP
      do biztos_matrix := biztos_matrix + 1
         bizt_szam[egyesek_idaig[n - 1]] := bizt_szam[egyesek_idaig[n - 1]] + 1
         JO := TRUE
         i := 0
         while i < n and JO
             do if 1 < egyesek_az_oszlopban[i]
                    then JO := FALSE
                i := i + 1
         if JO
             then jo_matrix := jo_matrix + 1
                  komp_matrix := komp_matrix + 1
                  jo_szam[egyesek_idaig[n - 1]] := jo_szam[egyesek_idaig[n - 1]] + 1
                  komp_szam[egyesek_idaig[n - 1]] := komp_szam[egyesek_idaig[n - 1]] + 1
             else if SCHEDULABLE(oszlopok, m, n)
                      then komp_matrix := komp_matrix + 1
                           komp_szam[egyesek_idaig[n - 1]] := komp_szam[egyesek_idaig[n - 1]] + 1

...

Algorithms/9 Schedulability – Column-Check

Page 20: Scheduling of parallel processes

20

...
  for i = n - 1 downto 0
      do BIZTOS := FALSE
         while oszlopok[i] < STOP
             do egyesek := 0
                oszlopok[i] := oszlopok[i] + 1
                oszlop := oszlopok[i]
                while oszlop ≠ 0
                    do if oszlop & 1
                           then egyesek := egyesek + 1
                       oszlop := oszlop >> 1
                if egyesek_idaig[i] + egyesek - egyesek_az_oszlopban[i] <= i + 1
                    then BIZTOS := TRUE
                         egyesek_idaig[i] := egyesek_idaig[i] + egyesek - egyesek_az_oszlopban[i]
                         egyesek_az_oszlopban[i] := egyesek
                         for j = i + 1 to n - 1
                             do egyesek_idaig[j] := egyesek_idaig[i]
                         if i < n - 1
                             then for j = i + 1 to n - 1
                                      do oszlopok[j] := 0
                                         egyesek_az_oszlopban[j] := 0
                         exit while
         if BIZTOS
             then exit for

Algorithms/10 Schedulability – Column-Check

Page 21: Scheduling of parallel processes

21

SCHEDULABLE(oszlopok, m, n)

  if m < 2
      then return TRUE
  if n < 2
      then return TRUE
  koord1 := (0, ..., 0)
  H := {}
  H := H ∪ {koord1}
  while H ≠ {}
      do koord1 := min(H)
         H := H \ {koord1}
         iranyok_szama := 0
         hataron := 0
         for i = 0 to m - 1
             do leptetes := koord1[i]
                egyes := (oszlopok[leptetes] >> i) & 1
                if leptetes >= n
                    then hataron := hataron + 1
                if egyes ≠ 0
                    then iranyok_szama := iranyok_szama + 1

...

Algorithms/11 Schedulability - Schedulable

Page 22: Scheduling of parallel processes

22

...

         if hataron > m - 2
             then return TRUE
         if iranyok_szama = 0
             then for i = 0 to m - 1
                      do if koord1[i] < n
                             then koord2 := koord1
                                  koord2[i] := koord2[i] + 1
                                  H := H ∪ {koord2}
             else if iranyok_szama = 1
                      then koord2 := koord1
                           for i = 0 to m - 1
                               do if koord2[i] + 1 < n
                                      then koord2[i] := koord2[i] + 1
                                      else koord2[i] := n
                           H := H ∪ {koord2}
  return FALSE

Algorithms/12 Schedulability - Schedulable (cont.)

Page 23: Scheduling of parallel processes

23

In the graph G(A) there are on average (2 + 2 + 2 + 0)/4 = 1.5 edges per vertex. In our graph H(A), when the corresponding elements xi and yj differ, it is sufficient to keep only the edge going to the vertex (i, j). Therefore the graph H(A) contains on average (2 + 1 + 1 + 0)/4 = 1 edge per vertex.

We investigated the m-dimensional generalization of the Winkler model and generalized the two-dimensional algorithms mentioned above.

Since our m-dimensional algorithm expands each of the (n + 1)^m vertices at most once (and at each expansion investigates at most m outgoing edges), its worst-case running time is O(m·n^m). If all elements of the investigated matrix equal zero, then our algorithm expands about half of the vertices; therefore the worst-case running time is Θ(m·n^m).

Algorithms/13 Schedulability

Page 24: Scheduling of parallel processes

24

Using different methods we investigated the asymptotic density of the safe matrices as a function of the probability p and of the number m of rows.

Some properties of the functions gr(m, n, p), wr(m, n, p) and sr(m, n, p):

1. they converge as n → +∞ for every r (0 ≤ r ≤ m) and every p (0 ≤ p ≤ 1);
2. they are monotone decreasing as a function of n;
3. monotone decreasing as a function of p;
4. monotone decreasing as a function of m;
5. monotone decreasing as a function of r.

In the following we suppose r = 1, therefore we omit the index r.

Mathematical results/1

Page 25: Scheduling of parallel processes

25

In the following we need some lemmas. Let Cn (where n is a positive integer) denote the number of binary sequences a1, a2, …, a2n containing n zeros and n ones such that every prefix a1, a2, …, ak (1 ≤ k ≤ 2n) contains at most as many ones as zeros.

Lemma. If n ≥ 0, then

$$C_n = \frac{1}{n+1}\binom{2n}{n},$$

where Cn is the n-th Catalan number.

Lemma. If 0 ≤ x ≤ 1, then

$$f(x) = x\sum_{k=0}^{\infty}\frac{1}{k+1}\binom{2k}{k}\,x^{k}(1-x)^{k} = \begin{cases}\dfrac{x}{1-x}, & \text{if } 0\le x\le \dfrac12,\\[2mm] 1, & \text{if } \dfrac12\le x\le 1.\end{cases}$$

Mathematical results/2
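
One way to see the second lemma (our own sketch, using the generating function of the Catalan numbers; the derivation is not spelled out on the slide):

$$\sum_{k\ge 0} C_k\,y^{k} = \frac{1-\sqrt{1-4y}}{2y}, \qquad y = x(1-x)\ \Longrightarrow\ \sqrt{1-4x(1-x)} = |1-2x|,$$

so f(x) = x·(1 − |1 − 2x|)/(2x(1 − x)), which equals x/(1 − x) for 0 ≤ x ≤ 1/2 and 1 for 1/2 ≤ x ≤ 1.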

Page 26: Scheduling of parallel processes

26

Lemma. If m ≥ 2, then the good matrices are schedulable, and the schedulable matrices are safe.

Corollary. If m ≥ 2, then

g(m, n, p) ≤ w(m, n, p) ≤ s(m, n, p),
g(m, p) ≤ w(m, p) ≤ s(m, p),
gcrit(m) ≤ wcrit(m) ≤ scrit(m).

Analysis of the m×n sized safe matrices in the case m ≥ 4 is similar to the case m = 3.

In our research we used the close connection between the properties of the safe matrices and a random walk along the x-axis. If a column of A contains b ≥ 2 1's, then the walking point jumps b − 1 positions to the left; if the column contains one 1, then the point stays; and finally, if the column contains m zeros, then the walking point steps to the right.

Mathematical results/3

Page 27: Scheduling of parallel processes

27

Probability of the jump of b − 1 positions to the left (a column containing b ≥ 2 1's):

$$\binom{m}{b} p^{b} q^{m-b}$$

Probability of the step to the left (a column containing exactly two 1's, b = 2):

$$\binom{m}{2} p^{2} q^{m-2}$$

Probability of staying (a column containing exactly one 1):

$$\binom{m}{1} p\, q^{m-1}$$

Probability of the step to the right (a column containing no 1's):

$$q^{m}$$

Taking into account the results obtained earlier for m = 2 and m = 3, we get the following general assertion.

Theorem. If m ≥ 2 and 0 ≤ p ≤ 1, then

$$s(m, p) = \begin{cases} 1 - \displaystyle\sum_{i=2}^{m} (i-1)\binom{m}{i}\left(\frac{p}{1-p}\right)^{i}, & \text{if } 0 \le p \le \dfrac{1}{m},\\[2mm] 0, & \text{if } \dfrac{1}{m} \le p \le 1,\end{cases}$$

and

$$s_{crit}(m) = \frac{1}{m}.$$
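
The closed form is easy to evaluate numerically; the following Python sketch (our illustration) reproduces the limits 5/9 and 120/169 quoted later for the s(2, n, 0.4) and s(2, n, 0.35) columns of the simulation tables:

    from math import comb

    def s_limit(m, p):
        """Asymptotic density of the 1-safe matrices according to the theorem above;
        it vanishes at and above the critical probability 1/m."""
        if p >= 1.0 / m:
            return 0.0
        ratio = p / (1 - p)
        return 1 - sum((i - 1) * comb(m, i) * ratio**i for i in range(2, m + 1))

    print(s_limit(2, 0.4))    # 0.5555... = 5/9
    print(s_limit(2, 0.35))   # 0.7100... = 120/169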

Mathematical results/4

Page 28: Scheduling of parallel processes

28

For simplicity we analyse the function u(2, n, p) = 1 − s(2, n, p) instead of s(2, n, p).

Lemma. If n ≥ 1, then

$$u(2, n, p) = \sum_{i=1}^{n} p^{2} \sum_{j=0}^{\lfloor (i-1)/2\rfloor} \binom{i-1}{2j}\, C_{j}\,(pq)^{2j}\,(2pq)^{\,i-1-2j}.$$

Lemma. If 0 ≤ p ≤ 1, then

$$u(2, p) = \begin{cases}\dfrac{p^{2}}{q^{2}}, & \text{if } 0 \le p \le \dfrac12,\\[2mm] 1, & \text{if } \dfrac12 \le p \le 1.\end{cases}$$

Mathematical results/5 Matrices with 2 rows
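
As a quick consistency check of the first lemma, the Python sketch below (our illustration) evaluates 1 − u(2, n, 0.5) and reproduces the first values 0.750, 0.625, 0.547, 0.492 of the s column of Table 1:

    from math import comb

    def catalan(j):
        return comb(2 * j, j) // (j + 1)

    def u(n, p):
        """Probability that a random 2-by-n matrix is not 1-safe (first lemma above)."""
        q = 1 - p
        total = 0.0
        for i in range(1, n + 1):
            inner = sum(comb(i - 1, 2 * j) * catalan(j) * (p * q)**(2 * j)
                        * (2 * p * q)**(i - 1 - 2 * j) for j in range((i - 1) // 2 + 1))
            total += p**2 * inner
        return total

    for n in range(1, 5):
        print(n, round(1 - u(n, 0.5), 3))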

Page 29: Scheduling of parallel processes

29

The following figure shows the curve of the function s(2, p) in the interval [0, ½]:

Our results imply the following estimations of the critical probability for two rows:

Mathematical results/6 Matrices with two rows

$$0 \le g_{crit}(2) \le w_{crit}(2) \le s_{crit}(2) = \frac{1}{2}.$$

Page 30: Scheduling of parallel processes

30

If m = 3, then the ratio of the zeros and the ones in a column is 3:0, 2:1, 1:2 or 0:3.

In this case we assign to the investigated matrix a random walk in which

we jump to the left with probability p^3 (the column contains three 1's),

we make a step to the left with probability 3p^2·q (the column contains two 1's),

we remain at the same point with probability 3p·q^2 (the column contains one 1), and

we make a step to the right with probability q^3 (the column contains three zeros).

Mathematical results/7 Matrices with three rows

Page 31: Scheduling of parallel processes

31

If r = 1 and m ≥ 2, then the investigation of the safe matrices leads to an asymmetric random walk starting at the origin, having a sink at x = −1, where the walk is controlled by the columns of the matrix:

If the column contains exactly one 1, then the walking point stays; the corresponding probability equals $\binom{m}{1} p\, q^{m-1}$.

If the column contains k > 1 1's, then the walking point jumps k − 1 positions to the left; the corresponding probability equals $\binom{m}{k} p^{k} q^{m-k}$.

If the column contains only zeros, then the walking point steps to the right; the corresponding probability equals $q^{m}$.

The following general formula for the asymptotic density and the critical probability of the safe matrices was published in [9].

Theorem. If m ≥ 2 and 0 ≤ p ≤ 1, then

$$s(m, p) = \begin{cases} 1 - \displaystyle\sum_{i=2}^{m} (i-1)\binom{m}{i}\left(\frac{p}{1-p}\right)^{i}, & \text{if } 0 \le p \le \dfrac{1}{m},\\[2mm] 0, & \text{if } \dfrac{1}{m} \le p \le 1,\end{cases}$$

and

$$s_{crit}(m) = \frac{1}{m}.$$

Mathematical results/8 General case

Page 32: Scheduling of parallel processes

32

n G(2, n) G(2, n)/T(2, n) W(2, n) W(2, n)/T(2, n) S(2, n) S(2, n)/T(2, n) W(2, n)/S(2, n)

1 3 0.750 3 0.750 3 0.750 1

2 9 0.562 10 0.625 10 0.625 1

3 27 0.452 35 0.547 35 0.547 1

4 81 0.316 124 0.484 126 0.492 0.984

5 243 0.237 444 0.434 462 0.451 0.961

6 729 0.178 1 592 0.389 1716 0.419 0.927

7 2 187 0.133 5 731 0.350 6435 0.393 0.890

8 6 561 0.100 20 671 0.315 24310 0.371 0.850

9 19 683 0.075 74 722 0.285 92378 0.352 0.808

10 59 049 0.056 270 521 0.258 352716 0.336 0.767

11 177 147 0.042 980 751 0.234 1352078 0.322 0.725

12 531 441 0.032 3 559 538 0.212 5200300 0.310 0.684

13 1 594 323 0.022 12 931 155 0.193 20058300 0.299 0.646

14 4 782 969 0.018 47 013 033 0.175 77558760 0.289 0.606

15 14 348 907 0.013 171 036 244 0.159 300540195 0.280 0.568

Results of simulation/1 Matrices with two rows

Rounded data for p = 0.5 (Table 1)

Here T(2, n) = 2^(2n) denotes the number of all 2×n binary matrices.

Page 33: Scheduling of parallel processes

33

According to Péter Gács [7], wcrit(2) ≥ 10^(-400).

Table 1 contains the absolute and relative frequencies of the good, schedulable and safe matrices for p = 0.5. The number of columns of the investigated matrices was n = 1, 2, …, 15.

Let the number of m×n sized binary matrices be denoted by T(m, n). Then T(m, n) = 2^(mn).

According to the known theoretical results, all the relative frequencies G(m, n)/T(m, n), W(m, n)/T(m, n) and S(m, n)/T(m, n) have to tend to zero as n tends to infinity. An open question is the behaviour of the fraction W(2, n)/S(2, n).

In Table 2 the sequence in the column of s(2, n, 0.4) has to tend to the limit 5/9, although the concrete values are still far from 5/9. In Table 3 the sequence in the column of s(2, n, 0.35) has the limit 120/169 ≈ 0.7101.

Results of simulation/2 Matrices with two rows

Page 34: Scheduling of parallel processes

34

Results of simulation/3 Matrices with two rows

Rounded data for p = 0.4 (Table 2)

n T(2, n) g(2, n, 0.4) w(2, n, 0.4) s(2, n, 0.4) w(2, n, 0.4)/s(2, n, 0.4)

1 4 0.8400 0.8400 0.8400 1

2 16 0.7056 0.7632 0.7632 1

3 64 0.5929 0.7171 0.7171 1

4 256 0.4979 0.6795 0.6862 0.9902

5 1 024 0.4182 0.6487 0.6639 0.9771

6 4 096 0.3513 0.6206 0.6470 0.9592

7 16 384 0.2951 0.5957 0.6339 0.9397

8 65 536 0.2479 0.5731 0.6234 0.9193

9 262 144 0.2082 0.5524 0.6149 0.8984

10 1 048 576 0.1749 0.5332 0.6078 0.8773

11 4 194 304 0.1469 0.5155 0.6019 0.8565

12 16 777 216 0.1234 0.4988 0.5967 0.8359

13 67 108 864 0.1037 0.4832 0.5924 0.8157

14 268 435 456 0.0871 0.4685 0.5886 0.7960

15 1 073 741 824 0.0731 0.4545 0.5854 0.7764

16 4 294 967 296 0.0644 0.4412 0.5825 0.7574

17 17 179 869 184 0.0516 0.4286 0.5800 0.7390


Page 35: Scheduling of parallel processes

35

Results of simulation/4 Matrices with two rows

Rounded data for p = 0.35 (Table 3)

n T(2, n) g(2, n, 0.35) w(2, n, 0.35) s(2, n, 0.35) w(2, n, 0.35)/s(2, n, 0.35)

1 4 0.8775 0.8775 0.8775 1

2 16 0.7700 0.8218 0.8218 1

3 64 0.6757 0.7901 0.7901 1

4 256 0.5929 0.7645 0.7699 0.9930

5 1 024 0.5203 0.7441 0.7561 0.9841

6 4 096 0.4565 0.7255 0.7462 0.9723

7 16 384 0.4006 0.7094 0.7389 0.9601

8 65 536 0.3515 0.6949 0.7334 0.9475

9 262 144 0.3085 0.6817 0.7291 0.9350

10 1 048 576 0.2707 0.6696 0.7258 0.9226

11 4 194 304 0.2375 0.6585 0.7231 0.9107

12 16 777 216 0.2048 0.6481 0.7210 0.8989

13 67 108 864 0.1839 0.6383 0.7192 0.8875

14 268 435 456 0.1605 0.6291 0.7178 0.8764

15 1 073 741 824 0.1401 0.6204 0.7166 0.8658

16 4 294 967 296 0.1236 0.6122 0.7156 0.8555


Page 36: Scheduling of parallel processes

36

Results of simulation/5 Matrices with three rows

Rounded data for m = 3 and p = 0.5 (Table 4)

n T(3, n) G(3, n) g(3, n, 0.5) W(3, n) w(3, n, 0.5) S(3, n) s(3, n, 0.5) w(3, n, 0.5)/s(3, n, 0.5)

1 8 4 0.5000 4 0.5000 4 0.5000 1

2 64 16 0.2500 19 0.2969 19 0.2969 1

3 512 64 0.1250 98 0.1914 98 0.1914 1

4 4 096 256 0.0625 525 0.1282 531 0.1296 0.9892

5 32 768 1 024 0.0312 2 884 0.0880 2 974 0.0907 0.9702

6 262 144 4 096 0.0156 16 043 0.0612 17 060 0.0651 0.9401

7 2 097 152 16 384 0.0078 90 091 0.0429 99 658 0.0475 0.9032

8 16 777 216 65 536 0.0039 507 520 0.0303 590 563 0.0352 0.8594


Page 37: Scheduling of parallel processes

37

Results of simulation/6 Matrices with 3 rows

Rounded data for m = 3 and p = 0.3 (Table 5)

n T(3, n) g(3, n, 0.3) w(3, n, 0.3) s(3, n, 0.3) w(3, n, 0.3)/s(3, n, 0.3)

1 8 0.7840 0.7840 0.7840 1

2 64 0.6147 0.6795 0.6795 1

3 512 0.4819 0.6153 0.6153 1

4 4 096 0.3778 0.5682 0.5710 0.9951

5 32 768 0.2962 0.5313 0.5380 0.9875

6 262 144 0.2322 0.5004 0.5125 0.9764

7 2 097 152 0.1821 0.4739 0.4919 0.9634


Page 38: Scheduling of parallel processes

38

Results of simulation/7 Matrices with 3 rows

Rounded data for m = 3 and p = 0.25 (Table 6)

n T(3, n) g(3, n, 0.25) w(3, n, 0.25) s(3, n, 0.25) w(3, n, 0.25)/s(3, n, 0.25)

1 8 0.8437 0.8437 0.8437 1

2 64 0.7119 0.7712 0.7712 1

3 512 0.6009 0.7286 0.7286 1

4 4 096 0.5068 0.6981 0.7004 0.9967

5 32 768 0.4276 0.6748 0.6804 0.9917


Page 39: Scheduling of parallel processes

39

We determined the explicit form of the density function of the safe matrices and the critical values scrit(m) for m ≥ 2. The value of scrit(2) equals ½ (a characteristic value for several percolation models); the values of the further critical probabilities decrease as m increases.

According to our simulation experiments, the critical probabilities of the safe and the schedulable matrices are close to each other and to the proved upper bounds. Table 1 contains the results for p = 0.5, Table 2 for p = 0.4, and Table 3 for p = 0.35.

On the basis of the simulation results we conjecture that the bound in [7] can be improved: if p < 0.4, then w(2, p) > 0.

The behaviour of the fractions w(m, p)/s(m, p) requires further analysis.

One can give better lower bounds than ours, but they still imply only the trivial lower bound zero for the critical probability; therefore, if we wish to improve the lower bounds on the critical probabilities of the schedulable matrices, we need a more useful class of matrices than the good matrices.

Summary

Page 40: Scheduling of parallel processes

40

1. P. Balister, B. Bollobás, M. Walters (2004), Continuum percolation with steps in an annulus. Ann. Appl. Probab. 14/4 1869—1879. http://arxiv.org/find

2. A. Bege and Z. Kása (2001), Coding objects related to Catalan numbers, Studia Univ. Babeş-Bolyai, Informatica, Volume XLVI (1), 31—39. http://www.cs.ubbcluj.ro/~studia-i/contents.php

3. B. Bollobás, O. Riordan (2005), A short proof of the Harris-Kesten theorem. Electronic manuscript, http://arxiv.org/find.

4. I. Derényi, G. Palla, T. Vicsek (2005), Clique percolation in random networks. Phys. Rev. Lett. 94 160—202. http://arxiv.org/find.

5. W. Feller, (1968), An Introduction to Probability Theory and its Applications. John Wiley and Sons, New York.

6. P. Gács (2002), Clairvoyant scheduling of random walks (submitted to Electronic Journal of Probability). Short version: Clairvoyant scheduling of random walks. In: Proc. of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, 99—108 (electronic), ACM, New York, 2002. http://www.cs.bu.edu/fac/gacs/recent-publ.html

7. P. Gács (2004), Compatible sequences and a slow Winkler percolation. Combin. Probab. Comput. 13/6, 815—856. http://www.cs.bu.edu/fac/gacs/recent-publ.html.

References/1

Page 41: Scheduling of parallel processes

41

8. T. E. Harris (1960), A lower bound for the critical probability in a certain percolation process. Proc. of the Cambridge Phil. Soc. 56 13—20.

9. A. Iványi, Density of 2-safe matrices (manuscript)

10. Z. Kása (2003), Combinatorică cu aplicaţii. Presa Universitară Clujeană, Cluj-Napoca.

11. Z. Kása (2004), Rekurzív egyenletek. In: Informatikai algoritmusok 1 (ed. A. Iványi). ELTE Eötvös Kiadó, Budapest, 14—37. http://elek.inf.elte.hu/

12. H. Kesten (1982), Percolation Theory for Mathematicians. Birkhäuser, Boston

13. D. Stauffer (1985), Introduction to Percolation Theory. Taylor and Francis, London.

14. R. P. Stanley (1999), Enumerative Combinatorics, Volume 2. Cambridge Studies in Advanced Mathematics, 62. Cambridge University Press, Cambridge

15. P. Winkler (2000), Dependent percolation and colliding random walks, Random Structures & Algorithms 16/1 58—84. http://www.math.dartmouth.edu/~pw/papers/pubs.html

16. A. Hunt (2005), Percolation Theory for Flow in Porous Media. Lecture Notes in Physics, Vol. 674.

References/2