
1

Monte Carlo Methods

Nicholas Constantine Metropolis

Stanislaw Ulam

2

Books about Monte Carlo

• M.H. Kalos and P.A. Whitlock: „Monte Carlo Methods“ (Wiley-VCH, Berlin, 2008 )

• J.M. Hammersley and D.C. Handscomb: „Monte Carlo Methods“ (Wiley & Sons, N.Y., 1964)

• K. Binder and D. Heermann: „Monte Carlo Simulations in Statistical Physics“ (Springer, Berlin, 1992)

• R.Y. Rubinstein: „Simulation and the Monte Carlo Method“ (Wiley & Sons, N.Y., 1981)

3

Monte Carlo Method (MC)

Simulates an experimental measuring process
with sampling and averaging.

Big advantage:
systematic improvement by increasing
the number of samples N.

Error goes like:  $\Delta \propto \frac{1}{\sqrt{N}}$

4

Applications of Monte Carlo

• Statistical partition functions, i.e. averages at constant temperature

• High dimensional integrals

5

Calculation of π

Choose N pairs $(x_i, y_i)$, i = 1,…,N, of random numbers with $x_i, y_i \in [0,1]$.

The quarter circle $x^2 + y^2 = 1$ encloses the area

$\frac{\pi}{4} = \int_0^1 \sqrt{1 - x^2}\; dx$

6

Calculation of π

c is the number of points that fall inside the circle, i.e. with $x_i^2 + y_i^2 \le 1$.

area ≈ c / N,  so  $\pi(N) = \frac{4c}{N}$

The statistical error of this estimate decreases like $1/\sqrt{N}$.

applet
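A minimal hit-or-miss sketch of this estimate in Python (the function name `estimate_pi` and the sample count are illustrative choices, not part of the lecture):

```python
import random

def estimate_pi(n_samples):
    """Hit-or-miss Monte Carlo estimate of pi from n_samples random points."""
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()   # uniform in [0,1]
        if x * x + y * y <= 1.0:                  # inside the quarter circle
            hits += 1
    return 4.0 * hits / n_samples

print(estimate_pi(1_000_000))   # error shrinks like 1/sqrt(N)
```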

7

Calculation of integrals

[sketch: a function g(x) on the interval [a,b], sampled at random points $x_i$]

$\int_a^b g(x)\,dx \approx \frac{b-a}{N}\sum_{i=1}^{N} g(x_i)$

8

Calculation of integrals

Random numbers $x_i \in [a,b]$, homogeneously distributed: „simple sampling“.

Good if g(x) is smooth.
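A minimal simple-sampling sketch in Python (the integrand and the sample count are illustrative; nothing here is prescribed by the slides):

```python
import random

def simple_sampling(g, a, b, n_samples):
    """Estimate the integral of g over [a,b] with uniformly distributed points."""
    total = 0.0
    for _ in range(n_samples):
        total += g(random.uniform(a, b))
    return (b - a) * total / n_samples

# example: integral of x^2 over [0,1] is 1/3
print(simple_sampling(lambda x: x * x, 0.0, 1.0, 100_000))
```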

9

Error of integration

Conventional method (one dimension): choose N equidistant points with spacing

$h = \frac{b-a}{N}$

error:  $\Delta \propto h^2 \propto \left(\frac{b-a}{N}\right)^2 \propto T^{-2}$,

where T is the computer time.

10

Error in d dimensions

In d dimensions:  $h = L\,N^{-1/d}$,  computer time  $T \propto N = \left(\frac{L}{h}\right)^{d}$

error with the conventional method:  $\Delta \propto h^2 \propto N^{-2/d} \propto T^{-2/d}$

error with Monte Carlo:  $\Delta \propto \frac{1}{\sqrt{N}} \propto T^{-1/2}$

Monte Carlo is better for d > 4.

11

High dimensional integral

Consider n hard spheres of radius R in a 3d box.

Phase space is 3n dimensional: $(x_i, y_i, z_i)$, i = 1,…,n.

Calculate the average distance

$\langle r_{ij} \rangle = \frac{1}{Z}\int r_{ij}\; dx_1 \cdots dx_n\, dy_1 \cdots dy_n\, dz_1 \cdots dz_n$,
with  $r_{ij}^2 = (x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2$

and the hard-sphere constraint  $(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2 \ge (2R)^2$.

12

MC strategy

• Choose particle position.

• If the excluded volume condition is not fulfilled then reject.

• Once the n particles are placed, calculate the average of rij over all pairs (i,j).
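A minimal sketch of this strategy in Python (reading "reject" as redrawing the position until it satisfies the excluded-volume condition; the box size, n and R are illustrative):

```python
import itertools, math, random

def place_hard_spheres(n, radius, box=1.0, max_tries=100_000):
    """Place n non-overlapping spheres of the given radius in a cubic box."""
    centers = []
    for _ in range(n):
        for _ in range(max_tries):
            p = [random.uniform(radius, box - radius) for _ in range(3)]
            if all(math.dist(p, q) >= 2 * radius for q in centers):
                centers.append(p)      # excluded-volume condition fulfilled
                break
        else:
            raise RuntimeError("could not place sphere; density too high")
    return centers

def average_pair_distance(centers):
    pairs = list(itertools.combinations(centers, 2))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)

centers = place_hard_spheres(n=20, radius=0.05)
print(average_pair_distance(centers))
```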

13

Non-smooth integrands

[sketch: a sharply peaked function g(x) on [a,b]; near the peak one needs more precision, i.e. more sample points $x_i$]

14

Calculation of integrals: „importance sampling“

$\int_a^b g(x)\,dx = \int_a^b \frac{g(x)}{p(x)}\; p(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N}\frac{g(x_i)}{p(x_i)}$

if the $x_i$ are randomly distributed according to p(x).

Good convergence when g(x)/p(x) is smooth.
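A minimal importance-sampling sketch in Python; the density p(x) = 2x on [0,1] (sampled as the square root of a uniform number) and the integrand are illustrative choices:

```python
import math, random

def importance_sampling(g, p, sample_p, n_samples):
    """Estimate the integral of g using points drawn from the density p."""
    total = 0.0
    for _ in range(n_samples):
        x = sample_p()
        total += g(x) / p(x)
    return total / n_samples

# example on [0,1]: p(x) = 2x, g(x) = x^2, so g/p = x/2 is smooth
g = lambda x: x * x
p = lambda x: 2.0 * x
sample_p = lambda: math.sqrt(random.random())
print(importance_sampling(g, p, sample_p, 100_000))   # exact value: 1/3
```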

15

Canonical Monte Carlo

E(X) = energy of configuration X.

The probability for the system to be in X is given by the Boltzmann factor:

$p_{eq}(X) = \frac{1}{Z_T}\; e^{-E(X)/kT}$

$Z_T$ is the partition function:

$Z_T = \sum_X e^{-E(X)/kT}$,  so that  $\sum_X p_{eq}(X) = 1$.

16

Problem of sampling

thermal average:  $\langle Q \rangle_T = \sum_X Q(X)\; p_{eq}(X)$

The distribution of the energy E around the average $\langle E \rangle_T$ gets sharper with increasing size.

Choosing configurations equally distributed over energy would be very ineffective.

17

M(RT)² algorithm

N.C. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller (1953)

importance sampling through a Markov chain:

$X_1 \rightarrow X_2 \rightarrow \dots$

where the probability for a configuration is $p_{eq}(X)$.

Markov chain: $X_t$ only depends on $X_{t-1}$.

18

Properties of the Markov chain

Start in configuration X and propose a new configuration Y with probability T(X→Y).

1. Ergodicity: one must be able to reach any configuration Y after a finite number of steps.

2. Normalization:  $\sum_Y T(X \rightarrow Y) = 1$

3. Reversibility:  $T(X \rightarrow Y) = T(Y \rightarrow X)$

19

Transition probability

The proposed configuration Y will be accepted with probability A(X→Y).

The total transition probability of the Markov chain is:

W(X→Y) = T(X→Y) A(X→Y)

Master equation:

$\frac{dp(X,t)}{dt} = \sum_Y p(Y)\,W(Y \rightarrow X) - \sum_Y p(X)\,W(X \rightarrow Y)$

20

Properties of W(X→Y)

• Ergodicity:  $\forall\, X, Y:\; W(X \rightarrow Y) \ge 0$

• Normality:  $\sum_Y W(X \rightarrow Y) = 1$

• Homogeneity:  $\sum_Y p_{st}(Y)\,W(Y \rightarrow X) = p_{st}(X)$

21

Canonical Monte Carlo

E(X) = energy of configuration X.

The probability for the system to be in X is given by the Boltzmann factor:

$p_{eq}(X) = \frac{1}{Z_T}\; e^{-E(X)/kT}$

$Z_T$ is the partition function:

$Z_T = \sum_X e^{-E(X)/kT}$,  so that  $\sum_X p_{eq}(X) = 1$.

22

M(RT)² algorithm

importance sampling through a Markov chain:

$X_1 \rightarrow X_2 \rightarrow \dots$

where the probability for a configuration is $p_{eq}(X)$; Markov chain: $X_t$ only depends on $X_{t-1}$.

The proposed configuration Y will be accepted with probability A(X→Y).

The total transition probability of the Markov chain is:

W(X→Y) = T(X→Y) A(X→Y)

23

Detailed balance

In the stationary state one should have

$\frac{dp(X,t)}{dt} = \sum_Y p(Y)\,W(Y \rightarrow X) - \sum_Y p(X)\,W(X \rightarrow Y) = 0$,
with  $p_{st}(X) = p_{eq}(X)$  (equilibrium distribution, Boltzmann),

so that

$\sum_Y p_{eq}(Y)\,W(Y \rightarrow X) = \sum_Y p_{eq}(X)\,W(X \rightarrow Y)$

A sufficient condition for this is detailed balance:

$p_{eq}(Y)\,W(Y \rightarrow X) = p_{eq}(X)\,W(X \rightarrow Y)$

24

Metropolis (M(RT)²)

$A(X \rightarrow Y) = \min\!\left(1,\; \frac{p_{eq}(Y)}{p_{eq}(X)}\right)$

Boltzmann:  $p_{eq}(X) = \frac{1}{Z_T}\, e^{-E(X)/kT}$,  so

$A(X \rightarrow Y) = \min\!\left(1,\; e^{-[E(Y)-E(X)]/kT}\right) = \min\!\left(1,\; e^{-\Delta E/kT}\right)$

If the energy decreases, always accept;
if it increases, accept with probability $e^{-\Delta E/kT}$.
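A minimal sketch of this acceptance step in Python (units with k = 1; the function names are illustrative, and the Glauber rule of the next slide is included for comparison):

```python
import math, random

def metropolis_accept(delta_E, T):
    """Metropolis criterion: always accept downhill moves,
    accept uphill moves with probability exp(-delta_E / T) (k = 1)."""
    if delta_E <= 0.0:
        return True
    return random.random() < math.exp(-delta_E / T)

def glauber_accept(delta_E, T):
    """Glauber criterion: accept with probability 1 / (1 + exp(delta_E / T))."""
    return random.random() < 1.0 / (1.0 + math.exp(delta_E / T))
```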

25

Glauber dynamics

Roy J. Glauber (1963)
(Nobel prize 2005)

$A(X \rightarrow Y) = \frac{e^{-\Delta E/kT}}{1 + e^{-\Delta E/kT}}$

also fulfills detailed balance.

26

The Ising Model

• Magnetic Systems
• Opinion models
• Binary mixtures

Ernst Ising (1900-1998)

Spins on a lattice (1924)

27

The Ising Model

Binary variables $\sigma_i = \pm 1$, i = 1,…,N, on a graph of N sites, interacting via the Hamiltonian:

$E = \mathcal{H} = -J\sum_{\langle i,j \rangle:\,nn}\sigma_i\,\sigma_j - H\sum_{i=1}^{N}\sigma_i$

28

Order parameter

spontaneous magnetization:

$M(T) = \lim_{H \rightarrow 0^+}\frac{1}{N}\left\langle \sum_{i=1}^{N}\sigma_i \right\rangle_T$

M ≠ 0: ordered phase,  M = 0: disordered phase,  $T_c$: critical temperature

$M \propto (T_c - T)^{\beta}$  for  $T \rightarrow T_c^-$

β = 1/8 (2d),  β ≈ 0.326 (3d)

29

Phase diagram

30

Response functions

susceptibility:  $\chi(T) = \left.\frac{\partial M}{\partial H}\right|_{H=0,\,T} \propto |T - T_c|^{-\gamma}$

specific heat:  $C_v(T) = \left.\frac{\partial \langle E \rangle_T}{\partial T}\right|_{V,H} \propto |T - T_c|^{-\alpha}$

both diverge at $T_c$

31

Specific heat

$C_v(T) \propto |T - T_c|^{-\alpha}$

α = 0 (2d),  α ≈ 0.11 (3d)

comparison with experimental data for a binary mixture

32

Susceptibility

$\chi(T) \propto |T - T_c|^{-\gamma}$

γ = 7/4 (2d),  γ ≈ 1.24 (3d)

numerical data from a finite system

33

Correlation length

correlation function:  $C(R) = \langle \sigma(0)\,\sigma(R)\rangle$

For T ≠ T_c and for large R:

$C(R) - M^2 \propto a\, e^{-R/\xi}$

where ξ is the correlation length.

[figure: C(R) for T > T_c and T < T_c]

34

Correlation length

The correlation length diverges at $T_c$ as

$\xi \propto |T - T_c|^{-\nu}$

with a critical exponent ν:  ν = 1 (2d),  ν ≈ 0.63 (3d).

At $T_c$ we have for large R:

$C(R) \propto R^{-(d-2+\eta)}$,  with  η = 1/4 (2d),  η ≈ 0.05 (3d).

35

Magnetic exponent

at $T_c$:  $M \propto H^{1/\delta}$

δ = 15 (2d),  δ ≈ 4.79 (3d)

[figure: DyAlG in a field along [111], above and below the Néel temperature]

36

Critical exponents

        d=2    d=3            d=4    expression
  α     0      0.11008(1)     0      2 − d/(d − Δ_ε)
  β     1/8    0.326419(3)    1/2    Δ_σ/(d − Δ_ε)
  γ     7/4    1.237075(10)   1      (d − 2Δ_σ)/(d − Δ_ε)
  δ     15     4.78984(1)     3      (d − Δ_σ)/Δ_σ
  η     1/4    0.036298(2)    0      2Δ_σ − d + 2
  ν     1      0.629971(4)    1/2    1/(d − Δ_ε)
  ω     2      0.82966(9)     0      Δ_ε′ − d

37

Exponent relations

The exponents are related through:

$\alpha + 2\beta + \gamma = 2$,  $\gamma = \beta(\delta - 1)$,  $\gamma = \nu(2 - \eta)$   (scaling)

$2 - \alpha = d\,\nu$   (hyperscaling)

so that only two exponents are independent.

H.E. Stanley, „Introduction to Phase Transitions and Critical Phenomena“ (Clarendon, Oxford, 1971)

38

Exact solutions

1 dimension:

$Z_N = \left[2\cosh(\beta J)\right]^N\left[1 + \tanh^N(\beta J)\right]$,  $F(T, H=0) = -\frac{1}{\beta}\ln\left[2\cosh(\beta J)\right]$ per spin,  with  $\beta = \frac{1}{k_B T}$

square lattice (Lars Onsager, 1944):

$\frac{1}{N}\ln Z_T(T, H=0) = \ln\left[2\cosh(2\beta J)\right] + \frac{1}{2\pi}\int_0^{\pi}\ln\!\left[\frac{1}{2}\left(1 + \sqrt{1 - \kappa^2\sin^2\phi}\right)\right] d\phi$

with  $\kappa = \frac{2\sinh(2\beta J)}{\cosh^2(2\beta J)}$

39

Metropolis (M(RT)²)

$A(X \rightarrow Y) = \min\!\left(1,\; \frac{p_{eq}(Y)}{p_{eq}(X)}\right)$

Boltzmann:  $p_{eq}(X) = \frac{1}{Z_T}\, e^{-E(X)/kT}$,  so

$A(X \rightarrow Y) = \min\!\left(1,\; e^{-[E(Y)-E(X)]/kT}\right) = \min\!\left(1,\; e^{-\Delta E/kT}\right)$

If the energy decreases, always accept;
if it increases, accept with probability $e^{-\Delta E/kT}$.

40

MC of the Ising Model

Single-flip Metropolis for  $E = \mathcal{H} = -J\sum_{\langle i,j\rangle:\,nn}\sigma_i\,\sigma_j$:

• Choose one site i (having spin σ_i).
• Calculate ΔE = E(Y) − E(X) = 2Jσ_i h_i (X is the old and Y the new configuration), where $h_i = \sum_{j\;nn\;of\;i}\sigma_j$ is the local field at site i.
• If ΔE < 0 then flip the spin: σ_i → −σ_i.
• If ΔE > 0 flip with probability exp(−ΔE/kT).

applet
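A minimal sketch of a single-flip Metropolis sweep for the 2d Ising model in Python (L, T and the number of sweeps are illustrative; k = 1 and J = 1):

```python
import math, random

def metropolis_sweep(spins, L, T, J=1.0):
    """One Monte Carlo sweep: L*L single-spin-flip attempts (k = 1)."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # local field from the four nearest neighbours (periodic boundaries)
        h = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
             spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * J * spins[i][j] * h
        if dE <= 0.0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

L, T = 32, 2.0
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
for sweep in range(1000):
    metropolis_sweep(spins, L, T)
print(sum(map(sum, spins)) / (L * L))   # magnetization per spin
```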

41

Binary mixtures (lattice gas)

Consider two species A and B distributed with

given concentrations on the sites of a lattice.

EAA is energy of A-A bond.

EBB is energy of B-B bond.

EAB is energy of A-B bond.

Set EAA = EBB = 0 and EAB = 1.

Number of each species is constant.

42

Kawasaki dynamics

Kyozi Kawasaki

• Choose any A-B bond.

• Calculate ΔE for A-B → B-A.

• Metropolis: If ΔE ≤ 0 flip, else flip with p = exp(-βΔE).

• Glauber: Flip with probability p = exp(- βΔE)/(1+ exp(- βΔE)).

β = 1/kT
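A minimal sketch of one Kawasaki exchange move on a square lattice of A/B species in Python (the 0/1 species encoding and the Metropolis variant are illustrative choices):

```python
import math, random

def neighbours(i, j, L):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def kawasaki_move(grid, L, beta, E_AB=1.0):
    """Propose the exchange of a random A-B nearest-neighbour pair
    (Kawasaki dynamics, Metropolis acceptance). Species: 0 = A, 1 = B."""
    i, j = random.randrange(L), random.randrange(L)
    k, l = random.choice(neighbours(i, j, L))
    a, b = grid[i][j], grid[k][l]
    if a == b:
        return                                   # not an A-B bond
    dE = 0.0                                     # change in number of A-B bonds times E_AB
    for (x, y) in neighbours(i, j, L):
        if (x, y) != (k, l):
            dE += E_AB * ((grid[x][y] != b) - (grid[x][y] != a))
    for (x, y) in neighbours(k, l, L):
        if (x, y) != (i, j):
            dE += E_AB * ((grid[x][y] != a) - (grid[x][y] != b))
    if dE <= 0.0 or random.random() < math.exp(-beta * dE):
        grid[i][j], grid[k][l] = b, a            # perform the exchange A-B -> B-A
```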

43

Other Ising-like models

• Antiferromagnetic models (staggered field):
  $\mathcal{H} = J\sum_{\langle i,j\rangle:\,nn}\sigma_i\sigma_j - H\sum_i\sigma_i - H_s\sum_i(-1)^i\sigma_i$

• Ising spin glass (random interactions, frustration):
  $\mathcal{H} = -\sum_{\langle i,j\rangle:\,nn}J_{ij}\,\sigma_i\sigma_j - H\sum_i\sigma_i$

• ANNNI model (frustration in x-direction, incommensurate phases with „Lifshitz point“):
  $\mathcal{H} = -J_1\sum_{\langle i,j\rangle:\,nn}\sigma_i\sigma_j - J_2\sum_{\langle i,i'\rangle:\,nnn_x}\sigma_i\sigma_{i'} - H\sum_i\sigma_i$

• Metamagnets (tricritical point):
  $\mathcal{H} = J_1\sum_{\langle i,j\rangle:\,nn}\sigma_i\sigma_j - J_2\sum_{\langle i,j\rangle:\,nnn}\sigma_i\sigma_j - H\sum_i\sigma_i$

44

Antiferromagnetic Ising model

$\mathcal{H} = J\sum_{\langle i,j\rangle:\,nn}\sigma_i\sigma_j - H\sum_i\sigma_i - H_s\sum_i(-1)^i\sigma_i$   (staggered field on the square lattice)

On the square lattice the phase diagram and the critical behavior are exactly identical to the ferromagnetic case.

On the triangular lattice there is frustration: the critical temperature is zero and the ground state is infinitely degenerate (finite entropy).

45

Ising spin glass

Edwards–Anderson model (1975):

$\mathcal{H} = -\sum_{\langle i,j\rangle:\,nn}J_{ij}\,\sigma_i\sigma_j - H\sum_i\sigma_i$,  with  $P(J_{ij}) = \frac{1}{2}\left[\delta(J_{ij} - J) + \delta(J_{ij} + J)\right]$

frustration, freezing of spins, quenched average

order parameter in the spin-glass phase (T < T_c):

$q = \frac{1}{N}\sum_{i=1}^{N}\langle\sigma_i\rangle^2 \neq 0$

[experimental example: Fe₀.₅Mn₀.₅TiO₃]

46

The ANNNI model

$\mathcal{H} = -J_1\sum_{\langle i,j\rangle:\,nn}\sigma_i\sigma_j - J_2\sum_{\langle i,i'\rangle:\,nnn_x}\sigma_i\sigma_{i'} - H\sum_i\sigma_i$

[figures: phase diagrams in 2d and 3d with the Lifshitz point]

47

Metamagnet

$\mathcal{H} = J_1\sum_{\langle i,j\rangle:\,nn}\sigma_i\sigma_j - J_2\sum_{\langle i,j\rangle:\,nnn}\sigma_i\sigma_j - H_s\sum_i(-1)^i\sigma_i - H\sum_i\sigma_i$

tricritical point

48

The O(n) model

O(n) model:  $\mathcal{H} = -J\sum_{\langle i,j\rangle:\,nn}\vec{S}_i\cdot\vec{S}_j - \vec{H}\cdot\sum_i\vec{S}_i$,  with  $|\vec{S}_i| = 1$,  $\vec{S}_i = (S_i^1, \dots, S_i^n)$  an n-component vector.

n = 1 is the Ising model; n = 2 is the XY model:

$\mathcal{H}_{XY} = -J\sum_{\langle i,j\rangle:\,nn}\vec{S}_i\cdot\vec{S}_j - H\sum_i S_i^x$,  with  $|\vec{S}_i| = 1$,  $\vec{S}_i = (S_i^x, S_i^y)$;

n = 3 is the Heisenberg model:

$\mathcal{H}_{Heisenberg} = -J\sum_{\langle i,j\rangle:\,nn}\vec{S}_i\cdot\vec{S}_j - H\sum_i S_i^x$,  with  $|\vec{S}_i| = 1$,  $\vec{S}_i = (S_i^x, S_i^y, S_i^z)$;

n = ∞ is the „spherical model“.

49

Phase transitions

Mermin-Wagner theorem (1966):

In two dimensions a system with continuous degrees

of freedom and short range interactions has no

phase transition which involves long range order.

Heisenberg model in three dimensions:

50

XY model

Michael Kosterlitz, David Thouless

2 dimensions: below $T_c$ the correlations decay algebraically,

$\langle \vec{s}(x_i)\cdot\vec{s}(x_j)\rangle \propto r^{-\eta(T)}$,  $r = |x_i - x_j|$,

with  $\eta(T_c) = \frac{1}{4}$  at the KT transition, and above $T_c$

$\xi(T) \propto e^{\,b\,(T - T_c)^{-1/2}}$.

51

XY model

No symmetry breaking, no long-range order, but still scaling behavior at $T_c$:

specific heat:  $C(T) \propto \xi^{-2}(T)\,\ln^{q}\xi(T)$,  q = 6

susceptibility:  $\chi(T) \propto \xi^{2-\eta_c}(T)\,\ln^{2r}\xi(T)$,  r ≈ −0.056

correlation length:  $\xi(T) \propto e^{\,b\,(T - T_c)^{-1/2}}$

52

XY model

vortex-antivortex pair

At low temperatures the vortices are bound in vortex-antivortex

pairs, which dissociate at the KT transition.

53

Continuous degrees of freedom

Phase space is not discrete anymore.

Monte Carlo move: choose for site i a new spin

$\vec{S}_i \rightarrow \vec{S}_i' = \frac{\vec{S}_i + \delta\vec{S}}{\left|\vec{S}_i + \delta\vec{S}\right|}$,  with a small random $\delta\vec{S}$.

• Calculate ΔE.
• Metropolis: if ΔE ≤ 0 accept, else accept with p = exp(−βΔE).
• Glauber: accept with probability p = exp(−βΔE)/(1 + exp(−βΔE)).

applet

54

Interfaces

55

Interfaces

surface tension: the free-energy cost per unit area of the interface between an A-rich and a B-rich phase

[sketch: phases A and B separated by an interface]

56

Ising interface

$E = \mathcal{H} = -J\sum_{\langle i,j\rangle:\,nn}^{N}\sigma_i\sigma_j$

Fixed boundary conditions: upper half „+“, lower half „–“.

Simulate with Kawasaki dynamics at temperature T; h denotes the local height of the interface.

57

Ising interface

interface width W:

$W^2 = \frac{1}{N}\sum_i\left(h_i - \bar{h}\right)^2$,  with  $\bar{h} = \frac{1}{N}\sum_i h_i$

[figure: W as a function of T up to $T_c$; roughening transition]

58

Ising interface

Add a next-nearest-neighbour interaction:

$E = \mathcal{H} = -J\sum_{\langle i,j\rangle:\,nn}^{N}\sigma_i\sigma_j - K\sum_{\langle i,j\rangle:\,nnn}^{N}\sigma_i\sigma_j$

It punishes curvature and introduces stiffness.

[figure: interface configurations for different values of K and J]

59

Shape of a drop

• Start with a block of +1 sites attached to the wall of an L×L system filled with −1.

• Hamiltonian including gravity g:

$\mathcal{H} = -J\sum_{\langle i,j\rangle:\,nn}^{N}\sigma_i\sigma_j - K\sum_{\langle i,j\rangle:\,nnn}^{N}\sigma_i\sigma_j - \sum_j h_j\,\sigma_j$,  where the local field $h_j$ decreases linearly with the height of row j, with a slope proportional to g.

• Use Kawasaki dynamics and further do not allow for disconnected clusters of +1.

60

Shape of a drop

L = 257, V = 6613, g = 0.001, after 5·10⁷ MC updates, averaged over 20 samples.

The contact angle θ is a function of temperature and vanishes when approaching $T_c$.

61

Solid-on-solid model (SOS)

interface without islands and without overhangs, described by integer heights $h_i$:

$E = \mathcal{H} = J\sum_{\langle i,j\rangle:\,nn}^{N}\left|h_i - h_j\right|$

adatoms and surface growth

62

Self-affine scaling

Family–Vicsek scaling (1985):

$W(L,t) = L^{\xi}\, f\!\left(t/L^{z}\right)$

ξ is the roughening exponent, z the dynamic exponent.

63

Self-affine scaling

$W(L,t) = L^{\xi}\, f\!\left(t/L^{z}\right)$,  with  $u = t/L^{z}$:

• $u \rightarrow \infty$:  f(u) → const,  so  $W \propto L^{\xi}$ (saturation)

• $u \rightarrow 0$:  $f(u) \propto u^{\xi/z}$,  so  $W \propto t^{\xi/z} = t^{\beta}$,  where β = ξ/z is the growth exponent.

Numerically these laws are verified by data collapse.

64

Self-affine scaling

65

Irreversible growth

• T. Vicsek, „Fractal Growth Phenomena“ (World Scientific, Singapore, 1989)

• A.-L. Barabasi and H.E. Stanley, „Fractal Concepts in Surface Growth“ (Cambridge Univ. Press, 1995)

• H.J. Herrmann, „Geometric Cluster Growth Models and Kinetic Gelation“, Phys. Rep. 136, 153 (1986)

66

Irreversible growth

• Deposition and aggregation patterns

• Fluid instabilities

• Electric breakdown

• Biological morphogenesis

• Fracture and Fragmentation

No thermal equilibrium

67

Random deposition

68

Random deposition

Pick a random column. Add a particle on top of that column.

$\langle h\rangle \sim t$,  $W \sim t^{1/2}$

simplest possible growth model

β = 1/2,  ξ = 1/2
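A minimal sketch of random deposition and the growth of the width W in Python (system size and time are illustrative choices):

```python
import math, random

def random_deposition(L, n_particles):
    """Drop particles onto random columns and track the interface width."""
    h = [0] * L
    widths = []
    for n in range(1, n_particles + 1):
        h[random.randrange(L)] += 1
        if n % L == 0:                       # record the width once per monolayer
            mean = sum(h) / L
            widths.append(math.sqrt(sum((x - mean) ** 2 for x in h) / L))
    return widths

print(random_deposition(L=200, n_particles=200 * 1000)[-5:])   # W grows like t^(1/2)
```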

69

Random deposition with surface diffusion

The particle can move a short distance to find a more stable configuration:
pick a random column i and compare h(i) and h(i+1); the particle is added onto whichever is lower; if they are equal, add to column i or i+1 with equal probability.

$\langle h\rangle \sim t$,  $W \sim t^{1/4}$

β = 1/4,  ξ = 1/2

70

Restricted solid-on-solid model (RSOS)

Neighbouring sites may not have a height difference greater than 1.

Pick a random column i. Add a particle only if the constraint remains fulfilled, i.e. if h(i) ≤ h(i−1) and h(i) ≤ h(i+1); otherwise pick a new column.

β = 1/3,  ξ = 1/2

71

Growth equations

KPZ equation (M. Kardar, G. Parisi and Y.-C. Zhang, 1986):

$\frac{\partial h(\vec{x},t)}{\partial t} = \nu\,\nabla^2 h + \frac{\lambda}{2}\left(\nabla h\right)^2 + \eta(\vec{x},t)$,   β = 1/3 and ξ = 1/2

Edwards–Wilkinson equation (S.F. Edwards and D.R. Wilkinson, 1982):

$\frac{\partial h(\vec{x},t)}{\partial t} = \nu\,\nabla^2 h + \eta(\vec{x},t)$,   β = 1/4 and ξ = 1/2

η is the noise.

Mehran Kardar, Giorgio Parisi, Yi-Cheng Zhang

72

Stochastic Differential Equations

Martin Hairer

Kiyoshi Itô

Itô–Stratonovich calculus

Ruslan L. Stratonovich

73

Eden model

Each site that is a neighbour of an occupied site has an equal probability of being filled.

Make a list of neighbour sites (red).Pick one at random. Add it to cluster (green).Neighbours of this site then become red….

There can be more than one growth site in a column. Can get overhangs.

simple model for tumor growth or epidemic spread

74

Eden cluster

β = 1/3,  ξ = 1/2

75

Ballistic deposition

Particles fall vertically from above and stick when they touch a particle below or a neighbour on one side.

Pick a column i and let the particle fall until it touches a neighbour. Possible growth sites are indicated in red; there is only one possible site per column.

β = 1/3,  ξ = 1/2

76

Random Walk

k = degree = number of neighbors

Sample a random number z homogeneously between 0 and k and move the walker in direction [z] + 1, where [..] denotes the integer part.

Random Walk

78

Random Walk

The probability p(x,t) to find a random walker at time t at position x is proportional to the concentration c(x,t) of non-interacting random walkers (= diffusion), after ensemble averaging.

$\frac{\partial c(\vec{x},t)}{\partial t} = D\,\nabla^2 c(\vec{x},t)$,  with  $\int c(\vec{x},t)\; d^3x = 1$

converges towards the stationary solution:  $\nabla^2 c(\vec{x},t) = 0$

79

Diffusion-limited aggregation (DLA)

A particle starts far away from the surface. It diffuses until it first touches the surface. If it moves too far away, another particle is started.

Witten and Sander, 1981

Tom Witten Len Sander
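A minimal on-lattice DLA sketch in Python (the launch radius, kill radius and particle number are illustrative simplifications):

```python
import math, random

def grow_dla(n_particles):
    """On-lattice DLA: walkers are launched on a circle around the cluster
    and stick when they reach a site adjacent to it."""
    cluster = {(0, 0)}                         # seed at the origin
    r_max = 1
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        r_launch, r_kill = r_max + 5, r_max + 20
        while True:
            theta = random.uniform(0.0, 2.0 * math.pi)
            x = int(round(r_launch * math.cos(theta)))
            y = int(round(r_launch * math.sin(theta)))
            stuck = False
            while x * x + y * y <= r_kill * r_kill:
                if any((x + dx, y + dy) in cluster for dx, dy in steps):
                    cluster.add((x, y))        # first contact with the surface
                    r_max = max(r_max, int(math.hypot(x, y)) + 1)
                    stuck = True
                    break
                dx, dy = random.choice(steps)  # diffuse one lattice step
                x, y = x + dx, y + dy
            if stuck:
                break
            # walker escaped beyond the kill radius: relaunch it
    return cluster

print(len(grow_dla(300)))                      # number of occupied sites
```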

80

fractal dimension d_f ≈ 1.7

(in 2d)

DLA clusters

81

Laplacian Growth

$\nabla^2 c(\vec{x},t) = 0$,  with  c = 0 on the aggregate and c = 1 far away

growth velocity:  $v_g \propto \nabla c(\vec{x},t)$

$c(\vec{x},t)$ is the concentration of random walkers.

82

Laplacian Growth

$\nabla^2 \phi(\vec{x},t) = 0$,  with  ϕ = 0 on the aggregate and ϕ = 1 far away

growth velocity:  $v_g \propto \nabla\phi(\vec{x},t)$

$\phi(\vec{x},t)$ is the electrostatic potential.

83

Electrodeposition

84

Laplacian Growth

$\nabla^2 c(\vec{x},t) = 0$,  with  c = 1 far away and c = 0 on the aggregate

growth velocity:  $v_n \propto \nabla c$

circular geometry

85

DLA clusters

anisotropy on square lattice:

86

Noise reduction

Put a counter on each site along the surface of the cluster and count how many times a random walker hits this site. Only occupy the site when the number of hits reaches a certain threshold.

87

Self similarity

10⁵ sites,  10⁶ sites,  10⁷ sites:  off-lattice DLA clusters

88

Electrodeposition

89

Laplacian Growth

$\nabla^2 p(\vec{x},t) = 0$,  with  $p = p_{injection}$ at the injection boundary

growth velocity:  $v_n \propto \nabla p$

p(x,t) is the pressure in the Hele-Shaw cell.

90

Viscous Fingering

Push a less viscous fluid into a more viscous fluid

inside a horizontal

Hele-Shaw cell.

91

Laplacian Growth

$\nabla^2 T = 0$,  with  $T = T_{melt}$ at the boundary

growth velocity:  $v_n \propto \nabla T$

example: crystal growth inside an undercooled melt

92

Nittmann and Stanley, 1987

Snowflakes

DLA with

noise reduction

and

surface tension

on the

hexagonal lattice

[figure: real snowflakes and simulations]

93

Dielectric breakdown

94

Dielectric breakdown model (DBM)

Solve the Laplacian field ϕ:  $\nabla^2\phi = 0$,

and occupy a site at the boundary with probability

$p \propto \left|\nabla\phi\right|^{\eta}$

η = 1: same as DLA

95

Dielectric breakdown model:  η = 2

96

Dielectric breakdown model:  η = 4

97

Dielectric breakdown model:  η = 0.1

98

Dielectric breakdown model:  η = 0  (same as the Eden model)

99

Clustering of clusters

Fractal dimension is $d_f \approx 1.42$ in 2d, $d_f \approx 1.7$ in 3d.

dynamical scaling:  $n_s(t) \propto s^{-2}\, f\!\left(s/t^{z}\right)$,  with dynamical exponent z.

100

Clustering of clusters

101

Gold colloids

David Weitz, 1984

df = 1.70

Random Walk

103

Application of Random Walk

• Brownian motion

• Diffusion – heat transport - pollutants

• Foraging – search algorithms

• Mathematics (SLE)

• Finance of gambling

• Polymer at theta point

• ...

104

Random Walk

$P(x,t) = \frac{1}{\sqrt{4\pi D t}}\; e^{-x^2/4Dt}$,  with initial condition  $P(x,0) = \delta(x)$

$\langle x^2(t)\rangle = 2Dt$

central limit theorem

105

diffusion:  $\frac{\partial c(\vec{x},t)}{\partial t} = D\,\nabla^2 c(\vec{x},t)$

$\langle x^2(t)\rangle = \int x^2\, c(\vec{x},t)\; d^3x = 2Dt$

Random Walk

106

Random Walk

$\langle x^2(t)\rangle = 2Dt$:  length (mass) $\propto t \propto \langle x^2(t)\rangle$, so the random walk is a mass fractal object with $d_f = 2$ in all dimensions ≥ 2.

107

Random Walk

108

Variants of Random Walks

anomalous diffusion:  $\langle x^2(t)\rangle$ no longer grows linearly in t (in contrast to $\langle x^2(t)\rangle = 2Dt$)

Lévy flights

109

Polymers

• M.Doi, «Introduction to Polymer Physics», Clarendon Press, Oxford, 1995

• P.G. De Gennes, «Scaling Concepts in Polymer Physics», Cornell Univ. Press, 1979

• P.D. Gujrati and A.I. Leonov, «Modelling and Simulations in Polymers», Wiley, 2010

• M.Kröger, «Introduction to Computational Physics», site of IfB, ETH Zürich

110

Self-avoiding walk (SAW)

dilute polymer chains in good solvent

excluded volume effect

N is chain length = number of monomers

= degree of polymerization

solvent

Paul Flory

111

Self-avoiding walk

All configurations of the same chain length N have the same statistical weight.

number of SAWs of length N:  $\mathcal{N}_N \propto \mu^N$  (μ: effective connectivity constant)

generating function (grand canonical partition function):

$Z(x) = \sum_N \mathcal{N}_N\, x^N$,  x = fugacity

average chain length:  $\langle N\rangle = \frac{\sum_N N\,\mathcal{N}_N\, x^N}{Z(x)} = x\,\frac{\partial \ln Z}{\partial x}$

Z is finite for $x < x_c$, critical at $x = x_c$, infinite for $x > x_c$;
the critical fugacity is $x_c = 1/\mu$, and $\langle N\rangle \propto (x_c - x)^{-1}$ diverges as $x \rightarrow x_c$.

112

Self-avoiding walk

How to sample self-avoiding walks correctly?

Kinetic Growth Walk (KGW): grow a path starting from one site, moving randomly to one of the empty neighbours.

Dimerization:

(Alexandrowicz, 1969)

• Start with any SAW of length N

• Put the last monomer of one end in a random direction at the other end.

• If there is no overlap retain this configuration, otherwise reject the move.

• Choose randomly one end and repeat.

113

Self-avoiding walk

reptation algorithm (F.T. Wall and F. Mandel, 1975)

Simulate fixed sizes.

114

Self-avoiding walk

reptation algorithm

115

Self-avoiding walk

trapped configurations

• Start with any SAW of length N

• Select one monomer randomly and move it to a possible random position (i.e. one that does not disrupt the chain).

• If this position is not occupied accept this move, otherwise reject.

• Repeat many times.

116

Self-avoiding walk

kink-jump algorithm

117

Self-avoiding walk

local moves involving

only one monomer:

local moves involving

two monomers:

non-local move:

118

Self-avoiding walk

pivot algorithm

• Start with any SAW of length N.

• Select one monomer randomly as pivot.

• Perform a symmetry transformation on one branch of the walk around this pivot.

• If this produces no overlap accept this move, otherwise reject.

• Repeat many times.

N. Madras and A.D. Sokal (1988)

119

Self-avoiding walk

N = 108 with pivot algorithm

120

Ensemble method

Take a self-avoiding walk of N monomers.

Define the „radius of gyration“ $R_g$:

$R_g^2 = \frac{1}{2N^2}\sum_{i,j}\left(\vec{r}_i - \vec{r}_j\right)^2$

$N \propto R_g^{\,d_f}$

121

Self-avoiding walk

fractal dimension df = 4/3 in 2d

3d: df ≈ 1.701..

and df = 2 for d > 4.

122

Yardstick method

coast of Norway

Koch curve

123

SAW with attractive interaction

$E = -\epsilon\sum_{i<j}^{N} c_{ij}$,  with  $c_{ij} = 1$ if $\left|\vec{r}_i - \vec{r}_j\right| = 1$  and  $c_{ij} = 0$ otherwise

$p(\text{walk}) = \frac{1}{Z}\, e^{-E/kT}$,  $Z = \sum_{\text{walks}} e^{-E/kT}$

many ground states

124

SAW with attractive interaction

collapse of the polymer below temperature Tt

[figure: polymer size vs. kT/ε, marking the θ-point]

At temperature Tt , i.e. at the θ – point, the

polymer behaves like a random walk: N ~ (Rg)2

125

SAW with persistence

semi-flexible polymer

Correlation between direction

of first step and nth step decays

exponentially with n and the

characteristic length is

called «persistence length».

By giving more weight to

straight sections than to

curved ones, one can

introduce an effective

stiffness.

126

Polymer melts

→ reptation algorithm

127

Shortest path

weighted graph

128

Edsger Dijkstra

1959

Dijkstra algorithm

greedy algorithm, i.e. it tries to find a global optimum through many local optimizations. (No greedy algorithm can solve the travelling salesman problem.)

V. Jarník: O jistém problému minimálním [About a certain minimal problem], Práce Moravské Přírodovědecké Společnosti, 6, 1930, pp. 57–63. (in Czech)

similar problem: minimum spanning tree

129

Dijkstra algorithm

1. Assign to every node a tentative distance value: set it to zero for our initial node and to infinity for all other nodes.

2. Set the initial node as current. Mark all other nodes unvisited. Create a set of all the unvisited nodes called the unvisited set.

3. For the current node, consider all of its unvisited neighbors and calculate their tentative distances. Compare the newly calculated tentative distance to the current assigned value and assign the smaller one. For example, if the current node A is marked with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B (through A) will be 6 + 2 = 8. If B was previously marked with a distance greater than 8 then change it to 8. Otherwise, keep the current value.

4. When we are done considering all of the neighbors of the current node, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again.

5. If the destination node has been marked visited (when planning a route between two specific nodes) or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal; occurs when there is no connection between the initial node and remaining unvisited nodes), then stop. The algorithm has finished.

6. Otherwise, select the unvisited node that is marked with the smallest tentative distance, set it as the new "current node", and go back to step 3.
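A minimal sketch of these steps in Python, using a heap instead of scanning the unvisited set for the smallest tentative distance (the dict-of-neighbour-lists graph encoding is an illustrative choice):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph given as
    {node: [(neighbour, edge_length), ...]} with non-negative lengths."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    visited = set()
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)        # unvisited node with smallest tentative distance
        if u in visited:
            continue
        visited.add(u)
        for v, length in graph[u]:
            if d + length < dist[v]:      # relax the edge (step 3)
                dist[v] = d + length
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"A": [("B", 2), ("C", 6)], "B": [("A", 2), ("C", 3)], "C": [("A", 6), ("B", 3)]}
print(dijkstra(graph, "A"))               # {'A': 0.0, 'B': 2.0, 'C': 5.0}
```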

130

Dijkstra algorithm

131

Dijkstra algorithm

Bellman-Ford algorithm can also handle negative weights.

132

Simulated Annealing

• P.J.M. Van Laarhoven and E.H. Aarts, „Simulated Annealing: Theory and Applications“ (Kluwer, 1987)

• A. Das and B.K. Chakrabarti (eds.), „Quantum Annealing and Related Optimization Methods“, Lecture Notes in Physics No. 679 (Springer, 2005)

133

Simulated Annealing (SA)

S. Kirkpatrick, C.D. Gelatt and M.P. Vecchi, 1983

Given a finite set S of solutions and a cost function F: S → ℝ, search for a global minimum:

s* ∈ S* ≡ { s ∈ S : F(s) ≤ F(t)  ∀ t ∈ S }.

Difficult when S is very big (like |S| = n!).

SA is a stochastic optimization technique.

134

Travelling Salesman

Given n cities σ_i and the travelling costs c(σ_i, σ_j),
we search for the cheapest trip through all cities.

• S = { permutations of {1,…,n} }
• F(σ) = Σ_{i=1..n} c(σ_i, σ_{i+1})  for σ ∈ S  (closing the tour with σ_{n+1} ≡ σ_1)

Finding the best trajectory is an NP-complete problem, i.e. the time to solve grows faster than any polynomial in n.

135

Travelling Salesman

Define a neighbourhood on S,  N: S → 2^S,  with  σ′ ∈ N(σ) ⇔ σ ∈ N(σ′).

Make local changes: e.g. exchange two cities i and j of the tour, σ′(i) = σ(j) and σ′(j) = σ(i).

Traditional optimization algorithm: improve the costs systematically by exploring close solutions.

If F(σ′) < F(σ) replace σ → σ′, until F(σ) ≤ F(t) for all t ∈ N(σ).

Problem: one gets stuck in local minima.

136

Travelling Salesman

Simulated annealing optimization algorithm:

If F(σ′) ≤ F(σ), replace σ → σ′.

If F(σ′) > F(σ), replace σ → σ′ with probability exp(−ΔF / T), with ΔF = F(σ′) − F(σ) > 0.

T is a constant (like a temperature).

Go slowly to T → 0 in order to find the global minimum.

Applet
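A minimal simulated-annealing sketch for the travelling salesman problem in Python (the segment-reversal move, the geometric cooling schedule and all parameter values are illustrative choices):

```python
import math, random

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing(cities, T0=1.0, cooling=0.999, n_steps=200_000):
    """Anneal a random tour: accept worse tours with probability exp(-dF/T)."""
    tour = list(range(len(cities)))
    random.shuffle(tour)
    F, T = tour_length(cities, tour), T0
    for _ in range(n_steps):
        i, j = sorted(random.sample(range(len(tour)), 2))
        new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
        dF = tour_length(cities, new) - F
        if dF <= 0 or random.random() < math.exp(-dF / T):
            tour, F = new, F + dF
        T *= cooling                                          # slow cooling towards T -> 0
    return tour, F

cities = [(random.random(), random.random()) for _ in range(30)]
print(simulated_annealing(cities)[1])
```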

137

Slow cooling

[figure: cooling schedule  $T_0 > T_1 > T_2 > \dots \rightarrow 0$]

Asymptotic convergence is guaranteed and leads to an exponential convergence time.

Different cooling protocols are possible.

Complex networks

node = site = vertex = agent

link = bond = edge = connection

139

Complex Networks

• A.L. Barabasi: «Linked», Perseus, New York, 2003

• D.J. Watts: “Six Degrees”, Norton, New York, 2003

• M. Buchanan: “Nexus”, Norton, New York, 2003

• R. Pastor-Satorras and A. Vespignani: “Evolution and Structure of the Internet”, Cambridge Univ. Press, 2004

• G. Caldarelli: “Scale-free Networks: Complex Webs in Nature and Technology”, Oxford Univ. Press, 2007

• A. Barrat, M. Barthelemy and A. Vespignani, “Dynamical Processes on Complex Networks”, Cambridge Univ. Press, 2008

• M. Newman: «Networks: An Introduction», Oxford Univ. Press, 2010

Technological Networks

• Internet

• Railways

• Electrical grid

• Air traffic

• Gas pipes

• Highways

• WWW

Swiss roads

data from Helmut Honermann, Bundesamt für Raumentwicklung, ARE

142

World Airline Network (WAN)

European Power Grid

144

Natural gas pipeline network

The Internet

Social Networks

• Friendships

• Sexual relations

• Collaborations

• Criminal associations

• Financial institutions

• Gossip

• Elections

• Spam

Friendships in schools

visualization using „Pajek“

Survey interviewing 90,118 students from 84 schools in the US (Add Health Program)

Criminal network

drug traffic

in Tucson

Social Network Analysis (SNA) from Anacapa

149

Financial institutions

Natural Networks

• Brain

• Gels

• Contact networks

• Rivers

• Colloidal Structures

• Protein networks

• Food Webs

151

Brain

152

Contact Network

153

Protein interaction network

Protein-protein

interaction for

schizophrenia

associated proteins

Interaction between

protein can be defined

by how closely located on

a DNA are the genes that

encode the two proteins.

Food web of theNorth Atlantic Ocean

predator → prey

Food Chain Network

Simplest Network

square lattice

each agent has

a unique index

Erdős–Rényi (ER)

Every pair of nodes is connected with the same probability p.

157

Adjacency Matrix

158

Adjacency Matrix

Complex Structures of Networks

• Scale-free

• Small world

• Modular

Properties of networks

• Degree distribution
• Clustering coefficient
• Shortest path
• Assortativity
• Betweenness centrality
• Cliquishness
• Connectivity correlation
• Cutting bonds, bottlenecks
• …

161

Degree

degree k is the number

of connections of a site

directed networks:

Scale-free networks

degree k is the number of connections of a site

A network is called «scale-free» if its distribution of degrees follows a power law:

$P(k) \propto k^{-\gamma}$

WWW:  $\gamma_{out} \approx 2.4$,  $\gamma_{in} \approx 2.1$;   internet:  $\gamma \approx 2.4$

163

Scale-free networks: collaborations

movie actors: γ ≈ 2.3;  scientific collaborations (bipartite network of authors and papers, taken from arXiv cond-mat)

Scale-free networks: Barabási–Albert model (1999)

preferential attachment: attach new sites to a site of degree k with probability proportional to k  →  γ = 3

„configuration model“: attach new sites to a site of degree k with probability proportional to $k^{-\alpha}$

[figure: degree distribution with slope −3]
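A minimal preferential-attachment sketch in Python (the network size and m = 2 links per new node are illustrative):

```python
import random
from collections import Counter

def barabasi_albert(n_nodes, m=2):
    """Grow a network by preferential attachment: each new node attaches m links
    to existing nodes chosen with probability proportional to their degree."""
    edges = [(i, j) for i in range(m) for j in range(i + 1, m)]   # small complete core
    degree_list = [node for edge in edges for node in edge]       # node repeated once per degree
    for new in range(m, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(degree_list))                # proportional to degree
        for old in chosen:
            edges.append((new, old))
            degree_list += [new, old]
    return edges

edges = barabasi_albert(10_000)
degrees = Counter(node for edge in edges for node in edge)
print(max(degrees.values()))   # a few hubs emerge; P(k) decays like k^(-3)
```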

165

Small-world property

clustering coefficient:

$C = \frac{2\,\cdot\,(\text{number of connections between the neighbours})}{k\,(k-1)}$

shortest path:  ℓ = chemical distance between two sites

A network has the «small-world property» when 0.5 < C ≤ 1 and ℓ does not grow faster than log N.

«six degrees of separation»

Stanley Milgram;  Duncan Watts, Steven Strogatz

166

Small-world property
D.J. Watts and S.H. Strogatz, Nature, 1998

Start with a regular lattice of degree k and rewire a fraction p of its connections.

$\ell(p=0) \approx \frac{N}{2k}$,  $C(p=0) = \frac{3(k-2)}{4(k-1)}$

$\ell(p \rightarrow 1) \approx \frac{\ln N}{\ln k}$,  $C(p \rightarrow 1) \approx \frac{k}{N}$

167

Betweenness centrality

$g(v) = \frac{2}{(N-1)(N-2)}\sum_{i \neq j}\frac{\text{number of shortest paths from } i \text{ to } j \text{ passing through } v}{\text{number of shortest paths from } i \text{ to } j}$

Measures the load during traffic.

Assortativity

The assortativity A describes the correlation between the degrees of the nodes at the two ends of an edge:

$A = \frac{M^{-1}\sum_i j_i k_i - \left[M^{-1}\sum_i \tfrac{1}{2}(j_i + k_i)\right]^2}{M^{-1}\sum_i \tfrac{1}{2}\left(j_i^2 + k_i^2\right) - \left[M^{-1}\sum_i \tfrac{1}{2}(j_i + k_i)\right]^2}$

where M is the number of edges and $j_i$ and $k_i$ are the degrees of the nodes connected by edge i. The range of A is between −1 and 1. A positive value indicates that nodes with similar degrees are connected, while a negative value indicates that nodes with low degree are attached to nodes with high degree and vice versa.

Questions

• Propagation of information or epidemics

• Robustness against attack

• Synchronization of oscillators

• Clogging, gridlock of traffic

• Basins of attraction, cycles

• Optimization dynamics

• Self-repair, complementation

Apollonian packing

Apollonian network

• scale-free

• small world

• Euclidean

• space-filling

• matching

Applications

• Systems of electrical supply lines

• Friendship networks

• Computer networks

• Force networks in polydisperse packings

• Highly fractured porous media

• Networks of roads

Degree distribution

$N_n$ = number of sites at generation n;  $m(k,n)$ = number of vertices of degree k.

In the Apollonian network the degrees take the values $k = 3\cdot 2^{\,n-m}$ (sites added at generation m), plus the corner sites.

cumulative distribution:  $W(k) = \sum_{k' \ge k} m(k',n)\,/\,N_n \propto k^{-\ln 3/\ln 2} = k^{-1.585}$

scale-free:  $P(k) \propto k^{-\gamma}$,  with  $\gamma = 1 + \frac{\ln 3}{\ln 2} \approx 2.585$

Small-world properties

clustering coefficient:  $C = \frac{2\,\cdot\,(\text{number of connections between the neighbours})}{k\,(k-1)}$,  here  C ≈ 0.828

shortest path (chemical distance between two sites):  $\ell \propto \ln N$

Percolation threshold

[figure: bond percolation threshold $p_c$ plotted against 1/N]

at $p_c$: the three corner vertices $P_1$, $P_2$ and $P_3$ become connected simultaneously (bond percolation)

applications: connected porous media (Archie's law), epidemics, random failure

Electrical conductance

[results: conductances $S_1$ (from the center to all sites of generation n) and $S_2$ (between two corner sites); number of outputs per site; current distribution $P(I) \propto I^{-1.24}$; fuses and flux; malicious attack]

Ising model

[results: generation-dependent coupling constants; correlation length; free energy, entropy and specific heat are smooth; magnetization; interpretation as an opinion model]

Apollonian networks

• Extremely stable against random failure.

• Very fragile against malicious attacks.

• Only one opinion

Evolution of epidemics with healingon a mobile population

Optimizing through rewiring

185

Numerical Methods

186

Solving equations

Finding the solution (root) of an equation

f(x) = 0

is equivalent to the optimization problem of finding the minimum (or maximum) of F(x) given by

$f(x) = \frac{dF(x)}{dx} = 0$.

187

Newton method

Let $x_0$ be a first guess, then linearize around $x_0$:

$f(x_1) \approx f(x_0) + (x_1 - x_0)\, f'(x_0) = 0$

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
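A minimal sketch of the iteration in Python (the tolerance and the example function are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton method did not converge")

# example: root of x^2 - 2, starting from x0 = 1
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))   # 1.41421356...
```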

188

Secant method

If the derivative of f is not known analytically, approximate it by

$f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$

$x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$

189

Secant method

190

Secant method

Might not converge

191

Bisection method

Take two starting values x1 and x2

with f (x1) < 0 and f (x2) > 0.

Define mid-point xm as xm = (x1 + x2) / 2 .

If sign(f (xm)) = sign(f (x1))

then replace x1 by xm

otherwise replace x2 by xm.
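A minimal sketch of the bisection loop in Python (the tolerance and the example function are illustrative):

```python
def bisection(f, x1, x2, tol=1e-12):
    """Bisection: requires f(x1) < 0 and f(x2) > 0; halves the bracket each step."""
    assert f(x1) < 0.0 < f(x2)
    while abs(x2 - x1) > tol:
        xm = 0.5 * (x1 + x2)
        if f(xm) < 0.0:          # same sign as f(x1): replace x1
            x1 = xm
        else:                    # otherwise replace x2
            x2 = xm
    return 0.5 * (x1 + x2)

print(bisection(lambda x: x * x - 2.0, 0.0, 2.0))   # 1.41421356...
```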

192

Bisection method

193

Regula falsi

Take two starting values x1 and x2

with f (x1) < 0 and f (x2) > 0.

Approximate f by a straight line between

f (x1) and f (x2) and calculate its root as:

xm = (f (x1) x2 – f (x2) x1) / (f (x1) – f (x2)).

If sign(f (xm)) = sign(f (x1)), then

replace x1 by xm otherwise replace x2 by xm.

194

Regula falsi

195

Regula falsi

196

N-dimensional equations

Let $\vec{x}$ be an N-dimensional vector.

System of N coupled equations:  $\vec{f}(\vec{x}) = 0$,

corresponding to the N-dimensional optimization problem:  $\vec{\nabla} F(\vec{x}) = 0$.

197

N-dimensional Newton method

Define the Jacobi matrix:

$J_{ij}(\vec{x}) = \frac{\partial f_i(\vec{x})}{\partial x_j}$

$\vec{x}_{n+1} = \vec{x}_n - J^{-1}\,\vec{f}(\vec{x}_n)$

J must be non-singular and also well-conditioned for numerical inversion.

198

System of linear equations

$b_{11}x_1 + \dots + b_{1N}x_N = c_1$
  ⋮
$b_{N1}x_1 + \dots + b_{NN}x_N = c_N$

$B\vec{x} = \vec{c}$,   solution:  $\vec{x} = B^{-1}\vec{c}$

199

System of linear equations

$\vec{f}(\vec{x}) = B\vec{x} - \vec{c}$,  so  $J_{ij} = \frac{\partial f_i(\vec{x})}{\partial x_j} = B_{ij}$

apply the Newton method  $\vec{x}_{n+1} = \vec{x}_n - J^{-1}\vec{f}(\vec{x}_n)$:

$\vec{x}_{n+1} = \vec{x}_n - B^{-1}\left(B\vec{x}_n - \vec{c}\right) = B^{-1}\vec{c}$

exact solution in one step

200

N-dimensional secant method

If the derivatives are not known analytically:

$J_{ij}(\vec{x}) \approx \frac{f_i(\vec{x} + h_j\hat{e}_j) - f_i(\vec{x})}{h_j}$

where $h_j$ should be chosen as  $h_j \approx \sqrt{\varepsilon}\; x_j$,

ε being the machine precision, i.e. ε ≈ 10⁻¹⁶ for 64-bit floating point.

201

Other techniques

Relaxation method: rewrite $f_i(\vec{x}) = 0$ as $x_i = g_i(x_j),\; j \neq i,\; i = 1,\dots,N$.
Start with $x_i(0)$ and iterate:  $x_i(t+1) = g_i\bigl(x_j(t)\bigr)$.

Gradient methods:
1. Steepest descent
2. Conjugate gradient

202

Ordinary differential equations

First order ODE, initial value problem:

$\frac{dy}{dt} = f(y, t)$,  with  $y(t_0) = y_0$

examples:

radioactive decay:  $\frac{dN}{dt} = -\lambda N$

coffee cooling:  $\frac{dT}{dt} = -\gamma\,(T - T_{room})$

203

Euler method

explicit forward integration algorithm

choose a small Δt, Taylor expansion:

$y(t_0 + \Delta t) = y(t_0) + \Delta t\,\frac{dy}{dt}\bigg|_{t_0} + O(\Delta t^2) = y(t_0) + \Delta t\, f(y_0, t_0) + O(\Delta t^2) \equiv y_1$

convention:  $t_n = t_0 + n\,\Delta t$,  $y_n \equiv y(t_n)$

204

Euler method

Start with $y_0$ and iterate:

$y_{n+1} = y_n + \Delta t\, f(y_n, t_n) + O(\Delta t^2)$

This is the simplest finite difference method. Since the error per step goes with Δt², one needs a very small Δt, and that is numerically very expensive.
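A minimal Euler integrator in Python, applied to the coffee-cooling example above (the parameter values are illustrative):

```python
def euler(f, y0, t0, dt, n_steps):
    """Explicit Euler: y_{n+1} = y_n + dt * f(y_n, t_n)."""
    y, t, trajectory = y0, t0, [y0]
    for _ in range(n_steps):
        y = y + dt * f(y, t)
        t += dt
        trajectory.append(y)
    return trajectory

# coffee cooling dT/dt = -gamma (T - T_room), with gamma = 0.1, T_room = 20
cooling = lambda T, t: -0.1 * (T - 20.0)
print(euler(cooling, y0=90.0, t0=0.0, dt=0.1, n_steps=600)[-1])   # approaches 20
```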

205

Euler method

206

The order of a method

If the error at one time-step is $O(\Delta t^{\,n})$, the method is „locally of order n“. To cover a fixed time interval T one needs T/Δt time-steps, so that the total error is

$\frac{T}{\Delta t}\; O(\Delta t^{\,n}) = O(\Delta t^{\,n-1})$

and therefore the method is „globally of order n−1“.

The Euler method is globally of first order.

207

The Newton equation

2nd order ODE:  $m\,\frac{d^2x}{dt^2} = F(x)$

Transform it into a system of two coupled ODEs of first order:

$\frac{dx}{dt} = v$,  $\frac{dv}{dt} = \frac{F(x)}{m}$

208

Euler method for N coupled ODEs

N coupled ODEs of first order:

$\frac{dy_i}{dt} = f_i(y_1, \dots, y_N, t)$,  i = 1,…,N

iterate with a small Δt:

$y_i(t_{n+1}) = y_i(t_n) + \Delta t\, f_i\bigl(y_1(t_n), \dots, y_N(t_n),\, t_n\bigr) + O(\Delta t^2)$,  with  $t_n = t_0 + n\,\Delta t$

209

Runge - Kutta method

210

References for R-K method

• G.E. Forsythe, M.A. Malcolm and C.B. Moler, „Computer Methods for Mathematical Computations“ (Prentice Hall, Englewood Cliffs, NJ, 1977), Chapter 6

• E. Hairer, S.P. Nørsett and G. Wanner, “Solving Ordinary Differential Equations I” (Springer, Berlin, 1993)

• W.H. Press, B.P.Flannery, S.A Teukolsky and W.T. Vetterling, “Numerical Recipes” (Cambridge University Press, Cambridge, 1988), Sect. 16.1 and 16.2

• J.C. Butcher, „The Numerical Analysis of Ordinary Differential Equations“ (Wiley, New York, 1987)

• J.D. Lambert, „Numerical Methods for Ordinary Differential Equations“ (John Wiley & Sons, New York, 1991)

• L.F. Shampine, „Numerical Solution of Ordinary Differential Equations“ (Chapman and Hall, London, 1994)

211

2nd order Runge - Kutta method

212

2nd order Runge-Kutta method

Extrapolate with the Euler method to the value halfway, i.e. at t + Δt/2:

$y_i\!\left(t + \tfrac{\Delta t}{2}\right) = y_i(t) + \tfrac{\Delta t}{2}\, f\bigl(y_i(t),\, t\bigr)$

Evaluate the derivative of the Euler step at this midpoint value:

$y_i(t + \Delta t) = y_i(t) + \Delta t\, f\!\left(y_i\!\left(t + \tfrac{\Delta t}{2}\right),\, t + \tfrac{\Delta t}{2}\right) + O(\Delta t^3)$

Fourth order Runge-Kutta (RK4)

$k_1 = f(y_n,\, t_n)$
$k_2 = f(y_n + k_1\,\Delta t/2,\; t_n + \Delta t/2)$
$k_3 = f(y_n + k_2\,\Delta t/2,\; t_n + \Delta t/2)$
$k_4 = f(y_n + k_3\,\Delta t,\; t_n + \Delta t)$

$y_{n+1} = y_n + \Delta t\left(\frac{k_1}{6} + \frac{k_2}{3} + \frac{k_3}{3} + \frac{k_4}{6}\right) + O(\Delta t^5)$

applet

214

RK4

• k1 is the slope at the beginning of the interval.

• k2 is the slope at the midpoint of the interval, using slope k1 to determine the value of y at the point tn+Δt/2 using Euler method.

• k3 is again the slope at the midpoint, but now using the slope k2 to determine the y-value.

• k4 is the slope at the end of the interval, with its y-value determined using slope k3.

Then use the Euler method with

$\text{slope} = \frac{k_1 + 2k_2 + 2k_3 + k_4}{6}$
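A minimal RK4 step in Python (the harmonic-oscillator test system is an illustrative choice; y is treated as a list so the same code works for coupled ODEs):

```python
def rk4_step(f, y, t, dt):
    """One RK4 step for dy/dt = f(y, t), where y is a list of variables."""
    add = lambda a, b, s: [ai + s * bi for ai, bi in zip(a, b)]
    k1 = f(y, t)
    k2 = f(add(y, k1, dt / 2), t + dt / 2)
    k3 = f(add(y, k2, dt / 2), t + dt / 2)
    k4 = f(add(y, k3, dt), t + dt)
    return [yi + dt * (a + 2 * b + 2 * c + d) / 6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# harmonic oscillator: x' = v, v' = -x
f = lambda y, t: [y[1], -y[0]]
y, t, dt = [1.0, 0.0], 0.0, 0.01
for _ in range(1000):
    y, t = rk4_step(f, y, t, dt), t + dt
print(y)   # close to [cos(10), -sin(10)]
```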

215

q-stage Runge-Kutta method

iteration:

$y_{n+1} = y_n + \Delta t\sum_{i=1}^{q}\omega_i\, k_i$

explicit:  $k_i = f\!\left(y_n + \sum_{j=1}^{i-1}\beta_{ij}\, k_j\,\Delta t,\; t_n + \alpha_i\,\Delta t\right)$,  with  α₁ = 0

implicit:  $k_i = f\!\left(y_n + \sum_{j=1}^{q}\beta_{ij}\, k_j\,\Delta t,\; t_n + \alpha_i\,\Delta t\right)$

216

q-stage Runge-Kutta method

q = 1 is the Euler method (unique).

Butcher array (Runge-Kutta tableau), which summarizes all parameters of the general Runge-Kutta method:

 α₂  | β₂₁
 α₃  | β₃₁  β₃₂
  ⋮  |  ⋮          ⋱
 α_q | β_q1 β_q2 … β_q,q−1
-----+----------------------
     | ω₁   ω₂   …  ω_q

217

Example RK4

Butcher array:

 0   |
 1/2 | 1/2
 1/2 | 0    1/2
 1   | 0    0    1
-----+---------------------
     | 1/6  1/3  1/3  1/6

its stage is q = 4;  RK4 is of order p = 4

218

Order of the Runge-Kutta method

The R-K method is of order p if:

$y(t + \Delta t) - y(t) - \Delta t\sum_{i=1}^{q}\omega_i\, k_i = O(\Delta t^{\,p+1})$

Taylor expansion:

$y(t + \Delta t) = y(t) + \sum_{m=1}^{p}\frac{\Delta t^{\,m}}{m!}\,\frac{d^{\,m-1}f}{dt^{\,m-1}}\bigg|_{y(t),\,t} + O(\Delta t^{\,p+1})$

One must choose the $\alpha_i$, $\beta_{ij}$ and $\omega_i$ for i, j ∈ [1, q] such that the left-hand side vanishes to order $O(\Delta t^{\,m})$ for all m ≤ p.

Order of the Runge-Kutta method

$\Delta t\sum_{i=1}^{q}\omega_i\, k_i = \sum_{m=1}^{p}\frac{\Delta t^{\,m}}{m!}\,\frac{d^{\,m-1}f}{dt^{\,m-1}}\bigg|_{y(t),\,t}$  up to  $O(\Delta t^{\,p+1})$

Example q = p = 1:  $\omega_1 k_1 = f(y_n, t_n)$  gives the Euler method.

Example q = p = 2:  $\omega_1 k_1 + \omega_2 k_2 = f_n + \frac{\Delta t}{2}\,\frac{df}{dt}\bigg|_{n}$,

where the index „n“ means „at $(y_n, t_n)$“.

Example q = p = 2

$\omega_1 k_1 + \omega_2 k_2 = f_n + \frac{\Delta t}{2}\,\frac{df}{dt}\bigg|_{n} = f_n + \frac{\Delta t}{2}\left(\frac{\partial f}{\partial t}\bigg|_{n} + f_n\,\frac{\partial f}{\partial y}\bigg|_{n}\right)$

insert

$k_1 = f(y_n, t_n) = f_n$

$k_2 = f\bigl(y_n + \beta_{21} k_1\,\Delta t,\; t_n + \alpha_2\,\Delta t\bigr) = f_n + \alpha_2\,\Delta t\,\frac{\partial f}{\partial t}\bigg|_{n} + \beta_{21}\,\Delta t\, f_n\,\frac{\partial f}{\partial y}\bigg|_{n} + O(\Delta t^2)$

$\Rightarrow\quad \omega_1 + \omega_2 = 1,\quad \omega_2\,\alpha_2 = \tfrac{1}{2},\quad \omega_2\,\beta_{21} = \tfrac{1}{2}$

3 equations for 4 parameters ⟹ one-parameter family

Example q = p = 2

$y_{n+1} = y_n + \Delta t\left(\omega_1 k_1 + \omega_2 k_2\right)$

$k_1 = f(y_n, t_n)$,  $k_2 = f\!\left(y_n + \beta_{21}\,k_1\,\Delta t,\; t_n + \alpha_2\,\Delta t\right)$

222

Order of the Runge-Kutta method

To obtain a Runge-Kutta method of a given order p one needs a minimum stage $q_{min}$:

p       1  2  3  4  5  6  7  8   9        10
q_min   1  2  3  4  6  7  9  11  12 – 17  13 – 17

John Butcher

224

Example: the Lorenz equations

Edward Norton Lorenz (1963)

$y_1' = \sigma\,(y_2 - y_1)$
$y_2' = y_1\,(\rho - y_3) - y_2$
$y_3' = y_1\, y_2 - \beta\, y_3$

It is a simplified system of equations describing the 2-dimensional flow of a fluid of uniform depth in the presence of an imposed temperature difference, taking into account gravity, buoyancy, thermal diffusivity, and kinematic viscosity (friction).

σ: Prandtl number,  ρ: Rayleigh number

σ = 10, β = 8/3;  ρ is varied.  Chaos for ρ = 28.

C. Sparrow: „The Lorenz Equations: Bifurcations, Chaos and Strange Attractors“ (Springer Verlag, N.Y., 1982)

applet
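A minimal sketch integrating the Lorenz system with RK4 in Python (the step size and integration time are illustrative):

```python
def rk4_step(f, y, dt):
    add = lambda a, b, s: [ai + s * bi for ai, bi in zip(a, b)]
    k1 = f(y)
    k2 = f(add(y, k1, dt / 2))
    k3 = f(add(y, k2, dt / 2))
    k4 = f(add(y, k3, dt))
    return [yi + dt * (a + 2 * b + 2 * c + d) / 6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def lorenz(y, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    y1, y2, y3 = y
    return [sigma * (y2 - y1), y1 * (rho - y3) - y2, y1 * y2 - beta * y3]

y, dt = [1.0, 1.0, 1.0], 0.01
for _ in range(5000):               # 50 time units on the chaotic attractor
    y = rk4_step(lorenz, y, dt)
print(y)
```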

226

Lorenz attractor

227

Example: Lorenz equation

Warwick Tucker

(2002)

Chaotic solutions of the

Lorenz equation exist and

are not numerical artefacts

(14th math problem of Smale).

228

Ordinary differential equations

First order ODE, initial value problem:

$\frac{dy}{dt} = f(y, t)$,  with  $y(t_0) = y_0$

examples:

radioactive decay:  $\frac{dN}{dt} = -\lambda N$

coffee cooling:  $\frac{dT}{dt} = -\gamma\,(T - T_{room})$

229

Explicit forward integration

Start with $y_0$ and iterate:

$y_{n+1} = y_n + \Delta t\, f(y_n, t_n) + O(\Delta t^2)$

This is the simplest finite difference method. Since the error per step goes with Δt², one needs a very small Δt, and that is numerically very expensive.

Euler method

230

Error reduction

Let us consider a method of order p (error per step = $\Phi\,\Delta t^{\,p+1}$).

Let $y_1$ be the value predicted with a single step 2Δt and $y_2$ the value predicted by making two steps Δt:

$y(t + 2\Delta t) = y_1 + \Phi\,(2\Delta t)^{\,p+1} + O(\Delta t^{\,p+2})$

$y(t + 2\Delta t) = y_2 + 2\,\Phi\,\Delta t^{\,p+1} + O(\Delta t^{\,p+2})$

Define the difference  $\delta = y_2 - y_1 = \Phi\left(2^{\,p+1} - 2\right)\Delta t^{\,p+1}$.

231

Error reduction

improve systematically:

$y(t + 2\Delta t) = y_2 + \frac{\delta}{2^{\,p} - 1} + O(\Delta t^{\,p+2})$

example RK4 (p = 4):

$y(t + 2\Delta t) = y_2 + \frac{\delta}{15} + O(\Delta t^{6})$

232

Error reduction

General formulation for two different time steps Δt₁ and Δt₂:

$y = y_i + \Phi\,\Delta t_i^{\,p} + O(\Delta t_i^{\,p+1})$,  i = 1, 2

Eliminating Φ gives

$y = \frac{\Delta t_1^{\,p}\, y_2 - \Delta t_2^{\,p}\, y_1}{\Delta t_1^{\,p} - \Delta t_2^{\,p}} + O(\Delta t^{\,p+1})$.

233

Richardson extrapolation

Calculate $y_{\Delta t_i}$ for various $\Delta t_i$ and extrapolate:

$y = \lim_{\Delta t_i \rightarrow 0} y_{\Delta t_i}$

Lewis Fry Richardson

Bulirsch-Stoer method

R. Bulirsch and J. Stoer, „Introduction to Numerical Analysis“ (Springer, NY, 1992)

Extrapolate with a rational function:

$y_{\Delta t_n} = \frac{p_0 + p_1\,\Delta t_n + \dots + p_m\,\Delta t_n^{\,m}}{q_0 + q_1\,\Delta t_n + \dots + q_k\,\Delta t_n^{\,k}}$,  with  $\Delta t_n = \frac{\Delta t}{n}$,  n = 2, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, …

Choose k and m appropriately.

235

Adaptive time-step Δt

Define the error $\delta_{expected}$ you want to accept. Then measure the real error $\delta_{measured}$ and define a new time-step through

$\Delta t_{new} = \Delta t_{old}\left(\frac{\delta_{expected}}{\delta_{measured}}\right)^{\frac{1}{p+1}}$

because  $\delta = \Phi\,\Delta t^{\,p+1}$.

236

Predictor-corrector method

idea: the implicit equation

$y(t + \Delta t) = y(t) + \Delta t\;\frac{f\bigl(y(t)\bigr) + f\bigl(y(t + \Delta t)\bigr)}{2}$

make a prediction using Taylor:

$y^{P}(t + \Delta t) = y(t) + \Delta t\,\frac{dy}{dt}(t) + O(\Delta t^2)$

correct by inserting it:

$y^{C}(t + \Delta t) = y(t) + \frac{\Delta t}{2}\left[f\bigl(y(t)\bigr) + f\bigl(y^{P}(t + \Delta t)\bigr)\right] + O(\Delta t^3)$

Can be iterated by again inserting the corrected value.

237

3rd order predictor-corrector

Predict through a 3rd order Taylor expansion:

238

3rd order predictor-corrector

use the equation:  $\frac{dy^{C}}{dt}(t + \Delta t) = f\bigl(y^{P}(t + \Delta t)\bigr)$

define the error:  $\Delta = \frac{dy^{C}}{dt}(t + \Delta t) - \frac{dy^{P}}{dt}(t + \Delta t)$

correct:

$y^{C}(t + \Delta t) = y^{P}(t + \Delta t) + c_0\,\Delta$

$\frac{d^{2}y^{C}}{dt^{2}}(t + \Delta t) = \frac{d^{2}y^{P}}{dt^{2}}(t + \Delta t) + c_2\,\Delta$

$\frac{d^{3}y^{C}}{dt^{3}}(t + \Delta t) = \frac{d^{3}y^{P}}{dt^{3}}(t + \Delta t) + c_3\,\Delta$

The procedure can be repeated.

Gear coefficients:  c₀ = 3/8,  c₂ = 3/4,  c₃ = 1/6

Charles William Gear

Let $r_0 \equiv y$. Then one can define the time derivatives:

Predictor:

5th order predictor- corrector

240

5th order predictor-corrector

Corrector:

1st order eq. $\frac{dr_0}{dt} = f(\vec{r}\,)$:   error  $\Delta = f(\vec{r}^{\,P}) - r_1^{P}$,   correct  $r_i^{C} = r_i^{P} + c_i\,\Delta$

2nd order eq. $\frac{d^{2}r_0}{dt^{2}} = f(\vec{r}\,)$:   error  $\Delta = f(\vec{r}^{\,P}) - r_2^{P}$,   correct  $r_i^{C} = r_i^{P} + c_i\,\Delta$

241

1st order equation:

2nd order equation:

Gear coefficients (1971)

242

Comparison of methods

[figure: error $|y_{n+1} - y_n|$ as a function of Δt for a fixed number of iterations n, comparing 4th, 5th and 6th order predictor-corrector methods and Verlet]

243

Sets of coupled ODEs

Explicit Runge-Kutta and predictor-corrector methods can be straightforwardly generalized to a set of coupled 1st order ODEs

$\frac{dy_i}{dt} = f_i(y_1, \dots, y_N, t)$,  i = 1,…,N,

by inserting simultaneously all the values of the previous iteration.

244

Stiff differential equation

example:  $y'(t) = -15\,y(t)$,  t ≥ 0,  y(0) = 1

Euler:  $y(t + \Delta t) = y(t) - 15\,\Delta t\, y(t) + O(\Delta t^2)$

becomes unstable if Δt is not small enough.

Use an implicit method (e.g. Adams-Moulton):

$y(t + \Delta t) = y(t) + \frac{\Delta t}{2}\left[f\bigl(y(t),\, t\bigr) + f\bigl(y(t + \Delta t),\, t + \Delta t\bigr)\right] + O(\Delta t^3)$

245
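A minimal comparison of explicit Euler and the implicit (Adams-Moulton / trapezoidal) rule on this example in Python; for the linear equation y' = -15y the implicit update can be solved in closed form (the step size is an illustrative choice):

```python
def explicit_euler(dt, n_steps, y0=1.0, a=-15.0):
    y = y0
    for _ in range(n_steps):
        y = y + dt * a * y                      # becomes unstable for dt > 2/15
    return y

def implicit_trapezoidal(dt, n_steps, y0=1.0, a=-15.0):
    y = y0
    for _ in range(n_steps):
        # y_{n+1} = y_n + dt/2 * (a*y_n + a*y_{n+1}), solved for y_{n+1}
        y = y * (1.0 + 0.5 * dt * a) / (1.0 - 0.5 * dt * a)
    return y

dt, t_final = 0.25, 5.0
n = int(t_final / dt)
print(explicit_euler(dt, n), implicit_trapezoidal(dt, n))   # exact value ~ 2.6e-33
```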

Stiff sets of equations

$\vec{y}\,'(t) = K\,\vec{y}(t) + \vec{f}(t)$:  this system is called „stiff“ if the matrix K has at least one very large eigenvalue.

example: a 2×2 system whose coefficients (of order 1000 and 999) produce eigenvalues of very different magnitude, with $y_1(0) = 1$, $y_2(0) = 1$.

$\vec{y}\,'(t) = \vec{f}\bigl(\vec{y}(t), t\bigr)$:  this system is called „stiff“ if the Jacobi matrix has at least one very large eigenvalue.

Solve with an implicit method.