Bayesian networks, introduction


Page 1: Bayesian networks, introduction

Bayesian networks, introduction

Graphical models:

nodes (vertices)

links (edges)

Page 2: Bayesian networks, introduction

A graph can be

• disconnected or connected

• undirected or directed (in a directed graph the edges are one-directed arrows)

• cyclic or acyclic (in a cyclic graph it is possible to start in one node and “come back”)

Page 3: Bayesian networks, introduction

Examples:

Transport routes: (figure: a route network with nodes S, I1, I2A, I2B and F)

Acyclic, but not completely directed

Junction trees:

(Figure: a graph with nodes A, B, C, D, E, F, G, H, and the corresponding junction tree with clique nodes ABD, BDG, BEG, BCE, DFG and EGH.)

From 8 nodes to 6 nodes (Source: Wikipedia)

Page 4: Bayesian networks, introduction

Markov random field

Given the light blue nodes, the middle blue node is conditionally independent of all other nodes (the white nodes)

Page 5: Bayesian networks, introduction

Bayesian (belief) networks

A Bayesian network is a connected directed acyclic graph (DAG) in which

• the nodes represent random variables

• the links represent direct relevance relationships among variables

Examples:

X → Y

This small network has two nodes representing the random variables X and Y.

The directed link gives a relevance relationship between the two variables, meaning

Pr (Y = y | X = x, I ) ≠ Pr (Y = y | I )

Page 6: Bayesian networks, introduction

Y ← X → Z

This network has three nodes representing the random variables X, Y and Z.

The directed links give relevance relationships, meaning

Pr (Y = y | X = x, I ) ≠ Pr (Y = y | I )

Pr (Z = z | X = x, I ) ≠ Pr (Z = z | I )

but also (as will be seen below)

Pr ( Z = z | Y = y, X = x, I ) = Pr ( Z = z | X = x, I )

Page 7: Bayesian networks, introduction

Structures in a Bayesian network

There are two classifications for nodes: parent nodes and child nodes.

(Figure: a link runs from a parent node to a child node; in a larger network some nodes act as parents and some as children.)

Thus, a node can be solely a parent node, solely a child node, or both!

Page 8: Bayesian networks, introduction

Probability “tables”

Each node represents a random variable.

This random variable has either assigned probabilities (nominal scale or discrete) or an assigned probability density function (continuous scale) for its states.

For a node that is solely a parent node:

The assigned probabilities or density function are conditional on background information only (may be expressed as unconditional)

For a node that is a child node (solely or joint parent/child):

The assigned probabilities or density function are conditional on the states of its parent nodes (and on background information).

Page 9: Bayesian networks, introduction

Example:

X → Y

X has the states x1 and x2

Y has the states y1 and y2

Probability tables

Table for X:

X     Probability
x1    Pr (X = x1 | I )
x2    Pr (X = x2 | I )

Table for Y:

         X: x1                       X: x2
Y = y1   Pr (Y = y1 | X = x1, I )    Pr (Y = y1 | X = x2, I )
Y = y2   Pr (Y = y2 | X = x1, I )    Pr (Y = y2 | X = x2, I )
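As an aside, such tables are easy to hold in code. A minimal sketch in plain Python (the numbers are made-up illustrative values, not from the slides), showing how the marginal Pr (Y = y1 | I ) follows from the two tables:

```python
# A minimal sketch (plain Python, no BN library) of the probability tables
# for the two-node network X -> Y, with made-up illustrative numbers.

# Pr(X = x | I): table for the parent node X
p_x = {"x1": 0.3, "x2": 0.7}

# Pr(Y = y | X = x, I): table for the child node Y, one column per parent state
p_y_given_x = {
    "x1": {"y1": 0.9, "y2": 0.1},
    "x2": {"y1": 0.2, "y2": 0.8},
}

# Marginal Pr(Y = y1 | I) by summing over the states of X
p_y1 = sum(p_x[x] * p_y_given_x[x]["y1"] for x in p_x)
print(p_y1)  # 0.3*0.9 + 0.7*0.2 = 0.41
```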

Page 10: Bayesian networks, introduction

Example: Dyes on banknotes (from previous lectures)

Node A? with two states:
A : “Dye is present”
Ā : “Dye is absent”

Node B? with two states:
B : “Result is positive”
B̄ : “Result is negative”

Network: A? → B?

Probability table for A?:

A?    Probabilities
A     0.001
Ā     0.999

Probability table for B?:

      A?: A    A?: Ā
B     0.99     0.02
B̄     0.01     0.98
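Running this network forward mirrors a Bayes' theorem calculation by hand. A minimal sketch in plain Python (variable names are mine) of the posterior probability that the dye is present once a positive result is observed:

```python
# Sketch: posterior probability that the dye is present given a positive
# test result, using the tables above and Bayes' theorem.

p_a = 0.001                    # Pr(A | I): dye present
p_b_given_a = 0.99             # Pr(B | A, I): positive result if dye present
p_b_given_not_a = 0.02         # Pr(B | not-A, I): false positive rate

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)   # total probability
p_a_given_b = p_b_given_a * p_a / p_b                   # Bayes' theorem
print(round(p_a_given_b, 4))   # about 0.0472
```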

Page 11: Bayesian networks, introduction

More about the structure…

Ancestors and descendants:

A node X is an ancestor of a node Y and Y is in turn a descendant of X if there is a unidirectional path from X to Y

(Figure: a DAG with nodes A, B, C, D, E, F, G, H, I.)

Ancestor   Descendants
A          D, E, G, I
B          E, F, G, H, I
C          F, H, I
E          G, I
F          H, I
G          I
H          I

Page 12: Bayesian networks, introduction

Different connections:

Diverging connection: B ← A → C

Serial connection: A → B → C

Converging connection: A → C ← B

Page 13: Bayesian networks, introduction

Conditional independence and d-separation

1) Diverging connection: B ← A → C

There is a path between B and C even though it is not unidirectional.

B may be relevant for C (and vice versa).

However, if the state of A is known, this relevance is lost: the path is blocked.

B and C are conditionally independent given A
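A quick numeric check of this blocking behaviour, as a sketch in plain Python with made-up numbers (any values show the same pattern, since the joint factorizes as Pr(A) Pr(B | A) Pr(C | A)):

```python
from itertools import product

# Sketch: in the diverging connection B <- A -> C, B and C are dependent
# marginally but independent given A. All numbers are made-up.

p_a = {"a1": 0.4, "a2": 0.6}
p_b = {"a1": {"b1": 0.9, "b2": 0.1}, "a2": {"b1": 0.2, "b2": 0.8}}
p_c = {"a1": {"c1": 0.7, "c2": 0.3}, "a2": {"c1": 0.1, "c2": 0.9}}

# Joint Pr(A, B, C) = Pr(A) Pr(B | A) Pr(C | A)
joint = {(a, b, c): p_a[a] * p_b[a][b] * p_c[a][c]
         for a, b, c in product(p_a, ["b1", "b2"], ["c1", "c2"])}

def pr(**fixed):
    """Marginal probability of the given variable assignments."""
    return sum(p for (a, b, c), p in joint.items()
               if fixed.get("a", a) == a and fixed.get("b", b) == b
               and fixed.get("c", c) == c)

# Marginally, B is relevant for C: Pr(c1 | b1) != Pr(c1)
print(pr(b="b1", c="c1") / pr(b="b1"), pr(c="c1"))          # 0.55 vs 0.34
# Given A = a1 the path is blocked: Pr(c1 | b1, a1) == Pr(c1 | a1)
print(pr(a="a1", b="b1", c="c1") / pr(a="a1", b="b1"),
      pr(a="a1", c="c1") / pr(a="a1"))                      # 0.7 and 0.7
```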

Page 14: Bayesian networks, introduction

Example:

Assume the old rat Willie was caught in a trap.

We have also found a sac with wheat grains with a small hole where grains have leaked out, and we suspect that Willie made this hole.

Examining the sac and Willie we find

• traces of wheat grain in the jaw of Willie

• traces of saliva at the damage on the sac that matches the DNA of Willie.

Page 15: Bayesian networks, introduction

Note that the states of B and C are actually given, but the description gives a complete model

The whole scenario can be described with three random variables :

A with the two states:
A1: “Willie made the hole in the sac”
A2: “Willie has not been near the sac” ,

B with the two states: B1: “Traces of wheat grain found in Willie’s jaw”

B2: “No traces of wheat grain found in Willie’s jaw” ,

C with the two states: C1: “Match between saliva DNA and Willie’s DNA”

C2: “No match in DNA between saliva and Willie”

Page 16: Bayesian networks, introduction

First we assume none of the states are given:

Is B relevant for C ?

Yes, because if B1 is true, i.e. we have found wheat grains in Willie’s jaw, the conditional probability of obtaining a match in DNA would be different from the corresponding conditional probability if B2 was true.

Now assume for A that the state A2 is given, i.e. Willie was never near the sac.

Under this condition B can no longer be relevant for C as whether we find a match in DNA between the saliva trace and Willie or not can have nothing to do with the grains we have found in Willie’s jaw.

Now assume for A that the state A1 is given, i.e. Willie made the hole.

Under this condition it is tempting to think that B is relevant for C, but the relevance is actually lost. Whether we find a match in DNA or not cannot have any impact on whether we find grains in the jaw or not once we have stated that Willie made the hole.

Page 17: Bayesian networks, introduction

The scenario can be described with the Bayesian network

B ← A → C

i.e. a diverging connection

When a state of a node is assumed to be given we say the node is instantiated

In the example, once A is instantiated the relevance relationship between B and C is lost.

B and C are thus conditionally independent given a state of A

Page 18: Bayesian networks, introduction

2) Serial connection: A → B → C

There is a path between A and C (unidirectional from A to C).

A may be relevant for C (and vice versa).

If the state of B is known, this relevance is lost: the path is blocked.

A and C are conditionally independent given (a state of) B
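Again this can be checked numerically. A sketch with made-up numbers for the chain A → B → C, whose joint factorizes as Pr(A) Pr(B | A) Pr(C | B):

```python
# Sketch: numeric check for the serial connection A -> B -> C. Made-up numbers.

p_a = {"a1": 0.3, "a2": 0.7}
p_b1_given_a = {"a1": 0.8, "a2": 0.1}
p_c1_given_b = {"b1": 0.95, "b2": 0.25}

def p_c1_given(a):
    """Pr(C = c1 | A = a): sum over the states of B."""
    pb1 = p_b1_given_a[a]
    return pb1 * p_c1_given_b["b1"] + (1 - pb1) * p_c1_given_b["b2"]

# A is relevant for C as long as B is unknown:
print(p_c1_given("a1"), p_c1_given("a2"))   # 0.81 vs 0.32 -- different

# Once B is instantiated the path is blocked: by the factorization,
# Pr(C = c1 | A = a, B = b1) = Pr(C = c1 | B = b1) for every a
print(p_c1_given_b["b1"])                   # 0.95, independent of A
```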

Page 19: Bayesian networks, introduction

Example The Willie case with another description

Let

A be a random variable with states
A1: “Willie made the hole in the sac”
A2: “Willie did not make the hole” ,

B be a random variable with states
B1: “Willie left saliva on the damage”
B2: “Willie left no saliva” ,

C be a random variable with states
C1: “There is a match in DNA”
C2: “There is no match”

Assuming no state is given, there is a relevance relationship between A and C:

Pr (C1 | A1, I ) ≠ Pr (C1 | A2, I )

Page 20: Bayesian networks, introduction

Now assuming state B1 of B is given, i.e. we assume there was a contact between Willie’s jaw and the damage.

A can no longer be relevant for C as once we have stated that Willie left saliva it does not matter for C whether he made the hole or not.

The scenario can be described with the Bayesian network

A → B → C

Once B is instantiated, the relevance relationship between A and C is lost.

A and C are conditionally independent given a state of B

Page 21: Bayesian networks, introduction

3) Converging connection: A → C ← B

There is a path between A and B (not unidirectional).

A may be relevant for B (and vice versa).

However, if the state of C is (completely) unknown, this relevance does not exist: the path is blocked.

If the state of C is known (exactly, or through a modification of the state probabilities), the path is opened.

A and B are thus conditionally dependent given information about the state of C; otherwise they are independent.
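This opening of the path is the well-known "explaining away" effect, and it is easy to exhibit numerically. A sketch with made-up numbers (C tends to be in state c1 when either parent is in its first state):

```python
from itertools import product

# Sketch of "explaining away" in the converging connection A -> C <- B:
# A and B are independent until C is instantiated. Made-up numbers.

p_a = {"a1": 0.5, "a2": 0.5}
p_b = {"b1": 0.5, "b2": 0.5}
# Pr(C = c1 | A, B): c1 is likely when either parent is in state 1
p_c1 = {("a1", "b1"): 0.99, ("a1", "b2"): 0.9,
        ("a2", "b1"): 0.9, ("a2", "b2"): 0.05}

joint = {(a, b): p_a[a] * p_b[b] * p_c1[(a, b)]   # joint with C = c1
         for a, b in product(p_a, p_b)}

# Given C = c1, compare Pr(A = a1 | c1, b1) with Pr(A = a1 | c1, b2):
for b in ("b1", "b2"):
    col = {a: joint[(a, b)] for a in p_a}
    print(b, col["a1"] / sum(col.values()))   # about 0.52 vs 0.95
# The two numbers differ: once C is known, B becomes relevant for A.
```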

Page 22: Bayesian networks, introduction

Example: Paternity testing (child, mother and the true father)

Let

A be a random variable representing the mother’s genotype in a specific locus

B be a random variable representing the true father’s genotype in the same locus

C be a random variable representing the child’s genotype in that locus

A: A1 A2
B: B1 B2
C: C1 C2

(each genotype consists of two alleles)

Page 23: Bayesian networks, introduction

If we know nothing about C (C1 and C2 are both unknown), then

• information about A cannot have any impact on B, and vice versa.

If we on the other hand know the genotype of the child (C1 and C2 are both known, or one of them is), then

• knowledge of the genotype of the mother has impact on the probabilities of the different genotypes that can be possessed by the true father, since the child must have inherited half of the genotype from the mother and the other half from the father.

Bayesian network: A → C ← B

Page 24: Bayesian networks, introduction

d-separation

In a directed acyclic graph (DAG) the concept of d-separation is defined as follows.

Let SX, SY and SZ be three disjoint subsets of variables included in the DAG.

The sets SX and SY are d-separated given SZ if every path between a variable X in SX and a variable Y in SY contains either

• a serial connection through a variable Z in SZ, or a diverging connection diverging from a variable Z in SZ,

or

• a converging connection converging to a variable W that is not in SZ and of which no descendants belong to SZ.

(A checking procedure is sketched below.)
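The definition can be turned into an algorithm. One standard equivalent test (not given on the slides) is: restrict the DAG to the ancestors of SX ∪ SY ∪ SZ, moralize it (marry co-parents and drop edge directions), delete the nodes in SZ, and check whether SX and SY are disconnected. A sketch in Python, assuming the DAG is stored as a map from each node to its set of parents:

```python
# A sketch of one standard way to check d-separation (hypothetical helper,
# not from the lecture): ancestral graph + moralization + graph separation.

def d_separated(dag, xs, ys, zs):
    """dag: {node: set of parents}. True if xs and ys are d-separated given zs."""
    # 1. Keep only xs | ys | zs and their ancestors.
    keep, stack = set(), list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n not in keep:
            keep.add(n)
            stack.extend(dag.get(n, set()))
    # 2. Moralize: connect co-parents, then drop edge directions.
    adj = {n: set() for n in keep}
    for n in keep:
        ps = dag.get(n, set()) & keep
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for p in ps:
            for q in ps:
                if p != q:
                    adj[p].add(q)
    # 3. Remove the conditioning nodes and search for a connecting path.
    seen, stack = set(), list(xs - zs)
    while stack:
        n = stack.pop()
        if n in seen or n in zs:
            continue
        seen.add(n)
        stack.extend(adj[n])
    return not (seen & ys)

# Diverging B <- A -> C: blocked by {A}; converging A -> C <- B: opened by {C}
dag = {"B": {"A"}, "C": {"A"}}
print(d_separated(dag, {"B"}, {"C"}, set()))    # False
print(d_separated(dag, {"B"}, {"C"}, {"A"}))    # True
dag2 = {"C": {"A", "B"}}
print(d_separated(dag2, {"A"}, {"B"}, set()))   # True
print(d_separated(dag2, {"A"}, {"B"}, {"C"}))   # False
```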

Page 25: Bayesian networks, introduction

(Figure: a DAG with variables X1 and X2 in a red area, Y1, Y2 and Y3 in a blue area, Z1, Z2 and Z3 in a green area, and additional variables W1 and W2.)

• No direct link from the red area to the blue area, or vice versa.

• No convergence from the blue area and the red area to the green area.

Page 26: Bayesian networks, introduction

The Markov property - formal definition of a Bayesian network

Consider a variable X in a DAG.

Let PA(X ) be the set of all parents of X and DE(X ) the set of all descendants of X.

Let SY be a set of variables that does not include any variables in DE(X ), i.e. no descendants of X.

Then the DAG is a Bayesian network if and only if

Pr (X | SY, PA(X ), I ) = Pr (X | PA(X ), I )

i.e. X is conditionally independent of SY given PA(X ).

This is also known as the Markov property.

Note , by Pr(X | … ) we mean the probability of X having a particular state
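A consequence of the Markov property is the usual factorization of the joint distribution, Pr (X1, …, Xn | I ) = Π Pr (Xi | PA(Xi), I ), so any joint probability is a product of table look-ups. A minimal sketch with a hypothetical chain A → B → C and made-up numbers:

```python
# Sketch: joint probability of a Bayesian network as the product of one
# CPT look-up per node, Pr(X1,...,Xn) = prod_i Pr(Xi | PA(Xi)).
# CPTs are dicts keyed by (own state, tuple of parent states).

parents = {"A": (), "B": ("A",), "C": ("B",)}
cpt = {
    "A": {("a1", ()): 0.3, ("a2", ()): 0.7},
    "B": {("b1", ("a1",)): 0.8, ("b2", ("a1",)): 0.2,
          ("b1", ("a2",)): 0.1, ("b2", ("a2",)): 0.9},
    "C": {("c1", ("b1",)): 0.95, ("c2", ("b1",)): 0.05,
          ("c1", ("b2",)): 0.25, ("c2", ("b2",)): 0.75},
}

def joint(state):
    """Pr(state) as the product of Pr(node | its parents)."""
    p = 1.0
    for node, pars in parents.items():
        p *= cpt[node][(state[node], tuple(state[q] for q in pars))]
    return p

print(joint({"A": "a1", "B": "b1", "C": "c1"}))  # 0.3 * 0.8 * 0.95 = 0.228
```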

Page 27: Bayesian networks, introduction

(Figure: a DAG in which X has parents PA(X ) = {Y3, W1}, descendants D1, D2 and D3, and further non-descendants Y1, Y2, Y4 and W2 connected to the parents through serial links.)

Example with SY = {Y1, Y2, Y4}:

Pr (X | SY, PA(X ), I ) = Pr (X | Y3, W1, Y1, Y2, Y4, I )
= Pr (X | Y3, W1, I )   (serial links: Y1, Y2 and Y4 are blocked once Y3 is given)
= Pr (X | PA(X ), I )

Example with SW = {W2}:

Pr (X | SW, PA(X ), I ) = Pr (X | Y3, W1, W2, I )
= Pr (X | Y3, W1, I )   (serial link: W2 is blocked once W1 is given)
= Pr (X | PA(X ), I )

Page 28: Bayesian networks, introduction

Software

GeNIe (Graphical Network Interface)

• Free-of-charge software
• Powerful for building complex networks and running them with moderately large probability tables
• Download from http://genie.sis.pitt.edu/

HUGIN

• Commercial software
• Probably today’s most powerful software for Bayesian networks
• A demo version (less powerful than GeNIe) can be downloaded from www.hugin.com

Page 29: Bayesian networks, introduction

Example

A burglary was committed in a shop. On the shop floor the police have secured a shoeprint. In the home of a suspect a shoe is found with a sole pattern that matches that of the shoeprint.

In a compiled database of shoeprints it is found that this particular pattern occurs on 3 out of 657 prints.

Hypotheses (usually called propositions in forensic literature):

Hp : “The shoeprint was made by the found shoe”

Hd : “The shoeprint was made by some other shoe”

“p” in Hp stands for “Prosecutor” (incriminating proposition)

“d” in Hd stands for “Defence” (alternative to the incriminating)

Evidence:

E : “There is a match in pattern between shoeprint and the found shoe”

Page 30: Bayesian networks, introduction
Page 31: Bayesian networks, introduction

Default settings

Page 32: Bayesian networks, introduction

Table automatically set from table of H

Page 33: Bayesian networks, introduction

Setting the probability table for node E

If proposition Hp (“The shoeprint was made by the found shoe”) is true:

Pr (E = Match | Hp, I ) = 1

Pr (E = No match | Hp, I ) = 0

If proposition Hd (“The shoeprint was made by another shoe”) is true:

Pr (E = Match | Hd, I ) = θ

where θ is the proportion of shoes in the (relevant) population of shoes having the observed pattern, and

Pr (E = No match | Hd, I ) = 1 − θ

Page 34: Bayesian networks, introduction

The proportion θ is unknown, but an estimate from the database can be used:

θ̂ = 3/657 ≈ 0.0046

Page 35: Bayesian networks, introduction

Run the network

Page 36: Bayesian networks, introduction
Page 37: Bayesian networks, introduction
Page 38: Bayesian networks, introduction

Instantiate the match

Page 39: Bayesian networks, introduction

LRB = [Pr (Hp | E, I ) / Pr (Hd | E, I )] / [Pr (Hp | I ) / Pr (Hd | I )]

= (0.995421 / 0.004579) / (0.5 / 0.5) ≈ 217

On the other hand we could directly have computed

LR = 1 / (3/657) = 657/3 = 219

which is more accurate.
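The two numbers are easy to reproduce. A sketch in plain Python showing both the likelihood ratio recovered from the network's rounded posterior output and the exact value 657/3:

```python
# Sketch reproducing the slide's two numbers: the LR read off from the
# network's (rounded) posterior output, and the exact LR = 1/theta.

theta_rounded = 0.0046            # value entered in the E table
prior_p = prior_d = 0.5

# Posterior probabilities after instantiating E = "Match"
post_p = prior_p * 1.0
post_d = prior_d * theta_rounded
norm = post_p + post_d
print(post_p / norm, post_d / norm)             # 0.995421..., 0.004579...
print((post_p / post_d) / (prior_p / prior_d))  # about 217 (rounding error)

print(1 / (3 / 657))                            # exact: 219.0
```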

Page 40: Bayesian networks, introduction

Alternative network

Propositions (as before):

Hp : “The shoeprint was made by the found shoe”
Hd : “The shoeprint was made by some other shoe”

Evidence:

X : Sole pattern of the found shoe

States: q (the observed pattern)

non-q

Y : Pattern of the shoe print

States: q

non-q

Page 41: Bayesian networks, introduction

LR = Pr (X = q, Y = q | Hp, I ) / Pr (X = q, Y = q | Hd, I )

= [Pr (Y = q | X = q, Hp, I ) · Pr (X = q | Hp, I )] / [Pr (Y = q | X = q, Hd, I ) · Pr (X = q | Hd, I )]

= Pr (Y = q | X = q, Hp, I ) / Pr (Y = q | X = q, Hd, I )

since Pr (X = q | Hp, I ) = Pr (X = q | Hd, I ).

Now Pr (Y = q | X = q, Hp, I ) = 1, and under Hd the pattern of the shoeprint is independent of the pattern of the found shoe, so Pr (Y = q | X = q, Hd, I ) = Pr (Y = q | Hd, I ). Hence

LR = 1 / Pr (Y = q | Hd, I ) = 1 / (3/657) = 657/3 = 219

Page 42: Bayesian networks, introduction

in a network…

(Network: H → Y ← X)

Probability table for Y:

           H: Hp              H: Hd
           X: q    X: non-q   X: q      X: non-q
Y = q      1       0          3/657     3/657
Y = non-q  0       1          654/657   654/657

Probability table for X:

X        Probability
q        θ
non-q    1 − θ

Page 43: Bayesian networks, introduction

Note!

We need to give a probability table for X to make the software work.

However, we do not know θ, but it does not matter what value we set here.

Instantiate nodes X and Y both to q
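That the value chosen for θ is immaterial can be seen directly: X = q enters both the numerator and the denominator of the LR and cancels. A quick sketch:

```python
# Sketch: the LR in the alternative network does not depend on the value
# entered for theta in the X table. Quick check with two arbitrary choices.

def lr(theta):
    # Pr(X=q, Y=q | Hp, I) = Pr(Y=q | X=q, Hp, I) * Pr(X=q | I) = 1 * theta
    num = 1.0 * theta
    # Pr(X=q, Y=q | Hd, I) = Pr(Y=q | Hd, I) * Pr(X=q | I)
    den = (3 / 657) * theta
    return num / den

print(lr(0.5), lr(0.001))   # both 219.0 -- theta cancels
```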

Page 44: Bayesian networks, introduction

Example (more complex): In the head of the experienced examiner

Assume there is a question whether an individual has a specific disease A or another disease B.

What is observed is

The individual has an increased level of substance 1

The individual has recurrent fever attacks

The individual has light recurrent pain in the stomach

Page 45: Bayesian networks, introduction

The experience of the examining physician says

1. If disease A is present it is quite common to have an increased level of substance 1.

2. If disease B is present it is less common to have an increased level of substance 1.

3. If disease A is present it is not generally common to have recurrent fever attacks, but if there is also an increased level of substance 1 such events are very common

4. Recurrent fever attacks are quite common when disease B is present regardless of the level of substance 1

5. Recurrent pain in the stomach is generally more common when disease B is present than when disease A is present, regardless of the level of substance 1 and whether fever attacks are present or not

6. If a patient has disease A, an increased level of substance 1 and recurrent fever attacks, he/she would almost certainly have recurrent pain in the stomach. Otherwise, if disease A is present, recurrent pain in the stomach is equally common in all the remaining combinations.

Can we put this up in a network?

Page 46: Bayesian networks, introduction

Let the “disease node” be H, with states A and B

Let the “evidence” nodes be

X with states
x1 : “The individual has an increased level of substance 1”
x2 : “The individual has a normal level of substance 1”

Y with states
y1 : “The individual has recurrent fever attacks”
y2 : “The individual has no fever attacks”

Z with states
z1 : “The individual has light recurrent pain in the stomach”
z2 : “The individual has no pain in the stomach”

Page 47: Bayesian networks, introduction

(Network: H → X, H → Y, H → Z, X → Y, X → Z, Y → Z)

Probability table for X:

      H: A        H: B
x1    α1          α2
x2    1 − α1      1 − α2

Probability table for Y:

      H: A                  H: B
      X: x1      X: x2      X: x1      X: x2
y1    β1         β2         β3         β3
y2    1 − β1     1 − β2     1 − β3     1 − β3

Probability table for Z:

      H: A                                  H: B
      X: x1            X: x2                X: x1            X: x2
      y1      y2       y1      y2           y1      y2       y1      y2
z1    1       γ2       γ2      γ2           γ1      γ1       γ1      γ1
z2    0       1 − γ2   1 − γ2  1 − γ2       1 − γ1  1 − γ1   1 − γ1  1 − γ1

Page 48: Bayesian networks, introduction

The probabilities set out in the tables take into account some of the experience listed (e.g. that some probabilities are equal).

However, we need to estimate numbers for α1, α2, β1, β2, β3, γ1 and γ2.

Experience 1 & 2: α1 >> α2. Assume α1 = 0.8 and α2 = 0.2.

Experience 3: β1 high, β2 < 0.5. Assume β1 = 0.9 and β2 = 0.3.

Experience 4: Assume β3 = 0.8.

Experience 5 & 6: γ1 > γ2. Assume γ1 = 0.6 and γ2 = 0.4.

Page 49: Bayesian networks, introduction

Run network

Page 50: Bayesian networks, introduction

Instantiate the nodes X, Y and Z

Page 51: Bayesian networks, introduction

The likelihood ratio of the evidence becomes

LR = 88.235 / 11.765 ≈ 7.5

Thus the three observations combined are 7.5 times more probable if disease A is present than if disease B is present.
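The numbers can be reproduced by hand from the chain-rule factorization Pr (x1, y1, z1 | H, I ) = Pr (x1 | H, I ) Pr (y1 | x1, H, I ) Pr (z1 | x1, y1, H, I ), using the assumed parameter values (the Greek parameter names above are reconstructed placeholders):

```python
# Sketch: likelihood ratio for the disease example, computed from
# Pr(x1, y1, z1 | H) = Pr(x1 | H) * Pr(y1 | x1, H) * Pr(z1 | x1, y1, H).

p_evidence_given_A = 0.8 * 0.9 * 1.0   # alpha1 * beta1 * Pr(z1 | x1, y1, A)
p_evidence_given_B = 0.2 * 0.8 * 0.6   # alpha2 * beta3 * gamma1

print(p_evidence_given_A / p_evidence_given_B)   # LR = 0.72 / 0.096 = 7.5

# With equal priors on A and B these give the posteriors reported by the
# network: 88.235 % for disease A and 11.765 % for disease B.
post_A = p_evidence_given_A / (p_evidence_given_A + p_evidence_given_B)
print(round(100 * post_A, 3), round(100 * (1 - post_A), 3))
```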