Stochastic Constraint Programming and Global Stochastic Constraints
A modelling and solution framework for effective decision making under uncertainty
Dr. Roberto Rossi
University of Edinburgh Business School, The University of Edinburgh
Friday, November the 29th, 2013
1
What is Constraint Programming (CP)?
A subfield of Artificial Intelligence
It represents an efficient means for finding solutions to combinatorial problems in areas such as:
◮ timetabling
◮ scheduling
◮ design & configuration
◮ . . .
Closely related to a number of disciplines:
◮ logic programming & inference
◮ graph theory
◮ combinatorial optimization
◮ . . .
2
Who cares about Constraints?
◮ IBM acquired ILOG, a leading vendor of constraint technology, in 2009.
◮ Cisco acquired the ECLiPSe constraint logic programming system.
◮ St Andrews' Minion solver is used to schedule the CB1000 Nanoproteomic Analysis System (dx.doi.org/10.1007/978-3-642-04244-7_8).
◮ In Scotland there is a fairly large academic community: there are constraint programmers in Edinburgh, Glasgow, St Andrews and Dundee.
3
Constraints
Constraints represent a natural means of knowledge representation.
Examples
“x+y=30”
“The helicopter can carry one passenger”
University timetabling:
◮ “no student can attend two lectures at once”
◮ “George Square lecture theatre has a capacity of 400 students”
◮ “art history lectures require a slide projector”
4
Constraints
Constraint programming offers many expressive constraints that can be used to build a constraint model, e.g.
allDifferent(x1, . . . , xn)
Global constraint catalog
The catalogue presents a list of 354 global constraints issued from the literature in constraint programming and from popular constraint systems.
http://www.emn.fr/z-info/sdemasse/gccat/
5
Solving problems with constraints
Two phases:
1. Describe the problem to be solved as a constraint model, a format suitable for input to a constraint solver.
2. Search (automatically) for solutions to the model with a constraint solver (e.g. IBM ILOG, Choco, ECLiPSe, . . . ).
6
The (finite domain) Constraint Satisfaction Problem
Given
◮ a finite set of decision variables
◮ for each decision variable, a finite domain of potential values
◮ a finite set of constraints on the decision variables
Find an assignment of values to variables such that all constraints are satisfied.
7
The (finite domain) Constraint Satisfaction Problem
Decision variable
A decision variable corresponds to a choice that must be made in solving a problem.
Domains
Values in the domain of a decision variable correspond to the options for a particular choice.
Constraints
Represent relations that identify which assignments of values to variables are allowed.
8
Constraint solving
Typically interleaves two components:
◮ systematic search through a space of partial assignments
◮ extend an assignment to a subset of the variables incrementally
◮ backtrack (i.e. “undo”) if we establish that the current partial assignment cannot be extended to a solution
◮ constraint propagation
◮ logical inference based on constraints and current domains
◮ aims to reduce domains
A minimal sketch of this loop follows below.
9
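The following is a minimal, illustrative Python sketch of this search/propagation loop. It is not any particular solver's API: domains are plain sets, and each constraint is a filter returning "fail", "pruned" or "stable", a representation chosen here purely for illustration.

```python
def propagate(domains, constraints):
    """Apply every constraint filter until no more pruning happens."""
    changed = True
    while changed:
        changed = False
        for c in constraints:
            status = c(domains)        # "fail", "pruned" or "stable"
            if status == "fail":
                return False           # some domain was wiped out
            if status == "pruned":
                changed = True
    return True

def search(domains, constraints):
    """Depth-first search interleaved with propagation."""
    if not propagate(domains, constraints):
        return None                    # dead end: caller backtracks
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    # variable selection: branch on an unfixed variable with the
    # smallest domain; value selection: try values in increasing order
    var = min((v for v in domains if len(domains[v]) > 1),
              key=lambda v: len(domains[v]))
    for value in sorted(domains[var]):
        trial = {v: set(d) for v, d in domains.items()}  # copy, to "undo"
        trial[var] = {value}
        solution = search(trial, constraints)
        if solution is not None:
            return solution
    return None

# toy use: x < y with x, y in {1,2,3}
def x_lt_y(d):
    new_x = {a for a in d["x"] if any(a < b for b in d["y"])}
    new_y = {b for b in d["y"] if any(a < b for a in d["x"])}
    pruned = (new_x != d["x"]) or (new_y != d["y"])
    d["x"], d["y"] = new_x, new_y
    if not new_x or not new_y:
        return "fail"
    return "pruned" if pruned else "stable"

print(search({"x": {1, 2, 3}, "y": {1, 2, 3}}, [x_lt_y]))  # {'x': 1, 'y': 2}
```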
The Crystal Maze Puzzle
Thanks to Patrick Prosser
Computing Science, Glasgow University, 17 Lilybank Gardens, Glasgow G12 8RZ
10
Crystal Maze
Given the network
[Figure: a network of eight empty nodes]
Place numbers 1, . . . , 8 on the nodes making sure that
◮ each number appears exactly once
◮ consecutive numbers do not appear on adjacent nodes
11
Crystal Maze as a CSP
The Crystal Maze puzzle can be modelled as a CSP.
A constraint solver can then make the inferences you just performed by hand, automatically.
12
Crystal Maze as a CSP
Variables
[Figure: the network with nodes labelled x1, . . . , x8]
The indexing adopted is irrelevant.
13
Crystal Maze as a CSP
Domains
[Figure: the network with nodes labelled x1, . . . , x8]
All variables have the same domain: {1, 2, 3, 4, 5, 6, 7, 8}.
14
Crystal Maze as a CSP
Constraints
[Figure: the network with nodes labelled x1, . . . , x8]
◮ All values in the domains should be used: alldifferent(x1, x2, x3, x4, x5, x6, x7, x8)
◮ No consecutive numbers on adjacent nodes: |x1 − x2| > 1, |x1 − x3| > 1, . . .
15
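Since the network figure is not reproduced in this transcript, the adjacency list below is an assumption reconstructed from the standard Crystal Maze layout (two highly connected centre nodes, four corner nodes, two end nodes). With it, a brute-force Python sketch over all 8! = 40,320 permutations suffices to check the model:

```python
from itertools import permutations

# Assumed layout: 0 = left end, 1 = top-left, 2 = bottom-left,
# 3 = centre-left, 4 = centre-right, 5 = top-right,
# 6 = bottom-right, 7 = right end.
EDGES = [(0, 1), (0, 2), (0, 3),
         (1, 3), (1, 4), (1, 5),
         (2, 3), (2, 4), (2, 6),
         (3, 4), (3, 5), (3, 6),
         (4, 5), (4, 6), (4, 7),
         (5, 7), (6, 7)]

def feasible(x):
    # permutations of 1..8 already satisfy alldifferent(x1, ..., x8);
    # only the adjacency constraints |xi - xj| > 1 remain to be checked
    return all(abs(x[i] - x[j]) > 1 for i, j in EDGES)

solutions = [x for x in permutations(range(1, 9)) if feasible(x)]
print(len(solutions), solutions[0])
# e.g. one solution: (7, 3, 4, 1, 8, 5, 6, 2)
```

A constraint solver avoids this enumeration by interleaving propagation with search, as the following slides illustrate.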
Solving the Crystal Maze Puzzle
But how would a constraint solver reason about it?
16
Solving the Crystal Maze Puzzle
Which nodes are the hardest to label?
[Figure: the empty network]
17
Solving the Crystal Maze Puzzle
Which are the least constraining values to use?
[Figure: the empty network]
Recall that domains are {1, 2, 3, 4, 5, 6, 7, 8}.
19
Solving the Crystal Maze Puzzle
Which are the least constraining values to use?
[Figure: 1 and 8 placed on the two centre nodes; the other nodes are empty]
Recall that domains are {1, 2, 3, 4, 5, 6, 7, 8}.
20
Solving the Crystal Maze Puzzle
Exploit symmetry!
It would be unnecessary to consider the following symmetric assignment.
[Figure: the mirror assignment, with 8 and 1 swapped on the centre nodes]
21
Solving the Crystal Maze Puzzle
Constraint propagation
Exploit constraint propagation to eliminate infeasible values from domains associated with other nodes (i.e. decision variables).
[Figure: 1 and 8 on the centre nodes; the remaining nodes are empty]
22
Solving the Crystal Maze Puzzle
Constraint propagation
Exploit constraint propagation to eliminate infeasible values from domains associated with other nodes (i.e. decision variables).
[Figure: as before; one node annotated with its domain {1,2,3,4,5,6,7,8}]
23
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: that node's domain shrinks to {2,3,4,5,6,7}]
24
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: the node is adjacent to both 1 and 8; its domain is still {2,3,4,5,6,7}]
25
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: values 2 and 7 removed; the domain becomes {3,4,5,6}]
26
Solving the Crystal Maze Puzzle
Constraint propagation
Symmetry again. . .
[Figure: by symmetry, a second node is annotated with domain {3,4,5,6}]
27
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: a third node annotated with {1,2,3,4,5,6,7,8}; two nodes carry {3,4,5,6}]
28
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: the third node's domain shrinks to {2,3,4,5,6,7}]
29
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: the third node is adjacent to both 1 and 8; its domain is {2,3,4,5,6,7}]
30
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: the third node's domain shrinks to {3,4,5,6}]
31
Solving the Crystal Maze Puzzle
Constraint propagation
Symmetry. . .
[Figure: by symmetry, four nodes now carry domain {3,4,5,6}]
32
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: the two end nodes annotated with {1,2,3,4,5,6,7,8}; four nodes carry {3,4,5,6}]
33
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: the end nodes' domains shrink to {2,3,4,5,6,7}; four nodes keep {3,4,5,6}]
34
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: each end node is adjacent to one centre node; domains {2,3,4,5,6,7}]
35
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: the end-node domains shrink to {3,4,5,6,7} and {2,3,4,5,6}]
36
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: the four inner nodes' domains {3,4,5,6} must between them cover all of 3, 4, 5, 6]
37
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: the end nodes are therefore fixed to 7 and 2; four nodes keep {3,4,5,6}]
38
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: unchanged; the adjacency of 7 and 2 is about to be enforced]
39
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: the inner domains shrink to {4,5,6}, {4,5,6}, {3,4,5}, {3,4,5}]
40
Solving the Crystal Maze Puzzle
Search
Guess, but be prepared to backtrack. . .
[Figure: 7, 1, 8, 2 placed; inner domains {4,5,6}, {4,5,6}, {3,4,5}, {3,4,5}]
41
Solving the Crystal Maze Puzzle
Search
Guess, but be prepared to backtrack. . .
[Figure: a {3,4,5} node guessed to be 3; remaining domains {4,5,6}, {4,5,6}, {3,4,5}]
42
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: as before, with 3 placed; alldifferent is about to remove 3 elsewhere]
43
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: the other {3,4,5} node's domain shrinks to {4,5}]
44
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: unchanged; the adjacency of 3 is about to be enforced]
45
Solving the Crystal Maze Puzzle
Constraint propagation
Enforce adjacency restrictions.
[Figure: the node adjacent to 3 loses 4; domains are now {5,6}, {4,5,6}, {4,5}]
46
Solving the Crystal Maze Puzzle
Search
Guess, but be prepared to backtrack. . .
[Figure: domains {5,6}, {4,5,6}, {4,5}]
47
Solving the Crystal Maze Puzzle
Search
Guess, but be prepared to backtrack. . .
[Figure: the {5,6} node guessed to be 5; remaining domains {4,5,6}, {4,5}]
48
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: as before, with 5 placed]
49
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: 5 removed from the remaining domains, leaving {4,6} and {4}]
50
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: the {4} node is fixed to 4; one domain {4,6} remains]
51
Solving the Crystal Maze Puzzle
Constraint propagation
Values 1 and 8 appear exactly once.
[Figure: 4 removed from the last domain, leaving {6}]
52
Solving the Crystal Maze Puzzle
Constraint propagation
Solution.
[Figure: the completed network: 7, 4, 1, 3, 6, 8, 5, 2]
53
Solving the Crystal Maze Puzzle
Generally, it doesn’t go as smoothly as this.
Search will often involve backtracking.
54
Search
Search is often carried out by using “standard” depth-first search (DFS) procedures.
At each node of the search tree two heuristics operate:
◮ variable selection: what variable do we branch on (e.g. smallest domain, most constrained, . . . )?
◮ value selection: what value in the domain do we try first (e.g. increasing, decreasing, . . . )?
In general, there is no easy way to decide what strategies are good for a given problem!
55
Propagation
Propagation operates by exploiting dedicated filtering algorithms that enforce some degree of “consistency” by removing provably infeasible values from the domains of the constrained variables.
Possible “levels” of consistency (from weaker to stronger) are:
◮ arc consistency
◮ bound consistency
◮ range consistency
◮ . . .
◮ hyper-arc consistency
56
Propagation
Arc consistency
Arc consistency is the simplest (and weakest) possible form of consistency that can be enforced; it is enforced by checking each pair of variables against a given constraint that constrains them — as we did in the Crystal Maze example!
57
Propagation
Arc consistency: example
[Figure: a triangle of pairwise ≠ constraints; x1, x2 ∈ {1, 2}, x3 ∈ {1, 2, 3}]
58
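Here is a minimal Python sketch of the arc-consistency "revise" step on this example (an illustration, not any solver's actual API):

```python
# Drop from dom_x every value with no support in dom_y under one
# binary constraint. This is the core step of AC-3-style algorithms.
def revise(dom_x, dom_y, check):
    """check(a, b) is True iff (x = a, y = b) satisfies the constraint."""
    removed = {a for a in dom_x if not any(check(a, b) for b in dom_y)}
    dom_x -= removed
    return bool(removed)

neq = lambda a, b: a != b
d = {"x1": {1, 2}, "x2": {1, 2}, "x3": {1, 2, 3}}
# Taking the != constraints one pair at a time, nothing can be pruned:
# every value of x3 still has a support in x1 and in x2 separately.
print(revise(d["x3"], d["x1"], neq), d["x3"])  # False {1, 2, 3}
```

Note that pairwise reasoning prunes nothing here; the hyper-arc consistent alldifferent filtering shown later reduces x3 to {3}.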
Propagation
Bound consistency
Bound consistency operates by pruning the endpoints (bounds) of domains; a typical example involves linear inequalities.
59
Propagation
Bound consistency: example
Variables & Domains:
x1 ∈ {1, . . . , 3}
x2 ∈ {5, . . . , 9}
x3 ∈ {1, . . . , 2}
Constraints:
x1 + x2 + x3 ≥ 15
Reasoning:
x1 ≥ 15 − max(x2) − max(x3)
where max(xi) denotes the maximum value in the domain of xi.
60
Propagation
Bound consistency: example
Variables & Domains:
x1 ∈ {1, . . . , 3}
x2 ∈ {5, . . . , 9}
x3 ∈ {1, . . . , 2}
Constraints:
x1 + x2 + x3 ≥ 15
Reasoning:
x1 ≥ 15 − 9 − 2 = 4, and 4 ∉ {1, . . . , 3}: inconsistent!
where 9 and 2 are the maximum values in the domains of x2 and x3.
61
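The same reasoning in executable form, a minimal sketch for interval domains (illustrative, not a solver API):

```python
# Tighten each lower bound of x_i using x_i >= rhs - (sum of the other
# variables' maxima); report inconsistency when a bound exceeds the max.
def bc_lower_bounds(bounds, rhs):
    total_max = sum(hi for _, hi in bounds)
    tightened = []
    for lo, hi in bounds:
        lb = max(lo, rhs - (total_max - hi))
        if lb > hi:
            return None            # empty interval: inconsistent
        tightened.append((lb, hi))
    return tightened

# The slide's data: x1 in [1,3], x2 in [5,9], x3 in [1,2], sum >= 15.
print(bc_lower_bounds([(1, 3), (5, 9), (1, 2)], 15))  # None: inconsistent
```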
Propagation
Hyper-arc consistency
This is the strongest possible level of consistency.
Informally speaking, it guarantees that it is always possible to extend the current partial assignment to a solution via backtrack-free search.
If the problem of interest is NP-hard, enforcing hyper-arc consistency on it is also NP-hard.
We shall demonstrate how it is possible to enforce hyper-arc consistency for the alldifferent(x1, . . . , xn) constraint, which ensures that x1, . . . , xn take pairwise different values.
Jean-Charles Régin: A Filtering Algorithm for Constraints of Difference in CSPs. AAAI 1994: 362-367
62
Propagation
Hyper-arc consistency: example
[Figure: the same triangle of pairwise ≠ constraints; x1, x2 ∈ {1, 2}, x3 ∈ {1, 2, 3}]
63
Propagation
Hyper-arc consistency: example
Reformulate the problem as that of finding a Maximum Matching in a Bipartite Graph — known to be solvable in polynomial time.
[Figure: bipartite graph linking variables x1, x2, x3 to values 1, 2, 3]
64
Propagation
Hyper-arc consistency: example
Find a maximum matching. If the cardinality of the maximum matching is less than the number of variables, then there is no feasible assignment; hence we remove all values from the domains.
[Figure: the bipartite graph of variables and values]
65
Propagation
Hyper-arc consistency: example
In this case there is a maximum matching of size 3, therefore there is at least one feasible assignment (displayed).
[Figure: the bipartite graph with a size-3 matching highlighted]
66
Propagation
Hyper-arc consistency: example
However, to enforce hyper-arc consistency, in principle we should be able to identify all possible maximum matchings of size 3, so that we can retain every value in the decision variable domains that belongs to at least one maximum matching.
Fortunately, to find out whether a value in a decision variable domain belongs to at least one maximum matching, there is no need to enumerate all maximum matchings.
67
Propagation
Hyper-arc consistency: example
To understand why, we can look at the extended graph used by the Ford-Fulkerson algorithm — which can be used to determine a maximum matching — and focus on its strongly connected components (SCC).
[Figure: the bipartite graph extended with a source s and a sink t]
68
Propagation
Hyper-arc consistency: example
A directed graph is said to be strongly connected if every vertex is reachable from every other vertex.
The strongly connected components of an arbitrary directed graph form a partition into subgraphs that are strongly connected; these can be computed in linear time.
[Figure: the extended graph with its strongly connected components marked]
69
Propagation
Hyper-arc consistency: example
Observation: by flipping every edge in a strongly connectedcomponent we obtain a new maximum matching.
[Figure: one strongly connected component's edges flipped, yielding an alternative maximum matching]
70
Propagation
Hyper-arc consistency: example
Observation: by flipping every edge in a strongly connectedcomponent we obtain a new maximum matching.
[Figure: another flip, producing yet another maximum matching]
71
Propagation
Hyper-arc consistency: example
This means that all edges that connect nodes within a strongly connected component also belong to a maximum matching; hence the respective values in the decision variable domains should be retained.
[Figure: edges inside strongly connected components are retained]
72
Propagation
Hyper-arc consistency: example
Conversely, all edges that connect nodes belonging to different strongly connected components and that do not belong to the current matching cannot belong to any maximum matching (note that strongly connected components are all saturated); hence the respective values in the decision variable domains should be removed.
[Figure: edges between different strongly connected components, not in the matching, are removed]
73
Propagation
Hyper-arc consistency: example
[Figure: the triangle after filtering; x1, x2 ∈ {1, 2} and x3's domain is reduced to {3}]
74
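The preceding walkthrough can be condensed into a short, runnable Python sketch, a simplified version of Régin's filtering: Kuhn's augmenting-path algorithm finds a maximum matching, the graph is oriented (matched edges value to variable, unmatched edges variable to value), Tarjan's algorithm finds the SCCs, and a value survives only if its edge is matched or lies inside an SCC. The sketch assumes a matching saturating all variables exists, as in this example, and omits the extra alternating-path rule needed when there are unmatched (free) values.

```python
def max_matching(domains):
    """Kuhn's augmenting-path algorithm; returns {variable: value}
    or None when no matching saturates all variables."""
    match_of_value = {}
    def augment(var, seen):
        for val in domains[var]:
            if val not in seen:
                seen.add(val)
                if val not in match_of_value or augment(match_of_value[val], seen):
                    match_of_value[val] = var
                    return True
        return False
    for var in domains:
        if not augment(var, set()):
            return None
    return {var: val for val, var in match_of_value.items()}

def tarjan_scc(succ):
    """Strongly connected components of a directed graph {node: set}."""
    index, low, stack, on_stack, sccs, n = {}, {}, [], set(), [], [0]
    def visit(u):
        index[u] = low[u] = n[0]; n[0] += 1
        stack.append(u); on_stack.add(u)
        for v in succ.get(u, ()):
            if v not in index:
                visit(v); low[u] = min(low[u], low[v])
            elif v in on_stack:
                low[u] = min(low[u], index[v])
        if low[u] == index[u]:
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == u:
                    break
            sccs.append(comp)
    for u in list(succ):
        if u not in index:
            visit(u)
    return sccs

def regin_alldifferent(domains):
    matching = max_matching(domains)
    if matching is None:
        return None                    # constraint fails
    # matched edges run value -> variable, the rest variable -> value
    succ = {("x", var): set() for var in domains}
    for var, dom in domains.items():
        for val in dom:
            succ.setdefault(("v", val), set())
            if matching[var] == val:
                succ[("v", val)].add(("x", var))
            else:
                succ[("x", var)].add(("v", val))
    comp = {}
    for i, c in enumerate(tarjan_scc(succ)):
        for node in c:
            comp[node] = i
    return {var: {val for val in dom
                  if matching[var] == val
                  or comp[("x", var)] == comp[("v", val)]}
            for var, dom in domains.items()}

print(regin_alldifferent({"x1": {1, 2}, "x2": {1, 2}, "x3": {1, 2, 3}}))
# -> {'x1': {1, 2}, 'x2': {1, 2}, 'x3': {3}}
```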
Decision making under uncertainty
Can we extend constraint programming so that it can be applied to problems of decision making under uncertainty?
75
Stochastic Constraint Programming
European Conference on Artificial Intelligence, 2002; Constraints, 2006
76
Stochastic Constraint Programming (Walsh, 2002)
A Stochastic Constraint Satisfaction Problem (SCSP) is a 7-tuple
〈V, S,D,P,C, θ, L〉
1. V = {v1, . . . , vn} is a set of decision variables
2. D is a function mapping each decision variable in V to a domain of potential values
3. S = {s1, . . . , sn} is a set of random variables
4. P is a function mapping each random variable in S to a probability distribution
5. C is a set of (chance) constraints, possibly involving random variables
6. θh is a threshold probability associated with chance-constraint h
7. L = [〈V1, S1〉, . . . , 〈Vi, Si〉, . . . , 〈Vm, Sm〉] is a list of decision stages.
By considering an objective function f(V, S) we obtain a Stochastic Constraint Optimization Problem (SCOP).
77
Stochastic Constraint Programming (Walsh, 2002)
An SCSP
◮ V1 = {x1}, V2 = {x2}
◮ D(x1) = {1, . . . , 4}, D(x2) = {3, . . . , 6}
◮ S1 = {s1}, S2 = {s2}
◮ P (s1) = {5(0.5), 4(0.5)}, P (s2) = {3(0.5), 4(0.5)} (each entry is value(probability))
◮ C = { c1 : Pr{s1x1 + s2x2 ≥ 30} ≥ 0.75, c2 : Pr{s2x1 = 12} ≥ 0.5 }
◮ L = [〈V1, S1〉, 〈V2, S2〉]
78
Stochastic Constraint Programming (Walsh, 2002)
A solution to the previous SCSP
[Figure: policy tree. Root decision x1 = 3; after observing s1 = 5 choose x2 = 4, after observing s1 = 4 choose x2 = 6. Each of the four scenarios has probability 0.25. Checks per scenario:
c1: 5·3 + 4·4 ≥ 30, c2: 4·3 = 12
c1: 4·3 + 4·6 ≥ 30, c2: 4·3 = 12
c1: 5·3 + 3·4 < 30, c2: 3·3 ≠ 12
c1: 4·3 + 3·6 ≥ 30, c2: 3·3 ≠ 12
So c1 holds with probability 0.75 and c2 with probability 0.5.]
79
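This policy can be verified mechanically. A small Python sketch by scenario enumeration, under the stated independent distributions (the dictionary policy_x2 encodes the second-stage decision):

```python
from itertools import product

x1 = 3
policy_x2 = {5: 4, 4: 6}            # observed s1 -> chosen x2

p_c1 = p_c2 = 0.0
for (s1, p1), (s2, p2) in product([(5, 0.5), (4, 0.5)],
                                  [(3, 0.5), (4, 0.5)]):
    pr = p1 * p2                    # 0.25 per scenario
    x2 = policy_x2[s1]
    if s1 * x1 + s2 * x2 >= 30:     # chance constraint c1
        p_c1 += pr
    if s2 * x1 == 12:               # chance constraint c2
        p_c2 += pr

print(p_c1, p_c2)                   # 0.75 0.5: both thresholds met
```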
Stochastic Constraint Programming (Walsh, 2002)
An SCOP: Static Stochastic Knapsack
◮ V = {x1, . . . , x3}
◮ D(xi) = {0, 1} ∀i ∈ {1, . . . , 3}
◮ S = {w1, . . . , w3}
◮ P (w1) = {5(0.5), 8(0.5)}, P (w2) = {3(0.5), 9(0.5)}, P (w3) = {15(0.5), 4(0.5)}
◮ C = {Pr(w1x1 + w2x2 + w3x3 ≤ C) ≥ θ}
◮ L = [〈V, S〉]
◮ f(x1, . . . , x3) = E[ p1x1 + p2x2 + p3x3 − s · max(0, ∑_{i=1}^{3} wixi − 20) ]
80
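A brute-force Python check of this SCOP by scenario enumeration. The item values, penalty and capacity are assumptions borrowed from the Stochastic OPL model two slides ahead (value = [8, 15, 10], p = 2, c = 10, θ = 0.2); the 20 appearing in the slide's objective is replaced by that same capacity here.

```python
from itertools import product

weights = [[5, 8], [3, 9], [15, 4]]      # two equiprobable outcomes each
values, p, C, theta = [8, 15, 10], 2, 10, 0.2

def evaluate(x):
    """Expected profit of a 0/1 selection, or None if the chance
    constraint Pr(load <= C) >= theta is violated."""
    profit = sum(v * xi for v, xi in zip(values, x))
    scenarios = list(product(*weights))  # 8 scenarios, prob 1/8 each
    sat = overflow = 0.0
    for w in scenarios:
        load = sum(wi * xi for wi, xi in zip(w, x))
        sat += (load <= C) / len(scenarios)
        overflow += max(0, load - C) / len(scenarios)
    return None if sat < theta else profit - p * overflow

best = max((e, x) for x in product([0, 1], repeat=3)
           if (e := evaluate(x)) is not None)
print(best)  # (17.0, (1, 1, 0)) with these data
```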
Stochastic Constraint Programming (Walsh, 2002)
An SCOP: Dynamic Stochastic Knapsack
◮ V = {x1, . . . , x3}
◮ D(xi) = {0, 1} ∀i ∈ {1, . . . , 3}
◮ S = {w1, . . . , w3}
◮ P (w1) = {5(0.5), 8(0.5)}, P (w2) = {3(0.5), 9(0.5)}, P (w3) = {15(0.5), 4(0.5)}
◮ C = {Pr(w1x1 + w2x2 + w3x3 ≤ C) ≥ θ}
◮ L = [〈{x1}, {w1}〉, 〈{x2}, {w2}〉, 〈{x3}, {w3}〉]
◮ f(x1, . . . , x3) = E[ p1x1 + p2x2 + p3x3 − s · max(0, ∑_{i=1}^{3} wixi − 20) ]
81
Stochastic OPL (Tarim et al., 2006)
A language specifically introduced for modeling decision problems under uncertainty.
It captures several high-level concepts that facilitate the process of modeling uncertainty:
◮ stochastic variables (independent or conditional distributions)
◮ several probabilistic measures for the objective function(expectation, variance, etc.)
◮ chance-constraints
◮ decision stages
◮ ...
82
Stochastic OPL (Tarim et al., 2006)
Static Stochastic Knapsack
int N = 3;
int c = 10;
int p = 2;
float θ = 0.2;
range Object [1..3];
int value[Object] = [8,15,10];
stoch int weight[Object] = [<5(0.5),8(0.5)>,<3(0.5),9(0.5)>,<15(0.5),4(0.5)>];
var int+ X[Object] in 0..1;
stages = [<X,weight>];
var int+ z;
maximize sum(i in Object) X[i]*value[i] - p*z
subject to {
  z = max(0, expected(sum(i in Object) X[i]*weight[i] - c));
  prob(sum(i in Object) X[i]*weight[i] - c ≤ 0) ≥ θ;
};
83
Stochastic OPL (Tarim et al., 2006)
Dynamic Stochastic Knapsack
int N = 3;
int c = 10;
int p = 2;
float θ = 0.2;
range Object [1..3];
int value[Object] = [8,15,10];
stoch int weight[Object] = [<5(0.5),8(0.5)>,<3(0.5),9(0.5)>,<15(0.5),4(0.5)>];
var int+ X[Object] in 0..1;
stages = [<X[1],weight[1]>,<X[2],weight[2]>,<X[3],weight[3]>];
var int+ z;
maximize sum(i in Object) X[i]*value[i] - p*z
subject to {
  z = max(0, expected(sum(i in Object) X[i]*weight[i] - c));
  prob(sum(i in Object) X[i]*weight[i] - c ≤ 0) ≥ θ;
};
83
Stochastic OPL (Tarim et al., 2006)
By using the approach discussed in
S. A. Tarim, S. Manandhar and T. Walsh, Stochastic Constraint Programming: A Scenario-Based Approach, Constraints, Vol. 11, pp. 53-80, 2006
it is possible to compile any SCSP/SCOP down to a deterministic equivalent CSP.
84
Stochastic OPL (Tarim et al., 2006)
Static Stochastic Knapsack — deterministic equivalent model
int nbWorlds = 8;
range Worlds 1..nbWorlds;
int nbItems = 3;
range Items 1..nbItems;
float s = 0.2;
float W[Worlds,Items] =
  [[5,3,15],[5,3,4],[5,9,15],[5,9,4],[8,3,15],[8,3,4],[8,9,15],[8,9,4]];
float Pr[Worlds] = [0.125,0.125,0.125,0.125,0.125,0.125,0.125,0.125];
float r[Items] = [8,15,10];
float p = 2;
float C = 10;
var float+ z[Worlds];
var int+ w[Worlds] in 0..1;
var int+ x[Items] in 0..1;
maximize sum(i in Items) x[i]*r[i] - p*(sum(j in Worlds) Pr[j]*z[j])
subject to {
  forall(j in Worlds) z[j] >= (sum(i in Items) W[j,i]*x[i]) - C;
  forall(j in Worlds) (sum(i in Items) W[j,i]*x[i] <= C) <=> w[j] == 1;
  sum(j in Worlds) Pr[j]*w[j] >= s;
};
85
Stochastic OPL (Tarim et al., 2006)
Advantages
◮ Seamless modeling under uncertainty
◮ Stochastic OPL not necessarily linked to CP
Drawbacks
◮ Size of the compiled model
◮ Constraint propagation not fully supported
86
Decision making under uncertainty
Can we reuse existing propagation algorithms, e.g. alldifferent, in a stochastic setting?
87
Global stochastic constraints & Global chance constraints
International Conference on Principles and Practice of Constraint Programming, 2008; Artificial Intelligence, 2012
88
Global stochastic constraints & Global chance constraints
A global stochastic constraint (Rossi et al., 2008) represents a relation among a non-predefined number of decision and random variables.
It implements dedicated filtering algorithms based on
◮ feasibility reasoning
◮ optimality reasoning
Stochastic Programming
Pr{ ∑_{i=1}^{k} wixi ≤ C } ≥ θ
xi → decision variable
wi → random variable or constant parameter
C → random variable or constant parameter
θ → satisfaction probability
Global Chance Constraint
stochLinIneq(x1, . . . , xk, w1, . . . , wk, C, θ);
89
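As a concrete illustration of feasibility reasoning, here is a naive, enumeration-based Python sketch of filtering for such a constraint; the function names are invented for illustration, and (Rossi et al., 2008; Hnich et al., 2012) describe far more efficient dedicated algorithms. A value is pruned when, even choosing the other variables optimistically scenario by scenario, the satisfaction probability cannot exceed θ, which is exactly the reasoning walked through on the following frames:

```python
from itertools import product

def satisfaction_upper_bound(domains, weights, C):
    """Optimistic bound on Pr{sum_i w_i x_i <= C}: in each scenario,
    check whether SOME assignment from the current domains fits."""
    scenarios = list(product(*weights))
    hits = sum(
        any(sum(wi * xi for wi, xi in zip(w, x)) <= C
            for x in product(*domains))
        for w in scenarios)
    return hits / len(scenarios)

def filter_stoch_lin_ineq(domains, weights, C, theta):
    """Keep value v for x_k only if fixing x_k = v still allows the
    optimistic satisfaction probability to exceed theta."""
    filtered = []
    for k, dom in enumerate(domains):
        kept = set()
        for v in dom:
            trial = list(domains)
            trial[k] = {v}
            if satisfaction_upper_bound(trial, weights, C) > theta:
                kept.add(v)
        filtered.append(kept)
    return filtered

domains = [{0, 1}, {0, 1}, {0, 1}]
weights = [[5, 8], [3, 9], [15, 4]]      # equiprobable outcomes
print(filter_stoch_lin_ineq(domains, weights, 10, 0.5))
# -> [{0, 1}, {0, 1}, {0}]: x3 = 1 cannot reach Pr > 0.5
```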
Filtering algorithms for global chance constraints: an example
[Figure: solution tree for Pr(w1x1 + w2x2 + w3x3 ≤ 10) > 0.5 with x1 = 0, x2 = 0, x3 = 1 fixed; scenarios branch on w1 ∈ {5, 8}, w2 ∈ {3, 9}, w3 ∈ {15, 4}; the four w3 = 15 leaves are marked infeasible (X)]
90
Filtering algorithms for global chance constraints: an example
[Figure: search tree with open domains x1 = {0,1}, x2 = {0,1}, x3 = {0,1}, alongside the scenario tree]
90
Filtering algorithms for global chance constraints: an example
[Figure: checking the trial domains x1 = {0,1}, x2 = {0,1}, x3 = {0}]
90
Filtering algorithms for global chance constraints: an example
[Figure: trial domains x1 = {0,1}, x2 = {0,1}, with x3 = {0} and x3 = {0,1} compared]
90
Filtering algorithms for global chance constraints: an example
[Figure: the check repeated scenario by scenario for x3 = {0} versus x3 = {0,1}]
90
Filtering algorithms for global chance constraints: an example
[Figure: after branching on x1 = 1, propagation leaves x1 = {1}, x2 = {0,1}, x3 = {0}]
90
Filtering algorithms for global chance constraints: an example
[Figure: domains x1 = {1}, x2 = {0,1}, x3 = {0} under scrutiny]
90
Filtering algorithms for global chance constraints: an example
[Figure: the trial x2 = {0} is examined next]
90
Filtering algorithms for global chance constraints: an example
[Figure: the check repeated scenario by scenario for x2 = {0}]
90
Filtering algorithms for global chance constraints: an example
[Figure: propagation completes with x1 = {1}, x2 = {0}, x3 = {0}]
90
Filtering algorithms for global chance constraints: an example
[Figure: a policy tree for Pr(w1x1 + w2x2 + w3x3 ≤ 10) ≥ 0.5 in which later decisions depend on observed weights; x1 = {0,1} at the root, with branches such as x2 = 1 and x3 = 1]
90
Filtering algorithms for global chance constraints: an example
[Figure: checking the trial domains x1 = {}, x2 = {1}, x3 = {1}]
90
Filtering algorithms for global chance constraints: an example
[Figure: trial domains x1 = {0}, x2 = {1}, x3 = {1}]
90
Filtering algorithms for global chance constraints: an example
[Figure: trial domains x1 = {0}, x2 = {1}, x3 = {0}]
90
Filtering algorithms for global chance constraints: an example
[Figure: the candidate partial policies compared, e.g. x1 = {0}, x2 = {1}, x3 = {0}]
90
Filtering algorithms for global chance constraints: an example
[Figure: the full set of candidate partial policies, from x1 = {} up to x1 = {0}]
90
Filtering algorithms for global chance constraints: an example
[Figure: a satisfying policy: x1 = 0; x2 = 1 on each w1-branch; x3 = 0 or 1 depending on the observed scenario]
90
Global chance constraints vs scenario-based approach
Random SCSPs in (Hnich et al., 2012)
91
Global stochastic constraints & Global chance constraints
What are the advantages of modelling problems of decision making under uncertainty via global stochastic constraints?
92
Deterministic multiprocessor scheduling (CSP)
Constraints:
(1) cumulative(s, e, t, c, m)
Decision variables:
sk ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
ek ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
93
Stochastic multiprocessor scheduling (SCSP)
Uncertain processing time
Constraints:
(1) Pr {cumulative(s, e, t, c,m)} ≥ θ
Decision variables:
sk ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
ek ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
Stochastic variables:
tk → processing time of order k, ∀k ∈ 1, . . . , |K|
Stage structure:
V1 = {s1, s2, . . . , s|K|}, S1 = {t1, t2, . . . , t|K|}
V2 = {e1, e2, . . . , e|K|}, S2 = {}
L = [〈V1, S1〉, 〈V2, S2〉]
94
Stochastic multiprocessor scheduling (SCSP)
Uncertain capacity requirements
Constraints:
(1) Pr {cumulative(s, e, t, c,m)} ≥ θ
Decision variables:
sk ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
ek ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
Stochastic variables:
ck → capacity requirement of order k, ∀k ∈ 1, . . . , |K|
Stage structure:
V1 = {s1, . . . , s|K|, t1, . . . , t|K|, e1, . . . , e|K|}
S1 = {c1, c2, . . . , c|K|}
L = [〈V1, S1〉]
95
Stochastic multiprocessor scheduling (SCSP)
Uncertain processing time and capacity requirements
Constraints:
(1) Pr {cumulative(s, e, t, c,m)} ≥ θ
Decision variables:
sk ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
ek ∈ {rk, . . . , dk} ∀k ∈ 1, . . . , |K|
Stochastic variables:
tk → processing time of order k, ∀k ∈ 1, . . . , |K|
ck → capacity requirement of order k, ∀k ∈ 1, . . . , |K|
Stage structure:
V1 = {s1, s2, . . . , s|K|}, S1 = {t1, . . . , t|K|, c1, . . . , c|K|}
V2 = {e1, e2, . . . , e|K|}, S2 = {}
L = [〈V1, S1〉, 〈V2, S2〉]
96
Stochastic Plane Landing Scheduling Problem
Uncertain plane delay
Constraints:
(1) ti = xi + di ∀i ∈ P1, . . . , P5
(2) li = xi + fi + di ∀i ∈ P1, . . . , P5
(3) bi = xi + di ∀i ∈ P6, . . . , P14
(4) Pr {alldiff(t1, l3, t5, b6, b7, b8)} ≥ θ4
(5) Pr {alldiff(l1, t2, l4, b9, b10, b11)} ≥ θ5
(6) Pr {alldiff(l2, t3, t4, l5, b12, b13, b14)} ≥ θ6
(7) Pr {l1 ≤ t2} ≥ θ7
(8) l3 ≤ t5
Decision variables:
x1 ∈ {1, . . . , 12}, x2 ∈ {13, 14, 15, 22, 23, 24}, x3 ∈ {1, . . . , 9}, x4 ∈ {7, . . . , 12}, x5 ∈ {16, . . . , 24},
x6 = 18, x7 = 10, x8 = 2, x9 = 15, x10 = 8, x11 = 2, x12 = 13, x13 = 5, x14 = 2,
ti ∈ {1, . . . , 28} ∀i ∈ 1, . . . , 5
li ∈ {1, . . . , 40} ∀i ∈ 1, . . . , 5
bi ∈ {1, . . . , 22} ∀i ∈ 6, . . . , 14
Stochastic variables:
di → delay of plane i, ∀i ∈ 1, . . . , 14
Stage structure:
V1 = {x1, x2, . . . , x14}, S1 = {d1, d2, . . . , d14}
V2 = {t1, . . . , t5, l1, . . . , l5, b6, . . . , b14}, S2 = {}
L = [〈V1, S1〉, 〈V2, S2〉]
97
Global stochastic constraints & Global chance constraints
Can we scale these propagation methods when random variable supports are large?
98
Global stochastic constraints & Global chance constraints
Numerical study
A comprehensive numerical study on several problems
◮ Random stochastic CSPs
◮ Stochastic knapsack (SCOP)
◮ Stochastic plane landing scheduling problem
is presented in (Hnich et al., 2012).
This numerical study demonstrates the effectiveness of global chance constraints in modelling and solving problems of decision making under uncertainty.
It also shows that global chance constraints outperform classical scenario-based deterministic equivalent reformulations such as the one discussed in (Tarim et al., 2006).
99
(α, ϑ)-solutions and Sampled SCSP
22nd International Joint Conference on Artificial Intelligence (IJCAI-11)
100
(α, ϑ)-solutions and Sampled SCSP
[Figure: the policy tree of the earlier SCSP, re-evaluated on sampled scenarios: the exact scenario probabilities 0.25 are replaced by sample estimates 2/3, 0, 1/3, 0, and the constraint checks (e.g. c1: 5·3 + 4·4 ≥ 30, c2: 4·3 = 12; c1: 4·3 + 4·6 ≥ 30, c2: 4·3 = 12) are performed only on the sampled scenarios]
101
Questions
102