Jaehan Koh ( [email protected] ) Dept. of Computer Science Texas A&M University
On the Cost and Benefits of Procrastination: Approximation
Algorithms for Stochastic Combinatorial Optimization Problems
(SODA 2004)
Nicole Immorlica, David Karger, Maria Minkoff, Vahab S. Mirrokni
Jaehan Koh ([email protected])
Dept. of Computer Science, Texas A&M University
Outline
- Introduction
- Preplanning Framework
- Examples
- Results
- Summary
- Homework
Introduction
Scenarios
- Harry is designing a network for the Computer Science Department at Texas A&M University. In designing it, he must make his best guess about future demands on the network and purchase capacity accordingly.
- A mobile robot is navigating around a room. Since information about the environment is unknown, or becomes available too late to be useful to the robot, it may be impossible to modify a solution or improve its value once the actual inputs are revealed.
Planning under uncertainty
Problem data is frequently subject to uncertainty
- It may represent information about the future
- Inputs may evolve over time

On-line model
- Assumes no knowledge of the future

Stochastic modeling of uncertainty
- Given: a probability distribution on potential outcomes
- Goal: minimize the expected cost over all potential outcomes
Approaches
Plan ahead
- The full solution must be specified before we learn the values of the unknown parameters
- Information becomes available too late to be useful

Wait-and-see
- Some decisions can be deferred until the exact inputs are known
- Trade-off: decisions made late may be more expensive
Approaches (Cont’d)
Trade-offs
- Make some purchase/allocation decisions early to reduce cost, while deferring others, at greater expense, to take advantage of additional information
- Applies to problems in which the problem instance is uncertain: min-cost flow, bin packing, vertex cover, shortest path, and the Steiner tree problem
Preplanning framework
Stochastic combinatorial optimization problem
- A ground set of elements e ∈ E
- A (randomly selected) problem instance I, which defines a set of feasible solutions F_I ⊆ 2^E
- We can buy certain elements “in advance” at cost c_e, then sample a problem instance, then buy other elements at “last-minute” cost λ·c_e so as to produce a feasible solution S ∈ F_I for the problem instance
- Goal: choose a subset of elements to buy in advance to minimize the expected total cost
Two types of instance prob. distribution
Bounded support distribution
- Assigns nonzero probability to only a polynomial number of distinct problem instances

Independent distribution
- Each element/constraint of the problem instance is active independently with some probability
Versions
Scenario-based
- Bounded number of possible scenarios
- Explicit probability distribution over problem instances

Independent events model
- The random instance is defined implicitly by an underlying probabilistic process
- The number of possible scenarios can be exponential in the problem size
Problems
Min-cost flow
- Given a source, a sink, and a probability distribution on demand, buy some edges in advance and some after sampling (at greater cost) such that the given amount of demand can be routed from source to sink.

Bin packing
- A collection of items is given, each of which will need to be packed into a bin with some probability. Bins can be purchased in advance at cost 1; after it is determined which items need to be packed, additional bins can be purchased at cost λ > 1. How many bins should be purchased in advance to minimize the expected total cost?
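To make the trade-off concrete, here is a hedged sketch that simplifies to unit-size items, so one bin serves one active item (the probabilities below are illustrative, not from the paper). It computes the distribution of the number of active items by dynamic programming and picks the prepurchase count b minimizing b + λ·E[(N − b)⁺]:

```python
# Sketch under a simplifying assumption: unit-size items, one bin per active item.
# Each item i appears independently with probability p[i]; N = number active.
# Pre-buy b bins at cost 1 each, then buy the shortfall (N - b)^+ at cost lam each.

def count_distribution(p):
    """DP over items: dist[n] = Pr[exactly n items are active]."""
    dist = [1.0]
    for pi in p:
        new = [0.0] * (len(dist) + 1)
        for n, q in enumerate(dist):
            new[n] += q * (1 - pi)      # item stays inactive
            new[n + 1] += q * pi        # item becomes active
        dist = new
    return dist

def expected_cost(b, dist, lam):
    return b + lam * sum(q * max(n - b, 0) for n, q in enumerate(dist))

p = [0.9, 0.9, 0.1]   # illustrative activation probabilities
lam = 3.0
dist = count_distribution(p)
best_b = min(range(len(p) + 1), key=lambda b: expected_cost(b, dist, lam))
```

With these numbers the best choice is b = 2, matching the threshold intuition: Pr[N ≥ 1] and Pr[N ≥ 2] both exceed 1/λ = 1/3, while Pr[N ≥ 3] = 0.081 does not.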
Problems (Cont’d)
Vertex cover
- A graph is given, along with a probability distribution over sets of edges that may need to be covered. Vertices can be purchased in advance at cost 1; after it is determined which edges need to be covered, additional vertices can be purchased at cost λ. Which vertices should be purchased in advance?
Problems (Cont’d)
Cheap path
- Given a graph and a randomly selected pair of vertices (or one fixed vertex and one random vertex), connect them by a path. We can purchase edge e at cost c_e before the pair is known, or at cost λ·c_e after. We wish to minimize the expected total edge cost.
Problems (Cont’d)
Steiner tree
- A graph is given, along with a probability distribution over sets of terminals that need to be connected by a Steiner tree. Edge e can be purchased at cost c_e in advance, or at cost λ·c_e after the set of terminals is known.
Preplanning combinatorial optimization problem
A ground set of elements e ∈ E, a probability distribution on instances {I}, a cost function c: E → R, and a penalty factor λ ≥ 1. Each instance I has a corresponding set of feasible solutions F_I ⊆ 2^E associated with it.
Suppose a set of elements A ⊆ E is purchased before sampling the probability distribution. The posterior cost function c_A is defined by c_A(e) = 0 for e ∈ A and c_A(e) = λ·c_e otherwise.
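The posterior cost function can be stated in a few lines of code (a sketch; the dictionary-based representation is my own, not the paper's):

```python
# Sketch of the posterior cost c_A: elements already purchased in advance
# cost nothing later; all others must be bought at the inflated
# "last-minute" price lambda * c_e.

def posterior_cost(A: set, c: dict, lam: float):
    """Return c_A as a function of an element e."""
    return lambda e: 0.0 if e in A else lam * c[e]

c = {"e1": 3.0, "e2": 5.0}
c_A = posterior_cost(A={"e1"}, c=c, lam=2.0)
# c_A("e1") is 0.0 (prepurchased); c_A("e2") is 10.0 (pay 2 * 5 later).
```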
Preplanning combinatorial optimization problem (Cont’d)
The objective of a PCO problem: choose a subset of elements A to be purchased in advance so as to minimize the total expected cost of a feasible solution, c(A) + E_I[min_{S ∈ F_I} c_A(S)], over a random choice of an instance I.
The Threshold Property
Theorem 1. An element should be purchased in advance if and only if the probability that it is used in the solution for a randomly chosen instance exceeds 1/λ.
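The theorem reduces the prepurchase decision for a single element to a one-line comparison. A minimal sketch (my own illustration, not code from the paper):

```python
# Threshold rule of Theorem 1: buy element e in advance iff
# Pr[e is used] > 1/lambda. Buying in advance costs c_e; deferring costs
# lambda * c_e whenever e turns out to be needed, so the expected
# deferred cost is lambda * c_e * Pr[e used].

def buy_in_advance(c_e: float, p_used: float, lam: float) -> bool:
    """Return True iff prepurchasing e is cheaper in expectation."""
    advance_cost = c_e
    expected_deferred_cost = lam * c_e * p_used
    return expected_deferred_cost > advance_cost  # equivalent to p_used > 1/lam

# Example: lambda = 2, so the break-even usage probability is 1/2.
# buy_in_advance(1.0, 0.6, 2.0) -> True; buy_in_advance(1.0, 0.4, 2.0) -> False.
```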
Example: Min-cost Flow
- We wish to provide capacity on a network sufficient to carry a random amount of flow demand D from a source s to a sink t.
- We have the option to pre-install some amount of capacity in advance at some cost per unit.
- We rent additional capacity once the demands become known, but at a cost larger by a factor of λ or more per unit.
- The sum of the capacity installed in advance and the capacity rented must satisfy a given upper bound on total capacity for each edge.
- Goal: over a given probability distribution on demands, minimize the expected cost of installing sufficient capacity so that the network satisfies the demand.
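For a single edge this reduces to a newsvendor-style calculation. The sketch below (my simplification, not the paper's algorithm) picks the amount of capacity q to pre-install so as to minimize q + λ·E[(D − q)⁺], using the demand distribution Pr[D=0] = 1/4, Pr[D=1] = 1/4, Pr[D=2] = 1/2 from the example that follows:

```python
# One-edge simplification: pre-install q units at unit cost 1, then rent
# any shortfall (D - q)^+ at unit cost lam. Minimize the expected total.

def expected_cost(q, demand_dist, lam):
    return q + lam * sum(p * max(d - q, 0) for d, p in demand_dist.items())

dist = {0: 0.25, 1: 0.25, 2: 0.5}
lam = 2.0
best_q = min(range(3), key=lambda q: expected_cost(q, dist, lam))
# best_q is 1, with expected cost 2.0 (q = 2 ties at cost 2.0).
```

This matches the threshold property: Pr[D ≥ 1] = 3/4 > 1/2 = 1/λ, so the first unit is worth pre-installing, while Pr[D = 2] = 1/2 = 1/λ makes the second unit a matter of indifference.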
Example: Min-cost Flow (Cont’d)
Suppose
- Cap(s-a) = Cap(a-t) = Cap(s-b) = Cap(a-b) = Cap(b-t) = 1
- λ = 2
- Pr[D=0] = 1/4, Pr[D=1] = 1/4, Pr[D=2] = 1/2

[Figure: example network with source s, sink t, and intermediate nodes a and b]
Example: Min-cost Flow (Cont’d)
Suppose
- Cap(s-a) = Cap(a-t) = Cap(s-b) = Cap(a-b) = Cap(b-t) = 1
- λ = 2
- Pr[D=0] = 5/12, Pr[D=1] = 1/4, Pr[D=2] = 1/3

[Figure: the same example network with source s, sink t, and intermediate nodes a and b]
Example: Vertex cover
Classical vertex cover problem
- Given: a graph G = (V, E)
- Output: a subset of vertices such that each edge has at least one endpoint in the set
- Goal: minimize the cardinality of the vertex subset

Stochastic vertex cover
- A random subset of edges is present
- Vertices picked in advance cost 1
- Additional vertices can be purchased at cost λ > 1
- Goal: minimize the expected cost of a vertex cover
- Time-information trade-off
Our techniques
Merger of linear programs
- An easy way to handle scenario-based stochastic problems with LP formulations

Threshold property
- Identify elements likely to be needed and buy them in advance

Probability aggregation
- Cluster probability mass to get nearly deterministic subproblems
- Justifies buying something in advance
Vertex Cover with preplanning
Given
- Graph G = (V, E)
- Random instance: a subset of edges to be covered
  - Scenario-based: a polynomial number of possible edge sets
  - Independent version: each e ∈ E present independently with probability p
- Vertices cost 1 in advance, λ after the edge set is sampled
- Goal: select a subset A of vertices to buy in advance so as to minimize the expected cost of a vertex cover
Idea 1: LP merger
Deterministic problem has an LP formulation
- Write a separate LP for each scenario of the stochastic problem
- Take the probability-weighted linear combination of the objective functions

Application to scenario-based vertex cover
- VC has an LP relaxation
- Combine the LP relaxations of the scenarios
- Round in the standard way
- Result: 4-approximation
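A sketch of the merger for scenario-based vertex cover (the representation and variable names are my own). Advance variables x_v cost 1; scenario variables y_{v,s} cost λ weighted by the scenario probability p_s; each edge (u, v) present in scenario s contributes the covering constraint x_u + y_{u,s} + x_v + y_{v,s} ≥ 1:

```python
def merged_lp(vertices, scenarios, lam):
    """Build the merged LP symbolically.
    scenarios: list of (probability, edge list) pairs.
    Returns (objective, constraints): objective maps variable name ->
    coefficient; each constraint is a list of variables whose sum must
    be >= 1 in the LP relaxation."""
    objective = {f"x_{v}": 1.0 for v in vertices}      # advance purchases cost 1
    constraints = []
    for s, (p_s, edges) in enumerate(scenarios):
        for v in vertices:
            objective[f"y_{v}_{s}"] = lam * p_s        # probability-weighted term
        for u, v in edges:
            constraints.append([f"x_{u}", f"y_{u}_{s}", f"x_{v}", f"y_{v}_{s}"])
    return objective, constraints

# Two scenarios: one edge (a, b) with probability 1/2, an empty graph otherwise.
obj, cons = merged_lp(["a", "b"], [(0.5, [("a", "b")]), (0.5, [])], lam=2.0)
```

Rounding a fractional optimum by buying every variable with value ≥ 1/4 is feasible, since each covering constraint has four variables and at least one of them must be ≥ 1/4; this loses at most a factor of 4, which is where the 4-approximation comes from.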
Idea 2: Threshold property
Buy element e in advance iff Pr[need e] ≥ 1/λ
- Advance cost: c_e; expected buy-later cost: λ·c_e·Pr[need e]

Application to independent-events vertex cover
- Threshold degree k = 1/(λp)
- A vertex with degree ≥ k is adjacent to an active edge with probability ≥ 1/λ
- Not worth buying vertices with degree < k in advance
- Idea: purchase a subset of high-degree vertices in advance
k-matching
A k-matching is a subset of edges that induces degree at most k on each vertex.
A vertex v ∈ V is tight if its degree in the matching is exactly k.
A maximal k-matching can be constructed greedily.
Algorithm for k-matching
- Assume λ ≥ 4
- Set k = 1/(λp)
- Construct some maximal k-matching M_k
- Purchase in advance the set of tight vertices A_t

Solution cost
- Prepurchase cost: |A_t|
- “Wait-and-see” cost: λ times the expected size of a minimum vertex cover of the active edges not already covered by A_t
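The greedy construction can be sketched as a single scan over the edges (a sketch under my reading of the slide, not code from the paper):

```python
def maximal_k_matching(edges, k):
    """Greedy maximal k-matching: keep an edge iff both endpoints still
    have degree < k. Every rejected edge therefore has a tight endpoint,
    which is what makes the result maximal."""
    deg, matching = {}, []
    for u, v in edges:
        if deg.get(u, 0) < k and deg.get(v, 0) < k:
            matching.append((u, v))
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
    tight = {w for w, d in deg.items() if d == k}  # the set A_t to prepurchase
    return matching, tight

# A star around c with k = 2: the third edge is rejected and c becomes tight.
matching, tight = maximal_k_matching([("c", "a"), ("c", "b"), ("c", "d")], k=2)
# matching keeps the first two edges; tight == {"c"}.
```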
Bounding 2nd stage cost
Claim: once all the tight vertices have been purchased, it is optimal to buy nothing else in advance
- Vertices in A_t cover all edges not in the k-matching: every such edge has at least one tight endpoint (otherwise M_k would not be maximal)
- Vertices not in A_t have degree < k
- The expected cost of a vertex cover in the subgraph induced by V \ A_t is at most OPT
Bounding prepurchase cost
Restrict attention to the instance induced by M_k
- Costs no more than OPT

Intuition
- A vertex of degree k = 1/(λp) is likely to be adjacent to an active edge
- Active edges are not likely to be clustered together
- One can show that a tight vertex is useful for the vertex cover with sufficiently high probability
- Hence |A_t| = O(OPT)
Steiner network preplanning
Given
- Undirected graph G = (V, E) with edge costs c_e ≥ 0
- Probability p_i of node i becoming active
- Penalty λ ≥ 1

Goal
- Buy a subset of edges in advance so as to minimize the expected cost of a Steiner tree over the active nodes

Approach: probability aggregation
- Cluster vertices into groups of about 1/λ probability mass
- Purchase in advance the edges of an MST over the clusters
CLUSTER algorithm
The Steiner tree for the ultrametric case
- An assignment of edge weights satisfying ĉ_uv ≤ max(ĉ_u, ĉ_v)

The basic idea
- Cluster nodes into components, each containing clients of total probability mass about 1/λ

Lemma
- Algorithm CLUSTER produces an MST with the specified properties
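A much-simplified sketch of the aggregation step (clients are swept in a fixed order rather than along the paper's MST; names and probabilities are my own): close a cluster as soon as its probability mass reaches 1/λ, so each cluster contains an active client with constant probability and its connecting edges are worth buying in advance.

```python
def aggregate(probs, lam):
    """Group clients (by index) into clusters of probability mass >= 1/lam.
    A trailing cluster of smaller mass may remain at the end."""
    clusters, current, mass = [], [], 0.0
    for i, p in enumerate(probs):
        current.append(i)
        mass += p
        if mass >= 1.0 / lam:        # cluster is "heavy enough" -- close it
            clusters.append(current)
            current, mass = [], 0.0
    if current:                      # leftover clients with mass < 1/lam
        clusters.append(current)
    return clusters

# lam = 2, threshold 1/2: four clients of mass 0.25 form two clusters of two,
# and a final client of mass 0.2 is left over as a light cluster.
clusters = aggregate([0.25, 0.25, 0.25, 0.25, 0.2], lam=2.0)
```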
CLUSTER algorithm (Cont’d)
A hub tree
[Figure: a hub tree with hub nodes h; edges labeled < T and > T]
Probability Aggregation
[Figure: nodes of probability p aggregated into clusters of total probability mass 1/λ]
Results

| Problem | Stochastic elements | Approx. guarantee | Notes |
| --- | --- | --- | --- |
| Min-cost s-t flow | Demand | 1 | Via LP |
| Bin packing | Items | Asymptotic FPAS | Using an FPAS for the deterministic version |
| Vertex cover | Edges | 4 / 4.37 | Scenario-based / independent events |
| Shortest path | Start node / start & end | O(1) / O(1) | ConnFacLoc / Rent-or-Buy |
| Steiner tree | Terminals | 3 / O(log n) | Ultrametrics / general case |
Summary
- Stochastic combinatorial optimization problems in a novel “preplanning” framework
- Study of the time-information trade-off in problems with uncertain inputs
- Algorithms to approximately optimize the choice of what to purchase in advance and what to defer

Open questions
- Combinatorial algorithms for stochastic min-cost flow
- Metric Steiner tree preplanning
- Applying the preplanning scheme to other problems: scheduling, multicommodity network flow, network design
Homework
1. (40 pts) In the paper, the authors argue that we can reduce the preplanning version of an NP-hard problem to solving a preplanning instance of another optimization problem that has a polynomial-time algorithm. Explain in your own words why this is true and give at least one example.
2. (a) (30 pts) Formulate the Minimum Steiner Tree problem as an optimization problem. (b) (30 pts) Reformulate the Minimum Steiner Tree problem as a PCO (Preplanning Combinatorial Optimization) problem. Explain the difference between (a) and (b).
Thank you … Any questions?