
Linear Programming Projection Theory: Part 2

Chapter 2 (57-80)

University of Chicago
Booth School of Business

Kipp Martin

September 26, 2017

1

Outline

Duality Theory

Dirty Variables

From Here to Infinity

Complementary Slackness

Sensitivity Analysis

Degeneracy

Consistency Testing

Optimal Value Function

Summary: Key Results

2

Duality Theory

We are solving the linear programming problem using projection.

min{c>x | Ax ≥ b}

Rewrite as the system:

z0 − c>x ≥ 0

Ax ≥ b

Project out the x variables and get

z0 ≥ dk , k = 1, . . . , q

0 ≥ dk , k = q + 1, . . . , r
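The projection step above can be sketched in a few lines of pure Python. The `eliminate` helper is my own illustration (not from the text), using exact rational arithmetic; the LP data is the small example min{x1 + x2 | x1 + x2 ≥ 1, x1, x2 ≥ 0} solved by projection later in these slides:

```python
# Fourier-Motzkin elimination sketch: a constraint is (coeffs, rhs),
# meaning sum_j coeffs[j] * x[j] >= rhs.
from fractions import Fraction as F

def eliminate(constraints, j):
    """Project out variable j from a system of >= constraints."""
    pos = [c for c in constraints if c[0][j] > 0]
    neg = [c for c in constraints if c[0][j] < 0]
    zero = [c for c in constraints if c[0][j] == 0]
    out = list(zero)
    for ap, bp in pos:
        for an, bn in neg:
            lam, mu = -an[j], ap[j]          # positive scales so x_j cancels
            coeffs = [lam * p + mu * q for p, q in zip(ap, an)]
            out.append((coeffs, lam * bp + mu * bn))
    return out

# min x1 + x2  s.t.  x1 + x2 >= 1, x1 >= 0, x2 >= 0, written as
# z0 - x1 - x2 >= 0 plus the constraints, over variables (z0, x1, x2):
system = [([F(1), F(-1), F(-1)], F(0)),   # z0 - x1 - x2 >= 0
          ([F(0), F(1), F(1)], F(1)),     # x1 + x2 >= 1
          ([F(0), F(1), F(0)], F(0)),     # x1 >= 0
          ([F(0), F(0), F(1)], F(0))]     # x2 >= 0
for j in (1, 2):                          # project out x1, then x2
    system = eliminate(system, j)
# the surviving rows only involve z0: each reads z0 >= d_k
bounds = [rhs / c[0] for c, rhs in system if c[0] > 0]
print(max(bounds))                        # optimal value z0 = 1
```

Running this leaves exactly the two rows z0 ≥ 1 and z0 ≥ 0 derived by hand on a later slide.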

3

Duality Theory

Since

z0 ≥ dk , k = 1, . . . , q

0 ≥ dk , k = q + 1, . . . , r

the optimal value of the objective function is

z0 = max{dk | k = 1, ..., q}

Key Idea: Any of the dk provides a lower bound on the optimal objective function value, since the optimal value is the maximum over all the dk.

4

Duality Theory

Observation: We projected out the x variables from the system

z0 − c>x ≥ 0

Ax ≥ b

and got

z0 ≥ dk , k = 1, . . . , q

0 ≥ dk , k = q + 1, . . . , r

If we let u be the dual multipliers on Ax ≥ b and u0 the multiplier on z0 − c>x ≥ 0, we have (uk)>b = dk and

(uk)>A − uk0 c = 0

Without loss of generality we take uk0 = 1.

5

Duality Theory

Let me say this again because it is so important. Each of the

z0 ≥ dk , k = 1, . . . , q

constraints in the projection result from aggregating c>x with Ax ≥ b, where uk is a vector of multipliers on Ax ≥ b so that

(uk)>A = c

6

Duality Theory

Thus any nonnegative uk with A>uk = c provides a lower bound of

dk = b>uk

on the optimal objective function value. This is known as weak duality.

Lemma 2.28 (Weak Duality) If x is a solution to the system Ax ≥ b and u ≥ 0 is a solution to A>u = c, then c>x ≥ b>u.
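A quick numerical check of the lemma. The data below is a tiny instance of my own choosing, not from the text:

```python
# Weak duality check: min c^T x s.t. Ax >= b, with a primal-feasible x
# and a dual-feasible u >= 0 satisfying A^T u = c.
A = [[1, 1],
     [1, 0],
     [0, 1]]
b = [1, 0, 0]
c = [1, 1]

def dot(v, w): return sum(p * q for p, q in zip(v, w))

x = [0.25, 0.75]                     # primal feasible: Ax >= b
u = [1, 0, 0]                        # dual feasible: A^T u = c, u >= 0
assert all(dot(row, x) >= bi for row, bi in zip(A, b))
assert all(dot([A[i][j] for i in range(3)], u) == c[j] for j in range(2))
print(dot(c, x) >= dot(b, u))        # weak duality: True
```

Here both values happen to equal 1, so this (x, u) pair is in fact optimal, previewing strong duality.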

7

Duality Theory

Since the minimum value of the linear program is given by z0 = max{dk | k = 1, . . . , q}, it is necessary to find a nonnegative u such that A>u = c and b>u is as large as possible.

Finding the largest value of b>u is also a linear program:

max {b>u |A>u = c , u ≥ 0 }.

The linear program min{c>x | Ax ≥ b} is the primal linear program.

The linear program max{b>u | A>u = c, u ≥ 0} that generates the multipliers for the lower bounds is the dual linear program.

The x variables are the primal variables and the u variables are the dual variables or dual multipliers.

8

Duality Theory

Theorem 2.29 (Strong Duality) If there is an optimal solution to the primal linear program

min {c>x | Ax ≥ b},

then there is an optimal solution to the dual linear program max {b>u | A>u = c, u ≥ 0} and the optimal objective function values are equal.

Corollary 2.30 If the primal linear program min{c>x | Ax ≥ b} is unbounded, then the dual linear program max{b>u | A>u = c, u ≥ 0} is infeasible.

9

Duality Theory

If the primal problem is

min c>x

Ax ≥ b

the dual problem is

max b>u

A>u = c

u ≥ 0

What is the dual of the dual?

10

Duality Theory

If the primal is

min c>x

Ax ≥ b

x ≥ 0

What is the dual?

What is the dual of the dual?

11

Duality Theory

If the primal is

min c>x

Ax = b

x ≥ 0

What is the dual?

What is the dual of the dual?

12

Duality Theory

DO NOT MEMORIZE PRIMAL DUAL PAIRS!

DO NOT MEMORIZE PRIMAL DUAL PAIRS!

DO NOT MEMORIZE PRIMAL DUAL PAIRS!

DO NOT MEMORIZE PRIMAL DUAL PAIRS!

DO NOT MEMORIZE PRIMAL DUAL PAIRS!

DO NOT MEMORIZE PRIMAL DUAL PAIRS!

DO NOT MEMORIZE PRIMAL DUAL PAIRS!

You get the idea.

13

Duality Theory

Table: Primal-Dual Relationships

Primal                            Dual
inequality primal constraint      nonnegative dual variable
equality primal constraint        unrestricted dual variable
nonnegative primal variable       inequality dual constraint
unrestricted primal variable      equality dual constraint
min primal objective function     max dual objective function
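The table can be applied mechanically rather than memorized. Here is a small sketch of a dualizer encoding the correspondence rules; the `dualize` helper and its row/sign encoding are my own illustration, and the sample data is the first two rows of the LP used later in the sensitivity slides:

```python
# Mechanical dualizer for min c^T x.  Rows are ('>=', a, b) or ('=', a, b);
# variable signs are 'free' or '>=0'.
def dualize(c, rows, var_signs):
    """Return (objective, dual rows, dual signs) for the dual max b^T u."""
    m, n = len(rows), len(c)
    # one dual variable per primal row: inequality row -> u_i >= 0
    dual_signs = ['>=0' if r[0] == '>=' else 'free' for r in rows]
    # one dual constraint per primal variable: column of A versus c_j
    dual_rows = []
    for j in range(n):
        col = [rows[i][1][j] for i in range(m)]
        rel = '<=' if var_signs[j] == '>=0' else '='   # x_j >= 0 -> inequality
        dual_rows.append((rel, col, c[j]))
    objective = [rows[i][2] for i in range(m)]          # maximize b^T u
    return objective, dual_rows, dual_signs

# primal: min{2 x1 - 3 x2 | x2 >= 2, 0.5 x1 - x2 >= -8}, x free
obj, drows, dsigns = dualize([2, -3],
                             [('>=', [0, 1], 2), ('>=', [0.5, -1], -8)],
                             ['free', 'free'])
print(dsigns)   # ['>=0', '>=0']: inequality primal rows give u >= 0
print(drows)    # equality dual constraints, since the x variables are free
```

Applying `dualize` to the dual (after rewriting it in min form) recovers the primal, which is the answer to the "dual of the dual" questions that follow.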

14

Duality Theory

Proposition 2.31 If the linear program min{c>x | Ax ≥ b} has an optimal solution, then applying projection to the system z0 − c>x ≥ 0, Ax ≥ b gives all of the extreme points of the dual polyhedron {u | A>u = c, u ≥ 0} and all of the extreme rays of the dual recession cone {u | A>u = 0, u ≥ 0}.

You are going to basically prove this for homework.

15

Where We Are Headed

We want to solve problems with special structure! Real problems have special structure! One such structure is

min c>x + f >y

s.t. Ax + By ≥ b

Later we will take advantage of special structure in the A matrix and project out the x variables and solve a problem in only the y variables.

16

Where We Are Headed

z0 − c>x − f >y ≥ 0

Ax + By ≥ b

Rewrite this as

z0 − c>x ≥ f >y

Ax ≥ b − By

Now project out x

z0 ≥ f >y + (uk)>(b − By), k = 1, . . . , q

0 ≥ (uk)>(b − By), k = q + 1, . . . , r

17

Where We Are Headed

Here is the new formulation

z0 ≥ f >y + (uk)>(b − By), k = 1, . . . , q

0 ≥ (uk)>(b − By), k = q + 1, . . . , r

Any problems with this?

What is the fix?

See the homework for a premonition.

18

Where We Are Headed

We will also use duality to generate bounds in enumeration algorithms.

19

Dirty Variables (See pages 43-46 in the text.)

Consider the system

x1 − x2 ≥ 1

x1 + x2 ≥ 1

- Can we eliminate x1 using Fourier-Motzkin elimination?

- What is the projection of the polyhedron onto the x2 space?

20

Dirty Variables (See pages 43-46 in the text.)

Recall

H+(k) := {i ∈ I | ak(i) > 0}
H−(k) := {i ∈ I | ak(i) < 0}
H0(k) := {i ∈ I | ak(i) = 0}

(1)

Variable k is a dirty variable if H+(k) is empty and H−(k) is not empty, or H+(k) is not empty and H−(k) is empty.
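The definition can be checked mechanically on the example two slides back. The `dirty` helper is my own sketch:

```python
# Column k is dirty when all of its nonzero coefficients share one sign,
# so Fourier-Motzkin has no (+,-) row pairs to combine for that variable.
A = [[1, -1],
     [1,  1]]     # x1 - x2 >= 1,  x1 + x2 >= 1

def dirty(A, k):
    Hp = [i for i, row in enumerate(A) if row[k] > 0]
    Hm = [i for i, row in enumerate(A) if row[k] < 0]
    return (len(Hp) == 0) != (len(Hm) == 0)   # exactly one side empty

print(dirty(A, 0))   # True: x1 is dirty; eliminating it adds no constraints
print(dirty(A, 1))   # False: x2 is clean
```

Since x1 is dirty in both rows, projecting it out discards both constraints, and the projection onto the x2 space is all of R, matching the questions on the previous slide.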

21

Dirty Variables

In general, what is the projection of the system

x1 + ai2x2 + · · ·+ ainxn ≥ bi , i = 1, . . . ,m,

into Rn−1?

What is the projection of the system

x1 + ai2x2 + · · ·+ ainxn ≥ bi , i = 1, . . . ,m1

ai2x2 + · · ·+ ainxn ≥ bi , i = m1 + 1, . . . ,m

into Rn−1?

22

Dirty Variables

If, at some point in the projection process, the system has the form

x1 + ai2x2 + · · · + ainxn ≥ bi , i = 1, . . . ,m,

there are two implications:

- The projection of the system into Rn−1 is all of Rn−1 (the entire subspace)

- The only solution to the corresponding u>A = 0, u ≥ 0 is u = 0

The above holds if and only if there is a strictly positive (negative) coefficient in every row at some point in the projection process.

23

Dirty Variables

If there is a finite number of constraints, dirty variables are not too bad: we just drop the constraints with the dirty variables.

However, if there are an infinite number of constraints, dirty variables hide dirty secrets and can be obscene.

24

From Here to Infinity

Theorem: If

Γ = {x : a1(i)x1 + a2(i)x2 + · · ·+ an(i)xn ≥ b(i) for i ∈ I = {1, 2, . . . ,m}}

then Fourier-Motzkin elimination gives

P(Γ; x1) := {(x2, x3, . . . , xn) ∈ Rn−1 : ∃x1 ∈ R s.t. (x1, x2, . . . , xn) ∈ Γ}. (2)

even when x1 is dirty.

What happens when I is not a finite index set? That is, when Γ is a closed convex set, but not necessarily a polyhedron.

25

From Here to Infinity

26

From Here to Infinity

Consider the system

ix1 + x2/i ≥ 1, i = 1, . . . ,N

x1 ≥ −1
x2 ≥ −1

(3)

Project out x2 (it is a dirty variable) and get

x1 ≥ −1

27

From Here to Infinity

28

The Road to Infinity

Now what about (note the ∞)

ix1 + x2/i ≥ 1, i = 1, . . . ,∞

x1 ≥ −1
x2 ≥ 0

(4)

Project out x2 (it is a dirty variable) and get

x1 ≥ −1

Is this correct?

29

From Here to Infinity

[Figure: the system ix1 + x2/i ≥ 1 (i = 1, . . . ,∞), x1 ≥ −1, x2 ≥ 0, and its projection onto the x1-axis, labeled x1 ≥ 0 in the figure.]

Typo: x1 > 0 not x1 ≥ 0.

30

From Here to Infinity

31

From Here to Infinity

Brief Review: Solve the finite linear programming problem using projection.

min{c>x | Ax ≥ b}

Rewrite as the system:

z0 − c>x ≥ 0

Ax ≥ b

Project out the x variables and get

z0 ≥ dk , k = 1, . . . , q

0 ≥ dk , k = q + 1, . . . , r

the optimal value of the objective function is

v(LP) = v(LPD) = max{dk | k = 1, . . . , q}

32

From Here to Infinity

Fourier-Motzkin algorithm modification:

Do not drop dirty variables.

Pass over them and eliminate only the clean variables.

For a formal statement see pages 6-7 in:

http://www.ams.jhu.edu/~abasu9/papers/fm-paper.pdf

33

From Here to Infinity

34

From Here to Infinity

Consider

0 ≥ b(h), h ∈ I1

aℓ(h)xℓ + · · · + an(h)xn ≥ b(h), h ∈ I2

z ≥ b(h), h ∈ I3

z + aℓ(h)xℓ + · · · + an(h)xn ≥ b(h), h ∈ I4

(5)

Assume (z, xℓ, . . . , xn) is a feasible solution to the projected system above.

35

From Here to Infinity

Observation: For the feasible (z, xℓ, . . . , xn), we must have

z ≥ sup{b(h) − aℓ(h)xℓ − · · · − an(h)xn : h ∈ I4}

and

z ≥ sup{b(h) : h ∈ I3}

36

From Here to Infinity

Lemma: If z is an optimal solution value to the linear program then

z ≥ lim_{δ→∞} sup{ b(h) − δ ∑_{k=ℓ}^{n} |ak(h)| : h ∈ I4 }

Proof: the key idea is that if (xℓ, . . . , xn) is a feasible point in the projected space, then the point with components

xk = δ if xk > 0, xk = −δ if xk < 0, k = ℓ, . . . , n

where

δ = max{|xk| : k = ℓ, . . . , n}

is also a feasible point in the projected space.

37

From Here to Infinity

Theorem: If (SILP) is feasible, then the optimal solution value is

v(SILP) = max{S, L}

where

S = sup{b(h) : h ∈ I3}

and

L = lim_{δ→∞} sup{ b(h) − δ ∑_{k=ℓ}^{n} |ak(h)| : h ∈ I4 }

Corollary: When the cardinality of I is finite, L = −∞ and v(SILP) = S.

38

From Here to Infinity

39

Complementary Slackness

Motivation: Most algorithms work by moving from point to point until a set of optimality conditions is satisfied.

Generic Algorithm:

Initialization: Start with a point that satisfies a subset of the optimality conditions.

Iterative Step: Move to a better point.

Termination: Stop when you have satisfied (to numerical tolerances) the optimality conditions.

40

Complementary Slackness

Theorem 2.33 If x is a feasible solution to the primal problem min {c>x | Ax ≥ b} and u is a feasible solution to the dual problem max {b>u | A>u = c, u ≥ 0}, then x, u are primal-dual optimal if and only if

(b − Ax)>u = 0 (primal complementary slackness)

Corollary 2.34 If

Ax ≥ b (6)

A>u = c u ≥ 0 (7)

(b − Ax)>u = 0 (8)

then x is optimal to the primal problem min {c>x | Ax ≥ b} and u is optimal to the dual problem max {b>u | A>u = c, u ≥ 0}.
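Corollary 2.34 can be checked numerically on the LP solved later in the sensitivity slides (x = (4, 10), objective −22). The dual values below are the LINDO dual prices with the sign flipped to fit the u ≥ 0 convention used here; that sign mapping is my interpretation, so treat it as an assumption:

```python
# Conditions (6)-(8) certify optimality of x for min{c^T x | Ax >= b}.
A = [[0, 1], [0.5, -1], [-0.5, 1], [1, -1]]
b = [2, -8, -3, -6]
c = [2, -3]
x = [4, 10]
u = [0, 2, 0, 1]    # dual values (LINDO prints -2 and -1 on rows 3 and 5)

def dot(v, w): return sum(p * q for p, q in zip(v, w))

slack = [dot(row, x) - bi for row, bi in zip(A, b)]
assert min(slack) >= 0 and min(u) >= 0                  # (6) and u >= 0
assert all(dot([A[i][j] for i in range(4)], u) == c[j]  # (7): A^T u = c
           for j in range(2))
print(dot(slack, u))    # (8): 0.0, so complementary slackness holds
```

Note also b>u = −22 = c>x, so the pair exhibits strong duality directly.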

41

Complementary Slackness

Condition (8) is

(b − Ax)>u = 0

What is the economic interpretation?

What is the interpretation in terms of projection?

42

Complementary Slackness

Theorem 2.35 (Strict Complementary Slackness) If there is an optimal solution to the linear program

min {c>x | (ai)>x ≥ bi , i = 1, . . . ,m},

then there is an optimal primal-dual pair (x, u) such that

(bi − (ai)>x) < 0 ⇒ ui = 0 and (bi − (ai)>x) = 0 ⇒ ui > 0, i = 1, . . . ,m. (9)

Corollary 2.37 If x, u is an optimal primal-dual pair for the linear program min {c>x | Ax ≥ b}, but is not strictly complementary, then there are either alternative primal optima or alternative dual optima.

This is not good. More on this later.

43

Complementary Slackness

The linear program is min {x1 + x2 | x1 + x2 ≥ 1, x1, x2 ≥ 0}. Solve the linear program using projection.

z0 − x1 − x2 ≥ 0 (E0)
x1 + x2 ≥ 1 (E1)
x1 ≥ 0 (E2)
x2 ≥ 0 (E3)

⇒

z0 ≥ 1 (E0) + (E1)
z0 ≥ 0 (E0) + (E2) + (E3)

44

Complementary Slackness

Proof Motivation:

Idea One: It is sufficient to show strict complementary slackness must hold for the first constraint.

Why is showing the result for one constraint sufficient?

Idea Two: Apply Corollary 2.21 on page 52 of the text.

45

Complementary Slackness

Corollary (2.21)

If there is no solution to the system

A1x > b1, A2x ≥ b2

then there are nonnegative u1, u2 such that

(A1)>u1 + (A2)>u2 = 0, (b1)>u1 + (b2)>u2 ≥ 0, u1 ≠ 0 (10)

has a solution, or

(A1)>u1 + (A2)>u2 = 0, (b1)>u1 + (b2)>u2 > 0, (11)

has a solution. Conversely, if there is no solution to either (10) or (11), there is a solution to A1x > b1, A2x ≥ b2.

46

Complementary Slackness

The complementary slackness results of this section are stated in terms of the symmetric primal-dual pair.

If

Ax ≥ b, x ≥ 0 (12)

A>u ≤ c , u ≥ 0 (13)

(b − Ax)>u = 0 (14)

(c − A>u)>x = 0 (15)

then x is optimal to the primal problem min {c>x | Ax ≥ b, x ≥ 0} and u is optimal to the dual problem max {b>u | A>u ≤ c, u ≥ 0}.

47

Complementary Slackness

Conditions (12) - (15) are called optimality conditions.

- Condition (12) – primal feasibility

- Condition (13) – dual feasibility

- Conditions (14)-(15) – complementary slackness

Simplex Algorithm: Enforce conditions (12), (14), and (15) and iterate to satisfy (13), then stop.

Nonlinear Programming: Karush-Kuhn-Tucker conditions generalize this to nonlinear programming.

Constructing a primal-dual pair with equal objective function values is a very useful proof technique.

48

Sensitivity Analysis

People are interested not only in primal solutions, but also in dual information.

What is the practical significance (or utility) of knowing the optimal values of the dual variables?

Solvers not only give the optimal dual information, but they also provide range analysis.

Let’s look at some actual solver output.

49

Sensitivity Analysis

Consider the linear program we have been working with

MIN 2 X1 - 3 X2

SUBJECT TO

2) X2 >= 2

3) 0.5 X1 - X2 >= - 8

4) - 0.5 X1 + X2 >= - 3

5) X1 - X2 >= - 6

END

50

Sensitivity Analysis (LINDO/LINGO)

OBJECTIVE FUNCTION VALUE

1) -22.00000

VARIABLE VALUE REDUCED COST

X1 4.000000 0.000000

X2 10.000000 0.000000

ROW SLACK OR SURPLUS DUAL PRICES

2) 8.000000 0.000000

3) 0.000000 -2.000000

4) 11.000000 0.000000

5) 0.000000 -1.000000

51

Sensitivity Analysis (LINDO/LINGO)

Terminology: You will see

- Dual price

- Dual variable/value

- Shadow price

52

Sensitivity Analysis (LINDO/LINGO)

RANGES IN WHICH THE BASIS IS UNCHANGED:

OBJ COEFFICIENT RANGES

VARIABLE CURRENT ALLOWABLE ALLOWABLE

COEF INCREASE DECREASE

X1 2.000000 1.000000 0.500000

X2 -3.000000 1.000000 1.000000

RIGHTHAND SIDE RANGES

ROW CURRENT ALLOWABLE ALLOWABLE

RHS INCREASE DECREASE

2 2.000000 8.000000 INFINITY

3 -8.000000 2.000000 INFINITY

4 -3.000000 11.000000 INFINITY

5 -6.000000 INFINITY 2.000000

53

Sensitivity Analysis (Excel Solver)

54

Sensitivity Analysis: Allowable Increase/Decrease

Given an optimal dual solution, u, the allowable increase (decrease) on the right hand side of the constraint

(ak)>x ≥ bk

is the maximum increase (decrease) in the right hand side bk such that u is still an optimal dual solution (or the primal becomes infeasible).

The dual values and the allowable increase/decrease are a natural by-product of projection.

55

Sensitivity Analysis

The result of projection on the linear program is

z0 ≥ −22 (E0) + 2(E2) + (E4) (16)

z0 ≥ −24 (E0) + 3(E2) + 0.5(E5) (17)

z0 ≥ −44 (E0) + 4(E2) + 2(E3) + (E4) (18)

z0 ≥ −35 (E0) + 4(E2) + (E3) + 0.5(E5) (19)

z0 ≥ −30 (E0) + (E1) + 4(E2) (20)

z0 ≥ −32 (E0) + 4(E2) + (E6) (21)

0 ≥ −11 (E2) + (E3) (22)
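A sketch of how the dual price and the allowable increase fall out of the rows above. I am reading (E5) and (E6) as the nonnegativity rows x1 ≥ 0, x2 ≥ 0 with zero right hand sides, as the multiplier labels suggest; that reading is my assumption:

```python
# Each z0 row k gives a bound d_k = b^T u^k; the optimum is their max,
# and the allowable increase on b_1 (row E1) is how far b_1 can move
# before a different row attains the max.  Multipliers per row, keyed by
# constraint index (E1)..(E6), with the d_k values from (16)-(21):
rows = [(-22, {2: 2, 4: 1}),           # (16)
        (-24, {2: 3, 5: 0.5}),         # (17)
        (-44, {2: 4, 3: 2, 4: 1}),     # (18)
        (-35, {2: 4, 3: 1, 5: 0.5}),   # (19)
        (-30, {1: 1, 2: 4}),           # (20)
        (-32, {2: 4, 6: 1})]           # (21)
z0 = max(d for d, _ in rows)
print(z0)            # -22: the optimal value, attained by row (16)

# increase b_1 by t: only rows using (E1) change, by +1*t each; the
# binding row (16) does not use (E1), so the dual stays optimal until
# row (20) catches up: -30 + t = -22, i.e. t = 8, the allowable increase.
t = min((z0 - d) / m[1] for d, m in rows if 1 in m)
print(t)             # 8.0
```

The binding row (16) uses 2(E2) + (E4), which is exactly why the dual prices on (E2) and (E4) are nonzero and the others are zero.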

56

Sensitivity Analysis

Questions:

- Why is the dual price on row (E2) −2?

- Why is the allowable increase on row (E1) 8?

- Why is the allowable decrease on row (E4) 2?

57

Sensitivity Analysis – 100% Rule

In calculating the allowable increase/decrease for the linear programming right hand sides we assume that only one right hand side changes while the others are held constant.

What if we change more than one b?

If the sum of the percentage increases (decreases) of all the right hand sides in the linear program min{c>x | Ax ≥ b} does not exceed 100%, then the current optimal dual solution remains optimal.

Proof: Homework!

58

Degeneracy

People are interested in knowing if there are alternative optima. Why?

If an optimal solution is NOT strictly complementary, then we have alternative primal or dual solutions. Why?

If an optimal solution is NOT strictly complementary, then the range analysis may not be valid. Argle Bargle!

Can we tell from a solver output if we have primal or dual alternative optima?

It depends. Degenerate solutions mess things up.

59

Degeneracy

Closely related to alternative dual optima is the concept of primal degeneracy.

A solution x of the primal linear program min {c>x | Ax ≥ b}, where A is an m × n matrix, is primal degenerate if the submatrix of A defined by the binding constraints ((ai)>x = bi, where ai is row i of A) has rank strictly less than the number of binding constraints.

Primal degeneracy can also lead to incorrect values of the allowable increase/decrease. Argh!!!!

Some authors define primal degeneracy to be alternative dual optima, but this is not correct.
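The definition can be checked on the point x = (3, 4) from the example analyzed a few slides ahead (RHS of E3 set to −4), where (E2), (E3), (E4) are binding. The `rank` helper is a small Gaussian-elimination sketch of my own:

```python
# Primal degeneracy test: rank of the binding rows versus their count.
def rank(rows):
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

binding = [[-1, -1], [0, -1], [-2, 1]]   # rows (E2), (E3), (E4) at x = (3, 4)
print(rank(binding) < len(binding))      # True: x is primal degenerate
```

With only two variables, any three binding rows have rank at most 2, so three binding constraints force degeneracy.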

60

Degeneracy

Consider the linear program

min −x2 (E0)

x1 − x2 ≥ −3 (E1)
−x1 − x2 ≥ −7 (E2)
−x2 ≥ −2 (E3)
−2x1 + x2 ≥ −2 (E4)
x1 ≥ 0 (E5)
x2 ≥ 0 (E6)

61

Degeneracy

Applying projection to this linear program and eliminating the x variables gives

z0 ≥ −2 (E0) + (E3)
z0 ≥ −5 (E0) + (1/2)(E1) + (1/2)(E2)
z0 ≥ −8 (E0) + 2(E1) + (E4)
z0 ≥ −7 (E0) + (E2) + (E5)
0 ≥ −2 (E3) + (E6)
0 ≥ −8 2(E1) + (E4) + (E6)
0 ≥ −7 (E2) + (E5) + (E6)
0 ≥ −10 (E1) + (E2) + 2(E6)
0 ≥ −4 (E3) + (E4) + 2(E5)
0 ≥ −4 (E1) + (E2) + 2(E4) + 2(E5)
0 ≥ −10 2(E1) + 2(E4) + 2(E5)
0 ≥ −9 (E2) + (E4) + 3(E5)

62

Degeneracy

Question: What is the allowable decrease on constraint (E3)?

Answer: ??????????

Let’s look at what we get from an LP code.

63

Degeneracy

The LINGO solution (RHS of E3 is -2)

64

Degeneracy

The LINGO Range analysis (RHS of E3 is -2)

65

Degeneracy

What is the allowable decrease on (E3) according to LINGO?

What is the allowable decrease on (E3) according to projection?

Argle bargle!!!!!!!!! What happened? Do we have strict complementary slackness?

Let's look at a picture. In the picture we have decreased the RHS to -4 from -2, that is, the constraint is x2 ≤ 4.

66

Degeneracy

[Figure: the feasible region of the linear program with constraints (E1), (E2), (E3), (E4) labeled and the point (3.0, 4.0) marked.]

67

Degeneracy

The LINGO solution (RHS of E3 is -4)

68

Degeneracy

The LINGO Range analysis (RHS of E3 is -4)

69

Degeneracy Range analysis (RHS of E3 is -4)

The primal system (tight constraints):

−x1 − x2 = −7 (E2)
−x2 = −4 (E3)
−2x1 + x2 = −2 (E4)

The dual system:

−u2 − 2u4 = 0
−u2 − u3 + u4 = −1
u2, u3, u4 ≥ 0

A unique dual solution, but alternative primal optima:

u2 = 0, u3 = 1, u4 = 0

70

Degeneracy

The LINGO solution (RHS of E3 is -5)

71

Degeneracy

The LINGO Range analysis (RHS of E3 is -5)

72

Degeneracy Range analysis (RHS of E3 is -5)

The primal system (tight constraints):

x1 − x2 = −3 (E1)
−x1 − x2 = −7 (E2)
−x2 = −5 (E3)

The dual system:

u1 − u2 = 0
−u1 − u2 − u3 = −1
u1, u2, u3 ≥ 0

Substituting u2 = u1 gives

−2u1 − u3 = −1

Alternative dual solutions, but a unique primal optimum (x1 = 2, x2 = 5).

73

Range Analysis Summary

[Figure: the feasible region of the linear program with constraints (E1), (E2), (E3), (E4) labeled and the point (3.0, 4.0) marked.]

74

Range Analysis Summary

Constraint    Solution          Degenerate   Alternate Primal   Alternate Dual
−x2 ≥ −2     x1 = 0, x2 = 2    No           Yes                No
−x2 ≥ −3     x1 = 0, x2 = 3    Yes          Yes                No
−x2 ≥ −4     x1 = 3, x2 = 4    Yes          Yes                No
−x2 ≥ −5     x1 = 2, x2 = 5    Yes          No                 Yes

75

Range Analysis Summary

Here is something to think about while lying in bed at night.

Assume there is an optimal solution to the linear program such that

- the solution is not strictly complementary

- the solution is not degenerate

What can we conclude about alternative optimal primal and alternative optimal dual solutions?

How can Simplex spot a degenerate solution?

76

Degeneracy – Key Take Aways

Here is the problem: Simplex codes calculate the allowable increase/decrease by how much a right hand side can change before the set of positive primal variables and constraints with positive slack changes.

Stated another way: Simplex codes calculate the allowable increase/decrease by how much a right hand side can change before there is a change in the basis.

This may not actually be equal to the true allowable increase (i.e., before the value of the optimal dual variables changes).

There may be several basis changes before the dual solution changes.

77

Degeneracy – Key Take Aways

- If there is NOT strict complementary slackness we have alternative primal optima or dual optima.

- If there is NOT strict complementary slackness range analysis may be misleading.

- If there is primal degeneracy, knowing which kind of alternative optima (primal or dual) we have is difficult.

78

Consistency Testing

Objective: It is desirable to characterize for which b ∈ Rm the set {x ∈ Rn | Ax ≥ b} is nonempty, or consistent.

Let A be an m by n matrix.

We call V : Rm → R an LP-consistency tester for A if for any b ∈ Rm, V(b) ≤ 0 if and only if {x ∈ Rn : Ax ≥ b} is nonempty.

79

Consistency Testing

The beauty of projection is that it provides an LP-consistency tester.

Projecting out the x variables in the system Ax ≥ b gives

0 ≥ b>uk , k = 1, . . . , q

Define V (b) by

V (b) := max{b>uk , k = 1, . . . , q}

We have seen that Ax ≥ b is consistent if and only if V (b) ≤ 0.
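A minimal sketch of V for the LP from the sensitivity slides, whose projection produced a single x-free row, (22), with multipliers (E2) + (E3):

```python
# LP-consistency tester built from projection: the x-free rows supply
# multiplier vectors u^k with A^T u^k = 0, u^k >= 0, and
# V(b) = max_k b^T u^k.  Here only row (22) survives, using (E2) + (E3).
def V(b):
    uks = [[0, 1, 1, 0]]          # multipliers on (E1)..(E4) from row (22)
    return max(sum(ui * bi for ui, bi in zip(uk, b)) for uk in uks)

b = [2, -8, -3, -6]               # the RHS used in the sensitivity slides
print(V(b) <= 0)                  # True: Ax >= b is consistent
```

Here V(b) = b2 + b3 = −11 ≤ 0, which is exactly row (22)'s statement 0 ≥ −11.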

80

Optimal Value Function

What is the efficient frontier in finance?

81

Optimal Value Function

Increasing the right hand side bk of constraint k in the linear program

min {c>x | Ax ≥ b},

will result in infeasibility or a new optimal dual solution. If a new optimal dual solution is the result, then the new dual solution will have a strictly greater value for component k.

Hurting hurts more and more.

82

Optimal Value Function

If u is an optimal dual solution to min {c>x | Ax ≥ b} and θk is the allowable increase (decrease) on the right hand side bk, then increasing (decreasing) bk by more than θk will either result in an infeasible primal or a new optimal dual solution û where ûk > uk (ûk < uk).

Why does this follow from projection?

83

Optimal Value Function

Proposition 2.48: If the projection of the system z0 − c>x ≥ 0, Ax ≥ b into the z0 variable space is not R1 and is not empty, then there exists a set of vectors u1, . . . , uq such that

z0(b) = min {c>x | Ax ≥ b} = max {b>uk | k = 1, . . . , q}

for all b such that {x | Ax ≥ b} ≠ ∅.

Corollary 2.49: The optimal value function z0(b) = min {c>x | Ax ≥ b} is a piecewise linear convex function over the domain for which the linear program is feasible.
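Corollary 2.49 can be seen numerically with the projection rows from the sensitivity slides: z0(b) is a max of finitely many linear functions of b, hence piecewise linear and convex. Varying b1 only moves row (20), which carries multiplier 1 on (E1); the reading of (E5), (E6) as zero right-hand-side sign rows is my assumption, as before:

```python
# z0(b) = max_k b^T u^k as a function of b_1 alone; only row (20)
# depends on b_1, the other d_k keep their fixed values.
def z0(b1):
    d = {16: -22, 17: -24, 18: -44, 19: -35, 20: -30 + (b1 - 2), 21: -32}
    return max(d.values())

vals = [z0(b1) for b1 in range(0, 16, 2)]
print(vals)   # flat at -22 until b1 passes 10 = 2 + allowable increase 8
diffs = [b - a for a, b in zip(vals, vals[1:])]
assert all(y >= x for x, y in zip(diffs, diffs[1:]))   # slopes nondecrease
```

The nondecreasing slopes are the convexity in the corollary, and the kink at b1 = 10 is "hurting hurts more and more" made concrete.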

84

Key Results

- Projection

- Projection does not create or destroy solutions

- Projection yields dual multipliers

- Farkas' Lemma – Theorems of the Alternative

- Weyl's Theorem

- Solve a linear program with projection

- Weak and Strong Duality

- Complementary Slackness

- Sensitivity Analysis

- Optimal value function

85