Fundamentals of Algorithms for Constrained Optimization (anitescu/CLASSES/2011)
Section 9 Fundamentals of Algorithms for Constrained Optimization
Follows N & W, section 15.
9.1 TYPES OF CONSTRAINED OPTIMIZATION ALGORITHMS
Types of Optimization Algorithms
• All of the algorithms iteratively solve a simpler problem:
  – Penalty and augmented Lagrangian methods
  – Sequential quadratic programming
  – Interior-point methods
• The approach follows the usual divide-and-conquer strategy:
  Constrained Optimization → Unconstrained Optimization → Nonlinear Equations → Linear Equations
Quadratic Programming Problems
• Algorithms for such problems are interesting to explore because:
  – 1. Their structure can be efficiently exploited.
  – 2. They form the basis for other algorithms, such as augmented Lagrangian and sequential quadratic programming methods.
Penalty Methods
• Idea: replace the constraints by a penalty term.
• Inexact penalties: the penalty parameter must be driven to infinity to recover the solution. Example, the quadratic penalty:

  x* = argmin f(x) subject to c(x) = 0
  x_μ = argmin f(x) + (μ/2) Σ_{i∈E} c_i²(x);   x* = lim_{μ→∞} x_μ

  Solve with unconstrained optimization.
• Exact but nonsmooth penalty – the penalty parameter can stay finite:

  x* = argmin f(x) subject to c(x) = 0   ⇒   x* = argmin f(x) + μ Σ_{i∈E} |c_i(x)|,   μ ≥ μ₀
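As a concrete illustration of the inexact quadratic penalty, consider the hypothetical toy problem min x² subject to x − 1 = 0, whose solution is x* = 1. The penalized minimizer has a closed form and reaches x* only in the limit μ → ∞ (a minimal sketch, not any particular production method):

```python
# Quadratic-penalty sketch for: minimize x^2 subject to c(x) = x - 1 = 0.
# The penalized objective Q(x; mu) = x^2 + (mu/2)(x - 1)^2 is smooth and
# unconstrained; its minimizer has a closed form.

def penalty_minimizer(mu):
    # d/dx [x^2 + (mu/2)(x - 1)^2] = 2x + mu*(x - 1) = 0  =>  x = mu / (2 + mu)
    return mu / (2.0 + mu)

for mu in (1.0, 10.0, 100.0, 1000.0):
    x_mu = penalty_minimizer(mu)
    print(f"mu = {mu:7.1f}  x_mu = {x_mu:.6f}  infeasibility = {abs(x_mu - 1):.2e}")
```

Note how the iterates approach feasibility only as μ grows, which is exactly the "parameter driven to infinity" behavior of inexact penalties.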
Augmented Lagrangian Methods
• Mix the Lagrangian point of view with a penalty point of view:

  x* = argmin f(x) subject to c(x) = 0
  x_{μ,λ} = argmin f(x) − Σ_{i∈E} λ_i c_i(x) + (μ/2) Σ_{i∈E} c_i²(x)
  x* = lim_{λ→λ*} x_{μ,λ}   for some fixed μ ≥ μ₀ > 0
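On the same hypothetical toy problem (min x² subject to x − 1 = 0, with solution x* = 1 and multiplier λ* = 2), the augmented Lagrangian iteration converges with μ held fixed, using the standard first-order multiplier update λ ← λ − μ c(x). A minimal sketch:

```python
def auglag_step(lam, mu):
    # Inner solve (closed form): argmin_x  x^2 - lam*(x - 1) + (mu/2)(x - 1)^2
    # Stationarity: 2x - lam + mu*(x - 1) = 0  =>  x = (lam + mu) / (2 + mu)
    x = (lam + mu) / (2.0 + mu)
    lam_new = lam - mu * (x - 1.0)   # first-order multiplier update
    return x, lam_new

lam, mu = 0.0, 10.0                  # mu stays FIXED; only lam is updated
for k in range(20):
    x, lam = auglag_step(lam, mu)
print(x, lam)                        # approaches x* = 1, lambda* = 2
```

Unlike the pure quadratic penalty, feasibility is achieved with a finite μ because the multiplier term absorbs the constraint force.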
Sequential Quadratic Programming Algorithms
• Solve successively quadratic programs.
• It is the analogue of Newton's method for the constrained case when B_k is the Hessian of the Lagrangian:

  min_p (1/2) pᵀ B_k p + ∇f(x_k)ᵀ p
  subject to ∇c_i(x_k)ᵀ p + c_i(x_k) = 0, i ∈ E
             ∇c_i(x_k)ᵀ p + c_i(x_k) ≥ 0, i ∈ I
  with B_k = ∇²_{xx} L(x_k, λ_k)

• But how do you solve the subproblem? It is possible with extensions of the simplex method, which I do not cover.
• An option is a BFGS approximation of B_k, which makes the subproblem convex.
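For equality constraints only, the SQP step is a Newton step on the KKT system. A minimal sketch on the hypothetical problem min x₁² + x₂² subject to x₁² + x₂ − 1 = 0 (solution x* = (1/√2, 1/2), λ* = 1), with a tiny Gaussian-elimination helper for the 3×3 KKT system:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sqp_step(x1, x2, lam):
    # Problem data: f = x1^2 + x2^2,  c = x1^2 + x2 - 1
    g = [2 * x1, 2 * x2]            # gradient of f
    a = [2 * x1, 1.0]               # gradient of c
    c = x1 ** 2 + x2 - 1.0
    B = [[2.0 - 2.0 * lam, 0.0],    # Hessian of the Lagrangian f - lam*c
         [0.0, 2.0]]
    # KKT system:  B p - a * lam_new = -g,   a^T p = -c
    KKT = [[B[0][0], B[0][1], -a[0]],
           [B[1][0], B[1][1], -a[1]],
           [a[0],    a[1],     0.0]]
    p1, p2, lam_new = solve(KKT, [-g[0], -g[1], -c])
    return x1 + p1, x2 + p2, lam_new

x1, x2, lam = 1.0, 0.0, 0.5
for k in range(8):
    x1, x2, lam = sqp_step(x1, x2, lam)
print(x1, x2, lam)   # converges to roughly (0.7071, 0.5) with lambda = 1
```

Each iteration solves exactly the QP above (quadratic model of the Lagrangian, linearized constraint), which is why the iterates converge quadratically near the solution.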
Interior Point Methods
• Reduce the inequality constraints with a barrier:

  min_{x,s} f(x) − μ Σ_{i=1}^m log s_i
  subject to c_i(x) = 0, i ∈ E
             c_i(x) − s_i = 0, i ∈ I

• An alternative is to use a penalty as well:

  min_{x,s} f(x) − μ Σ_{i∈I} log s_i + (1/2μ) Σ_{i∈I} (c_i(x) − s_i)² + (1/2μ) Σ_{i∈E} c_i²(x)

• And I can solve it as a sequence of unconstrained problems!
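A one-dimensional, hypothetical illustration of the barrier idea: for min x subject to x ≥ 1, the barrier problem min x − μ log(x − 1) has the closed-form minimizer x_μ = 1 + μ, tracing a path that converges to the solution x* = 1 as μ → 0:

```python
import math

def barrier_value(x, mu):
    # Barrier objective for: minimize x subject to x >= 1.
    return x - mu * math.log(x - 1.0)

def barrier_minimizer(mu):
    # Stationarity: 1 - mu/(x - 1) = 0  =>  x_mu = 1 + mu
    return 1.0 + mu

for mu in (1.0, 0.1, 0.01, 0.001):
    print(f"mu = {mu:6.3f}  x_mu = {barrier_minimizer(mu):.4f}")
```

The minimizers stay strictly interior (x_μ > 1) for every μ > 0, which is the defining property of the barrier path.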
9.2 MERIT FUNCTIONS AND FILTERS
Feasible algorithms
• If I can afford to maintain feasibility at all steps, then I just monitor the decrease in the objective function.
• I accept a point if I have enough descent.
• But this works only for very particular constraints, such as linear constraints or bound constraints (and we will use it there).
• Algorithms that do this are called feasible algorithms.
Infeasible algorithms
• But sometimes it is VERY HARD to enforce feasibility at all steps (e.g. nonlinear equality constraints).
• And I need feasibility only in the limit, so there is a benefit to allowing algorithms to move outside the feasible set.
• But then, how do I measure progress, since I have two apparently contradictory requirements:
  – Reduce the infeasibility, e.g.  Σ_{i∈E} |c_i(x)| + Σ_{i∈I} max{−c_i(x), 0}
  – Reduce the objective function.
  – It has a multiobjective optimization nature!
9.2.1 MERIT FUNCTIONS
Merit function
• One idea, also from multiobjective optimization: minimize a weighted combination of the two criteria:

  φ(x) = w₁ f(x) + w₂ [ Σ_{i∈E} |c_i(x)| + Σ_{i∈I} max{−c_i(x), 0} ];   w₁, w₂ > 0

• But I can scale it so that the weight of the objective is 1.
• In that case, the weight of the infeasibility measure is called the "penalty parameter".
• I can monitor progress by ensuring that φ(x) decreases, as in unconstrained optimization.
Nonsmooth Penalty Merit Functions
• With the objective weight scaled to 1, φ₁(x; μ) = f(x) + μ [ Σ_{i∈E} |c_i(x)| + Σ_{i∈I} max{−c_i(x), 0} ] is called the l1 merit function.
• Sometimes such penalties can even be EXACT.
• μ is the penalty parameter.
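A minimal sketch of evaluating the l1 merit function on hypothetical toy data (`c_eq` and `c_ineq` hold the equality and inequality constraint functions, with inequalities written as c_i(x) ≥ 0):

```python
def l1_merit(f, c_eq, c_ineq, x, mu):
    # phi_1(x; mu) = f(x) + mu * ( sum_E |c_i(x)| + sum_I max(-c_i(x), 0) )
    infeas = (sum(abs(ci(x)) for ci in c_eq)
              + sum(max(-ci(x), 0.0) for ci in c_ineq))
    return f(x) + mu * infeas

# Toy problem: f(x) = x^2, equality x - 1 = 0, inequality x >= 0.
f = lambda x: x * x
c_eq = [lambda x: x - 1.0]
c_ineq = [lambda x: x]

print(l1_merit(f, c_eq, c_ineq, 2.0, 10.0))   # 4 + 10*|1| = 14.0
print(l1_merit(f, c_eq, c_ineq, -0.5, 10.0))  # 0.25 + 10*(1.5 + 0.5) = 20.25
```

A step is judged by whether it decreases φ₁, which trades off objective progress against constraint violation in a single scalar.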
Smooth and Exact Penalty Functions
• Excellent convergence properties, but very expensive to compute.
• Fletcher's augmented Lagrangian (with least-squares multiplier estimates λ(x)):

  φ_F(x; μ) = f(x) − λ(x)ᵀ c(x) + (μ/2) ‖c(x)‖²,   λ(x) = [A(x) A(x)ᵀ]⁻¹ A(x) ∇f(x)

• It is both smooth and exact, but perhaps impractical due to the linear solve needed at every evaluation.
Augmented Lagrangian
• Smooth, but inexact:

  φ(x) = f(x) − Σ_{i∈E} λ_i c_i(x) + (μ/2) Σ_{i∈E} c_i²(x)

• An update of the Lagrange multiplier is needed.
• We will not use it, except with augmented Lagrangian methods themselves.
Line-search (Armijo) for Nonsmooth Merit Functions
• How do we carry out the "progress search"? That is, the line search, or the sufficient-reduction test in trust region.
• In the unconstrained case, we had the Armijo condition

  f(x_k) − f(x_k + τ^m d_k) ≥ −η τ^m ∇f(x_k)ᵀ d_k;   0 < τ < 1,  0 < η < 0.5

• But we cannot use this anymore, since the merit function is not differentiable.
Directional Derivatives of Nonsmooth Merit Function
• Nevertheless, the merit function has a directional derivative (this follows from the properties of the max function):

  D( φ(x, μ); p ) = lim_{t→0, t>0} [ φ(x + tp, μ) − φ(x, μ) ] / t;   D( max{f₁, f₂}; p ) = max{ ∇f₁ᵀp, ∇f₂ᵀp }  at points where f₁ = f₂

• Line search:  φ(x_k, μ) − φ(x_k + τ^m p_k, μ) ≥ −η τ^m D( φ(x_k, μ); p_k )
• Trust region:  φ(x_k, μ) − φ(x_k + p_k, μ) ≥ η₁ [ m(0) − m(p_k) ];   0 < η₁ < 0.5
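The line-search condition above can be sketched directly, with the directional derivative supplied as an argument (a minimal hypothetical helper; here the stand-in merit function is the nonsmooth |x| in one dimension):

```python
def armijo_nonsmooth(phi, x, p, Dphi, eta=0.1, tau=0.5, max_back=30):
    # Find t = tau^m with  phi(x) - phi(x + t*p) >= -eta * t * Dphi,
    # where Dphi = D(phi(x); p) is the directional derivative (Dphi < 0).
    t = 1.0
    for _ in range(max_back):
        if phi(x) - phi(x + t * p) >= -eta * t * Dphi:
            return t
        t *= tau
    return t

phi = abs                                        # nonsmooth merit stand-in
print(armijo_nonsmooth(phi, 2.0, -1.0, -1.0))    # full step accepted: 1.0
print(armijo_nonsmooth(phi, 0.5, -2.0, -2.0))    # overshoots the kink, backtracks: 0.25
```

The second call shows why backtracking is needed: the full step jumps past the nondifferentiable point at 0 and increases φ.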
And… how do I choose the penalty parameter?
• A VERY tricky issue, highly dependent on the penalty function used.
• For the l1 function, the guideline is μ > ‖λ*‖_∞: the penalty must dominate the optimal multipliers.
• But it is almost always chosen adaptively. Criterion: if optimality gets ahead of feasibility, make the penalty parameter more stringent.
• E.g. for the l1 function: take the max norm of the current multiplier estimates plus a safety factor.
9.2.2 FILTER APPROACHES
Principles of filters
• Originates in the multiobjective optimization philosophy: track the objective f(x) and the infeasibility h(x) as separate criteria.
• The problem becomes a biobjective one: reduce both f and h, with feasibility (h → 0) required in the limit.
The Filter approach
Some Refinements
• As in the line search approach, I cannot accept EVERY decrease, since then I may never converge.
• Modification: accept a point only if it improves on each filter entry by a small margin (e.g. a relative margin on the order of 10⁻⁵).
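A sketch of filter acceptance with margins (each filter entry is a pair (f_j, h_j) of objective and infeasibility; the margin parameters β, γ are hypothetical placeholders, not values from the slides):

```python
def acceptable(filt, f_new, h_new, beta=0.99, gamma=0.01):
    # (f_new, h_new) is accepted if, against EVERY entry (f_j, h_j) in the
    # filter, it improves either infeasibility or objective by a margin.
    return all(h_new <= beta * h_j or f_new <= f_j - gamma * h_j
               for (f_j, h_j) in filt)

filt = [(1.0, 0.10), (0.5, 0.30)]
print(acceptable(filt, 2.0, 0.05))   # True: much less infeasible than all entries
print(acceptable(filt, 1.5, 0.10))   # False: dominated by the entry (1.0, 0.10)
```

The margins implement exactly the refinement above: a candidate that merely ties an existing pair is rejected, which prevents accumulation of negligible improvements.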
9.3 MARATOS EFFECT AND CURVILINEAR SEARCH
Unfortunately, the Newton step may not be compatible with the penalty
• This is called the Maratos effect.
• Problem: the merit function can reject steps that make excellent progress toward the solution.
• Note: even the closest point on the (Newton) search direction can be rejected!
• So fast convergence does not occur.
Solutions?
• Use Fletcher's function, which does not suffer from this problem.
• Or: following a step p_k, use a second-order correction p̂_k that satisfies

  c(x_k + p_k + p̂_k) = O(‖x_k − x*‖³)   compared with   c(x_k + p_k) = O(‖x_k − x*‖²)

• followed by the update, or by the curvilinear line search along x_k + α p_k + α² p̂_k.
• Since the constraint violation is one order smaller, the corrected Newton step is likelier to be accepted.
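For a single constraint, the standard second-order correction is the minimum-norm step p̂ solving ∇c(x_k)ᵀ p̂ = −c(x_k + p_k). A sketch on the hypothetical constraint c(x) = x₁² + x₂² − 1: a tangent (Newton-like) step from (1, 0) leaves the circle, and the correction removes most of the violation:

```python
def soc_correction(grad_c, c_trial):
    # Minimum-norm p_hat solving  grad_c . p_hat = -c_trial  (one constraint):
    # p_hat = -c_trial * grad_c / ||grad_c||^2
    nrm2 = sum(g * g for g in grad_c)
    return [-c_trial * g / nrm2 for g in grad_c]

c = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0     # constraint c(x) = 0
x = [1.0, 0.0]                                 # current (feasible) point
p = [0.0, 0.5]                                 # tangent step: leaves the circle
trial = [x[0] + p[0], x[1] + p[1]]
grad_c = [2 * x[0], 2 * x[1]]                  # constraint gradient at x
p_hat = soc_correction(grad_c, c(trial))
corrected = [t + ph for t, ph in zip(trial, p_hat)]
print(c(trial), c(corrected))   # violation drops: 0.25 -> 0.015625
```

The corrected point is much closer to the constraint surface, so a merit function or filter is far less likely to reject it, defeating the Maratos effect.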
Section 11 Algorithms for Nonlinear Optimization. Follows N & W, Chapters 17 and 19.
Algorithms for constrained optimization
• It is the story of putting ALL of these building blocks together:
• Augmented Lagrangian
• Interior Point
• Sequential Quadratic Programming
11.1 AUGMENTED LAGRANGIAN
AUGLAG: Equality Constraints
• The augmented Lagrangian:

  L_A(x, λ; μ) = f(x) − Σ_{i∈E} λ_i c_i(x) + (μ/2) Σ_{i∈E} c_i²(x)

• Observation: if λ = λ* and μ ≥ μ₀, then

  ∇_x L_A(x*, λ*; μ) = 0;   ∇²_{xx} L_A(x*, λ*; μ) = ∇²_{xx} L(x*, λ*) + μ ∇c(x*)ᵀ ∇c(x*)
AUGLAG: SOC
• So x* is a stationary point of the augmented Lagrangian for the exact multipliers… but is it a minimum?
• Yes, for μ sufficiently large. Split directions as [Y Z], with Z a basis of the null space of ∇c(x*):

  Zᵀ ∇²_{xx} L(x*, λ*) Z ≻ 0 by the second-order conditions, while the penalty contributes
  μ (∇c(x*) Y)ᵀ (∇c(x*) Y) ≻ 0 on the range space, so ∇²_{xx} L_A(x*, λ*; μ) ≻ 0 for μ sufficiently large.

• So it is *almost* like solving an unconstrained problem… but how do I find multiplier estimates?
Multiplier Estimates Auglag
• At the current estimate λ_k, solve the subproblem min_x L_A(x, λ_k; μ_k) to obtain x_k.
• The obvious choice, read off from ∇_x L_A(x_k, λ_k; μ_k) = 0:  λ_{k+1} = λ_k − μ_k c(x_k).
• What do I do if λ converges but x* is not feasible? Increase the penalty μ (it will have to stop increasing eventually).
The general case
• The bound-constrained formulation uses slacks.
• The problem: min_{x,s} f(x) subject to c_i(x) = 0, i ∈ E;  c_i(x) − s_i = 0, s_i ≥ 0, i ∈ I.
The augmented Lagrangian
• The new AugLag penalizes all the (slack-reformulated) equality constraints.
• The bound-constrained optimization problem: min_{x,s} L_A(x, s, λ; μ) subject to s ≥ 0.
• Same property: if the Lagrange multiplier is the optimal one for the equality constraints and μ is large enough, then x* is a solution!
Practical AugLag alg: LANCELOT
Main computation: minimize the bound-constrained subproblem, using projections onto the bounds.
Forcing sequences (inner tolerances driven to zero) control how accurately each subproblem is solved.
Solving the bound constrained subproblem
• It is an iterative bound-constrained optimization algorithm with trust region.
• Each step solves a bound-constrained QP (not necessarily positive definite), the same as in your homework 4.
• The difference: after a subspace solve, compute the new derivative and update the trust region.
11.2 INTERIOR-POINT METHODS
Outline
• Same idea as in the case of the interior-point method for QP.
• Create a path, interior with respect to the Lagrange multipliers and the slacks, that depends on a smoothing parameter μ.
• Drive μ to 0.
Interior point: the "smoothing" path
• Formulation (with slacks): min_{x,s} f(x) subject to c_i(x) = 0, i ∈ E;  c_i(x) − s_i = 0, s_i ≥ 0, i ∈ I.
• Interior-point (smoothing) path: the KKT system with the complementarity conditions perturbed to s_i z_i = μ; at μ = 0 it is exactly the KKT system.
Barrier interpretation
• The nonlinear equation is the same as the KKT conditions of the barrier problem: min_{x,s} f(x) − μ Σ_i log s_i subject to c_i(x) = 0, i ∈ E, and c_i(x) − s_i = 0, i ∈ I.
Newton Method:
• Linearization of the perturbed KKT system for fixed μ:
Choose the step
• The new iterate: x⁺ = x + α_s p_x, s⁺ = s + α_s p_s, λ⁺ = λ + α_z p_λ, z⁺ = z + α_z p_z.
• Where the step lengths obey the fraction-to-boundary rule: α_s = max{ α ∈ (0, 1] : s + α p_s ≥ (1 − τ) s }, and similarly α_z for z.
• And τ ≈ 0.99 to 0.995.
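The fraction-to-boundary rule can be sketched directly (a minimal hypothetical helper; v is any componentwise-positive vector such as s or z, dv the corresponding step):

```python
def fraction_to_boundary(v, dv, tau=0.995):
    # Largest alpha in (0, 1] keeping  v + alpha*dv >= (1 - tau) * v
    # componentwise, for v > 0. Only negative components of dv bind.
    alpha = 1.0
    for vi, dvi in zip(v, dv):
        if dvi < 0.0:
            alpha = min(alpha, -tau * vi / dvi)
    return alpha

print(fraction_to_boundary([1.0, 2.0], [-2.0, 1.0]))   # 0.4975
print(fraction_to_boundary([1.0], [0.5]))              # 1.0 (no bound is hit)
```

Keeping a fraction 1 − τ of each component ensures the iterates remain strictly interior, which the barrier terms require.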
How do I measure progress?
• Merit function: combine the barrier objective with a measure of the constraint violation.
• I try to decrease it as much as I can.
Basic Interior-Point Algorithm
How to solve the linear system
• Rewrite the Newton direction as a symmetric saddle-point system.
• Can use an indefinite LDLᵀ factorization.
• Or projected CG (since it is in saddle-point form).
Linear System, part II
• Or, we can eliminate p_s and apply LDLᵀ to the smaller system.
• And we can even eliminate p_z, leaving a condensed system in p_x alone.
How do we deal with nonconvexity and non-LICQ?
• Regularization: add small diagonal shifts to the Hessian and constraint blocks.
• Choose the shift δ so that the signature (inertia) of the matrix corresponds to positive definiteness of the reduced matrix.
• The signature can be read off an LDLᵀ factorization.
But, how do I know how far to go in a direction?
• Backtracking search on the merit function (based on the barrier interpretation).
• Directional derivative (for the line search): the nonsmooth infeasibility terms are handled piecewise. For an inequality term max{−c(x), 0}:

  D( max{−c(x), 0}; p ) =  −∇c(x)ᵀ p            if c(x) < 0
                           max{−∇c(x)ᵀ p, 0}    if c(x) = 0
                           0                     otherwise
How do we update barrier parameter?
• Decrease the barrier parameter between outer iterations; for example, the Fiacco–McCormick rule μ_{k+1} = σ μ_k with a fixed σ ∈ (0, 1).
• Step update: resolve the perturbed KKT system for the new μ.
A practical interior-point algorithm
11.3 SEQUENTIAL QUADRATIC PROGRAMMING
Idea:
• Start with the equality constrained problem.
• Find the solution of the subproblem with quadratic objective and linearized constraints, called a quadratic program.
• Define x_{k+1} = x_k + p_k and λ_{k+1} = l_k, where (p_k, l_k) solves the QP; this gives Newton's method on the KKT system.
Extension to inequality constraints.
• For the general problem with inequality constraints:
• Solve successively the quadratic program with linearized equality and inequality constraints.
• E.g. use a BFGS approximation of the Hessian (though density is an issue) and an interior-point QP solver (defined in section 10).
A Sequential Linear-Quadratic Program
• Analogous to the projection/subspace-minimization algorithm.
• In the linear phase, solve the linearized model without the quadratic term (e.g. by interior point).
• Variation (infeasible): add a proximal term + (1/2) pᵀ p to the linear-phase objective.
Determine active set in linear phase
• For the feasible algorithm: the active set A_k is read from the constraints active at the LP solution p_LP.
• For the infeasible algorithm: also track the set V_k of constraints whose linearizations are violated at p_LP.
• Backtrack the merit function along p_LP to obtain the Cauchy point p_C.
Equality Constrained QP: EQP
• Determine the working active set W_k (from A_k and V_k).
• Solve the EQP: the quadratic model with the constraints in W_k imposed as equalities.
• E.g. by truncated, projected CG.
Total Step
• Start from the Cauchy direction p_C.
• Choose the total step by backtracking toward the EQP point, using the same merit function as in the first stage (effectively, a dogleg).
• If the LP solution is infeasible, increase the penalty.