
Laboratoire d’Arithmétique, de Calcul formel et d’Optimisation, ESA - CNRS 6090

On duality in nonconvex vector optimization in Banach spaces using augmented Lagrangians

Phan Quoc Khanh, Tran Hue Nuong & Michel Théra

Rapport de recherche n° 1998-02

Université de Limoges, 123 avenue Albert Thomas, 87060 Limoges Cedex - Tél. 05 55 45 73 23 - Fax. 05 55 45 73 22 - [email protected]

http://www.unilim.fr/laco/


ON DUALITY IN NONCONVEX VECTOR OPTIMIZATION IN BANACH SPACES USING AUGMENTED LAGRANGIANS∗

Phan Quoc Khanh, Tran Hue Nuong†, Michel Théra‡

November 15, 1997

Abstract

This paper shows how the use of penalty functions in terms of projections on the constraint cones, which are orthogonal in the sense of Birkhoff, makes it possible to construct augmented Lagrangians and to define a dual problem of a given nonconvex vector optimization problem. For this duality scheme, weak duality always holds. Using the quadratic growth condition together with inf-stability or a kind of Rockafellar stability called stability of degree two, we derive strong duality results between the properly efficient solutions of the two problems. A strict converse duality result is proved under an additional convexity assumption, which is shown to be essential.

Keywords: vector optimization, positively proper minima, augmented Lagrangian, Birkhoff orthogonality, quadratic growth condition, inf-stability, stability of degree 2.

AMS subject classification: 90C29

∗The research reported here was sponsored in part by the FICU-AUPELF program "Coopération en mathématiques appliquées entre la Belgique, la France et le Vietnam". The research of the first two authors was also supported by the Vietnam National Basic Research Program in Natural Sciences.
†Département de Mathématiques et d'Informatique, Université d'Hochiminh Ville, 227 rue Nguyen Van Cu, Q.1, Hochiminh Ville, Vietnam. Email: [email protected]
‡LACO, UPRESA 6060, Université de Limoges, 87060 Limoges Cedex, France. Email: [email protected]


1 Introduction

Let $X$ be a set, $Y$ be a topological vector space and $Z$ be a Banach space, $Y$ and $Z$ being ordered by closed convex cones $K$ and $M$ respectively. Let $F : X \to Y$ and $G : X \to Z$ be two mappings. Consider the vector optimization problem
\[
\min F(x) \quad \text{subject to} \quad G(x) \in -M. \tag{P}
\]

It is well known that whenever $Y = \mathbb{R}$ and $X$ is a linear space, if problem (P) is convex, i.e. $F$ is convex and $G$ is $M$-convex, one obtains Lagrangian duality between (P) and its Lagrange dual. If (P) is nonconvex, a nonzero duality gap may appear. To derive duality results for nonconvex problems there are three major ways to overcome this duality gap.

• The first one, with probably the largest number of contributions, consists of assuming generalized convexity conditions in order to prove duality for various duality schemes based on Lagrangians: Lagrange, Wolfe, Mond-Weir, etc.; see e.g. [4], [5], [6], [11], [12], [15], [17], [19], [20].

• The second approach is based on defining general, perhaps abstract, duality schemes, e.g. [7], [14].

• The third possibility to improve nonconvex situations is to use augmented Lagrangians. Since these Lagrangians are often combinations of ordinary Lagrangians and penalty functions, many mathematicians working in mathematical programming have contributed numerous results from both the theoretical and the computational points of view. We do not attempt to give a complete account here, but only refer the reader to some basic literature on the subject; see for instance [1], [2], [3], [9], [10], [13], [16], [18], [21].


Most of these contributions are developed in the finite dimensional setting, while references [9], [10] and [21] are devoted to problems in Hilbert spaces. Papers [9] and [10] use the usual quadratic augmented Lagrangians to analyze algorithms for local solutions. An augmented Lagrangian containing multiplier terms built by orthogonal projections on the ordering cone $M$ was proposed in [21] to investigate duality and algorithms. All the references on augmented Lagrangians cited above deal with scalar optimization problems. The authors of this note are unaware of results on vector optimization using augmented Lagrangians, except the abstract ones in [7]. However, many practical applications amount to considering vector optimization with a constraint space $Z$ that is a Banach space, as illustrated by the following example.

Example: Consider a cooperative differential game
\[
\dot y(t) = \varphi(t, y(t), x(t)), \qquad y(t_0) = 0,
\]
\[
h_k(y(t_1)) = 0, \quad g_i(t, y(t), x(t)) \le 0, \quad t \in [t_0, t_1], \ k = 1, \dots, s, \ i = 1, \dots, \ell,
\]
\[
\int_{t_0}^{t_1} \psi_j(t, y(t), x(t))\, dt \to \inf, \qquad j = 1, \dots, m.
\]
Assume that the Cauchy problem formed by the first two equations has a unique solution $y(x)(\cdot)$ for any admissible control $x(\cdot) \in L^m_\infty[t_0, t_1]$. Let $X = L^m_\infty[t_0, t_1]$, $Y = \mathbb{R}^m$, $K = \mathbb{R}^m_+$, $Z = \mathbb{R}^s \times C^\ell[t_0, t_1]$,
\[
F(x) = \int_{t_0}^{t_1} \psi(t, y(t), x(t))\, dt, \qquad
G(x) = \bigl(h(y(x)(t_1)), g(x)(\cdot)\bigr),
\]
\[
M = \bigl\{ (h, g) \in Z \mid h = 0, \ g(t) \ge 0 \ \forall\, t \in [t_0, t_1] \bigr\}.
\]
Then the game reduces to problem (P).

The aim of this note is to consider duality for properly efficient solutions of Problem (P). The main idea developed in order to achieve this goal is to build an augmented Lagrangian using shifted penalty functions in terms of a kind of projection on the cone $M$.


2 Augmented Lagrangian

Throughout the note we assume that the cone $M$ is proximinal, i.e., for each $z \in Z$ there exists $z_M \in M$ such that
\[
\|z - z_M\| = \min_{m \in M} \|z - m\|. \tag{1}
\]
Then we call $z_M$ a projection of $z$ on $M$. Recall that (1) is equivalent to saying that $z - z_M$ is Birkhoff orthogonal to the hyperplane supporting $M$ at $z_M$. Here $x$ is said to be Birkhoff orthogonal to $y$ if
\[
\|x\| \le \|x + \gamma y\| \quad \text{for each } \gamma \in \mathbb{R};
\]
$x$ is said to be orthogonal to a set $A \subset Z$ if $x$ is orthogonal to every $a \in A$.

For convenience, set
\[
z^{-M^*} := z - z_M, \qquad z^{M^*} := z - z_{-M}.
\]
Note that if $Z$ is a Hilbert space, then $z_M$ (respectively $z^{M^*}$, $z^{-M^*}$) is just the orthogonal projection of $z$ on $M$ (respectively $M^*$, $-M^*$), where
\[
M^* := \{ z^* \in Z^* \mid \langle z^*, m \rangle \ge 0 \ \forall\, m \in M \},
\]
and $Z^*$ is the topological dual of $Z$.

For a given $z \in Z$, the point $z_M$ need not be unique. However, for our purposes we can take any projection of $z$ on $M$ as $z_M$.
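
To make the notation concrete, here is a minimal Python sketch for the Hilbert case, under the assumed toy choices $Z = \mathbb{R}^n$ and $M = \mathbb{R}^n_+$ (these choices and all names in the code are illustrative only, not taken from the paper); it computes $z_M$, $z^{M^*}$, $z^{-M^*}$ and checks the Moreau-type decomposition mentioned above.

```python
import numpy as np

def proj_M(z):
    """Orthogonal projection of z onto M = R^n_+ (componentwise)."""
    return np.maximum(z, 0.0)

def proj_minus_M(z):
    """Orthogonal projection of z onto -M = R^n_-."""
    return np.minimum(z, 0.0)

rng = np.random.default_rng(0)
z = rng.normal(size=5)

z_M      = proj_M(z)                 # z_M
z_Mstar  = z - proj_minus_M(z)       # z^{M*}  := z - z_{-M}
z_mMstar = z - z_M                   # z^{-M*} := z - z_M

# In this Hilbert case z^{M*} and z^{-M*} are the projections onto
# M* = R^n_+ (self-dual) and -M* = R^n_-, and z = z_M + z^{-M*}
# with <z_M, z^{-M*}> = 0 (Moreau decomposition).
assert np.allclose(z_Mstar, np.maximum(z, 0.0))
assert np.allclose(z_mMstar, np.minimum(z, 0.0))
assert np.allclose(z, z_M + z_mMstar)
assert abs(np.dot(z_M, z_mMstar)) < 1e-12
print("z =", z, " z_M =", z_M, " z^{M*} =", z_Mstar, " z^{-M*} =", z_mMstar)
```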

Some properties of projections needed in the sequel are collected in the next lemma.

Lemma 1 The following properties hold true:

(i) For $z \notin M$, $z_M \in M$ is a projection of $z$ on $M$ if and only if there exists $\mu \in -M^*$ of norm one such that
\[
\langle \mu, z - z_M \rangle = \|z - z_M\|, \qquad \langle \mu, z_M \rangle = 0;
\]

(ii) One can choose appropriate projections so that $(-z)_M = -z_{-M}$ and $(-z)^{M^*} = -z^{-M^*}$;


(iii) $\|z\| \ge \max\{ \|z^{M^*}\|, \|z^{-M^*}\| \}$;

(iv) $\|z^{M^*}\| = \min\{ \|m\| \mid m \in M + z \}$;

(v) $\|(x + y - m)^{M^*}\| \le \|x^{M^*} + y^{M^*}\|$ for all $x, y \in Z$ and all $m \in M$.

Proof - (i) By a theorem of Garkavi (e.g. [8], p. 76), $z_M$ is a projection of $z$ if and only if there exists $\mu \in Z^*$ of norm one such that, for each $m \in M$,
\[
\langle \mu, z - z_M \rangle = \|z - z_M\|, \qquad \langle \mu, z_M \rangle \ge \langle \mu, m \rangle.
\]
Since $M$ is a cone, the latter inequality is equivalent to $\mu \in -M^*$ and $\langle \mu, z_M \rangle = 0$.

(ii) is clear from the definitions.

(iii) Observe that
\[
\|z^{-M^*}\| = \|z - z_M\| = \langle \mu, z - z_M \rangle = \langle \mu, z \rangle \le \|z\|.
\]
On the other hand,
\[
\|z^{M^*}\| = \|z - z_{-M}\| \le \|z - (-m)\| \quad \forall\, m \in M.
\]
Taking $m = 0$ one obtains $\|z^{M^*}\| \le \|z\|$.

(iv)
\[
\|z^{M^*}\| = \| -z - (-z)_M \| = \min_{m' \in M} \| -z - m' \| = \min_{m \in M + z} \|m\|.
\]

(v) Since
\[
x^{M^*} + y^{M^*} = x + (-x)_M + y + (-y)_M + m - m \in x + y + M - m,
\]
in view of (iv) we have
\[
\|(x + y - m)^{M^*}\| = \min\{ \|m'\| \mid m' \in M + x + y - m \} \le \|x^{M^*} + y^{M^*}\|
\]


and the proof is complete.
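
As a quick numerical sanity check of Lemma 1, the sketch below tests parts (ii), (iii) and (v) on random data, again in the assumed Hilbert toy setting $Z = \mathbb{R}^n$, $M = \mathbb{R}^n_+$, where all projections are explicit (an illustration only, not part of the paper).

```python
import numpy as np

def zM(z):      return np.maximum(z, 0.0)          # projection on M = R^n_+
def zMstar(z):  return z - np.minimum(z, 0.0)      # z^{M*}  = z - z_{-M}
def zmMstar(z): return z - np.maximum(z, 0.0)      # z^{-M*} = z - z_M

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 6))
    m = np.abs(rng.normal(size=6))                 # an arbitrary element of M

    # (ii) (-z)_M = -z_{-M} and (-z)^{M*} = -z^{-M*}
    assert np.allclose(zM(-z), -np.minimum(z, 0.0))
    assert np.allclose(zMstar(-z), -zmMstar(z))

    # (iii) ||z|| >= max(||z^{M*}||, ||z^{-M*}||)
    assert np.linalg.norm(z) + 1e-12 >= max(np.linalg.norm(zMstar(z)),
                                            np.linalg.norm(zmMstar(z)))

    # (v) ||(x + y - m)^{M*}|| <= ||x^{M*} + y^{M*}||
    assert np.linalg.norm(zMstar(x + y - m)) <= \
           np.linalg.norm(zMstar(x) + zMstar(y)) + 1e-12

print("Lemma 1 (ii), (iii), (v) hold on all random samples.")
```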

In the sequel we shall always assume that the dual quasi-interior of $K^*$, i.e.
\[
K^\bullet := \{ \lambda \in Y^* \mid \langle \lambda, y \rangle > 0 \ \forall\, y \in K \setminus (-K) \},
\]
is nonempty. Let us remark that $K^\bullet \ne \emptyset$ whenever $K^*$ has nonempty interior or, more generally, when $Y$ is Hausdorff and locally convex and $K$ has a weakly compact base $B$, i.e.
\[
K = \bigcup \{ \lambda b \mid \lambda \ge 0, \ b \in B \},
\]
where $B$ is a convex set whose closure does not contain $0$. For instance, any closed pointed convex cone in finite dimension has a compact base. To make use of scalar Lagrangians and to consider positively proper minima (p.p. minima, for short) of (P), it is natural, for a given $\lambda \in K^\bullet$, to investigate together with (P) the scalar problem
\[
\min\, \langle \lambda, F(x) \rangle \quad \text{subject to} \quad G(x) \in -M. \tag{P$_\lambda$}
\]
At this point, we recall that a point $\bar y \in V \subset Y$ is said to be a p.p. minimum of $V$ if there is $\lambda \in K^\bullet$ such that $\langle \lambda, \bar y \rangle \le \langle \lambda, y \rangle$ for every $y \in V$.

In order to choose an appropriate augmented Lagrangian for (P), it is worthwhile to observe that $z \in -M$ if and only if $z^{M^*} = 0$ (by the definition of $z^{M^*}$ and Lemma 1 (iv)). So an ordinary penalty functional for Problem $(P_\lambda)$ is
\[
\phi(x, \zeta) = \langle \lambda, F(x) \rangle + \tfrac{1}{2}\zeta \, \| G(x)^{M^*} \|^2.
\]
It is well known that shifted penalty functionals can be used. In terms of projections, such a functional may be written as
\[
\psi(x, \zeta, z) = \langle \lambda, F(x) \rangle + \tfrac{1}{2}\zeta \, \| (G(x) - z)^{M^*} \|^2.
\]
Following the idea of the augmented Lagrangian in [21], we define the augmented Lagrangian for (P) as
\[
L(x, \lambda, \zeta, z) = \langle \lambda, F(x) \rangle + \tfrac{1}{2}\zeta \, \| (G(x) - z)^{M^*} \|^2 - \tfrac{1}{2}\zeta \, \|z\|^2,
\]


with $x \in X$, $\lambda \in K^*$, $(\zeta, z) \in \mathbb{R}_+ \times Z$. Note that the added term does not depend on $x$, so $\psi$ and $L$ are equivalent when minimized in $x$. Furthermore, if $X$ is a linear space, $L(\cdot, \lambda, \zeta, z)$ is convex whenever (P) is convex. Recall that a mapping $H : X \to Y$ is said to be $K$-convex if
\[
H((1 - \alpha)x + \alpha y) \in (1 - \alpha)H(x) + \alpha H(y) - K
\]
whenever $x \in X$, $y \in X$ and $\alpha$ lies in $[0, 1]$. (P) is called convex if $F$ is $K$-convex and $G$ is $M$-convex. (The convexity of $\| (G(\cdot) - z)^{M^*} \|$ follows from Lemma 1 (v).) Then the attainable set of (P) is $K$-convex. In the case where $Z$ is a Hilbert space, $L$ takes a form involving an explicit multiplier, as follows. Let
\[
f_\lambda(x, u) :=
\begin{cases}
\langle \lambda, F(x) \rangle & \text{if } G(x) - u \in -M,\\
+\infty & \text{otherwise.}
\end{cases}
\]
Then, by Lemma 1 (iv),
\[
\begin{aligned}
L(x, \lambda, \zeta, z) &= \langle \lambda, F(x) \rangle + \tfrac{1}{2}\zeta \min_{u \in G(x) + M} \|u - z\|^2 - \tfrac{1}{2}\zeta \|z\|^2\\
&= \inf_{u \in Z} \Bigl\{ f_\lambda(x, u) + \tfrac{1}{2}\zeta \|u - z\|^2 - \tfrac{1}{2}\zeta \|z\|^2 \Bigr\}\\
&= \inf_{u \in Z} \Bigl\{ f_\lambda(x, u) + \langle \eta, u \rangle + \tfrac{1}{2}\zeta \|u\|^2 \Bigr\}
\end{aligned} \tag{2}
\]
with the multiplier $\eta := -\zeta z$.

Consider the perturbed problem $(P_{\lambda,u})$ obtained from $(P_\lambda)$:
\[
\min\, \langle \lambda, F(x) \rangle \quad \text{subject to} \quad G(x) - u \in -M. \tag{P$_{\lambda,u}$}
\]
In the sequel we shall denote the optimal values of problems $(P_\lambda)$ and $(P_{\lambda,u})$ by $p_\lambda$ and $p_\lambda(u)$, respectively, while $P$ will stand for the set of all p.p. minima of the closure of the attainable set of (P). Then $p_\lambda(u) = \inf_{x \in X} f_\lambda(x, u)$ and, by the first two equalities in (2) (which remain valid when $Z$ is merely a Banach space),
\[
\inf_{x \in X} L(x, \lambda, \zeta, z) = \inf_{u \in Z} \Bigl\{ p_\lambda(u) + \tfrac{1}{2}\zeta \|u - z\|^2 - \tfrac{1}{2}\zeta \|z\|^2 \Bigr\}. \tag{3}
\]
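
The following sketch illustrates the augmented Lagrangian and the multiplier form (2) on made-up finite-dimensional data ($X = \mathbb{R}$, $Y = Z = \mathbb{R}^2$, $M = \mathbb{R}^2_+$; the functions F and G below are arbitrary choices, not from the paper): it evaluates $L(x, \lambda, \zeta, z)$ directly and by brute-force minimization over $u$ of $f_\lambda(x, u) + \langle \eta, u \rangle + \tfrac{1}{2}\zeta\|u\|^2$ with $\eta = -\zeta z$; the two values should agree up to the grid resolution.

```python
import numpy as np

# Toy data (assumed for illustration): X = R, Y = R^2, Z = R^2, M = R^2_+.
F = lambda x: np.array([x**2, (x - 1.0)**2])
G = lambda x: np.array([1.0 - x, x - 3.0])           # G(x) in -M means 1 <= x <= 3

def proj_Mstar(w):                                   # w^{M*} for M = R^2_+
    return np.maximum(w, 0.0)

def L(x, lam, zeta, z):
    """<lam,F(x)> + (zeta/2)||(G(x)-z)^{M*}||^2 - (zeta/2)||z||^2."""
    pen = proj_Mstar(G(x) - z)
    return lam @ F(x) + 0.5 * zeta * pen @ pen - 0.5 * zeta * z @ z

def L_via_multiplier(x, lam, zeta, z, grid=np.linspace(-6, 6, 241)):
    """Right-hand side of (2): inf_u { f_lam(x,u) + <eta,u> + (zeta/2)||u||^2 }, eta = -zeta*z.
    f_lam(x,u) is finite (= <lam,F(x)>) iff u >= G(x) componentwise, so minimize over that set."""
    eta = -zeta * z
    g = G(x)
    u1, u2 = np.meshgrid(grid, grid, indexing="ij")
    feas = (u1 >= g[0]) & (u2 >= g[1])
    val = lam @ F(x) + eta[0]*u1 + eta[1]*u2 + 0.5*zeta*(u1**2 + u2**2)
    return np.min(np.where(feas, val, np.inf))

x, lam, zeta, z = 0.3, np.array([1.0, 2.0]), 2.0, np.array([0.4, -0.7])
print(L(x, lam, zeta, z), L_via_multiplier(x, lam, zeta, z))   # the two values should be close
```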


3 Duality

We define the dual (D) of (P) as
\[
\max_{y \in W} y, \tag{D}
\]
where
\[
W = \{ y \in Y \mid \exists\, \lambda \in K^\bullet, \ \exists\, (\zeta, z) \in \mathbb{R}_+ \times Z, \ \forall\, x \in X, \ \langle \lambda, y \rangle \le L(x, \lambda, \zeta, z) \}.
\]
Observe that every (Pareto) maximum of (D) is also a positively proper maximum of (D).

Lemma 2
\[
\sup_{(\zeta, z) \in \mathbb{R}_+ \times Z} \bigl\{ \zeta \| (G(x) - z)^{M^*} \|^2 - \zeta \|z\|^2 \bigr\} =
\begin{cases}
0 & \text{if } G(x) \in -M,\\
+\infty & \text{otherwise,}
\end{cases}
\]
and, when $G(x) \in -M$, the supremum is attained at $(\zeta, 0)$ for every $\zeta \in \mathbb{R}_+$.

Proof - If $G(x) \in -M$, by Lemma 1 (v) and (iii) one has
\[
\| (G(x) - z)^{M^*} \| \le \| (-z)^{M^*} \| \le \| -z \| = \|z\|,
\]
so the expression under the supremum is nonpositive. On the other hand, for $z = 0$,
\[
G(x)^{M^*} = G(x) - G(x)_{-M} = 0,
\]
i.e. the supremum is attained at $(\zeta, 0)$ for every $\zeta \in \mathbb{R}_+$ and is equal to $0$.

If $G(x) \notin -M$, one has
\[
G(x)^{M^*} = G(x) + (-G(x))_M \ne 0.
\]
Taking $(\zeta_n, z_n) = (n, 0)$ we obtain $\lim_{n \to \infty} \zeta_n \| G(x)^{M^*} \|^2 = +\infty$, establishing the proof.

Thanks to Lemma 2, as in any duality scheme based on Lagrangians, weak duality between (P) and (D) always holds, as follows.


Proposition 1 For any feasible point $x$ of (P) and any $y \in W$ of (D) we have
\[
y \notin F(x) + K \setminus (-K).
\]

Proof - Since $y \in W$, there exist $\lambda \in K^\bullet$ and $(\zeta, z) \in \mathbb{R}_+ \times Z$ such that
\[
\langle \lambda, y \rangle \le L(x, \lambda, \zeta, z) \quad \forall\, x \in X.
\]
Hence, by Lemma 2, $\langle \lambda, y \rangle \le \langle \lambda, F(x) \rangle$ for all feasible $x$ and we are done.

Corollary 1 If, for a feasible $x$, $F(x) \in W$, then $F(x)$ is both a p.p. minimum of (P) and a p.p. maximum of (D).
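
Weak duality is easy to observe numerically. In the sketch below (toy data: $Y = \mathbb{R}^2$, $Z = \mathbb{R}$, $M = \mathbb{R}_+$, with F and G chosen arbitrarily for illustration), a point $y \in W$ is produced from one fixed triple $(\lambda, \zeta, z)$, and the scalar inequality $\langle \lambda, y \rangle \le \langle \lambda, F(x) \rangle$ used in the proof of Proposition 1 is verified at all feasible grid points.

```python
import numpy as np

F = lambda x: np.array([x**2, (x - 3.0)**2])   # Y = R^2, K = R^2_+
G = lambda x: 2.0 - x                          # Z = R, M = R_+: feasibility means x >= 2

def L(x, lam, zeta, z):
    return lam @ F(x) + 0.5*zeta*max(G(x) - z, 0.0)**2 - 0.5*zeta*z**2

lam, zeta, z = np.array([1.0, 1.0]), 2.0, 0.0
xs = np.linspace(-5.0, 10.0, 3001)

m = min(L(x, lam, zeta, z) for x in xs)        # <lam,y> <= L(x,...) for all x  =>  y in W
y = np.array([m/2, m/2])                       # one such y (any y with <lam,y> = m would do)

feas = xs[G(xs) <= 0.0]                        # feasible points of (P)
assert all(lam @ y <= lam @ F(x) + 1e-9 for x in feas)
print("<lam,y> =", lam @ y, " min over feasible x of <lam,F(x)> =",
      min(lam @ F(x) for x in feas))
```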

By virtue of Lemma 2, Problem $(P_\lambda)$ can be rewritten as
\[
\inf_{x \in X} \sup_{(\zeta, z) \in \mathbb{R}_+ \times Z} L(x, \lambda, \zeta, z). \tag{$\widetilde P_\lambda$}
\]
Its dual (defined similarly to (D) from (P)) is
\[
\sup_{(\zeta, z) \in \mathbb{R}_+ \times Z} \inf_{x \in X} L(x, \lambda, \zeta, z). \tag{D$_\lambda$}
\]
If the optimal value $d_\lambda$ of $(D_\lambda)$ satisfies $d_\lambda = -\infty$, then strong duality between $(P_\lambda)$ and $(D_\lambda)$ would mean that $p_\lambda = -\infty$, which does not make any real sense. So it is natural to assume the following boundedness-from-below hypothesis:
\[
\text{there exists } (\bar\zeta, \bar z) \in \mathbb{R}_+ \times Z \text{ such that } \inf_{x \in X} L(x, \lambda, \bar\zeta, \bar z) > -\infty.
\]
This assumption is equivalent to saying (cf. [16]) that there exists $(\bar q, \bar\zeta) \in \mathbb{R} \times \mathbb{R}_+$ such that, for each $u \in Z$,
\[
\bar q - \tfrac{1}{2}\bar\zeta \, \|u\|^2 \le p_\lambda(u). \tag{4}
\]
Indeed, if (4) holds, then taking $\zeta = \bar\zeta$ and $z = 0$, by (3) we see that
\[
\inf_{x \in X} L(x, \lambda, \bar\zeta, 0) = \inf_{u \in Z} \Bigl\{ p_\lambda(u) + \tfrac{1}{2}\bar\zeta \|u\|^2 \Bigr\} \ge \bar q.
\]


Conversely, if there exist $(\bar\zeta, \bar z) \in \mathbb{R}_+ \times Z$ and $\bar q \in \mathbb{R}$ such that
\[
\bar q \le \inf_{x \in X} L(x, \lambda, \bar\zeta, \bar z),
\]
then, again by (3) and since $\|u - \bar z\|^2 \le 2\|u\|^2 + 2\|\bar z\|^2$,
\[
\bar q \le \inf_{u \in Z} \Bigl\{ p_\lambda(u) + \tfrac{1}{2}\bar\zeta \|u - \bar z\|^2 - \tfrac{1}{2}\bar\zeta \|\bar z\|^2 \Bigr\}
\le \inf_{u \in Z} \Bigl\{ p_\lambda(u) + \bar\zeta \|u\|^2 \Bigr\} + \tfrac{1}{2}\bar\zeta \|\bar z\|^2,
\]
so that (4) holds with $q := \bar q - \tfrac{1}{2}\bar\zeta \|\bar z\|^2$ and $\zeta := 2\bar\zeta$ in place of $(\bar q, \bar\zeta)$.

If (4) holds, we say that Problem $(P_\lambda)$ satisfies the quadratic growth condition (q.g.c.).

Observation: $d_\lambda = -\infty$ if and only if the q.g.c. is not satisfied for $(P_\lambda)$.

The following theorem shows that the q.g.c. guarantees a classical result [16], close to strong duality, for our scheme $(P_\lambda)$ and $(D_\lambda)$.

Theorem 1 $(P_\lambda)$ satisfies the q.g.c. if and only if
\[
-\infty < d_\lambda = \liminf_{u \to 0} p_\lambda(u) \le p_\lambda(0) \equiv p_\lambda. \tag{5}
\]

Proof - We only have to prove the "only if" part. Consider an arbitrary $q \in \mathbb{R}$ such that $q < \liminf_{u \to 0} p_\lambda(u)$ and an $\varepsilon > 0$ small enough to have $p_\lambda(u) \ge q$ whenever $\|u\| < \varepsilon$. For $(\bar q, \bar\zeta)$ in (4) one has, for all $\zeta$ sufficiently large,
\[
q - \tfrac{1}{2}\zeta \|u\|^2 \le \bar q - \tfrac{1}{2}\bar\zeta \|u\|^2 \quad \text{if } \|u\| \ge \varepsilon.
\]
Then
\[
q - \tfrac{1}{2}\zeta \|u\|^2 \le p_\lambda(u) \quad \forall\, u \in Z.
\]
Hence
\[
q \le \inf_{u \in Z} \Bigl\{ p_\lambda(u) + \tfrac{1}{2}\zeta \|u\|^2 \Bigr\}.
\]
Since, by (3) with $z = 0$, $\inf_{u \in Z}\{ p_\lambda(u) + \tfrac{1}{2}\zeta\|u\|^2 \} = \inf_{x \in X} L(x, \lambda, \zeta, 0) \le d_\lambda$, and $q < \liminf_{u \to 0} p_\lambda(u)$ was arbitrary, this implies
\[
\liminf_{u \to 0} p_\lambda(u) \le d_\lambda.
\]


The opposite inequality always holds (even without (4)). Indeed, since $\tfrac{1}{2}\zeta\bigl(\|u - z\|^2 - \|z\|^2\bigr) \to 0$ as $u \to 0$ for each fixed $(\zeta, z)$,
\[
\begin{aligned}
d_\lambda &= \sup_{(\zeta, z)} \inf_{u \in Z} \Bigl\{ p_\lambda(u) + \tfrac{1}{2}\zeta \|u - z\|^2 - \tfrac{1}{2}\zeta \|z\|^2 \Bigr\}\\
&\le \sup_{(\zeta, z)} \liminf_{u \to 0} \Bigl\{ p_\lambda(u) + \tfrac{1}{2}\zeta \|u - z\|^2 - \tfrac{1}{2}\zeta \|z\|^2 \Bigr\}\\
&= \sup_{(\zeta, z)} \liminf_{u \to 0} p_\lambda(u) = \liminf_{u \to 0} p_\lambda(u),
\end{aligned}
\]
establishing the proof.
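
To see formula (3) and the equality $d_\lambda = \liminf_{u \to 0} p_\lambda(u)$ of Theorem 1 on a concrete instance, the sketch below uses a two-point set $X = \{0, 1\}$ with a single scalar constraint (all data invented for illustration; the q.g.c. holds trivially since $p_\lambda$ is bounded below) and computes $d_\lambda$ both as $\sup_{(\zeta,z)} \inf_x L$ and through the value function $p_\lambda(u)$.

```python
import numpy as np

# Toy instance: X = {0, 1}, <lam,F(0)> = 1, <lam,F(1)> = 0, G(0) = 0, G(1) = 1, M = R_+.
lamF  = np.array([1.0, 0.0])
Gv    = np.array([0.0, 1.0])
us    = np.linspace(-8.0, 8.0, 1601)
pvals = np.where(us >= 1.0, 0.0, 1.0)           # value function p_lambda(u) of (P_{lambda,u})

def inf_L(zeta, z):                             # inf_x L(x, lambda, zeta, z), X = {0, 1}
    return np.min(lamF + 0.5*zeta*np.maximum(Gv - z, 0.0)**2 - 0.5*zeta*z**2)

def inf_rhs3(zeta, z):                          # inf_u { p(u) + (zeta/2)|u-z|^2 - (zeta/2)z^2 }
    return np.min(pvals + 0.5*zeta*(us - z)**2 - 0.5*zeta*z**2)

zetas, zs = np.linspace(0.0, 10.0, 51), np.linspace(-3.0, 3.0, 61)
d_via_L = max(inf_L(ze, z) for ze in zetas for z in zs)     # sup_{(zeta,z)} inf_x L
d_via_p = max(inf_rhs3(ze, z) for ze in zetas for z in zs)  # same quantity through (3)

liminf_p = pvals[np.abs(us) < 0.05].min()       # liminf_{u->0} p(u): p is identically 1 near 0
p_at_0   = pvals[np.argmin(np.abs(us))]
print(d_via_L, d_via_p, liminf_p, p_at_0)       # all four values should equal 1.0 here
```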

To obtain a corresponding result for (P), let $D$ similarly be the set of all p.p. maxima of the closure of the attainable set of (D). We assume that
\[
P(u) \ne \emptyset \quad \text{for all } u \text{ in a neighbourhood of } 0,
\]
where $P(u)$ denotes the set of all p.p. minima of the closure of the attainable set of the perturbed problem $(P_u)$, so that $P = P(0)$. Observe that
\[
p_\lambda(u) = \inf_{y \in P(u)} \langle \lambda, y \rangle.
\]

Let us define an inferior limit for a multifunction $A : Z \rightrightarrows Y$ as follows. First, for $B : Z \rightrightarrows \mathbb{R}$ we write
\[
\liminf_{u \to 0} B(u) := \lim_{s \to 0} \inf_{\|u\| \le s} B(u).
\]
Then define
\[
K^\bullet\text{-}\liminf_{u \to 0} A(u) := \Bigl\{ y \in \limsup_{u \to 0} A(u) \;\Bigm|\; \exists\, \lambda \in K^\bullet, \ \langle \lambda, y \rangle = \liminf_{u \to 0} \langle \lambda, A(u) \rangle \Bigr\},
\]
where the symbol "limsup" means the superior limit of sets in the Painlevé-Kuratowski sense:
\[
\limsup_{u \to 0} A(u) = \bigcap_{\varepsilon > 0} \mathrm{cl} \bigcup_{\|u\| \le \varepsilon} A(u).
\]


Then
\[
\liminf_{u \to 0} p_\lambda(u) = \lim_{s \to 0} \inf_{\|u\| \le s} \inf_{y \in P(u)} \langle \lambda, y \rangle = \lim_{s \to 0} \inf_{\|u\| \le s} \langle \lambda, P(u) \rangle = \liminf_{u \to 0} \langle \lambda, P(u) \rangle
\]
and
\[
K^\bullet\text{-}\liminf_{u \to 0} P(u) = \Bigl\{ y \in \limsup_{u \to 0} P(u) \;\Bigm|\; \exists\, \lambda \in K^\bullet, \ \langle \lambda, y \rangle = \liminf_{u \to 0} p_\lambda(u) \Bigr\}. \tag{6}
\]
If $y \in K^\bullet\text{-}\liminf_{u \to 0} P(u)$, then for any $\lambda$ satisfying (6) (for this $y$) we say that $y$ corresponds to $\lambda$.

We define a kind of relaxed compactness as follows. A family $A(u)$, $u \in U \subset Z$, of sets in $Y$ is said to be $K^\bullet$-compact at $u = 0$ if, for each $\lambda \in K^\bullet$, each sequence $\{u_n\}_{n \in \mathbb{N}}$ in $U$ with limit $0$ and each sequence $\{y_n\}_{n \in \mathbb{N}}$ with $y_n \in \mathrm{cl}\, A(u_n)$ such that $\langle \lambda, y_n \rangle$ converges, there exist $y \in Y$ and a subsequence $\{y_{n_k}\}_{k \in \mathbb{N}}$ converging to $y$ as $k \to \infty$.

Of course, if $A(\cdot)$ is a multifunction continuous at $u = 0$ and each $A(u)$ is relatively compact, then the family $\{ A(u) \mid u \in Z \}$ is $K^\bullet$-compact at $u = 0$. This condition is rather weak. However, there are non-$K^\bullet$-compact families, as the example below shows.

Example: Let $Y = \mathbb{R}^2$, $K = \{ (y_1, y_2) \mid -y_1 \le y_2 \le y_1 \}$, $Z = \mathbb{R}$, $U = \{ 1/n \mid n \ge 1 \} \cup \{0\}$, $A(0) = 1$ and $A(1/n) = \{ (1/n, n) \}$ for $n \ge 1$. Then, for $\lambda = (1, 0)$, $\langle \lambda, A(1/n) \rangle = 1/n$ tends to $0$, but the sequence $\{ (1/n, n) \}_{n \ge 1}$ has no convergent subsequence. So the family $A(u)$ is not $K^\bullet$-compact at $u = 0$.

Since, for Problem (P), $X$ is an arbitrary set and $G$ is an arbitrary mapping, in order to obtain for (P) a result corresponding to Theorem 1, it is essential to impose additionally the $K^\bullet$-compactness at $u = 0$ of the family of the attainable sets of $(P_u)$, $u \in Z$. Also, when seeking duality results, we often assume the existence of solutions of the primal problem.


Theorem 2 Assume that the family of the attainable sets of $(P_u)$, $u \in Z$, is $K^\bullet$-compact at $u = 0$. Then the following assertions hold:

(i) If $(P_\lambda)$ satisfies the q.g.c. and if, for all $u$ in a neighbourhood of zero, there exists $y \in P(u)$ corresponding to $\lambda$, then the set of $y \in K^\bullet\text{-}\liminf_{u \to 0} P(u)$ corresponding to $\lambda$ is nonempty and contained in $D$;

(ii) If the assumptions in (i) are satisfied for all $\lambda \in K^\bullet$, then
\[
\emptyset \ne K^\bullet\text{-}\liminf_{u \to 0} P(u) \subset D \subset \mathrm{cl}\bigl( Y \setminus (P + K \setminus (-K)) \bigr).
\]

Proof - (i) Suppose to the contrary that among all $y \in Y$ such that $\langle \lambda, y \rangle = \liminf_{u \to 0} p_\lambda(u) = d_\lambda$ there is no point of $\limsup_{u \to 0} P(u)$. Then, for each such $y$ and for every sequence $\{u_n\}_{n \in \mathbb{N}}$ with $u_n \in Z$ and $\lim_{n \to \infty} u_n = 0$, there exists a neighbourhood $Y_y$ of $y$ such that $P(u_n) \cap Y_y = \emptyset$ for all large $n$.

On the other hand, there exists a sequence $\{u_n\}_{n \in \mathbb{N}}$ with limit $0$ such that $\lim_{n \to \infty} p_\lambda(u_n) = d_\lambda$. By the assumption on $P(u_n)$ one can choose $y_n$ in the closure of the attainable set of $(P_{u_n})$ such that $y_n \in P(u_n)$ and $\langle \lambda, y_n \rangle = p_\lambda(u_n)$. Hence, the $K^\bullet$-compactness yields the existence of $y \in Y$ and a subsequence $\{y_{n_k}\}_{k \in \mathbb{N}}$ such that $\lim_{k \to \infty} y_{n_k} = y$. Clearly $\langle \lambda, y \rangle = d_\lambda$. This contradicts the fact that all $y_{n_k} \notin Y_y$. Furthermore, it is obvious that the mentioned set of $y$ is contained in $D$.

(ii) follows from (i) and the weak duality.

Now let us return to the relations between $p_\lambda$ and $d_\lambda$. Theorem 1 clearly has the following consequence.

Theorem 3 Suppose that $(P_\lambda)$ satisfies the q.g.c. Then $d_\lambda = p_\lambda$, i.e.
\[
\sup_{(\zeta, z)} \inf_{x} L(x, \lambda, \zeta, z) = \inf_{x} \sup_{(\zeta, z)} L(x, \lambda, \zeta, z),
\]
if and only if $(P_\lambda)$ is inf-stable in the sense that
\[
\liminf_{u \to 0} p_\lambda(u) \ge p_\lambda.
\]


For our vector problems (P) and (D) the following result is then valid.

Theorem 4 Let all the assumptions of Theorem 2 be satisfied and let, for each $\lambda \in K^\bullet$, $(P_\lambda)$ be inf-stable. Then
\[
P \subset K^\bullet\text{-}\liminf_{u \to 0} P(u) \subset D.
\]

Proof - Only the first inclusion has to be checked. Of course
\[
P = P(0) \subset \limsup_{u \to 0} P(u).
\]
On the other hand,
\[
P \subset \{ y \in Y \mid \exists\, \lambda \in K^\bullet, \ \langle \lambda, y \rangle = p_\lambda \}.
\]
Since $p_\lambda = \liminf_{u \to 0} p_\lambda(u)$ for all $\lambda \in K^\bullet$, the conclusion follows immediately.

In order to have strong duality relations between p.p. minima of (P) and p.p. maxima of (D) (and not merely through their closures), following a traditional approach (cf. [16], [21]), we must add some extra conditions.

Problem $(P_\lambda)$ is said to be locally stable of degree 2 if there exist $\varepsilon > 0$ and $(\bar\zeta, \bar z) \in \mathbb{R}_+ \times Z$ such that $\|u\| < \varepsilon$ implies
\[
p_\lambda - \tfrac{1}{2}\bar\zeta \|u - \bar z\|^2 + \tfrac{1}{2}\bar\zeta \|\bar z\|^2 \le p_\lambda(u). \tag{7}
\]
If (7) holds for all $u \in Z$, $(P_\lambda)$ is called globally stable of degree 2 (we abbreviate these two conditions as l.s.2 and g.s.2, respectively).
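
These stability notions can be inspected directly on a convex toy value function. Take, purely for illustration, $p_\lambda(u) = \max(1 - u, 0)^2$, which is the value function of $\min x^2$ subject to $1 - x \le u$ (with $Z = \mathbb{R}$, $M = \mathbb{R}_+$, $\lambda = 1$; none of this data comes from the paper). The sketch checks (7) on a grid for the particular pair $(\bar\zeta, \bar z) = (4, -1/2)$, so that this problem is in fact g.s.2.

```python
import numpy as np

p  = lambda u: np.maximum(1.0 - u, 0.0)**2      # value function of: min x^2  s.t.  1 - x <= u
p0 = p(0.0)                                     # p_lambda = p_lambda(0) = 1

zeta_bar, z_bar = 4.0, -0.5                     # a pair (zeta, z) for which (7) holds globally
us = np.linspace(-50.0, 50.0, 20001)

lhs = p0 - 0.5*zeta_bar*(us - z_bar)**2 + 0.5*zeta_bar*z_bar**2   # left-hand side of (7)
assert np.all(lhs <= p(us) + 1e-9)              # (7) holds for every u on the grid: g.s.2
print("max violation of (7):", np.max(lhs - p(us)))               # ~0, attained near u = 0
```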

Remark: The l.s.2 property is clearly stronger than inf-stability (strictly stronger!). The g.s.2 property is equivalent to the two conditions l.s.2 and q.g.c. together. Indeed, the g.s.2 property of course contains l.s.2. It also implies the q.g.c. since, by (7),
\[
p_\lambda \le \inf_{u \in Z} \Bigl( p_\lambda(u) + \tfrac{1}{2}\bar\zeta \|u - \bar z\|^2 - \tfrac{1}{2}\bar\zeta \|\bar z\|^2 \Bigr) = \inf_{x \in X} L(x, \lambda, \bar\zeta, \bar z),
\]
which is the boundedness-from-below property shown above to be equivalent to the q.g.c.


Conversely, by the q.g.c. property (4) one can choose $\zeta' \ge \bar\zeta$ ($\bar\zeta$ being the constant from the l.s.2 property) large enough so that, for the $\varepsilon$ of the l.s.2 property,
\[
p_\lambda(u) + \tfrac{1}{2}\zeta' \|u\|^2 \ge p_\lambda \quad \text{if } \|u\| \ge \varepsilon.
\]
On the other hand,
\[
\begin{aligned}
\inf_{\|u\| < \varepsilon} \Bigl( p_\lambda(u) + \tfrac{1}{2}\zeta' \|u\|^2 \Bigr)
&\ge \inf_{\|u\| < \varepsilon} \Bigl( p_\lambda(u) + \tfrac{1}{2}\bar\zeta \|u\|^2 \Bigr)\\
&\ge \inf_{\|u\| < \varepsilon} \Bigl( p_\lambda(u) + \tfrac{1}{2}\bar\zeta \|u - \bar z\|^2 - \tfrac{1}{2}\bar\zeta \|\bar z\|^2 \Bigr)\\
&\ge p_\lambda.
\end{aligned}
\]
So (7) holds for $\zeta = \zeta'$, $z = 0$ and all $u \in Z$.

Theorem 5 (i) If $(P_\lambda)$ is g.s.2 and $y \in P$ corresponds to $\lambda$, then $y$ is a p.p. maximum of (D).

(ii) Conversely, if there is $y \in W$ belonging to the closure of the attainable set of (P), then, for any $\lambda \in K^\bullet$ corresponding to $y$ (in the definition of $W$), $(P_\lambda)$ is g.s.2.

Proof - (i) By (7) and (3) one has
\[
\langle \lambda, y \rangle = p_\lambda \le \inf_{u \in Z} \Bigl( p_\lambda(u) + \tfrac{1}{2}\bar\zeta \|u - \bar z\|^2 - \tfrac{1}{2}\bar\zeta \|\bar z\|^2 \Bigr) = \inf_{x \in X} L(x, \lambda, \bar\zeta, \bar z).
\]
So $y \in W$ and $\langle \lambda, y \rangle = d_\lambda$; hence $y$ is a p.p. maximum of (D).

(ii) For the given $y$ and $\lambda$, and for $(\bar\zeta, \bar z)$ from the definition of $y \in W$, we have (by Lemma 2)
\[
\langle \lambda, y \rangle \le \inf_{x \in X} L(x, \lambda, \bar\zeta, \bar z) \le \inf \{ \langle \lambda, F(x) \rangle \mid G(x) \in -M \} = p_\lambda.
\]


Since $y$ is in the mentioned closure, the inequalities must be equalities. Then, by (3), we have (7) for each $u \in Z$.

Corollary 2 The duality relation
\[
\inf_{x} \sup_{(\zeta, z)} L(x, \lambda, \zeta, z) = \max_{(\zeta, z)} \inf_{x} L(x, \lambda, \zeta, z)
\]
holds if and only if $(P_\lambda)$ is g.s.2.

Theorem 6 (i) If $(\bar x, \lambda, \bar\zeta, \bar z)$ is a saddle point of $L$, i.e., for each $x \in X$ and $(\zeta, z) \in \mathbb{R}_+ \times Z$ we have
\[
L(\bar x, \lambda, \zeta, z) \le L(\bar x, \lambda, \bar\zeta, \bar z) \le L(x, \lambda, \bar\zeta, \bar z), \tag{8}
\]
then $\bar x$ is a p.p. minimizer of (P) and all $y \in Y$ such that
\[
\langle \lambda, y \rangle = L(\bar x, \lambda, \bar\zeta, \bar z) \tag{9}
\]
are p.p. maxima of (D).

(ii) Conversely, if $\bar x$ is a p.p. minimizer of (P) corresponding to $\lambda \in K^\bullet$ and $(P_\lambda)$ is g.s.2, then there exists $(\bar\zeta, \bar z) \in \mathbb{R}_+ \times Z$ such that (8) holds. (In fact (8) holds if and only if all $y$ satisfying (9) are p.p. maxima of (D).)

Proof - (i) Lemma 2 together with (8) implies that $\bar x$ is feasible and, for all feasible points $x$,
\[
\langle \lambda, F(x) \rangle = \sup_{(\zeta, z)} L(x, \lambda, \zeta, z) \ge L(x, \lambda, \bar\zeta, \bar z) \ge L(\bar x, \lambda, \bar\zeta, \bar z) = \sup_{(\zeta, z)} L(\bar x, \lambda, \zeta, z) = \langle \lambda, F(\bar x) \rangle.
\]


Hence $L(\bar x, \lambda, \bar\zeta, \bar z) = p_\lambda$ and $\bar x$ is a p.p. minimizer of (P). Furthermore,
\[
d_\lambda = \inf_{x} L(x, \lambda, \bar\zeta, \bar z) = L(\bar x, \lambda, \bar\zeta, \bar z) = p_\lambda.
\]
By weak duality, $d_\lambda = L(\bar x, \lambda, \bar\zeta, \bar z)$, and (9) shows that such $y$ belong to $W$ and are p.p. maxima of (D).

(ii) For the given $\bar x$ and $\lambda$ we have, by (7),
\[
\langle \lambda, F(\bar x) \rangle = p_\lambda \le \inf_{x \in X} L(x, \lambda, \bar\zeta, \bar z) \le L(\bar x, \lambda, \bar\zeta, \bar z) \le \sup_{(\zeta, z)} L(\bar x, \lambda, \zeta, z) = \langle \lambda, F(\bar x) \rangle.
\]
This means that the inequalities are in fact equalities, and therefore (8) holds.
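
For the same toy problem used after the stability definitions ($\min x^2$ subject to $1 - x \le 0$, $M = \mathbb{R}_+$, $\lambda = 1$; invented data, used only to illustrate the inequalities (8)), a numerical check confirms that $(\bar x, \lambda, \bar\zeta, \bar z) = (1, 1, 4, -1/2)$ is a saddle point of $L$.

```python
import numpy as np

F = lambda x: x**2
G = lambda x: 1.0 - x                           # feasibility: x >= 1;  M = R_+, lambda = 1

def L(x, zeta, z):
    return F(x) + 0.5*zeta*np.maximum(G(x) - z, 0.0)**2 - 0.5*zeta*z**2

x_bar, zeta_bar, z_bar = 1.0, 4.0, -0.5
L_bar = L(x_bar, zeta_bar, z_bar)               # = 1 = p_lambda

xs    = np.linspace(-5.0, 5.0, 2001)
zetas = np.linspace(0.0, 20.0, 201)
zs    = np.linspace(-5.0, 5.0, 201)

# Right-hand inequality of (8): x_bar minimizes L(., lambda, zeta_bar, z_bar).
assert np.all(L_bar <= L(xs, zeta_bar, z_bar) + 1e-9)
# Left-hand inequality of (8): (zeta_bar, z_bar) maximizes L(x_bar, lambda, ., .).
assert all(L(x_bar, ze, z) <= L_bar + 1e-9 for ze in zetas for z in zs)
print("saddle value L_bar =", L_bar)            # equals p_lambda = d_lambda = 1
```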

For strict converse duality results we need convexity conditions, as follows.

Theorem 7 Let $K$ be pointed, (P) be convex and the attainable set $V$ of (P) be closed. Let, for any $\lambda \in K^\bullet$, $(P_\lambda)$ be inf-stable and satisfy the q.g.c. Then every p.p. maximum $\bar y$ of (D) is also a p.p. minimum of (P).

Proof - We claim first that $\bar y \in V + K$. Indeed, suppose the contrary. Then, since $V + K$ is closed and convex, $\bar y$ can be separated from $V + K$ by some $\lambda_1 \in K^* \setminus \{0\}$, and by Theorem 3 we have
\[
\langle \lambda_1, \bar y \rangle < \inf_{y \in V + K} \langle \lambda_1, y \rangle = \inf_{y \in V} \langle \lambda_1, y \rangle = \inf_{x} \sup_{(\zeta, z)} L(x, \lambda_1, \zeta, z) = \sup_{(\zeta, z)} \inf_{x} L(x, \lambda_1, \zeta, z).
\]


(Note that Theorem 3 remains true for any $\lambda \in K^* \setminus \{0\}$, not only for $\lambda \in K^\bullet$.)

On the other hand, since $\bar y \in W$, there exists $\lambda_2 \in K^\bullet$ such that
\[
\langle \lambda_2, \bar y \rangle \le \inf_{y \in V} \langle \lambda_2, y \rangle. \tag{10}
\]
Hence, for $\lambda_\alpha := (1 - \alpha)\lambda_1 + \alpha\lambda_2 \in K^\bullet$ with any $\alpha \in (0, 1)$,
\[
\langle \lambda_\alpha, \bar y \rangle < \inf_{y \in V} \langle \lambda_\alpha, y \rangle = \sup_{(\zeta, z)} \inf_{x} L(x, \lambda_\alpha, \zeta, z).
\]
Consequently, one can find $(\bar\zeta, \bar z) \in \mathbb{R}_+ \times Z$ such that
\[
\langle \lambda_\alpha, \bar y \rangle < \inf_{x} L(x, \lambda_\alpha, \bar\zeta, \bar z).
\]
This yields the existence of some $y' \in W$ with $y' \in \bar y + K \setminus \{0\}$, contradicting the maximality of $\bar y$. Thus $\bar y \in V + K$. Now suppose $\bar y \notin V$ and $\bar y = y_0 + k$ with $y_0 \in V$ and $0 \ne k \in K$. Then, by the pointedness of $K$, $\langle \lambda_2, y_0 \rangle < \langle \lambda_2, \bar y \rangle$. This contradiction to (10) shows that $\bar y \in V \cap W$. By Corollary 1 we are done.

Example (Convexity is essential): Let $X = \mathbb{R}$, $Y = Z = \mathbb{R}^2$. Take $K = M = \mathbb{R}^2_+$, $G(x) \equiv 0$, $F_2(x) = x$ and
\[
F_1(x) =
\begin{cases}
x & \text{if } x \ge 0,\\
\sqrt{-x} & \text{if } -1 < x < 0,\\
-x & \text{if } x \le -1.
\end{cases}
\]
Then Problem (P) is unconstrained and easy to solve. However, using our augmented Lagrangian the dual (D) can still be defined. A simple calculation shows that
\[
\min_{x} \max_{(\zeta, z)} L(x, \lambda, \zeta, z) = \max_{(\zeta, z)} \min_{x} L(x, \lambda, \zeta, z) =
\begin{cases}
0 & \text{if } \lambda_1 \ge \lambda_2,\\
-\infty & \text{if } \lambda_1 < \lambda_2,
\end{cases}
\]


where $(\lambda_1, \lambda_2) \in K^\bullet = \operatorname{int} \mathbb{R}^2_+$.

An elementary argument shows that all points of the half-line $AB$, together with the point $(0, 0)$, are p.p. minima of (P), while the p.p. maxima of (D) consist of all points of the half-line $OB$. So the p.p. maxima of (D) lying on the open segment $OA$ are not p.p. minima of (P).

[Figure: the image $F(X)$ and the set $W$ in the $(y_1, y_2)$-plane, with the points $A$, $B$, $C$ and the values $1$, $-1$ marked on the axes.]
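
The values quoted in this example are easy to reproduce numerically. Since $G \equiv 0$, Lemma 2 shows that the supremum of the penalty part over $(\zeta, z)$ is $0$ (attained at $z = 0$), so both sides of the min-max relation reduce to $\min_x \langle \lambda, F(x) \rangle$; the sketch below (an illustration only) evaluates this minimum on grids of increasing width for one $\lambda$ with $\lambda_1 \ge \lambda_2$ and one with $\lambda_1 < \lambda_2$.

```python
import numpy as np

def F1(x):
    # F1 from the example: x for x >= 0, sqrt(-x) for -1 < x < 0, -x for x <= -1
    return np.where(x >= 0, x, np.where(x > -1.0, np.sqrt(-np.minimum(x, 0.0)), -x))

def scalarized(x, lam):                 # <lam, F(x)> with F = (F1, F2), F2(x) = x
    return lam[0]*F1(x) + lam[1]*x

for lam in (np.array([2.0, 1.0]), np.array([1.0, 2.0])):   # lam1 >= lam2, then lam1 < lam2
    for T in (10.0, 100.0, 1000.0):
        xs = np.linspace(-T, 5.0, 200001)
        print(lam, T, scalarized(xs, lam).min())
# For lam1 >= lam2 the minimum stays near 0 (attained at x = 0);
# for lam1 < lam2 it behaves like -T, i.e. tends to -infinity as the grid widens.
```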

References

[1] Bertsekas D.P. (1982) Constrained optimization and Lagrange multiplier methods, Academic Press, New York.

[2] Boukari D. and Fiacco A.V. (1995) Survey of penalty, exact-penalty and multiplier methods from 1968 to 1993, Optimization, 32, 301-334.

[3] Conn A.R., Gould N., Sartenaer A. and Toint P.L. (1996) Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints, SIAM J. Optim., 6, 674-703.


[4] Craven B.D. (1989) A modified Wolfe dual for weak vector optimization, Numer. Funct. Anal. Optim., 10, 899-907.

[5] Craven B.D. and Glover B.M. (1985) Invex functions and duality, J. Australian Math. Soc., 39 A, 1-20.

[6] Dolecki S. and Kurcyusz S. (1978) On φ-convexity in extremal problems, SIAM J. Control Optim., 16, 277-300.

[7] Dolecki S. and Malivert C. (1993) General duality in vector optimization, Optimization, 27, 97-119.

[8] Holmes R.B. (1972) A course on optimization and best approximation, Lec. Notes Math. 257, Springer, Berlin.

[9] Ito K. and Kunisch K. (1990) The augmented Lagrangian method for equality and inequality constraints in Hilbert spaces, Math. Prog., 46, 341-360.

[10] Ito K. and Kunisch K. (1996) Augmented Lagrangian SQP methods in Hilbert spaces and application to control in the coefficients problems, SIAM J. Optim., 6, 96-125.

[11] Khanh P.Q. (1995) Invex-convexlike functions and duality, J.O.T.A., 87, 141-165.

[12] Khanh P.Q. (1995) Sufficient optimality conditions and duality in vector optimization with invex-convexlike functions, J.O.T.A., 87, 359-378.

[13] Kiwiel K.C. (1996) On the twice differentiable cubic augmented Lagrangian, J.O.T.A., 88, 233-236.

[14] Luc D.T. (1989) Theory of vector optimization, Lec. Notes Economics Math. Systems, Springer, Berlin.

[15] Pini R. and Singh C. (1997) A survey of recent [1985-1995] advances in generalized convexity with applications to duality theory and optimality conditions, Optimization, 39, 311-360.


[16] Rockafellar R.T. (1974) Augmented Lagrange multiplier functions and duality in nonconvex programming, SIAM J. Control, 12, 268-285.

[17] Sach P.H. and Craven B.D. (1991) Invex multifunctions and duality, Numer. Funct. Anal. Optim., 12, 575-591.

[18] Singh C., Bhatia D. and Rueda N. (1996) Duality in nonlinear multiobjective programming, J.O.T.A., 88, 659-670.

[19] Song W. (1997) Lagrangian duality for minimization of nonconvex multifunctions, J.O.T.A., 93, 167-182.

[20] Weir T. and Mond B. (1989) Generalized convexity and duality in multiple-objective programming, Bull. Australian Math. Soc., 39, 287-299.

[21] Wierzbicki A.P. and Kurcyusz S. (1977) Projection on a cone, penalty functionals and duality theory for problems with inequality constraints in Hilbert space, SIAM J. Control Optim., 15, 25-56.
