Nonlinear and unconstrained multiple-objective optimization: Algorithm, computation, and application



Nonlinear and Unconstrained Multiple-Objective Optimization: Algorithm, Computation,

and Application

Asim Roy Arizona State University

Jyrki Wallenius Helsinki School of Economics

An extension of the Zionts-Wallenius procedure, providing a unified approach to solving several classes of multiple-objective optimization problems, is presented. The classes of problems addressed are linear programming, nonlinear programming, and unconstrained optimization. The method and its extensions are described, implemented on a computer, subjected to extensive computational testing, and applied to a quality-control problem.

1. INTRODUCTION

In a recent paper we presented an algorithm and the supporting theory extending the Zionts and Wallenius [16, 17] interactive method for solving multiple-objective nonlinear programming problems (Roy and Wallenius [12]). The algorithm is based on the use of the generalized reduced gradient (GRG) method to deal with nonlinear multiple-objective programming problems (see also Abadie and Carpentier [1], Lasdon, Fox, and Ratner [3], and Sadagopan and Ravindran [13]). In this article we overview the algorithm and describe several extensions. We also present computational results and an application to quality control. The necessary theoretical results can be found in Roy and Wallenius [12].

As in Roy and Wallenius [12] we assume that the problem addressed is significant, judging by the continuing research interest in finding a workable approach to solving the problem (see, for instance, Loganathan and Sherali [5], Rosenthal [9], Steuer [14], or the references at the end of this article).

The problem under consideration involves a set of n decision variables represented by the vector X, constrained by m (possibly) nonlinear constraints. We represent the constraints algebraically as follows:


g_k(X) = 0,  k = 1, …, m,

l_j ≤ x_j ≤ u_j,  j = 1, …, n,    (1)

Naval Research Logistics, Vol. 38, pp. 623-635 (1991). Copyright © 1991 by John Wiley & Sons, Inc. CCC 0028-1441/91/040623-13$04.00


where l_j and u_j are given lower and upper bounds. The vector X contains both the structural and the slack variables. The constraint functions g_k(X) are assumed to be continuously differentiable, and the feasible set defined by (1) is assumed to be a nonempty, compact, and convex subset of R^n. If m = 0, the problem becomes unconstrained. The decision situation involves a single decision maker (DM) who has p continuously differentiable and concave objectives. We write these objectives as u = f(X), where f(X) is a vector of real-valued objective functions, the components of which are f_i(X), f_i: R^n → R, i = 1, …, p.

Without loss of generality, we assume that the objectives are all to be maximized. The DM is assumed to have only an implicit value function of these multiple objectives, with no explicit knowledge of the function he/she wishes to maximize. Let U(f_1(X), …, f_p(X)) be the unknown, implicit value function, assumed to be concave and continuously differentiable. The DM is interested in maximizing U(f_1(X), …, f_p(X)) subject to the constraints in (1).

In our extended Zionts and Wallenius (ZW) method, the actual value function of the DM is approximated either by a linear or by a nonlinear (concave) proxy. We generally approximate it by an unknown, linear function λ'f(X) of the objectives, where λ is an unknown p-vector of multipliers.

The rest of the article is organized as follows. In Section 2 we overview the algorithm and describe several extensions. In Section 3 we present the termination conditions. In Section 4 we report on computational tests of the method. In Section 5 we describe an experimental application of the algorithm to quality control. In Section 6 we present the conclusions of the article. A numerical example is solved in the appendix.


2. THE ALGORITHM

Our algorithm generally proceeds as follows:

1. Choose an arbitrary set of positive multipliers λ_i > 0 (i = 1, …, p) initially, and generate a composite linear value function λ'f(X) using those multipliers.

2. Using the current set of weights {λ_i}, solve the nonlinear programming problem whose constraints are Eqs. (1) and whose objective is to maximize λ'f(X). The optimal solution will be an efficient solution in terms of the objective function vector.

3. Compute the reduced gradients for each of the objective functions with respect to the nonbasic variables at the current solution. Determine which of the nonbasic variables offer efficient tradeoffs (see Zionts and Wallenius [16]).

4. Present the tradeoffs associated with the efficient nonbasic variables to the DM. He/she should indicate "yes" (the tradeoff is preferred), "no" (the tradeoff is not preferred), or "indifferent" (it is difficult to determine whether or not he/she would like the tradeoff).

5. If the responses indicate that the current solution is optimal, stop. Otherwise go to step 6.

6. Find, through an optimization, a new set of weights λ > 0 consistent with all previous responses (if one exists). If the new λs are not significantly different from the last two sets, stop; the solution is "nearly" optimal. Otherwise, return to step 2.
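As an illustration only (the original implementation was in FORTRAN), the six steps can be sketched for a toy unconstrained case (m = 0) with one decision variable and two concave objectives. The bisection update of λ_1 below is our simplified stand-in for the lambda problem of step 6, and the DM is simulated by a hidden linear value function; all names are ours, not the authors'.

```python
# A hypothetical, minimal sketch of steps 1-6 for an unconstrained (m = 0) toy
# problem: maximize U = mu1*f1 + mu2*f2 with f1 = -(x-1)^2 and f2 = -(x-3)^2.
# The DM's true weights mu are hidden from the procedure.

def demo(mu=(0.7, 0.3), iters=50):
    lo, hi = 0.0, 1.0                        # weights for lambda_1 still consistent
    x = None
    for _ in range(iters):
        lam1 = (lo + hi) / 2.0               # steps 1/6: pick consistent weights
        x = lam1 * 1.0 + (1.0 - lam1) * 3.0  # step 2: argmax of lam'f (closed form)
        w1, w2 = -2.0 * (x - 1.0), -2.0 * (x - 3.0)  # step 3: tradeoff of raising x
        dU = mu[0] * w1 + mu[1] * w2         # steps 4-5: simulated DM response
        if abs(dU) < 1e-12:
            break                            # "indifferent": optimality holds, stop
        if dU > 0:                           # "yes": raising x helps, so shift
            hi = lam1                        #   weight toward f2 (smaller lambda_1)
        else:                                # "no": lowering x helps
            lo = lam1
    return x

print(round(demo(), 4))   # the simulated DM's optimum is x = mu1*1 + mu2*3 = 1.6
```

The weights consistent with all responses shrink to an interval around the DM's true weights, mirroring how the lambda problem narrows the feasible weight set.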

REMARK 1: In step 5, if all tradeoffs with respect to efficient nonbasic variables at bounds are unattractive, and all tradeoffs with respect to efficient nonbasic variables between bounds are viewed indifferently, an optimal solution has been found (Lemma 1, Section 3).


REMARK 2: The fundamental idea of the GRG methods is to express, at each iteration, m of the variables y (called basic or dependent variables) in terms of the remaining n - m nonbasic (independent) variables x (as in the simplex method), where X = (x, y). Then the objective function is expressed in terms of the nonbasic variables x [denoted F_i(x)], and the reduced gradient of the objective function is calculated with respect to the nonbasic variables.

REMARK 3: In step 6, for each nonbasic variable x_j, let w_ij represent the jth component of the reduced gradient of F_i. When a tradeoff [w_1j, …, w_pj] is (strongly) preferred, we construct an inequality of the form (where ε is a sufficiently small positive number)

Σ_{i=1}^{p} w_ij λ_i ≥ ε.    (2)

When a tradeoff is not preferred, we construct a reverse inequality of the form

Σ_{i=1}^{p} w_ij λ_i ≤ -ε.    (3)

The problem is then to find a solution that fulfills constraints (2), (3), and Σ_i λ_i = 1, λ_i ≥ ε, i = 1, …, p.

REMARK 4: If all constraints and objective functions are linear, the method is equivalent to the ZW method [16].

REMARK 5: Since the feasible region defined by (1) is assumed to be convex and the objective function λ'f(X) to be maximized is concave, the optimization problem in step 2 is a convex programming problem, and a local maximum is also a global maximum.

REMARK 6: In the event that a DM is inconsistent in the responses, the lambda problem in step 6 has no feasible solution. However, inconsistency is immediately detected by the method and reported to the DM, who can correct his/her behavior accordingly.

REMARK 7: Throughout this article we use the term “nearly” optimal somewhat loosely. By a nearly optimal solution we mean a solution that is very close (say within first-decimal-place accuracy) to a globally optimal solution. For most practical uses, such solutions would certainly be “close enough.” In our tests, we could always verify the “near” optimality of a solution, as the optimal solution was known. For details see Roy and Wallenius [12].

Reestimating the Proxy: The Lambda Problem

In the ZW method, the DM's responses to tradeoff questions are used to construct (linear) constraints to restrict the choice of weights λ. An exception is the "I don't know" category of responses, which is ignored by the (original) method.

Essentially, each response generates a constraint expressing the fact that the marginal change in the DM's value function corresponding to that tradeoff is either positive (when the tradeoff is preferred) or negative (when the tradeoff is not preferred) by at least some small amount ε. Hence, a new set of weights should be consistent with these "facts." The marginal change in U, the unknown value function, corresponding to a tradeoff can be expressed as

∂U/∂x_j = Σ_{i=1}^{p} (∂U/∂F_i)(∂F_i/∂x_j) = Σ_{i=1}^{p} w_ij (∂U/∂F_i).    (4)

If we work with a linear proxy (U(f(X)) = λ'f(X)), then

∂U/∂F_i = λ_i and ∂U/∂x_j = Σ_{i=1}^{p} w_ij λ_i.    (5)

For each nonbasic variable x_j, w_ij represents the jth component of the reduced gradient of F_i. When a tradeoff vector (w_1j, …, w_pj) is strongly preferred, we construct an inequality constraint of the form (2). When a tradeoff vector is strongly not preferred, we construct an inequality constraint of the form (3).
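For the two-objective case (p = 2), the weight-finding problem defined by (2) and (3) can be sketched as a one-dimensional search that maximizes the smallest constraint margin, in the spirit of the "middlemost" formulation used in the appendix. The function name and the grid search (instead of an LP solver) are our own simplification.

```python
def lambda_problem(preferred, not_preferred, lam_min=0.001, steps=100000):
    """For p = 2 objectives, find weights (lam1, lam2), lam1 + lam2 = 1, that
    maximize the smallest margin of the response constraints:
       sum_i w_i * lam_i >= margin   for each preferred tradeoff w,
      -sum_i w_i * lam_i >= margin   for each non-preferred tradeoff w."""
    best_lam, best_margin = None, None
    for k in range(steps + 1):
        lam1 = lam_min + (1.0 - 2.0 * lam_min) * k / steps
        lam = (lam1, 1.0 - lam1)
        margins = [w[0] * lam[0] + w[1] * lam[1] for w in preferred]
        margins += [-(w[0] * lam[0] + w[1] * lam[1]) for w in not_preferred]
        m = min(margins) if margins else 0.0
        if best_margin is None or m > best_margin:
            best_lam, best_margin = lam, m
    return best_lam, best_margin

# With the single "yes" tradeoff from the appendix example, the weights are
# pushed to the boundary lam1 = 0.999, as in the paper's worked example.
lam, margin = lambda_problem(preferred=[(5.2, -5.436)], not_preferred=[])
```

In practice an LP solver would replace the grid; with ties, the grid simply keeps the first maximizer it encounters.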

To speed up the convergence to a "good" solution, we have experimented with a variation of the lambda generation problem. It is based on the use of strength of preference (SOP) information, as suggested by Malakooti [6]. In this variation, a DM is allowed to express the degree of desirability of a tradeoff at two levels: strongly desirable (or not) or weakly desirable (or not). When a tradeoff vector is weakly preferred (or weakly not preferred), no constraint is generated. Instead, such preference information is incorporated in the objective of the lambda problem as a quadratic "penalty" term. (See Roy and Wallenius [12].)

The “Lambda-Doubling” Heuristic and the “No-Lambda-Change” Stopping Rule

To speed up convergence to an optimal or most preferred solution located on the surface of a binding (nonlinear) constraint, we developed and implemented a simple heuristic, called lambda doubling. In this heuristic, if each λ_i value changed twice in the same direction in step 6 of the method, we would double the last change in the λ_is to obtain new λ_is [i.e., new λ_i = (λ_i from step 6) + 2 × (last change in λ_i)]. (Thus, one needs to check the last three consecutive lambda sets.) These modified λs are tested for feasibility [using constraints (2), (3), and Σλ_i = 1] before using them in step 2.
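The doubling rule itself is a small extrapolation on the last three weight vectors; a sketch (function name ours; the feasibility test against (2), (3), and Σλ_i = 1 would follow separately):

```python
def lambda_double(history):
    """Apply the lambda-doubling heuristic to the last three weight vectors.
    If every component changed twice in the same direction, return the
    extrapolated weights new_i = last_i + 2*(last change); otherwise None."""
    if len(history) < 3:
        return None
    a, b, c = history[-3], history[-2], history[-1]
    for x, y, z in zip(a, b, c):
        if (y - x) * (z - y) <= 0:   # not two moves in the same direction
            return None
    return [z + 2.0 * (z - y) for y, z in zip(b, c)]
```

For example, weights moving 0.5 → 0.6 → 0.7 would be extrapolated to 0.9, skipping ahead along the trend instead of waiting for step 6 to crawl there.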

In step 6 of the method, the system terminates automatically if the lambdas do not change sufficiently for two consecutive iterations (we currently use 1.0E-3 as the tolerance). Essentially, this means that the last solution (in step 2) is nearly optimal. This no-lambda-change stopping usually occurs when a DM is close to his or her preferred solution and the last few responses were indications of "weak" preferences.
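The stopping rule reduces to a check on the last three weight vectors (a sketch, with the 1.0E-3 tolerance quoted above; the function name is ours):

```python
def no_lambda_change(history, tol=1.0e-3):
    """True when the weights changed by less than tol (in every component)
    for two consecutive iterations, i.e., across the last three lambda sets."""
    if len(history) < 3:
        return False
    a, b, c = history[-3], history[-2], history[-1]
    close = lambda u, v: max(abs(x - y) for x, y in zip(u, v)) < tol
    return close(a, b) and close(b, c)
```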

Concave Proxies

In the case of a concave, implicit value function, we proceed as above. That is, we work with a linear approximation (proxy) to the value function. In theory, we should periodically purge old constraints in the lambda problem (given infeasible λs), but in our experience this has not been a problem (see Section 4). Surprisingly often, a large number of old λ constraints are not binding at later iterations (a form of "self-purging"). However, when the optimal solution is on a hyperplane and the objectives are linear, a linear proxy of the value function cannot be used to locate it. Typically, the method would cycle from one extreme point to another on that hyperplane. In such a case, we switch to a nonlinear proxy of the value function, solve for its parameters, and continue with step 2 (see, for example, Oppenheimer [8]). Otherwise, we operate with a linear proxy. Switching to a nonlinear proxy only when needed allows the lambda problem to remain simple (that is, linear) through most of the iterations. We do not know from our tests whether the choice of a particular proxy leads to fewer overall questions under all circumstances; that is, the effect of a proxy on the progress of the procedure is unknown. Further computational tests on many different problems could answer that question. The linear proxy is appealing as the first choice because of the computational advantage in solving the linear lambda problem. In addition, whether we use a nonlinear proxy throughout or switch from a linear to a nonlinear proxy as needed, the final solution obtained is the same.

When a nonlinear (concave) proxy is used, the lambda-problem constraints become nonlinear functions of the parameters. For example, if a sum-of-exponentials function is used as the proxy, then

where a_i and λ_i (i = 1, …, p) are the parameters. Then

and

Thus, when a tradeoff vector is strongly preferred, the inequality constraint (2) takes the form


And when a tradeoff is strongly not preferred, the inequality constraint (3) becomes

Similar changes occur for the weak-preference penalty terms in the objective function. As can be seen, nonlinear proxies result in higher-order models; hence they require more parameters to fit. Because an ordinal preference function is unique up to an increasing monotonic transformation, the constant a_1 (for the sum-of-exponentials function) can arbitrarily be set equal to one. The remaining 2p - 1 parameters, a_2, …, a_p, λ_1, …, λ_p, must be calculated from the lambda problem (tradeoff responses). For the sum-of-exponentials proxy, the nonlinear programming problem in step 2 becomes

subject to the constraints in (1). Other concave value functions, such as the sum-of-powers function,

U(f(X)) = Σ_i a_i f_i(X)^b_i, or the Cobb-Douglas function, U(f(X)) = Σ_i a_i ln f_i(X), could also serve as proxies for the DM's real value function. The lambda problem would then have to be appropriately formulated. Note, however, that in our procedure we downplay the importance of the proxy. It is never assumed to represent the true preference function; it only serves as a mechanism to guide the search for an optimal solution.
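As a concrete illustration, one common form of the sum-of-exponentials proxy is U(f(X)) = -Σ_i a_i e^(-λ_i f_i(X)), which gives ∂U/∂F_i = a_i λ_i e^(-λ_i f_i(X)). This exact functional form is our assumption (in the spirit of Oppenheimer [8]), not a quotation from the paper; under it, the proxy value and the marginal change (4) for a tradeoff w can be sketched as:

```python
import math

# Hypothetical sketch of a sum-of-exponentials proxy and its tradeoff margin;
# the form U = -sum_i a_i * exp(-lam_i * f_i) is an assumption, not the paper's.

def proxy_value(a, lam, f):
    """U(f(X)) = -sum_i a_i * exp(-lam_i * f_i(X)); concave and increasing
    in each f_i when a_i > 0 and lam_i > 0."""
    return -sum(ai * math.exp(-li * fi) for ai, li, fi in zip(a, lam, f))

def tradeoff_margin(a, lam, f, w):
    """dU/dx_j = sum_i w_ij * dU/dF_i, with dU/dF_i = a_i*lam_i*exp(-lam_i*f_i).
    A positive value means the DM should prefer the tradeoff w."""
    return sum(wi * ai * li * math.exp(-li * fi)
               for wi, ai, li, fi in zip(w, a, lam, f))
```

A strongly preferred tradeoff then yields the nonlinear constraint tradeoff_margin(a, lam, f, w) ≥ ε on the parameters, replacing the linear constraint (2).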

3. TERMINATION CONDITIONS

Lasdon et al. [3] have shown that the Kuhn-Tucker conditions for {max U(f_1(X), …, f_p(X)) s.t. (1)} are the same as the optimality conditions for the reduced problem, namely,

1. ∂U/∂x_j = 0 for x_j strictly between its bounds;
2. ∂U/∂x_j ≤ 0 for x_j at its upper or lower bound.

(Note that for a variable x_j at its upper bound, the sign of the tradeoff vector has been changed to correspond to a feasible direction of motion.)

Because the composite objective function U (the value function) is not known explicitly by the DM in this case, the optimality conditions can only be tested indirectly (in step 5 of the method). By obtaining responses to the tradeoff questions, we can evaluate the marginal change in U using the equation

∂U/∂x_j = Σ_{i=1}^{p} (∂U/∂F_i)(∂F_i/∂x_j).


If the marginal change is positive, the DM will prefer the tradeoff associated with that variable. If it is negative, he/she will not prefer the tradeoff. If it is zero, he/she will be indifferent to the tradeoff. The optimality conditions, recast in terms of the DM's responses, are stated as Lemma 1.

LEMMA 1: Solution x* is an optimal solution to the problem {max U(f_1(X), …, f_p(X)) s.t. (1)} if the following conditions hold:

1. the tradeoff vector ∂F/∂x_j is not preferred, if the nonbasic variable x_j* is at its lower or upper bound;

2. the DM is indifferent to the tradeoff vector, if the nonbasic variable x_j* is between its bounds.

PROOF: See Roy and Wallenius [12].

As in the original ZW method, only efficient tradeoff vectors are presented in step 5 for the optimality test.
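The optimality test of step 5 can be sketched as a direct reading of Lemma 1; the data structure for the DM's answers is our own invention:

```python
def responses_indicate_optimal(responses):
    """responses: one (at_bound, answer) pair per efficient nonbasic variable,
    where at_bound is True if the variable sits at a bound and answer is one
    of "yes", "no", "indifferent". Per Lemma 1, the solution is optimal when
    every at-bound tradeoff is refused and every between-bounds tradeoff
    draws indifference."""
    for at_bound, answer in responses:
        if at_bound and answer != "no":
            return False
        if not at_bound and answer != "indifferent":
            return False
    return True
```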

Convergence

The method converges to an optimal (preferred) solution independent of the form of the proxy used (linear or nonlinear). For example, for a linear proxy, the set of weights (λ) is nonrepeating, because the tradeoff responses exclude the current solution from the set of feasible solutions at subsequent iterations. The feasible region for the lambda problem (for any proxy) is reduced by the tradeoff responses that are added as constraints (or penalty terms). For a proof using a linear proxy, see Roy and Wallenius [12].

In our tests, using simulated value functions, the algorithm has always converged to an optimal or a nearly optimal solution in a finite number of iterations.

4. COMPUTATIONAL TEST OF THE METHOD

A computer program implementing the method was developed in FORTRAN. The basic idea was to verify that the method works and to test several new variations and heuristics. The program uses Lasdon and Warren's GRG2 code [4] to solve the required optimization problems. In this section we report on the results of solving 10 test problems. The problems were either taken from the literature or constructed by us (Table 1). One of the problems is real (Musselman and Talavage [7]). Some of them were highly nonlinear and notoriously difficult to solve (especially the Himmelblau [2] problems). In fact, for some of the problems, we did not know whether the feasible set was convex. The entries p x m x n in Table 1 mean that the problem consists of p objectives, m constraints, and n variables. We investigated the effect of using strength of preference (SOP) information (that is, weak and strong preferences) and the lambda-doubling technique on the rate of convergence of the method. Each


Table 1. The test problems.

Problem number | Reference                  | p x m x n   | Problem objectives | Problem constraints
1^a            | Example in appendix        | 2 x 3 x 5   | L, NL              | L, NL
2              | Constructed by us          | 2 x 1 x 3   | L                  | NL
3              | Constructed by us          | 2 x 0 x 2   | NL                 | none
4              | Zionts and Wallenius [16]  | 3 x 2 x 6   | L                  | L
5^a            | Himmelblau [2, Prob. 11]   | 2 x 3 x 8   | L, NL              | NL
6^a            | Himmelblau [2, Prob. 16]   | 3 x 13 x 22 | L, NL              | NL
7              | Musselman and Talavage [7] | 5 x 7 x 10  | L, NL              | NL
8              | Constructed by us          | 3 x 2 x 5   | L                  | L, NL
9              | Constructed by us          | 3 x 1 x 4   | L                  | NL
10             | Constructed by us          | 2 x 3 x 5   | NL                 | L, NL

Key: L = linear, NL = nonlinear.
^a The constraint set was taken from the referenced source; the objective functions were constructed by us.

problem was solved 20 times (five times without using SOP information in the lambda problem and without using the lambda-doubling technique, five times using SOP information and without using the lambda-doubling technique, etc.).

The results are presented in Table 2. A linear proxy with (different) random initial and final weights was used to simulate choices. The final weights corresponded to the DM's actual weights and were used to respond to the tradeoff questions. As the number of questions asked can be affected by the choice of

Table 2. Computational results.

                 Mean number of iterations     Mean number of questions
Problem number    A      B      C      D        A      B      C      D
1                 7.0    5.6    7.8    6.0     15.1   10.8   17.2   11.9
2                 6.6    5.8    5.8    5.4      7.0    6.2    6.2    5.8
3                 8.4    5.8    9.4    5.8      9.2    5.8   10.4    5.8
4                 1.2    1.8    1.2    1.6      5.4    5.6    5.4    5.4
5                 1.0    1.0    1.0    1.0      2.0    2.0    2.0    2.0
6                 2.2    -      -      2.4      9.8    -      -     10.0
7                 8.0    7.6    8.2    5.0     18.0   17.2   18.4   12.0
8                 1.6    2.0    1.6    2.0      5.2    4.8    5.2    4.8
9                11.6    5.6    8.8    5.8     25.2   11.2   19.6   11.6
10                5.2    3.6    6.2    3.8      6.2    4.6    7.2    4.8

Grand mean        5.62*  4.31   5.56   4.04*  10.37*  7.58  10.18   7.12*

Percentage increase in iterations (columns A-D): 39.1, 6.7, 37.6, -
Percentage increase in questions (columns A-D): 45.6, 6.5, 43.0, -

Key:
A = without λ doubling, without SOP information
B = without λ doubling, with SOP information
C = with λ doubling, without SOP information
D = with λ doubling, with SOP information

*Excluding problem 6 to allow comparison to columns B and C.


initial (initial linear proxy) and final weights (e.g., choosing very close initial and final weights can lead to quick convergence), these weights were picked randomly five times for each of the four experiments in Table 2 to remove any bias in the weight selection. An indifference response and a weak preference response (if applicable) were given, respectively, if the percentage change in the value function associated with a tradeoff was less than 0.1% (indifference) or between 0.1% and 1% (weak preference).

Table 2 indicates that the method converges rapidly to an optimal or a nearly optimal solution. The use of SOP information in the lambda problem (column B) considerably reduces the number of questions and iterations. The sign test indicates that the results are statistically significant at the 1% level. The lambda-doubling technique by itself (column C) does not appear to speed up convergence. However, when used together with SOP information (column D), the results are overall slightly better than without lambda doubling (column B) (statistically not significant). The use of SOP information in the lambda problem, in general, causes the algorithm to generate nearly optimal solutions. In practice, the error is usually small (all actual "optimal" solutions were found independently by solving a single-objective problem in which the final weights were used to construct the value function to be maximized). Based on the results, we recommend that SOP information be used with the algorithm. Also, in general, there is no harm and not much computational cost in using the lambda-doubling technique. In about 20% of our tests, it resulted in major savings (say, 30% fewer questions). Its inclusion in the algorithm is therefore justified.

In general, the easiest problems to solve were those whose optimum was at an "extreme point." In such cases, as in linear programming, it was not necessary to identify the weights precisely to find an optimal solution. Convergence was slower if an optimal solution appeared on the surface of a nonlinear constraint. Usually, when weak preference responses were used, the method converged to a nearly optimal solution relatively quickly (say, in 10 questions), but generating an optimal solution (say, within two to three decimal places) required perhaps 50-75% more questions.

We have also used the method to solve a small number of problems having more general, concave value functions in order to verify experimentally that it works. In particular, the following three functions were tested: Cobb-Douglas, sum of powers, and sum of exponentials. Problems 6, 7, and 9 were solved five times using each of the above value functions to simulate choices. In general, the method converged to an optimal or a nearly optimal solution almost as rapidly as (sometimes even more rapidly than) with a linear value function. To solve problem 4 with a concave value function, we had to switch to a concave proxy function, because the optimal solution was on a hyperplane. The optimality of the solutions found was independently verified in all cases.

5. APPLICATION TO QUALITY CONTROL

The determination of an inspection plan for a continuous production system was set up as a bicriteria problem. One objective was to minimize the expected unit cost of inspection and replacement. The unit cost included the cost of inspecting an item during production, the cost of replacing a defective item


during inspection, and the cost of replacing defective items when returned by customers (or the next production line). The other objective was to minimize the average outgoing percentage of defectives. Both of these objectives were highly nonlinear functions of two decision variables: (1) the rate at which random sampling of the production line is to take place and (2) the extent of 100% sampling (free of defectives) before switching back to random sampling. The inspection process resorts to 100% sampling whenever a defective item is found (see Roy, Liu, and Keats [11]). There were no constraints in the problem, except for bounds on the decision variables.

For the experiment, five data files were created with significantly different data values. Six persons were asked to use our interactive system for each situation and determine a most preferred inspection plan. Three of them were quality-control professionals with over 15 years of experience. Three others were experienced faculty members with quality control as one of their main areas of research.

It is apparent from the results that the subjects had nonlinear value functions, because the final weights (λ_1 and λ_2) for each person changed as the data set was changed. A nonlinear value function is, of course, expected for this problem. In fact, for all subjects, we had no idea about the exact nature of their value function before or after the experiment.

The average number of questions posed was about six. Most tradeoff questions were easily answered in less than a minute. In many cases, the questions were answered simply by looking at which objective would increase or decrease, without studying the exact tradeoff. The session for each person lasted about one and a half hours.

The weak preference response was used quite frequently by subjects, which justifies its incorporation in the method. In many cases, the users stated that they were looking for better solutions, could not find one, and backtracked.

We also carried out some sensitivity analysis regarding the last data file. The results demonstrate that a linear proxy to a nonlinear value function, once captured, can be reused to find a most preferred solution when the data are slightly changed. Thus, for repetitive bicriteria decision-making situations, such as quality control, a high-level manager could be asked to identify the preferred solution using the ZW procedure, a proxy to the value function obtained, and the proxy reused at a lower level of decision making to find the "preferred" or "expert" solution for similar problems. Observe, however, that the problem was effectively unconstrained, so a linear proxy could be used to generate any efficient (not just extreme-point) solution.

6. CONCLUSION

In this article we have developed and tested a method for solving multiple-criteria optimization problems involving nonlinear constraints and/or nonlinear objective functions. The method is based on earlier work of Zionts and Wallenius [16, 17], and generalizes their original approach to a wider class of nonlinear multiple-criteria optimization problems. The extended method maintains the simplicity of the original method, and we believe it compares favorably with


other existing approaches. Further statements require additional work. Possible areas of application are numerous, including engineering design, water resources management, quality control, and policy analysis, among others. Such problems do have multiple objectives, and they are often highly nonlinear. In contrast to linear programming problems, the number of constraints is, in general, not large. As several individuals have expressed an interest in using the method in problem solving, we plan to incorporate the program as part of a commercially available decision support system, such as IFPS/OPTIMUM.

APPENDIX: AN EXAMPLE

(Source: Lasdon et al. [3].) Consider the following set of constraints:

X1 - X2 - X3 = 0,

-X1^2 + X2 - X4 = 0,

X1 + X2 - X5 - 1 = 0,

X2 ≤ 0.8,

X_j ≥ 0 (j = 1, …, 5),

where X3, X4, and X5 are slack variables, and the following objective functions, both to be maximized:

f1 = -2X1^2 - 2X2^2,

f2 = 4X1 + 3.2X2 - 3.28.

For the example, assume that the (implicit) value function is 0.999 f1 + 0.001 f2, but use the knowledge of this function only in answering the tradeoff questions. Initially, we arbitrarily choose λ1 = λ2 = 0.5 and maximize 0.5 f1 + 0.5 f2 subject to the problem constraints. The optimal solution is (X1 = 0.894, X2 = 0.8); the second constraint is binding. For the basic variable, we may choose either X1 or X2. We choose X1 because it is strictly between its bounds. Then the nonbasic variables are X2 and X4. From the second constraint (the only binding constraint), solve for X1 in terms of X2 and X4, and substitute it into f1 and f2.¹ Then calculate the (reduced) gradients of f1 and f2 with respect to the nonbasic variables X2 and X4. At this point, the gradients are ∇f1 = (5.2, 2) and ∇f2 = (-5.436, -2.236). Therefore, the tradeoffs associated with X2 and X4 are w2 = (5.2, -5.436) and w4 = (2, -2.236), respectively. Upon checking, the latter set of tradeoffs is not efficient (w2 rescaled is (2.139, -2.236), and therefore w2 offers a better tradeoff than w4) and need not be presented to the DM. The first set of tradeoffs is regarded favorably by the DM, using the value function stated above. To find a consistent set of (new) λs (Zionts and Wallenius [17]), solve the following linear programming problem, where ε is a variable (using the "middlemost" formulation of the lambda problem):

max ε

s.t. 5.2 λ1 - 5.436 λ2 ≥ ε,
     λ1 + λ2 = 1,
     λ1, λ2 ≥ 0.001.

The optimal solution is λ1 = 0.999, λ2 = 0.001 (max ε = 1.9958), and the associated solution of the original problem (with the updated λs) is (X1 = 0.5001, X2 = 0.4999) (the third constraint binding). The solution is very close to the intersection of the first and third constraints. In fact, this point of intersection will never be obtained using strictly positive multipliers, as it is an improperly efficient solution. At the new solution, X2 and X5 are made nonbasic. The tradeoffs are w2 = (0.008, -0.8) and w5 = (-2.005, 4). This time they are both efficient. The first tradeoff is viewed indifferently and the second tradeoff negatively. The optimality conditions are fulfilled and the procedure is terminated.

¹The GRG method, as implemented by Lasdon et al. [3], uses only the binding constraints to express the basic variables in terms of the nonbasics. This keeps the problem simpler, since the slacks of the nonbinding constraints may be ignored. During the iterations, however, the nonbinding constraints are always checked for feasibility and for whether they become binding. We have adopted the same strategy.

To solve the unconstrained version of the problem, we would proceed as above. Both variables X1 and X2 would be nonbasic. In the X space the tradeoffs would correspond to motions in the directions of the axes. To find the true optimum, we would need an exact indifference response to both tradeoffs. In practice, we would terminate with a "near" indifference response.
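The first iteration of the example can be checked numerically. The script below (ours, not part of the paper) recomputes the reduced gradients with respect to (X2, X4) at the first solution, taking the objectives as f1 = -2X1^2 - 2X2^2 and f2 = 4X1 + 3.2X2 - 3.28, with the binding second constraint giving X1 = sqrt(X2 - X4):

```python
import math

# Recompute the appendix's first-iteration reduced gradients at X2 = 0.8, X4 = 0,
# where X1 = sqrt(X2 - X4) = 0.894 from the binding second constraint.
x1 = math.sqrt(0.8)

# Chain rule through the binding constraint: dX1/dX2 = 1/(2*X1), dX1/dX4 = -1/(2*X1).
dF1_dx2 = (-4.0 * x1) * (1.0 / (2.0 * x1)) + (-4.0 * 0.8)   # ≈ -5.2
dF1_dx4 = (-4.0 * x1) * (-1.0 / (2.0 * x1))                 # ≈ 2.0
dF2_dx2 = 4.0 * (1.0 / (2.0 * x1)) + 3.2                    # ≈ 5.436
dF2_dx4 = 4.0 * (-1.0 / (2.0 * x1))                         # ≈ -2.236

# With the sign flipped for X2 (at its upper bound, so a feasible move decreases
# it), this reproduces the printed gradients (5.2, 2) for f1 and (-5.436, -2.236)
# for f2.
print(-dF1_dx2, dF1_dx4, -dF2_dx2, dF2_dx4)
```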

ACKNOWLEDGMENT

The research was supported, in part, by a Faculty Grant-in-Aid and by a Council of 100 Research Grant from Arizona State University (Roy), and by a grant from the Y. Jahnsson Foundation, Finland (Wallenius). The research was started while Dr. Jyrki Wallenius was a Visiting Professor at Arizona State University.

REFERENCES

[1] Abadie, J., and Carpentier, J., "Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints," in Optimization, R. Fletcher, Ed., Academic Press, New York, 1969, pp. 37-48.

[2] Himmelblau, D., Applied Nonlinear Programming, McGraw-Hill, New York, 1972.

[3] Lasdon, L., Fox, R.L., and Ratner, M., "Nonlinear Optimization Using the Generalized Reduced Gradient Method," Revue Française d'Automatique, d'Informatique et de Recherche Opérationnelle, 3, 73-104 (1974).

[4] Lasdon, L., and Warren, A., "Generalized Reduced Gradient Software for Linearly and Nonlinearly Constrained Problems," in Design and Implementation of Optimization Software, H. Greenberg, Ed., Sijthoff and Noordhoff, Amsterdam, 1978.

[5] Loganathan, G.V., and Sherali, H.D., "A Convergent Cutting-Plane Algorithm for Multiobjective Optimization," Operations Research, 35, 365-377 (1987).

[6] Malakooti, B., "Assessment Through Strength of Preference," Large Scale Systems, 8, 169 (1985).

[7] Musselman, K., and Talavage, J., "A Tradeoff Cut Approach to Multiple Objective Optimization," Operations Research, 28, 1424-1435 (1980).

[8] Oppenheimer, K.R., "A Proxy Approach to Multi-Attribute Decision-Making," Management Science, 24, 675-689 (1978).

[9] Rosenthal, R.E., "Principles of Multiobjective Optimization," Decision Sciences, 16, 133-152 (1985).

[10] Roy, A., Lasdon, L., and Lordeman, J., "Extending Planning Languages to Include Optimization Capabilities," Management Science, 32, 360-373 (1986).

[11] Roy, A., Liu, M., and Keats, J.B., "Multiple Criteria Optimization of an Economically-Based Continuous Sampling Plan for Finite Production," Proceedings of the IX International Conference on Production Research, 1987, pp. 2661-2667.

[12] Roy, A., and Wallenius, J., "Nonlinear Multiple Objective Optimization: An Algorithm and Some Theory," Mathematical Programming (to be published).

[13] Sadagopan, S., and Ravindran, A., "Interactive Algorithms for Multiple Criteria Nonlinear Programming Problems," European Journal of Operational Research, 25, 247-257 (1986).

[14] Steuer, R., Multiple Criteria Optimization: Theory, Computation, and Application, Wiley, New York, 1986.

[15] Wallenius, H., Wallenius, J., and Vartia, P., "An Approach to Solving Multiple Criteria Macroeconomic Policy Problems and an Application," Management Science, 24, 1021-1030 (1978).

[16] Zionts, S., and Wallenius, J., "An Interactive Programming Method for Solving the Multiple Criteria Problem," Management Science, 22, 652-663 (1976).

[17] Zionts, S., and Wallenius, J., "An Interactive Multiple Objective Linear Programming Method for a Class of Underlying Nonlinear Utility Functions," Management Science, 29, 519-529 (1983).

Manuscript received February 1988
Revised manuscript received May 1990
Accepted December 10, 1990