MATLAB Optimization Toolbox Version 4.0 (R2008a)...


Transcript of MATLAB Optimization Toolbox Version 4.0 (R2008a)...

MATLAB Optimization Toolbox Version 4.0 (R2008a) 23-Jan-2008

Nonlinear minimization of functions.

fminbnd - Scalar bounded nonlinear function minimization.

fmincon - Multidimensional constrained nonlinear minimization.

fminsearch - Multidimensional unconstrained nonlinear minimization, by Nelder-Mead direct search method.

fminunc - Multidimensional unconstrained nonlinear minimization.

fseminf - Multidimensional constrained minimization, semi-infinite constraints.

Nonlinear minimization of multi-objective functions.

fgoalattain - Multidimensional goal attainment optimization.

fminimax - Multidimensional minimax optimization.

Linear least squares (of matrix problems).

lsqlin - Linear least squares with linear constraints.

lsqnonneg - Linear least squares with nonnegativity constraints.

Nonlinear least squares (of functions).

lsqcurvefit - Nonlinear curvefitting via least squares (with bounds).

lsqnonlin - Nonlinear least squares with upper and lower bounds.

Nonlinear zero finding (equation solving).

fzero - Scalar nonlinear zero finding.

fsolve - Nonlinear system of equations solve (function solve).

Minimization of matrix problems.

bintprog - Binary integer (linear) programming.

linprog - Linear programming.

quadprog - Quadratic programming.

Controlling defaults and options.

optimset - Create or alter optimization OPTIONS structure.

optimget - Get optimization parameters from OPTIONS structure.

Graphical user interface and plot routines.

optimtool - Optimization Toolbox Graphical User Interface.

optimplotconstrviolation - Plot max. constraint violation at each iteration

optimplotfirstorderopt - Plot first-order optimality at each iteration

optimplotresnorm - Plot value of the norm of residuals at each iteration

optimplotstepsize - Plot step size at each iteration
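Before the individual solver pages below, the two options functions in the listing can be exercised directly. A minimal sketch (the particular option names and values here are arbitrary examples):

```matlab
% Create an options structure, then read one parameter back out of it.
options = optimset('Display','iter','TolX',1e-8);  % set two parameters
tol = optimget(options,'TolX');                    % tol is 1e-8
```

Any option not set in the structure retains its solver default, which optimget reports as empty unless a default is supplied as a third argument.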


FMINBND Single-variable bounded nonlinear function minimization.

X = FMINBND(FUN,x1,x2) attempts to find a local minimizer X of the function FUN in the interval x1 < X < x2. FUN is a function handle. FUN accepts scalar input X and returns a scalar function value F evaluated at X.

X = FMINBND(FUN,x1,x2,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, created with the OPTIMSET function. See OPTIMSET for details. FMINBND uses these options: Display, TolX, MaxFunEvals, MaxIter, FunValCheck, PlotFcns, and OutputFcn.

X = FMINBND(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the interval in PROBLEM.x1 and PROBLEM.x2, the options structure in PROBLEM.options, and solver name 'fminbnd' in PROBLEM.solver. The structure PROBLEM must have all the fields.

[X,FVAL] = FMINBND(...) also returns the value of the objective function, FVAL, computed in FUN, at X.

[X,FVAL,EXITFLAG] = FMINBND(...) also returns an EXITFLAG that describes the exit condition of FMINBND. Possible values of EXITFLAG and the corresponding exit conditions are

  1  FMINBND converged with a solution X based on OPTIONS.TolX.
  0  Maximum number of function evaluations or iterations reached.
 -1  Algorithm terminated by the output function.
 -2  Bounds are inconsistent (that is, x1 > x2).

[X,FVAL,EXITFLAG,OUTPUT] = FMINBND(...) also returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the algorithm name in OUTPUT.algorithm, and the exit message in OUTPUT.message.

Examples
FUN can be specified using @:

   X = fminbnd(@cos,3,4)

computes pi to a few decimal places and gives a message upon termination.

   [X,FVAL,EXITFLAG] = fminbnd(@cos,3,4,optimset('TolX',1e-12,'Display','off'))

computes pi to about 12 decimal places, suppresses output, returns the function value at X, and returns an EXITFLAG of 1.

FUN can also be an anonymous function:

   x = fminbnd(@(x) sin(x)+3,2,5)

If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to minimize the objective given in the function myfun, which is parameterized by its second argument c. Here myfun is an M-file function such as

   function f = myfun(x,c)
   f = (x - c)^2;

To optimize for a specific value of c, first assign the value to c. Then create a one-argument anonymous function that captures that value of c and calls myfun with two arguments. Finally, pass this anonymous function to FMINBND:

   c = 1.5;  % define parameter first
   x = fminbnd(@(x) myfun(x,c),0,1)

See also optimset, fminsearch, fzero, function_handle.

Reference page in Help browser
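The PROBLEM-structure syntax described above is not illustrated in the help text; a minimal sketch, reusing the @cos example (the field values are illustrative):

```matlab
% Solve the same cosine problem through the PROBLEM-structure interface.
problem.objective = @cos;                     % function to minimize
problem.x1 = 3;                               % lower end of interval
problem.x2 = 4;                               % upper end of interval
problem.options = optimset('Display','off');  % suppress output
problem.solver = 'fminbnd';                   % required solver name
[x,fval,exitflag] = fminbnd(problem)          % x is near pi
```

All five fields must be present, as the help notes; this is the form produced when a problem is exported from OPTIMTOOL.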


doc fminbnd
=====================================================================

FMINCON finds a constrained minimum of a function of several variables. FMINCON attempts to solve problems of the form:

   min F(X)   subject to:  A*X <= B, Aeq*X = Beq   (linear constraints)
    X                      C(X) <= 0, Ceq(X) = 0   (nonlinear constraints)
                           LB <= X <= UB           (bounds)

X = FMINCON(FUN,X0,A,B) starts at X0 and finds a minimum X to the function FUN, subject to the linear inequalities A*X <= B. FUN accepts input X and returns a scalar function value F evaluated at X. X0 may be a scalar, vector, or matrix.

X = FMINCON(FUN,X0,A,B,Aeq,Beq) minimizes FUN subject to the linear equalities Aeq*X = Beq as well as A*X <= B. (Set A=[] and B=[] if no inequalities exist.)

X = FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that a solution is found in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.

X = FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON) subjects the minimization to the constraints defined in NONLCON. The function NONLCON accepts X and returns the vectors C and Ceq, representing the nonlinear inequalities and equalities respectively. FMINCON minimizes FUN such that C(X) <= 0 and Ceq(X) = 0. (Set LB = [] and/or UB = [] if no bounds exist.)

X = FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. For a list of options accepted by FMINCON refer to the documentation.

X = FMINCON(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq, the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the nonlinear constraint function in PROBLEM.nonlcon, the options structure in PROBLEM.options, and solver name 'fmincon' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.

[X,FVAL] = FMINCON(FUN,X0,...) returns the value of the objective function FUN at the solution X.

[X,FVAL,EXITFLAG] = FMINCON(FUN,X0,...) returns an EXITFLAG that describes the exit condition of FMINCON. Possible values of EXITFLAG and the corresponding exit conditions are listed below.

All algorithms:
  1  First-order optimality conditions satisfied to the specified tolerance.
  0  Maximum number of function evaluations or iterations reached.
 -1  Optimization terminated by the output function.
 -2  No feasible point found.
Trust-region-reflective and interior-point:
  2  Change in X less than the specified tolerance.
Trust-region-reflective:
  3  Change in the objective function value less than the specified tolerance.
Active-set only:
  4  Magnitude of search direction smaller than the specified tolerance and constraint violation less than options.TolCon.
  5  Magnitude of directional derivative less than the specified tolerance and constraint violation less than options.TolCon.
Interior-point:
 -3  Problem appears to be unbounded.

[X,FVAL,EXITFLAG,OUTPUT] = FMINCON(FUN,X0,...) returns a structure OUTPUT with information such as total number of iterations, and final objective function value. See the documentation for a complete list.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = FMINCON(FUN,X0,...) returns the Lagrange multipliers at the solution X: LAMBDA.lower for LB, LAMBDA.upper for UB, LAMBDA.ineqlin for the linear inequalities, LAMBDA.eqlin for the linear equalities, LAMBDA.ineqnonlin for the nonlinear inequalities, and LAMBDA.eqnonlin for the nonlinear equalities.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD] = FMINCON(FUN,X0,...) returns the value of the gradient of FUN at the solution X.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN] = FMINCON(FUN,X0,...) returns the value of the exact or approximate Hessian of the Lagrangian at X.

Examples
FUN can be specified using @:

   X = fmincon(@humps,...)

In this case, F = humps(X) returns the scalar function value F of the HUMPS function evaluated at X.

FUN can also be an anonymous function:

   X = fmincon(@(x) 3*sin(x(1))+exp(x(2)),[1;1],[],[],[],[],[0 0])

returns X = [0;0].

If FUN or NONLCON are parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to minimize the objective given in the function myfun, subject to the nonlinear constraint mycon, where these two functions are parameterized by their second arguments a1 and a2, respectively. Here myfun and mycon are M-file functions such as

   function f = myfun(x,a1)
   f = x(1)^2 + a1*x(2)^2;

   function [c,ceq] = mycon(x,a2)
   c = a2/x(1) - x(2);
   ceq = [];

To optimize for specific values of a1 and a2, first assign the values to these two parameters. Then create two one-argument anonymous functions that capture the values of a1 and a2, and call myfun and mycon with two arguments. Finally, pass these anonymous functions to FMINCON:

   a1 = 2; a2 = 1.5;  % define parameters first
   options = optimset('Algorithm','active-set');  % run active-set algorithm
   x = fmincon(@(x) myfun(x,a1),[1;2],[],[],[],[],[],[],@(x) mycon(x,a2),options)

See also optimset, optimtool, fminunc, fminbnd, fminsearch, @, function_handle.
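A small self-contained sketch tying the linear-inequality syntax above together (the objective and constraint are invented for illustration): minimize x(1)^2 + 2*x(2)^2 subject to x(1) + x(2) >= 1, which in A*X <= B form is -x(1) - x(2) <= -1.

```matlab
fun = @(x) x(1)^2 + 2*x(2)^2;   % objective
A = [-1 -1]; b = -1;            % linear inequality A*x <= b
x0 = [0; 0];                    % starting point
[x,fval] = fmincon(fun,x0,A,b)  % analytic optimum is x = [2/3; 1/3]
```

The constraint is active at the solution, so LAMBDA.ineqlin (from the five-output syntax) would be nonzero for this row.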


Reference page in Help browser
doc fmincon
=====================================================================

FMINSEARCH Multidimensional unconstrained nonlinear minimization (Nelder-Mead).

X = FMINSEARCH(FUN,X0) starts at X0 and attempts to find a local minimizer X of the function FUN. FUN is a function handle. FUN accepts input X and returns a scalar function value F evaluated at X. X0 can be a scalar, vector or matrix.

X = FMINSEARCH(FUN,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, created with the OPTIMSET function. See OPTIMSET for details. FMINSEARCH uses these options: Display, TolX, TolFun, MaxFunEvals, MaxIter, FunValCheck, PlotFcns, and OutputFcn.

X = FMINSEARCH(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'fminsearch' in PROBLEM.solver. The PROBLEM structure must have all the fields.

[X,FVAL] = FMINSEARCH(...) returns the value of the objective function, described in FUN, at X.

[X,FVAL,EXITFLAG] = FMINSEARCH(...) returns an EXITFLAG that describes the exit condition of FMINSEARCH. Possible values of EXITFLAG and the corresponding exit conditions are

  1  Maximum coordinate difference between current best point and other points in simplex is less than or equal to TolX, and corresponding difference in function values is less than or equal to TolFun.
  0  Maximum number of function evaluations or iterations reached.
 -1  Algorithm terminated by the output function.

[X,FVAL,EXITFLAG,OUTPUT] = FMINSEARCH(...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the algorithm name in OUTPUT.algorithm, and the exit message in OUTPUT.message.

Examples
FUN can be specified using @:

   X = fminsearch(@sin,3)

finds a minimum of the SIN function near 3. In this case, SIN is a function that returns a scalar function value SIN evaluated at X.

FUN can also be an anonymous function:

   X = fminsearch(@(x) norm(x),[1;2;3])

returns a point near the minimizer [0;0;0].

If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to optimize the objective given in the function myfun, which is parameterized by its second argument c. Here myfun is an M-file function such as

   function f = myfun(x,c)
   f = x(1)^2 + c*x(2)^2;

To optimize for a specific value of c, first assign the value to c. Then create a one-argument anonymous function that captures that value of c and calls myfun with two arguments. Finally, pass this anonymous function to FMINSEARCH:


   c = 1.5;  % define parameter first
   x = fminsearch(@(x) myfun(x,c),[0.3;1])

FMINSEARCH uses the Nelder-Mead simplex (direct search) method.

See also optimset, fminbnd, function_handle.

Reference page in Help browser
doc fminsearch
=====================================================================

FMINUNC finds a local minimum of a function of several variables.

X = FMINUNC(FUN,X0) starts at X0 and attempts to find a local minimizer X of the function FUN. FUN accepts input X and returns a scalar function value F evaluated at X. X0 can be a scalar, vector or matrix.

X = FMINUNC(FUN,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, DerivativeCheck, Diagnostics, FunValCheck, GradObj, HessPattern, Hessian, HessMult, HessUpdate, InitialHessType, InitialHessMatrix, MaxFunEvals, MaxIter, DiffMinChange and DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, PlotFcns, OutputFcn, and TypicalX. Use the GradObj option to specify that FUN also returns a second output argument G that is the partial derivatives of the function df/dX, at the point X. Use the Hessian option to specify that FUN also returns a third output argument H that is the 2nd partial derivatives of the function (the Hessian) at the point X. The Hessian is only used by the large-scale method, not the line-search method.

X = FMINUNC(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'fminunc' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.

[X,FVAL] = FMINUNC(FUN,X0,...) returns the value of the objective function FUN at the solution X.

[X,FVAL,EXITFLAG] = FMINUNC(FUN,X0,...) returns an EXITFLAG that describes the exit condition of FMINUNC. Possible values of EXITFLAG and the corresponding exit conditions are

  1  Magnitude of gradient smaller than the specified tolerance.
  2  Change in X smaller than the specified tolerance.
  3  Change in the objective function value smaller than the specified tolerance (only occurs in the large-scale method).
  0  Maximum number of function evaluations or iterations reached.
 -1  Algorithm terminated by the output function.
 -2  Line search cannot find an acceptable point along the current search direction (only occurs in the medium-scale method).

[X,FVAL,EXITFLAG,OUTPUT] = FMINUNC(FUN,X0,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the algorithm used in OUTPUT.algorithm, the number of CG iterations (if used) in OUTPUT.cgiterations, the first-order optimality (if used) in OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,FVAL,EXITFLAG,OUTPUT,GRAD] = FMINUNC(FUN,X0,...) returns the value of the gradient of FUN at the solution X.

[X,FVAL,EXITFLAG,OUTPUT,GRAD,HESSIAN] = FMINUNC(FUN,X0,...) returns the value of the Hessian of the objective function FUN at the solution X.

Examples
FUN can be specified using @:

   X = fminunc(@myfun,2)

where myfun is a MATLAB function such as:

   function F = myfun(x)
   F = sin(x) + 3;

To minimize this function with the gradient provided, modify the function myfun so the gradient is the second output argument:

   function [f,g] = myfun(x)
   f = sin(x) + 3;
   g = cos(x);

and indicate the gradient value is available by creating an options structure with OPTIONS.GradObj set to 'on' (using OPTIMSET):

   options = optimset('GradObj','on');
   x = fminunc(@myfun,4,options);

FUN can also be an anonymous function:

   x = fminunc(@(x) 5*x(1)^2 + x(2)^2,[5;1])

If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to minimize the objective given in the function myfun, which is parameterized by its second argument c. Here myfun is an M-file function such as

   function [f,g] = myfun(x,c)
   f = c*x(1)^2 + 2*x(1)*x(2) + x(2)^2;   % function
   g = [2*c*x(1) + 2*x(2)                 % gradient
        2*x(1) + 2*x(2)];

To optimize for a specific value of c, first assign the value to c. Then create a one-argument anonymous function that captures that value of c and calls myfun with two arguments. Finally, pass this anonymous function to FMINUNC:

   c = 3;  % define parameter first
   options = optimset('GradObj','on');  % indicate gradient is provided
   x = fminunc(@(x) myfun(x,c),[1;1],options)

See also optimset, fminsearch, fminbnd, fmincon, @, inline.

Reference page in Help browser
doc fminunc
=====================================================================

FSEMINF solves semi-infinite constrained optimization problems. FSEMINF attempts to solve problems of the form:

   min { F(x) | C(x) <= 0, Ceq(x) = 0, PHI(x,w) <= 0 }   for all w in an interval.
    x

X = FSEMINF(FUN,X0,NTHETA,SEMINFCON) starts at X0 and finds a minimum X to the function FUN constrained by NTHETA semi-infinite constraints in the function SEMINFCON.
FUN accepts vector input X and returns the scalar function value F evaluated at X. Function SEMINFCON accepts vector inputs X and S and returns a vector C of nonlinear inequality constraints, a vector Ceq of nonlinear equality constraints, and NTHETA semi-infinite inequality constraint matrices, PHI_1, PHI_2, ..., PHI_NTHETA, evaluated over an interval. S is a recommended sampling interval, which may or may not be used.

X = FSEMINF(FUN,X0,NTHETA,SEMINFCON,A,B) also tries to satisfy the linear inequalities A*X <= B.

X = FSEMINF(FUN,X0,NTHETA,SEMINFCON,A,B,Aeq,Beq) minimizes subject to the linear equalities Aeq*X = Beq as well. (Set A=[] and B=[] if no inequalities exist.)

X = FSEMINF(FUN,X0,NTHETA,SEMINFCON,A,B,Aeq,Beq,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.

X = FSEMINF(FUN,X0,NTHETA,SEMINFCON,A,B,Aeq,Beq,LB,UB,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, TolCon, DerivativeCheck, Diagnostics, FunValCheck, GradObj, MaxFunEvals, MaxIter, DiffMinChange, DiffMaxChange, PlotFcns, OutputFcn, and TypicalX. Use the GradObj option to specify that FUN may be called with two output arguments where the second, G, is the partial derivatives of the function df/dX, at the point X: [F,G] = feval(FUN,X).

X = FSEMINF(PROBLEM) solves the semi-infinite constrained problem defined in PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the number of semi-infinite constraints in PROBLEM.ntheta, the nonlinear and semi-infinite constraint function in PROBLEM.seminfcon, the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq, the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the options structure in PROBLEM.options, and solver name 'fseminf' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.
[X,FVAL] = FSEMINF(FUN,X0,NTHETA,SEMINFCON,...) returns the value of the objective function FUN at the solution X.

[X,FVAL,EXITFLAG] = FSEMINF(FUN,X0,NTHETA,SEMINFCON,...) returns an EXITFLAG that describes the exit condition of FSEMINF. Possible values of EXITFLAG and the corresponding exit conditions are

  1  FSEMINF converged to a solution X.
  4  Magnitude of search direction smaller than the specified tolerance and constraint violation less than options.TolCon.
  5  Magnitude of directional derivative less than the specified tolerance and constraint violation less than options.TolCon.
  0  Maximum number of function evaluations or iterations reached.
 -1  Optimization terminated by the output function.
 -2  No feasible point found.

[X,FVAL,EXITFLAG,OUTPUT] = FSEMINF(FUN,X0,NTHETA,SEMINFCON,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the norm of the final step in OUTPUT.stepsize, the final line search steplength in OUTPUT.lssteplength, the algorithm used in OUTPUT.algorithm, the first-order optimality in OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = FSEMINF(FUN,X0,NTHETA,SEMINFCON,...) returns the Lagrange multipliers at the solution X: LAMBDA.lower for LB, LAMBDA.upper for UB, LAMBDA.ineqlin for the linear inequalities, LAMBDA.eqlin for the linear equalities, LAMBDA.ineqnonlin for the nonlinear inequalities, and LAMBDA.eqnonlin for the nonlinear equalities.

Examples
FUN and SEMINFCON can be specified using @:

   x = fseminf(@myfun,[2 3 4],3,@myseminfcon)

where myfun is a MATLAB function such as:

   function F = myfun(x)
   F = x(1)*cos(x(2)) + x(3)^3;

and myseminfcon is a MATLAB function such as:

   function [C,Ceq,PHI1,PHI2,PHI3,S] = myseminfcon(X,S)
   C = [];     % Code to compute C and Ceq: could be
   Ceq = [];   % empty matrices if no constraints.
   if isnan(S(1,1))
      S = [...];   % S has ntheta rows and 2 columns
   end
   PHI1 = ... ;    % code to compute PHI's
   PHI2 = ... ;
   PHI3 = ... ;

See also optimset, @, fgoalattain, lsqnonlin.

Reference page in Help browser
doc fseminf
=====================================================================

FGOALATTAIN solves the multi-objective goal attainment optimization problem.

X = FGOALATTAIN(FUN,X0,GOAL,WEIGHT) tries to make the objective functions (F) supplied by the function FUN attain the goals (GOAL) by varying X. The goals are weighted according to WEIGHT. In doing so the following nonlinear programming problem is solved:

     min    { LAMBDA : F(X) - WEIGHT.*LAMBDA <= GOAL }
   X,LAMBDA

FUN accepts input X and returns a vector (matrix) of function values F evaluated at X. X0 may be a scalar, vector, or matrix.

X = FGOALATTAIN(FUN,X0,GOAL,WEIGHT,A,B) solves the goal attainment problem subject to the linear inequalities A*X <= B.

X = FGOALATTAIN(FUN,X0,GOAL,WEIGHT,A,B,Aeq,Beq) solves the goal attainment problem subject to the linear equalities Aeq*X = Beq as well.

X = FGOALATTAIN(FUN,X0,GOAL,WEIGHT,A,B,Aeq,Beq,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.
X = FGOALATTAIN(FUN,X0,GOAL,WEIGHT,A,B,Aeq,Beq,LB,UB,NONLCON) subjects the goal attainment problem to the constraints defined in NONLCON (usually an M-file: NONLCON.m). The function NONLCON should return the vectors C and Ceq, representing the nonlinear inequalities and equalities respectively, when called with feval: [C, Ceq] = feval(NONLCON,X). FGOALATTAIN optimizes such that C(X) <= 0 and Ceq(X) = 0.

X = FGOALATTAIN(FUN,X0,GOAL,WEIGHT,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, TolCon, DerivativeCheck, FunValCheck, GradObj, GradConstr, MaxFunEvals, MaxIter, MeritFunction, GoalsExactAchieve, Diagnostics, DiffMinChange, DiffMaxChange, PlotFcns, OutputFcn, and TypicalX. Use the GradObj option to specify that FUN may be called with two output arguments where the second, G, is the partial derivatives of the function df/dX, at the point X: [F,G] = feval(FUN,X). Use the GradConstr option to specify that NONLCON may be called with four output arguments: [C,Ceq,GC,GCeq] = feval(NONLCON,X), where GC is the partial derivatives of the constraint vector of inequalities C and GCeq is the partial derivatives of the constraint vector of equalities Ceq. Use OPTIONS = [] as a place holder if no options are set.

X = FGOALATTAIN(PROBLEM) solves the goal attainment problem defined in PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the 'goal' vector in PROBLEM.goal, the 'weight' vector in PROBLEM.weight, the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq, the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the nonlinear constraint function in PROBLEM.nonlcon, the options structure in PROBLEM.options, and solver name 'fgoalattain' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.

[X,FVAL] = FGOALATTAIN(FUN,X0,...) returns the value of the objective function FUN at the solution X.

[X,FVAL,ATTAINFACTOR] = FGOALATTAIN(FUN,X0,...) returns the attainment factor at the solution X.
If ATTAINFACTOR is negative, the goals have been over-achieved; if ATTAINFACTOR is positive, the goals have been under-achieved.

[X,FVAL,ATTAINFACTOR,EXITFLAG] = FGOALATTAIN(FUN,X0,...) returns an EXITFLAG that describes the exit condition of FGOALATTAIN. Possible values of EXITFLAG and the corresponding exit conditions are

  1  FGOALATTAIN converged to a solution X.
  4  Magnitude of search direction smaller than the specified tolerance and constraint violation less than options.TolCon.
  5  Magnitude of directional derivative less than the specified tolerance and constraint violation less than options.TolCon.
  0  Maximum number of function evaluations or iterations reached.
 -1  Optimization terminated by the output function.
 -2  No feasible point found.

[X,FVAL,ATTAINFACTOR,EXITFLAG,OUTPUT] = FGOALATTAIN(FUN,X0,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the norm of the final step in OUTPUT.stepsize, the final line search steplength in OUTPUT.lssteplength, the algorithm used in OUTPUT.algorithm, the first-order optimality in OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,FVAL,ATTAINFACTOR,EXITFLAG,OUTPUT,LAMBDA] = FGOALATTAIN(FUN,X0,...) returns the Lagrange multipliers at the solution X: LAMBDA.lower for LB, LAMBDA.upper for UB, LAMBDA.ineqlin for the linear inequalities, LAMBDA.eqlin for the linear equalities, LAMBDA.ineqnonlin for the nonlinear inequalities, and LAMBDA.eqnonlin for the nonlinear equalities.
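Unlike the other solvers in this transcript, the FGOALATTAIN help gives no Examples section. A minimal hedged sketch, with invented objectives and goals, showing the basic four-argument call:

```matlab
% Try to drive two invented objectives down to the goal values.
fun = @(x) [x(1)^2; (x(2)-1)^2];   % two objectives, returned as a vector
goal = [0.5; 0.5];                 % target value for each objective
weight = abs(goal);                % a common weighting choice
x0 = [1; 2];                       % starting point
[x,fval,attainfactor] = fgoalattain(fun,x0,goal,weight)
```

Per the help above, a negative attainfactor would mean both goals were over-achieved; a positive one, under-achieved.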


For more details, type the M-file FGOALATTAIN.M.

See also optimset, optimget.

Reference page in Help browser
doc fgoalattain
=====================================================================

FMINIMAX finds a minimax solution of a function of several variables. FMINIMAX attempts to solve the following problem:

   min ( max { FUN(X) } )   where FUN and X can be vectors or matrices.
    X

X = FMINIMAX(FUN,X0) starts at X0 and finds a minimax solution X to the functions in FUN. FUN accepts input X and returns a vector (matrix) of function values F evaluated at X. X0 may be a scalar, vector, or matrix.

X = FMINIMAX(FUN,X0,A,B) solves the minimax problem subject to the linear inequalities A*X <= B.

X = FMINIMAX(FUN,X0,A,B,Aeq,Beq) solves the minimax problem subject to the linear equalities Aeq*X = Beq as well. (Set A = [] and B = [] if no inequalities exist.)

X = FMINIMAX(FUN,X0,A,B,Aeq,Beq,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. You may use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.

X = FMINIMAX(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON) subjects the minimax problem to the constraints defined in NONLCON (usually an M-file: NONLCON.m). The function NONLCON should return the vectors C and Ceq, representing the nonlinear inequalities and equalities respectively, when called with feval: [C, Ceq] = feval(NONLCON,X). FMINIMAX optimizes such that C(X) <= 0 and Ceq(X) = 0.

X = FMINIMAX(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details.
Used options are Display, TolX, TolFun, TolCon, DerivativeCheck, FunValCheck, GradObj, GradConstr, MaxFunEvals, MaxIter, MeritFunction, MinAbsMax, Diagnostics, DiffMinChange, DiffMaxChange, PlotFcns, OutputFcn, and TypicalX. Use the GradObj option to specify that FUN may be called with two output arguments where the second, G, is the partial derivatives of the function df/dX, at the point X: [F,G] = feval(FUN,X). Use the GradConstr option to specify that NONLCON may be called with four output arguments: [C,Ceq,GC,GCeq] = feval(NONLCON,X), where GC is the partial derivatives of the constraint vector of inequalities C and GCeq is the partial derivatives of the constraint vector of equalities Ceq. Use OPTIONS = [] as a place holder if no options are set.

X = FMINIMAX(PROBLEM) finds a minimax solution for PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq, the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the nonlinear constraint function in PROBLEM.nonlcon, the options structure in PROBLEM.options, and solver name 'fminimax' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.


[X,FVAL] = FMINIMAX(FUN,X0,...) returns the value of the objective functions at the solution X: FVAL = feval(FUN,X).

[X,FVAL,MAXFVAL] = FMINIMAX(FUN,X0,...) returns MAXFVAL = max { FUN(X) } at the solution X.

[X,FVAL,MAXFVAL,EXITFLAG] = FMINIMAX(FUN,X0,...) returns an EXITFLAG that describes the exit condition of FMINIMAX. Possible values of EXITFLAG and the corresponding exit conditions are

   1  FMINIMAX converged to a solution X.
   4  Magnitude of search direction smaller than the specified tolerance and constraint violation less than options.TolCon.
   5  Magnitude of directional derivative smaller than the specified tolerance and constraint violation less than options.TolCon.
   0  Maximum number of function evaluations or iterations reached.
  -1  Optimization terminated by the output function.
  -2  No feasible point found.

[X,FVAL,MAXFVAL,EXITFLAG,OUTPUT] = FMINIMAX(FUN,X0,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the norm of the final step in OUTPUT.stepsize, the final line search steplength in OUTPUT.lssteplength, the algorithm used in OUTPUT.algorithm, the first-order optimality in OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,FVAL,MAXFVAL,EXITFLAG,OUTPUT,LAMBDA] = FMINIMAX(FUN,X0,...) returns the Lagrange multipliers at the solution X: LAMBDA.lower for LB, LAMBDA.upper for UB, LAMBDA.ineqlin for the linear inequalities, LAMBDA.eqlin for the linear equalities, LAMBDA.ineqnonlin for the nonlinear inequalities, and LAMBDA.eqnonlin for the nonlinear equalities.

Examples
FUN can be specified using @:
    x = fminimax(@myfun,[2 3 4])
where myfun is a MATLAB function such as:
    function F = myfun(x)
    F = cos(x);
FUN can also be an anonymous function:
    x = fminimax(@(x) sin(3*x),[2 5])
If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters.
Suppose you want to solve a minimax problem where the objectives given in the function myfun are parameterized by its second argument c. Here myfun is an M-file function such as function F = myfun(x,c) F = [x(1)^2 + c*x(2)^2; x(2) - x(1)]; To optimize for a specific value of c, first assign the value to c. Then create a one-argument anonymous function that captures that value of c and calls myfun with two arguments. Finally pass this anonymous function to FMINIMAX: c = 2; % define parameter first x = fminimax(@(x) myfun(x,c),[1;1])
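Putting the calling sequence together, a complete run might look like the following sketch. The objective function, starting point, and options here are invented for illustration; only the FMINIMAX call signature comes from the help text above.

```matlab
% Minimize the worst-case of three competing objectives (illustrative).
objs = @(x) [ x(1)^2 + x(2)^2;       % objective 1
              -x(1) - 2*x(2) + 4;    % objective 2
              x(1) - x(2) ];         % objective 3
x0 = [1; 1];
opts = optimset('Display','off');
[x, F, maxF, exitflag] = ...
    fminimax(objs, x0, [], [], [], [], [], [], [], opts);
% maxF equals max(F), the minimized worst-case objective value
```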


See also optimset, @, inline, fgoalattain, lsqnonlin.

Reference page in Help browser
doc fminimax

=====================================================================
LSQLIN Constrained linear least squares.


X = LSQLIN(C,d,A,b) attempts to solve the least-squares problem

    min 0.5*(NORM(C*x-d)).^2   subject to  A*x <= b
     x

where C is m-by-n.

X = LSQLIN(C,d,A,b,Aeq,beq) solves the least-squares (with equality constraints) problem:

    min 0.5*(NORM(C*x-d)).^2   subject to  A*x <= b and Aeq*x = beq
     x

X = LSQLIN(C,d,A,b,Aeq,beq,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.

X = LSQLIN(C,d,A,b,Aeq,beq,LB,UB,X0) sets the starting point to X0.

X = LSQLIN(C,d,A,b,Aeq,beq,LB,UB,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, Diagnostics, TolFun, LargeScale, MaxIter, JacobMult, PrecondBandWidth, TypicalX, TolPCG, and MaxPCGIter. Currently, only 'final' and 'off' are valid values for the parameter Display ('iter' is not available).

X = LSQLIN(C,d,A,b,Aeq,beq,LB,UB,X0,OPTIONS,P1,P2,...) passes the problem-dependent parameters P1,P2,... directly to the JMFUN function when OPTIMSET('JacobMult',JMFUN) is set. JMFUN is provided by the user. Pass empty matrices for A, b, Aeq, beq, LB, UB, X0, OPTIONS, to use the default values.

X = LSQLIN(PROBLEM) solves the least squares problem defined in PROBLEM. PROBLEM is a structure with the matrix 'C' in PROBLEM.C, the vector 'd' in PROBLEM.d, the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq, the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'lsqlin' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.
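As a concrete companion to the syntaxes above, here is a small sketch; the matrix C, data d, and constraints are made-up illustrative values, not taken from the toolbox documentation.

```matlab
C = [1 1; 1 2; 2 1; 4 1];       % 4-by-2 design matrix (illustrative)
d = [3; 4; 5; 7];               % observations
A = [1 1];   b = 2.5;           % require x(1) + x(2) <= 2.5
lb = [0; 0]; ub = [2; 2];       % simple bounds 0 <= x <= 2
[x, resnorm] = lsqlin(C, d, A, b, [], [], lb, ub);
% resnorm is the squared 2-norm norm(C*x - d)^2 of the residual
```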
[X,RESNORM] = LSQLIN(C,d,A,b) returns the value of the squared 2-norm of the residual: norm(C*X-d)^2.

[X,RESNORM,RESIDUAL] = LSQLIN(C,d,A,b) returns the residual: C*X-d.

[X,RESNORM,RESIDUAL,EXITFLAG] = LSQLIN(C,d,A,b) returns an EXITFLAG that describes the exit condition of LSQLIN. Possible values of EXITFLAG and the corresponding exit conditions are

   1  LSQLIN converged to a solution X.
   3  Change in the residual smaller than the specified tolerance.


   0  Maximum number of iterations exceeded.
  -2  Problem is infeasible.
  -4  Ill-conditioning prevents further optimization.
  -7  Magnitude of search direction became too small; no further progress can be made. The problem is ill-posed or badly conditioned.

[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT] = LSQLIN(C,d,A,b) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the type of algorithm used in OUTPUT.algorithm, the number of conjugate gradient iterations (if used) in OUTPUT.cgiterations, a measure of first order optimality (large-scale method only) in OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA] = LSQLIN(C,d,A,b) returns the set of Lagrangian multipliers LAMBDA, at the solution: LAMBDA.ineqlin for the linear inequalities A, LAMBDA.eqlin for the linear equalities Aeq, LAMBDA.lower for LB, and LAMBDA.upper for UB.

See also quadprog.

Reference page in Help browser
doc lsqlin

=====================================================================
LSQNONNEG Linear least squares with nonnegativity constraints.

X = LSQNONNEG(C,d) returns the vector X that minimizes NORM(d-C*X) subject to X >= 0. C and d must be real.

X = LSQNONNEG(C,d,X0) uses X0 as the starting point if all(X0 > 0); otherwise the default is used. The default start point is the origin (the default is used when X0==[] or when only two input arguments are provided).

X = LSQNONNEG(C,d,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display and TolX. (A default tolerance TolX of 10*MAX(SIZE(C))*NORM(C,1)*EPS is used.)

X = LSQNONNEG(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a structure with the matrix 'C' in PROBLEM.C, the vector 'd' in PROBLEM.d, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'lsqnonneg' in PROBLEM.solver.
The structure PROBLEM must have all the fields. [X,RESNORM] = LSQNONNEG(...) also returns the value of the squared 2-norm of the residual: norm(d-C*X)^2. [X,RESNORM,RESIDUAL] = LSQNONNEG(...) also returns the value of the residual: d-C*X. [X,RESNORM,RESIDUAL,EXITFLAG] = LSQNONNEG(...) returns an EXITFLAG that describes the exit condition of LSQNONNEG. Possible values of EXITFLAG and the corresponding exit conditions are 1 LSQNONNEG converged with a solution X. 0 Iteration count was exceeded. Increasing the tolerance (OPTIONS.TolX) may lead to a solution. [X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT] = LSQNONNEG(...) returns a structure OUTPUT with the number of steps taken in OUTPUT.iterations, the type of algorithm used in OUTPUT.algorithm, and the exit message in OUTPUT.message.


[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA] = LSQNONNEG(...) returns the dual vector LAMBDA where LAMBDA(i) <= 0 when X(i) is (approximately) 0 and LAMBDA(i) is (approximately) 0 when X(i) > 0.

See also lscov, slash.

Reference page in Help browser
doc lsqnonneg

=====================================================================
LSQCURVEFIT solves non-linear least squares problems.


LSQCURVEFIT attempts to solve problems of the form:

    min sum {(FUN(X,XDATA)-YDATA).^2}
     X

where X, XDATA, YDATA, and the values returned by FUN can be vectors or matrices.

X = LSQCURVEFIT(FUN,X0,XDATA,YDATA) starts at X0 and finds coefficients X to best fit the nonlinear functions in FUN to the data YDATA (in the least-squares sense). FUN accepts inputs X and XDATA and returns a vector (or matrix) of function values F, where F is the same size as YDATA, evaluated at X and XDATA. NOTE: FUN should return FUN(X,XDATA) and not the sum-of-squares sum((FUN(X,XDATA)-YDATA).^2). (FUN(X,XDATA)-YDATA is squared and summed implicitly in the algorithm.)

X = LSQCURVEFIT(FUN,X0,XDATA,YDATA,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.

X = LSQCURVEFIT(FUN,X0,XDATA,YDATA,LB,UB,OPTIONS) minimizes with the default parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, DerivativeCheck, Diagnostics, FunValCheck, Jacobian, JacobMult, JacobPattern, LineSearchType, LevenbergMarquardt, MaxFunEvals, MaxIter, DiffMinChange and DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, PlotFcns, OutputFcn, and TypicalX.

Use the Jacobian option to specify that FUN also returns a second output argument J that is the Jacobian matrix at the point X. If FUN returns a vector F of m components when X has length n, then J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (Note that the Jacobian J is the transpose of the gradient of F.)

X = LSQCURVEFIT(PROBLEM) solves the non-linear least squares problem defined in PROBLEM.
PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the 'xdata' in PROBLEM.xdata, the 'ydata' in PROBLEM.ydata, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the options structure in PROBLEM.options, and solver name 'lsqcurvefit' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields. [X,RESNORM] = LSQCURVEFIT(FUN,X0,XDATA,YDATA,...) returns the value of the squared 2-norm of the residual at X: sum {(FUN(X,XDATA)-YDATA).^2}. [X,RESNORM,RESIDUAL] = LSQCURVEFIT(FUN,X0,...) returns the value of residual, FUN(X,XDATA)-YDATA, at the solution X. [X,RESNORM,RESIDUAL,EXITFLAG] = LSQCURVEFIT(FUN,X0,XDATA,YDATA,...) returns an EXITFLAG that describes the exit condition of LSQCURVEFIT. Possible values of EXITFLAG and the corresponding exit conditions are 1 LSQCURVEFIT converged to a solution X.


2 Change in X smaller than the specified tolerance. 3 Change in the residual smaller than the specified tolerance. 4 Magnitude of search direction smaller than the specified tolerance. 0 Maximum number of function evaluations or of iterations reached. -1 Algorithm terminated by the output function. -2 Bounds are inconsistent. -4 Line search cannot sufficiently decrease the residual along the current search direction. [X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT] = LSQCURVEFIT(FUN,X0,XDATA, YDATA,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the algorithm used in OUTPUT.algorithm, the number of CG iterations (if used) in OUTPUT.cgiterations, the first-order optimality (if used) in OUTPUT.firstorderopt, and the exit message in OUTPUT.message. [X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA] = LSQCURVEFIT(FUN,X0,XDATA, YDATA,...) returns the set of Lagrangian multipliers, LAMBDA, at the solution: LAMBDA.lower for LB and LAMBDA.upper for UB. [X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = LSQCURVEFIT(FUN, X0,XDATA,YDATA,...) returns the Jacobian of FUN at X. Examples FUN can be specified using @: xdata = [5;4;6]; % example xdata ydata = 3*sin([5;4;6])+6; % example ydata x = lsqcurvefit(@myfun, [2 7], xdata, ydata) where myfun is a MATLAB function such as: function F = myfun(x,xdata) F = x(1)*sin(xdata)+x(2); FUN can also be an anonymous function: x = lsqcurvefit(@(x,xdata) x(1)*sin(xdata)+x(2),[2 7],xdata,ydata) If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to solve the curve-fitting problem given in the function myfun, which is parameterized by its second argument c. Here myfun is an M-file function such as function F = myfun(x,xdata,c) F = x(1)*exp(c*xdata)+x(2); To solve the curve-fitting problem for a specific value of c, first assign the value to c. 
Then create a two-argument anonymous function that captures that value of c and calls myfun with three arguments. Finally, pass this anonymous function to LSQCURVEFIT:

    xdata = [3; 1; 4];             % example xdata
    ydata = 6*exp(-1.5*xdata)+3;   % example ydata
    c = -1.5;                      % define parameter
    x = lsqcurvefit(@(x,xdata) myfun(x,xdata,c),[5;1],xdata,ydata)

See also optimset, lsqnonlin, fsolve, @, inline.

Reference page in Help browser
doc lsqcurvefit

=====================================================================
LSQNONLIN solves non-linear least squares problems.



LSQNONLIN attempts to solve problems of the form:

    min sum {FUN(X).^2}
     X

where X and the values returned by FUN can be vectors or matrices.

X = LSQNONLIN(FUN,X0) starts at the matrix X0 and finds a minimum X to the sum of squares of the functions in FUN. FUN accepts input X and returns a vector (or matrix) of function values F evaluated at X. NOTE: FUN should return FUN(X) and not the sum-of-squares sum(FUN(X).^2). (FUN(X) is squared and summed implicitly in the algorithm.)

X = LSQNONLIN(FUN,X0,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.

X = LSQNONLIN(FUN,X0,LB,UB,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, DerivativeCheck, Diagnostics, FunValCheck, Jacobian, JacobMult, JacobPattern, LineSearchType, LevenbergMarquardt, MaxFunEvals, MaxIter, DiffMinChange and DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, TypicalX, PlotFcns, and OutputFcn.

Use the Jacobian option to specify that FUN also returns a second output argument J that is the Jacobian matrix at the point X. If FUN returns a vector F of m components when X has length n, then J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (Note that the Jacobian J is the transpose of the gradient of F.)

X = LSQNONLIN(PROBLEM) solves the non-linear least squares problem defined in PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the options structure in PROBLEM.options, and solver name 'lsqnonlin' in PROBLEM.solver.
Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.

[X,RESNORM] = LSQNONLIN(FUN,X0,...) returns the value of the squared 2-norm of the residual at X: sum(FUN(X).^2).

[X,RESNORM,RESIDUAL] = LSQNONLIN(FUN,X0,...) returns the value of the residual at the solution X: RESIDUAL = FUN(X).

[X,RESNORM,RESIDUAL,EXITFLAG] = LSQNONLIN(FUN,X0,...) returns an EXITFLAG that describes the exit condition of LSQNONLIN. Possible values of EXITFLAG and the corresponding exit conditions are

   1  LSQNONLIN converged to a solution X.
   2  Change in X smaller than the specified tolerance.
   3  Change in the residual smaller than the specified tolerance.
   4  Magnitude of search direction smaller than the specified tolerance.
   0  Maximum number of function evaluations or of iterations reached.
  -1  Algorithm terminated by the output function.
  -2  Bounds are inconsistent.
  -4  Line search cannot sufficiently decrease the residual along the current search direction.

[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT] = LSQNONLIN(FUN,X0,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the algorithm used in OUTPUT.algorithm, the number of CG iterations (if used) in OUTPUT.cgiterations, the first-order optimality (if used) in OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA] = LSQNONLIN(FUN,X0,...) returns the set of Lagrangian multipliers, LAMBDA, at the solution: LAMBDA.lower for LB and LAMBDA.upper for UB.

[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = LSQNONLIN(FUN,X0,...) returns the Jacobian of FUN at X.

Examples
FUN can be specified using @:
    x = lsqnonlin(@myfun,[2 3 4])
where myfun is a MATLAB function such as:
    function F = myfun(x)
    F = sin(x);
FUN can also be an anonymous function:
    x = lsqnonlin(@(x) sin(3*x),[1 4])

If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to solve the non-linear least squares problem given in the function myfun, which is parameterized by its second argument c. Here myfun is an M-file function such as

    function F = myfun(x,c)
    F = [ 2*x(1) - exp(c*x(1))
          -x(1) - exp(c*x(2))
          x(1) - x(2) ];

To solve the least squares problem for a specific value of c, first assign the value to c. Then create a one-argument anonymous function that captures that value of c and calls myfun with two arguments. Finally, pass this anonymous function to LSQNONLIN:

    c = -1; % define parameter first
    x = lsqnonlin(@(x) myfun(x,c),[1;1])

See also optimset, lsqcurvefit, fsolve, @, inline.

Reference page in Help browser
doc lsqnonlin

=====================================================================
FZERO Single-variable nonlinear zero finding.


X = FZERO(FUN,X0) tries to find a zero of the function FUN near X0, if X0 is a scalar. It first finds an interval containing X0 where the function values of the interval endpoints differ in sign, then searches that interval for a zero. FUN is a function handle. FUN accepts real scalar input X and returns a real scalar function value F, evaluated at X. The value X returned by FZERO is near a point where FUN changes sign (if FUN is continuous), or NaN if the search fails. X = FZERO(FUN,X0), where X0 is a vector of length 2, assumes X0 is an interval where the sign of FUN(X0(1)) differs from the sign of FUN(X0(2)). An error occurs if this is not true. Calling FZERO with an interval guarantees FZERO will return a value near a point where FUN changes sign. X = FZERO(FUN,X0), where X0 is a scalar value, uses X0 as a starting guess. FZERO looks for an interval containing a sign change for FUN and containing X0. If no such interval is found, NaN is returned.


In this case, the search terminates when the search interval is expanded until an Inf, NaN, or complex value is found. Note: if the option FunValCheck is 'on', then an error will occur if a NaN or complex value is found.

X = FZERO(FUN,X0,OPTIONS) solves the equation with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, FunValCheck, OutputFcn, and PlotFcns.

X = FZERO(PROBLEM) finds the zero of a function defined in PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'fzero' in PROBLEM.solver. The structure PROBLEM must have all the fields.

[X,FVAL] = FZERO(FUN,...) returns the value of the function described in FUN, at X.

[X,FVAL,EXITFLAG] = FZERO(...) returns an EXITFLAG that describes the exit condition of FZERO. Possible values of EXITFLAG and the corresponding exit conditions are

   1  FZERO found a zero X.
  -1  Algorithm terminated by output function.
  -3  NaN or Inf function value encountered during search for an interval containing a sign change.
  -4  Complex function value encountered during search for an interval containing a sign change.
  -5  FZERO may have converged to a singular point.
  -6  FZERO cannot detect a change in sign of the function.

[X,FVAL,EXITFLAG,OUTPUT] = FZERO(...) returns a structure OUTPUT with the number of function evaluations in OUTPUT.funcCount, the algorithm name in OUTPUT.algorithm, the number of iterations to find an interval (if needed) in OUTPUT.intervaliterations, the number of zero-finding iterations in OUTPUT.iterations, and the exit message in OUTPUT.message.

Examples
FUN can be specified using @:
    X = fzero(@sin,3)
returns pi.
    X = fzero(@sin,3,optimset('Display','iter'))
returns pi, uses the default tolerance, and displays iteration information.
FUN can also be an anonymous function: X = fzero(@(x) sin(3*x),2) If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to solve the equation given in the function myfun, which is parameterized by its second argument c. Here myfun is an M-file function such as function f = myfun(x,c) f = cos(c*x); To solve the equation for a specific value of c, first assign the value to c. Then create a one-argument anonymous function that captures that value of c and calls myfun with two arguments. Finally, pass this anonymous function to FZERO: c = 2; % define parameter first x = fzero(@(x) myfun(x,c),0.1)
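A bracketed call, using the two-element X0 form described above, guarantees convergence to a point where the function changes sign. The cubic below is a standard illustrative choice, not taken from the help text:

```matlab
f = @(x) x.^3 - 2*x - 5;          % f(2) = -1 < 0 and f(3) = 16 > 0
[x, fval, exitflag] = fzero(f, [2 3]);
% x is near 2.0946, fval is approximately zero, and exitflag is 1
```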


Limitations
    X = fzero(@(x) abs(x)+1, 1)
returns NaN since this function does not change sign anywhere on the real axis (and has no zero, either).
    X = fzero(@tan,2)
returns X near 1.5708 because the discontinuity of this function near the point X gives the appearance (numerically) that the function changes sign at X.

See also roots, fminbnd, function_handle.

Reference page in Help browser
doc fzero

=====================================================================
FSOLVE solves systems of nonlinear equations of several variables.

FSOLVE attempts to solve equations of the form:

    F(X) = 0

where F and X may be vectors or matrices.

X = FSOLVE(FUN,X0) starts at the matrix X0 and tries to solve the equations in FUN. FUN accepts input X and returns a vector (matrix) of equation values F evaluated at X.

X = FSOLVE(FUN,X0,OPTIONS) solves the equations with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, DerivativeCheck, Diagnostics, FunValCheck, Jacobian, JacobMult, JacobPattern, LineSearchType, NonlEqnAlgorithm, MaxFunEvals, MaxIter, PlotFcns, OutputFcn, DiffMinChange and DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, and TypicalX.

Use the Jacobian option to specify that FUN also returns a second output argument J that is the Jacobian matrix at the point X. If FUN returns a vector F of m components when X has length n, then J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (Note that the Jacobian J is the transpose of the gradient of F.)

X = FSOLVE(PROBLEM) solves the system defined in PROBLEM. PROBLEM is a structure with the function FUN in PROBLEM.objective, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'fsolve' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL.
The structure PROBLEM must have all the fields. [X,FVAL] = FSOLVE(FUN,X0,...) returns the value of the equations FUN at X. [X,FVAL,EXITFLAG] = FSOLVE(FUN,X0,...) returns an EXITFLAG that describes the exit condition of FSOLVE. Possible values of EXITFLAG and the corresponding exit conditions are 1 FSOLVE converged to a solution X. 2 Change in X smaller than the specified tolerance. 3 Change in the residual smaller than the specified tolerance. 4 Magnitude of search direction smaller than the specified tolerance. 0 Maximum number of function evaluations or iterations reached. -1 Algorithm terminated by the output function. -2 Algorithm seems to be converging to a point that is not a root. -3 Trust region radius became too small. -4 Line search cannot sufficiently decrease the residual along the current search direction.


[X,FVAL,EXITFLAG,OUTPUT] = FSOLVE(FUN,X0,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the algorithm used in OUTPUT.algorithm, the number of CG iterations (if used) in OUTPUT.cgiterations, the first-order optimality (if used) in OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,FVAL,EXITFLAG,OUTPUT,JACOB] = FSOLVE(FUN,X0,...) returns the Jacobian of FUN at X.

Examples
FUN can be specified using @:
    x = fsolve(@myfun,[2 3 4],optimset('Display','iter'))
where myfun is a MATLAB function such as:
    function F = myfun(x)
    F = sin(x);
FUN can also be an anonymous function:
    x = fsolve(@(x) sin(3*x),[1 4],optimset('Display','off'))

If FUN is parameterized, you can use anonymous functions to capture the problem-dependent parameters. Suppose you want to solve the system of nonlinear equations given in the function myfun, which is parameterized by its second argument c. Here myfun is an M-file function such as

    function F = myfun(x,c)
    F = [ 2*x(1) - x(2) - exp(c*x(1))
          -x(1) + 2*x(2) - exp(c*x(2))];

To solve the system of equations for a specific value of c, first assign the value to c. Then create a one-argument anonymous function that captures that value of c and calls myfun with two arguments. Finally, pass this anonymous function to FSOLVE:

    c = -1; % define parameter first
    x = fsolve(@(x) myfun(x,c),[-5;-5])

See also optimset, lsqnonlin, @, inline.

Reference page in Help browser
doc fsolve

=====================================================================
BINTPROG Binary integer programming.


BINTPROG solves the binary integer programming problem

    min f'*X   subject to:  A*X <= b, Aeq*X = beq

where the elements of X are binary integers, i.e., 0's or 1's.

X = BINTPROG(f) solves the problem min f'*X, where the elements of X are binary integers.

X = BINTPROG(f,A,b) solves the problem min f'*X subject to the linear inequalities A*X <= b, where the elements of X are binary integers.

X = BINTPROG(f,A,b,Aeq,beq) solves the problem min f'*X subject to the linear equalities Aeq*X = beq, the linear inequalities A*X <= b, where the elements of X are binary integers.

X = BINTPROG(f,A,b,Aeq,beq,X0) sets the starting point to X0. The starting point X0 must be binary integer and feasible, or it will be ignored.

X = BINTPROG(f,A,b,Aeq,beq,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Available options are BranchStrategy, Diagnostics, Display, NodeDisplayInterval, MaxIter, MaxNodes, MaxRLPIter, MaxTime, NodeSearchStrategy, TolFun, TolXInteger, and TolRLPFun.

X = BINTPROG(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a structure with the vector 'f' in PROBLEM.f, the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq, the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'bintprog' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.

[X,FVAL] = BINTPROG(...) returns the value of the objective function at X: FVAL = f'*X.

[X,FVAL,EXITFLAG] = BINTPROG(...) returns an EXITFLAG that describes the exit condition of BINTPROG. Possible values of EXITFLAG and the corresponding exit conditions are

   1  BINTPROG converged to a solution X.
   0  Maximum number of iterations exceeded.
  -2  Problem is infeasible.
  -4  MaxNodes reached without converging.
  -5  MaxTime reached without converging.
  -6  Number of iterations performed at a node to solve the LP-relaxation problem exceeded MaxRLPIter, without converging.

[X,FVAL,EXITFLAG,OUTPUT] = BINTPROG(...) returns a structure OUTPUT with the number of iterations in OUTPUT.iterations, the number of nodes explored in OUTPUT.nodes, the execution time (in seconds) in OUTPUT.time, the algorithm used in OUTPUT.algorithm, the branch strategy in OUTPUT.branchStrategy, the node search strategy in OUTPUT.nodeSrchStrategy, and the exit message in OUTPUT.message.
Example
    f = [-9; -5; -6; -4];
    A = [6 3 5 2; 0 0 1 1; -1 0 1 0; 0 -1 0 1];
    b = [9; 1; 0; 0];
    X = bintprog(f,A,b)

See also linprog.

Reference page in Help browser
doc bintprog

=====================================================================
LINPROG Linear programming.


X = LINPROG(f,A,b) attempts to solve the linear programming problem:

    min f'*x   subject to:  A*x <= b
     x

X = LINPROG(f,A,b,Aeq,beq) solves the problem above while additionally satisfying the equality constraints Aeq*x = beq.

X = LINPROG(f,A,b,Aeq,beq,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.

X = LINPROG(f,A,b,Aeq,beq,LB,UB,X0) sets the starting point to X0. This option is only available with the active-set algorithm. The default interior point algorithm will ignore any non-empty starting point.

X = LINPROG(f,A,b,Aeq,beq,LB,UB,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Options are Display, Diagnostics, TolFun, LargeScale, MaxIter. Currently, only 'final' and 'off' are valid values for the parameter Display when LargeScale is 'off' ('iter' is valid when LargeScale is 'on').

X = LINPROG(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a structure with the vector 'f' in PROBLEM.f, the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq, the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the start point in PROBLEM.x0, the options structure in PROBLEM.options, and solver name 'linprog' in PROBLEM.solver. Use this syntax to solve at the command line a problem exported from OPTIMTOOL. The structure PROBLEM must have all the fields.

[X,FVAL] = LINPROG(f,A,b) returns the value of the objective function at X: FVAL = f'*X.

[X,FVAL,EXITFLAG] = LINPROG(f,A,b) returns an EXITFLAG that describes the exit condition of LINPROG. Possible values of EXITFLAG and the corresponding exit conditions are

   1  LINPROG converged to a solution X.
   0  Maximum number of iterations reached.
  -2  No feasible point found.
  -3  Problem is unbounded.
  -4  NaN value encountered during execution of algorithm.
  -5  Both primal and dual problems are infeasible.
  -7  Magnitude of search direction became too small; no further progress can be made. The problem is ill-posed or badly conditioned.
[X,FVAL,EXITFLAG,OUTPUT] = LINPROG(f,A,b) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the type of algorithm used in OUTPUT.algorithm, the number of conjugate gradient iterations (if used) in OUTPUT.cgiterations, and the exit message in OUTPUT.message. [X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = LINPROG(f,A,b) returns the set of Lagrangian multipliers LAMBDA, at the solution: LAMBDA.ineqlin for the linear inequalities A, LAMBDA.eqlin for the linear equalities Aeq, LAMBDA.lower for LB, and LAMBDA.upper for UB. NOTE: the LargeScale (the default) version of LINPROG uses a primal-dual method. Both the primal problem and the dual problem must be feasible for convergence. Infeasibility messages of either the primal or dual, or both, are given as appropriate. The primal problem in standard form is min f'*x such that A*x = b, x >= 0. The dual problem is max b'*y such that A'*y + s = f, s >= 0. See also quadprog.
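To make the primal form above concrete, here is a minimal sketch; the coefficients are invented for illustration.

```matlab
% Maximize 5*x1 + 4*x2 by minimizing its negation, subject to A*x <= b.
f = [-5; -4];
A = [6 4; 1 2];   b = [24; 6];
lb = [0; 0];                        % x >= 0
[x, fval, exitflag, output, lambda] = ...
    linprog(f, A, b, [], [], lb, []);
% The optimum is x = [3; 1.5] with fval = -21 (objective value 21);
% lambda.ineqlin holds the multipliers for the rows of A.
```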


Reference page in Help browser
   doc linprog

=========================================

QUADPROG Quadratic programming.


X = QUADPROG(H,f,A,b) attempts to solve the quadratic programming
problem:

     min 0.5*x'*H*x + f'*x   subject to:  A*x <= b
      x

X = QUADPROG(H,f,A,b,Aeq,beq) solves the problem above while
additionally satisfying the equality constraints Aeq*x = beq.

X = QUADPROG(H,f,A,b,Aeq,beq,LB,UB) defines a set of lower and upper
bounds on the design variables, X, so that the solution is in the
range LB <= X <= UB. Use empty matrices for LB and UB if no bounds
exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if
X(i) is unbounded above.

X = QUADPROG(H,f,A,b,Aeq,beq,LB,UB,X0) sets the starting point to X0.

X = QUADPROG(H,f,A,b,Aeq,beq,LB,UB,X0,OPTIONS) minimizes with the
default optimization parameters replaced by values in the structure
OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET
for details. Used options are Display, Diagnostics, TolX, TolFun,
HessMult, LargeScale, MaxIter, PrecondBandWidth, TypicalX, TolPCG, and
MaxPCGIter. Currently, only 'final' and 'off' are valid values for the
parameter Display ('iter' is not available).

X = QUADPROG(Hinfo,f,A,b,Aeq,beq,LB,UB,X0,OPTIONS,P1,P2,...) passes
the problem-dependent parameters P1,P2,... directly to the HMFUN
function when OPTIMSET('HessMult',HMFUN) is set. HMFUN is provided by
the user. Pass empty matrices for A, b, Aeq, beq, LB, UB, X0, OPTIONS,
to use the default values.

X = QUADPROG(PROBLEM) finds the minimum for PROBLEM. PROBLEM is a
structure with matrix 'H' in PROBLEM.H, the vector 'f' in PROBLEM.f,
the linear inequality constraints in PROBLEM.Aineq and PROBLEM.bineq,
the linear equality constraints in PROBLEM.Aeq and PROBLEM.beq, the
lower bounds in PROBLEM.lb, the upper bounds in PROBLEM.ub, the start
point in PROBLEM.x0, the options structure in PROBLEM.options, and
solver name 'quadprog' in PROBLEM.solver. Use this syntax to solve at
the command line a problem exported from OPTIMTOOL. The structure
PROBLEM must have all the fields.
[X,FVAL] = QUADPROG(H,f,A,b) returns the value of the objective
function at X: FVAL = 0.5*X'*H*X + f'*X.

[X,FVAL,EXITFLAG] = QUADPROG(H,f,A,b) returns an EXITFLAG that
describes the exit condition of QUADPROG. Possible values of EXITFLAG
and the corresponding exit conditions are

    1  QUADPROG converged with a solution X.
    3  Change in objective function value smaller than the specified
       tolerance.
    4  Local minimizer found.
    0  Maximum number of iterations exceeded.
   -2  No feasible point found.
   -3  Problem is unbounded.
   -4  Current search direction is not a descent direction; no
       further progress can be made.
   -7  Magnitude of search direction became too small; no further
       progress can be made. The problem is ill-posed or badly
       conditioned.
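A small convex QP shows the calling sequence and the exit flags in practice. This is a minimal sketch; the data is a standard two-variable example, not taken from the help text:

```matlab
% Minimize 0.5*x'*H*x + f'*x subject to A*x <= b, x >= 0
H  = [1 -1; -1 2];      % positive definite Hessian
f  = [-2; -6];
A  = [1 1; -1 2; 2 1];
b  = [2; 2; 3];
lb = zeros(2,1);
[x, fval, exitflag] = quadprog(H, f, A, b, [], [], lb);
% The first two inequalities are active at the optimum:
% x = [0.6667; 1.3333], fval = -8.2222, exitflag = 1
```

Because H is positive definite, the problem is strictly convex and the returned X is the unique global minimizer.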


[X,FVAL,EXITFLAG,OUTPUT] = QUADPROG(H,f,A,b) returns a structure
OUTPUT with the number of iterations taken in OUTPUT.iterations, the
type of algorithm used in OUTPUT.algorithm, the number of conjugate
gradient iterations (if used) in OUTPUT.cgiterations, a measure of
first-order optimality (large-scale method only) in
OUTPUT.firstorderopt, and the exit message in OUTPUT.message.

[X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = QUADPROG(H,f,A,b) returns the set
of Lagrangian multipliers LAMBDA, at the solution: LAMBDA.ineqlin for
the linear inequalities A, LAMBDA.eqlin for the linear equalities Aeq,
LAMBDA.lower for LB, and LAMBDA.upper for UB.

See also linprog, lsqlin.

Reference page in Help browser
   doc quadprog

==================================================================

OPTIMSET Create/alter optimization OPTIONS structure.


OPTIONS = OPTIMSET('PARAM1',VALUE1,'PARAM2',VALUE2,...) creates an
optimization options structure OPTIONS in which the named parameters
have the specified values. Any unspecified parameters are set to []
(parameters with value [] indicate to use the default value for that
parameter when OPTIONS is passed to the optimization function). It is
sufficient to type only the leading characters that uniquely identify
the parameter. Case is ignored for parameter names.

NOTE: For values that are strings, the complete string is required.

OPTIONS = OPTIMSET(OLDOPTS,'PARAM1',VALUE1,...) creates a copy of
OLDOPTS with the named parameters altered with the specified values.

OPTIONS = OPTIMSET(OLDOPTS,NEWOPTS) combines an existing options
structure OLDOPTS with a new options structure NEWOPTS. Any parameters
in NEWOPTS with non-empty values overwrite the corresponding old
parameters in OLDOPTS.

OPTIMSET with no input arguments and no output arguments displays all
parameter names and their possible values, with defaults shown in {}
when the default is the same for all functions that use that option --
use OPTIMSET(OPTIMFUNCTION) to see options for a specific function.

OPTIONS = OPTIMSET (with no input arguments) creates an options
structure OPTIONS where all the fields are set to [].

OPTIONS = OPTIMSET(OPTIMFUNCTION) creates an options structure with
all the parameter names and default values relevant to the
optimization function named in OPTIMFUNCTION. For example,
optimset('fminbnd') or optimset(@fminbnd) returns an options
structure containing all the parameter names and default values
relevant to the function 'fminbnd'.
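The merging form OPTIMSET(OLDOPTS,NEWOPTS) described above can be sketched as follows; the parameter choices here are arbitrary illustrations:

```matlab
% Start from the fminsearch defaults, then overlay a second structure.
oldopts = optimset('fminsearch');                  % defaults for fminsearch
newopts = optimset('TolX',1e-8,'Display','iter');  % only these two set
opts    = optimset(oldopts, newopts);              % non-empty values in
                                                   % newopts overwrite oldopts
% opts now has TolX = 1e-8 and Display = 'iter'; every other
% parameter keeps its fminsearch default.
```

This is convenient for keeping a base configuration and overriding only a few parameters per run.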
OPTIMSET PARAMETERS for MATLAB

  Display     - Level of display [ off | iter | notify | final ]
  MaxFunEvals - Maximum number of function evaluations allowed
                [ positive integer ]
  MaxIter     - Maximum number of iterations allowed
                [ positive scalar ]
  TolFun      - Termination tolerance on the function value
                [ positive scalar ]
  TolX        - Termination tolerance on X [ positive scalar ]
  FunValCheck - Check for invalid values, such as NaN or complex,
                from user-supplied functions [ {off} | on ]
  OutputFcn   - Name(s) of output function [ {[]} | function ]
                All output functions are called by the solver after
                each iteration.
  PlotFcns    - Name(s) of plot function [ {[]} | function ]
                Function(s) used to plot various quantities in every
                iteration.

Note: To see OPTIMSET parameters for the OPTIMIZATION TOOLBOX (if you
have the Optimization Toolbox installed), type
     help optimoptions

Examples

  To create options with the default options for FZERO
    options = optimset('fzero');

  To create an options structure with TolFun equal to 1e-3
    options = optimset('TolFun',1e-3);

  To change the Display value of options to 'iter'
    options = optimset(options,'Display','iter');

See also optimget, fzero, fminbnd, fminsearch, lsqnonneg.

Overloaded methods:
   cgoptimstore/optimset
   ParameterEstimator.optimset
   sroengine.optimset
   ResponseOptimizer.optimset

Reference page in Help browser
   doc optimset

================================================

OPTIMGET Get OPTIM OPTIONS parameters.


VAL = OPTIMGET(OPTIONS,'NAME') extracts the value of the named
parameter from optimization options structure OPTIONS, returning an
empty matrix if the parameter value is not specified in OPTIONS. It is
sufficient to type only the leading characters that uniquely identify
the parameter. Case is ignored for parameter names. [] is a valid
OPTIONS argument.

VAL = OPTIMGET(OPTIONS,'NAME',DEFAULT) extracts the named parameter as
above, but returns DEFAULT if the named parameter is not specified
(is []) in OPTIONS. For example

  val = optimget(opts,'TolX',1e-4);

returns val = 1e-4 if the TolX property is not specified in opts.

See also optimset.

Overloaded methods:
   ParameterEstimator.optimget
   sroengine.optimget
   ResponseOptimizer.optimget

Reference page in Help browser
   doc optimget

=====================================================================
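A short round trip ties OPTIMSET and OPTIMGET together. This is a minimal sketch with arbitrary tolerance values:

```matlab
opts = optimset('TolX',1e-6);           % only TolX is set; all other
                                        % fields remain []
tolx   = optimget(opts,'TolX');         % returns 1e-6
tolfun = optimget(opts,'TolFun',1e-4);  % TolFun is [] in opts, so the
                                        % DEFAULT argument 1e-4 is
                                        % returned instead
```

Supplying the DEFAULT argument this way lets solver code read options without first checking whether the caller set them.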