Nonlinear Stochastic Programming by the Monte-Carlo method Lecture 4 Leonidas Sakalauskas Institute...
Nonlinear Stochastic Programming by the Monte-Carlo method
Lecture 4
Leonidas Sakalauskas, Institute of Mathematics and Informatics, Vilnius, Lithuania. EURO Working Group on Continuous Optimization
Content
- Stochastic unconstrained optimization
- Monte Carlo estimators
- Statistical testing of optimality
- Gradient-based stochastic algorithm
- Rule for Monte-Carlo sample size regulation
- Counterexamples
- Nonlinear stochastic constrained optimization
- Convergence analysis
- Counterexample
Stochastic unconstrained optimization

Let us consider the stochastic unconstrained optimization problem
$$F(x) = \mathbf{E}\, f(x, \omega) \to \min_{x \in D \subseteq R^n},$$
where
- $f: R^n \times \Omega \to R$ is a random function,
- $\omega$ is an elementary event in the probability space $(\Omega, \Sigma, \mathbf{P}_x)$,
- $\mathbf{P}_x$ is the measure defined by the probability density function $p: R^n \to R$.
Monte-Carlo samples

We assume here that a Monte-Carlo sample of a certain size $N$,
$$Y = (y^1, y^2, \dots, y^N),$$
is provided for any $x \in R^n$, and the sampling estimator of the objective function is computed:
$$\tilde F(x) = \frac{1}{N} \sum_{j=1}^{N} f(x, y^j),$$
as well as the sampling variance, which is useful for evaluating the accuracy of the estimator:
$$\tilde D^2(x) = \frac{1}{N-1} \sum_{j=1}^{N} \bigl( f(x, y^j) - \tilde F(x) \bigr)^2.$$
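These two estimators can be sketched in a few lines of code. This is a hypothetical example: the random function $f(x,y) = \|x - y\|^2$ with $y$ standard normal is chosen only so that the true objective $F(x) = \|x\|^2 + n$ is known for checking.

```python
import numpy as np

def mc_objective(f, x, sample):
    """Sampling estimate F~(x) and sampling variance D~^2(x)."""
    vals = np.array([f(x, y) for y in sample])
    F = vals.mean()            # F~(x) = (1/N) sum_j f(x, y^j)
    D2 = vals.var(ddof=1)      # D~^2(x) = (1/(N-1)) sum_j (f(x, y^j) - F~(x))^2
    return F, D2

# Hypothetical test function: f(x, y) = ||x - y||^2, y ~ N(0, I),
# so the true objective is F(x) = ||x||^2 + n = 7 here.
rng = np.random.default_rng(0)
f = lambda x, y: float(np.sum((x - y) ** 2))
x = np.array([1.0, 2.0])
sample = rng.standard_normal((10_000, 2))
F, D2 = mc_objective(f, x, sample)
```

The standard error of $\tilde F(x)$ is then $\sqrt{\tilde D^2(x)/N}$, which is what the stopping rule below uses.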
Gradient

The gradient is evaluated using the same random sample:
$$\tilde g(x) = \frac{1}{N} \sum_{j=1}^{N} g(x, y^j).$$

Covariance matrix

The sampling covariance matrix
$$A(x) = \frac{1}{N} \sum_{j=1}^{N} \bigl( g(x, y^j) - \tilde g(x) \bigr) \bigl( g(x, y^j) - \tilde g(x) \bigr)^T$$
is computed later on for normalising the gradient estimator.
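Continuing the same hypothetical test function ($f = \|x - y\|^2$, hence $g(x, y) = 2(x - y)$ and the true gradient is $2x$), the gradient estimate and its sampling covariance can be sketched as:

```python
import numpy as np

def mc_gradient(g, x, sample):
    """Gradient estimate g~(x) and sampling covariance matrix A(x)."""
    G = np.array([g(x, y) for y in sample])   # one gradient per scenario y^j
    g_bar = G.mean(axis=0)                    # g~(x) = (1/N) sum_j g(x, y^j)
    A = np.cov(G, rowvar=False)               # covariance of the per-scenario gradients
    return g_bar, A

# Hypothetical test function as before: g(x, y) = 2(x - y), true gradient 2x = (2, 4)
rng = np.random.default_rng(0)
g = lambda x, y: 2.0 * (x - y)
x = np.array([1.0, 2.0])
sample = rng.standard_normal((10_000, 2))
g_bar, A = mc_gradient(g, x, sample)
```

Note that $A$ here estimates the per-scenario covariance; the covariance of the averaged estimator $\tilde g(x)$ is $A/N$, which is why the test statistic below carries the factor $N_t - n$.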
Gradient search procedure

Let some initial point $x^0 \in D \subseteq R^n$ be given, let a random sample of a certain initial size $N_0$ be generated at this point, and let the Monte-Carlo estimates be computed. The iterative stochastic procedure of gradient search can be used further:
$$x^{t+1} = x^t - \rho\, \tilde g(x^t),$$
where $\rho > 0$ is a step length.
Monte-Carlo sample size problem

There is no great need to compute the estimators with high accuracy when starting the optimisation, because at that stage it suffices to evaluate only approximately the direction leading to the optimum. Therefore, one can use rather small samples at the beginning of the search and increase the sample size later on, so as to obtain the estimate of the objective function with the desired accuracy exactly at the time of deciding whether the solution of the optimisation problem has been found. The following rule for regulating the sample size is proposed:
$$N_{t+1} = \min\!\left[ \max\!\left( N_{\min},\; \frac{n \cdot \mathrm{Fish}(\gamma, n, N_t - n)}{(\tilde G(x^t))^T (A(x^t))^{-1} \tilde G(x^t)} \right),\; N_{\max} \right],$$
where $\mathrm{Fish}(\gamma, n, N_t - n)$ is the $\gamma$-quantile of the Fisher distribution.
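The sample-size rule can be sketched as follows. This is a hypothetical snippet: the constant `fish_q` stands in for the quantile $\mathrm{Fish}(\gamma, n, N_t - n)$, which would normally be taken from an F-distribution table or from `scipy.stats.f.ppf`, and the gradient and covariance values are made up.

```python
import numpy as np

def next_sample_size(g_bar, A, n, fish_q, N_min=100, N_max=100_000):
    """N_{t+1} = min(max(N_min, n*Fish / (g~' A^{-1} g~)), N_max)."""
    hotelling = float(g_bar @ np.linalg.solve(A, g_bar))  # g~' A^{-1} g~
    N_next = n * fish_q / max(hotelling, 1e-12)           # guard against division by ~0
    return int(min(max(N_min, N_next), N_max))

# Hypothetical numbers: near the optimum the gradient estimate is small,
# so the rule asks for a larger sample.
g_bar = np.array([0.005, -0.002])
A = np.diag([0.004, 0.004])
N = next_sample_size(g_bar, A, n=2, fish_q=3.0)
```

The smaller the normalised gradient $\tilde G^T A^{-1} \tilde G$ becomes, the larger the next sample, until the bound $N_{\max}$ is reached.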
Statistical testing of the optimality hypothesis

The optimality hypothesis can be accepted for some point $x^t$ with significance $\gamma$ if the following condition is satisfied:
$$T^2 = \frac{(N_t - n)\, (\tilde G(x^t))^T (A(x^t))^{-1}\, \tilde G(x^t)}{n} \le \mathrm{Fish}(\gamma, n, N_t - n).$$
Next, we can use the asymptotic normality again and decide that the objective function is estimated with a permissible accuracy $\varepsilon$ if its confidence bound does not exceed this value:
$$\eta_\beta \sqrt{\tilde D^2(x^t) / N_t} \le \varepsilon,$$
where $\eta_\beta$ is the quantile of the standard normal distribution.
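Both checks can be combined into a single stopping test. This is a hypothetical sketch: `fish_q` and `eta` stand in for the Fisher and normal quantiles (e.g. from `scipy.stats.f.ppf` and `scipy.stats.norm.ppf`), and the input values are made up.

```python
import numpy as np

def optimality_test(g_bar, A, D2, N, n, fish_q, eta, eps):
    """Accept the optimality hypothesis when the Hotelling-type statistic
    is below the Fisher quantile AND the objective's confidence bound
    is within the permissible accuracy eps."""
    T2 = (N - n) * float(g_bar @ np.linalg.solve(A, g_bar)) / n
    grad_ok = T2 <= fish_q                     # T^2 <= Fish(gamma, n, N - n)
    acc_ok = eta * np.sqrt(D2 / N) <= eps      # eta_beta * D~ / sqrt(N) <= eps
    return grad_ok and acc_ok

# Hypothetical values near an optimum: tiny gradient, moderate variance.
g_bar = np.array([0.001, -0.002])
A = np.eye(2) * 4.0                            # per-scenario gradient covariance
stop = optimality_test(g_bar, A, D2=24.0, N=10_000, n=2,
                       fish_q=3.0, eta=1.96, eps=0.2)
```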
Importance sampling

Let us consider an application of SP to the estimation of probabilities of rare events. The standard normal tail probability can be rewritten as an expectation with respect to an auxiliary sampling density:
$$P(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt = \int_x^{\infty} \frac{e^{-t^2/2}}{\sqrt{2\pi}\, g(a,t)}\; g(a,t)\, dt,$$
where the sampling density is the shifted exponential
$$g(a,t) = a\, e^{-a(t - x)}, \qquad t \ge x.$$
Importance sampling

Assume $a$ is the parameter that should be chosen. The second moment of the importance-sampling estimator is
$$D^2(x,a) = \int_x^{\infty} \left( \frac{e^{-t^2/2}}{\sqrt{2\pi}\, g(a,t)} \right)^2 g(a,t)\, dt = \frac{e^{-ax}}{2\pi a} \int_x^{\infty} e^{-t^2 + a t}\, dt = \frac{e^{a^2/4 - ax}}{2 a \sqrt{\pi}}\; P\!\left( \sqrt{2}\,(x - a/2) \right).$$
Importance sampling

Select the parameter $a$ in order to minimize the variance
$$D^2(x,a) - P(x)^2.$$

[Figure: the variance of the importance-sampling estimator as a function of $a$, plotted for $a$ in 0–8; the minimum lies near $a \approx x$.]
Importance sampling

Iterative results for estimating $P(x)$ at $x = 5$ (true value $P(x) = 2.86650 \times 10^{-7}$):

t | a_t   | Sample size | accuracy (%) | P̃(x)
1 | 5.000 | 1000        | 16.377       | 2.48182×10⁻⁷
2 | 5.092 | 51219       | 2.059        | 2.87169×10⁻⁷
3 | 5.097 | 217154      | 1.000        | 2.87010×10⁻⁷
Manpower-planning problem
The employer must decide upon a base level of regular staff at various skill levels. The recourse actions available are regular-staff overtime or outside temporary help, used to meet the unknown demand for services at minimum cost (Ermoliev and Wets, 1988).
Manpower-planning problem
$x_j$: base level of regular staff at skill level j = 1, 2, 3,
$y_{j,t}$: amount of overtime help,
$z_{j,t}$: amount of temporary help,
$c_j$: cost of regular staff at skill level j = 1, 2, 3,
$q_j$: cost of overtime,
$r_j$: cost of temporary help,
$w_t$: demand for services at period t,
$\omega_t$: absentee rate for regular staff at time t,
$\alpha_j$: ratio of the amount of skill level j required per amount of level j − 1.
Manpower-planning problem

The problem is to choose the number of staff $x = (x_1, x_2, x_3)$ on the three levels in order to minimize the expected costs:
$$F(x) = \sum_{j=1}^{3} c_j x_j + \mathbf{E} \sum_{t=1}^{12} \sum_{j=1}^{3} \bigl( q_j y_{j,t} + r_j z_{j,t} \bigr) \to \min$$
subject to, for $j = 1, 2, 3$ and $t = 1, 2, \dots, 12$:
$$\sum_{j=1}^{3} (y_{j,t} + z_{j,t}) \ge w_t - (1 - \omega_t) \sum_{j=1}^{3} x_j,$$
$$y_{j,t} \le 0.2\, x_j,$$
$$\alpha_j (x_{j-1} + y_{j-1,t} + z_{j-1,t}) - (x_j + y_{j,t} + z_{j,t}) \ge 0,$$
$$x_j \ge 0, \quad y_{j,t} \ge 0, \quad z_{j,t} \ge 0.$$
The demands are normal, $w_t \sim N(\mu_t, \sigma_t^2)$, with the standard deviation scaled by the parameter $l$.
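A Monte-Carlo estimate of the expected cost can be sketched for a simplified single-skill-level version of this model. All the data below are hypothetical, and the greedy recourse rule (cover the shortage with overtime first, capped at 20% of regular staff, then with temporary help) stands in for the recourse LP of the full three-level instance:

```python
import numpy as np

def expected_cost(x, mu, sigma, c, q, r, omega, N, rng):
    """Monte-Carlo estimate of the staffing cost, single skill level."""
    cost = np.zeros(N)
    for t in range(len(mu)):                    # 12 planning periods
        w = rng.normal(mu[t], sigma[t], size=N) # random demand w_t
        avail = (1.0 - omega) * x               # staff present after absences
        short = np.maximum(w - avail, 0.0)      # unmet demand
        y = np.minimum(short, 0.2 * x)          # overtime, y <= 0.2 x
        z = short - y                           # temporary help covers the rest
        cost += q * y + r * z
    return c * x + cost.mean()

# Hypothetical data; the paper's instance (three skill levels, real costs
# and demands) is not reproduced on the slides.
rng = np.random.default_rng(0)
mu = np.full(12, 100.0)
sigma = np.full(12, 10.0)
F = expected_cost(x=100.0, mu=mu, sigma=sigma,
                  c=20.0, q=3.0, r=5.0, omega=0.05, N=20_000, rng=rng)
```

In this optimization setting, one would wrap `expected_cost` into the stochastic gradient procedure above and search over the base staff level $x$.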
Manpower-planning problem

l  | x1   | x2   | x3   | F
0  | 9222 | 5533 | 1106 | 94.899
1  | 9222 | 5533 | 1106 | 94.899
10 | 9376 | 5616 | 1106 | 96.832
30 | 9452 | 5672 | 1036 | 96.614

Manpower amounts and costs in USD, with a confidence interval of 100 USD.
Nonlinear Stochastic Programming

The constrained continuous (nonlinear) stochastic programming problem is, in general,
$$F_0(x) = \mathbf{E} f_0(x, \omega) = \int_{R^n} f_0(x, z)\, p(z)\, dz \to \min,$$
$$F_1(x) = \mathbf{E} f_1(x, \omega) = \int_{R^n} f_1(x, z)\, p(z)\, dz \le 0,$$
$$x \in X.$$
Nonlinear Stochastic Programming

Let us define the Lagrange function
$$L(x, \lambda) = F_0(x) + \lambda F_1(x), \qquad l(x, \lambda, \omega) = f_0(x, \omega) + \lambda f_1(x, \omega), \qquad L(x, \lambda) = \mathbf{E}\, l(x, \lambda, \omega).$$
Nonlinear Stochastic Programming

Procedure for stochastic optimization:
$$x^{t+1} = x^t - \rho\, \tilde L_x(x^t, \lambda^t),$$
$$\lambda^{t+1} = \max\bigl[ 0,\; \lambda^t + \rho\, \bigl( \tilde F_1(x^t) + \tilde D_1(x^t) \bigr) \bigr],$$
where $x^0$ and $\lambda^0 \ge 0$ are initial values, $\rho > 0$ is a parameter of optimization, and $N_0$ is the initial sample size.
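A toy run of this primal-dual procedure can be sketched as follows. The problem is hypothetical: minimize $\mathbf{E}(x - \xi)^2$ subject to $\mathbf{E}[\xi + 1 - x] \le 0$ with $\xi \sim N(0, 1)$, whose solution is $x^* = 1$, $\lambda^* = 2$; the sample size is kept fixed instead of being regulated, and the variance correction in the multiplier update is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
x, lam, rho, N = 0.0, 0.0, 0.05, 5000

for t in range(1000):
    xi = rng.standard_normal(N)             # Monte-Carlo sample at x^t
    grad_L = 2.0 * (x - xi.mean()) - lam    # grad of l = f0 + lam*f1, averaged
    F1 = xi.mean() + 1.0 - x                # constraint estimate F~_1(x^t)
    x = x - rho * grad_L                    # x^{t+1} = x^t - rho * grad L
    lam = max(0.0, lam + rho * F1)          # lambda^{t+1} = max[0, ...]
```

The pair $(x^t, \lambda^t)$ drifts toward the saddle point of the Lagrange function, with residual fluctuations of order $1/\sqrt{N}$.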
Conditions and testing of optimality

The optimality conditions are
$$\nabla F_0(x) + \lambda \nabla F_1(x) = 0, \qquad \lambda F_1(x) = 0, \qquad F_1(x) \le 0, \qquad \lambda \ge 0.$$
The statistical test of optimality uses the Hotelling-type statistic for the Lagrange-function gradient,
$$(N_t - n)\, (\tilde L_x)^T A^{-1}\, \tilde L_x \,/\, n \le \mathrm{Fish}(\gamma, n, N_t - n),$$
together with the confidence bounds of the estimates of the objective and constraint functions:
$$\eta_\beta \sqrt{\tilde D_i^2(x^t) / N_t} \le \varepsilon_i, \qquad i = 0, 1.$$
Analysis of Convergence

In general, the sample size is increased as a geometric progression:
$$N_{t+1} \ge Q\, N_t, \qquad Q > 1, \qquad t = 0, 1, \dots$$
Wrap-Up and Conclusions

The approach presented here is grounded on the stopping procedure and the rule for adaptive regulation of the Monte-Carlo sample size, which take the accuracy of statistical modeling into account.

Several stochastic gradient estimators were compared by computer simulation, studying the workability of the estimators for testing the optimality hypothesis by statistical criteria. It was demonstrated that the minimal Monte-Carlo sample size necessary to approximate the distribution of the Hotelling statistic, computed from the gradient estimators, by the Fisher distribution depends on the approximation approach and on the dimensionality of the task.
Wrap-Up and Conclusions

The computational results show that the approach developed provides a reliable test of the optimality hypothesis over a wide range of dimensionalities of the stochastic optimization problem (2 < n < 100).

The termination procedure proposed allows us to test the optimality hypothesis and to reliably evaluate the confidence intervals of the objective and constraint functions.