
Robust Optimization Approaches for the Design

of an Electric Machine used for E-Mobility

Zeger Bontinck 1,2, Oliver Lass 3, Sebastian Schöps 1,2, Stefan Ulbrich 3, Oliver Rain 4

1 Graduate School of Computational Engineering, Technische Universität Darmstadt, Dolivostraße 15, 64293 Darmstadt, Germany

2 Institut für Theorie Elektromagnetischer Felder, Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt, Germany

3 Chair of Nonlinear Optimization, Department of Mathematics, Technische Universität Darmstadt, Dolivostraße 15, 64293 Darmstadt, Germany

4 Robert Bosch GmbH, 70049 Stuttgart, Germany

Abstract

In this paper two optimization approaches are analyzed, namely, a deterministic optimization and an optimization considering stochastic quantities. Additionally, for a particular setting, an equivalence is shown. Since the optimization parameters can have deviations, a robust optimum is favored. In the deterministic approach, the worst-case deviation on the quantities is considered. The deterministic approach presented is a gradient-based method and needs first and second order derivatives. The second approach makes use of uncertainty quantification (UQ), accounting for the standard deviation. Here only first order derivatives are needed. It is shown that both settings are equivalent when a linearization is carried out. A numerically efficient approach is obtained by introducing an affine decomposition and model order reduction. The two approaches are used to minimize the amount of permanent magnet material of a permanent magnet synchronous machine. Both approaches reduce the size of the magnets. However, the UQ based optimization approach is less pessimistic with respect to deviations and yields smaller magnets.

Keywords: Finite element analysis, gradient methods, Monte Carlo methods, electric machines

1 Introduction

Electrical machines are often subjected to optimization in order to improve their performance and to reduce material costs. The quantity of interest might be calculated from a partial differential equation (PDE) describing the physical phenomena in the machine. This PDE might depend on a set of parameters.

arXiv:1712.01536v1 [math.OC] 5 Dec 2017

However, during production, uncertainties are introduced, causing the parameters to be uncertain. These uncertainties cause the original optimal solution to become suboptimal or even infeasible. Robust optimization can alleviate this issue, since it looks for solutions that are less influenced by these variations. First steps to develop a methodology for robust design were taken by Taguchi (see e.g. [1]). In this approach, beside the control parameters, the variations are also considered by including noise factors. An overview of different robust optimization methods can be found in [2]. The authors discuss deterministic and evolutionary optimization algorithms. The former often make use of gradients to find local minima. The advantage is that typically only a few steps are needed to find the minimum [3]. Because of this, this paper only considers deterministic methods. Robustification is achieved by robust worst-case optimization and the mean-variance approach [4]. In the worst-case optimization, the probability density functions (PDFs) of the random parameters are not considered, since the uncertainties are restricted to a bounded uncertainty set [5]. In the mean-variance approach, the PDFs are considered. The random parameters are described by continuous PDFs. The mean value and the variance are given by integration. The approximation of these integrals is achieved by quadrature. For an example in an optimization setting, one is referred to [6]. The application of quadrature can be avoided by using approximations based on a linearization of the cost function [4].

This paper aims at finding the relations between the above given procedures and addresses equivalences between robust worst-case optimization and the mean-variance approach. To illustrate these findings the methods are applied to the example of a permanent magnet (PM) synchronous machine (PMSM), similar to the motor used in the drive unit of the Bosch eBike (Figure 1). PMSMs are particularly popular because of their high power density and efficiency. Many aspects of PMSMs have been considered for optimization, e.g., minimization of material costs [7]. Beside topological rotor shape optimization [8], also the optimization of the shape of the permanent magnets (PMs) [9, 10] has been considered. The PMs are constructed from rare earth elements, as is the case for, e.g., NdFeB magnets. The separation of these rare earth elements is environmentally polluting [11]. Therefore in this paper the focus of optimization is on the PMs. In particular, the size of the magnets will be minimized while maintaining a prescribed electromotive force (EMF).

The paper is structured as follows: In section 2 different measures for sensitivities are introduced. The theoretical setting for the optimization problems, introduced in section 3, is applied to the PM finite element model in section 4. The results are shown and discussed in section 5. Finally, in the last section conclusions are drawn.

2 Sensitivities

Consider the PDE

L(u(P), P) = f(P),  (1)


Figure 1: Stator and rotor parts of the machine used in the drive unit of the Bosch eBike.

on a suitable bounded domain D ⊂ R^3 with Dirichlet boundary conditions and with L an elliptic operator. The problem depends on the parameters P ∈ P, which we assume to stem from a bounded parametric domain P ⊂ R^{N_P}. The unique solution is given by u : P → H^1_0(D) and L is an elliptic operator L : P × H^1_0(D) → H^{-1}(D) with right-hand side f : P → H^{-1}(D). Let H : H^1_0 → R be a linear functional describing our quantity of interest (QoI). In the classical case

L(u(P), P) := div(a(u, P) ∇u)

with an affine parameter dependence of a on P, the solution u is analytic, cf. [12, 13]. However, in practice lower regularity arises if a has arbitrary dependencies, e.g. on the gradient ∇u, see e.g. [14]. Let us assume that the solution is well-behaved, i.e., the solution u depends analytically on the parameters.

The parametrization in terms of P is important for optimizing the PDE (1) and for the quantification of uncertainties. In either case, sensitivities of the linear QoI h(P) := H(u(P)) are needed. Since it inherits the smoothness of the solution, standard concepts can be applied.

2.1 Local sensitivity analysis

Let us define, similar to [12], a set of sequences of nonnegative integers

F := {α = (α_1, α_2, ...) : α_i ∈ N ∧ α_i ≠ 0 for a finite number of i},

so that |α| = Σ_{i≥1} |α_i| is finite if and only if α ∈ F. A partial derivative operator ∂^α is then defined as

∂^α = ∂^{|α|} / (∂^{α_1}P_1 ... ∂^{α_{N_P}}P_{N_P}),

with α ∈ F supported in {1, ..., N_P}. A first tool for sensitivity analysis relies on sensitivity equations. They require the calculation of the derivatives


of the QoI:

∂^α H(u(P)) = H(s_α),  (2)

where s_α := ∂^α u(P). With a slight abuse of notation, we denote the first order sensitivities s = [s_1, s_2, ..., s_{N_P}] with s_i := ∂u(P)/∂P_i.

Using this definition, one can introduce a first order Taylor expansion around P ∈ P, where ∆ = (∆_1, ..., ∆_{N_P})^⊤,

H(u(P + ∆)) = H(u(P)) + Σ_{i=1}^{N_P} H(s_i) ∆_i + O(∆_i^2),  (3)

to approximate the QoI H in the orthotopic neighborhood

U_∞ := {∆ ∈ R^{N_P} | ∆_l ≤ ∆_i ≤ ∆_u, i = 1, ..., N_P} = {∆ ∈ R^{N_P} | ‖D^{-1}∆‖_∞ ≤ 1}.  (4)

Lower and upper bounds for ∆_i are given by ∆_l and ∆_u; D is an implicitly defined scaling matrix. Let us define a reduced parameter space

P_∞ := {P ∈ P | P + ∆ ∈ P, ∆ ∈ U_∞}.
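As a sanity check of the expansion (3), the sketch below compares a first order Taylor approximation built from finite-difference sensitivities with the exact value of a toy function; the function h, the expansion point and the perturbation are invented stand-ins for illustration, not the machine model.

```python
import numpy as np

def h(P):
    # smooth stand-in for the QoI h(P) = H(u(P)); not the machine model
    return np.sin(P[0]) * P[1] ** 2

def sensitivities(P, eps=1e-6):
    # central finite differences as a stand-in for the sensitivity equations
    s = np.zeros_like(P)
    for i in range(P.size):
        e = np.zeros_like(P)
        e[i] = eps
        s[i] = (h(P + e) - h(P - e)) / (2 * eps)
    return s

Pbar = np.array([0.8, 1.5])
delta = np.array([0.01, -0.02])

# first order Taylor expansion around Pbar, cf. eq. (3)
lin = h(Pbar) + sensitivities(Pbar) @ delta
exact = h(Pbar + delta)
print(abs(exact - lin))  # remainder is O(||delta||^2)
```

The remainder scales quadratically with the perturbation, consistent with the O(∆_i^2) term in (3).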

2.2 Global sensitivity analysis

In the global sensitivity analysis the deviation is treated in a stochastic setting. Therefore it is assumed that P = P(ω) are independent and identically distributed random variables on the probability space (Ω, Σ, P), where Ω is the set of possible outcomes, Σ the sigma algebra and P the probability measure. As a consequence, problem (1) becomes stochastic, i.e., find a stochastic function u(P) : Ω × D → R such that it almost surely holds [13]:

L(u(P(ω)), P(ω)) = f(P(ω)).  (5)

By abuse of notation one writes for the unknown u(ω) = u(P(ω)) and for the QoI H(ω) = H(u(ω)), where ω depicts the stochastic nature of a quantity.

For continuous distributions the expectation value E of the QoI or any other stochastic function is defined as

E[H] = ∫_Ω H(ω) P(dω),

and more generally the k-th order non-centered moment M as

M_k[H] = ∫_Ω H^k(ω) P(dω).  (6)

Global sensitivities can be defined as the first order Sobol coefficients [15]

S_i[H] := Var_i[H] / Var[H],

where Var[H] is the centered M_2[H] and Var_i[H] := Var[E[H | P_i]]. The meaning of the inner expectation value is that the mean value of H is taken by considering all N_P parameters as random except for P_i, which is kept fixed.

The solution of the probabilistic integrals is typically not available in closed form and one has to carry out numerical quadrature, as will be discussed in Section 4.
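For intuition, the first order Sobol coefficient S_1 of a hypothetical linear QoI can be estimated by brute-force Monte Carlo, nesting the conditional expectation E[H | P_1]; the toy function and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def H(P1, P2):
    # toy linear QoI; for H = 3*P1 + P2 with iid uniform parameters the
    # exact first order Sobol coefficient is S1 = 9/(9 + 1) = 0.9
    return 3.0 * P1 + P2

# Var_1[H] = Var[E[H | P1]]: inner mean over P2 for each fixed outer P1
P1 = rng.uniform(0.0, 1.0, 5_000)
P2 = rng.uniform(0.0, 1.0, 1_000)
cond_mean = H(P1[:, None], P2[None, :]).mean(axis=1)
var_1 = cond_mean.var()

# total variance from an independent joint sample
Q = rng.uniform(0.0, 1.0, (200_000, 2))
var_tot = H(Q[:, 0], Q[:, 1]).var()

S1 = var_1 / var_tot
print(S1)  # close to the exact value 0.9
```

The nested sampling makes the cost of Sobol estimation apparent, which is one reason the paper later resorts to linearizations and model order reduction.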

2.3 Linearization with respect to the random parameter

In [2] and the references therein, the approaches described in the previous two sections are combined. Assume P̄ = E[P(ω)] and ∆′ = ∆′(ω) to be a random variable, so that E[∆′_i(ω)] = 0. Then one finds for the expectation value, µ_H = E[H(u(P̄ + ∆′))], that

µ_H = H(u(P̄)) + O(∆′_i^2),  (7)

where (3) is used for the linearization. A similar reasoning can be applied to the variance, σ_H^2 = Var[H(u(P̄ + ∆′))]:

σ_H^2 = Var[H(s) · ∆′] + O(∆′_i^3)  (8)
      = Σ_{i=1}^{N_P} H^2(s_i) Var(∆′_i) + O(∆′_i^3).  (9)

Neglecting the higher order terms, the standard deviation (std) becomes

σ_H ≈ ‖std[∆′] ⊙ H(s)‖_2,  (10)

where ⊙ depicts the element-wise product and H is applied component-wise to s.
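The accuracy of the linearized standard deviation (10) can be illustrated on a toy function; h, the expansion point and the uniform perturbation bounds below are made-up values, not quantities from the machine model.

```python
import numpy as np

rng = np.random.default_rng(1)

def h(P):
    # smooth stand-in for the QoI
    return P[0] ** 2 + 3.0 * P[0] * P[1]

Pbar = np.array([2.0, 1.0])
grad = np.array([2.0 * Pbar[0] + 3.0 * Pbar[1], 3.0 * Pbar[0]])  # analytic gradient

# independent zero-mean perturbations, uniform on [-b_i, b_i]: std = b_i/sqrt(3)
b = np.array([0.01, 0.02])
std_delta = b / np.sqrt(3.0)

# eq. (10): sigma_H ~ || std[Delta'] (element-wise *) gradient ||_2
sigma_lin = np.linalg.norm(std_delta * grad)

# Monte Carlo reference for the same perturbation model
D = rng.uniform(-b, b, size=(200_000, 2))
sigma_mc = h((Pbar + D).T).std()
print(sigma_lin, sigma_mc)
```

For small perturbations the two values agree closely, since the neglected terms are of higher order in ∆′.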

3 Optimization

In this section different formulations of the same optimization task are discussed and compared. Two settings are distinguished: a deterministic and a stochastic one. In the first setting nominal optimization and classical robust optimization [16] are considered using (1) as a constraint. The nominal one does not account for deviations on P while the latter optimizes the worst case scenario. In the second setting the optimization of the random PDE is carried out by considering expectation values and standard deviations. Finally similarities between both settings are addressed.

3.1 Deterministic setting

In the deterministic setting a nominal optimization problem (D Nom Opt) and a robust optimization problem (D Rob Opt 1) are formulated.


3.1.1 Nominal optimization (D Nom Opt)

Let J_1 : P → R depict a cost function which one wants to optimize, so that

min_{P ∈ P_∞} J_1(P),  (11a)

subject to the constraints

G_1(P, h(P)) ≤ 0,  (11b)

where G_1 depicts the collection of functions G_1^(j), j = 1, ..., N_G, with N_G the total number of constraints. Due to the presence of the QoI in the constraint, the optimization problem is constrained by the PDE (1), as discussed in for example [16]. If first-order sensitivities s_i (2) are available, this optimization problem can be straightforwardly solved by gradient-based methods, e.g. Sequential Quadratic Programming (SQP) with damped Broyden-Fletcher-Goldfarb-Shanno (BFGS) updates [17] for the Hessian approximation as presented in [18].
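A minimal sketch of such a gradient-based solve, using SciPy's SLSQP implementation of SQP on a toy analogue of (11); the bilinear cost, the linear surrogate h for the PDE-based QoI and the bound Ed = 4 are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def J1(P):
    # toy "magnet area" cost, analogous in structure to (27a)
    return P[0] * P[1]

def h(P):
    # cheap linear stand-in for the QoI h(P); the real one needs a PDE solve
    return P[0] + 2.0 * P[1]

Ed = 4.0  # assumed lower bound on the QoI, mimicking the EMF constraint
res = minimize(
    J1,
    x0=np.array([0.6, 2.0]),
    method="SLSQP",
    jac=lambda P: np.array([P[1], P[0]]),          # first-order sensitivities
    bounds=[(0.5, 3.0), (0.5, 3.0)],
    constraints=[{"type": "ineq", "fun": lambda P: h(P) - Ed}],  # Ed - h(P) <= 0
)
print(res.x, res.fun)
```

At the optimum the QoI constraint and one lower bound are active, mimicking the structure of the machine problem, where the PDE constraint is active at the solution.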

3.1.2 Robust optimization (D Rob Opt 1)

Let us assume that the parameters are uncertain, i.e. perturbed to P + ∆ with ∆ ∈ U_∞, e.g. due to manufacturing imperfections. Hence, a robust (worst-case) counterpart associated to (11) is introduced by considering

min_{P ∈ P_∞} max_{∆ ∈ U_∞} J_1(P + ∆),  (12a)

subject to

max_{∆ ∈ U_∞} G_1(P + ∆, h(P + ∆)) ≤ 0.  (12b)

This nested optimization problem is hard to solve. Hence, an approximation of the max problem is utilized. By applying a first order Taylor expansion, see e.g. [19], a numerically feasible optimization problem is obtained. Also higher order expansions can be exploited [18]; however, in this work only linearizations of the cost function and the constraint are considered:

J_1(P + ∆) ≈ J_1(P) + ∇_P J_1(P) ∆  (13a)

and, since there is a unique solution u = u(P) for every admissible point P, one can introduce the reduced constraints

G_1^(j)(P + ∆) := G_1^(j)(P + ∆, h(P + ∆)),

which leads to

G_1^(j)(P + ∆) ≈ G_1^(j)(P) + ∇_P G_1^(j)(P) ∆.  (13b)


Inserting this approximation into the optimization problem, one obtains the linear approximation of the robust counterpart:

min_{P ∈ P_∞} J_2 := J_1(P) + ‖D ∇_P J_1(P)‖_1,  (14a)

subject to

G_2^(j) := G_1^(j)(P, h(P)) + ‖D ∇_P G_1^(j)(P, h(P))‖_1 ≤ 0,  (14b)

for j = 1, ..., N_G. The dual norm ‖·‖_∗ is defined as

‖·‖_∗ : R^n → R, g ↦ ‖g‖_∗ := max_{∆ ∈ R^n, ‖∆‖ ≤ 1} g^⊤ ∆.

In this particular case, one can use the property that the dual of ‖D^{-1} · ‖_∞ is given by ‖D · ‖_1. However, since the norms are not differentiable, this problem is not smooth. To obtain a differentiable problem, slack variables are introduced and one defines a smooth formulation of the optimization problem by

min_{P ∈ P_∞, ξ^(j) ∈ R^{N_P}} J_1(P) + V^⊤ ξ^(0)  (15a)

together with the constraints

G_1^(j)(P, h(P)) + V^⊤ ξ^(j) ≤ 0  (15b)

and

−ξ^(0) ≤ D ∇_P J_1(P) ≤ ξ^(0),
−ξ^(j) ≤ D ∇_P G_1^(j)(P) ≤ ξ^(j),  (15c)

where j = 1, ..., N_G, ξ = (ξ^(0), ..., ξ^(N_G)) and V = (1, ..., 1)^⊤ ∈ R^{N_P}. This optimization problem can now be efficiently solved numerically. However, if gradient-based methods are used, additionally second-order sensitivities as defined in (2) are required. They can be obtained analogously to the first-order ones described previously. Finally, this approach can be generalized to use a quadratic approximation with respect to P, see [18].

3.1.3 Robust optimization (D Rob Opt 2)

Albeit less common, one may have chosen the 2-norm instead of the max-norm in the definition of the uncertainty set (4). This yields

U_2 := {∆ ∈ R^{N_P} | ‖D^{-1}∆‖_2 ≤ 1}  (16)

and consequently changes the norms in (14). Let us define a reduced parameter space

P_2 := {P ∈ P | P + ∆ ∈ P, ∆ ∈ U_2}.


The resulting optimization problem reads

min_{P ∈ P_2} J_2 := J_1(P) + ‖D ∇_P J_1(P)‖_2,  (17a)

subject to

G_2^(j) := G_1^(j)(P, h(P)) + ‖D ∇_P G_1^(j)(P, h(P))‖_2 ≤ 0,  (17b)

for j = 1, ..., N_G. This formulation may not account for optimizing the worst case in (4) but will still improve its robustness. It does optimize the worst case for (16) and has been used in [19].
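The 2-norm robust counterpart (17) can be mimicked on a toy problem; the cost P1·P2, the linear surrogate h(P) = P1 + 2·P2 and the scaling D = 0.1·I are assumptions for illustration only. Since ∇h is constant here, the robustification simply tightens the constraint by ‖D∇h‖_2.

```python
import numpy as np
from scipy.optimize import minimize

Ed, b = 4.0, 0.1
D = b * np.eye(2)                        # uncertainty scaling, cf. (16)

def J2(P):
    # robust cost (17a): J1 + ||D grad J1||_2, with J1 = P1*P2, grad J1 = (P2, P1)
    return P[0] * P[1] + np.linalg.norm(D @ np.array([P[1], P[0]]))

def G2(P):
    # robust constraint (17b): Ed - h(P) + ||D grad h||_2 <= 0, grad h = (1, 2)
    margin = np.linalg.norm(D @ np.array([1.0, 2.0]))
    return (P[0] + 2.0 * P[1]) - Ed - margin      # >= 0 in SciPy's convention

res = minimize(J2, x0=np.array([0.6, 2.0]), method="SLSQP",
               bounds=[(0.5, 3.0), (0.5, 3.0)],
               constraints=[{"type": "ineq", "fun": G2}])
print(res.x)
```

The robust design satisfies the tightened constraint and, mirroring the paper's findings, yields a larger "magnet area" than the nominal optimum of the same toy problem (which is 0.875).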

3.2 Stochastic setting

In the stochastic setting the cost function and the constraints (5) are stochastic due to the uncertainties on P, which are given by random distributions. Nonetheless, the problem can be formulated in terms of the stochastic moments.

3.2.1 Nominal optimization (UQ Nom Opt)

Recalling that P = P̄ + ∆′, the cost function and its constraints can now be defined in terms of expectation values, as in [20]:

min_{P ∈ P_2} J_3(P) := E[J_1(P)],  (18a)

subject to

G_3^(j)(P, h(P)) := E[G_1^(j)(P, h(P))] ≤ 0.  (18b)

This optimization problem is again deterministic and can be solved by using the same techniques as for D Nom Opt, however with increased computational costs for approximating the probabilistic integrals.

3.2.2 Robust optimization (UQ Rob Opt)

The approach for UQ Rob Opt is similar to the description in [4], where the expectation value and the variance are considered in the cost function. They are weighted by a so-called risk aversion parameter. In this work this parameter has been normalized. In [6] also the constraints are robustified by including the standard deviations. Combining the two ideas results in

min_{P ∈ P_2} J_4(P) := E[J_1(P)] + λ std[J_1(P)],  (19a)

subject to

G_4^(j)(P, h(P)) := E[G_1^(j)(P, h(P))] + λ std[G_1^(j)(P, h(P))] ≤ 0,  (19b)

where λ > 0 can be interpreted as a weighting factor, similar to D in (4).

If one wants to increase the stochastic accuracy, one can easily take higher order moments into account, see e.g. [21].

3.2.3 Combined with linearization (UQ Lin Opt)

Applying the linearization of (7) and (10) to the cost function and its constraints leads to

J_5(P) = E[J_1(P + ∆′)] + λ std[J_1(P + ∆′)]
       ≈ J_1(P) + λ ‖std[∆′] ⊙ ∇_P J_1(P)‖_2  (20a)

and

G_5^(j)(P, h(P)) = E[G_1^(j)(P + ∆′, h(P + ∆′))] + λ std[G_1^(j)(P + ∆′, h(P + ∆′))]
                ≈ G_1^(j)(P, h(P)) + λ ‖std[∆′] ⊙ ∇_P G_1^(j)(P, h(P))‖_2.  (20b)

This reasoning can be extended to a quadratic approximation [22]. Finally, considering the linearization above unveils the following equivalence.

Theorem 1. Let the assumptions

(i) the QoI h is linear in u, such that ∂h/∂u = const,

(ii) the perturbations are independent and identically distributed, such that ∆′_i is symmetric around 0,

hold true. Then, by choosing λ = D_ii / std[∆′_i], the linearized UQ optimization (i.e. (20), UQ Lin Opt) is equivalent to the worst-case optimization using the 2-norm (i.e. (17), D Rob Opt 2).

In contrast to related results in the literature this theorem focuses on the case of PDE-constrained optimization, discusses the assumptions rigorously and translates all parameters between the two approaches.

For UQ Lin Opt first order quadrature is sufficient for determining the standard deviations. Doing this quadrature on UQ Rob Opt leads to a different linear approximation in parameter space. However, both approaches would be affected by the curse of dimensionality (2^{N_P} quadrature points). Using D Rob Opt 2 offers a cheap alternative, especially when the (higher order) derivatives are available. On the other hand, the embedding of D Rob Opt 2 into the stochastic framework allows the development of hybrid optimization algorithms which adaptively switch among methods depending on the accuracy required for the probabilistic integrals.
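The parameter translation of Theorem 1 can be checked numerically: choosing λ = D_ii / std[∆′_i] for every i makes the penalty term of UQ Lin Opt (20a) coincide with that of D Rob Opt 2 (17a). The gradient and the spreads below are arbitrary illustrative numbers.

```python
import numpy as np

# arbitrary illustrative gradient and perturbation spreads (assumptions)
grad = np.array([7.0, -6.0, 1.5])
std_delta = np.array([0.01, 0.02, 0.005])

lam = 2.0
D = lam * np.diag(std_delta)   # chosen so that lambda = D_ii / std[Delta'_i]

uq_penalty = lam * np.linalg.norm(std_delta * grad)   # robustification term in (20a)
wc_penalty = np.linalg.norm(D @ grad)                 # robustification term in (17a)
print(uq_penalty, wc_penalty)  # identical up to round-off
```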


Figure 2: The cross-section of one pole of the machine with the magnet depicted in gray and the region of the affine decomposition indicated by the dashed box. On the right hand side, the triangulation into N_L subdomains is shown by the dashed lines.

4 Application

The above mentioned optimization procedures are applied to a 3-phase 6-pole PMSM with buried magnets. Up to some topological differences this simplified machine features the same working principle as the eBike motor and presents similar challenges in the robust design. Thus, the applicability of the presented methods and the investigations on their effectiveness are transferable to the eBike machine. The goal of the optimization is to reduce the amount of PM material while maintaining a QoI, here, in particular, the electromotive force.

4.1 Machine description

One pole of the 3-phase PMSM is shown in Fig. 2. The winding is a double layered winding with two slots per pole per phase. The laminated steel of the machine is modeled with zero conductivity and a relative permeability of µ_r = 500. The machine is based on the model described in [23]. The magnets have a width P_1 and height P_2. They are buried in the rotor at a depth P_3. The machine is subjected to an optimization procedure in which the amount of PM material is reduced while maintaining a prescribed EMF E_d.

4.2 Finite Element Model

To calculate the EMF of every configuration, one has to solve Maxwell's equations. The magnetostatic formulation is sufficient to model this type of electric machine. This implies that eddy currents and displacement currents are neglected. To calculate the magnetic vector potential (MVP) A(x, y, z, P), one has to solve the PDE

∇ × (ν(P) ∇ × A(P)) = J_src − ∇ × H_pm(P),  (21)

with adequate boundary conditions. ν(P) = ν(x, y, z, P) depicts the reluctivity, J_src(x, y, z) is the source current density and H_pm(x, y, z, P) is the PMs' source magnetic field strength. The magnetization current density induced by the PMs, i.e., ∇ × H_pm(P), is not in L^2, such that the smoothness of the results of (1) is not guaranteed by classical arguments. Nonlinear saturation curves are not discussed, but could easily be considered. In the 2D planar case, the ansatz

A(P) ≈ Σ_{j=1}^{N_D} u_j(P) w_j(x, y) = Σ_{j=1}^{N_D} u_j(P) (N_j(x, y) / l_z) e_z

is made, where N_D is the total number of degrees of freedom, w_j(x, y) are the edge shape functions related to the nodal finite elements N_j(x, y) and l_z is the length of the machine. The Galerkin procedure leads to the system of equations

K_ν(P) u(P) = j_src + j_pm(P)  (22)

with

(K_ν(P))_{i,j} = ∫_{V_D} ν(P) (∇ × w_i) · (∇ × w_j) dV,  (23)

j_src,i = ∫_{V_D} J_src · w_i dV,  (24)

j_pm,i(P) = ∫_{V_D} H_pm(P) · (∇ × w_i) dV,  (25)

where V_D = S_D × [0, l_z] is the computational domain and S_D the machine's cross-section [24].

Solving (22) gives the MVP, from which the EMF E_0 can be calculated by using the loading method [25]. It corresponds essentially to an FFT. This means that the EMF can be computed during post-processing. Using the notation above, our QoI reads

E_0(u(P)) = h(P).

4.3 Affine Decomposition

Since during the optimization different configurations of the magnet position in the machine will be considered, one wants to dispose of a computationally fast model and avoid remeshing in order to reduce numerical noise. Therefore an affine decomposition (see e.g. [27]) is applied to a region around the permanent magnet (Fig. 2). The region is subdivided into N_L triangular subdomains. The finite element system matrix from (22) can be rewritten as

K_ν(P) = K_ν^0 + Σ_{ℓ=1}^{N_L} ϑ_ℓ(P) K_ν^ℓ,  (26)

where K_ν^0 represents the system matrix for the domain outside the dashed box and the K_ν^ℓ represent the matrices corresponding to the subdomains. The weights ϑ_ℓ inherit the dependency on P and can be computed analytically. The notation in (26) is compact for

ϑ_ℓ(P) K_ν^ℓ := ϑ_ℓ1(P) K_xx^ℓ + ϑ_ℓ2(P) K_yy^ℓ + ϑ_ℓ3(P) K_xy^ℓ + ϑ_ℓ4(P) K_yx^ℓ,

where K_ν^ℓ has been decomposed into four submatrices K_xx^ℓ, K_yy^ℓ, K_xy^ℓ and K_yx^ℓ. The subindices indicate the partial derivatives of the nodal shape functions. An additional advantage of this method is that these matrices can be precomputed and matrix assembly can be avoided. An analogous decomposition is made for the right-hand side of (22).
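The online effect of the affine decomposition (26) can be sketched as a weighted sum of precomputed matrices; the matrix sizes, the random subdomain matrices and the weight functions ϑ_ℓ below are placeholders, since the true ones come from the FE assembly and the subdomain geometry.

```python
import numpy as np

rng = np.random.default_rng(2)
N, NL = 40, 4

# hypothetical precomputed matrices (in practice from FE assembly, cf. (26))
K0 = np.eye(N)                                    # part outside the dashed box
K_sub = [0.01 * rng.standard_normal((N, N)) for _ in range(NL)]

def theta(P):
    # made-up smooth geometry weights; the real ones follow analytically
    # from the subdomain transformations
    return np.array([1.0 + 0.1 * P[0], P[1], P[0] * P[1], 1.0 / (1.0 + P[1])])

def assemble(P):
    # online stage: a cheap weighted sum of precomputed matrices, no re-meshing
    return K0 + sum(t * K for t, K in zip(theta(P), K_sub))

K = assemble(np.array([0.5, 2.0]))
```

Because only the scalar weights depend on P, evaluating many magnet configurations reduces to forming this sum, which is what makes the later sampling and optimization loops affordable.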

4.4 Stochastic Quadrature

In the stochastic setting, the probabilistic dimension of the PDE (5) is sampled by using collocation [26]. The integrals (6) are approximated by

M_k[h] ≈ Σ_{n=1}^{N} w_n h^k(P_n),

where N is the number of samples and the weights w_n and evaluation points P_n are method specific. In this paper stochastic quadrature (SQ) and Monte Carlo (MC) are used. In the former case the weights and sample points P_n are chosen according to the quadrature rules. For the latter, all samples have the same weight 1/N but the sample points themselves are chosen randomly. The main advantage of SQ over MC is the fast convergence for low dimensional problems [26].
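The two sampling rules can be compared on a toy expectation with uniform parameters: a 5 × 5 Gauss-Legendre tensor grid (SQ) versus equally weighted random samples (MC). The integrand and the parameter box are assumptions for illustration.

```python
import numpy as np

def h(P1, P2):
    # smooth stand-in for the QoI evaluated at a sample point
    return np.exp(0.3 * P1) * np.cos(P2)

a = np.array([0.0, 0.0])            # assumed uniform parameter box [a, b]
b = np.array([1.0, 0.5])

# SQ: tensor grid of Gauss-Legendre nodes mapped from [-1, 1] to [a, b]
x, w = np.polynomial.legendre.leggauss(5)
n1 = 0.5 * (b[0] - a[0]) * x + 0.5 * (a[0] + b[0])
n2 = 0.5 * (b[1] - a[1]) * x + 0.5 * (a[1] + b[1])
w1 = 0.5 * (b[0] - a[0]) * w
w2 = 0.5 * (b[1] - a[1]) * w
W = np.outer(w1, w2) / (b - a).prod()            # normalize: integral -> expectation
mean_sq = (W * h(n1[:, None], n2[None, :])).sum()

# MC: equal weights 1/N, randomly chosen sample points
P = np.random.default_rng(3).uniform(a, b, (200_000, 2))
mean_mc = h(P[:, 0], P[:, 1]).mean()
print(mean_sq, mean_mc)
```

For this smooth two-dimensional integrand the 25-point tensor grid is already far more accurate than 200 000 MC samples, illustrating the fast convergence of SQ in low dimensions.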

The computational drawback of all quadrature-based approaches is the need for many evaluations of the PDE (1). This is computationally expensive and therefore model order reduction (MOR) is desirable.

4.5 Model order reduction

To speed up the computations the reduced basis method is used [27]. The idea is to project the high dimensional problem to a lower dimensional space. One looks for a solution in the reduced space span{ψ_1, ..., ψ_d} of rank d. This space is constructed during an offline phase. The basis ψ_i of the space is computed by a greedy algorithm. With the help of an error estimator, it is decided for which P in a training set a solution is computed and added as ψ_i in order to obtain a maximal error reduction. For every ψ_i the high dimensional problem is solved and the solution is then orthonormalized with respect to the current subspace by using the Gram-Schmidt process. The algorithm is stopped when a predefined accuracy is reached.

Finally one obtains a reduced basis of rank d that is sufficient to capture the dynamics in the parameter space. With the Galerkin ansatz one retrieves

u_d(P) := Σ_{i=1}^{d} ũ_i(P) ψ_i = Ψ ũ(P).

Multiplying equation (22) from the left with Ψ^⊤, utilizing the affine decomposition introduced in (26) and inserting u_d(P), one retrieves the reduced order model

Ψ^⊤ (K^0 + Σ_{ℓ=1}^{N_L} ϑ_ℓ(P) K^ℓ) Ψ ũ(P) = Ψ^⊤ (j^0 + Σ_{ℓ=1}^{N_L} ϑ_ℓ(P) j^ℓ).

Since the problem is linear one obtains

(K̃^0 + Σ_{ℓ=1}^{N_L} ϑ_ℓ(P) K̃^ℓ) ũ(P) = j̃^0 + Σ_{ℓ=1}^{N_L} ϑ_ℓ(P) j̃^ℓ,

with K̃^0 := Ψ^⊤ K^0 Ψ, K̃^ℓ := Ψ^⊤ K^ℓ Ψ, j̃^0 := Ψ^⊤ j^0 and j̃^ℓ := Ψ^⊤ j^ℓ. Note that all quantities with tildes are of dimension d ≪ N_D. They can also be precomputed, except for ũ(P). The reduced system of equations can now be solved very efficiently during the online phase. The system can also be set up for different values of P without the need for high dimensional operations.
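The offline/online split of the projection above can be sketched as follows; the full-order matrix is a random SPD stand-in and the orthonormal basis Ψ replaces the greedy/Gram-Schmidt construction, so this only illustrates the Galerkin projection mechanics, not the actual reduced basis.

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 200, 5

# hypothetical full-order SPD system K u = j (stand-in for (22))
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)
j = rng.standard_normal(N)

# reduced basis: random orthonormal columns as a stand-in for snapshots
# selected by the greedy algorithm and orthonormalized via Gram-Schmidt
Psi, _ = np.linalg.qr(rng.standard_normal((N, d)))

# offline: project once; online: solve only the small d x d system
K_red = Psi.T @ K @ Psi
j_red = Psi.T @ j
u_d = Psi @ np.linalg.solve(K_red, j_red)   # lifted reduced solution

# Galerkin orthogonality: the residual is orthogonal to the reduced space
print(np.linalg.norm(Psi.T @ (j - K @ u_d)))
```

The online cost is dominated by the d × d solve, which is why many parameter values P can be evaluated cheaply once the projections are precomputed.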

The error of the solution can be bounded by a posteriori error estimates (see for example [28]). Define a norm on a reference geometry by ‖v‖_{P_ref}^2 = v^⊤ K(P_ref) v. Then the error estimator is given by

‖u(P) − u_d(P)‖_{P_ref} ≤ ∆_u(P) := ‖r(P)‖_{P_ref,∗} / α(P).

The residual r is defined by r = K(P) u_d − (j_src + j_pm). The dual norm is given by ‖v‖_{P_ref,∗}^2 = v^⊤ K(P_ref)^{−1} v. Due to the affine decomposition and our choice of the norm, the coercivity constant α(P) of K(P) can be computed by the "min ϑ" approach

α(P) = min_{ℓ ∈ {1,...,N_L}} ϑ_ℓ(P) / ϑ_ℓ(P_ref).

Furthermore, an error estimator for the sensitivity can be derived with similar ingredients. Detailed information can be found in [29]. The error estimator can be decomposed in the offline/online framework. Hence, in the online phase the evaluation of the error estimator does not rely on high dimensional operations.


Problems with a high sensitivity with respect to the parameter pose major challenges when the reduced basis method is applied. Especially problems involving geometry transformations can be very sensitive to the parameters and lead to large reduced order models, since different phenomena have to be captured. To further reduce the computational cost during the online phase and to keep the dimension of the reduced order models small, a dictionary of models is generated. This is done by dividing the parameter space into N_Par partitions. For each partition a separate reduced order model is generated. In the online phase, for a given parameter P, the associated partition is determined and the corresponding reduced model utilized. This approach allows us to obtain low dimensional models that can be evaluated rapidly. A similar approach has been investigated in [30], where a strategy using adaptive partitioning was developed. Optionally, in the presented approach the offline phase can be accelerated significantly by using parallel computing, since the partitioning is chosen fixed and the reduced order models in the different partitions can be computed independently.

4.6 Optimization Procedure

Since the modeling of the machine is carried out in 2D, the optimization considers the reduction of the surface S = P_1 P_2 instead of the volume of the magnets. The depth of the magnet P_3 is chosen to be a free parameter which is also changed during the optimization process. The cost function (11a) becomes

min_{P ∈ R^3} J_1(P) := P_1 P_2,  (27a)

subject to

G_1(P, h(P)) := ( P_1^l − P_1,
                  P_2^l − P_2,
                  P_3^l − P_3,
                  P_3 − P_3^u,
                  P_2 + P_3 − 15,
                  3P_1 − 2P_3 − 50,
                  E_d − E_0(u(P)) )^⊤ ≤ 0.  (27b)

The first four constraints are related to the lower and upper bounds of P: (P_1^l, P_2^l, P_3^l) = (1, 1, 5) and (P_1^u, P_2^u, P_3^u) = (∞, ∞, 14). The fifth constraint ensures the validity of the affine decomposition (no intersections): only a subdomain of the geometry is considered, hence it is required to stay in that region. The sixth constraint is a design constraint, enforcing that each PM has to keep a certain distance to the rotor's surface, meaning that the depth of the magnet is linked to its width. The last constraint is the requirement to fulfill the prescribed EMF, and since it is calculated from (22), the optimization problem actually has a PDE constraint. For D Rob Opt 1 and D Rob Opt 2 the uncertainty set is chosen to be D = diag((∆_u − ∆_l)/2), where in our numerical


experiments −∆_l = ∆_u = ∆_b and the value of ∆_b is increased from 0 to 0.2 mm. In the stochastic setting (UQ Nom Opt and UQ Rob Opt), it is assumed that the components of P are independently uniformly distributed:

P ∼ U(P̄ + ∆_l, P̄ + ∆_u),  (28)

where P̄ = E[P]. Since P_1 and P_2 are independent random variables, (18a) can be written as

min_{P ∈ R^3} J_3 := E[P_1] E[P_2],  (29a)

subject to

G_3 := ( P_1^l − E[P_1],
         P_2^l − E[P_2],
         P_3^l − E[P_3],
         E[P_3] − P_3^u,
         E[P_2] + E[P_3] − 15,
         3E[P_1] − 2E[P_3] − 50,
         E_d − E[E_0(u(P))] )^⊤ ≤ 0,  (29b)

where the notation has been shortened so that J_i = J_i(P) and G_i = G_i(P, h). For the robust counterpart in the stochastic setting, the same convention for notation is applied:

min_{P ∈ R^3} J_4 := E[P_1] E[P_2] + λ std[P_1 P_2],  (30a)

subject to

G_4 := ( P_1^l − E[P_1] + λ std[P_1],
         P_2^l − E[P_2] + λ std[P_2],
         P_3^l − E[P_3] + λ std[P_3],
         E[P_3] + λ std[P_3] − P_3^u,
         E[P_2] + E[P_3] + λ std[P_2 + P_3] − 15,
         3E[P_1] − 2E[P_3] + λ std[3P_1 − 2P_3] − 50,
         E_d − E[E_0] + λ std[E_0(u(P))] )^⊤ ≤ 0.  (30b)

The sampling in both cases is carried out using SQ and MC. For SQ a tensor grid of 5 × 5 × 5 points is constructed and a Gauß-Legendre quadrature is applied, since only uniform distributions are considered [26]. For MC, N_MC = 5000 random samples are generated such that the estimated Monte Carlo error is below 1 % for all optimizations.

5 Results and Discussion

A visualization of the results is given in Fig. 3. Decreasing the maximal deviation ∆_b to zero leads, as expected, to the result of D Nom Opt. The equivalence


[Plot: PM size p_1 p_2 (mm²), from 62 to 78, versus ∆_b (mm), from 0 to 0.2, for UQ Rob Opt, UQ Lin Opt, D Rob Opt 1, D Rob Opt 2 and D Nom Opt.]

Figure 3: Optimization results for different values of ∆_b. The dark green circle is the size of the PMs obtained by using deterministic nominal optimization. The light green diamonds and the black squares are the results using deterministic robust optimization considering the 1-norm and the 2-norm, respectively. The black line depicts the UQ optimization with the linearization. The light green line is the robust UQ optimization.

between D Rob Opt 2 and UQ Lin Opt can also be observed numerically, which confirms Theorem 1. The results using robust optimization in the UQ and the deterministic setting do differ. This is caused by the fact that D Rob Opt 1 is a more pessimistic scenario since it aims at mitigating the worst case. By incorporating the second moment, one relies on more stochastic information during optimization, which eventually translates into more optimistic results, because rare events are disregarded. This is also underlined by the results in Table 1.

Around each optimum a Monte Carlo sampling is performed with the same distribution as in (28). For every sample the EMF is calculated and compared with Ed. Using UQ Rob Opt one retrieves 4% of machines that do not fulfill Ed, whereas for D Rob Opt 1 all machines fulfill the prerequisite. The linearization approach differs only slightly from the full approach. One has to note that when the solution is linear in P, e.g. due to the linearization as in (20), then 2 × 2 × 2 collocation points are sufficient and no further approximation is introduced by the stochastic collocation. For the nonlinear solution such a coarse collocation grid also corresponds to a linearized model, but it may differ from the one obtained by the Taylor expansion in (3).
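This validation step can be sketched as follows. Here `emf()` is a hypothetical surrogate (the actual EMF comes from the field simulation), calibrated so that the nominal D Nom Opt design from Table 2 sits exactly on the constraint boundary; this illustrates why a design optimized without robustness fails the EMF requirement for roughly half of the perturbed samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def emf(P):
    # Hypothetical surrogate: EMF grows with the PM cross-section p1 * p2,
    # calibrated so the nominal design attains exactly E_d. Stand-in only.
    return 30.37 + 0.015 * (P[:, 0] * P[:, 1] - 62.79)

E_d = 30.37                               # desired EMF (V), cf. Table 2
p_opt = np.array([21.07, 2.98, 6.61])     # D Nom Opt design, cf. Table 2
delta_b = 0.2                             # maximal deviation (mm)

# Sample around the optimum with the same uniform distribution as in (28)
samples = rng.uniform(p_opt - delta_b, p_opt + delta_b, size=(5000, 3))

# Fraction of machines violating the requirement E0 < E_d
failure_rate = np.mean(emf(samples) < E_d)
```

Running the same check around the robust optima instead of the nominal one yields the failure percentages reported in Table 1.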

In Table 2 the results for ∆b = 0.2 mm are presented. A visualization of the machines is depicted in Figure 4. All calculations were run on an Intel Core i7-5820K processor (3.30 GHz) with 16 GB RAM. One sees that the size of the PM has decreased substantially while maintaining the desired EMF. For the settings with nominal optimization the three methods result in comparable PM sizes. The volume has been reduced by more than 50%, which indicates a very good improvement (and that the initial guess was poor). The stochastic setting requires a significantly increased amount of computational time, which can be reduced by using MOR or SQ. The difference in optimized volume for the various combinations of MOR and SQ is less than 0.1%.

Table 1: Percentage of machines with E0 < Ed.

    Method        Percentage (%)
    D Nom Opt     51
    D Rob Opt 1    0
    UQ Lin Opt     4.5
    UQ Rob Opt     4.0

Table 2: Numerical results obtained for ∆b = 0.2 mm.

    Method            p (mm)                V (mm²)   E0 (V)   time (s)   time w. MOR (s+s)
    Initial           (19.00, 7.00, 7.00)   133       30.370   -          -
    D Nom Opt         (21.07, 2.98, 6.61)   62.80     30.370   2.0        -
    UQ Nom Opt (SQ)   (21.07, 2.98, 6.61)   62.80     30.370   224        241 + 12
    UQ Nom Opt (MC)   (21.06, 2.98, 6.61)   62.83     30.371   9100       241 + 461
    D Rob Opt 1       (20.88, 3.73, 6.82)   77.87     31.086   5.9        -
    UQ Rob Opt (SQ)   (20.87, 3.53, 6.80)   73.66     30.815   239        241 + 13
    UQ Rob Opt (MC)   (20.86, 3.53, 6.80)   73.68     30.814   9700       241 + 503

6 Conclusion

The equivalence between robust worst-case optimization and optimization with UQ has been shown when applying a linearization. Both approaches have been applied to reduce the size of the permanent magnets in a permanent magnet synchronous machine. It is found that robust optimization in the UQ setting gives less pessimistic results, since the worst case might be unlikely to happen. However, the computational time is significantly increased, whereas the implementation effort is reduced since no (further) derivatives are needed.

Acknowledgment

This work is supported by the German BMBF in the context of the SIMUROM project (grant nr. 05M2013), by the 'Excellence Initiative' of the German Federal and State Governments and by the Graduate School of CE at TU Darmstadt.


(a) The initial geometry (b) D Nom Opt

(c) D Rob Opt 1 (d) UQ Rob Opt

Figure 4: The initial geometry and the PMSM designs optimized by three different algorithms.

References

[1] W.Y. Fowlkes and C.M. Creveling, Engineering Methods for Robust Product Design, Addison-Wesley, 1995.

[2] H.-G. Beyer and B. Sendhoff, Robust optimization – A comprehensive survey, Comput. Meth. Appl. Mech. Eng. 196 (2007) 33–34.

[3] Y. Duan and D.M. Ionel, A review of recent developments in electrical machine design optimization methods with a permanent-magnet synchronous motor benchmark study, IEEE Trans. Ind. Appl. 49 (2014) 1268–1275.

[4] J. Darlington, C. Pantelides, B. Rustem and B.A. Tanyi, Decreasing the sensitivity of open-loop optimal solutions in decision making under uncertainty, Eur. J. Oper. Res. 121 (2000) 343–362.

[5] D. Bertsimas, D.B. Brown and C. Caramanis, Theory and applications of robust optimization, SIAM Rev. 53 (2011) 464–501.

[6] B. Huang and X. Du, A robust design method using variable transformation and Gauss–Hermite integration, Int. J. Numer. Meth. Eng. 66 (2006) 1841–1858.


[7] X. Meng, S. Wang, J. Qiu, Q. Zhang, J.G. Zhu, Y. Guo and D. Liu, Robust multilevel optimization of PMSM using design for six sigma, IEEE Trans. Magn. 47 (2011) 3248–3251.

[8] P. Gangl, S. Amstutz and U. Langer, Topology optimization of electric motor using topological derivative for nonlinear magnetostatics, IEEE Trans. Magn. 52 (2016) 1–4.

[9] S.I. Kim, J.Y. Lee, Y.K. Kim, J.P. Hong, Y. Hur and Y.H. Jung, Optimization for reduction of torque ripple in interior permanent magnet motor by using the Taguchi method, IEEE Trans. Magn. 41 (2005) 1796–1799.

[10] M. Lukaniszyn, M. Jagiela and R. Wrobel, Optimization of permanent magnet shape for minimum cogging torque using a genetic algorithm, IEEE Trans. Magn. 40 (2004) 1228–1231.

[11] K. Binnemans, P.T. Jones, B. Blanpain, T. Van Gerven, Y. Yang, A. Walton and M. Buchert, Recycling of rare earths: a critical review, J. Clean. Prod. 51 (2013) 1–22.

[12] A. Cohen, R. DeVore and C. Schwab, Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs, Anal. Appl. 9 (2011) 11–47.

[13] I. Babuška, R. Tempone and G.E. Zouraris, Solving elliptic boundary value problems with uncertain coefficients by the finite element method: the stochastic formulation, Comput. Meth. Appl. Mech. Eng. 194 (2005) 1251–1294.

[14] U. Römer, S. Schöps and T. Weiland, Stochastic modeling and regularity of the nonlinear elliptic curl-curl equation, SIAM/ASA J. Uncertain. Quantif. 4 (2016) 952–979.

[15] I.M. Sobol, On sensitivity estimation for nonlinear mathematical models, Matematicheskoe Modelirovanie 2 (1990) 112–118.

[16] M. Hinze, R. Pinnau, M. Ulbrich and S. Ulbrich, Optimization with PDE Constraints. Mathematical Modelling: Theory and Applications, Springer, 2008.

[17] J. Nocedal and S.J. Wright, Numerical Optimization, second edition, Springer Series in Operations Research and Financial Engineering, 2006.

[18] O. Lass and S. Ulbrich, Model order reduction techniques with a posteriori error control for nonlinear robust optimization governed by partial differential equations, accepted in SIAM J. Sci. Comput., 2016.

[19] M. Diehl, H.G. Bock and E. Kostina, An approximation technique for robust nonlinear optimization, Math. Program. 107 (2006) 213–230.


[20] S. Sundaresan, K. Ishii and D. Houser, A robust optimization procedure with variations on design variables and constraints, Eng. Opt. 24 (1995) 101–117.

[21] H. Tiesler, R.M. Kirby, D. Xiu and T. Preusser, Stochastic collocation for optimal control problems with stochastic PDE constraints, SIAM J. Control Optim. 50 (2012) 2659–2682.

[22] A. Alexanderian, N. Petra, G. Stadler and O. Ghattas, Mean-variance risk-averse optimal control of systems governed by PDEs with random parameter fields using quadratic approximations, arXiv:1602.07592v3, 2017.

[23] U. Pahner, R. Mertens, H. De Gersem, R. Belmans and K. Hameyer, A parametric finite element environment tuned for numerical optimization, IEEE Trans. Magn. 34 (1998) 2936–2939.

[24] P. Monk, Finite Element Methods for Maxwell's Equations, Oxford University Press, 2003.

[25] M.A. Rahman and P. Zhou, Determination of saturated parameters of PM motors using loading magnetic fields, IEEE Trans. Magn. 27 (1991) 3947–3950.

[26] D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, 2010.

[27] G. Rozza, D.B.P. Huynh and A.T. Patera, Reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations, Arch. Comput. Methods Eng. 15 (2008) 229–275.

[28] A.T. Patera and G. Rozza, "A Posteriori Error Estimator," in Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, Massachusetts Institute of Technology, ch. 4 (2007) 127–153.

[29] M. Dihlmann and B. Haasdonk, Certified nonlinear parameter optimization with reduced basis surrogate models, Proc. Appl. Math. Mech. 13 (2013) 3–6.

[30] B. Haasdonk, M. Dihlmann and M. Ohlberger, A training set and multiple basis generation approach for parametrized model reduction based on adaptive grids in parameter space, Math. Comput. Model. Dyn. Syst. 17 (2011) 423–442.
