

Optimization of a Fuzzy Logic Controller for Handover-based Load Balancing

P. Muñoz, R. Barco, I. de la Bandera, M. Toril and S. Luna-Ramírez
University of Málaga, Communications Engineering Dept., Málaga, Spain

Campus de Teatinos. 29071. Málaga
Email: {pabloml, rbm, ibanderac, mtoril, sluna}@ic.uma.es

Abstract—In Self-Organizing Networks (SON), load balancing has been recognized as an effective means to increase network performance. In cellular networks, cell load balancing can be achieved by tuning handover parameters, for which a Fuzzy Logic Controller (FLC) usually provides good performance and usability. Operator experience can be used to define the behavior of the FLCs. However, such knowledge is not always available and hence optimization techniques must be applied in the controller design. In this work, a fuzzy Q-Learning algorithm is proposed to find the optimal set of fuzzy rules in an FLC for traffic balancing in the GSM-EDGE Radio Access Network (GERAN). Load balancing is performed by modifying handover margins. Simulation results show that the optimized FLC provides a significant reduction in call blocking.

    I. INTRODUCTION

Cellular networks have experienced a large increase in size and complexity in recent years. This has stimulated strong research activity in the field of Self-Organizing Networks (SONs) [1], [2].

A major issue tackled by SONs is the irregular distribution of cellular traffic both in time and space. To cope with such an increase in a cost-effective manner, operators of mature networks, such as GERAN, use traffic management techniques instead of adding new resources. One such technique is automatic load balancing, which aims to relieve traffic congestion in the network by sharing traffic between network elements. Thus, load balancing solves localized congestion problems caused by occasional events, such as concerts, football matches or crowded shopping centers.

In a cellular network, load balancing is performed by sharing traffic between adjacent cells. This can be achieved by changing the base station to which users are attached. When a user is in a call, it is possible to change the base station to which the user is connected by means of the handover (HO) process. By adjusting HO parameter settings, the size of a cell can be modified to send users to neighboring cells. Thus, the area of the congested cell can be reduced so that adjacent cells take users from the congested cell edge. As a result of a more even traffic distribution, the call blocking probability in the congested cell decreases.

Load balancing in cellular networks has been widely studied in the literature. In [3], an analytical teletraffic model for the optimal traffic sharing problem in GERAN is presented. In [4], the HO thresholds are changed depending on the load of the neighboring cells for real-time traffic. An algorithm to decide the balancing of the load between UMTS and GSM is presented in [5]. In this algorithm, the decision for an inter-system HO depends on the load in the target and the source cell, the QoS, the time elapsed since the last HO and the HO overhead.

In the area of SON, [1] proposes a load balancing algorithm for a Long-Term Evolution (LTE) mobile communication system. This algorithm also requires the cell load as input and controls the HO parameters. In this field, self-tuning has been applied to traffic management in many references. A UMTS/WLAN load balancing algorithm based on auto-tuning of load thresholds is proposed in [6]. The auto-tuning process is led by a Fuzzy Logic Controller (FLC) that is optimized using the fuzzy Q-Learning algorithm. In [7], a heuristic method is proposed to tune several parameters of the HO algorithm in a GSM/EDGE network. In [2], a load balancing algorithm based on automatic adjustment of HO thresholds and cell load measurements for LTE systems is described.

All the previous references prove that FLCs can be successfully applied to automatic network parameter optimization. Their main strength is their ability to translate human knowledge into a set of basic rules. When an FLC is designed, a set of 'IF-THEN' rules must be defined, which represent the mapping of the input to the output in linguistic terms. Such rules might be extracted from operator experience. However, such knowledge is not always available. In this case, reinforcement learning can be used to find the set of rules providing the best performance.

This paper investigates the self-adaptation of a fuzzy logic controller for load balancing in a cellular network. In particular, an FLC optimized by the fuzzy Q-Learning algorithm is proposed for traffic balancing of voice services in GERAN. It should be pointed out that GERAN is selected only as an example of radio access technology and the results should be applicable to other radio access technologies. Unlike other FLCs optimized with reinforcement learning [6], the proposed controller modifies a specific parameter of the standard HO algorithm to reduce the network blocking probability.

The rest of the paper is organized as follows. Section II describes the proposed self-tuning scheme. Section III presents the fuzzy Q-Learning algorithm. Section IV discusses the simulation results and Section V presents the main conclusions of the study.

II. SELF-TUNING SCHEME

A. Handover parameters

In GERAN, the Power BudGeT (PBGT) HO establishes that a HO is initiated whenever the average pilot signal level from a neighbor cell exceeds the one received from the serving cell by a certain margin, provided that a minimum signal level is ensured, namely:

PBGT(i,j) > HOM(i,j),    RXLEV(j) > RXLEV_MIN,    (1)

PBGT(i,j) = RXLEV(j) − RXLEV(i),    (2)

where RXLEV(j) and RXLEV(i) are the average received pilot signal levels from neighbor cell j and serving cell i, PBGT(i,j) is the power budget of neighbor cell j with respect to serving cell i, RXLEV_MIN is the HO signal-level constraint and HOM(i,j) is the HO margin in the adjacency.
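As an illustration, the following minimal sketch evaluates the PBGT trigger of (1)-(2) for one adjacency; the function and variable names are illustrative, not the exact GSM parameter identifiers, and the minimum-level threshold is an assumed example value rather than a value taken from the paper.

def pbgt_ho_triggered(rxlev_serving_dbm, rxlev_neighbor_dbm, ho_margin_db,
                      rxlev_min_dbm=-100.0):
    # Power budget of the neighbor with respect to the serving cell, eq. (2)
    pbgt = rxlev_neighbor_dbm - rxlev_serving_dbm
    # Minimum-level constraint and margin condition, eq. (1)
    return rxlev_neighbor_dbm > rxlev_min_dbm and pbgt > ho_margin_db

# Neighbor received 5 dB stronger than the serving cell, 3 dB margin -> HO
print(pbgt_ho_triggered(-80.0, -75.0, ho_margin_db=3.0))   # True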

B. Fuzzy logic controller

The proposed FLC, shown in Fig. 1, is inspired by the diffusive load-balancing algorithm presented in [8]. The objective is to minimize the call blocking ratio (CBR) in the network. For this purpose, FLCs (one per adjacency) compute the increment in HO margins for each pair of adjacent cells from the difference of CBR between these cells and their current HO margin values. The CBR is defined as:

CBR = N_blocked / N_offered,    (3)

where N_blocked is the number of blocked calls and N_offered is the total number of offered calls. Note that the increments in the HO margins in both directions of the adjacency must have the same magnitude but opposite sign, to provide a hysteresis region that ensures HO stability. Likewise, HO margins are restricted to a limited variation interval in order to avoid network instabilities due to excessive parameter changes. For simplicity, all FLCs are based on the Takagi-Sugeno approach [8]. The optimization block of the diagram is explained in the next section.
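The constraints just described can be made concrete with a short sketch; the [-24, 24] dB margin range is taken from Table I, while the variable names and the example increment are illustrative assumptions.

MARGIN_MIN_DB, MARGIN_MAX_DB = -24.0, 24.0   # PBGT HO margin range, Table I

def cbr(n_blocked, n_offered):
    # Call blocking ratio of a cell, eq. (3)
    return n_blocked / n_offered if n_offered else 0.0

def apply_margin_increment(margin_ij_db, margin_ji_db, delta_db):
    # FLC output applied to both directions of the adjacency: same
    # magnitude, opposite sign, clipped to the allowed interval.
    clip = lambda m: min(max(m, MARGIN_MIN_DB), MARGIN_MAX_DB)
    return clip(margin_ij_db + delta_db), clip(margin_ji_db - delta_db)

print(cbr(5, 200))                              # 0.025
print(apply_margin_increment(6.0, 6.0, -4.0))   # (2.0, 10.0)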

As in [9], one of the FLC inputs is the difference between the CBR of the two adjacent cells. Balancing the CBR between adjacent cells is achieved by iteratively changing the HO margins on each adjacency. Thus, the overall CBR in the network is minimized, as shown in [3]. On the other hand, experience has shown that sensitivity to changes is greatly increased when margins become negative as a result of large parameter changes [7]. Thus, another input of the FLC is the current HO margin, in order to reduce the magnitude of changes once margins become negative.

FLC inputs are classified according to linguistic terms by the fuzzifier. The fuzzifier translates input numbers into a value indicating the degree of membership to a linguistic term, as given by a specific input membership function. For simplicity, the selected input membership functions are triangular or trapezoidal. In the inference engine, a set of IF-THEN rules is defined to establish a relationship between the input and the output in linguistic terms. Finally, the defuzzifier obtains a crisp output value by aggregating all rules. The output membership functions are constants and the centre-of-gravity method is applied to obtain the final value of the output.
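A minimal sketch of this fuzzification/inference/defuzzification chain is given below for a single input; the triangular breakpoints and the constant rule outputs are illustrative assumptions, not the membership functions actually used in the paper. With constant (singleton) outputs, the centre-of-gravity defuzzification reduces to the activation-weighted average shown here.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with vertices a <= b <= c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flc_output(x):
    # Fuzzification: degree of membership to each linguistic term
    degrees = np.array([tri(x, -2.0, -1.0, 0.0),    # "negative"
                        tri(x, -1.0,  0.0, 1.0),    # "zero"
                        tri(x,  0.0,  1.0, 2.0)])   # "positive"
    # Takagi-Sugeno consequents: one constant output per rule (dB)
    outputs = np.array([-4.0, 0.0, 4.0])
    if degrees.sum() == 0.0:
        return 0.0
    # Defuzzification: centre of gravity of the constant outputs
    return float(np.dot(degrees, outputs) / degrees.sum())

print(flc_output(0.5))   # between "zero" and "positive" -> 2.0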

III. FUZZY Q-LEARNING ALGORITHM

Reinforcement learning comprises a set of learning techniques based on leading an agent to take actions in an environment so as to maximize a cumulative reward. Q-Learning is a reinforcement learning algorithm in which a Q-function is built incrementally by estimating the discounted future rewards for taking actions from given states. This function is denoted by Q(s, a), where s represents the states and a represents the actions. A fuzzy version of Q-Learning is used in this work due to several advantages: it allows continuous state and action spaces to be treated, the state-action values to be stored compactly, and a priori knowledge to be introduced.

In Fig. 1, the main elements of the reinforcement learning scheme are represented. The set of environment states, s, is given by the inputs of the FLC. The set of actions, a, represents the FLC output, which is the increment of the HO margins. The value r is the scalar immediate reward of a transition.

A discretization of the Q-function is applied in order to store a finite set of state-action values. If the learning system can choose one action among J for rule i, a[i, j] is defined as the j-th possible action in rule i and q[i, j] is defined as its associated q-value. Hence, representing Q(s, a) is equivalent to determining the q-values for each consequent of the rules and then interpolating for any input vector.

To initialize the q-values in the algorithm, the following assignment is used:

q[i, j] = 0,    1 ≤ i ≤ N,  1 ≤ j ≤ J,    (4)

where q[i, j] is the q-value, N is the number of rules and J is the number of actions per rule.
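A direct reading of (4) as code, assuming the q-values are kept in a rule-by-action array; the dimensions below (9 rules, 3 candidate actions) follow Table II, and biasing selected entries instead of using zeros would encode the a priori knowledge discussed later.

import numpy as np

N_RULES, N_ACTIONS = 9, 3            # 9 rules with 3 candidate actions each
q = np.zeros((N_RULES, N_ACTIONS))   # eq. (4): no a priori knowledge
print(q.shape)                       # (9, 3)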

    Fig. 1. Block diagram of the fuzzy controller

Once the initial state has been chosen, the FLC action must be selected. For all activated rules, i.e., those with a nonzero activation function, an action is selected using the following exploration/exploitation policy:

a_i = arg max_j q[i, j]    with probability 1 − ε,    (5)

a_i = random{a[i, j], j = 1, 2, ..., J}    with probability ε,    (6)

where a_i is the consequent selected for rule i and ε is a parameter which establishes the trade-off between exploration and exploitation in the algorithm (e.g., ε = 0 means that there is no exploration, that is, the best action is always selected).
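The policy in (5)-(6) can be sketched as follows, using the rule-by-action q array introduced above; the random generator seed and the array shapes are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def select_consequents(q, eps):
    # For every rule i, pick the index of the chosen consequent a_i:
    # greedy with probability 1 - eps (eq. 5), random otherwise (eq. 6).
    n_rules, n_actions = q.shape
    chosen = np.argmax(q, axis=1)
    explore = rng.random(n_rules) < eps
    chosen[explore] = rng.integers(0, n_actions, explore.sum())
    return chosen

q = np.zeros((9, 3))
print(select_consequents(q, eps=0.8))   # mostly random while q is still flat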

Then, the action to execute is determined by:

a(x(t)) = Σ_{i=1..N} α_i(x(t)) · a_i,    (7)

where a(x(t)) is the FLC action, α_i(x(t)) is the activation function of rule i and a_i is the specific action selected for that rule. The activation function, or degree of truth, measures how close the input state x(t) is to rule i:

α_i(x(t)) = Π_{k=1..n} μ_{ik}(x_k(t)),    (8)

where n is the number of FLC inputs and μ_{ik}(x_k(t)) is the membership function of the k-th FLC input for rule i.
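A sketch of (7)-(8): rule activations as the product of input membership degrees, and the crisp FLC action as the activation-weighted combination of the selected consequents. The membership values and consequents below are placeholders.

import numpy as np

def rule_activations(membership):
    # membership[i, k]: degree of input k in the fuzzy set of rule i, eq. (8)
    return np.prod(membership, axis=1)

def flc_action(activations, consequents):
    # Weighted combination of the per-rule consequents a_i, eq. (7)
    return float(np.dot(activations, consequents))

membership = np.array([[0.7, 0.4],    # rule 1, two inputs
                       [0.3, 0.6]])   # rule 2
alpha = rule_activations(membership)               # [0.28, 0.18]
print(flc_action(alpha, np.array([-8.0, -4.0])))   # -2.96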

The Q-function can be calculated from the current q-values and the activation functions of the rules:

Q(x(t), a) = Σ_{i=1..N} α_i(x(t)) · q[i, a_i],    (9)

where Q(x(t), a) is the value of the Q-function for the state x(t) in iteration t and the selected action a, and q[i, a_i] is the q-value of the consequent chosen for rule i.
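Equation (9) in code, reusing the activation vector and the per-rule chosen consequents from the previous sketches; the numbers are placeholders.

import numpy as np

def q_of_state_action(q, activations, chosen):
    # eq. (9): activations weighted by the q-values of the chosen consequents
    rows = np.arange(len(chosen))
    return float(np.dot(activations, q[rows, chosen]))

q = np.array([[1.0, 2.0, 0.5],
              [0.0, 3.0, 1.0]])
print(q_of_state_action(q, np.array([0.28, 0.18]), np.array([1, 2])))  # 0.74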

Subsequently, the system evolves, and the new state x(t+1) and the reinforcement signal r(t+1) are observed. The FLC computes α_i(x(t+1)) for all rules, so that the value of the new state is calculated as:

V(x(t+1)) = Σ_{i=1..N} α_i(x(t+1)) · max_j q[i, j].    (10)
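Equation (10) as a one-line function; the activations are those obtained in the new state x(t+1).

import numpy as np

def value_of_state(q, activations_next):
    # eq. (10): best q-value of each rule, weighted by its activation in x(t+1)
    return float(np.dot(activations_next, q.max(axis=1)))

q = np.array([[1.0, 2.0, 0.5],
              [0.0, 3.0, 1.0]])
print(value_of_state(q, np.array([0.1, 0.5])))   # 0.1*2.0 + 0.5*3.0 = 1.7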

The difference between Q(x(t), a) and the reward-corrected value of the new state can be used to update the action q-values. This difference can be seen as an error signal and is given by:

ΔQ = r(t+1) + γ · V(x(t+1)) − Q(x(t), a),    (11)

where ΔQ is the error signal, r(t+1) is the reinforcement signal, γ is a discount factor, V(x(t+1)) is the value of the new state and Q(x(t), a) is the value of the Q-function for the previous state. The action q-values can then be immediately updated by an ordinary gradient descent step:

q[i, a_i] = q[i, a_i] + η · ΔQ · α_i(x(t)),    (12)

where η is a learning rate. Then, the above-described process is repeated, starting with the action selection for the new current state.
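Putting (11)-(12) together, one learning step might look as follows; the learning rate and discount factor values are illustrative, not those used in the paper.

import numpy as np

def update_q(q, chosen, activations, reward, q_sa, v_next, eta=0.1, gamma=0.9):
    # Temporal-difference error, eq. (11)
    delta = reward + gamma * v_next - q_sa
    # Gradient-style correction of the chosen consequents, eq. (12)
    rows = np.arange(len(chosen))
    q[rows, chosen] += eta * delta * activations
    return q

q = np.zeros((2, 3))
q = update_q(q, chosen=np.array([0, 2]), activations=np.array([0.28, 0.18]),
             reward=1.0, q_sa=0.0, v_next=0.0)
print(q)   # only the chosen consequents of the activated rules are updated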

To find the best consequents of the rules, the reinforcement signal must be selected appropriately. In this work, the reinforcement signal is defined as:

r(t) = r_1(t) + r_2(t) + C_1,    (13)

where r(t) is the reinforcement signal for the FLC in iteration t, C_1 is a constant parameter, and r_1(t) and r_2(t) are the reinforcement signal contributions of the two cells in the adjacency, which are chosen as:

r_k(t) = C_2 · log( 1 / ((CBR_k(t) + C_3) · 1000) + 1 ),    (14)

where C_2 and C_3 are constant parameters and CBR_k(t) is the call blocking ratio of cell k in the adjacency.
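A sketch of the reward computation under the reconstruction of (14) given above; the constants C_1, C_2 and C_3 are illustrative assumptions rather than the values used in the paper, chosen so that the shape matches Fig. 2 (positive reward for low CBR, negative for high CBR).

import math

C1, C2, C3 = -1.0, 1.0, 1e-4   # assumed constants, not the paper's values

def cell_reward(cbr):
    # eq. (14): large contribution for low CBR, close to zero for high CBR
    return C2 * math.log(1.0 / ((cbr + C3) * 1000.0) + 1.0)

def reinforcement(cbr_1, cbr_2):
    # eq. (13): sum of both cells' contributions plus a constant offset
    return cell_reward(cbr_1) + cell_reward(cbr_2) + C1

print(round(reinforcement(0.0, 0.0), 2))     # 3.8: low CBR is rewarded
print(round(reinforcement(0.01, 0.01), 2))   # -0.81: high CBR is penalized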

The fuzzy Q-Learning algorithm aims to find the best consequent within a set of actions for each rule so as to maximize future reinforcements. The network operator may have different degrees of a priori knowledge about the rules, eventually being able to propose a set of consequents for a specific rule. The most unfavorable case is when there is no knowledge; in this case, the consequents are equally distributed in the output interval. Another case is imprecise knowledge, which occurs when the network operator is able to select a region of the output interval that is more appropriate than others. Lastly, if a rule is exactly known, it is called precise knowledge.

As mentioned, the value of ε determines the exploration/exploitation policy. During the exploration phase, the set of all possible actions should be evaluated to avoid local minima. The convergence of the Q-Learning algorithm during the exploration phase is reached when the q-values become constant. In the exploitation phase, the best action, i.e., the one with the highest q-value, is chosen most of the time. A small probability of experiencing other actions with lower q-values is given by the policy if ε is set to a small value. These actions may be worse at the moment, but eventually they may lead to zones with higher q-values.

    IV. SIMULATION

A dynamic GERAN system-level simulator has been developed in MATLAB. This simulator first runs a module for parameter configuration and initialization, where a warm-up distribution of users is generated. Then, a loop is started until the end of the simulation, in which the update of user positions, the propagation computation, the generation of new calls and the radio resource management algorithms are executed. Finally, the main statistics and results of the simulation are shown.

The simulated scenario is a macro-cellular environment whose layout consists of 19 tri-sectorized sites evenly distributed over the scenario. The main simulation parameters are summarized in Table I. Only the downlink is considered in the simulation, as it is the most restrictive link in GERAN.

For simplicity, the service provided to users is the circuit-switched voice call, as it is the main service affected by the tuning process.

A non-uniform spatial traffic distribution is assumed in order to generate the need for load balancing. Thus, there are several cells with a high traffic density whereas the surrounding cells have a low traffic density. It is expected that the parameter changes performed by the FLCs manage to relieve congestion in the affected area. It is noted that about 15-20 iterations are sufficient to obtain useful q-values.

In Fig. 2, the reinforcement signal is represented as a function of the CBR. A low CBR is rewarded with positive reinforcement values whereas a high CBR is penalized with negative reinforcement values. A single look-up table of action q-values is shared by all FLCs, in such a way that the look-up table is updated by all FLCs at each iteration and, thus, the convergence process is accelerated.

Once the Q-Learning algorithm has been applied to the controllers, the best consequent for each rule is determined by the highest q-value in the look-up table. Table II shows the set of candidate actions selected for each rule from imprecise knowledge (4th column) together with the consequent obtained by the optimization process (5th column).

TABLE I
SIMULATION PARAMETERS

Cellular layout: Hexagonal grid, 57 GSM cells, cell radius 0.5 km
Propagation model: Okumura-Hata with wrap-around; correlated log-normal slow fading, σ = 8 dB
Mobility model: Random direction, constant speed 3 km/h
Service model: Speech, mean call duration 100 s, activity factor 0.5
BS model: Tri-sectorized antenna, EIRP = 43 dBm
Adjacency plan: Symmetrical adjacencies, 18 per cell
RRM features: Random FH, POC, DTX
HO parameter settings: Qual HO threshold RXQUAL = 4; PBGT HO margin [-24, 24] dB
Traffic distribution: Unevenly distributed in space
Time resolution: SACCH frame (480 ms)
Simulated time: 4 h (per optimization epoch)

Fig. 2. Reinforcement signal as a function of the CBR of the two cells in the adjacency

The linguistic terms defined for the fuzzy input HO margin are L (low), M (medium) and H (high), while those defined for the fuzzy output are EL (extremely low), VL (very low), L (low), N (null), H (high), VH (very high) and EH (extremely high), which correspond to the crisp output values -8, -4, -1, 0, 1, 2, 4 and 8 dB, respectively.

As shown in Table II, rules 1 and 7 are triggered when the difference between CBRs is high and the current HO margin has a value opposite to the required one. The last column shows that the optimization process concludes that the best consequents for these rules are the largest modifications of the HO margins. Rules 2 and 8 are triggered when the difference between CBRs is high and the current HO margin has an intermediate value. In this case, the optimal actions are moderate changes in the HO margins. Rules 3 and 9 are activated when there is a high difference between CBRs but the current HO margin already belongs to the desired interval. In this situation, HO margins might experience saturation and the Q-Learning algorithm has some difficulty in finding the best action. It should be pointed out that the optimization process avoids large changes in HO margins because network sensitivity increases with negative HO margins. Rules 4 and 6 are triggered when traffic is balanced and the current HO margin belongs to an extreme interval. Here, the optimal actions depend on the current load distribution, in such a way that it can be more suitable to bring the handover margin to an intermediate value in order to return users to their original cells, or it can be better to leave it at the same value. A combination of exploration and exploitation would be necessary here to determine the best action at any time. Finally, rule 5 is the only one selected by precise knowledge, because when the traffic is balanced and the HO margin has a neutral value, the best action is obviously to keep the HO margin at the same value.

Fig. 3 shows an example of how the best consequent is selected for a rule, by showing the q-value evolution of the consequents of rules 1 and 2 during a simulation. It is observed that the consequent EL must be considered the best action for rule 1, since it has the largest q-value across iterations. Regarding rule 2, the best consequent is VL.

To check the influence of the parameter ε on network performance, several simulations in the exploitation phase have been carried out.

TABLE II
FLC RULES

Rule | CBR1-CBR2    | HO Margin | Candidate actions | Best action
1    | Unbalanced12 | H         | EL, VL, L         | EL
2    | Unbalanced12 | M         | EL, VL, L         | VL
3    | Unbalanced12 | L         | VL, L, N          | L
4    | Balanced     | H         | VL, L, N          | N
5    | Balanced     | M         | N                 | N
6    | Balanced     | L         | N, H, VH          | N
7    | Unbalanced21 | L         | H, VH, EH         | EH
8    | Unbalanced21 | M         | H, VH, EH         | VH
9    | Unbalanced21 | H         | N, H, VH          | H
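For reference, the optimized rule base of Table II can also be written as a small data structure; the crisp values assigned to the output terms below are an assumption (the transcript lists eight crisp values for seven terms, so a symmetric assignment is used here for illustration only).

# Table II as data; CRISP_DB values are an assumed symmetric mapping.
CRISP_DB = {"EL": -8, "VL": -4, "L": -1, "N": 0, "H": 1, "VH": 4, "EH": 8}

# (CBR difference state, current HO margin term) -> best consequent
BEST_ACTION = {
    ("Unbalanced12", "H"): "EL", ("Unbalanced12", "M"): "VL",
    ("Unbalanced12", "L"): "L",
    ("Balanced", "H"): "N", ("Balanced", "M"): "N", ("Balanced", "L"): "N",
    ("Unbalanced21", "L"): "EH", ("Unbalanced21", "M"): "VH",
    ("Unbalanced21", "H"): "H",
}

print(CRISP_DB[BEST_ACTION[("Unbalanced12", "H")]])   # -8 dB margin change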

For this purpose, one FLC configuration has been defined without exploration (ε = 0), where the set of consequents obtained by the optimization process is fixed during the simulation. Another configuration considers exploration (ε = 0.8), with the initial q-values set to 10 for the best consequents and to 0 for the others. In this case, the consequents are adapted dynamically by the Q-Learning algorithm to manage the load imbalance.

Fig. 4 shows the CBR for 19 selected cells distributed uniformly over the network. The row of bars at the back corresponds to the initial situation of load imbalance. It is observed that there are cells with a high CBR and others with a negligible CBR. The central row of bars corresponds to the end of a simulation in which the FLCs have operated without exploration. It is clear that the traffic load is now shared between cells and the CBR is more equalized than in the initial situation. Finally, the row of bars at the front corresponds to the end of a simulation in which the FLCs have operated with ε = 0.8. As reflected in Fig. 4, the load imbalance can be reduced even more if exploration is also applied.

Fig. 3. q-value evolution of the consequents (EL, VL, L) of rules 1 and 2

Fig. 4. CBR per cell in the exploitation phase (initial situation, ε = 0 and ε = 0.8)

The main drawback preventing operators from fully exploiting the potential of HO-based load balancing is the increase in the Call Dropping Ratio (CDR). As expected, the global CDR is slightly increased compared to the initial situation. In particular, when exploration is considered (ε = 0.8), there is an increase of 0.2% in the CDR. This value is similar to the case without exploration (ε = 0), so the Q-Learning algorithm maintains the same performance in both cases.

V. CONCLUSIONS

This paper has described an optimized FLC that dynamically tunes HO margins for load balancing in GERAN. The optimization process is performed by the fuzzy Q-Learning algorithm, which is able to find the best consequents for the rules of the FLC inference engine. The learning algorithm had some difficulty distinguishing between actions, in one case due to the saturation of the HO margin and in another case due to the time-variant nature of the rule. The latter case can be solved by providing some degree of exploration to the Q-Learning algorithm during the exploitation phase.

Simulation results show that the optimization process achieves a significant reduction in call blocking for congested cells, which leads to an acceptable decrease in the overall call blocking. The FLC can also be considered a cost-effective solution to increase network capacity because no hardware upgrade is necessary. The main disadvantages are a slight increase in call dropping and an increase in network signaling load due to a higher number of HOs.

ACKNOWLEDGMENT

This work has been partially supported by the Junta de Andalucía (Excellence Research Program, project TIC-4052) and by the Spanish Ministry of Science and Innovation (grant TEC2009-13413).

REFERENCES

[1] A. Lobinger, S. Stefanski, T. Jansen, and I. Balan, "Load Balancing in Downlink LTE Self-Optimizing Networks," in Proc. of IEEE Vehicular Technology Conference (VTC), 2010.
[2] R. Kwan, R. Arnott, R. Paterson, R. Trivisonno, and M. Kubota, "On Mobility Load Balancing for LTE Systems," in Proc. of IEEE Vehicular Technology Conference (VTC), 2010.
[3] S. Luna-Ramírez, M. Toril, M. Fernández-Navarro, and V. Wille, "Optimal traffic sharing in GERAN," Wireless Personal Communications, pp. 1-22, 2009.
[4] A. Tölli and P. Hakalin, "Adaptive Load Balancing between Multiple Cell Layers," in Proc. of IEEE Vehicular Technology Conference (VTC Fall), 2002.
[5] A. Pillekeit, F. Derakhshan, E. Jugl, and A. Mitschele-Thiel, "Force-based Load Balancing in Co-located UMTS/GSM Networks," in Proc. of IEEE Vehicular Technology Conference (VTC Fall), 2004.
[6] R. Nasri, A. Samhat, and Z. Altman, "A New Approach of UMTS-WLAN Load Balancing; Algorithm and its Dynamic Optimization," in Proc. of IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, 2007.
[7] M. Toril and V. Wille, "Optimization of Handover Parameters for Traffic Sharing in GERAN," Wireless Personal Communications, vol. 47, no. 3, pp. 315-336, 2008.
[8] S. Luna-Ramírez, M. Toril, F. Ruiz, and M. Fernández-Navarro, "Adjustment of a Fuzzy Logic Controller for IS-HO Parameters in a Heterogeneous Scenario," in Proc. of the 14th IEEE Mediterranean Electrotechnical Conference (MELECON), 2008.
[9] V. Wille, S. Pedraza, M. Toril, R. Ferrer, and J. Escobar, "Trial Results from Adaptive Hand-over Boundary Modification in GERAN," Electronics Letters, vol. 39, pp. 405-407, 2003.
