
EAI Endorsed Transactions Preprint Research Article/Editorial

An Evaluation Framework for Moving Target Defense Based on Analytic Hierarchy Process

Chu Huang 1, Yi Yang 2,∗, and Sencun Zhu 1,†

1 Pennsylvania State University, University Park, PA 16802, USA
2 Fontbonne University, St. Louis, MO 63105, USA

Abstract

A Moving Target Defense (MTD)-enabled system is one which can dynamically and rapidly change its properties and code such that attackers do not have sufficient time to exploit it. Although a variety of MTD systems have been proposed, little work has focused on assessing the relative cost-effectiveness of different MTD approaches. In this paper, based on a generic MTD theory, we propose five general evaluation metrics and an assessment framework built on top of the Analytic Hierarchy Process (AHP), which aggregates these five metrics and systematically evaluates and compares the security strengths and costs of multiple MTD-based approaches in the same category. This framework can be widely used across different MTD categories under various attacks, and it enables a security specialist to choose the best MTD approach from a set of possible alternatives based on his/her goal and understanding of the problem. A detailed case study on a specific MTD category called software diversification validates the effectiveness of this framework. Our evaluation results rank three software diversity algorithms and choose the best one among the three based on the problem setting and situation constraints.

Received on XXXX; accepted on XXXX; published on XXXX

Keywords: Moving Target Defense, Analytic Hierarchy Process, Evaluation and Comparison

Copyright © XXXX Chu Huang et al., licensed to ICST. This is an open access article distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unlimited use, distribution and reproduction in any medium so long as the original work is properly cited.

doi:10.4108/XX.X.X.XX

1. Introduction

In the history of the arms race between attackers and defenders, the game setting has always favored the attackers. This is because, as defenders, we must assume that the attackers know how the system works, and hence we must carefully examine the whole system to make sure that no vulnerability exists, while the attackers only need to know a single vulnerability to break the system. Ensuring that a system is free of vulnerabilities is extremely difficult, if not impossible, especially for large systems with millions of lines of code. Hence, besides keeping up with novel and advanced techniques for identifying and patching system vulnerabilities, detecting malware, and building new systems with security embedded from scratch, an innovative theme in cyber security has recently emerged.

∗ Corresponding author. Email: [email protected]
† Acknowledgement: The work of Dr. Zhu was supported through NSF CNS-1618684.


This new theme is named Moving Target Defense (or MTD) [1]. The philosophy of MTD is that, instead of attempting to build flawless systems to prevent attacks, one may continually change certain system dimensions over time in order to increase the complexity and cost for attackers to probe the system and launch an attack. Rather than leaving the system properties and code static and persistent long enough for an attacker to exploit vulnerabilities, an MTD-enabled system rapidly changes its properties and code such that the attackers do not have sufficient time to study, search, and further exploit them. This strategy ultimately reverses the asymmetric advantage of attackers.

The state-of-the-art approaches based on the concept of MTD can be roughly classified into a few categories. For example, various obfuscation techniques have been proposed to safeguard individual systems against code injection attacks and memory error


exploits, e.g., address space layout randomization (ASLR) [2–4] and instruction set randomization (ISR) [5, 6]. Higher-level MTD approaches have mainly been based on diversity-inspired software assignment [7–9] and on system and network re-configuration, substitution, and shuffling techniques [10–13]. In military environments, frequency hopping techniques such as Frequency Hopping Spread Spectrum (FHSS) [14] have been used to defend against eavesdropping and radio jamming for a long time. Such techniques are all examples of MTD.

Although a variety of moving target defenses have been discussed in the literature [1], a well-accepted methodology to assess the cost-effectiveness of different MTDs is still missing. Many fundamental questions have been raised regarding the evaluation of MTD. For example, how should two or more MTD approaches be compared, and how can one determine the most cost-effective MTD-based approach for addressing a specific security issue?

The difficulty of evaluating the strength of MTD-based approaches is mainly due to the following reasons. First, there is a lack of knowledge about which criteria or metrics must be considered. Currently, different MTD approaches [1] use different sets of terminologies to evaluate their performance. Second, because factors such as the types of exposed attack surface and the capabilities, resources, and strategies of both attackers and defenders vary among systems, different MTD approaches are needed to address different threats; thus, it is difficult to find unified criteria that fit all MTD approaches. Third, the effectiveness of an MTD may or may not be quantitatively measurable, as in practice system security analysis is often done in an ad-hoc and descriptive way. Despite all these challenges, we still need a way to evaluate and compare different MTD techniques in the same category. Although there are many previous works on evaluating system and network security mechanisms, to our knowledge, few have focused on MTD evaluation.

In this paper, we propose an assessment framework for systematically evaluating and comparing the security strengths and costs of multiple MTD-based approaches. This framework will enable a security specialist to choose the best MTD approach from a set of possible alternatives based on his/her goal and understanding of the problem. The ability to choose the appropriate MTD approach (w.r.t. the specific system or network setting and constraints) is critical for the successful deployment of MTD systems. Our main contributions are in three aspects:

• Based on a generic MTD theory, we carefully select five general evaluation metrics tailored for MTD evaluation and comparison;

• We aggregate the five evaluation metrics by proposing, for the first time, a generic MTD evaluation framework based on the Analytic Hierarchy Process (AHP). Due to its generality, this framework can be used in different MTD categories under a variety of attacks;

• We present a detailed case study on evaluating a specific MTD category called software diversification, with evaluation results, which validates the effectiveness of our proposed evaluation framework.

The remainder of this paper is organized as follows. We summarize the related work on MTD systems and their evaluations in Section 2. We present our system models, including a uniform MTD theory model and our attack model, in Section 3. Then, we propose a generic evaluation framework for MTD in Section 4. After that, we give a detailed case study on a specific MTD category, named software diversification, and present our evaluation results in Section 5. In Section 6, we discuss how to apply our evaluation methodology to other MTD categories and at different levels. Last, we conclude our paper and discuss future work in Section 7.

2. Related Work

The dynamic nature of Moving Target Defense (MTD) alleviates the asymmetry of timing differences between attacks and defenses. So far, many MTD systems have been developed [15–17].

Major MTD systems can be divided into four categories. The first category is software-based diversification, such as [18]. The basic defensive idea is to switch among multiple functionally equivalent but internally different program variants to hinder attacks. Different software implementations are not supposed to share the same vulnerability, so even if the attacker exploits a vulnerability in one software version, it still takes similar time to compromise other software versions. The second category is called runtime-based diversification. The basic defensive idea here is to mitigate attacks by dynamically introducing randomization into the runtime environment. Examples in this category include instruction set randomization [5, 6], system call number randomization [19], and address space layout randomization [20]. The third MTD category is network-based diversification [21], such as host IP mutation and hopping, network database schema mutation, and random fingerprinting, whose basic defensive idea is to randomly change network configurations without causing network service failures. The fourth MTD category is dynamic platform techniques [21], which change platform properties or switch among different platforms to stop attacking processes. Examples in this category include virtual machine rotation, server-switching techniques, and self-cleaning techniques.


However, based on the current literature, it is hard to compare different MTD systems by evaluation and to pick an optimal system given the situation and constraints. Current MTD evaluation methodologies [22–24] can also be divided into four main categories: attack-based experiments, simulation-based evaluation, mathematics-based evaluation, and game-theory-based evaluation [25]. Among these, mathematics-based evaluation can be further categorized into probability-based evaluation and Markov-model-based evaluation [26]. There also exist hybrid approaches which combine at least two of the above methods. The major limitation of previous work is the lack of a generic, systematic, and fine-tuned way to evaluate and compare MTD systems of the same category.

In this paper, for the first time, we carefully choose five general evaluation metrics based on a generic MTD theory. We also aggregate these five metrics by proposing an evaluation framework on top of the Analytic Hierarchy Process (AHP). We validate the effectiveness of this framework through a case study on a specific MTD category called software diversification, and we discuss how to apply it to other MTD categories.

3. System Models

3.1. A Uniform MTD Theory Model

A uniform MTD theory model is shown in Figure 1. In this model, the MTD system starts from an initial state. Based on the current pool of the adaptation space, there will be numerous valid states (after eliminating those excluded by the environment constraints) that the initial state can transit to. The MTD system randomly picks a state based on the current situation and transits to that state after a small but unpredictable delay. Ideally, this process is an infinite loop and the MTD system will never run out of valid states, even under attack.

Figure 1. A uniform MTD theory model (from an initial state, an adaptation is chosen based on the pool of the adaptation space and the environment constraints; if the adaptation is valid, it is implemented and, after a delay, a new state is reached; otherwise another adaptation is chosen)
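To make the loop concrete, the following minimal Python sketch mirrors Figure 1; the names (adaptation_pool, is_valid, implement) are illustrative placeholders we introduce here, not part of any particular MTD implementation.

```python
import random
import time

def mtd_loop(initial_state, adaptation_pool, is_valid, implement, max_delay=10.0):
    """A sketch of the uniform MTD theory model in Figure 1.

    adaptation_pool(state) yields candidate next states; is_valid(state)
    checks the environment constraints; implement(state) applies the
    adaptation to the running system.
    """
    state = initial_state
    while True:  # ideally an infinite self-evolving loop
        # Keep only the valid states in the current adaptation space.
        candidates = [s for s in adaptation_pool(state) if is_valid(s)]
        if not candidates:
            raise RuntimeError("ran out of valid states")  # a broken MTD system
        # Pick the next state uniformly at random to keep the trajectory unpredictable.
        nxt = random.choice(candidates)
        implement(nxt)
        # Small but unpredictable delay before the next move.
        time.sleep(random.uniform(0.0, max_delay))
        state = nxt
```

In a real deployment, implement() would perform the actual reconfiguration (e.g., software re-assignment or IP mutation), and the delay bound would be tuned so that moves occur faster than the attacker can probe.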

3.2. Attack Model

The attacker's eventual goal is to stop the MTD self-evolving process, i.e., to terminate the infinite loop. There are two ways for the attacker to break the MTD system. The first is to break as many states as possible by exploiting vulnerabilities; then, once the MTD system evolves into one of those vulnerable states, the MTD system will be broken into. The second is for the attacker to predict the next state and break that specific state. These two methods should be equally difficult for the attacker to achieve if the MTD defensive system is well designed.

4. The Proposed Generic MTD Evaluation Framework

We first give an overview of the evaluation framework. We then briefly present the five general evaluation metrics. The AHP procedure, which aggregates all the evaluation metrics, is discussed at the end of this section.

4.1. An Overview of the Evaluation Framework

We propose that the evaluation of MTD-based approaches be generalized with five metrics: survivability, unpredictability, movability, stability, and usability. In practice, however, not all metrics can be directly and quantitatively measured with absolute meanings, so we further propose a way to aggregate the metrics based on their relative importance, which enables us to quantitatively assign weights to metrics in an automatic way and compare MTD approaches within each category. An overall flowchart of the evaluation framework is shown in Figure 2.

Figure 2. An overview of the generic evaluation framework (input: the MTD application requirement and environment constraints; determine an MTD category and evaluation metrics; calculate running scores for the different MTD approaches in this category based on AHP; output: the MTD approach with the highest score)

4.2. Evaluation Metrics

Below, we justify the rationale for picking these five general evaluation metrics (survivability, unpredictability, movability, stability, and usability) based on our system models; then we briefly explain the concepts/definitions of all five metrics. The whole picture and the interactions among the different metrics can be seen in Figure 3.

Survivability. Survivability describes the degree to which a system/network is able to withstand attacks. Higher survivability means that the system under attack is able to function as normal with a high probability for a longer time. Survivability is closely related to the size of the attack surface and the types of vulnerabilities. The larger the attack surface, the more ways a system can be attacked, and the lower the survivability. On the other hand, if a vulnerability is at the kernel level, greater damage can be done. Although it is desirable to directly measure the survivability of the whole system, in reality this is often not possible, because a system may carry multiple types of vulnerabilities with different impacts.


Figure 3. The MTD self-evolving loop with evaluation metrics (valid states vs. states invalid due to constraints along the MTD evolving trajectory, annotated with unpredictability, survivability, movability, and stability)

In this paper, we consider survivability in the context of a particular attack, and it is measured in either of the following two ways, depending on whether the system under protection is a host or a networked system: (i) the likelihood that a host machine will be compromised in the presence of an attack when an attacker exploits a vulnerability; (ii) the maximal number of machines in a network that will be compromised altogether when an attacker discovers and exploits a particular vulnerability.

More specifically, survivability relates to the life span (or depth) of the MTD self-evolving loop. Ideally, a successful MTD system should have an infinite loop. In reality, a longer loop means higher survivability of the MTD system. Considering the cost of the system, there exists a maximum number of possible states in practice.

Definition 1. Survivability is defined as the relative depth of the MTD self-evolving loop, i.e., a real value between 0 and 1 representing the ratio between the total number of states in the MTD self-evolving loop and the maximum number of possible states in practice.

Unpredictability. MTD-based approaches introduce randomness into the protected system to make its state more unpredictable to an attacker. As a fundamental criterion, unpredictability requires the critical aspects of the system to remain uncertain to the attacker, which makes it difficult for the attacker to anticipate defensive actions. Higher unpredictability means that an attacker is less likely to accurately determine the "key" of transformation of a particular MTD strategy; consequently, the data collected by the attacker contains more noise. In general, unpredictability is determined by the number of states in the moving space of a system and the probability that each state is traversed.

More specifically, unpredictability relates to the unlinkability between two consecutive valid states.

Definition 2. Unpredictability is defined as the unlinkability between two consecutive valid states in the MTD self-evolving loop. Unpredictability will be high (close to 1) if the current state is equally likely to transit to all the valid states in the next adaptation space, i.e., prob(S_i → S_j) = 1/n, where n is the total number of valid states in the next adaptation space and j is an integer between 1 and n.

Movability. Movability is an important characteristic of an MTD-based strategy: it overcomes the limitation of static defense mechanisms by dynamically altering some properties of systems/programs (e.g., proxy substitution or reconfiguration of the network, such as IP addresses). For an actual host or network, there can be many practical constraints on moving. Instead of randomly changing states, a movable strategy or algorithm should be designed to conform to such constraints. Hence, movability is defined as the degree to which certain aspects of a system can be altered without impacting its normal operations; that is, how well an MTD strategy is able to accommodate the given practical constraints.

More specifically, movability relates to the breadth (or width) of the MTD system self-evolving loop.

Definition 3. Movability M_t is defined as the width of the MTD system self-evolving loop at time t, i.e., the total number of valid states in the adaptation space at time t, after eliminating all the invalid states due to practical constraints.

Stability. Stability describes whether the performance of the MTD approach is sustained and effective over the "moving space". Assuming a particular MTD approach changes the system's state from one to another, stability requires that the security level of the system remain approximately the same. A non-stable MTD approach might produce one state with a greater attack surface (e.g., lots of entry points available to untrusted users, or lots of untrusted software running) and another state with a smaller attack surface, thus exposing the system to greater danger once compromised in the first state.

More specifically, stability means that the MTD self-evolving channel should have roughly even or balanced widths.

Definition 4. Stability is defined as the standard deviation of the widths of the MTD system self-evolving loop. The higher the standard deviation, the lower the stability of the MTD system.

Usability. Usability is defined, from the user's experience, as "ease of use". It is the degree to which a user is satisfied with a particular system state. In other words, it reflects how comfortable it is for system users to perform tasks or routine operations on a protected system. A protected system adopting MTD is considered to have high usability if it causes little inconvenience (e.g., not requiring users to be re-trained for every change of state) and is efficient in time and resources (e.g., minimal delay and disruption).


More specifically, usability reflects the tradeoffs among ease of use, performance, and security.

Definition 5. Usability complements the other evaluation metrics by defining how easy the MTD system is to use. An ideal MTD system should maximize ease of use and minimize the performance penalty while achieving the maximum possible security.

Note that we do not consider hardware/software cost when comparing MTD approaches, because the comparable MTD approaches are from the same category and hence have similar hardware/software requirements.

4.3. Aggregation of Evaluation Metrics by Analytic Hierarchy Process (AHP)

To achieve a comprehensive and systematic evaluation, we need to fully consider different types of information, each of which relates to a specific criterion/attribute. Hence, after identifying the criteria for evaluation, we further adopt a methodology to aggregate them and provide a comprehensive analysis. Many analytical tools have been proposed to address such problems in the field of decision analysis, such as Multiple Attribute Utility Theory (MAUT) [27], Multiple-Attribute Decision Analysis (MADA) [28], Multiple Correspondence Analysis (MCA) [29], and the Analytic Hierarchy Process (AHP) [30], to name a few.

We adopt the multi-criteria evaluation methodology named Analytic Hierarchy Process (AHP) for measuring the relative strength of MTD approaches. AHP [30, 31] plays an important role in many real-world decision situations in government, business, industry, healthcare, and education. It provides decision makers a comprehensive and rational framework for structuring a decision problem, for representing and quantifying its elements, for relating those elements to overall goals and their understanding of the problem, and for evaluating alternative solutions (a running example on how to choose a company leader can be found in [32]).

AHP suits our problem setting very well for the following reason. For MTD approaches of different categories (e.g., software diversity, address space/data/instruction set randomization, N-version), the actual security and performance concerns vary a lot, because the type and size of the attack surface, the likelihood of successful attacks, and the cost of dynamically changing the attack surface in each category are very different from one another. As such, although the five metrics we propose are generic, their relative weights in the final evaluation will vary across categories. Hence, to determine the best approach in each category, we first understand the network/system model, the security model, and the cost model for that category. We can then leverage AHP to evaluate the alternative MTD approaches in each category against each criterion/metric, measuring how well a method accomplishes that particular criterion, and compare the alternatives by generating a score for each one for ranking. Note that a very attractive feature of AHP is that, for comparison purposes, it helps generate relative scores for criteria which cannot be directly quantified with an absolute meaning.

General Principles of AHP. Specifically, using AHP we can first construct a hierarchy, as shown in Figure 4. This hierarchy has three levels: the top one is our goal to find the best MTD strategy in a category, the second one includes the five criteria we proposed, and the bottom one includes the alternative MTD approaches to evaluate. Once the hierarchy is built, we can systematically evaluate its various elements by comparing them in a pairwise way, with respect to their impacts on an element above them in the hierarchy.

Figure 4. An AHP hierarchy to choose the best MTD approach

For example, we will start from the bottom level by comparing each pair of alternatives w.r.t. each of the five metrics above; in total we will perform 3 × 5 = 15 comparisons. During the comparison, we calculate numerical weights (priorities) for each of the decision alternatives, and these numbers represent the alternatives' relative ability to achieve the decision goal. For each criterion (metric), the weights of all alternatives are then transferred to an AHP matrix to calculate the priority of each alternative. After evaluating the alternatives with respect to their strength in meeting the criteria, we then evaluate the criteria with respect to their importance in reaching the overall goal. Following a similar process, each criterion is given a weight w.r.t. the goal. Finally, with all the priorities of the criteria with respect to the goal, and the priorities of the alternatives with respect to the criteria, we can synthesize and calculate the priorities of the alternatives with respect to the goal. The one with the highest priority will be the winner. (We will show a concrete example later in the case study of Section 5.)


A Detailed Procedure of AHP. Suppose that m criteria are considered and n alternatives are to be evaluated. In our case, m = 5 and n = 3. The AHP can be implemented in three consecutive steps [33, 34].

Step 1. Computing the vector of criteria weights: The AHP starts by creating a pairwise comparison matrix A. The matrix A is an m × m matrix with real entries. Each entry a_jk of A represents the importance of the jth criterion relative to the kth criterion. The entries a_jk and a_kj satisfy the constraint a_jk × a_kj = 1, and obviously a_jj = 1 for all 1 ≤ j ≤ m. The relative importance between two criteria is measured according to a numerical scale from 1 to 9 [34].

Once the matrix A is constructed, the normalized pairwise comparison matrix A_norm is derived from A by dividing each entry by the sum of the entries in its column:

ā_jk = a_jk / ∑_{l=1}^{m} a_lk.

Finally, the criteria weight vector w (an m-dimensional column vector) is built by averaging the entries on each row of A_norm:

w_j = (1/m) ∑_{l=1}^{m} ā_jl.

Step 2. Computing the matrix of alternative scores: The matrix of alternative scores is an n × m matrix S with real entries. Each entry s_ij of S represents the score of the ith alternative with respect to the jth criterion. In order to derive such scores, a pairwise comparison matrix B(j) is first built for each of the m criteria. The matrix B(j) is an n × n matrix with real values. Each entry b_ih of the matrix B(j) represents the evaluation of the ith alternative compared to the hth alternative with respect to the jth criterion. Similarly, the entries b_ih and b_hi satisfy the constraints b_ih × b_hi = 1 and b_ii = 1 for all i. An evaluation scale [34] is used to translate the decision maker's pairwise evaluations into numbers.

Second, the AHP applies to each matrix B(j) the same two-step procedure described for the pairwise comparison matrix A, i.e., it divides each entry by the sum of the entries in the same column and then averages the entries on each row, thus obtaining the score vectors s(j), j = 1, ..., m. The vector s(j) contains the scores of the evaluated alternatives with respect to the jth criterion. Finally, the score matrix S is obtained as S = [s(1) · · · s(m)], i.e., the jth column of S corresponds to s(j).

Step 3. Ranking the alternatives: Once the weight vector w and the score matrix S have been computed, the AHP obtains a vector v of global scores by multiplying S and w, i.e., v = S × w. The ith entry v_i of v represents the global score assigned by the AHP to the ith alternative.

AHP also incorporates an effective technique for checking the consistency of the evaluations made by the decision maker when building the pairwise comparison matrices involved in the process, namely the matrix A and the matrices B(j). The technique relies on the computation of a suitable consistency index CI. CI is obtained by first computing the scalar λ as the average of the elements of the vector whose jth element is the ratio of the jth element of the vector A × w to the corresponding element of the vector w. Then CI = (λ − m)/(m − 1). A perfectly consistent decision maker should always obtain CI = 0, but small values of inconsistency can be tolerated.
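For concreteness, the three steps above can be sketched in a few lines of NumPy (illustrative code of ours, not from the paper; the function names are our choices):

```python
import numpy as np

def ahp_weights(M):
    """Steps 1/2: column-normalize a pairwise comparison matrix and
    average each row (works for the m x m matrix A and each n x n B(j))."""
    M = np.asarray(M, dtype=float)
    M_norm = M / M.sum(axis=0)      # divide each entry by its column sum
    return M_norm.mean(axis=1)      # row averages give the priority vector

def consistency_index(A, w):
    """CI = (lambda - m) / (m - 1), with lambda the average of (A w)_j / w_j."""
    m = len(w)
    lam = np.mean((np.asarray(A) @ w) / w)
    return (lam - m) / (m - 1)

def ahp_rank(A, B_list):
    """Step 3: global scores v = S w, where column j of S is the score
    vector s(j) derived from the alternative comparison matrix B(j)."""
    w = ahp_weights(A)
    S = np.column_stack([ahp_weights(B) for B in B_list])
    return S @ w
```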

5. A Case Study on Software Diversification MTD

To demonstrate the applicability of our proposed evaluation framework, we next present a case study on a network-level approach: dynamic software-diversity-based MTD.

5.1. Software Diversification MTD

Software diversity [7–9], in the spirit of survivability through heterogeneity, has been one of the major MTD approaches. Specifically, the purpose of software diversity is to select and deploy a set of off-the-shelf software packages to the hosts in a networked system, such that the number and types of vulnerabilities present on one host differ from those on its neighboring nodes. In this way, one is able to contain an automated worm attack in an isolated "island".

An illustrative example is shown in Figure 5. We use an undirected graph as the abstraction of a general networked system. Example networked systems include intranets, enterprise social networks, tactical mobile ad hoc networks, and wireless sensor networks of different network topologies. In this figure, there are 11 machines represented by nodes and 5 distinct pieces of vulnerable software represented by different colors. An attack can propagate by exploiting one type of vulnerability (color). From the figure, we can see that a successful attack exploiting the green color can compromise up to four machines (v2, v5, v7, v11), but it can only compromise one machine when it exploits the yellow color, as machines with the yellow color cannot communicate directly.


Figure 5. Network topology utilizing a diverse software distribution. Dashed or solid lines mean two nodes can communicate directly (e.g., through TCP/IP or being friends in a social network). Solid lines further indicate that two nodes share at least one common color.

5.2. Quantifying the Five Evaluation Metrics

Next, we discuss how to instantiate and quantify the five general evaluation metrics.

Survivability. According to Figure 5, a defective edge between two nodes (which share the same color) indicates that the exploitation of one type of vulnerability on one host can lead to the compromise of the other. The size of a connected component indicates the number of compromised machines if the corresponding vulnerability is discovered and exploited by the attacker (e.g., via a worm attack). We denote a connected component as a common vulnerability graph (CVG). If one can effectively limit the size of the largest CVG, system survivability can be improved. A better software assignment algorithm should be able to produce a software assignment solution with a smaller largest CVG. Formally, the survivability of a networked system can be computed as:

Survivability = 1 − s_max(c_i)/N, (1)

where s_max(c_i) denotes the size of the largest CVG, which is formed by color c_i, and N denotes the network size.
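As an illustration, the largest CVG (and hence Eq. (1)) can be computed with a standard connected-components routine; the sketch below assumes the networkx library and one color per node, as in Figure 5 (the helper name is ours):

```python
import networkx as nx

def survivability(G, color_of):
    """Eq. (1): 1 - s_max(c_i)/N. G is the communication graph; color_of
    maps each node to its installed vulnerable software ("color")."""
    N = G.number_of_nodes()
    s_max = 0
    for c in set(color_of.values()):
        # CVG for color c: the subgraph induced by nodes carrying c; every
        # edge inside it is a "defective edge" along which an exploit spreads.
        nodes_c = [v for v in G if color_of[v] == c]
        comps = nx.connected_components(G.subgraph(nodes_c))
        s_max = max(s_max, max(len(cc) for cc in comps))
    return 1 - s_max / N
```

On the Figure 5 example, the green CVG of size 4 dominates, giving Survivability = 1 − 4/11 ≈ 0.64.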

Unpredictability. The goal of the attacker is to compromise as many nodes as possible; thus, the vulnerability (color) of the largest CVG is always the attacker's first choice to exploit. Unpredictability of a software assignment strategy in this case describes the difficulty for an attacker to determine the prevailing color (the one which forms the largest CVG) after shuffling. For example, given two software assignment algorithms, to compare their unpredictability we can observe the distribution of the colors of the largest CVGs across a number of software shufflings. If the prevailing colors are uniformly randomly distributed, it is hard for an attacker to learn the pattern and predict the next prevailing color. The attacker has to try out every color with about equal probability in order to compromise the network to the largest extent.

To formulate the unpredictability of software assignment in a quantitative way, we use entropy to measure the expected or average "surprise" over all shufflings, reflecting the uncertainty about the prevailing color before it is determined. If the color of the largest CVG is c, let p(c_i) be the probability that c = c_i. Given a set of colors C and the probabilities of their occurrences, the unpredictability produced by the software diversity algorithm is quantified as:

Unpredictability = ∑_{c_i ∈ C} −p(c_i) · ln(p(c_i)). (2)
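Eq. (2) can be estimated directly from the prevailing colors observed over a run of shufflings; a minimal sketch (the helper name is ours):

```python
from collections import Counter
from math import log

def unpredictability(prevailing_colors):
    """Eq. (2): empirical entropy of the prevailing color (the color of the
    largest CVG), with one observation per shuffling."""
    total = len(prevailing_colors)
    return sum(-(n / total) * log(n / total)
               for n in Counter(prevailing_colors).values())
```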

Movability. In order to survive long-lasting (persistent) attacks, software diversity mechanisms should further adopt the technique of software shuffling. By (periodically) re-allocating software on the machines, the attack surfaces of the systems continually change to confuse the attacker and thus delay the attack. In practice, however, to make the assignment solution generated after each shuffling acceptable, the software assignment algorithm needs to take a number of realistic constraints into account. A software assignment algorithm may be able to fully or only partially accommodate the practical constraints that arise from host and software requirements. Here, a host constraint means that certain hosts must be installed with some specific types of software to perform the required functionality (e.g., to deploy a database server it is required to assign DB2). A software constraint means that a certain combination of software should (or should not) be assigned to specified hosts simultaneously (e.g., PHP, Apache, MySQL, and Linux need to be assigned together to implement LAMP on a single node).

Besides, in practice the constraints are not equally important. Some constraints are critical and thus cannot be violated, for example, the case of LAMP: the lack of any one of these four components would cause a service failure on the web server. On the other hand, some constraints are less critical and thus can be relaxed to some extent without impacting the essential functionality of the machine. For example, suppose there is a constraint that a PC should not install a program (for lack of understanding of its security). If a software assignment algorithm has to assign the program to this machine for the greater security of the network, one may, for example, relax this constraint by launching the program through a browser-based SaaS cloud service.

We assume that a software assignment algorithm unable to satisfy the practical constraints is less appropriate than an algorithm that accommodates them well. Thus, we propose to use a penalty score to quantitatively determine the functionality loss caused by violations of the given practical constraints. Specifically, we initially assign to each constraint a penalty score


reflecting its significance, and the penalty is applied only when that particular constraint is violated. For instance, penalty(x_i) denotes the penalty score of violating constraint x_i. For the most critical constraints that cannot be violated, we assign an infinite value. In this way, given a set of constraints CSTR = {x_1, x_2, · · · , x_m}, the total penalty score for the system is given as:

Penalty = ∑_{x_i ∈ CSTR} penalty(x_i) × d(x_i), (3)

where d(x_i) = 1 if x_i is violated and d(x_i) = 0 otherwise, i.e., d(x_i) indicates whether constraint x_i is violated.
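A direct reading of Eq. (3) as code (a sketch; the data layout is our assumption):

```python
def penalty_score(penalty, violated):
    """Eq. (3): sum the penalties of the violated constraints; the indicator
    d(x_i) is encoded by membership in the `violated` set.

    penalty maps constraint -> score, with float('inf') marking critical
    constraints that must never be violated.
    """
    return sum(p for x, p in penalty.items() if x in violated)
```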

Stability. Stability of a software assignment algorithm measures the variation of the quality of the assignment solutions, in terms of the variation of the largest CVG sizes generated by shuffling. In this case, we measure stability through variability. Note that for the unpredictability of shuffling, ideally the shuffling process should be stateless, i.e., each shuffling is independent of the history. The standard deviation can be used to quantify the average difference between assignment solutions, independent of the temporal order in which each assignment solution is generated. The less the variation of the largest CVG size, the more stable the given software assignment algorithm. Let E(S) be the mean size and N be the number of shufflings; the stability of the software assignment algorithm is quantified as:

Stability = √( (1/N) ∑_{i=1}^{N} (S_i − E(S))² ). (4)
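Eq. (4) is simply the population standard deviation of the largest CVG sizes across shufflings, e.g.:

```python
import numpy as np

def stability(largest_cvg_sizes):
    """Eq. (4): a lower standard deviation of s_max across shufflings
    means a more stable assignment algorithm."""
    s = np.asarray(largest_cvg_sizes, dtype=float)
    return float(np.sqrt(np.mean((s - s.mean()) ** 2)))  # same as np.std(s)
```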

Usability. The combination of the software installed on a machine is prone to variation due to shuffling. For example, Microsoft Office may be substituted by OpenOffice, and Firefox may be substituted by Internet Explorer or Google Chrome. Among all these options, some software products are easier to operate than others. Besides, users tend to choose software that they are familiar with or have a certain preference for. A software assignment algorithm might cause inconvenience to users in accomplishing their tasks (e.g., when a secretary's computer cannot install Microsoft Word despite the fact that she has been using it for the past years), and such an assignment is very likely not comfortable for some users. Thus, for software diversity algorithms, usability is defined to reflect the user's experience regarding the assigned software, e.g., familiarity, comfort, satisfaction, and ease of use from a user's point of view. A good algorithm should be able to take users' concerns as one input and maximize overall usability.

We use the acceptance rate (a real value from 0 to 1) to measure users' satisfaction level, reflecting their attitudes toward shifting from a particular software product to another. We first categorize software products based on their functionalities. For example, Linux, Snow Leopard, and Windows are in the operating system category, while Firefox, Chrome, IE, and Opera are in the web browser category. Software may be replaced by another product in the same category. A software substitution with a high acceptance rate indicates users are satisfied or at least have little trouble with the assignment. A low acceptance rate, on the other hand, indicates that a user finds it inconvenient (or is even prevented from doing his job) to switch to the new software.

There are two methods to measure the usability of a software shuffling. The first asks users to assign an acceptance rate for every (pairwise) software substitution in each category based on their experience and attitude. Given these ratings, one can automatically compute the overall acceptance rate for each assignment (compared with the previous assignment) and check whether it is optimal. The other way is to conduct a survey after each shuffling. Users are asked to provide their feedback/scores indicating their willingness to accept or reject the assignment of software on their systems. By adding up the scores from users, the final score is then used as the overall usability of the software assignment. Generally speaking, the first approach is preferable as it conducts the user survey only once. A good algorithm may take individual user acceptance rates into consideration when running.

5.3. Evaluation Results

We first evaluate three software diversification algorithms in terms of our general evaluation metrics, including survivability, unpredictability, movability, and stability (usability is subject to the user survey results), which builds a foundation for our AHP procedure. Then, we use the AHP procedure to produce the best alternative among the three algorithms for this category. The three software diversification algorithms under our consideration are:

• Algorithm I: the basic software diversity algorithm [35];

• Algorithm II: an adjusted software assignment algorithm [36];

• Algorithm III: an Ant Colony Optimization (ACO)-based software assignment algorithm [18].

Figure 6 shows an example comparing the three software assignment algorithms in terms of survivability. Here the x-axis is the ratio #weight/#color, where #weight is the number of vulnerable software packages assigned to a single machine and #color is the total number of vulnerable software packages installed in the whole networked system.


This ratio implicitly reflects the likelihood of nodes sharing the same colors. The y-axis is the size of the largest CVG. As we can see, Algorithm I outperforms the other two algorithms by creating a smaller s_max under the same situation, and Algorithm III is better than Algorithm II. Thus, it is straightforward to rank these algorithms as Algorithm I > Algorithm III > Algorithm II in terms of survivability. Note that in practice it is not necessarily the case that one algorithm outperforms the others all the time. (These three algorithms are used for illustration purposes throughout this paper.)

Figure 6. Survivability of the different algorithms

To gain intuition for unpredictability, we show an illustrative example of the distribution of the prevailing color (of the largest CVG). As observed in Figure 7, in Algorithm III, s_max(c_i) is formed by any color c_i among all the available colors with approximately equal probability, which indicates the random nature of the assignments generated by Algorithm III. According to Shannon theory, when p(c_i) = p(c_j) for any i ≠ j, unpredictability (entropy) reaches its maximum. As for Algorithm I and Algorithm II, they are more predictable to the attacker than Algorithm III. For example, in the shuffling outcome of Algorithm I, color 1 is more likely to form the largest CVGs, and hence is more preferable for the attacker to exploit.

Figure 7. Unpredictability of the different algorithms. Here, a total of 20 colors (vulnerable software) are assigned in the networked system, and the x-axis is the numerical label of each color.

Figure 8 is an example that plots the moving penalties for the three algorithms. Each data point in this figure is obtained by calculating the penalty score for a corresponding software assignment. It is observed that, in general, the penalties resulting from Algorithm II are the highest, indicating that the performance of Algorithm II is largely restricted by practical constraints (lowest movability). Algorithm III cannot accommodate constraints well either. Algorithm I has the lowest penalty scores, so it offers better software assignment strategies than Algorithm II and Algorithm III in terms of movability.

Figure 8. Movability of the different algorithms

To fully evaluate the stability of an algorithm, we need to try different network settings. In this way, one can see the ability of the assignment algorithm to accommodate mutable environments (e.g., different types of network topologies such as scale-free, random, or regular graphs). Again, we use the three algorithms as an example to explain the concept of stability. Suppose we run each of the three algorithms twenty times while changing network topologies and use all the generated assignment solutions. In Figure 9, we observe that the variation range of s_max(c_i) for Algorithm II is much smaller, which means its standard deviation is lower than that of the other two. Hence, it produces more stable results, even though the size of s_max(c_i) for Algorithm II is larger than for Algorithm I (that is, Algorithm II is inferior to Algorithm I in terms of survivability).

Figure 9. Stability of the different algorithms

Table 1 is a simple illustrative example to compare and rank the three available software diversity algorithms. We consider the five proposed metrics as the evaluation criteria for ranking (if more metrics need to be considered, this example can be expanded accordingly). We are interested in comparing the relative strength of the alternative software assignment algorithms and determining the best one in terms of the proposed evaluation metrics.


Table 1. Calculating the weights of the evaluation metrics (CI = 0.8%)

Metrics          | Survivability | Unpredictability | Movability | Stability | Usability | Weights
Survivability    | 1             | 1/2              | 1/2        | 2         | 4         | 0.175
Unpredictability | 2             | 1                | 2          | 4         | 9         | 0.415
Movability       | 2             | 1/2              | 1          | 3         | 6         | 0.273
Stability        | 1/2           | 1/4              | 1/3        | 1         | 2         | 0.092
Usability        | 1/4           | 1/9              | 1/6        | 1/2       | 1         | 0.045

Table 2. Final scores for every algorithm

Alternatives \ Metrics | Survivability (0.175) | Unpredictability (0.415) | Movability (0.273) | Stability (0.092) | Usability (0.045) | Score
Algorithm I            | 0.571                 | 0.143                    | 0.571              | 0.143             | 0.333             | 0.343
Algorithm II           | 0.143                 | 0.286                    | 0.143              | 0.571             | 0.333             | 0.250
Algorithm III          | 0.286                 | 0.571                    | 0.286              | 0.286             | 0.333             | 0.406

First, we need a judgment matrix for determining the weights of the 5 metrics according to their importance. In comparing the 5 metrics, for illustration purposes, we assume they are ordered as Unpredictability > Movability > Survivability > Stability > Usability based on their importance. The weight assigned to each metric can then be determined by adopting the 9-point scale of measurement [30]. The 5 × 5 matrix in Table 1 contains all of the pairwise comparisons for the metrics.

The comparison matrices of the alternatives for each metric are omitted here because the value of each metric is mostly given in the previous discussion. The final decision matrix for this problem can then be generated, and the final scores are shown in Table 2.

Finally, the ranking of these three software diversity algorithms is: Algorithm III > Algorithm I > Algorithm II, as shown in Figure 10.
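As a sanity check, feeding the Table 1 judgment matrix and the Table 2 per-metric scores through the AHP steps of Section 4.3 reproduces the published numbers up to rounding (a NumPy sketch of ours, not the authors' code):

```python
import numpy as np

# Judgment matrix from Table 1 (rows/columns ordered: survivability,
# unpredictability, movability, stability, usability).
A = np.array([
    [1,   1/2, 1/2, 2,   4],
    [2,   1,   2,   4,   9],
    [2,   1/2, 1,   3,   6],
    [1/2, 1/4, 1/3, 1,   2],
    [1/4, 1/9, 1/6, 1/2, 1],
])
w = (A / A.sum(axis=0)).mean(axis=1)   # ~ [0.175, 0.415, 0.273, 0.092, 0.045]

lam = np.mean((A @ w) / w)             # estimate of the principal eigenvalue
CI = (lam - 5) / (5 - 1)               # small (< 0.01); Table 1 reports CI = 0.8%

# Per-metric scores of the three alternatives from Table 2 (rows: I, II, III).
S = np.array([
    [0.571, 0.143, 0.571, 0.143, 0.333],
    [0.143, 0.286, 0.143, 0.571, 0.333],
    [0.286, 0.571, 0.286, 0.286, 0.333],
])
print(S @ w)                           # ~ [0.343, 0.250, 0.406]: III > I > II
```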

Figure 10. Evaluation result (the AHP hierarchy with computed priorities: the goal is to find the best software assignment algorithm; the criteria are Survivability (0.175), Unpredictability (0.415), Movability (0.273), Stability (0.092), and Usability (0.045); the alternatives are Algorithm I (0.343), Algorithm II (0.25), and Algorithm III (0.406))

6. Discussions

In this section, we discuss i) how to apply our generic evaluation framework to other MTD categories, such as runtime-based diversification and network-based diversification; and ii) how to apply our generic evaluation framework at different levels, such as the system level, platform level, and network level.

6.1. Evaluations of Other MTD Categories

In Section 2 we discussed four different MTD categories, and in Section 5 we presented a detailed case study on a specific MTD category called software diversification. In this section, we discuss how to apply our generic evaluation framework to the other three MTD categories. Since our five evaluation metrics are general, they can be applied to all the other three MTD categories; only the ways in which we instantiate or quantify them might differ for each category. The idea of the Analytic Hierarchy Process (AHP) in the generic evaluation framework remains the same, although the five general evaluation metrics might have different weights for different categories and the alternative algorithms in each MTD category will differ. Therefore, the flowchart of our generic evaluation framework stays the same for all MTD category evaluations and comparisons, and our generic framework works for all the MTD categories.

6.2. Applying the Proposed Generic Evaluation Framework at Different Levels

Our generic evaluation framework can also be applied at different levels, including the system level, platform level, and network level. Software diversification is a network-level solution, so we have already seen the instantiation of our generic framework at the network level. For the system level, we consider a specific operating system on a machine, and all the general evaluation metrics are then defined in that domain. Likewise, for the platform level, we consider a single machine with potentially multiple operating systems, and our


general evaluation metrics need to be defined for that scope. Other than this, the AHP procedure is the same. Thus, our generic evaluation framework applies to the system level and the platform level, too.

7. Conclusion and Future Work

In this paper, we carefully chose five general metrics for MTD evaluation and comparison: survivability, unpredictability, movability, stability, and usability. We also aggregated these five evaluation metrics by proposing, for the first time, a generic evaluation framework based on the Analytic Hierarchy Process (AHP). We worked through a detailed case study on a specific MTD category named software diversification with numerical evaluation results, which validates the effectiveness of our generic evaluation framework. Our evaluation framework can be easily ported and applied to other MTD categories (such as runtime-based diversification and network-based diversification) and at different levels, and we discussed the ways to do so.

Our future work includes applying our generic evaluation framework to all the other three MTD categories. Besides software diversification, we will study the other three cases, including runtime-based diversification, network-based diversification, and dynamic platform techniques, in detail by instantiating our general evaluation metrics and the AHP procedure. We believe that our generic MTD evaluation framework will be effective and efficient for all the MTD categories at different scopes/levels.

References

[1] Jajodia, S., Ghosh, A.K., Swarup, V., Wang, C. and Wang, X.S. (2011) Moving Target Defense: Creating Asymmetric Uncertainty for Cyber Threats, 54 (Springer).
[2] Giuffrida, C., Kuijsten, A. and Tanenbaum, A. (2012) Enhanced operating system security through efficient and fine-grained address space randomization. In USENIX Security Symposium.
[3] Bojinov, H., Boneh, D., Cannings, R. and Malchev, I. (2011) Address space randomization for mobile devices. In Proceedings of the Fourth ACM Conference on Wireless Network Security (ACM).
[4] Snow, K.Z., Monrose, F., Davi, L., Dmitrienko, A., Liebchen, C. and Sadeghi, A.R. (2013) Just-in-time code reuse: On the effectiveness of fine-grained address space layout randomization. In IEEE Symposium on Security and Privacy (IEEE).
[5] Gundy, M.V. and Chen, H. (2009) Noncespaces: Using randomization to enforce information flow tracking and thwart cross-site scripting attacks. In NDSS.
[6] Boyd, S.W., Kc, G.S., Locasto, M.E., Keromytis, A.D. and Prevelakis, V. (2010) On the general applicability of instruction-set randomization. IEEE Transactions on Dependable and Secure Computing 7(3): 255–270.
[7] O'Donnell, A. and Sethu, H. (2004) On achieving software diversity for improved network security using distributed coloring algorithms. In Proceedings of the 11th ACM Conference on Computer and Communications Security (ACM).
[8] Yang, Y., Zhu, S. and Cao, G. (2008) Improving sensor network immunity under worm attacks: a software diversity approach. In Proceedings of the 9th ACM International Symposium on Mobile Ad Hoc Networking and Computing (ACM).
[9] Mont, M.C., Baldwin, A., Beres, Y., Harrison, K., Sadler, M. and Shiu, S. (2002) Towards Diversity of COTS Software Applications: Reducing Risks of Widespread Faults and Attacks. Tech. Rep. HPL-2002-178, HP Laboratories, Bristol, UK.
[10] Jia, Q., Sun, K. and Stavrou, A. (2013) Motag: Moving target defense against internet denial of service attacks. In 22nd International Conference on Computer Communications and Networks (ICCCN) (IEEE).
[11] Crouse, M. and Fulp, E. (2011) A moving target environment for computer configurations using genetic algorithms. In 4th Symposium on Configuration Analytics and Automation (SAFECONFIG) (IEEE).
[12] Cui, A. and Stolfo, S. (2011) Symbiotes and defensive mutualism: Moving target defense. In Moving Target Defense (Springer).
[13] Al-Shaer, E. (2011) Toward network configuration randomization for moving target defense. In Moving Target Defense (Springer).
[14] Wikipedia (2017), Frequency-hopping spread spectrum. URL http://en.wikipedia.org/wiki/FHSS.
[15] Zhuang, R., DeLoach, S.A. and Ou, X. (2014) Towards a theory of moving target defense. In MTD Workshop.
[16] Zhuang, R., Bardas, A.G., DeLoach, S.A. and Ou, X. (2015) A theory of cyber attacks: A step towards analyzing MTD systems. In MTD Workshop.
[17] Zhuang, R. (2015) A Theory for Understanding and Quantifying Moving Target Defense. Ph.D. thesis, Kansas State University.
[18] Huang, C., Zhu, S. and Guan, Q. (2015) Multi-objective software assignment for active cyber defense. In CNS.
[19] Jiang, X., Wang, H.J., Xu, D. and Wang, Y.M. (2007) Randsys: Thwarting code injection attacks with system service interface randomization. In SRDS.
[20] Shacham, H., Page, M., Pfaff, B., Goh, E.J., Modadugu, N. and Boneh, D. (2004) On the effectiveness of address-space randomization. In CCS.
[21] Xu, J., Guo, P., Zhao, M., Erbacher, R.F., Zhu, M. and Liu, P. (2014) Comparing different moving target defense techniques. In MTD Workshop.
[22] Zaffarano, K., Taylor, J. and Hamilton, S. (2015) A quantitative framework for moving target defense effectiveness evaluation. In MTD Workshop.
[23] Lamb, C. and Hamlet, J. (2016) Dependency graph analysis and moving target defense selection. In MTD Workshop.
[24] Taylor, J., Zaffarano, K., Koller, B., Bancroft, C. and Syversen, J. (2016) Automated effectiveness evaluation of moving target defenses: Metrics for missions and attacks. In MTD Workshop.
[25] Prakash, A. and Wellman, M. (2015) Empirical game-theoretic analysis for moving target defense. In MTD Workshop.
[26] Maleki, H., Valizadeh, S., Koch, W., Bestavros, A. and van Dijk, M. (2016) Markov modeling of moving target defense games. In MTD Workshop.
[27] Kainuma, Y. and Tawara, N. (2006) A multiple attribute utility theory approach to lean and green supply chain management. International Journal of Production Economics 101(1): 99–108.
[28] Yoon, K. and Hwang, C.L. (1995) Multiple Attribute Decision Making: An Introduction (Sage).
[29] Abdi, H. and Valentin, D. (2007) Multiple correspondence analysis. In Encyclopedia of Measurement and Statistics.
[30] Saaty, T. (2008) Decision making with the analytic hierarchy process. International Journal of Services Sciences 1(1): 83–98.
[31] Saaty, T. (1990) How to make a decision: the analytic hierarchy process. European Journal of Operational Research 48(1): 9–26.
[32] (2017), AHP example. URL https://en.wikipedia.org/wiki/Analytic_hierarchy_process.
[33] Saaty, T. (1980) The Analytic Hierarchy Process (New York: McGraw-Hill).
[34] (2017), The analytic hierarchy process. URL www.dii.unisi.it/mocenni/Note_AHP.pdf.
[35] Huang, C., Zhu, S. and Erbacher, R. (2014) Toward software diversity in heterogeneous networked systems. In Proceedings of the 28th IFIP WG 11.3 Conference on Data and Applications Security and Privacy (DBSec).
[36] Huang, C., Zhu, S., Guan, Q. and He, Y. (2017) A software assignment algorithm for minimizing worm damage in networked systems. Journal of Information Security and Applications.
