
Calhoun: The NPS Institutional Archive

Faculty and Researcher Publications Faculty and Researcher Publications

2015

Ranking terrorist targets using a hybrid AHP-TOPSIS methodology

Fox, William P.

Journal of Defense Modeling and Simulation: Applications, Methodology, Technology, pp. 1-17, 2015

http://hdl.handle.net/10945/45319


Applications

JDMS

Journal of Defense Modeling and Simulation: Applications, Methodology, Technology, 1-17. © 2015 The Society for Modeling and Simulation International. DOI: 10.1177/1548512914563619. dms.sagepub.com

Ranking terrorist targets using a hybrid AHP–TOPSIS methodology

William P Fox1, Brendan Ormond2, and Alex Williams2

Abstract
We present a methodology and an example of preparing an order of merit list to rank terrorist targets based upon decision-maker weights. We used an older terrorist data set as our base data to keep the information unclassified; these data are used to demonstrate the methodology. We perform numerical iterative criteria weight sensitivity analysis to show the effects of changes in the weights on the model's outputs, and we identify the critical criterion.

Keywords
Ranking terrorist targets, technique of order preference by similarity to ideal solution, analytical hierarchy process, criteria weights, pairwise comparisons, sensitivity analysis

1. Introduction

The United States of America is still fighting the global war against terrorism. This strategy is embedded in many of our country's strategic plans. The National Strategy for Combating Terror1 states that the United States Government's intent is ''victory in the long war against terror'', and that the goal and objective is ''to defeat terrorists and their organizations''. The US has coined this struggle the Global War on Terror (GWOT), and the targeting of a terrorist organization's personnel is an integral part of this effort.

According to Department of Defense (DOD) doctrine in Army FM 34-8-22 and FM 3-60:3

Targeting is the process of selecting targets and matching the appropriate response to them, including operational requirements and capabilities. The purpose of targeting is to disrupt, delay, or limit threat interference with friendly Courses of Action (COAs).2

Human-targeting, the process of selecting a human target, exists as a subset of this more general targeting doctrine, and it is being applied to terrorists. A common misconception is that human-targeting denotes either a hard-power or a soft-power strategy, involving either kinetic or non-kinetic power. Human-targeting is instead intent- and objective-neutral: it specifies neither the type of action taken nor the counterterrorism (CT) objective desired. Rather, human-targeting represents an analytical process that assigns a heuristic value to a target. This assignment of ''value'' allows for the prioritization of multiple targets, and this prioritization permits CT organizations to direct efforts and allocate resources. Consequently, every government agency, unit, or official whose function serves to counter terrorism remains dependent on the human-targeting process. To mitigate the risk inherent in this process, we propose the development of a systematic method for the conduct of human targeting. We test the proposition that statistical and mathematical modeling methods extensively tested and used in other academic and commercial disciplines may be successfully applied to prioritize human targeting. This paper is an example of this concept.

1Department of Defense Analysis, Naval Postgraduate School, Monterey, CA, USA
2US Army, Special Forces

Corresponding author:
William P. Fox, Department of Defense Analysis, Naval Postgraduate School, 589 Dyer Road, Room 103 F, NPS, Monterey, CA 93943, USA.
Email: [email protected]


The process involves numerous complex and dynamic interactions filled with ambiguities. Minor variations in the process dramatically affect human-targeting decisions, producing essentially unpredictable results. The fog of war causes varied decisions. In other words, CT organizations may be targeting the wrong (or a less valuable) terrorist. This inefficiency is not only a misuse of intelligence; it wastes limited national resources, which inevitably places lives unnecessarily at risk. Left unaddressed, this critical United States Government decision-making process with systemic problems could result in a catastrophic intelligence failure.

In previous work by Twedell and Edmonds,4 a series of six linear regression models was used to ultimately model and obtain a series of terrorist rank orderings. We believe our proposed methodology is better suited to obtaining a rank ordering. In addition, in previous research using AHP and the Technique of Order Preference by Similarity to Ideal Solution (TOPSIS) alone, we found a need for decision-maker weights. Without the actual analysts and decision makers to provide weights, we used a hybrid approach in which we used AHP's subjective methods to obtain weights for our prioritized criteria, and then used available unclassified data for the terrorists themselves and their organizations.

2. Proposed methodology: TOPSIS

TOPSIS was the result of work done by Yoon and Hwang.5 TOPSIS has been used in a wide spectrum of comparisons of alternatives, including: item selection from among alternatives, ranking leaders or entities, remote sensing in regions, data mining, and supply chain operations. TOPSIS is chosen over other methods because it orders the feasible alternatives according to their closeness to an ideal solution.6

Napier7 provided some analysis of the use of TOPSIS for the DOD in industrial base planning and item selection. For years the military used TOPSIS to rank order the systems' requests from all of the branches within the service for the annual budget review process,8 and TOPSIS continues to be taught as part of decision analysis. Current work is being done to show the ability of TOPSIS to rank order nodes of a dark or social network across all the metrics of social network analysis.9,10

In manufacturing analysis, Wang and He11 proposed two methods to improve TOPSIS for multi-response optimization using Taguchi's loss function. Ozturk and Batuk12 used TOPSIS for spatial decisions and then linked it to geographical information systems (GIS) operations for flood vulnerability. Olson and Wu13 have shown how TOPSIS may be used for data mining and analysis in credit card score data. Olson and Wu13 also presented a comparison of weights (centroid weights, equal weights, and weights by linear regression) in TOPSIS models using baseball data, where their conclusion was that accurate weights in TOPSIS are crucial to success.

In a business setting, TOPSIS has been applied to a large number of application cases in advanced manufacturing processes,14–16 purchasing and outsourcing,17,18 and financial performance measurement.19

In social and dark networks, TOPSIS has been used to rank order the nodes across all metrics in order to identify the most influential node.13,20

TOPSIS methodology

The TOPSIS process is carried out as follows:

Step 1
Create an evaluation matrix consisting of m alternatives ($A_i$) and n criteria, with the intersection of each alternative and criterion given as $x_{ij}$, giving us a matrix $(x_{ij})_{m \times n}$:

$$D = \begin{array}{c|cccc} & x_1 & x_2 & \cdots & x_n \\ \hline A_1 & x_{11} & x_{12} & \cdots & x_{1n} \\ A_2 & x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_m & x_{m1} & x_{m2} & \cdots & x_{mn} \end{array}$$

Step 2
The matrix $D$ shown above is then normalized to form the matrix $R = (r_{ij})_{m \times n}$ using the normalization method

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}, \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n$$

Step 3
Calculate the weighted normalized decision matrix. First we need the weights, which can come either from the decision maker or from computation.

Step 3(a). Use either the decision maker's weights for the attributes $x_1, x_2, \ldots, x_n$ or compute the weights through the use of Saaty's21–23 AHP decision-maker weights method, obtaining the weights ($w_j$) as the eigenvector of the attribute-versus-attribute pairwise comparison matrix, with

$$\sum_{j=1}^{n} w_j = 1$$

The sum of the weights over all attributes must equal 1 regardless of the method used.

Step 3(b). Multiply the weights by each of the column entries in the matrix from Step 2 to obtain the weighted normalized decision matrix T:

$$T = (t_{ij})_{m \times n} = (w_j r_{ij})_{m \times n}, \qquad i = 1, 2, \ldots, m$$

Step 4
Determine the worst alternative ($A_w$) and the best alternative ($A_b$): examine each attribute's column and select the largest and smallest values appropriately. If the values imply larger is better (profit), then the best alternative takes the largest values; if the values imply smaller is better (such as cost), then the best alternative takes the smallest values:

$$A_w = \{\, \max_i t_{ij} \mid j \in J^-;\; \min_i t_{ij} \mid j \in J^+ \,\} \equiv \{\, t_{wj} \mid j = 1, 2, \ldots, n \,\}$$

$$A_b = \{\, \min_i t_{ij} \mid j \in J^-;\; \max_i t_{ij} \mid j \in J^+ \,\} \equiv \{\, t_{bj} \mid j = 1, 2, \ldots, n \,\}$$

where $J^+ = \{\, j = 1, 2, \ldots, n \mid j$ is associated with the criteria having a positive impact$\,\}$ and $J^- = \{\, j = 1, 2, \ldots, n \mid j$ is associated with the criteria having a negative impact$\,\}$.

We suggest, where possible, expressing all entry values in terms of positive impacts.

Step 5
Calculate the L2-distance between the target alternative i and the worst condition $A_w$:

$$d_{iw} = \sqrt{\sum_{j=1}^{n} (t_{ij} - t_{wj})^2}, \qquad i = 1, 2, \ldots, m$$

and the distance between alternative i and the best condition $A_b$:

$$d_{ib} = \sqrt{\sum_{j=1}^{n} (t_{ij} - t_{bj})^2}, \qquad i = 1, 2, \ldots, m$$

where $d_{iw}$ and $d_{ib}$ are the L2-norm distances from the target alternative i to the worst and best conditions, respectively.

Step 6
Calculate the similarity to the worst condition:

$$s_{iw} = \frac{d_{iw}}{d_{iw} + d_{ib}}, \qquad 0 \le s_{iw} \le 1, \; i = 1, 2, \ldots, m$$

Here $s_{iw} = 0$ if and only if the alternative solution has the worst condition, and $s_{iw} = 1$ if and only if the alternative solution has the best condition.

Step 7
Rank the alternatives according to their values of $s_{iw}$ ($i = 1, 2, \ldots, m$).
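Steps 1 through 7 can be collected into a short, self-contained script. The sketch below is our illustration, not code from the paper; the function name `topsis`, the boolean `benefit` flags (marking which criteria lie in $J^+$), and the toy data in the usage note are our own assumptions.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank m alternatives on n criteria (Steps 1-7 above).

    matrix  -- m x n raw scores x_ij
    weights -- n criterion weights w_j summing to 1
    benefit -- n booleans, True when larger is better (j in J+)
    """
    m, n = len(matrix), len(matrix[0])
    # Step 2: vector normalization r_ij = x_ij / sqrt(sum_i x_ij^2)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0
             for j in range(n)]
    # Step 3: weighted normalized matrix t_ij = w_j * r_ij
    t = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    # Step 4: best (A_b) and worst (A_w) condition per criterion
    cols = list(zip(*t))
    best = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in t:
        # Step 5: L2 distances to the worst and best conditions
        d_w = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(n)))
        d_b = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(n)))
        # Step 6: similarity to the worst condition, 0 <= s_iw <= 1
        scores.append(d_w / (d_w + d_b) if d_w + d_b else 0.0)
    # Step 7: rank alternatives, largest s_iw first
    order = sorted(range(m), key=lambda i: scores[i], reverse=True)
    return scores, order
```

For example, `topsis([[1, 1], [2, 2], [3, 3]], [0.5, 0.5], [True, True])` scores the dominant third alternative 1.0 and the dominated first alternative 0.0.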

Normalization

Two methods of normalization that have been used to deal with incongruous criteria dimensions are linear normalization and vector normalization.

Linear normalization can be calculated as in Step 2 of the TOPSIS process above. Vector normalization was incorporated with the original development of the TOPSIS method,24 and is calculated using the following formula:

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}, \qquad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n$$

In using vector normalization, the nonlinear distances between single-dimension scores and ratios should produce smoother trade-offs.25

Sensitivity analysis

Since AHP, at least in the pairwise comparisons, is based upon subjective inputs using the nine-point scale, sensitivity analysis is extremely important. Leonelli,26 in his master's thesis, outlines procedures for sensitivity analysis to enhance decision support tools, including numerical incremental analysis of a weight, probabilistic simulations, and mathematical models. How often do we change our minds about the relative importance of an object, place, or thing? Often enough that we should alter the pairwise comparison values to determine how robust our rankings are in the AHP process. We suggest doing enough sensitivity analysis to find the ''break-point'' values, if they exist, of the decision-maker weights that change the rankings of our alternatives. Since the pairwise comparison matrices are subjective and compiled using Saaty's method, we suggest as a minimum a ''trial and error'' sensitivity analysis using numerical incremental analysis of the weights.

Hwang et al.25 grouped sensitivity analysis into three main groups: numerical incremental analysis, probabilistic simulations, and mathematical models. The numerical incremental analysis, also known as one-at-a-time (OAT) or ''trial and error'', works by incrementally changing one parameter at a time, finding the new solution, and showing graphically how the ranks change. There exist several variations of this method.27,28 Probabilistic simulation employs Monte Carlo simulation,29 which allows random changes in the weights and simultaneously explores the effect on the ranks. Modeling may be used when it is possible to express the relationship between the input data and the solution results.
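A minimal OAT sweep can be sketched as follows. This is our illustration with toy data, using a plain weighted-sum scorer as a stand-in for the full AHP–TOPSIS pipeline, and rescaling the remaining weights so they still sum to 1.

```python
def rank_by_weighted_sum(matrix, weights):
    # Stand-in scorer: any ranking model (e.g., TOPSIS) fits here.
    scores = [sum(w * x for w, x in zip(weights, row)) for row in matrix]
    return sorted(range(len(matrix)), key=lambda i: scores[i], reverse=True)

# Toy data: two alternatives scored on two criteria (illustrative only).
matrix = [[0.8, 0.2],
          [0.4, 0.9]]
base = [0.5, 0.5]

# One-at-a-time: vary the first weight, rescale the rest proportionally,
# and record the resulting ranking to look for a "break-point".
results = {}
for new_wp in (0.1, 0.3, 0.5, 0.7, 0.9):
    scale = (1 - new_wp) / (1 - base[0])
    weights = [new_wp] + [w * scale for w in base[1:]]
    results[new_wp] = rank_by_weighted_sum(matrix, weights)
```

For this toy data the two alternatives swap ranks somewhere between a first weight of 0.5 and 0.7, so a break-point exists in that interval.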

The decision weights are subject to sensitivity analysis to determine how they affect the final ranking. The same procedures discussed previously are valid here. Sensitivity analysis is essential to good analysis. In addition, Alinezhad and Amini30 suggest a sensitivity analysis for TOPSIS based on changing an attribute weight. The equation they developed for adjusting the weights after a single weight change, which we used, is

$$w'_j = \frac{1 - w'_p}{1 - w_p}\, w_j$$

where $w'_j$ is the future weight of criterion j, $w_p$ is the current selected weight to be changed, $w'_p$ is the new value of the selected weight, and $w_j$ is the current weight of criterion j.
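The single-weight adjustment can be implemented directly; the sketch below is ours (the helper name and example values are illustrative, not from the paper).

```python
def adjust_weights(weights, p, new_wp):
    """Set weight p to new_wp and rescale every other weight by
    (1 - w'_p) / (1 - w_p), per the Alinezhad-Amini formula, so the
    full weight vector still sums to 1."""
    scale = (1.0 - new_wp) / (1.0 - weights[p])
    return [new_wp if j == p else w * scale
            for j, w in enumerate(weights)]
```

For example, `adjust_weights([0.4, 0.35, 0.25], 0, 0.5)` raises the first weight to 0.5 and shrinks the others in proportion, keeping the total at 1.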

In this paper, we demonstrate the hybrid AHP–TOPSIS methodology by using AHP to obtain the critical decision-maker weights and then applying it to the terrorist-ranking problem identified by Twedell and Edmonds.4 We must caveat that the model constructed and presented in this research is based entirely on our personal analysis and subjective judgments of the discrete data set found in Twedell and Edmonds' thesis.4 A real-world application of this methodology for ranking terrorists would require the explicit input and pairwise preferences of the commensurate decision-makers within the US Government.

We devised a three-phase methodology for analyzing terrorists, as outlined in Figure 1.

3. Applying the phases

Before applying our methodology, we first conducted a comprehensive review of Twedell and Edmonds' research and a detailed analysis of their data set. The first component of their data set consists of two target lists, the Green Target List and the Blue Target List, which are based on actual historical target lists created by an unknown CT analyst sometime between 2004 and 2005 (Tables 1 and 2).

To construct these target lists, Twedell and Edmonds4 thoroughly researched both of the analyst-created target lists and eliminated terrorists from consideration if they could not corroborate identities or find corresponding data for the attributes they were analyzing (note that the terrorist identities were masked by Twedell and Edmonds4 using a generic Terrorist ID number). Since this elimination process left gaps in the rankings from the analysts' lists, for the purpose of our research we re-ranked the remaining terrorists in each list (depicted in the Ordinal rank row in Tables 1 and 2). The 21 terrorists on the Green Target List and the 45 terrorists on the Blue Target List will be evaluated as our alternatives by our model.

Terrorist attributes

The second component of the data set contains a list of 74 critical terrorist attributes that Twedell and Edmonds4 hypothesized CT analysts explicitly and/or implicitly considered when constructing target lists.31–33 These attributes were further classified into five categories: (1) organizational variables; (2) individual variables, specific cell membership/individual experience; (3) other individual variables; (4) individual variables, ''Worldliness''; and (5) social network analysis/graph theory measure variables. Figure 2 shows all 74 terrorist attributes and their category classification.

Figure 1. Human-targeting methodology using hybrid AHP–TOPSIS methods.

Using open source information, historical classified documents, and the Terrorist Identities Datamart Environment (TIDE) database (the US central repository of information on international terrorist identities), Twedell and Edmonds aggregated data values for as many attributes as they could for the terrorists on the Green and Blue Target Lists.

Phase one: apply AHP to criteria weights (attributes)

According to Joint Publication (JP) 3-60:34

Every target has distinct intrinsic or acquired characteristics, the most important of which affect how it is targeted. These characteristics form the basis for target detection, location, identification, and classification for future surveillance, analysis, strike, and assessment.

Step one: build a decision hierarchy of the problem. As stated in our methodology section, this phase uses an abbreviated version of AHP to calculate only the global decision weights of our decision criteria. In AHP the first task is to disaggregate a problem into a decision hierarchy consisting of a goal, a list of alternatives, and a set of decision criteria. In our problem, our stated goal is to construct a prioritized target list using the terrorists identified on the Green and Blue Target Lists (Tables 1 and 2) as our decision alternatives. Our analysis reduced these 74 criteria to 4 main criteria and 10 sub-criteria, as shown in Table 3. These variables are considered essential from TIDE and other databases.

Step two: revised data set. After selecting our decision criteria, we revised the data set to reflect only the data for those criteria we were assessing. The data set used for the remainder of our research is located in Table 4.

Step three: constructing the decision hierarchy. After identifying our major decision criteria and their subordinate sub-criteria, we constructed our decision hierarchy based on our goal of ranking the terrorist alternatives in the Green and Blue Target Lists (Figure 3).

Step four (steps two through four): pairwise comparison, decision weights, and consistency. In applying AHP's pairwise comparison process, we first constructed a pairwise matrix containing our major decision criteria: individual variables, organizational variables, worldliness variables, and SNA variables. We then systematically compared each criterion with the others. Using Saaty's nine-point scale, these comparisons were scored according to how we perceived their relative importance toward our goal of ranking terrorists. The results of this process are located in Table 5.

Table 1. Green Target List terrorist alternatives.

Terrorist ID: 12 91 53 39 87 93 64 65 100 7 2 75 1 98 71 21 79 58 92 47 56
Analyst rank: 1 3 4 5 6 12 13 14 15 17 18 19 21 22 23 24 25 26 29 31 32
Ordinal rank: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21

Table 2. Blue Target List terrorist alternatives.

Terrorist ID: 54 3 52 50 24 22 73 77 6 78 63 103 19 72 104 70 45 13 25 27 33 4 83 90 99 66 67 23 60 86 84 40 96 97 20 74 107 15 30 80 69 16 17 5 10
Analyst rank: 1 2 3 6 7 8 9 10 11 12 13 14 15 16 18 19 20 22 23 25 26 36 36 37 38 39 39 40 41 43 45 46 47 47 48 49 50 51 52 57 61 62 63 65 67
Ordinal rank: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45

Table 3. Major decision criteria and subordinate sub-criteria.

Individual variables: 1. Role in Organization; 2. References in Intel; 3. Time as a Terrorist; 4. Number of Aliases
Organizational variables: 5. Organizational Type
Worldliness variables: 6. Number of Critical Locations; 7. Number of Countries Traveled
SNA variables: 8. Lead Eigenvector; 9. Closeness; 10. Degree of Centrality

From Table 5, we see a preferred order emerge. Our major decision criteria preferences can be interpreted as follows: individual variables are more important than worldliness and SNA variables; organizational variables are more important than SNA variables.

Next, we applied Saaty's21–23 three-step eigenvector procedure to our pairwise matrix to determine our major criteria vector weights (or local decision weights). Before accepting our results, we calculated the consistency of our preferences using Saaty's consistency ratio (CR) formula. Our major decision criteria local weights and the overall corresponding consistency are shown in Table 6. Since the consistency ratio (CR) was calculated to be 0.024 (less than the maximum of 0.10), our preferences are deemed consistent and the resulting weights valid.
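The eigenvector weights and CR for the Table 5 matrix can be reproduced numerically. The sketch below is ours, using plain power iteration and Saaty's random index RI = 0.90 for n = 4; because the paper's figures come from an approximate three-step procedure, the exact eigenvector yields slightly different digits, but the CR stays well below 0.10.

```python
def ahp_weights(A, iters=200):
    """Principal-eigenvector weights and consistency ratio (CR)
    for a reciprocal pairwise comparison matrix A."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):                      # power iteration
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # Estimate lambda_max from A w ~= lambda_max w
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
    return w, (lam - n) / ((n - 1) * RI)

# Table 5 pairwise comparison matrix of the four major criteria.
A = [[1.0, 1.0, 2.0, 2.0],
     [1.0, 1.0, 1.0, 2.0],
     [0.5, 1.0, 1.0, 1.0],
     [0.5, 0.5, 1.0, 1.0]]
w, cr = ahp_weights(A)
```

The resulting weight ordering matches Table 6 (individual > organizational > worldliness > SNA).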

Assessment of decision sub-criteria

The processes used to calculate the local decision weights and matrix consistency for our decision hierarchy's sub-criteria were similar to those used for the major decision criteria. The only exception was that, during the pairwise comparison step, our pairwise preferences for each sub-criterion were scored based on our assessment of their importance in influencing their parent major decision criterion and the goal of ranking terrorists. Table 7 summarizes the results.

Since our model uses the TOPSIS method to rank order terrorists, rather than AHP's method entirely, the calculation of global decision weights for our sub-criteria completes our use of the AHP method.
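Each global decision weight is simply the product of a major criterion's local weight and its sub-criterion's local weight, as Table 7 reports. A quick check (labels and local weights as in Table 7):

```python
# (major-criterion local weight, sub-criterion local weight) per Table 7
local = {
    "Role in Organization": (0.3333, 0.3542),
    "Ref. in Intel Reports": (0.3333, 0.2917),
    "Time as Terrorist":    (0.3333, 0.2083),
    "Aliases":              (0.3333, 0.1458),
    "Organizational Type":  (0.2917, 1.0000),
    "Critical Locations":   (0.2083, 0.7500),
    "# Countries Trav.":    (0.2083, 0.2500),
    "Lead Eigenvector":     (0.1667, 0.5453),
    "Closeness":            (0.1667, 0.2866),
    "Degree Centrality":    (0.1667, 0.1681),
}
# Global weight = criterion weight x sub-criterion weight
global_w = {k: round(c * s, 4) for k, (c, s) in local.items()}
total = sum(c * s for c, s in local.values())   # should be ~1
```

Because the sub-criteria local weights sum to 1 within each major criterion, the ten global weights again sum to 1.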

Phase two: applying TOPSIS

After calculating the global decision weights of our decision criteria using the AHP method, we used TOPSIS to rank our terrorist targets.

Step 1: create a decision matrix consisting of criteria and alternatives. Our TOPSIS decision matrix was constructed using our refined data for the Blue Target List found in Table 4. The column to the left of the matrix consists of the Blue Target List's generic terrorist identity numbers, while the rows contain the data for our decision criteria. The decision matrix constructed for the Blue Target List is shown in Table 8.

We applied the TOPSIS steps and obtained the output shown in Table 9.

Figure 2. Twedell and Edmonds'4 proposed terrorist attributes.


Table 4. Revised data set based on decision criteria selection (note that cells with asterisks contained missing data values in Twedell and Edmonds' data set).

Columns: Terrorist ID | Rank | Role within Org | Time as Terrorist (months) | References in Intel Reports | Number of Aliases | Org Type | Conflict Zone Travel | Number of Countries Traveled | Lead Eigenvector | Closeness | Degree of Centrality

Green Target List
12 1G 4 288 58 34 4 5 9 8.642 3.245 40
91 3G 4 150 77 12 4 4 5 * * *
53 4G 2 244 47 2 4 4 8 0.679 4.282 4
39 5G 3 24 1 0 * * * 0.613 4.226 3
87 6G 3 2 18 33 3 1 1 2.474 3.841 6
93 12G 3 6 5 6 3 1 1 * * *
64 13G 3 3 5 12 3 1 1 2.344 3.81 6
65 14G 3 28 2 2 3 * * * * *
100 15G 2 34 1 2 3 2 2 * * *
7 17G 4 3 36 21 1 2 2 6.240 3.755 13
2 18G 4 24 6 17 3 2 2 * * *
75 19G 2 0 6 5 3 1 2 * * 8
1 21G 3 21 5 2 3 1 1 0.840 4.240 3
98 22G 4 36 25 17 3 1 2 * * *
71 23G 3 3 1 3 * * * * * *
21 24G 3 23 28 11 3 1 1 0.6122 4.33 7
79 25G 4 29 4 9 3 1 1 * * *
58 26G 3 21 16 13 4 1 4 9.339 3.457 17
92 29G 3 0 8 23 3 1 1 * * *
47 31G 0 10 1 2 3 1 1 * * *
56 32G 3 18 39 52 3 1 3 3.153 3.79 9

Blue Target List
54 1B 4 288 58 34 4 5 9 8.642 3.345 40
3 2B 4 150 77 12 4 4 5 2.312 3.794 12
52 3B 4 244 47 2 4 4 8 0.678 4.282 4
50 6B 1 7 5 9 3 1 1 14.773 3.190 47
24 7B 4 324 64 17 4 5 9 * * *
22 8B 0 0 1 0 3 1 1 * * *
73 9B 1 17 6 5 3 2 2 * * *
77 10B 4 36 12 8 4 3 6 * * *
6 11B 4 9 39 13 3 3 3 3.654 3.718 21
78 12B 3 11 15 15 3 2 3 * * *
63 13B 4 23 24 22 3 2 2 * * *
103 14B 3 31 21 12 3 2 3 0.00025 6.889 3
19 15B 3 18 39 52 3 1 3 6.795 3.689 24
72 16B 2 6 4 10 3 1 1 * * *
104 18B * 30 2 0 * * * * * *
70 19B 3 3 1 3 * * * * * *
45 20B 4 1 1 1 3 1 1 * * *
13 22B 3 23 6 2 3 1 1 * * *
25 23B 4 21 46 3 2 1 1 5.474 3.696 24
27 25B 3 32 19 1 2 1 1 4.051 3.774 16
33 26B 3 24 29 2 4 1 1 11.242 3.476 49
4 36B 0 27 3 0 2 1 1 * * *
83 36B 3 19 2 12 1 1 1 * * *
90 37B 3 0 31 15 3 2 5 * * *
99 38B 4 156 25 17 3 1 2 * * *
66 39B 3 36 2 33 3 * * * * *
67 39B 3 28 5 2 3 * * * * *
23 40B 3 28 6 0 3 1 1 11.132 3.242 38
60 41B 4 24 6 17 3 2 2 * * *
86 43B 3 2 18 33 3 1 1 2.474 3.841 6
84 45B 1 0 2 2 3 1 1 * * *
40 46B 3 10 44 0 3 1 3 22.34 3.09 86
96 47B 3 12 0 5 3 1 2 * * *
97 47B 3 12 0 5 3 1 2 18.57 3.15 43
20 48B 3 23 28 11 3 1 1 0.6122 4.330 7
74 49B 3 4 8 21 3 1 1 * * *
107 50B 4 22 8 3 3 * * * * *
15 51B 4 39 4 0 2 0 1 0.594 4.317 4
30 52B 2 27 1 1 4 1 1 7.662 3.614 24
80 57B 1 6 3 1 3 * * * * *
69 61B 4 24 4 3 3 1 1 * * *
16 62B 3 31 1 3 1 * * 6.823 3.613 25
17 63B 4 14 15 6 3 1 1 0.569 4.456 4
5 65B 3 22 14 3 3 1 2 17.465 3.235 62
10 67B 2 33 7 1 3 1 2 * * *

Analyzing the results. When evaluating the performance of any model, it is first important to confirm that it functions as it is designed to and then determine how well it functions. The processes of model verification and model validation can provide this insight, according to Thacker et al.35 These processes are critical in evaluating any model, since ''quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions''.

Figure 3. AHP hierarchy model of the terrorist ranking decision problem.

Table 5. Pairwise comparison matrix of the major criteria with respect to the goal.

Major criteria | Individual variables | Organizational variables | Worldliness variables | SNA variables
Individual variables | 1 | 1 | 2 | 2
Organizational variables | 1 | 1 | 1 | 2
Worldliness variables | 1/2 | 1 | 1 | 1
SNA variables | 1/2 | 1/2 | 1 | 1

Table 6. Major criteria local decision weight and consistency.

Major criteria | Local weight
Individual variables | 0.3333
Organizational variables | 0.2917
Worldliness variables | 0.2083
SNA variables | 0.1667
Consistency ratio (CR) = 0.024

Table 7. Global decision (vector) weights.

Major criteria (local weight) | Sub-criteria | Sub-criteria local weight | Global decision weight (criteria weight × sub-criteria weight)
Individual Variables (0.3333) | Role in Organization | 0.3542 | 0.1181
Individual Variables (0.3333) | Ref. in Intel Reports | 0.2917 | 0.0972
Individual Variables (0.3333) | Time as Terrorist | 0.2083 | 0.0694
Individual Variables (0.3333) | Aliases | 0.1458 | 0.0486
Organizational Variables (0.2917) | Organizational Type | 1.0000 | 0.2917
Worldliness Variables (0.2083) | Critical Locations | 0.7500 | 0.1563
Worldliness Variables (0.2083) | # Countries Trav. | 0.2500 | 0.0521
SNA Variables (0.1667) | Lead Eigenvector | 0.5453 | 0.0909
SNA Variables (0.1667) | Closeness | 0.2866 | 0.0478
SNA Variables (0.1667) | Degree Centrality | 0.1681 | 0.0280

Note that the sum of all weights = 1.

Table 8. TOPSIS Blue Target List decision matrix.

Columns: Blue Target List terrorist alternative (generic target number) | decision criteria SC 1 through SC 10
54: 4 288 58 34 4 5 9 8.64203 3.34582 40
3: 4 150 77 12 4 4 5 2.31276 3.79475 12
52: 4 244 47 2 4 4 8 0.67809 4.28259 4
50: 1 7 5 9 3 1 1 14.7738 3.19066 47
24: 4 324 64 17 4 5 9 0 0 0
22: 0 0 1 0 3 1 1 0 0 0
73: 1 17 6 5 3 2 2 0 0 0
77: 4 36 12 8 4 3 6 0 0 0
6: 4 9 39 13 3 3 3 3.65351 3.71839 21
78: 3 11 15 15 3 2 3 0 0 0
63: 4 23 24 22 3 2 2 0 0 0
103: 3 31 21 12 3 2 3 0.00025931 6.88911 3
19: 3 18 39 52 3 1 3 6.79449 3.68969 24
72: 2 6 4 10 3 1 1 0 0 0
104: 0 30 2 0 0 0 0 0 0 0
70: 3 3 1 3 0 0 0 0 0 0
45: 4 23 1 1 3 1 1 0 0 0
13: 3 21 5 2 3 1 1 0 0 0
25: 4 32 46 3 2 1 1 5.47498 3.6965 24
27: 3 24 19 1 2 1 1 4.05132 3.77432 16
33: 3 27 29 2 4 1 1 11.2429 3.47665 49
4: 0 19 3 0 2 1 1 0 0 0
83: 3 0 2 12 1 1 1 0 0 0
90: 3 156 31 15 3 2 5 0 0 0
99: 4 36 25 17 3 1 2 0 0 0
66: 3 28 2 2 3 0 0 0 0 0
67: 3 28 2 2 3 0 0 0 0 0
23: 3 6 5 6 3 1 1 11.1327 3.24222 38
60: 4 24 6 17 3 2 2 0 0 0
86: 3 2 18 33 3 1 1 2.47438 3.84193 6
84: 1 0 2 2 3 1 1 0 0 0
40: 3 10 44 0 3 1 3 22.3498 3.09484 86
96: 3 12 0 5 3 1 2 0 0 0
97: 3 12 0 5 3 1 2 18.5791 3.15029 43
20: 3 23 28 11 3 1 1 0.612255 4.33025 7
74: 3 4 8 21 3 1 1 0 0 0
107: 4 22 9 3 3 0 0 0 0 0
15: 4 39 4 0 2 0 1 0.593595 4.31712 4
30: 2 27 1 1 4 1 1 7.66192 3.6143 24
80: 1 5 3 1 3 0 0 0 0 0
69: 4 24 4 3 3 1 1 0 0 0
16: 3 31 1 3 1 0 0 6.82329 3.61381 25
17: 4 14 15 6 3 1 1 0.569282 4.45671 4
5: 3 22 14 3 3 1 2 17.4653 3.23589 62
10: 2 33 7 1 3 1 2 0 0 0


Model comparison. We performed a model comparison by comparing head-to-head model predictions to real-world data. We first conducted a side-by-side comparison of the original Blue Target List with the rankings generated by our model, as shown in Table 10.

In a direct comparison of our model rankings to the subjective analyst's rankings on the Blue Target List, our model matched the ranks of the top two and the 9th terrorist. When comparing an aggregated top 5, top 10, or top 20 list without ordinal preference, our model successfully matched 4 of the top 5, 5 of the top 10, and 9 of the top 20 terrorists, respectively. We assess that our model performed exceptionally well and would expect significantly better accuracy when constructed with the analysts and decision makers responsible for ranking terrorist targets.
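The top-k matching described above is a set intersection without ordinal preference. Using the top-10 ID sequences read off Tables 9 and 10, a sketch (the helper name is ours):

```python
def topk_overlap(analyst, model, k):
    """Number of IDs common to the two top-k lists, ignoring order."""
    return len(set(analyst[:k]) & set(model[:k]))

# Top-10 Blue Target List IDs in analyst order (Table 10) and
# in model order (Table 9).
analyst = [54, 3, 52, 50, 24, 22, 73, 77, 6, 78]
model = [54, 3, 24, 40, 52, 33, 19, 5, 6, 25]
```

Here `topk_overlap(analyst, model, 5)` gives 4 and `topk_overlap(analyst, model, 10)` gives 5, matching the 4-of-5 and 5-of-10 figures above.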

Phase three: conduct sensitivity analysis

The goal of our methodology is to create both an accurate and useful model. While the process of verification and

Table 9. Distances and model derived terrorist rank.

Terrorist ID    Ci          Model rank
54              0.707824     1
3               0.584476     2
24              0.582038     3
40              0.572523     4
52              0.521921     5
33              0.483941     6
19              0.461912     7
5               0.446791     8
6               0.41697      9
25              0.41207     10
90              0.401608    11
97              0.392183    12
50              0.380294    13
30              0.367646    14
77              0.363662    15
23              0.359517    16
20              0.35217     17
99              0.351178    18
63              0.349845    19
86              0.345438    20
103             0.330941    21
17              0.317717    22
78              0.309056    23
60              0.305194    24
107             0.300691    25
74              0.299691    26
69              0.292995    27
45              0.288327    28
13              0.283538    29
66              0.2808      30
67              0.2808      31
10              0.280755    32
27              0.280233    33
96              0.27685     34
72              0.275674    35
73              0.271697    36
80              0.264011    37
84              0.262442    38
22              0.257861    39
15              0.241284    40
16              0.213053    41
4               0.189947    42
83              0.158737    43
70              0.112238    44
104             0.039632    45
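The Ci values in Table 9 have the standard TOPSIS relative-closeness form, C_i = S_i^- / (S_i^+ + S_i^-). A minimal sketch of that computation, assuming the usual vector normalization and treating every criterion as a benefit criterion; the decision matrix and weights below are toy values, not the paper's data:

```python
import numpy as np

# Sketch of the TOPSIS closeness coefficient behind Table 9:
# C_i = S_i- / (S_i+ + S_i-), where S_i+ and S_i- are each alternative's
# Euclidean distances to the positive and negative ideal solutions.

def topsis_closeness(matrix, weights):
    norm = matrix / np.linalg.norm(matrix, axis=0)       # vector-normalize columns
    v = norm * weights                                   # weighted normalized matrix
    ideal_pos, ideal_neg = v.max(axis=0), v.min(axis=0)  # all criteria as benefits
    s_pos = np.linalg.norm(v - ideal_pos, axis=1)
    s_neg = np.linalg.norm(v - ideal_neg, axis=1)
    return s_neg / (s_pos + s_neg)

m = np.array([[288., 58., 8.64], [150., 77., 2.31], [7., 5., 14.77]])  # toy data
w = np.array([0.5, 0.3, 0.2])                                          # toy weights
c = topsis_closeness(m, w)
rank = 1 + np.argsort(np.argsort(-c))   # rank 1 = highest closeness
print(c, rank)
```

Sorting alternatives by descending C_i yields the model rank column of Table 9.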

Table 10. Blue Target List comparison of analyst vs. model results.

BLUE TARGET LIST COMPARISON

Terrorist ID    Analyst rank    Model rank
54               1               1
3                2               2
52               3               5
50               4              13
Top 5
24               5               3
22               6              39
73               7              36
77               8              15
6                9               9
Top 10
78              10              23
63              11              19
103             12              21
19              13               7
72              14              35
104             15              45
70              16              44
45              17              28
13              18              29
25              19              10
Top 20
27              20              33
33              21               6
4               22              42
83              23              43
90              24              11
99              25              18
66              26              30
67              27              31
23              28              16
60              29              24
86              30              20
84              31              38
40              32               4
96              33              34
97              34              12
20              35              17
74              36              26
107             37              25
15              38              40
30              39              14
80              40              37
69              41              27
16              42              41
17              43              22
5               44               8
10              45              32

10 Journal of Defense Modeling and Simulation: Applications, Methodology, Technology


validation in the previous phase demonstrated that our model performs as intended, in this phase we conducted a rigorous sensitivity analysis to gauge its robustness. Unlike the validation process, which compared our model's results with the analyst's results, in sensitivity analysis we are concerned only with how changes in inputs affect the outputs of our model.

Impact of data outliers. The first form of sensitivity analysis we conducted was to assess the impact of data outliers on our model. In this test, we eliminated from our model's consideration the terrorist alternatives that were missing data for at least one of our four major decision criteria groups (individual variables, organization variables, worldliness variables, and SNA variables). Based on these constraints, a review of our revised data set eliminated 27 terrorist alternatives. After removing these outlier terrorist alternatives, we ordinally ranked the remaining 17 terrorists on the Blue Target List and reapplied TOPSIS. Our new results are juxtaposed with our model's original results in Table 11.
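The outlier screen described above can be sketched as follows. Missing data are coded here as all-zero group blocks, and the group-to-column mapping and the small data block are hypothetical placeholders, not the paper's actual variable layout:

```python
import numpy as np

# Sketch of the outlier screen: drop any alternative that has no data
# (coded here as all zeros) in at least one criteria group.

groups = {"individual": slice(0, 2), "organization": slice(2, 4), "SNA": slice(4, 6)}

data = np.array([
    [4., 288., 58., 34., 8.64,  3.35],   # complete record
    [4., 150., 77., 12., 0.,    0.  ],   # missing the SNA group
    [1., 7.,   5.,  9.,  14.77, 3.19],   # complete record
])

def complete_rows(data, groups):
    keep = np.ones(len(data), dtype=bool)
    for sl in groups.values():
        keep &= (data[:, sl] != 0).any(axis=1)  # group must have at least one value
    return keep

mask = complete_rows(data, groups)
print(mask)   # the second record fails the SNA group and is screened out
```

The surviving rows would then be re-ranked with TOPSIS exactly as before.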

By analyzing Table 11, we observed that there were no significant terrorist ranking changes as a result of removing data outliers, thus demonstrating that our model is generally robust with respect to data fluctuations. With respect to ranking terrorists, this knowledge is valuable to a decision maker. Since terrorist data sets, by their nature, are generally incomplete, a decision maker gains confidence in the model by knowing that its performance is not significantly degraded by these data gaps. While not presented here, additional sensitivity testing should be performed on criterion data inputs to better understand how they affect a terrorist's ranking. These tests are simply performed by individually manipulating data, or by using a simulation program to generate values, and observing the effects on the model's results. Such sensitivity testing is especially important in evaluating those data values that are derived subjectively or measured qualitatively. Results of this additional testing can be used to help refine data collection methods and will ultimately improve the model's robustness and the decision maker's corresponding confidence.

Identification of the most critical decision criterion. The next focus of our sensitivity analysis consisted of identifying the most critical decision criterion (the criterion for which the smallest change in weight has the largest impact on our model). To accomplish this, we systematically applied the step weighting process explained and demonstrated in our methodology chapter to each of our 10 decision criteria, using step weights of ±0.05, 0.1, 0.15, 0.20, and 0.25. It is also important to note that when adjusting a decision criterion's weight up or down, the decision weights of the other criteria must be adjusted proportionally. To calculate this proportional adjustment for the other decision weights, the following equation is used:

w'_j = \frac{1 - (w_p + \Delta_p)}{1 - w_p} \, w_j, \qquad j \neq p, \; j = 1, 2, \ldots, k \qquad (1)

Table 11. Comparison of target list rankings after removing data outliers.

BLUE TARGET LIST (outliers removed)

Terrorist ID    Blue Target List rank    Recalculated rank after outliers removed    Model rank (entire Blue Target List)    Model rank (data with outliers removed)
54     1     1     1     1
3      2     2     2     2
52     3     3     5     3
40    32    13     4     4
33    21    10     6     5
19    13     7     7     6
6      9     5     9     7
5     44    19     8     8
25    19     8    10     9
Top 10
30    39    17    14    10
97    34    14    12    11
20    35    15    17    12
50     4     4    13    13
86    30    12    20    14
23    28    11    16    15
103   12     6    21    16
17    43    18    22    17



where w_p + \Delta_p is the new weight of the criterion being assessed and w'_j are the corresponding new decision weights for the other criteria.30
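Equation (1) can be sketched directly; the weight vector below is illustrative, not the model's actual AHP weights:

```python
# Sketch of the proportional weight adjustment in Equation (1): when
# criterion p's weight changes by delta, every other weight is rescaled
# by (1 - (w_p + delta)) / (1 - w_p) so the full set still sums to one.

def adjust_weights(weights, p, delta):
    wp_new = weights[p] + delta
    scale = (1 - wp_new) / (1 - weights[p])
    return [wp_new if j == p else scale * w for j, w in enumerate(weights)]

w = [0.40, 0.30, 0.20, 0.10]            # example decision weights
w_new = adjust_weights(w, p=3, delta=0.05)
print(w_new, sum(w_new))                # sums back to one (up to floating point)
```

The same rescaling is applied at every step weight in the sensitivity sweep so that each trial remains a valid weight vector.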

The results of this analysis identified our Degree of Centrality criterion as our model's most critical criterion (coincidentally, the Degree of Centrality criterion is also considered our "least important" based on its global decision weight of 0.028). Figure 4 graphically depicts the effects of increasing the decision weight of the Degree of Centrality criterion on our model's ranking results.

In Figure 4, each line represents a terrorist and their corresponding rank as the decision weight value of the Degree of Centrality criterion is increased (with the highest line on the vertical (y) axis being the top-ranked terrorist at any specific weight value). The horizontal (x) axis lists each incremental change by which we increased the weight of the Degree of Centrality criterion (with zero (0) representing our model's initial ranking results). By analyzing Figure 4, we extrapolated that our model begins to experience significant rank changes as soon as the

Table 12. Effects on terrorist rank when increasing the decision weight of the Degree of Centrality criterion.

Terrorist ranking at each increase in decision weight:
  0    0.05   0.1    0.15   0.2    0.25
  1    4      4      4      4      4
  2    1      8      8      8      8
  3    8      6      6      6      6
  4    6      1      13     13     13
  5    13     13     1      12     12
  6    12     12     12     1      1
  7    2      16     16     16     16
  8    16     7      7      7      7
  9    3      10     10     10     41
 10    7      2      14     14     10
 11    5      14     41     41     14
 12    10     9      9      9      9
 13    9      41     2      2      33
 14    14     3      3      33     2
 15    11     5      33     3      3
 16    41     33     5      5      5
 17    15     17     17     17     17
 18    17     11     20     20     20
 19    20     20     15     15     22
 20    19     15     11     11     15
 21    18     19     22     22     11
 22    33     18     19     21     21
 23    21     21     18     19     19
 24    22     22     21     18     18
 25    24     24     24     24     40
 26    23     23     23     23     24
 27    25     26     26     26     23
 28    26     25     25     25     26
 29    27     27     27     27     25
 30    28     28     28     28     27
 31    29     29     29     40     28
 32    30     30     30     29     29
 33    31     31     31     30     30
 34    34     34     34     31     31
 35    32     32     32     34     34
 36    35     35     35     32     32
 37    36     36     36     35     35
 38    37     37     40     36     36
 39    38     38     37     37     37
 40    39     39     38     38     38
 41    40     40     39     39     39
 42    42     42     42     42     42
 43    43     43     43     43     43
 44    44     44     44     44     44
 45    45     45     45     45     45



Table 13. Model robustness to changes in decision criterion weights (criteria 1–3).

                                C1 (-)       C1 (+)             C2 (-)       C3 (-)                   C3 (+)
                                0.05  0.1    0.05  0.1   0.15  0.05  0.1    0.05  0.1   0.15  0.2    0.05  0.1   0.15  0.2
# Match Top 1                   1     1      1     1     1     1     1      1     1     1     1      1     1     1     1
% MATCH                         100%  100%   100%  100%  100%  100%  100%   100%  100%  100%  100%   100%  100%  100%  100%
# Matching Model Top 5 (11%)    5     5      5     5     5     4     4      5     4     4     4      5     5     5     5
% MATCH                         100%  100%   100%  100%  100%  80%   80%    100%  80%   80%   80%    100%  100%  100%  100%
# Matching Model Top 10 (22%)   10    10     10    10    10    10    10     9     9     9     9      10    9     9     9
% MATCH                         100%  100%   100%  100%  100%  100%  100%   90%   90%   90%   90%    100%  90%   90%   90%
# Matching Model Top 15 (33%)   15    14     14    13    13    14    14     15    15    14    14     14    12    11    11
% MATCH                         100%  93%    93%   87%   87%   93%   93%    100%  100%  93%   93%    93%   80%   73%   73%
# Matching Model Top 23 (51%)   23    23     22    22    20    23    23     22    22    22    22     23    23    23    22
% MATCH                         100%  100%   96%   96%   87%   100%  100%   96%   96%   96%   96%    100%  100%  100%  96%

Figure 4. Graph depicting the effects on the Blue Target List when increasing the decision weight of the Degree of Centrality criterion.

Figure 5. Chart depicting the effects on the Blue Target List when increasing the decision weight of the Degree of Centrality criterion.



decision weight of our Degree of Centrality criterion is increased. When the decision weight was increased by as little as 0.0025, our model's initial top-ranked terrorist moved into second place. In Figure 4, rank changes are depicted by the intersections of the colored lines (the line with the positive slope at the intersection point increases in rank, while the line with a negative slope decreases). By plotting these results as a bar chart, we gained a better perspective on the systemic effects of increasing the Degree of Centrality decision weight (Figure 5).

Figure 5 shows that, although our model experienced numerous terrorist rank changes when increasing the weight of the Degree of Centrality criterion, the majority of these changes are limited to the top 16 terrorists. When cross-referencing these results with our refined data set, it became clear that these rank changes directly correlated with whether or not a terrorist had data values for the Degree of Centrality criterion. As we increased Degree of Centrality's decision weight, our model immediately began to penalize the 20 terrorists who were missing this data. Table 12 provides a better depiction of these rank changes by color coding our model's initial top 16 and showing their change in rank at each decision weight increase.
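The cross-referencing step described above (tying rank changes back to data availability) can be sketched with toy records; the IDs and flags are placeholders, not the paper's data:

```python
# Sketch of the cross-reference: flag which alternatives changed rank after
# a weight increase, then check that flag against whether each alternative
# actually had data for the sensitive criterion.

base_rank    = {"A": 1, "B": 2, "C": 3, "D": 4}        # before the weight step
shifted_rank = {"A": 2, "B": 1, "C": 3, "D": 4}        # after the weight step
has_data     = {"A": True, "B": True, "C": False, "D": False}

changed = {t for t in base_rank if base_rank[t] != shifted_rank[t]}
print("rank changes:", sorted(changed))
print("all changed alternatives had data:", all(has_data[t] for t in changed))
```

In the paper's case the correlation ran the other way as well: alternatives without Degree of Centrality data sank as its weight grew.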

The results of this analysis are significant since they highlight that the Degree of Centrality criterion is extremely sensitive to decision weight changes. This result should make the decision maker re-evaluate using this criterion in the model. If the decision maker deems it necessary to keep this criterion, then the pairwise comparisons should be scrutinized to ensure that they accurately reflect the decision maker's preferences. Ultimately, if it is decided to keep both this criterion and the preferences, this result highlights the importance of ensuring that the method and means of collecting and recording the data for this criterion are as accurate as possible. In this regard, identifying sensitive criteria additionally assists a decision maker in determining which intelligence requirements are most important, known in military terms as prioritized intelligence requirements (PIRs). While this example highlighted only the analysis for the "most" critical criterion, this process should be used to evaluate all of the criteria. For our model, in addition to the Degree of Centrality criterion, both the Lead Eigenvector and Countries Travelled criteria also demonstrated high sensitivity and should likewise be scrutinized.
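The full step-weighting sweep can be sketched as below. A simple weighted sum over random data stands in for the paper's AHP-TOPSIS scoring, so only the bookkeeping (step each weight, rescale the others per Equation (1), count rank changes) mirrors the procedure; the data, weights, and disruption measure are illustrative:

```python
import numpy as np

# Sketch of the most-critical-criterion search: step each criterion's weight
# by +/- 0.05 ... 0.25, renormalize the remaining weights, re-score, and
# count how many alternatives change rank relative to the baseline.

rng = np.random.default_rng(1)
data = rng.random((12, 4))                 # 12 alternatives, 4 criteria (toy)
w0 = np.array([0.4, 0.3, 0.2, 0.1])        # toy baseline weights

def ranks(weights):
    scores = data @ weights
    return np.argsort(np.argsort(-scores))   # 0 = top ranked

base = ranks(w0)
disruption = {}
for p in range(len(w0)):
    total = 0
    for step in (0.05, 0.10, 0.15, 0.20, 0.25):
        for delta in (step, -step):
            wp = w0[p] + delta
            if not 0 < wp < 1:
                continue                      # skip infeasible weights
            w = w0 * (1 - wp) / (1 - w0[p])   # proportional rescale of the others
            w[p] = wp
            total += int((ranks(w) != base).sum())
    disruption[p] = total

critical = max(disruption, key=disruption.get)
print(disruption, "most critical criterion index:", critical)
```

The criterion whose steps disturb the ranking most is flagged as most critical, regardless of how large its baseline weight is.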

By comparison, and to demonstrate that the "most important" (or highest weighted) decision criterion is not necessarily (and, according to some literature, rarely) the most critical criterion, the results of decreasing the decision weight of the Organizational Type criterion (our model's "most important" criterion) are presented in Figure 6. Note that, besides a few changes in terrorist ranking amongst the terrorists at the bottom of the target list, the model remains relatively stable.

Table 14. Model robustness to changes in decision criterion weights (criteria 4–7).

                                C4 (+)                          C5 (-)                          C6 (-)             C6 (+)             C7 (+)
                                0.05  0.1   0.15  0.2   0.25   0.05  0.1   0.15  0.2   0.25   0.05  0.1   0.15   0.05  0.1   0.15   0.05  0.1   0.15  0.2   0.25
# Match Top 1                   1     1     1     0     0      1     1     1     1     1      1     1     1      1     1     1      1     1     1     1     1
% MATCH                         100%  100%  100%  0%    0%     100%  100%  100%  100%  100%   100%  100%  100%   100%  100%  100%   100%  100%  100%  100%  100%
# Matching Model Top 5 (11%)    5     5     5     5     5      5     5     5     5     5      5     4     3      5     5     5      4     4     3     3     2
% MATCH                         100%  100%  100%  100%  100%   100%  100%  100%  100%  100%   100%  80%   60%    100%  100%  100%   80%   80%   60%   60%   40%
# Matching Model Top 10 (22%)   10    10    9     9     9      10    10    10    10    10     8     8     8      10    9     9      9     8     7     5     4
% MATCH                         100%  100%  90%   90%   90%    100%  100%  100%  100%  100%   80%   80%   80%    100%  90%   90%    90%   80%   70%   50%   40%
# Matching Model Top 15 (33%)   14    13    13    12    11     13    13    13    13    13     14    14    14     12    11    11     13    12    10    9     8
% MATCH                         93%   87%   87%   80%   73%    87%   87%   87%   87%   87%    93%   93%   93%    80%   73%   73%    87%   80%   67%   60%   53%
# Matching Model Top 23 (51%)   23    21    21    21    21     23    22    22    22    21     22    22    22     23    23    22     21    21    21    21    20
% MATCH                         100%  91%   91%   91%   91%    100%  96%   96%   96%   91%    96%   96%   96%    100%  100%  96%    91%   91%   91%   91%   87%



Evaluating the robustness of the model. Measuring the robustness of a model, that is, its ability to perform effectively while its variables or assumptions are altered, is one of the primary purposes of sensitivity analysis. Although primarily based on the objective observations made during sensitivity analysis testing, the measure of a model's robustness is often a subjective or qualitative assessment. To fully assess the robustness of our model, we need to fully define its purpose. Although we stated that the goal of our model was to rank terrorists with respect to their priority for targeting, we did not clearly define what exactly we wanted to observe. For instance, our model's purpose could be strictly to identify only the top terrorist for targeting, or it could be used more broadly to develop top 5, top 10, or top 20 target lists. Tables 13–15 were created by aggregating data from all of our most critical criterion testing and show our model's robustness, with respect to decision weight changes, at identifying the top terrorist, the top 5 terrorists (11% of the Blue Target List), the top 10 terrorists (22%), the top 15 terrorists (33%), and the top 23 terrorists (51%).

To define a level of robustness for our model, we aggregated the results (how many target matches) for each of our tables' matching objective rows. We then averaged the results across each row. This row average represents our model's percentage of matching a specific ranking objective (top 1, 5, 10, 15, 23) irrespective of how any of the

Figure 6. Graph depicting the effects on Blue Target List when decreasing the decision weight of the Organizational Type criterion.

Table 15. Model robustness to changes in decision criterion weights (criteria 8–10).

                                C8 (-)       C8 (+)                    C9 (+)                          C10 (+)
                                0.05  0.1    0.05  0.1   0.15  0.2    0.05  0.1   0.15  0.2   0.25   0.05  0.1   0.15  0.2   0.25
# Match Top 1                   1     1      1     1     0     0      1     1     1     1     1      0     0     0     0     0
% MATCH                         100%  100%   100%  100%  0%    0%     100%  100%  100%  100%  100%   0%    0%    0%    0%    0%
# Matching Model Top 5 (11%)    5     5      4     2     2     2      5     5     5     5     5      2     2     2     1     1
% MATCH                         100%  100%   80%   40%   40%   40%    100%  100%  100%  100%  100%   40%   40%   40%   20%   20%
# Matching Model Top 10 (22%)   9     9      8     8     7     6      10    10    10    10    9      7     7     6     6     6
% MATCH                         90%   90%    80%   80%   70%   60%    100%  100%  100%  100%  90%    70%   70%   60%   60%   60%
# Matching Model Top 15 (33%)   13    12     14    14    14    13     15    14    14    13    13     14    13    12    12    12
% MATCH                         87%   80%    93%   93%   93%   87%    100%  93%   93%   87%   87%    93%   87%   80%   80%   80%
# Matching Model Top 23 (51%)   23    23     23    22    21    21     23    23    22    22    22     21    21    21    21    21
% MATCH                         100%  100%   100%  96%   91%   91%    100%  100%  96%   96%   96%    91%   91%   91%   91%   91%

Table 16. Measure of robustness with respect to changing decision weights.

Match                Percentage of the time
Top terrorist         83%
Top 5 terrorists      83%
Top 10 terrorists     87%
Top 15 terrorists     86%
Top 23 terrorists     95%



criterion weights are changed. These results are presented in Table 16.
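The row-averaging behind Table 16 can be sketched as follows; the per-scenario match percentages below are illustrative placeholders rather than the values aggregated from Tables 13–15:

```python
# Sketch of the robustness aggregation: for each ranking objective, average
# the match percentage across every weight-change scenario tested.

match_pct = {
    "Top 1":  [100, 100, 100, 0, 100, 100],   # one value per weight scenario
    "Top 5":  [100, 80, 100, 60, 80, 100],
    "Top 10": [100, 90, 90, 80, 100, 90],
}

for objective, row in match_pct.items():
    avg = sum(row) / len(row)
    print(f"{objective}: matched {avg:.0f}% of the time")
```

Each printed average plays the role of one row of Table 16: the fraction of tested weight perturbations under which the model still hit that objective.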

From these results (remembering that top 5 = top 11%, top 10 = 22%, top 15 = 33%, and top 23 = 51%), our model is robust when determining the top 10, 15, and 23 terrorists irrespective of ordinal rank. Again, as stated above, these results take into account any potential decision weight change for all of our criteria. In reality, the potential for this wide a range of decision weight changes would be rare, even when trying to aggregate group decision weights. In this respect, a decision maker can examine similar charts to better define their model's robustness based on their own (or their group's) range of decision weight change for each criterion. If weight changes remain within these thresholds, the model's assessed robustness can be increased, and vice versa.

4. Discussion

This process provides leadership at all levels with a methodology to produce key target lists of terrorists and terrorist organizations based upon quantitative analysis. The research reflects the work of the authors and not that of the DOD, the Department of the Navy, or the Naval Postgraduate School.

Declaration of Conflicting Interest

The authors declare that there is no conflict of interest.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

References

1. National Strategy for Combating Terrorism. Washington, DC, February 2003.
2. US Army. FM 34-8-2: The Intelligence Officer's Handbook, accessed 2 July 2014, http://www.globalsecurity.org/intell/library/policy/army/fm/34-8-2/index.html
3. US Army. FM 3-60: The Targeting Process, November 2010, http://www.globalsecurity.org/military/library/policy/army/fm/3-60/fm3-60.pdf
4. Twedell P and Edmonds W. Countermetrics System (CMS): Using Data and Comparative Evidence to Improve Human-Targeting Decisions. Master's thesis, Naval Postgraduate School, 2008.
5. Yoon KP and Hwang C-L. Multiple Attribute Decision Making: An Introduction. Thousand Oaks, CA: SAGE, 1995.
6. Malczewski J. A GIS approach to multiple criteria group decision making. Int J Geographic Inform Syst 1996; 10(8): 955–971.
7. Napier J. Industrial base program item selection indicators analytical enhancements. Department of Defense Pamphlet, DLA-93-P20047, 1992.
8. Fox WP. Mathematical modeling of the analytical hierarchy process using discrete dynamical systems in decision analysis. Comput Ed J 2012; (July–September): 27–34.
9. Fox WP and Everton SF. Using data envelopment analysis and the analytical hierarchy process to find node influences in a social network. JDMS 2014. DOI: 10.1177/1548512913518273.
10. Fox W and Everton S. Mathematical modeling in social network analysis: using TOPSIS to find node influences in a social network. J Math Syst Sci 2013; 3(10): 531–541.
11. Wang Y and He Z. Improved TOPSIS methods for multi-response optimization. In: IEEE Symposium on Advanced Management of Information for Globalized Enterprises, 2008 (AMIGE 2008).
12. Ozturk D and Batuk F. Technique for order preference by similarity to ideal solution (TOPSIS) for spatial decision problem. In: Proceedings ISPRS 2011, retrieved 1 May 2013 from http://www.isprs.org/proceedings/2011/Gi4DM/PDF/PP12.pdf
13. Olson D and Wu D. Decision making with uncertainty and data mining. In: Li X, Wang S and Dong Z (eds) Advanced Data Mining and Applications (Lecture Notes in Artificial Intelligence, vol. 3584). Berlin, FRG: Springer, 2005, pp. 1–9.
14. Agrawal V, Kohli V and Gupta S. Computer aided robot selection: the multiple attribute decision making approach. Int J Product Res 1991; 29(8): 1629–1644.
15. Parkan C and Wu M. Decision-making and performance measurement models with applications to robot selection. Comput Ind Eng 1999; 36(3): 503–523.
16. Parkan C and Wu M. On the equivalence of operational performance measurement and multiple attribute decision making. Int J Product Res 1997; 35(11): 2963–2988.
17. Kahraman C, Engin O, Kubak O and Kaya L. Information systems outsourcing decisions using group decision-making approach. Eng Appl Artif Intell 2009; 22(6): 832–841.
18. Shyura H and Shih H. A hybrid MCDM model for strategic vendor selection. Math Comput Model 2006; 44(7–8): 749–761.
19. Feng C and Wang R. Considering the financial ratio on the performance evaluation of highway bus industry. Transport Rev 2001; 21(4): 449–467.
20. Fox W and Everton S. Using mathematical models in decision making methodologies to find key nodes in the Noordin dark network. Am J Operat Res 2014. DOI: 10.4236/ajor.2014.
21. Saaty RW. The analytic hierarchy process—what it is and how it is used. Math Modell 1987; 9(3–5): 161–176. DOI: 10.1016/0270-0255(87)90473-8.
22. Saaty TL. Decision making with the Analytic Hierarchy Process. Int J Services Sci 2008; 1(1): 83–98.
23. Saaty T. The Analytic Hierarchy Process. New York, NY: McGraw-Hill Book Company, 1980.
24. Hwang CL and Yoon K. Multiple Attribute Decision Making: Methods and Applications. New York, NY: Springer-Verlag, 1981.
25. Hwang C, Lai Y and Liu T. A new approach for multiple objective decision making. Comput Operat Res 1993; 20: 889–899.
26. Leonelli R. Enhancing a decision support tool with sensitivity analysis. Thesis, University of Manchester, 2012.



27. Baker T and Zabinsky Z. A multicriteria decision making model for reverse logistics using Analytical Hierarchy Process. Omega 2011; 39: 558–573.
28. Hurly WJ. The Analytical Hierarchy Process: a note on an approach to sensitivity which preserves rank order. Comput Operat Res 2001; 28: 185–188.
29. Butler J, Jia J and Dyer J. Simulation techniques for the sensitivity analysis of multi-criteria decision models. Eur J Operat Res 1997; 103: 531–546.
30. Alinezhad A and Amini A. Sensitivity analysis of TOPSIS technique: the results of change in the weight of one attribute on the final ranking of alternatives. J Opt Indust Eng 2011; 7: 23–28.
31. Triantaphyllou E and Lin C. Development and evaluation of five fuzzy multi-attribute decision making methods. Int J Approx Reason 1996; 14(4): 281–310.
32. Triantaphyllou E. Decision Making in Engineering and Business: A Cost-Effective Approach. Seattle, WA: LSU, 1994. Available at: http://www.csc.lsu.edu/trianta/PROCEEDINGS_Papers/DecisionMakingGeneralByTRIANTA1.pdf.
33. Triantaphyllou E and Mann SH. Using the analytic hierarchy process for decision making in engineering applications: some challenges. Int J Indust Eng 1995; 2(1): 35–44.
34. The Department of Defense. Joint Publication 3-60: Joint Targeting, 31 January 2013. Available at: http://www.bits.de/NRANEU/others/jp-doctrine/jp3-60%2813%29.pdf.
35. Thacker BH, Doebling SW, Hemez FM, Anderson MC, Pepin JE and Rodriguez EA. Concepts of Model Verification and Validation. Los Alamos, NM: Los Alamos National Laboratory, 2004.

Author biographies

Dr William P Fox is a professor in the Department of Defense Analysis at the Naval Postgraduate School. He received his BS degree from the United States Military Academy at West Point, New York, his MS in OR from the Naval Postgraduate School, and his PhD from Clemson University. Previously, he taught at the United States Military Academy and at Francis Marion University, where he was the chair of the mathematics department for eight years. He has many publications, including books, chapters, journal articles, conference presentations, and workshops. He directs several mathematical modeling contests through the Consortium for Mathematics and Its Applications (COMAP). He is currently the president of the Military Applications Society of INFORMS. His interests include applied mathematics, optimization (linear and non-linear), mathematical modeling, statistical models, bio-statistical analysis for medical research, and computer simulations.

Major Brendan Ormond is a US Army Special Forces officer who earned an MS degree in Special Operations/Irregular Warfare from the Department of Defense Analysis at the Naval Postgraduate School.

Major Alex Williams is a military intelligence officer assigned as the S-3 for the New England Recruiting Battalion, 1st Recruiting Brigade, US Army Recruiting Command. Prior to this assignment, he served in a variety of operational and command positions within Intelligence and Security Command. He holds a Master of Science in Information Operations from the Naval Postgraduate School and a Bachelor of Arts in Political Science from the University of California at San Diego.
