
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL

Qual. Reliab. Engng. Int. 2002; 18: 251–260 (DOI: 10.1002/qre.479)

CHARACTERIZING STATEFUL RESOURCE ATTACKS USING MODELING AND SIMULATION

DONNA GREGG∗, WILLIAM BLACKERT, DAVID HEINBUCH, DONNA FURNANAGE AND YANNI KOUSKOULAS

The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099, USA

SUMMARY

Denial of Service (DoS) attacks come in a variety of types and can target groups of users, individual users or entire computer systems. With the ever-increasing reliance on networked information systems for command and control of military systems—not to mention communications infrastructures—relatively simple attacks that degrade or deny service can have devastating effects. We have modeled and validated a variety of DoS attacks and have executed these models against a validated target network model, whose architecture and stochastic behavior are varied for analysis purposes. We have conducted a systems analysis using these models and have characterized attack effects. This paper describes the analysis of two attacks, each of which consumes resources at a server. Output from our model includes the probability of denied service and service time under attack and no attack conditions. Our objective is to identify attack classes and characterize attack behavior within a class, so that the behavior of new attacks falling within a class can be anticipated and defended against. Copyright 2002 John Wiley & Sons, Ltd.

KEY WORDS: information assurance; denial of service attacks; modeling and simulation

BACKGROUND

In the context of a network of computers, Denial of Service (DoS) occurs when a particular system resource (e.g. application, operating system or routing services, communications or processing bandwidth, memory, queue position) is not available to legitimate users. DoS may occur because of high legitimate demand or a non-malicious fault. It may also occur because of hostile actions taken on the network itself; that is, a DoS attack.

DoS attacks are carried out in several basic ways. First, the attacker can send a single communication that the victim system cannot deal with, thus causing it to hang up or crash (e.g. ping of death). Second, attackers can manipulate information critical to the operation of the network. For example, in the Address Resolution Protocol (ARP) cache poisoning attack, invalid Internet Protocol (IP)-to-Ethernet-address translations are placed in the ARP cache.

∗Correspondence to: D. Gregg, Information Operations Group, The Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723-6099, USA. Email: [email protected]

Contract/grant sponsor: Defense Advanced Research Projects Agency

When the poisoned ARP cache is used to translate addresses, packets are simply lost in the network. Third, an attacker can send a continuous stream of communications that use up the victim system’s resources, making them unavailable to legitimate users (flooding). Flooding attacks can be further classified as brute force attacks, which make an entire site unavailable because of congestion, and stateful resource attacks [1], which make a service unavailable by consuming resources needed by the service (e.g. memory, file descriptors, queue allocations).

Although a high priority, protecting against DoS attacks is an extremely difficult undertaking. Most research solutions call for sweeping changes in protocols, router behavior and special cooperation between security domains. However, lacking from many proposed solutions (and the topic of this paper) is the rigorous analysis of the statistical behavior of DoS attacks. Through statistical analysis it is possible to identify attack classes and characterize attack behavior within a class. Behavior of new attacks falling within a class can be anticipated and the attacker’s ability to adapt to proposed mitigation strategies examined. This type of analysis is the first step needed to drive such major alterations.


APPROACH

Modeling and simulation (M&S) is one approach to analyzing the effect of attacks against target networks. Specifically, event-driven simulation lends itself to capturing and analyzing the stochastic nature of networks and network attacks. With this technique, distributions are used to represent network traffic (e.g. packet size, packet inter-arrival rate) and application behavior (e.g. number of pages on a Web site), as well as other aspects of network behavior. Through Monte Carlo sampling, these distributions are sampled and their effect incorporated into model results.
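As a concrete (if much simplified) illustration of this technique, the Python sketch below draws packet inter-arrival times and packet sizes from distributions for one stochastic run and then averages a measure of interest over several runs. It is not the OPNET implementation, and the distribution parameters are placeholders.

    import random

    def simulate_run(duration_s=3600.0, mean_interarrival_s=0.5,
                     mean_size_bytes=840, size_std_bytes=200, seed=None):
        """One stochastic run: exponential inter-arrivals, normally distributed sizes."""
        rng = random.Random(seed)
        t, total_bytes = 0.0, 0
        while True:
            t += rng.expovariate(1.0 / mean_interarrival_s)      # next packet arrival
            if t > duration_s:
                break
            total_bytes += max(40, int(rng.gauss(mean_size_bytes, size_std_bytes)))
        return total_bytes

    # Monte Carlo: repeat the run with different seeds and average the result
    runs = [simulate_run(seed=i) for i in range(30)]
    print(f"average offered load ~ {sum(runs) / len(runs) / 3600.0:.1f} bytes/s over {len(runs)} runs")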

One of the biggest advantages to using M&S is the ease with which the analyst can modify the system under review (i.e. compared to using a testbed). Attack behavior can be examined under a variety of conditions (e.g. attack rate can be varied, background network traffic modified) by simply changing model parameters. Likewise, the target network can be modified (size, topology) and the same attacks re-examined.

The Johns Hopkins University Applied Physics Laboratory (JHU/APL) is using M&S to perform the analysis of DoS attacks. Several models have been produced: a high-fidelity model of the TCP network protocol; a 104-node subnet (i.e. the target network, a mixture of NT and Unix workstations); and several DoS attacks—TCP SYN Flood, Octopus, Snork, UDP Storm and ARP Cache Poisoning. These models were developed in OPNET Modeler, a commercial network simulation package. Run as a Monte Carlo simulation, the models produce a variety of measures of effectiveness when attacks are launched against them.

This paper focuses on our analysis of the Octopus and TCP SYN Flood attacks executed against a server in the 104-node network. Octopus consumes server resources at the application layer by making many simultaneous requests for a service (e.g. Hypertext Transfer Protocol (HTTP)). These services often spawn child processes that use system resources (e.g. memory). With the Octopus attack, enough requests are made to exhaust available resources, thus denying further requests. TCP SYN Flood consumes resources at the transport layer by filling the server’s pending connection queue with SYN packets. Because the source IP address is spoofed, the SYN packet acknowledgement process is interrupted and the queue remains full until a timeout clears it. Once the queue is full, further connection requests are denied.
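Both attacks can be seen as instances of the same abstraction: a bounded pool of stateful entries (application connection slots for Octopus, pending-connection queue positions for TCP SYN Flood) that are freed only when a timeout expires. The sketch below is our own illustration of that abstraction, not code from the models; the class name and parameter values are hypothetical.

    import heapq

    class StatefulPool:
        """A bounded pool of stateful entries that are released only by a timeout."""
        def __init__(self, capacity, hold_time_s):
            self.capacity = capacity
            self.hold_time_s = hold_time_s
            self._release_times = []                  # min-heap of expiry times

        def request(self, now):
            """Try to claim one entry at time `now`; return False if the pool is exhausted."""
            while self._release_times and self._release_times[0] <= now:
                heapq.heappop(self._release_times)    # timeout frees the entry
            if len(self._release_times) >= self.capacity:
                return False                          # resource exhausted: request denied
            heapq.heappush(self._release_times, now + self.hold_time_s)
            return True

    # Octopus-style target: application connection slots held until the server timeout.
    app_slots = StatefulPool(capacity=500, hold_time_s=300.0)
    # SYN-Flood-style target: half-open connection queue held until the SYN timeout.
    syn_queue = StatefulPool(capacity=1024, hold_time_s=180.0)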

MODEL VERIFICATION AND VALIDATION

Before the subnet and attack models can be used for analysis, model verification and validation (V&V) must be performed to assure model correctness. V&V are two distinct activities that assess the correctness and accuracy of models and simulations. Simply put, verification answers the question ‘Did I build the model right?’—is the code behaving as the model developer intends? Validation answers the question ‘Did I build the right thing?’—does the model replicate the ‘real’ system to the extent needed? For models built from scratch, those created by modifying OPNET code and OPNET-produced application models, results validation is performed. Results validation compares model results with output from the system being modeled. Included in results validation are model output comparisons with real data, sensitivity analyses and assessments made by expert opinion. The objective of results validation is to assess whether the model output is sufficiently accurate, unbiased and complete for the model’s intended use.

A significant amount of work has been done for V&V of the attack and target network models. In performing results validation on the target network model, JHU/APL created scripts to automate the collection and processing of live network traffic data in a small network (four servers and two clients). Using the File Transfer Protocol (FTP), Telnet and HTTP services, 1 hour of real network data was collected. Twenty stochastic runs on a model of the same network were analyzed to compare model-generated behavior with live network behavior. A series of formal assertions about TCP were checked against both live network and model data to test protocol validity [2]. In addition, traffic statistics were compared between the live network and model for HTTP, FTP and Telnet [2]. Figures 1 and 2 show two such comparisons for HTTP. In Figure 1, aggregate packet size for HTTP traffic is compared. The surge of data centered at 405 bytes comes from connection requests; data centered at 1427 bytes represent the maximum transmission unit (MTU) in TCP; data centered at 113 bytes contains packets exceeding the MTU.

Figure 2 shows connection length for HTTP traffic. Testbed and model data show the majority of connections lasting less than 250 s. The number of connections declines for both the model and testbed data as connection length increases. Model data is smooth in comparison with testbed data. We would expect to see the same smoothness in testbed data had more data been collected.
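A minimal sketch of this kind of side-by-side comparison is shown below, assuming the live and model packet sizes have already been extracted into lists of byte counts; the bin width and the example data are placeholders, not the published traces.

    from collections import Counter

    def binned_counts(sizes, bin_width=146, lo=40, hi=1500):
        """Count packet sizes into fixed-width bins so two traces can be compared."""
        counts = Counter()
        for s in sizes:
            if lo <= s <= hi:
                counts[lo + bin_width * ((s - lo) // bin_width)] += 1
        return counts

    def compare(live_sizes, model_sizes):
        live, model = binned_counts(live_sizes), binned_counts(model_sizes)
        for b in sorted(set(live) | set(model)):
            print(f"bin starting at {b:4d} bytes: live={live.get(b, 0):5d}  model={model.get(b, 0):5d}")

    compare(live_sizes=[405, 405, 1427, 113], model_sizes=[405, 1427, 1427, 113])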

Verification tests were performed on the Octopus and TCP SYN Flood models built by JHU/APL. For Octopus, a simple network consisting of a server, a workstation, an Octopus attack node and a switch was used for the tests.


[Figure 1. HTTP packet size distribution: histograms of packet size (bytes) versus number of occurrences for the model and the live network; sample sizes 1941 and 1680.]

[Figure 2. HTTP connection length comparison: histograms of connection length (seconds) versus number of occurrences for the live network and the model; sample sizes 49 and 50.]

The server consists of the standard OPNET server with the addition of an ‘Octopus application’ and the use of a JHU/APL-modified TCP process. This application allows connections to be created on a specific port and automatically terminates connections after a specified time period. In our tests, the port is 101 and the time period is 500 s. Modifications to TCP enforce a limited number of connections on a host at a given time.

An example verification test performed on Octopus examines the model’s ability to transmit a designated number of attack packets, potentially denying service. In this case the number of Octopus attack packets is set to 200 with a maximum number of connections set to 100. (These values are arbitrary and are used for testing purposes only.) Figure 3 shows the number of attack packets transmitted by the Octopus attacker and shows that, as expected, 200 packets are transmitted.

Our TCP SYN Flood attack uses a distributed DoS (DDoS) configuration. For verification, a simple network consisting of a switch, a client, an attacker and a server was constructed. The server node is configured to provide Web service (port 80) and the DDoS node is configured to send TCP SYN packets to the server starting at 1000 s (500 s start time plus a constant offset of 500 s) and terminating at 2000 s. Testing verified that the packets appear to originate from the workstation and not the DDoS machine. The packets are sent at a fixed interval of 0.001 s and, in our test case, the TCP connection queue size is 100. The server is configured to use a 180-s timeout before terminating a connection.

Figure 4 presents example verification test performance data for TCP SYN Flood, depicting the server’s connection queue as a function of time. As expected, the connection queue fills up at 1000 s and then depletes at approximately 1180 s, but is quickly refilled.
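As a rough cross-check of this behaviour, the sketch below steps a much-simplified pending-connection queue through the same settings (1000 spoofed SYN/s from 1000 s, queue size 100, 180 s timeout). It is not the OPNET model: it only shows the queue filling almost immediately after the attack starts and new requests being refused while it is full; the brief depletion near 1180 s seen in Figure 4 depends on details of the modelled TCP timeout handling that this sketch does not reproduce.

    from collections import deque

    DT = 0.001                               # time step (s); one spoofed SYN per step during the attack
    ATTACK_START, ATTACK_STOP = 1000.0, 2000.0
    QUEUE_SIZE, TIMEOUT = 100, 180.0

    queue = deque()                          # creation time of each half-open connection
    refused = 0
    for i in range(int(2100.0 / DT)):
        t = i * DT
        while queue and t - queue[0] >= TIMEOUT:
            queue.popleft()                  # 180-s timeout frees the half-open entry
        if ATTACK_START <= t < ATTACK_STOP:
            if len(queue) < QUEUE_SIZE:
                queue.append(t)              # spoofed SYN claims a pending-connection slot
            else:
                refused += 1                 # queue full: further connection requests refused
        if i in (999_000, 1_000_200, 1_500_000):   # t = 999.0 s, 1000.2 s, 1500.0 s
            print(f"t = {t:7.1f} s  queue length = {len(queue)}")

    print("connection requests refused while the queue was full:", refused)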

Results validation was performed for the Octopus and TCP SYN Flood attacks. To perform results validation on Octopus, testbed data was collected. The test consisted of a server, client and attacker. The test measured the number of successful client connections (the client requests service at a fixed rate) while the server is attacked with Octopus. An OPNET model of this scenario was also generated and data collected. A comparison of model and testbed results is shown in Figures 5 and 6, which show the number of valid user connections as a function of time for the model and testbed, respectively. Both show connections changing at 300-second intervals, which is consistent with the timeout value used in the model and testbed. Correctly capturing this behavior is critical to predicting the system’s performance under attack conditions. The timeout parameter setting directly affects the length of time the attacker can seize and hold server system resources. The ranges in Figures 5 and 6 are not identical because the relative packet rates between the attacker and client differed in the model and testbed. This difference does not impact the model’s ability to accurately quantify attack effects.

To perform results validation on TCP SYN Flood, we set up a testbed with four different subnets. The subnets were connected via a CISCO 7000 router over 100 Mb Ethernet. Subnets 1 and 2 comprised two Linux hosts each, while Subnet 3 hosted two Solaris machines, one of which was the victim of the attack. The fourth subnet had a BSD machine.


[Figure 3. Number of attack packets transmitted by Octopus: Octopus packets transmitted versus time (s), 0–6000 s.]

[Figure 4. Connection queue at the server: connection queue length versus time (s), 0–1800 s.]

Different elements of the Stacheldraht DDoS attack were installed on all available nodes except the victim of the attack, which was the HTTP server. Stacheldraht requires a controller and a handler to coordinate all SYN broadcasts. In our setup, the handler and controller co-existed with the legitimate client, which continuously tried to connect to the HTTP server. Using this testbed, we ran a DDoS attack against the server, while the client tried to connect at a rate of once every 5 s. During 400 attempts, the client connected once, giving us a probability of denied service of 99.75%. We also used this testbed to characterize the attack packet rate for each broadcasting node. We ran six different SYN flood attacks, 15 to 20 s each. The first five attacks consisted of each attacker sending SYN packets to the victim, while the others were dormant. For the sixth attack, all the nodes attacked in concert to produce one DDoS attack.

Figures 7 and 8 compare model and testbed results. In Figure 7, time refers to real time in the testbed; time in Figure 8 is simulation time.


[Figure 5. Model Octopus data: number of successful client connections versus time (s), 0–2500 s.]

[Figure 6. Testbed Octopus data: number of successful client connections versus time (s), 0–2500 s.]


[Figure 7. TCP SYN packet send rate (testbed).]

[Figure 8. TCP SYN packet send rate (model).]


Table 1. FTP, HTTP and Telnet distribution and parameter settings

Service  Parameter                               Setting
FTP      Command mix (get/total)                 75%
         Inter-request time (seconds)            Exponential (97)
         File size (bytes)                       Normal (840 000, 20 000)
HTTP     Max connections                         4
         Page inter-arrival time (seconds)       Exponential (222)
         Page properties:
           Page 1 size (bytes)                   Lognormal (1750, 6.2)
           Number of additional pages            Pareto (1.77, 2.43)
           Additional page size (bytes)          Lognormal (11 372, 6.2)
Telnet   Duration (seconds)                      Normal (900, 200)
         Inter-command time (seconds)            Exponential (5)
         Terminal traffic (bytes per command)    Lognormal (15, 40 000)
         Host traffic (bytes per command)        Lognormal (500, 80 000 000 000)
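To make the table concrete, the sketch below samples FTP behaviour from the Table 1 settings. It assumes Exponential(97) means a 97 s mean and that Normal(840 000, 20 000) gives the mean and standard deviation in bytes; the HTTP and Telnet lognormal and Pareto entries are left out because their parameter conventions are specific to OPNET.

    import random

    rng = random.Random(1)

    def next_ftp_request():
        """Sample one FTP request from the Table 1 FTP settings (interpretation noted above)."""
        wait_s = rng.expovariate(1.0 / 97.0)                  # inter-request time, mean 97 s
        command = "get" if rng.random() < 0.75 else "put"     # command mix: 75% gets
        size_bytes = max(0, int(rng.gauss(840_000, 20_000)))  # file size (mean, std assumed)
        return wait_s, command, size_bytes

    t = 0.0
    for _ in range(3):
        wait_s, command, size_bytes = next_ftp_request()
        t += wait_s
        print(f"t = {t:8.1f} s  {command:3s}  {size_bytes} bytes")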

The individual attack rates compared favorably, as did the combined rate (not shown). The packet rate seen by the victim of the combined attack in the testbed is significantly lower than the sum of the packet rates for the individual attacks. Even though the router can successfully route packets for each of the separate attacks, the overall rate is too high for it to handle and it begins dropping a significant amount of traffic. For our test network, the Cisco 7000 router is, in fact, a bottleneck. Similar results are seen in the model.

OPNET MODEL PARAMETERS FOR THE TARGET NETWORK

Once the models are verified and validated, they can be used for analysis purposes. To perform this analysis, we ran the models using the distributions and parameters shown in Table 1. These settings drive the behavior of FTP, Telnet and HTTP in our analysis. The parameter values were derived empirically from testbed data.

ANALYSIS RESULTS—OCTOPUS ATTACK

To perform the analysis, our models are used to calculate a variety of measures of effectiveness (MOEs). Presented here is the probability of denied service (PDS), which, for Octopus, is defined as

PDS = 1 − (number of successful services / number of service attempts)
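For example, applying this definition to the testbed SYN flood run described earlier (one successful connection in 400 attempts) gives PDS = 1 − 1/400 = 0.9975; the trivial helper below does the arithmetic.

    def pds(successful, attempted):
        """Probability of denied service: fraction of service attempts that fail."""
        return 1.0 - successful / attempted

    print(pds(successful=1, attempted=400))   # 0.9975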

Results of our model runs show that the effectiveness of the Octopus attack (an attack that consumes server resources) is governed by the rate at which the resource, once captured, is released (i.e. server timeout setting), the total amount of the resource (i.e. maximum number of allowed connections), and the arrival rate of user and attacker resource requests.

[Figure 9. Average PDS as a function of server timeout (Octopus attack): average probability of denied service versus server timeout (30–1000 s); model runs: 30, attack rate: 10 requests/s, max. connections: 500.]

Model results in Figure 9 show average PDS (averaged over 30 model runs) for varying timeout parameters. The model shows a clear dependency between attack effectiveness and the server timeout setting. Results show that decreasing the server timeout parameter lowers attack effectiveness. In fact, when the timeout parameter is set to 30 s, PDS is 0. It should be noted, however, that a 30-s timeout may have a secondary impact of denying service to legitimate users. In addition, as will be discussed later, the attacker can counter the positive effects of lowering the timeout setting.
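One way to see why the timeout matters is a back-of-the-envelope occupancy estimate (our illustration, not part of the published analysis): by Little's law the attacker holds roughly attack rate × timeout connections at steady state, and denial sets in when that product approaches the server's connection limit.

    CAPACITY = 500                 # server's maximum number of connections (Figure 9 setting)
    ATTACK_RATE = 10.0             # attack requests per second (Figure 9 setting)

    for timeout_s in (30, 50, 100, 300, 500, 1000):
        held = ATTACK_RATE * timeout_s            # ~connections the attacker holds at once
        if held < 0.9 * CAPACITY:
            verdict = "well below capacity: little or no denial expected"
        elif held <= 1.1 * CAPACITY:
            verdict = "near capacity: outcome depends on stochastic arrival details"
        else:
            verdict = "demand exceeds capacity: legitimate requests mostly denied"
        print(f"timeout {timeout_s:4d} s -> attacker holds ~{held:6.0f} of {CAPACITY} slots; {verdict}")

At 30 s the attacker holds only about 300 of the 500 slots, which is consistent with the PDS of 0 reported above; the full model additionally accounts for legitimate traffic and the stochastic effects this estimate ignores.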


[Figure 10. Average PDS as a function of maximum number of connections (Octopus attack): average probability of denied service versus server maximum number of connections (500, 1000, 1500); model runs: 30, attack rate: 10 requests/s, server timeout: 300 s.]

[Figure 11. Average PDS as a function of attack rate (Octopus attack): average probability of denied service versus attack rate (1–100 requests/s); model runs: 30, max. connections: 500, server timeout: 50 s.]

Attack effectiveness also decreases as the total amount of the resource (i.e. maximum number of connections) at the server is increased. These results are shown in Figure 10. In this case, PDS drops by close to 0.30 when the server’s maximum number of connections is increased from 500 to 1500. However, there is a limit to the success of this approach because of the limit on a server’s available resources (e.g. memory). The attacker can counter the positive effects of this approach as well. Nevertheless, the model shows a clear dependency between attack effectiveness and maximum number of connections.

The attacker can easily counter the impact of these adjustments by stepping up the rate at which the resource is requested. Figure 11 shows average PDS as a function of attack rate for attack rates of 10, 20 and 100 requests per second. At 10 requests per second, PDS is below 0.2. However, as the attack rate increases, so does PDS, regardless of the 50-s server timeout setting.

ANALYSIS RESULTS—TCP SYN FLOOD ATTACK

PDS is the primary MOE reported for the TCP SYN Flood attack. For this attack, PDS is defined as

PDS = 1 − (number of successful connections / total number of attempts)

In this case, failed connections occur because the pending connection queue is full when the SYN packet arrives for a new, legitimate connection. In calculating PDS, attack packets are not included in the totals.

In our scenario, the TCP SYN Flood attack was executed between 500 and 700 s. Unlike the Octopus attack, TCP SYN Flood attack effects linger beyond 700 s. Figure 12 presents the packets forwarded by the router used by the attacker. The total rate of zombie packets exceeds the maximum packet-forwarding rate of the router. As such, the router’s internal buffer fills with attack packets. After the attack ends, the router continues to forward attack packets to the victim, extending attack effects.
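The lingering effect follows from a simple fluid argument: while the aggregate zombie rate exceeds the router's forwarding rate the backlog grows, and after the attack stops the router keeps draining that backlog toward the victim. The sketch below illustrates the idea; the input rate, forwarding rate and buffer size are placeholder values, not parameters taken from the model.

    ATTACK_START, ATTACK_STOP = 500.0, 700.0   # attack window (s), as in the scenario
    IN_RATE = 10_000.0      # aggregate zombie SYN rate (pkts/s)   -- placeholder value
    FWD_RATE = 7_000.0      # router forwarding capacity (pkts/s)  -- placeholder value
    BUFFER_PKTS = 200_000.0 # router buffer size (packets)         -- placeholder value

    # Backlog grows at (in - out) during the attack, capped by the buffer,
    # then drains at the forwarding rate once the attack stops.
    backlog = min(BUFFER_PKTS, (IN_RATE - FWD_RATE) * (ATTACK_STOP - ATTACK_START))
    drain_time_s = backlog / FWD_RATE
    print(f"backlog at {ATTACK_STOP:.0f} s: {backlog:,.0f} packets")
    print(f"attack traffic keeps arriving at the victim for ~{drain_time_s:.0f} s after the attack ends")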

The results of our model runs show that the effectiveness of the TCP SYN Flood attack, like the Octopus attack, is governed by the rate at which the resource, once captured, is released (i.e. SYN queue timeout setting), the total amount of the resource (i.e. pending connection queue size) and the arrival rate of user and attacker resource requests.

Model results in Figure 13 show average PDS (averaged over 20 model runs) for three connection queue timeout values: 15, 90 and 180 s. Figure 13 summarizes PDS during and after the attack. Model results show the dependency between attack effectiveness and the timeout values both during and after the attack. As with Octopus, decreasing the timeout parameter lowers attack effectiveness.

Attack effectiveness also decreases as the total amount of the resource (i.e. connection queue size) is increased. These results are shown in Figure 14. When the queue size is increased from 1024 to 8192, PDS drops by close to 0.40 during the attack and by approximately 0.10 after the attack. As in the Octopus attack, there is a limit to the success of this approach because of server resource constraints.

As before, the attacker can easily counter the impact of these adjustments by stepping up the attack rate. Figure 15 shows the impact of increasing the attack rate when packets are directly injected at a rate of 5000 packets/s into the switch used by the two servers. This additional input, combined with the existing attack, results in a total of 10 000 packets/s. The queue size for both cases shown in Figure 15 is 8192. As expected, the attacker is able to substantially increase PDS by sending more packets.

[Figure 12. Router packet forwarding (TCP SYN Flood attack): packets forwarded (pkts/s) versus time (s), 0–1200 s.]

[Figure 13. Average PDS as a function of queue timeout (TCP SYN Flood attack), during and after the attack; model runs: 20.]

[Figure 14. Average PDS as a function of connection queue size (TCP SYN Flood attack), during and after the attack, for queue sizes 1024, 4096 and 8192; model runs: 20, attack rate: 5000 pkts/s, connection queue timeout: 15 s.]

[Figure 15. Average PDS as a function of attack rate (TCP SYN Flood attack), during and after the attack, for aggregate SYN rates of 5000 and 10 000 pkts/s; model runs: 20, connection queue size: 8192, connection queue timeout: 15 s.]

CONCLUSIONS

For the attacks examined thus far, including the Octopus and TCP SYN Flood attacks shown here, M&S has proven to be a fruitful approach to quantifying and assessing DoS attack effects. For the Octopus attack, M&S has shown that attack effects vary as a function of server resources, server resource release rate and attack rate. We have extended this finding, showing that the TCP SYN Flood attack also varies as a function of these three parameters. We believe our results will extend to other stateful resource attacks and that a common mitigation strategy may prove fruitful for all stateful resource attacks. M&S also plays a role in anticipating the attacker’s ability to adapt to mitigation strategies. Armed with this information, more robust defense mechanisms can be developed, enhancing overall network defense. Through DARPA funding, our M&S work is being extended in just this way—we are examining DDoS attacks, mitigation strategies and the attacker’s ability to overcome them. Results will become available in our future work.

V&V is an important part of M&S and is crucial to ensuring the correctness and accuracy of the models and model results. We have found data collection in a testbed to be a good approach to results validation of attack and network models. Thus far, however, we have examined attack effects in a small network (104-node) model. Performing a systems analysis of DoS attacks in a larger, more complex network is needed. Issues for future research include model scalability and V&V of large network models.

Protecting against DoS attacks is a difficult and complex problem. There is no single approach or answer to increasing our resiliency and protection against them. In fact, the solution will likely be a combination of approaches, including optimizing server settings, using network topologies that minimize attack effects and sweeping changes to protocols and router behavior. It is clear, however, that M&S is a useful tool in quantifying and assessing DoS attacks, enabling us to make intelligent decisions in building a protection strategy.

REFERENCES

1. Croll AA, Packman E. A balanced approach to DoS. Information Security, May 2000.

2. Gregg DM, Blackert WJ, Furnanage DC, Heinbuch DV. Denial of Service Attack Assessment Model Verification and Validation Report. Johns Hopkins University Applied Physics Laboratory, May 2001.

Authors’ biographies:

Donna M. Gregg received a BS in Mathematics from the University of Maryland in 1984 and an MS in Mathematics from The Johns Hopkins University in 1988. She joined the APL in 1985 as an Associate Staff Mathematician and was appointed to the Principal Professional Staff in 1999. She is currently the Group Supervisor of the Information Operations Group in the Power Projection Systems Department, leading the group’s efforts in information assurance, including modeling and simulation of information security (INFOSEC) systems and verification and validation of INFOSEC models for government sponsors.

William J. Blackert has a master’s degree in Electrical Engineering and is a member of JHU/APL’s senior professional staff. He specializes in Information Assurance analysis.

David V. Heinbuch has a master’s degree in Computer Science and is a member of JHU/APL’s associate professional staff.

Donna C. Furnanage has a master’s degree in Electrical Engineering and is a member of JHU/APL’s associate professional staff.

Yanni A. Kouskoulas has a PhD in Electrical Engineering and is a member of JHU/APL’s senior professional staff.
