
Misuse Detection and Prevention in Ad-hoc Networks

Reuven Gevaryahu ([email protected]) Bobby Yaros ([email protected])

Faculty Advisors: Saswati Sarkar, Farooq Anjum Graduate Advisor: Dhanant Subhadrabandhu


Abstract

Generally, a computer on a wireless network communicates with other computers through a base station. The base station transmits and receives packets wirelessly, but also connects to a wired network (e.g., the Internet). However, there are situations where no base station exists but users still wish to communicate with other computers: Native American reservations, underdeveloped nations, and mobile warfare scenarios, for example. In these settings users have wireless connections but no wired network infrastructure, and installing base stations and an underlying wired network would be too costly. The solution is ad-hoc networking. Each wireless computer has a transmission range; if a source wants to send information to a destination that is not within range, it uses its neighbors to relay the message. Unfortunately, this type of networking introduces new security issues. In a wired scenario with base stations, all traffic will pass through a relatively small number of points, and security monitoring for various attacks can be done at those points. In ad-hoc networks, these points do not exist, so a new system must be developed. In this project, we consider two separate types of systems: Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS). An IDS raises alerts whenever bad traffic is discovered; bad traffic consists of packets that, when received and processed by the destination, might harm the system. An IPS actively drops bad traffic when it is discovered. In both systems, the goal is to use the fewest resources while still guaranteeing that all traffic will be analyzed: in an IDS, all bad packets should be detected; in an IPS, all bad packets should be dropped. An IDS takes advantage of the promiscuous mode of operation in wireless networking, in which a node can listen to all transmissions in its range.
To minimize resource consumption, the goal of an IDS becomes running the detection software on the fewest nodes possible while still guaranteeing complete coverage, i.e., that each node's transmissions will be heard by at least one node running the detection software. We consider a simple dominating set (DS) node selection algorithm and compare it to a naive random-probability (RP) algorithm in which each node activates its detection software with some fixed probability. In this project, the DS algorithm is proven to guarantee complete coverage, implemented, and compared to an implementation of RP. Promiscuous mode allows a node to listen to all traffic in range, but this is less useful for an IPS, because a node must be able to stop a bad packet's transmission after it is detected; it must somehow control the packet's transmission. One way to solve this problem would be for the scanning node to tell the transmitting node not to send a bad packet after it is detected, but this would require every node to hold each packet until receiving a response from an IPS-active neighbor, which is infeasible. For a bad packet to feasibly be dropped, some node along its path must be running the IPS. The goal of an IPS therefore becomes scanning each packet once along its path. We consider four modes of operation: first hop, last hop, both first and last hop, and probabilistic placement at any hop.


Related Work

Intrusion detection research in the wired world is well developed. Snort, an open source project, provides an excellent traffic analyzer and intrusion detection system for wired networks. In a wired network, there are often a few points, such as routers, through which most traffic passes. These points provide an excellent location where traffic can easily be analyzed; research into IDS placement is unnecessary there, because the obvious solution is to concentrate all intrusion detection at these points. Wireless networking is a relatively new phenomenon that is experiencing rapid growth, and standards and protocols are still being formalized: 802.11a is rarely used, 802.11b is quickly being replaced by 802.11g, and 802.11n is on the drawing board. Many companies are interested in the freedom offered by wireless, but security is still in an early stage of development. Security experts advise to "Treat wireless stations as you would treat an unknown user asking for access to network resources over an untrusted network." [3] In addition to the security weaknesses of current infrastructure-based wireless protocols, ad-hoc wireless networking introduces even more issues and is even less explored by previous research. Determining where to place an IDS is not simple; even determining what constitutes an attack is more difficult, and there is no guarantee that any single point in the network will see even a majority of the traffic. The idea of an intrusion detection system specific to ad-hoc wireless networks was voiced by Yongguang Zhang et al. in their paper "Intrusion detection techniques for mobile wireless networks". They propose an intrusion detection architecture that has both distributed and local aspects, but the paper does not consider efficiency: in Zhang's method, all nodes run IDS software. We intend to use fewer nodes and still have a very high probabilistic guarantee of detecting attacks.
The methods for choosing which nodes will run the IDS software are considered in "Efficacy of Misuse Detection in Adhoc networks" by Subhadrabandhu et al., which is the basis of our research. Our project differs from previous research by looking at an actual implementation rather than theory and simulation.


Technical Approach

There are many aspects to creating a good intrusion detection or prevention system. Most current systems scan packets based on signatures of known attacks; researchers are also looking into detecting attacks even before an attack signature is known. This project focuses on IDS and IPS placement strategies rather than on how to detect attacks. We assume that attacks can be detected, and examine how the network and detection software can be arranged to produce the most efficient, yet robust, systems.

AODV

Of the many ad-hoc routing protocols, Ad-hoc On-demand Distance Vector (AODV) was chosen as the routing protocol for this project because the faculty and graduate students we consulted were very familiar with it. The two most commonly used implementations are provided by Uppsala University (UU) and by the National Institute of Standards and Technology (NIST). We chose the UU implementation because it operates in user space, whereas the NIST implementation runs entirely in kernel space; a user-space implementation makes modifications easier.

Snort and Snort_inline

Throughout this project, we used Snort as the detection software. Snort calls itself "the de facto standard for intrusion detection/prevention". It is an open source packet sniffer with a large set of customizable rules used to determine whether packets are malicious. Snort_inline is a modified version of Snort that allows a detector to actively drop packets determined to be malicious. It uses a netfilter (firewall) rule to request that packets be queued to userspace, and it uses libipq to load and analyze the packets before allowing them to be sent. With the release of Snort 2.3.2 in March 2005, the Snort and Snort_inline projects merged; however, our code is based on earlier releases, when the projects were separate.
Intrusion Detection System (IDS) Design and Implementation

In a wired network, one would place Snort at a node through which most traffic passes. In a wireless ad-hoc network, such nodes are not available, since traffic can come from anywhere in the network. To guarantee detection of all malicious packets, every transmission must be heard. If we consider a network where all nodes have the ability to run detection software, the goal is to arrange the system so that whenever there is a transmission, at least one node running the detection software is in range to scan the packet. At the same time, efficiency requires that the fewest possible nodes run the detection software. A solution is to pick nodes with the most neighbors (highest degree) such that the entire network is covered; finding a minimum set of such nodes (the minimum dominating set problem) is NP-complete. Moreover, in an ad-hoc network, global information is not available to each node. In fact, according to the AODV specification, a node learns of its neighbors only when it receives requests to route packets, so a node has no idea whether it should run the detection software. The problem can be modified slightly by requiring each node to broadcast periodic "Hello" messages, so that a node can discover its neighbors. However, the "Hello" messages provide only local information, and with only this information the problem remains complex. Nevertheless, there are good approximation algorithms that allow nodes to decide whether to enable the detection software. A simple Dominating Set (DS) algorithm to perform this task was developed and is listed, along with a proof, in Appendix 1. The algorithm is designed to disable the detection software only when it is absolutely clear that disabling will not lead to missed detections.
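The local decision each node makes can be sketched as follows. This is only one plausible reading of the rule, written in C for illustration (the project's IDS code is Java, and the exact algorithm with its proof is in Appendix 1); all names are illustrative. A node stays enabled by default and disables only when a strictly higher-ranked neighbor covers it and has explicitly granted permission, erring toward "enabled" as the text above requires.

```c
#include <stddef.h>

/* Illustrative neighbor record: degree learned from periodic broadcasts,
   plus whether that neighbor has told us we may disable. */
struct neighbor {
    int id;
    int degree;       /* neighbor count the node advertised */
    int may_disable;  /* 1 if this neighbor granted permission to disable */
};

/* Degree comparison with node id as tie-breaker, so the ranking is total. */
static int outranks(int deg_a, int id_a, int deg_b, int id_b)
{
    return deg_a > deg_b || (deg_a == deg_b && id_a > id_b);
}

/* Conservative rule: keep Snort enabled unless some higher-ranked neighbor
   covers us AND has explicitly said we may disable. */
int should_run_ids(int my_id, int my_degree,
                   const struct neighbor *nbrs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (outranks(nbrs[i].degree, nbrs[i].id, my_degree, my_id)
            && nbrs[i].may_disable)
            return 0;   /* covered: safe to disable */
    }
    return 1;           /* default: run detection */
}
```

Note that the default return value is 1 (run detection): when in doubt, a node scans, which matches the design goal of never disabling when that could lead to a missed detection.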


After developing and proving the algorithm, a software implementation was developed. The implementation had to deviate from the algorithm slightly because of some networking realities. First, the algorithm assumes that when a node leaves the network, each node will immediately detect the loss of that neighbor. In reality, there is no way to know for sure whether a node is present, so much of the implementation relies on periodic broadcasts of "Hello" messages: if a node is not present, its "Hello" broadcasts will not be heard. This leads to the possibility of missed detections. Consider a network where one node has determined it is safe to disable because all of its neighbors have told it that it may do so; it has a higher-degree neighbor running the detection software, so its transmissions will be scanned. Now remove the higher-degree neighbor. The node should enable its detection software, but it has not yet noticed the neighbor's departure, and in the period before it re-enables its software, detections might be missed. Nevertheless, this issue should arise only rarely, since topology changes, even in ad-hoc networks, are relatively infrequent.

Implementation Specifics

The implementation is written in Java and uses threads to perform the various requirements of the algorithm.
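One of those requirements is neighbor liveness: since presence is inferred from periodic "Hello" messages, a neighbor that has not been heard from within a timeout is presumed gone. A minimal sketch of that check (the real code is the Java ExpiredNeighborRemover thread described below; the timeout value and names here are illustrative):

```c
#include <stddef.h>

#define HELLO_TIMEOUT_MS 3000  /* illustrative; the real timeout is configurable */

/* Drop neighbors whose last "Hello" is older than the timeout.
   Entries are compacted in place; returns the new neighbor count.
   In the real system, any removal also triggers a re-evaluation of
   whether this node must enable Snort. */
size_t expire_neighbors(long long now_ms, long long *last_heard_ms, size_t n)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        if (now_ms - last_heard_ms[i] <= HELLO_TIMEOUT_MS)
            last_heard_ms[kept++] = last_heard_ms[i];
    }
    return kept;
}
```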

The Communication class handles communication across a UDP socket. Degree broadcasts are sent to the broadcast address, so every node learns the degrees of its neighbors. UDP was chosen as the communication protocol because there is no way to tell in advance which neighbors a node has; hence, broadcast messages are necessary. It would have been possible to use broadcast only for neighbor discovery and then use TCP connections to build sessions for further communication.


However, this seemed unnecessarily complicated, since multiple sockets and connections would have to be maintained, and all UDP messages travel only one hop, so delivering the packets is largely the link layer's responsibility anyway. The DominatingSet class has four threads. The DegreeBroadcaster thread sends periodic broadcast messages containing a count of the node's neighbors (its degree). The DisableNotifier thread periodically tells neighbors whether they may disable Snort. The NeighborListener handles all incoming packets and updates the NeighborTreeMap according to any changes (i.e., a new neighbor is discovered or a neighbor updates its degree). The ExpiredNeighborRemover thread searches the NeighborTreeMap for nodes that have not sent a transmission within a specified timeout period. The NeighborTreeMap class stores neighbor information in a red-black tree. Red-black trees were chosen over other data structures because the network topology changes much more slowly than messages are sent (i.e., a node gains and loses neighbors relatively infrequently), so insertions into and removals from the neighbor set are rare compared to neighbor look-ups; red-black trees pay a rebalancing cost on insertion but offer fast searches. Whenever an insertion, update, or removal occurred, NeighborTreeMap determined whether to enable Snort and notified any neighbors affected by the change. IDSControl enabled or disabled Snort. IDSControl was originally implemented using Java's Runtime class, which allows system commands to be executed. Unfortunately, this functionality was not available on the PDAs: Blackdown Java 1.3.1 was the only version of Java available for them, and that version relied on an internal function of the GNU C library that was removed in the library's next release.
So, a workaround was created: IDSControl writes "ENABLE" or "DISABLE" to a control file, and a Perl helper script periodically polls this file and enables or disables Snort according to its contents. The code for the implementation can be found on the CD accompanying this document. Tests of the implementation and the algorithm are discussed later in this paper.

Intrusion Prevention System (IPS) Design and Implementation

Whereas the goal of an IDS is to create a notification whenever an attack is detected, an IPS must prevent attacks from reaching their destinations. To be able to drop a packet, the node running the detection software must be somewhere along the packet's routing path. Designing a system where only some nodes run detection software becomes nearly impossible, because transmissions can come from anywhere in the network and may be routed along various paths. Due to the nature of AODV and ad-hoc networking, there is no way to determine in advance how a packet will be routed, and thus no way to ensure that the packet is scanned by a routing node. Our system is therefore designed so that all nodes run detection software, but a given node may or may not scan a given packet. An efficient system will scan each packet only once, so the question becomes where along the routing path to scan. We considered three modes of determining when to scan packets: First, Last, and Both First and Last (Both). First mode scans the packet only at the first node along the packet's route. Last mode scans the packet only at the last-hop node, just before the receiver. Both mode scans


the packet at both the first hop and the last hop. Scanning packets early in routing saves power and bandwidth, because a bad packet that is dropped early is not retransmitted along its path to the destination. First mode has this benefit. Unfortunately, First mode is susceptible to spoofing by malicious neighbors: a malicious node could report to the next hop in a route that the packet has already traveled more than one hop when, in fact, the packet originates from the malicious node. The next-hop node then does not scan the packet, believing it has already been scanned. This is equivalent to IP spoofing, where a malicious node alters the source address. Packets of this type would elude detection in a First mode network. Last mode, however, cannot be eluded, since a routing node knows a packet is at its last hop when the next transmission will deliver it to its destination. Thus, Last mode has the benefit of thoroughness, in that a malicious neighbor cannot spoof it; it is, however, the worst in terms of power consumption for the intermediate nodes, because the malicious data travels the entire route before reaching the scanning node. Both mode has the benefits of First and Last, but requires two scans: non-spoofed malicious packets are dropped immediately, sparing the network routing transmissions, and spoofed packets are prevented from reaching their destinations because the last hop cannot be spoofed. The last hop is identified simply by determining whether the packet will be delivered on the next transmission. The downside of Both mode is additional delay: each packet is scanned twice instead of once, so the "cost" of Both is one additional scan delay. Additionally, we implemented a Probabilistic mode in which each node is given a random probability of running the detection software; a node scans all packets if it was randomly selected to run.
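The four modes reduce to a per-node, per-packet decision, sketched here with illustrative names (the real decision is made inside the modified aodvd, described later):

```c
/* Which hop scans the packet under each IPS mode. */
enum ips_mode { MODE_FIRST, MODE_LAST, MODE_BOTH, MODE_PROB };

/* is_first:     this node is the first hop after the source
                 (reported hop count, so a malicious source can spoof it).
   is_last:      the next transmission delivers the packet, which the
                 routing table reveals and which cannot be spoofed.
   node_active:  for Probabilistic mode, whether this node was randomly
                 selected at startup to run detection. */
int should_scan(enum ips_mode mode, int is_first, int is_last, int node_active)
{
    switch (mode) {
    case MODE_FIRST: return is_first;             /* cheap, but spoofable   */
    case MODE_LAST:  return is_last;              /* thorough, wastes power */
    case MODE_BOTH:  return is_first || is_last;  /* up to two scans        */
    case MODE_PROB:  return node_active;          /* no coverage guarantee  */
    }
    return 1;  /* unknown mode: scan, erring on the safe side */
}
```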
Probabilistic mode has the benefit that memory and CPU usage can be controlled by altering the probability of running. However, it provides no guarantees: a packet might be scanned once, several times, or not at all.

Implementation Specifics

Because both the AODV software and snort_inline require the kernel's packet-queuing feature, they could not be used together unmodified. We therefore developed a system that allows them to coexist, as well as allowing selective scanning of packets based on their position in the route, in particular the last hop. The Uppsala University AODV implementation, version 0.8.1, works by sending all packets (the first 68 bytes only, enough to include the headers) up to userspace through libipq: kaodv, a kernel module, hooks all incoming packets and queues them to userspace. The aodvd daemon pulls each packet from the queue, updates the kernel routing tables if necessary, and then tells the kernel to accept or reject the packet. (Version 0.9, released several months ago, moves much of the userspace functionality into kernel space; our implementation works only with the older userspace implementation.) Inline Snort functions much like aodvd: it pulls all packets up to user space via libipq. It does not use a kernel hook module; rather, it requires the user to insert a userspace-queuing rule into the regular firewall rules. Using a rule set, it determines whether a packet appears to be part of an attack. If it does, Snort sends a signal back via


libipq to drop the packet; otherwise, it tells libipq that the packet can be forwarded by the kernel. Since our goal was to use Inline Snort to detect intrusions at specific positions within the route, we used AODV to determine whether the packet would be sent to the destination node on the next hop, and only then handed the packet to Snort for analysis; this information was available in the AODV routing tables. We later added the probabilistic all-hop scan, which did not need the routing information. From a high-level view, we made the following modifications to add intrusion detection to AODV-UU:

[Figure: packet flow through the modified system. In kernel space, a packet entering kaodv / IP Tables is queued to userspace via libipq. In user space, the modified aodvd reads the queue; if the packet is at its last hop, it is handed to the modified Snort_inline, which returns a good/bad verdict, and the packet is then forwarded or dropped accordingly.]

All packets are sent as usual to aodvd by kaodv; hence, no modifications were made to kaodv, libipq, or the kernel in general. aodvd was modified so that if a packet is just prior to its destination, it is handed to Snort_inline. Snort_inline was modified to receive packets from aodvd via shared memory and pipes.
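The essence of that hand-off (shared memory for the packet bytes, pipes for the block ID and the verdict) can be sketched as a small self-contained C program. This is not the project code: the child here stands in for snort_inline with a trivial rule that flags packets containing "+++ATH0" (one of the attack signatures tested later), and all names are illustrative.

```c
#define _GNU_SOURCE            /* for memmem() */
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define MAX_PKT 1500           /* maximum packet size */

/* Stand-in for Snort's rule matching: flag packets carrying "+++ATH0". */
static int packet_is_bad(const unsigned char *pkt, size_t len)
{
    return memmem(pkt, len, "+++ATH0", 7) != NULL;
}

/* Parent plays the "aodvd" role: place the packet in shared memory, send
   the shm id (and length) down a pipe, then block until the child, playing
   the "snort_inline" role, answers with a verdict: 0 = forward, 1 = drop. */
int scan_via_child(const unsigned char *pkt, size_t len)
{
    int to_child[2], from_child[2];
    if (len > MAX_PKT || pipe(to_child) < 0 || pipe(from_child) < 0)
        return -1;

    int shmid = shmget(IPC_PRIVATE, MAX_PKT, IPC_CREAT | 0600);
    if (shmid < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                            /* child: scanner side */
        int id, verdict;
        size_t n;
        read(to_child[0], &id, sizeof id);     /* which block to attach */
        read(to_child[0], &n, sizeof n);
        unsigned char *mem = shmat(id, NULL, 0);
        verdict = packet_is_bad(mem, n);
        write(from_child[1], &verdict, sizeof verdict);
        shmdt(mem);
        _exit(0);
    }

    /* parent: routing side */
    unsigned char *mem = shmat(shmid, NULL, 0);
    memcpy(mem, pkt, len);
    write(to_child[1], &shmid, sizeof shmid);
    write(to_child[1], &len, sizeof len);
    int verdict = -1;
    read(from_child[0], &verdict, sizeof verdict);  /* blocking read */
    shmdt(mem);
    waitpid(pid, NULL, 0);
    shmctl(shmid, IPC_RMID, NULL);
    return verdict;
}
```

Unlike this sketch, the real system spawns snort_inline once at startup and keeps the pipes and the shared memory block attached persistently, as described below.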


When our modified aodvd is started, it spawns a child process with an input pipe and an output pipe; these are set in place of the child's standard input and output, and snort_inline is executed. We also allocate and attach a shared memory block of the maximum packet size. When a packet is spotted with hop_count = 1 (i.e., we are the last hop), it is placed into the shared memory block, and the ID of the block is sent to snort_inline via a pipe. First mode and Both mode operate similarly. We then do a blocking read on the other pipe to wait for a result from snort_inline, and either continue or tell the kernel to drop the packet based on the result. Within snort_inline, instead of using libipq to get packets and return results, we read integers on standard input, which are the shared-memory IDs containing the packets. We attach to the shared memory block, scan the packet, write "0" to standard output if the packet is OK or "1" if it is bad, and detach the block. Although an ID number is sent each time, we attach persistently to the memory block rather than reattaching for every packet; this modification saved about 1% CPU usage on each laptop.

Testing

Test Bed

To develop and evaluate our systems, we deployed an ad-hoc network in Moore 306. The network consisted of laptops (two Dell Latitude L400s, two Dell Latitude X200s, one IBM ThinkPad T41, and one Sony Vaio PCG-SRX87) running various versions of the Red Hat/Fedora distribution of Linux, and two Compaq H3950 iPAQ PDAs running the Familiar distribution of Linux (available at handhelds.org). The laptops have x86 Pentium III and Pentium M processors, while the PDAs have ARM processors. Each machine uses either an Orinoco Silver wireless card or its own internal wireless hardware, and all are configured for an 802.11b network. UU AODV version 0.8.1 was downloaded, compiled, and installed on each of the laptops.
One laptop was configured to cross-compile for the ARM processors, since the PDAs do not have space for kernel headers and a compiler. Libpcap, the packet capture library, was also rebuilt for the PDAs, since the version in the Familiar distribution would always operate in promiscuous mode with Orinoco cards.

IPS Software Functionality Testing

We initially tested our project using the publicly available attack jolt.c, which sends oversized ICMP packets. Our initial tests were partially successful, but the packets were not defragmented, so many attacks were missed. During the early portion of the second semester, we modified the kaodv kernel module to let the ip_conntrack module perform packet defragmentation before passing the packets to userspace, which solved the fragmentation issues. Subsequently, we tested the software with a much larger variety of attacks. Oversized pings are frequently used in DoS attacks. Packets containing "+++ATH0" cause old modems to hang up. WinNuke is an attack that causes Windows 95 to lock up by sending out-of-band data to port 139. "finger 0@host" returns a list of users on the host machine, so it can be used to gather usernames for logging onto a computer via password guessing or a more sophisticated attack. We used netcat and ethereal to watch for these attacks on the target machine.


We found that a small number of the finger attacks were missed during testing. Initially we thought this was a bug in our code's evaluation of the routing tables, but running in scan-all-packets mode showed that it was not. Further testing demonstrated that the problem was worse under load and that it was a snort_inline bug independent of our code, presumably buried deep in the Snort state engine. Please see Appendix C for details about the missed finger attacks.

IDS Performance Tests

We sought to compare the DominatingSet (DS) algorithm to an algorithm using Random Probabilities (RP) at each node to determine whether it would run. In our testbed, all nodes are within transmission range of each other, so to create a topology, iptables was used to drop packets based on MAC address. Since Snort reads packets before they are filtered by iptables, Snort had to be configured to look only for packets sent by a node's neighbors.

In the topology, each node sends a stream of traffic with occasional attack packets to every other node. MGEN is used to create the stream, which consists of ten 625-byte UDP packets per second (50 kbps). The occasional attack packets are x86 shellcode attacks (packets containing a series of x86 NOOP instructions). Every two seconds, each node sends one attack packet to every other node. The test was run for 5 minutes, so a total of 4500 attack packets and 75000 regular data packets were sent during the test. To make the network heterogeneous with respect to node capabilities, the nodes were set up as follows:

Node 1: Dell Latitude L400 laptop
Node 2: Sony Vaio PCG-SRX87 laptop
Node 3: Dell Latitude L400 laptop
Node 4: Compaq H3950 iPAQ PDA
Node 5: Compaq H3950 iPAQ PDA
Node 6: IBM ThinkPad T41 laptop

Metrics: The two main metrics are the percentage of attacks detected and the number of duplicate scans. A duplicate scan occurs when more than one node scans the same packet. Duplicate scans


unnecessarily consume CPU time. An optimally efficient IDS would have a 100% detection rate and no duplicate scans. Power consumption by the wireless card is not a valid metric for an IDS, because malicious packets are only detected, not stopped: even if a node detects a bad packet, the packet is still delivered to the destination. Power consumption would therefore be the same for both RP and DS.

Predictions: We hypothesized that DS would have nearly 100% detection, since the only chosen node, node 3, would hear all traffic. (The only way it would drop below 100% is if a packet collides at node 3 but is still successfully delivered from sender to receiver.) It would also have no redundant scans, because only node 3 would be running. We also hypothesized that at low probabilities of activating Snort, RP would have few duplicate scans, because only one or two nodes would be running Snort, but many missed detections, because the running nodes would probably not cover the entire network. At higher probabilities, RP would have a high detection rate but many duplicate scans, because many running nodes would hear the same transmissions. We expected that, at the same detection rate, RP would have many more duplicate scans than DS.

Results:

Probability   RP detection   RP duplicates   RP nodes running   DS detection   DS duplicates   DS nodes running
0.00            0.00%             0          X                    99.89%          0            3
0.05            0.00%             0          X                    99.89%          0            3
0.10           45.69%             0          5                    99.89%          0            3
0.15           66.62%          2012          2,5                  99.89%          0            3
0.20          100.00%          5978          1,2,3                99.89%          0            3
0.25            0.00%             0          X                    99.89%          0            3
0.30           80.07%          1333          1,5                  99.89%          0            3
0.35           83.27%          1490          1,6                  99.89%          0            3
0.40           79.80%          1341          1,5                  99.89%          0            3
0.45           99.76%          7135          1,3,4,5              99.89%          0            3
0.50           99.98%          5982          1,2,3                99.89%          0            3
0.55          100.00%          7910          1,2,3,5              99.89%          0            3
0.60           99.91%          7204          1,3,5,6              99.89%          0            3
0.65           96.96%          5878          1,2,4,5              99.89%          0            3
0.70           99.87%         10134          1,2,3,5,6            99.89%          0            3
0.75           99.96%          3738          1,2,6                99.89%          0            3
0.80           99.87%          2993          1,3                  99.89%          0            3
0.85           99.71%         12307          1,2,3,4,5,6          99.89%          0            3
0.90          100.00%          7259          1,3,5,6              99.89%          0            3
0.95           99.87%         12186          1,2,3,4,5,6          99.89%          0            3
1.00           99.69%         11962          1,2,3,4,5,6          99.89%          0            3

(X = no nodes selected to run.)


[Figure: Detection Rate. Detection rate (% attacks detected) vs. p(running) for RP and DS.]

[Figure: Duplicate Scans. Number of duplicate scans vs. p(running) for RP and DS.]


We found that the results were consistent with our predictions. DS picked node 3 to run; only 5 packets were missed, and there were no duplicate scans. The missed packets are likely due to noise at node 3 that would have caused a link-layer retransmission had node 3 been the receiver. RP, however, did not perform as well. At low probabilities, RP missed many packets, because the nodes it chose usually did not cover the entire network. Only above 50% probability did RP consistently catch all packets, and at those probabilities there were always over 2,500 duplicate scans. For RP to ever perform as well as DS, it would have to choose node 3 and only node 3, which is highly unlikely.

Conclusions: The benefits of intelligently choosing nodes are obvious in this experiment. The DS algorithm helped nodes determine whether they should enable their detection software, yet the algorithm is simple and requires few communication packets. Simple node selection algorithms such as this one would be essential to any ad-hoc network wishing to protect itself from attacks: DS provides excellent coverage while using only a minimal amount of resources to detect the attacks.

IPS Performance Testing

The goal of this test was to compare the performance of the First and Last IPS modes. We used the following topology:

For the purpose of this experiment, we added snort_inline rules to treat UDP traffic on port 23 as "bad" traffic and UDP traffic on port 5000 as "good" traffic. We used MGEN to generate a 1.5 Mbps stream of "bad" data from node 1 to node 5, and a similar stream of "good" data from node 6 to node 5. The idea is that if we stop the bad data at node 2 instead of node 4 (first hop instead of last hop), there will be channel capacity left over for node 6 to send to node 5 effectively, and node 4 will be less loaded.

Metrics: The metrics that apply here are delay, power consumption, throughput/channel usage, and individual CPU usage. For this test, power usage is defined as the number of bytes transmitted, throughput as the amount of good data that reaches the destination, and delay as the round-trip time of ping packets. CPU usage is measured with the system load counters, which can be displayed with the Unix command "uptime". In First, Last, and Both modes, all nodes run the Snort detection software (so memory usage is equal), but each node scans only the subset of the packets it routes that matches the chosen parameter (first, last, or both). Furthermore, the detection rate will be 100% in all cases unless we consider spoofing (malicious) AODV nodes, which is not reproducible in our setup. Aggregate CPU usage in First and Last modes will be identical, because each packet is scanned once, although, as

Page 14: Misuse Detection in Adhoc Networkscse400/CSE400_2004... · Security experts advise to “Treat wireless stations as you would treat an unknown user asking for access to network resources

mentioned above CPU usage on the node before a common destination will be higher in last or both modes. Predictions: With first hop, we expect to see better energy usage, and more efficient channel usage/throughput, as the “bad” data is stopped at 2 instead of progressing to 4. With last hop, we expect to see worse energy usage and congestion, as well as some additional delay and packet loss as node 4 gets behind in scanning. One may ask why in First/Last/Both we don't load snort on demand, saving memory as random mode does. We have to always load snort, because even if there are initially no packets that fit the algorithm, loading snort is a time/cpu intensive task (2-4 seconds), and unloading/loading it on a regular basis as packets come by matching or not matching the rules would be extremely inefficient in terms of CPU usage, with only minor gains in system memory availability. Results:

First Mode

Node       Bytes Transmitted   Bytes Received   CPU Used (%)
1          11,820,746          16,826           66
2          7,802               10,962,976       48
3          8,931,632           8,290,008        49
6          8,854,824           24,482           50*
4 (PDA1)   8,410,060           8,431,632        27
5 (PDA2)   5,034               8,424,148        48

* Slightly higher than expected due to an unrelated misconfiguration.

Good data sent = 7,861,632 bytes

Last Mode

Node       Bytes Transmitted   Bytes Received   CPU Used (%)
1          7,203,932           26,308           28
2          6,167,052           6,682,080        3
3          11,800,074          13,319,638       60
6          8,366,012           24,606           26
4 (PDA1)   2,635,958           11,080,104       244
5 (PDA2)   5,736               2,654,486        21

Good data sent = 2,451,804 bytes

Note that AODV uses internal routing messages, so node 1 receives some bytes and node 2 sends some bytes, but these correspond only to the AODV messages. Also note that the CPU on node 4 was more than 200% utilized, meaning that more than two programs (Snort and AODV) were contending for the CPU at all times.


Our results were as expected. In First mode, node 2 does not retransmit the bad data from node 1. Thus, node 3 is free to retransmit data from node 6 only, and node 6 is able to send more data through the channel to node 5 (good data sent = 7,861,632 bytes). In Last mode, on the other hand, node 3 retransmits both the good data from node 6 and the bad data from node 1; node 3 therefore has less opportunity to send data, and only 2,451,804 bytes of good data are transmitted. Clearly, stopping the bad traffic at node 2 (the first hop) has benefits.

Aggregate power usage was also better in the first-hop case, as expected: in the time window, the non-attacking nodes transmitted 2.7 million fewer bytes. Node 1, the attacker, was able to push out 400,000 additional packets because of the clearer channel, so a clearer channel evidently helps an attacker as well. Despite this, the throughput benefit to the non-attackers is obvious, with 7.8 MB of good data reaching the destination instead of the 2.4 MB of Last mode.

Delays were constant across the board, apart from Both mode, where they were approximately double. For any of the single-hop modes, the delay added by scanning depends on the size of the packet and on the speed and workload of the scanning machine. Round-trip ping times (scanned in both directions when the IPS was running) were:

Packet size   With Snort IPS   Without Snort
1450 bytes    21.2 ms          20.9 ms
1000 bytes    20.6 ms          11.3 ms
600 bytes     10.6 ms          10.4 ms
200 bytes     10.2 ms          1.06 ms
56 bytes      10.1 ms          0.73 ms

These delays are small yet significant, and their impact is discussed below.

The CPU measurements above are not particularly accurate. In the case of the laptops, the X windowing system video drivers are very poor, causing 100% CPU usage while the xterms were scrolling. On machine 2 we partially addressed this by running aodv/snort in a text console, but other activity in the X environment inflated the numbers on all four laptops. CPU usage on all the machines was also boosted by Snort's disk logging, which takes significant CPU time while the machine is logging attacks. The combination of these factors makes the CPU numbers above only a very rough estimate of what the scanning and routing actually used. Machine 2 in Last mode, and PDA1 in First mode, suggest what the CPU usage of a laptop or PDA doing routing alone might be.
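The aggregate power comparison can be checked directly from the two mode tables. The script below simply copies the bytes-transmitted columns and sums them (power is defined above as bytes transmitted); the variable names are ours.

```python
# Bytes transmitted per node, copied from the First and Last mode tables above.
first = {1: 11820746, 2: 7802, 3: 8931632, 6: 8854824, 4: 8410060, 5: 5034}
last  = {1: 7203932, 2: 6167052, 3: 11800074, 6: 8366012, 4: 2635958, 5: 5736}

def non_attacker_tx(table, attacker=1):
    # Sum the transmissions of every node except the attacking source.
    return sum(b for node, b in table.items() if node != attacker)

saved = non_attacker_tx(last) - non_attacker_tx(first)
print(f"Non-attackers transmitted {saved:,} fewer bytes in First mode")
# -> 2,765,480, i.e. the "2.7 million fewer bytes" reported above

extra_attacker = first[1] - last[1]
print(f"Attacker pushed out {extra_attacker:,} more bytes on the clearer channel")
```

The same accounting also shows the attacker's side of the trade-off: node 1 transmitted roughly 4.6 million more bytes in First mode, confirming that a clearer channel helps the attacker too.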


Security implications and considerations of our wireless IDS/IPS: Our system shares the limitations of any signature-based attack detection/prevention system: we cannot protect against new attacks. Some new attacks may be matched by generic Snort rules, such as the x86 shellcode matching rule, but we cannot rely on this heuristic-like detection, because hackers, as virus writers have been doing for years, can find ways around it. When our IPS is in use, known attacks are dropped by the IPS nodes while routing, provided they scan the packet. Additional delay is the primary reason not to scan packets; we have found that power and CPU usage are not significant factors. Additionally, the memory required to keep a full ruleset and state engine loaded may not be practical on PDAs.

We have found that transmission of data is a significant power drain; constant transmission cuts the battery life of PDAs by approximately 30%, and of laptops by nearly 20% (see Appendix C). It is therefore preferable to drop malicious packets sooner rather than later, to conserve power in the network. The delay added by scanning depends on the size of the packet and on the speed and workload of the scanning machine, as listed above; these delays are the main penalty for each additional scan.

DS uses the algorithm developed in [1], modified to prefer over-coverage rather than under-coverage. Its goal is to ensure coverage while keeping the minimum number of nodes active (RP does neither). There is, however, a situation in which picking the highest-degree nodes to run the detection software might lead to missed detections: the “hidden terminal problem”. In this scenario, a centrally placed node cannot hear all the traffic on a channel because the outlying nodes all transmit at the same time, yet not all outlying nodes are within range of each other. Transmissions therefore collide at the middle node but not at the outlying nodes. This is the same effect one gets when listening to VHF radios from a very high peak: multiple transmissions obscure the receipt of any one of them.

The case of “middle nodes” is more difficult to test than the IPS modes, because in the IPS tests we could simply drop packets at layer 3, ignoring the fact that the packets arrived at layer 2. Here we would have to physically move the laptops and PDAs to defeat the 802.11b channel access control. We did not have time to perform these tests, but our testing topology could have been two pairs of devices that could hear each other but not the other pair, plus a central IDS node that could hear all four devices. Because the channel would be full, the central scanning node would have heard only a fraction of the data.
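The hidden terminal effect can be illustrated with a toy slotted-transmission model (the slot abstraction and transmit probabilities are our assumptions, not measurements): two outlying senders cannot hear each other, so they transmit independently, and the central monitor loses every slot in which both transmit.

```python
import random

def central_miss_fraction(p_tx, slots, rng):
    """Fraction of transmitted frames the central node cannot decode
    because two mutually hidden senders used the same slot (toy model)."""
    collided = sent = 0
    for _ in range(slots):
        a = rng.random() < p_tx   # hidden sender A transmits this slot
        b = rng.random() < p_tx   # hidden sender B transmits this slot
        if a and b:
            collided += 2         # both frames collide at the central monitor
        sent += a + b
    return collided / sent if sent else 0.0

rng = random.Random(1)
for p in (0.1, 0.5, 0.9):
    print(p, round(central_miss_fraction(p, 100_000, rng), 3))
# As channel load rises, the central monitor misses a growing share of
# frames that the outlying receivers still hear cleanly.
```

In this model the expected miss fraction is simply the per-slot transmit probability p, which is why a heavily loaded channel is precisely the case where a centrally placed IDS node performs worst.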


References

[1] Subhadrabandhu, Dhanant, Saswati Sarkar, and Farooq Anjum. “Efficacy of Misuse Detection in Adhoc Networks.” Pre-print.
[2] Zhang, Yongguang, Wenke Lee, and Yi-An Huang. “Intrusion Detection Techniques for Mobile Wireless Networks.” Wireless Networks, Volume 9, Issue 5 (September 2003).
[3] Gast, Matthew. “Wireless LAN Security: A Short History.” O'Reilly Network (April 2002). <URL: http://www.oreillynet.com/pub/a/wireless/2002/04/19/security.html>
[4] Lang, Brian. “How To Guide – Implementing a Network Based Security Intrusion Detection System.” Internet Security Systems (2000). <URL: http://www.snort.org/docs/iss-placement.pdf>

Tools Used:
Snort – http://www.snort.org
Fedora/Red Hat Linux – http://www.redhat.com, http://www.fedora.redhat.com
Familiar Linux – http://www.handhelds.org
MGEN – http://mgen.pf.itd.nrl.navy.mil
Uppsala University AODV – http://www.docs.uu.se/docs/research/projects/scanet/aodv/aodvuu.shtml
iptables/ipq – http://www.netfilter.org
libpcap – http://www.tcpdump.org
Ethereal – http://www.ethereal.com
Blackdown Java-Linux – http://www.blackdown.org
Sun Java – http://java.sun.com
Perl – http://www.perl.com
netcat – http://netcat.sourceforge.net (although we used the older non-sourceforge release)


Appendix A: Dominating Set Algorithm and Proof <will insert pdf pages here>


Dominating Set Algorithm Illustration


Appendix B: Finger Attack Analysis

Finger attacks seemed to be occasionally slipping past the aodvd-snort_inline IDS system. The following analysis was run to determine whether there is a threshold traffic rate below which the IDS misses no attacks. Using iptables rules, a network was set up in a line topology with a left node, a center node, and a right node. The center node ran the IDS and was the only neighbor of both the left and right nodes; consequently, all traffic between the left and right nodes had to be routed through the center node. The right node sent a finger attack to the left node approximately every 25 seconds (at random intervals between 1 and 50 seconds). Using MGEN, the right node also generated a specified amount of UDP traffic destined for the left node. The results of the trials are below:

Traffic (Mbps)   Time (sec)   Approx. Attacks Sent   Attacks Passed   p(Attack Missed)
0                90,833       1,816.66               0                0
0.5              77,349       1,546.98               0                0
0.75             96,789       1,935.78               0                0
1                82,386       1,647.72               2                0.001213798
1.5              110,783      2,215.66               7                0.00315933
3                61,240       1,224.8                3                0.002449379
6                60,413       1,208.26               12               0.009931637
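The p(attack missed) column is simply attacks passed divided by the approximate number of attacks sent. A quick way to recompute it from the trial data (values copied from the table; the tuple layout is ours):

```python
# Columns: traffic (Mbps), trial time (s), approx. attacks sent, attacks passed.
trials = [
    (0,    90833, 1816.66,  0),
    (0.5,  77349, 1546.98,  0),
    (0.75, 96789, 1935.78,  0),
    (1,    82386, 1647.72,  2),
    (1.5, 110783, 2215.66,  7),
    (3,    61240, 1224.8,   3),
    (6,    60413, 1208.26, 12),
]

for mbps, _seconds, sent, passed in trials:
    # p(attack missed) = attacks that slipped past the IDS / attacks sent
    print(f"{mbps:>4} Mbps: p(missed) = {passed / sent:.6f}")
```

Even in the worst trial (6 Mbps), the recomputed miss probability stays just under 1%, which matches the conclusion below.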

[Figure: Attacks missed vs. Traffic — p(missed attack), from 0 to 0.012, plotted against traffic from 0 to 7 Mbps]

Conclusions

The probability of missing an attack increases as traffic increases. The highest-rate trial in which no attacks got through was at 0.75 Mbps; it ran for nearly 27 hours. In the worst case (6 Mbps), roughly one in one hundred attack packets slipped past the IDS.


In a later test, we used an unmodified version of snort_inline with statically specified routes instead of aodvd. Again, we found that some finger attacks arrived at the receiver. This indicates that the problem is a bug in snort_inline, not in our code.


Appendix C: Resource Consumption Tests

Running detection software on all nodes would guarantee a safe network. However, it might have a significant impact on resource and power consumption at each node, and on the performance of the network. This was assumed in the paper our project is based on [1]; we wanted hard numbers to verify it. We sought to demonstrate the need for a distributed method of intrusion detection by measuring the performance and power-consumption impact on the machines.

Memory Usage
The test of memory consumption was simple. After compiling Snort on our cross-compiling laptop, we installed Snort on a PDA. When run, Snort required little processor time, but it needed nearly 50% of the PDA's 64 MB of RAM (as displayed by ‘top’). A RAM requirement this large is by itself a reason to avoid running an IDS on all nodes. We created a modified Snort configuration that used only 20% of the PDA's memory, but that is still unacceptably high. The RAM requirement had a smaller impact on the laptops, and as technology improves, it will probably become less significant on future machines.

CPU Usage
During the IDS performance testing, CPU usage was monitored using ‘top’. These percentages are rough estimates of CPU consumption, but they give an idea of the work required to perform intrusion detection. In the IDS performance test, ten 625-byte packets per second (a 50 kbps stream) were sent from each node to every other node. It was a six-node network, so each node received at least fifty packets per second. (Some nodes also had to route packets for other nodes, and nodes were sending attack packets every two seconds; these factors raise the packet rate to a maximum of 105 packets/sec at node 3.)
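The stream arithmetic above checks out; here is the calculation, with the constants taken from the test description:

```python
PACKET_BYTES = 625      # packet size used in the IDS performance test
PACKETS_PER_SEC = 10    # packets per second per stream
NODES = 6               # six-node network

# Each stream is ten 625-byte packets per second = 50 kbps.
stream_kbps = PACKETS_PER_SEC * PACKET_BYTES * 8 / 1000
print(stream_kbps)      # 50.0

# Every node receives one stream from each of the other five nodes,
# before counting routed packets and attack packets.
min_rx_rate = (NODES - 1) * PACKETS_PER_SEC
print(min_rx_rate)      # 50 packets per second
```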

PC Model                Processor                        CPU Usage (%)
Dell Latitude L400      Pentium III-M 700 MHz            5.5 - 7.5
Sony Vaio PCG-SRX87     Pentium III-M 850 MHz            6.9 - 10.0
Compaq H3950 iPAQ PDA   Intel XScale StrongARM 400 MHz   40.0 - 58.0
IBM ThinkPad T41        Centrino 1.6 GHz                 3.8 - 4.5

It should be noted that during this testing, Snort was used with a full ruleset and full alert logging. Snort has options to reduce CPU usage, including trimming the ruleset so that each packet passes through fewer comparisons. Snort also offers a “fast” alert mode, which records only packet headers instead of entire packets.

Power Consumption
The power consumption tests were more complicated. We ran several tests and finally obtained reasonable results in the final semester.

Test #1
The laptops were assigned IPs 192.168.1.1, 192.168.1.2, and 192.168.1.3; one PDA was assigned IP 192.168.1.102 (the other PDA was disabled). Using iptables to drop packets based on MAC addresses, one laptop was placed “out of range” of another laptop. To communicate with the other laptops, its packets had to be routed through the


PDA. Ping streams were created between the nodes with 500-byte packets. The stream configuration is shown in the diagram below.

The following command was run on the PDA to monitor power usage:

while sleep 10; do apm; date; done > apmtime.log &

Every ten seconds, apm was called to read the battery level. Power consumption was measured as the time taken to drain the battery from 100% to 90%. The results are tabulated below:

Configuration                10% battery drain time (min)
No Snort                     49.85
Snort Full Non-Promiscuous   49.19
Snort Full Promiscuous       49.03

The results indicate that running Snort had no significant effect on power consumption. However, this test setup had several flaws. First, apm is likely not an accurate measure of the battery's charge. We did not calibrate apm, which requires completely draining the battery from a full charge and then recharging it to full capacity. Regardless, the best way to know when a battery has lost power is to monitor it until it is completely drained. Second, apm was reading the main battery, which was not powering the wireless card: the wireless card sat in the PCMCIA expansion pack, which has its own external battery. This setup therefore did not accurately measure power loss, because the power used by the wireless card was not monitored. Third, in this network setup the PDA was sending one stream of 500-byte packets and routing another. The power required to send packets is much higher than the power consumed to receive them, so consumption due to transmitting overwhelmed any variation that would result from Snort.
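A log produced by the monitoring loop above can be reduced to a drain time mechanically. The parser below is a sketch under a loose assumption: it only requires that each apm reading contain a percentage such as “95%”; the exact apm output format on Familiar Linux may differ, and the function and constant names are ours.

```python
import re

SAMPLE_SECONDS = 10  # the monitoring loop sleeps 10 s between apm calls

def drain_time_minutes(log_lines, start=100, stop=90):
    """Minutes elapsed between the first reading at or below `start`
    and the first reading at or below `stop` (one line per sample)."""
    start_i = stop_i = None
    for i, line in enumerate(log_lines):
        m = re.search(r"(\d+)%", line)
        if not m:
            continue  # skip lines without a battery percentage (e.g. date lines)
        pct = int(m.group(1))
        if start_i is None and pct <= start:
            start_i = i
        if pct <= stop:
            stop_i = i
            break
    if start_i is None or stop_i is None:
        return None
    return (stop_i - start_i) * SAMPLE_SECONDS / 60

# Synthetic log: the battery loses 1% every 30 samples (300 seconds).
log = [f"battery: {100 - i // 30}%" for i in range(400)]
print(drain_time_minutes(log, 100, 95))  # 25.0 minutes for a 5% drain
```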


This first set of results is not entirely worthless. Assuming apm did accurately monitor the power level, it shows that the non-wireless-card resource usage (CPU and RAM) due to Snort did not have a significant impact on power consumption (i.e., the CPU and memory did not use more power when running Snort).

Test #2
The second attempt of the first semester still showed no power consumption difference between running Snort and not running it. The following network setup was used:

This network approach was much simpler and took less setup time. We believed that ping -f generated plenty of traffic, because as soon as one ICMP ECHO_REQUEST packet is sent by node 3, another is sent right after it. With Snort running in promiscuous mode, the network would still be filled with traffic. Node 1 ran the following command to monitor the PDA and detect when its network card failed:

while sleep 20; do date; ping -q -c 1 192.168.1.102; done > ping.log &

The external battery was charged for one hour. The results are as follows:

Configuration            Consumption of 1-hour charge (min)
No Snort                 120.50
Snort Full Promiscuous   120.77

Only one trial was run, but running Snort appears to have no effect on power consumption. Snort uses promiscuous mode to monitor network traffic, and one would expect promiscuous mode to use more power as all packets are passed up the stack. In our configuration, however, promiscuous mode had no measurable effect.


Test #3 (conducted second semester)
Most research and reasoning predicts that more transmission leads to greater power consumption. We decided to try yet another test, using a more direct method than “ping -f” of guaranteeing that we were flooding the channel. We used the Navy's MGEN (the Multi-Generator Toolset) to generate 6 Mbps of traffic between a PDA and a laptop. The PDA and laptop were tested in three modes: light traffic without Snort, heavy traffic without Snort, and heavy traffic with Snort. Light traffic consisted of the internal routing messages used by aodvd; no user data was transmitted. Heavy traffic consisted of the 6 Mbps generated by MGEN in addition to any AODV routing messages.

Results were consistent with the belief that transmissions lead to greater battery consumption:

PDA
Configuration              Time to Depletion (min)   1 hr APM Reading (battery %)
No Snort - Light Traffic   122.12                    88.50
No Snort - Heavy Traffic   84.50                     84.00
Snort - Heavy Traffic      84.88                     85.00

Time to Depletion is the time until the jacket battery was empty after a one-hour charge. The 1 hr APM reading is the percentage of battery life left in the main CPU battery after one hour.

Laptop
Configuration              Time to Depletion (min)
No Snort - Light Traffic   62.23
No Snort - Heavy Traffic   51.20
Snort - Heavy Traffic      51.90
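From these tables, the cost of heavy transmission can be expressed as a percentage reduction in battery life. The numbers below are copied from the tables above; the dictionary keys are our shorthand for the three test modes.

```python
# Time-to-depletion minutes, copied from the Test #3 tables above.
pda    = {"light": 122.12, "heavy": 84.50, "heavy_snort": 84.88}
laptop = {"light": 62.23,  "heavy": 51.20, "heavy_snort": 51.90}

def reduction(device):
    # Battery-life loss caused by moving from light to heavy traffic.
    return (device["light"] - device["heavy"]) / device["light"] * 100

print(f"PDA:    {reduction(pda):.1f}% shorter battery life under heavy traffic")
print(f"Laptop: {reduction(laptop):.1f}% shorter battery life under heavy traffic")
# Adding Snort on top of heavy traffic shifts depletion time by under a minute.
```

This works out to roughly a 31% reduction for the PDA and 18% for the laptop, while the heavy-vs-heavy-with-Snort difference is negligible on both devices.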


Time to Depletion here is the time until the laptop's main battery was empty after a one-hour charge; the laptop did not have an external battery powering its wireless card. For both the PDA and the laptop, heavy traffic greatly decreased battery life, while running Snort had no significant effect on it. The PDA results also show that the main CPU battery drained faster under heavy traffic, but again running Snort had no measurable effect.