Low Delay Marking for TCP in Wireless Ad Hoc Networks


Low Delay Marking for TCP in Wireless Ad Hoc Networks

Choong-Soo Lee, Mingzhe Li, Emmanuel Agu, Mark Claypool, Robert Kinicki

Worcester Polytechnic Institute

Apr 17, 2004

2

Introduction

Wireless ad hoc networks use TCP, but TCP, having been designed for wired networks, performs poorly over wireless networks.

Wireless ad hoc networks use Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) and the Request-to-Send/Clear-to-Send (RTS/CTS) mechanism to avoid collisions.

TCP performance suffers from the contention delays and drops known as RTS/CTS jamming and RTS/CTS-induced congestion.

3

Introduction

Previous research to improve TCP performance includes:
- investigation of link breakage and routing failure problems [4] [5] [6]
- link layer/MAC solutions [7] [8] [9]
- protocol modifications [10]

Most of these approaches are link layer optimizations tied to the device drivers rather than the operating system.

4

Motivation

Previous research has been concerned only with improving throughput.

Emerging applications such as interactive multimedia and network games demand low round-trip times.

End-to-end delays will become increasingly important relative to throughput. Throughput is important, but we project a steady increase in maximum wireless network capacity.

5

Proposal

We propose an IP layer solution that modifies the packet queue manager.

Our goal is to improve round-trip times, loss rates and collisions with minimal degradation to throughput.

This facilitates easier deployment, since operating system upgrades/patches can be applied independently of hardware changes.

6

Outline

- Introduction
- Background
- Proposed Mechanism
- Evaluation
- Conclusion and Future Work

7

Explicit Congestion Notification

Traditionally, TCP uses dropped packets as an indication of network congestion. Detecting a drop requires 3 duplicate acknowledgements; a window size below 4 therefore leads to retransmission timeouts and reduces throughput significantly.

Explicit Congestion Notification (ECN) instead uses a previously unused bit (the ECN bit) in the IP header to convey congestion notification.
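As a concrete illustration (not from the slides) of what marking means at the IP layer, the sketch below sets the ECN Congestion Experienced codepoint in an IPv4 header instead of dropping the packet. It assumes the two-bit ECN field of RFC 3168 (the slide's single "ECN bit" refers to the original proposal); the function name is ours, and a real implementation must also recompute the IPv4 header checksum.

```python
ECT0, ECT1, CE = 0b10, 0b01, 0b11   # ECN codepoints in the low two bits of the TOS byte

def mark_congestion(ip_header: bytearray) -> bool:
    """Set CE (Congestion Experienced) if the sender declared ECN capability.
    Returns True if the packet was marked, False if it should be dropped instead."""
    ecn = ip_header[1] & 0b11                       # ECN field: low 2 bits of header byte 1
    if ecn in (ECT0, ECT1):                         # ECN-capable transport
        ip_header[1] = (ip_header[1] & ~0b11) | CE  # (checksum must be updated as well)
        return True
    return False                                    # non-ECN flow: fall back to dropping
```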

8

Link RED and Adaptive Pacing

Link RED (LRED) uses a mechanism similar to Random Early Detection (RED): it keeps an exponentially weighted average of RTS retries and uses it to calculate the dropping/marking probability.

Adaptive Pacing is an additional mechanism that Link RED controls; it adds extra back-off before trying to send a packet.

[8]
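A minimal sketch of the LRED idea, paraphrasing the description above rather than the code of [8]; the weight, thresholds, maximum probability, and the exact amount of pacing back-off are illustrative assumptions.

```python
class LinkRED:
    """Sketch of LRED: an EWMA of RTS retries drives a RED-style mark/drop probability."""

    def __init__(self, weight=0.125, min_th=1.0, max_th=3.0, max_p=0.5):
        self.weight = weight          # EWMA weight for each new retry sample
        self.min_th = min_th          # below this average: never mark/drop
        self.max_th = max_th          # at or above this average: use max_p
        self.max_p = max_p
        self.avg_retries = 0.0

    def on_transmission(self, rts_retries: int) -> float:
        """Update the retry average and return the current mark/drop probability."""
        self.avg_retries = (1 - self.weight) * self.avg_retries + self.weight * rts_retries
        if self.avg_retries < self.min_th:
            return 0.0
        if self.avg_retries >= self.max_th:
            return self.max_p
        # linear ramp between the thresholds, as in classic RED
        return self.max_p * (self.avg_retries - self.min_th) / (self.max_th - self.min_th)

    def extra_backoff(self, packet_tx_time: float) -> float:
        """Adaptive pacing sketch: add roughly one packet transmission time of
        extra back-off when the retry average indicates contention."""
        return packet_tx_time if self.avg_retries >= self.min_th else 0.0
```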

9

Outline

- Introduction
- Background
- Proposed Mechanism
- Evaluation
- Conclusion and Future Work

10

Performance and Window Size

[8] demonstrates that regular TCP does not achieve optimal throughput.

Operating at the optimal window size also provides reasonable delay.

We can adjust the packet marking probability to force TCP to operate around the optimal window size.

11

Low Delay Marking Algorithm

At each node, on receiving packet p:
- identify the flow fi to which p belongs
- estimate hi for fi
- estimate n
- calculate wopt
- calculate pmark
- mark p with probability pmark

p : packet
fi : the i-th flow
hi : the number of wireless hops
n : the total number of flows going through the node
wopt : the optimal window size for fi
pmark : the marking probability
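As a minimal sketch (our own illustration, not the authors' code), the per-flow bookkeeping that the first three steps require might look as follows; the hop and flow estimates referenced in the comments are described on the next slides.

```python
from dataclasses import dataclass, field

@dataclass
class FlowState:
    hops: int = 0                      # h_i, estimated from TTL values (see slide on hops)

@dataclass
class NodeState:
    flows: dict = field(default_factory=dict)   # flow key -> FlowState

    def identify_flow(self, src, dst, sport, dport) -> FlowState:
        """Step 1: identify the flow f_i that packet p belongs to."""
        key = (src, dst, sport, dport)
        return self.flows.setdefault(key, FlowState())

    def flow_count(self) -> int:
        """A naive estimate of n (Morris' bit-vector estimator appears later)."""
        return max(len(self.flows), 1)
```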

12

Optimal Window Size

The optimal window size is a function of the number of hops between the source and destination nodes.

Due to the hidden terminal problem, it is derived that there should be only one packet in transit every 4 hops for optimal TCP throughput. With n flows sharing the h-hop path, the per-flow optimal window size is:

$w_{opt} = \frac{h}{4n}$

13

Number of Hops

We estimate the number of hops from the Time-To-Live (TTL) values in the data packets. Default TTL values are typically 128 or 255.

We keep track of source and destination node pairs to identify each flow. We take the TTL values from packets going one way and packets going the other way, subtract each from the default TTL value, and sum the differences.
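A minimal sketch of this TTL-based estimate (function names and the set of default TTLs are our assumptions): each direction's hop count is the default TTL minus the TTL observed at this node, and the two directions sum to the path length h.

```python
DEFAULT_TTLS = (64, 128, 255)   # common OS defaults (assumption; the slides mention 128 and 255)

def hops_so_far(observed_ttl: int) -> int:
    """Hops already traversed = nearest default TTL at or above the observed value,
    minus the observed value (reliable when hop counts are small, as in these chains)."""
    default = min(t for t in DEFAULT_TTLS if t >= observed_ttl)
    return default - observed_ttl

def path_hops(ttl_one_way: int, ttl_other_way: int) -> int:
    """Estimate h for a flow from packets seen in both directions at this node."""
    return hops_so_far(ttl_one_way) + hops_so_far(ttl_other_way)
```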

14

Number of Flows

We estimate the number of flows using Morris' calculation [17].

We use a fixed-length bit vector v. A packet is hashed based on its source and destination addresses and port numbers, and the corresponding bit in v is set.

The bits in v are cleared at a certain rate, along with the corresponding hop-count state.
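A minimal sketch of this bit-vector flow estimate, based only on the description above; the vector size, hash function, and the simple popcount estimate are our assumptions (Morris' estimator in [17] also corrects for hash collisions).

```python
import hashlib

class FlowCounter:
    def __init__(self, num_bits: int = 64):
        self.num_bits = num_bits
        self.bits = 0

    def record(self, src, dst, sport, dport) -> None:
        """Hash the flow identifiers and set the corresponding bit in v."""
        key = f"{src}:{sport}-{dst}:{dport}".encode()
        idx = int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % self.num_bits
        self.bits |= 1 << idx

    def estimate(self) -> int:
        """Estimate n as the number of set bits (a simplification that ignores collisions)."""
        return max(bin(self.bits).count("1"), 1)

    def clear(self) -> None:
        """Called periodically; stale per-flow hop state would be cleared along with it."""
        self.bits = 0
```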

15

Marking Probability

We use Morris' formula that links the overall loss rate to the TCP window size, and we treat the overall loss rate as equivalent to the marking probability.

Substituting the previously calculated/estimated values, we obtain:

$p = \frac{0.76}{w^2}$

$p_{mark} = \frac{0.76}{w_{opt}^2} = \frac{0.76}{(h/4n)^2} = \frac{12.16\,n^2}{h^2}$
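For illustration (not a case from the slides): with $h = 16$ and $n = 1$, $w_{opt} = 16/4 = 4$ and $p_{mark} = 0.76/4^2 = 12.16/16^2 \approx 0.048$.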

16

Marking Probability

However, this is the overall marking probability, NOT the per-node probability. We distribute the overall marking probability uniformly across all nodes along the path:

$p_{node} = 1 - (1 - p_{mark})^{1/h} = 1 - \left(1 - \frac{12.16\,n^2}{h^2}\right)^{1/h}$
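Putting the formulas together, a minimal sketch (our illustration, with names of our choosing and a clamp to keep probabilities valid) of the per-node marking decision:

```python
import random

def overall_marking_probability(h: int, n: int) -> float:
    """p_mark = 0.76 / w_opt^2 with w_opt = h / (4n), i.e. 12.16 n^2 / h^2."""
    return min(12.16 * n * n / float(h * h), 1.0)

def per_node_marking_probability(h: int, n: int) -> float:
    """Distribute p_mark uniformly over the h nodes: (1 - p_node)^h = 1 - p_mark."""
    p_mark = overall_marking_probability(h, n)
    return 1.0 - (1.0 - p_mark) ** (1.0 / h)

def should_mark(h: int, n: int) -> bool:
    """Per-packet decision at one node: mark with probability p_node."""
    return random.random() < per_node_marking_probability(h, n)

# Example: a 16-hop path with one flow gives p_mark ~ 0.048 and p_node ~ 0.003.
```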

17

Outline

- Introduction
- Background
- Proposed Mechanism
- Evaluation
- Conclusion and Future Work

18

Simulation Setup

We used NS-2 to implement and evaluate LDM:
- LRED and Adaptive Pacing implementations
- LDM implementation (hard-coded version)

Wireless multi-hop chain network:
- For an h-hop network, we need h+1 nodes (n0 to nh).
- All TCP flows go from n0 to nh.
- We tested 7-hop, 15-hop and 24-hop networks.
- All TCP flows use TCP NewReno.

19

Single Flow Experiment

Roundtrip Time

[Plot: round-trip time (s) vs. number of hops (5-25) for Original TCP, Restrained TCP, Adaptive Pacing, and LDM]

20

Single Flow Experiment

Normalized Throughput

[Plot: normalized throughput vs. number of hops (5-25) for Original TCP, Restrained TCP, Adaptive Pacing, and LDM]

21

Multiple Flow Experiment

Roundtrip Time

[Plot: round-trip time (s) vs. number of hops (5-25) for Original TCP, Restrained TCP, Adaptive Pacing, and LDM]

22

Multiple Flow Experiment

Normalized Throughput

[Plot: normalized throughput vs. number of hops (5-25) for Original TCP, Restrained TCP, Adaptive Pacing, and LDM]

23

Summary

Performance Comparison to Regular TCP

                   Single Flow        Multiple Flows
Category (Hops)    7    15    24      7    15    24
Throughput         +    0     0       +    +     0
Round-Trip Time    +    +     +       +    +     +
Loss Rate          +    +     +       +    +     +
RTS Collisions     0    +     +       0    +     +

+ : better by more than 10%   0 : within 10%   – : worse by more than 10%

24

Summary

Performance Comparison to Adaptive Pacing

                   Single Flow        Multiple Flows
Category (Hops)    7    15    24      7    15    24
Throughput         0    –     –       0    –     –
Round-Trip Time    +    +     +       +    +     +
Loss Rate          0    0     0       +    +     +
RTS Collisions     +    +     +       +    +     +

+ : better by more than 10%   0 : within 10%   – : worse by more than 10%

25

Outline

- Introduction
- Background
- Proposed Mechanism
- Evaluation
- Conclusion and Future Work

26

Conclusion

Low Delay Marking (LDM) is an IP layer approach. lowers delay and loss rate without sacrificing throughput.

round-trip time up to 57.6% loss rate up to 59.5%

reduces MAC layer congestion.

We successfully implemented and evaluated Low Delay Marking (LDM) in NS-2.

27

Future Work

All of our evaluation was done with the number of hops and number of flows known ahead of time at each node. Future work includes:
- implementation and evaluation of the hop and flow counting techniques
- investigation of LDM performance over more complex topologies, such as crosses and grids, to evaluate robustness