
RED QUEUE'S OCCUPANCY AND PERFORMANCE

A THESIS SUBMITTED TO THE GRADUATE DIVISION OF THE UNIVERSITY OF HAWAI'I IN PARTIAL FULFILLMENT OF THE

REQUIREMENTS FOR THE DEGREE OF

MASTER OF SCIENCE

IN

ELECTRICAL ENGINEERING

August 2004

By

Xiaogang Wang

Thesis Committee:

Galen Sasaki, Chairperson

N. Thomas Gaarder

Nancy Reed


Acknowledgement

I would like to thank my advisor Dr. Sasaki for his guidance, patience, and

encouragement throughout my research and study at the Department of Electrical

Engineering. I would also like to express my sincere gratitude to my committee members

Dr. N. Thomas Gaarder and Dr. Nancy Reed for advice and discussion on my thesis.


Abstract

Random Early Detection (RED) is a queue management algorithm used in packet

routers of Internet Protocol (IP) networks. It drops/marks packets with a certain

probability before packet buffers overflow with the aim of improving TCP performance.

RED and other queue management algorithms that drop packets early to adjust network

traffic are classified as Active Queue Management (AQM). This contrasts with

traditional queue management that drops packets only when packet buffers overflow.

The RED algorithm is composed of three parts: a queue size averaging

mechanism, a drop probability function, and a dropping algorithm. When a packet

arrives at a RED queue, RED decides to enqueue or drop/mark the packet using the three

parts in its computation.

In this thesis, we survey current AQM algorithms and discuss their strengths and

limitations. We thoroughly investigate RED queue's occupancy and performance by

testing the three parts of its algorithm separately. We derive an upper bound on RED's

required buffering and investigate the effect of the queue size averaging mechanism on

RED's performance through simulations. We define an extended RED model and use it

to test how the parameters of the drop probability function affect the model's

performance. Based on the simulation results, an improved AQM drop

probability function is proposed. We also propose two dropping algorithms that decide

how a packet is dropped. One of these two algorithms shows improvement over the

original dropping algorithm of RED in the simulations.


Table of Contents

Acknowledgement iii

Abstract iv

List of Figures vii

CHAPTER 1 INTRODUCTION 1

1.1 Random Early Detection Algorithm 3

1.2 Gentle RED modification 7

1.3 Problems of *RED 7

1.4 Purpose and organization of the thesis 9

CHAPTER 2 RELATED WORK 11

2.1 Introduction to NS-2 simulator 11

2.2 Previous simulation and analysis results 13

2.3 Other AQM Algorithms 14

CHAPTER 3 INFORMATION ON QUEUE PERFORMANCE 19

3.1 Trace file and record file 19

3.2 Information on queue performance 20

CHAPTER 4 QUEUE SIZE AVERAGING MECHANISM 25

4.1 An upper bound on *RED's maximum buffer size 25

4.2 The effect of queue size averaging mechanism 28

CHAPTER 5 DROP PROBABILITY FUNCTION 32

5.1 An extended RED model 32

5.2 Effect of ERED's drop probability function 33


5.2.1 Parameter max_th 34

5.2.2 Parameter limit_p 36

5.2.3 Parameter max_p 39

5.2.4 Parameter min_th 42

5.3 Discussion of AQM algorithms 42

5.3.1 Queue size 46

5.3.2 Link utilization 47

5.3.3 Average file transfer delay 49

5.3.4 Number of packets dropped 50

5.4 Summary 52

CHAPTER 6 AQM DROPPING ALGORITHMS 53

6.1 Two different dropping algorithms 53

6.2 Comparison of the three dropping algorithms 57

6.2.1 Queue size 58

6.2.2 Link utilization 59

6.2.3 Average file transfer delay 60

6.2.4 Number of packets dropped 62

6.3 Summary 63

CHAPTER 7 CONCLUSION 64

Bibliography 66


List of Figures

Figure Page

FIGURE 1.1: IP PACKET ROUTER 2

FIGURE 1.2: DROP PROBABILITY FUNCTION OF RED 4

FIGURE 1.3: RED ALGORITHM 6

FIGURE 1.4: DROP PROBABILITY FUNCTION OF GRED 7

FIGURE 3.1: NETWORK SIMULATION SCENARIO 21

FIGURE 3.2: SAMPLE SIMULATION RESULT OF QUEUE SIZE 22

FIGURE 3.3: SAMPLE SIMULATION RESULT OF FLOW DELAY 23

FIGURE 3.4: SAMPLE SIMULATION RESULT OF PACKET DROP 23

FIGURE 3.5: SAMPLE SIMULATION RESULT OF LINK UTILIZATION 24

FIGURE 4.1: MAXIMUM BUFFER SIZE FOR QUEUE WEIGHT = 0.005 28

FIGURE 4.2A: MAXIMUM AND MINIMUM QUEUE OCCUPANCY 30

FIGURE 4.2B: LINK UTILIZATION 30

FIGURE 4.2C: AVERAGE FILE TRANSFER DELAY 30

FIGURE 4.2D: MAXIMUM FILE TRANSFER DELAY 31

FIGURE 4.2E: NUMBER OF PACKETS DROPPED 31

FIGURE 5.1: ERED DROP PROBABILITY FUNCTION 33

FIGURE 5.2A: MAXIMUM AND MINIMUM QUEUE OCCUPANCY 35

FIGURE 5.2B: LINK UTILIZATION 35

FIGURE 5.2C: AVERAGE FILE TRANSFER DELAY 35

FIGURE 5.2D: MAXIMUM FILE TRANSFER DELAY 36


FIGURE 5.2E: NUMBER OF PACKETS DROPPED 36

FIGURE 5.3A: MAXIMUM AND MINIMUM QUEUE OCCUPANCY 37

FIGURE 5.3B: LINK UTILIZATION 38

FIGURE 5.3C: AVERAGE FILE TRANSFER DELAY 38

FIGURE 5.3D: MAXIMUM FILE TRANSFER DELAY 38

FIGURE 5.3E: NUMBER OF PACKETS DROPPED 39

FIGURE 5.4A: MAXIMUM AND MINIMUM QUEUE OCCUPANCY 40

FIGURE 5.4B: LINK UTILIZATION 40

FIGURE 5.4C: AVERAGE FILE TRANSFER DELAY 41

FIGURE 5.4D: MAXIMUM FILE TRANSFER DELAY 41

FIGURE 5.4E: NUMBER OF PACKETS DROPPED 41

FIGURE 5.5A: MAXIMUM AND MINIMUM QUEUE OCCUPANCY 43

FIGURE 5.5B: LINK UTILIZATION 43

FIGURE 5.5C: AVERAGE FILE TRANSFER DELAY 43

FIGURE 5.5D: MAXIMUM FILE TRANSFER DELAY 44

FIGURE 5.5E: NUMBER OF PACKETS DROPPED 44

FIGURE 5.6: DROP PROBABILITY FUNCTION OF MRED 45

FIGURE 5.7A: MAXIMUM AND MINIMUM QUEUE OCCUPANCY OVER NUMBER OF

CONNECTIONS 46

FIGURE 5.7B: QUEUE SIZE OVER LINK CAPACITY 46

FIGURE 5.7C: QUEUE SIZE OVER LINK PROPAGATION DELAY 47

FIGURE 5.8A: LINK UTILIZATION OVER NUMBER OF CONNECTIONS 48

FIGURE 5.8B: LINK UTILIZATION OVER LINK CAPACITIES 48


FIGURE 5.8C: LINK UTILIZATION OVER LINK PROPAGATION DELAY 49

FIGURE 5.9A: AVERAGE FILE TRANSFER DELAY OVER NUMBER OF CONNECTIONS 49

FIGURE 5.9B: AVERAGE FILE TRANSFER DELAY OVER LINK CAPACITY 50

FIGURE 5.9C: AVERAGE FILE TRANSFER DELAY OVER LINK PROPAGATION DELAY 50

FIGURE 5.10A: NUMBER OF PACKETS DROPPED OVER NUMBER OF CONNECTIONS 51

FIGURE 5.10B: NUMBER OF PACKETS DROPPED OVER LINK CAPACITY 51

FIGURE 5.10C: NUMBER OF PACKETS DROPPED OVER LINK PROPAGATION DELAY 52

FIGURE 6.1: RED DROPPING ALGORITHM 54

FIGURE 6.2: MRED DROPPING ALGORITHM 54

FIGURE 6.3: ARED DROPPING ALGORITHM 55

FIGURE 6.4: NRED DROPPING ALGORITHM 56

FIGURE 6.5A: QUEUE SIZE OVER NUMBER OF CONNECTIONS 58

FIGURE 6.5B: QUEUE SIZE OVER LINK CAPACITY 58

FIGURE 6.5C: QUEUE SIZE OVER LINK PROPAGATION DELAY 59

FIGURE 6.6A: LINK UTILIZATION OVER NUMBER OF CONNECTIONS 59

FIGURE 6.6B: LINK UTILIZATION OVER LINK CAPACITY 60

FIGURE 6.6C: LINK UTILIZATION OVER LINK PROPAGATION DELAY 60

FIGURE 6.7A: AVERAGE FILE TRANSFER DELAY OVER NUMBER OF CONNECTIONS 61

FIGURE 6.7B: AVERAGE FILE TRANSFER DELAY OVER LINK CAPACITY 61

FIGURE 6.7C: AVERAGE FILE TRANSFER DELAY OVER LINK PROPAGATION DELAY 61

FIGURE 6.8A: NUMBER OF PACKETS DROPPED OVER NUMBER OF CONNECTIONS 62

FIGURE 6.8B: NUMBER OF PACKETS DROPPED OVER LINK CAPACITY 62

FIGURE 6.8C: NUMBER OF PACKETS DROPPED OVER LINK PROPAGATION DELAY 63


CHAPTER 1

INTRODUCTION

Congestion in the bottleneck links in Internet Protocol (IP) networks has always

been a major problem. In conventional IP networks, congestion management is left to

end points and the protocols running above the IP layer.

Transmission Control Protocol (TCP) is the dominant transport protocol in the IP

network. The current TCP congestion control algorithm was developed in 1988 [7,13]

and has been of crucial importance in preventing congestion collapse. TCP infers that

there is congestion in the network by detecting packet losses, and then its congestion

control algorithm is invoked to alleviate congestion. TCP initially increases its

transmission rate until it detects a packet loss. A packet loss is inferred by the receipt of

duplicate acknowledgement packets (ACKs) or by timeout.

Note that congestion occurs at IP packet routers. Figure 1.1 shows a packet router

with its input and output links, each having a packet buffer. Congestion can build up at

the buffers of the output links. Traditionally, these buffers have the simple management

algorithm of dropping packets when they overflow. This technique is known as Drop Tail,

and is classified as passive queue management.

Sally Floyd and Van Jacobson proposed the Random Early Detection (RED)

queue management mechanism in 1993 [1]. They believed that Drop Tail interacts badly

with the TCP congestion control mechanism due to the following problems [1, 2, 3, 13,

20]:

(1) The router has no control over packet drops.


(2) There is little ability to accommodate transient congestion.

Figure 1.1: IP Packet Router (internal switch, i.e., crossbar, with packet buffers on the links)

(3) TCP connections could become synchronized in behavior (so-called "global

synchronization") resulting in reduced aggregate throughput.

(4) There may be biases against bursty traffic.

(5) There is the possibility of unfair treatment of TCP connections sharing the

same link.

To avoid these problems, RED tries to detect the congestion before the packet

buffer overflows. If it suspects congestion, it drops packets early to signal the congestion

to the end nodes. An alternative to dropping packets is to set the marking bits in the

Explicit Congestion Notification (ECN) field of the IP packet. With this approach, TCP

has early indication of congestion before serious congestion occurs, and tends to avoid

the five problems of Drop Tail. It is claimed that RED can provide better link utilization

and service quality than Drop Tail.


RED provides control at intermediate routers to complement the TCP end system

congestion control and is classified as active queue management (AQM). The Internet

Engineering Task Force (IETF) issued RFC 2309 to recommend the deployment of

RED as an AQM method in 1997 [13]. Cisco, an Internet router company, began to

deploy RED in their products in recent years. Since RED's introduction, several

enhancements have been proposed.

In this chapter, we first introduce RED and its "gentle" improvements in Sections

1.1 and 1.2. Then we discuss RED's problems in Section 1.3. In Section 1.4, we

describe the organization of this thesis.

1.1 Random Early Detection Algorithm

RED [1, 2, 3, 8] manages the packet buffer by dropping/marking packets with a

certain probability before the buffer overflows. The probability of dropping/marking

packets is dependent on an average packet queue length. To simplify our discussion, we

separate the RED algorithm into three parts: the algorithm for computing the average

queue size, which is called the queue size averaging mechanism; the algorithm to

compute the drop probability, which maps average queue size to drop probabilities; and

the algorithm that determines if a packet is dropped/marked, which is called the dropping

algorithm. Note that these calculations are performed at link buffers in routers.

The average queue size avg is calculated by the exponential weighted moving

average (EWMA) method. In particular, whenever a packet arrives at a nonempty queue,

avg is updated based on the recursive equation

avg = avg * (1 - q_w) + q * q_w,


where q is the instantaneous queue length and q_w is a parameter between 0 and 1.

When a packet arrives at an empty queue, avg is updated by

avg = avg * (1 - q_w)^m,

where m is an estimate of the number of typical small packets that could have been

transmitted by the router during the idle period.
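As an illustration, the two EWMA update rules can be sketched in Python (the function name and sample values below are ours, not from the thesis):

```python
def update_avg(avg, q, q_w, idle_packets=0):
    """One EWMA update of the average queue size on a packet arrival.

    avg: current average queue size
    q: instantaneous queue length (ignored when the queue was empty)
    q_w: queue weight, 0 < q_w < 1
    idle_packets: m, estimated small packets the link could have sent while idle
    """
    if idle_packets > 0:                    # packet arrives at an empty queue
        return avg * (1 - q_w) ** idle_packets
    return avg * (1 - q_w) + q * q_w        # packet arrives at a nonempty queue

# Hypothetical trace of instantaneous queue lengths at a nonempty queue.
avg = 0.0
for q in [5, 8, 12, 10, 15]:
    avg = update_avg(avg, q, q_w=0.002)
```

With a small q_w such as 0.002, avg tracks the long-run queue occupancy and reacts only slowly to short bursts, which is the filtering effect discussed later in the thesis.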

The incoming packet's drop probability Pb is computed according to the curve of

Figure 1.2, which is called the drop probability function. The average queue size avg is

compared to two thresholds, a minimum threshold min_th and a maximum threshold

max_th. If avg is less than min_th, the packet is accepted. If avg is greater than max_th,

the packet is dropped/marked. If avg is between min_th and max_th, the packet-marking

probability Pb is computed according to the drop probability function, i.e.

Pb = max_p * (avg - min_th)/(max_th - min_th).

Figure 1.2: Drop probability function of RED
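This piecewise mapping from avg to Pb can be written out directly (a sketch; the threshold values in the assertions are arbitrary):

```python
def red_drop_probability(avg, min_th, max_th, max_p):
    """RED's drop probability Pb as a function of the average queue size avg."""
    if avg < min_th:
        return 0.0      # accept the packet
    if avg >= max_th:
        return 1.0      # drop/mark the packet
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg - min_th) / (max_th - min_th)
```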

Finally, the dropping algorithm determines if the packet is dropped/marked. First,

another packet dropping/marking probability Pa is calculated by

Pa = Pb / (1 - count * Pb),


where count is the number of enqueued packets since the last dropped/marked packet.

Then the packet is dropped/marked with probability Pa.

Note that if Pb remains constant, then count is uniformly distributed over [1,

1/Pb], each value occurring with probability Pb, since

P[count = n] = Pb/(1 - n*Pb) * prod_{i=0}^{n-1} (1 - Pb/(1 - i*Pb)) = Pb.

Thus, the number of arriving packets between dropped/marked packets is random and

uniformly distributed. This reduces the chance that there will be a burst of drops. Note

that TCP behaves badly when packets are dropped in bursts. Also note that for a

particular connection its packets are dropped/marked at a rate that is roughly proportional

to that connection's share of the bandwidth at the router. In this sense, RED is fair.
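The uniformity of the drop intervals is easy to check empirically. The following sketch simulates the dropping algorithm with a constant Pb (the trace length and seed are arbitrary):

```python
import random

def gaps_between_drops(pb, n_packets, seed=1):
    """Simulate RED's dropping algorithm with a constant Pb and record the
    number of arrivals between consecutive dropped/marked packets."""
    rng = random.Random(seed)
    gaps, since_drop = [], 0
    for _ in range(n_packets):
        # Pa grows with the number of packets accepted since the last drop.
        pa = pb / (1 - since_drop * pb) if since_drop * pb < 1 else 1.0
        since_drop += 1
        if rng.random() < pa:
            gaps.append(since_drop)
            since_drop = 0
    return gaps

gaps = gaps_between_drops(pb=0.1, n_packets=100_000)
# The gap between drops never exceeds 1/pb = 10 packets.
```

With pb = 0.1, the recorded gaps stay in [1, 10] and are spread roughly evenly, matching the derivation above.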

Figure 1.3 shows the RED algorithm. Note that there are two selected parameters

wait and drop_rand. They allow options that make RED drop more randomly. When

wait is true, RED will choose a larger interval between two packet drops. When

drop_rand is set to true and the average queue size exceeds max_th, RED will enqueue an

arriving packet, and then drop a randomly chosen packet inside the queue instead of

dropping the arriving packet.

In addition, the Explicit Congestion Notification (ECN) mechanism [10, 11]

enables RED to mark the packets instead of dropping them to notify congestion to the

end nodes while avg is between min_th and max_th. However, when avg exceeds

max_th, RED will drop the incoming packets even if the queue is not full [11]. When a

TCP receiver receives an ECN-packet, it will send ECN-ACK for acknowledgement. A


TCP sender responds to ECN-ACK by reducing its transmission rate as if a packet loss

were detected.

Saved variables:
    avg: average queue size
    q_time: start of the queue idle time
    count: packets since last marked packet

Fixed parameters:
    q_w: queue weight
    min_th: minimum threshold for queue
    max_th: maximum threshold for queue
    max_p: maximum value for pb
    s: typical transmission time

Selected parameters:
    wait: true or false
    drop_rand: true or false

Other:
    pa: current packet-marking probability
    q: current queue size
    time: current time
    R: random number from [0,1)

Initialization:
    avg = 0, count = 0

For each packet arrival:
    Calculate the new average queue size avg:
        if queue is nonempty
            avg = (1 - q_w)*avg + q_w*q
        else
            m = (time - q_time)/s
            avg = avg*(1 - q_w)^m
    Determine if the packet should be dropped/marked:
        if min_th <= avg < max_th
            count++
            pb = max_p*(avg - min_th)/(max_th - min_th)
            if wait is false then pa = pb/(1 - count*pb)
            else if count*pb < 1 then pa = 0
            else pa = pb/(2 - count*pb)
            R = random[0,1)
            if R < pa
                drop/mark the arriving packet, count = 0
            else enqueue the arriving packet
        else if max_th <= avg
            if drop_rand is false then drop the arriving packet
            else enqueue the arriving packet and
                drop one packet inside the queue randomly
            count = 0
        else count = 0

When queue becomes empty:
    q_time = time

Figure 1.3: RED algorithm
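A compact Python rendering of the arrival processing in Figure 1.3 might look as follows (a sketch only: queue service is simplified, and the parameter values are typical but arbitrary):

```python
import random

class RedQueue:
    """Sketch of RED's per-arrival processing (drop/mark decision only)."""

    def __init__(self, q_w=0.002, min_th=5, max_th=15, max_p=0.1,
                 s=0.001, wait=False, drop_rand=False, seed=0):
        self.q_w, self.min_th, self.max_th, self.max_p = q_w, min_th, max_th, max_p
        self.s, self.wait, self.drop_rand = s, wait, drop_rand
        self.avg, self.count = 0.0, 0
        self.q = []            # the packet queue itself
        self.q_time = 0.0      # start of the last idle period
        self.rng = random.Random(seed)

    def arrive(self, pkt, now):
        # Part 1: queue size averaging mechanism.
        if self.q:
            self.avg = (1 - self.q_w) * self.avg + self.q_w * len(self.q)
        else:
            m = (now - self.q_time) / self.s
            self.avg *= (1 - self.q_w) ** m
        # Parts 2 and 3: drop probability function and dropping algorithm.
        if self.min_th <= self.avg < self.max_th:
            self.count += 1
            pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if not self.wait:
                pa = pb / (1 - self.count * pb) if self.count * pb < 1 else 1.0
            elif self.count * pb < 1:
                pa = 0.0
            else:
                pa = pb / (2 - self.count * pb) if self.count * pb < 2 else 1.0
            if self.rng.random() < pa:
                self.count = 0
                return 'drop'
            self.q.append(pkt)
            return 'enqueue'
        if self.avg >= self.max_th:
            self.count = 0
            if self.drop_rand:
                self.q.append(pkt)   # enqueue, then drop a random queued packet
                self.q.pop(self.rng.randrange(len(self.q)))
            return 'drop'
        self.count = 0
        self.q.append(pkt)
        return 'enqueue'

    def depart(self, now):
        if self.q:
            self.q.pop(0)
            if not self.q:
                self.q_time = now
```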


1.2 Gentle RED modification

RED has been shown to easily lead to serious oscillatory behavior due to its

abruptly changing drop probability from max_p to 1 when avg reaches max_th [6, 14].

To address this, Floyd later recommended using the "gentle" variant of RED [6]. Gentle

RED (GRED) has a different dropping probability Pb function from RED, as shown in

Figure 1.4.

The dropping probability of GRED increases linearly between max_th and

2*max_th with a slope (1 - max_p)/max_th. Then Pb increases slowly after avg exceeds

max_th rather than the abrupt jump to 1 as for RED. In later discussions, whenever our

remarks refer to RED or GRED we denote this by *RED.

Figure 1.4: Drop probability function of GRED
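GRED's Pb can be sketched the same way as RED's, adding the second linear segment between max_th and 2*max_th (threshold values in the assertions are arbitrary):

```python
def gred_drop_probability(avg, min_th, max_th, max_p):
    """GRED's drop probability Pb: linear from 0 to max_p on [min_th, max_th],
    then linear from max_p to 1 on [max_th, 2*max_th]."""
    if avg < min_th:
        return 0.0
    if avg < max_th:
        return max_p * (avg - min_th) / (max_th - min_th)
    if avg < 2 * max_th:
        # the "gentle" segment, with slope (1 - max_p)/max_th
        return max_p + (1 - max_p) * (avg - max_th) / max_th
    return 1.0
```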

1.3 Problems of *RED

While the idea of *RED is certainly an improvement over traditional Drop Tail

queue management, its performance is sensitive to its four control parameters: min_th,

max_th, max_p, and q_w. Furthermore, a correctly parameterized *RED queue is highly

dependent on the given network situation. In other words, it is difficult to find robust


parameter settings. For example, a *RED queue that works fine for a fixed number of

TCP flows may behave like a Drop Tail queue when there are many more flows [15].

How to properly configure these parameters has been the subject of many studies [1, 4, 7,

15, 18]. Other studies [15, 18, 27] show that *RED can improve queue management's

performance, but only for "well-configured" *RED under specific traffic loads. Floyd

and Jacobson, the inventors of the *RED algorithms, originally gave a recommendation

of *RED parameters in November 1997 [1]. More recent recommendations are given in

[4] and [7]. However, the current recommendations fail to provide the desired

performance over a wide range of scenarios. The robust setting of *RED parameters is

still an open question.

Since there is difficulty in setting *RED parameters, some researchers conclude

that RED exhibits no clear advantages over Drop Tail over a wide range of load types

[14, 16, 18] and the *RED algorithm does not significantly improve performance [14, 15,

16, 18, 19]. In some cases, its performance is worse than Drop Tail [15, 18]. Though

*RED decreases delay, it also drops more packets than Drop Tail. Fairness does not

show up clearly in *RED. More seriously, queue size oscillation exists in *RED [19,29],

i.e., the queue size occupancy oscillates between empty and full. This reduces

throughput and service quality. Therefore, some researchers have proposed

modifications of and alternatives to *RED [21, 22, 23, 24, 25, 26, 28] and others suggest

that *RED should not be used at all [17, 19].


1.4 Purpose and organization of the thesis

In this thesis, we attempt to understand how the three parts of the *RED algorithm

affect its occupancy and performance through simulation and analysis.

In Chapter 2, we first describe the well-known network simulation tool that is

commonly used in AQM research, called Network Simulator version 2 (NS-2) [5, 8].

Then we discuss previous research. This includes other AQM algorithms and

mathematical analyses to understand AQM.

In Chapter 3, we describe how we get queue occupancy and performance

information for our investigations. Unlike other research that focuses on one or two

aspects of RED's performance, our evaluation is based on multiple aspects: queue size

occupancy, number of packets dropped, average and maximum file transfer delays, and

link utilization.

In Chapter 4, we investigate *RED's queue size averaging mechanism in detail.

First, we provide an upper bound on the queue size for *RED. Most researchers assume

infinite or very large buffer sizes [14, 18] in their analyses and simulations. However, our

bounds show these assumptions are unnecessary. Then we investigate how the queue size

averaging mechanism affects *RED's performance through simulations.

In Chapter 5, we investigate how the drop probability function affects *RED

performance. To facilitate the study, we define an AQM that is a generalization of RED,

GRED, and Drop Tail. We refer to this AQM as Extended RED (ERED). By simulation,

we show how the performance of ERED is affected by the parameter values of its drop

probability function.


Then, based on our investigation, we propose a modified RED (MRED) drop

probability function. We run simulations to compare the performance of MRED with

those of Drop Tail and GRED with widely accepted parameter settings.

In Chapter 6, we propose two new dropping algorithms. We compare the

performance of the new dropping algorithms with that of RED's dropping algorithm.

In Chapter 7, we summarize our results.


CHAPTER 2

RELATED WORK

Many researchers have studied *RED and proposed improvements to it. In

Section 2.1, we describe the NS-2 simulator. In Section 2.2, we briefly discuss

three representative studies. In Section 2.3, we describe seven recently proposed AQM

algorithms.

2.1 Introduction to NS-2 simulator

NS-2 is an event-driven network simulator developed at UC Berkeley that

simulates computer networks [5, 8]. It implements (i) network protocols such as TCP

and UDP (User Datagram Protocol); (ii) traffic source behavior such as FTP, Telnet,

Web, Constant Bit Rate (CBR) and Variable Bit Rate (VBR); (iii) router queue

management mechanisms such as Drop Tail and *RED; (iv) routing algorithms such as

Dijkstra, Bellman-Ford, multicasting; and (v) some of the Medium Access Control

(MAC) layer protocols for Local Area Network (LAN) simulation.

NS-2 is an object-oriented simulator, written in C++, with an OTCL (Tool

Command Language with object-oriented extensions) interpreter as the control language for

simulation. The simulator supports a class hierarchy in C++, and a similar hierarchy

within the OTCL interpreter. The two hierarchies are closely related to each other. From

the user's perspective, there is a one-to-one correspondence between a class in the

interpreted hierarchy and one in the compiled hierarchy. The root of this hierarchy is the

class TclObject. Users create new simulator objects through the interpreter. These


objects are instantiated within the interpreter, and are closely mirrored by a

corresponding object in the compiled hierarchy.

NS-2 uses two languages because it has two things to do. On the one hand,

detailed simulations of the protocols require a system programming language that can

efficiently manipulate bytes, packet headers, and implement algorithms that run over

large data sets. For these tasks, run-time speed is important. The C++ programming

language is slow to change, but its speed makes it suitable for protocol implementation.

Packet processing, routing and other computation intensive activities are implemented in

C++ for the purpose of speed.

On the other hand, a large part of network research involves slightly varying

parameters or configurations, or exploring a number of scenarios. In these cases,

iteration time (time to change the model and re-run) is more important. Since each

configuration runs once, the run-time of this part is less important. OTCL runs slower but

can be changed very quickly, which makes it ideal for simulation configuration. Setting

up nodes and links, generating a topology and controlling its behavior can be done using

OTCL.

NS-2 can open Network Animator (NAM) windows for a graphical visualization

of the network activities. The simulation result can also be dumped into one or more

trace files.

For our simulations, NS-2 already has the implementation of Drop Tail and

*RED. For new AQM algorithms, we developed C++ classes and corresponding OTCL

classes.


2.2 Previous simulation and analysis results

In this section, we describe three previous works that focus on understanding

AQM algorithms.

Analysis of AQM over a wide range of TCP loads

Chung and Claypool [18] investigated the effect of TCP load on AQM's

performance by running a series of NS-2 simulations. Their simulation setup is 100

TCP/FTP connections that go through a bottleneck link, which has AQM and an infinite

buffer. The version of TCP is NewReno. Their AQM has a fixed drop probability. This

is different from *RED, which has a drop probability that depends on the average queue

size avg. They observed that the average queue size for a given drop rate is affected by

the number of TCPIFTP connections, link capacity, and round trip times. They also

studied how the drop rate affects the TCP average congestion window size. They

concluded the following:

(1) The average queue size increases linearly with an increase in the number of

connections

(2) The average queue size decreases linearly with an increase of the link

bandwidth

(3) The average queue size decreases linearly with a decrease of the round trip

time

(4) If the maximum congestion window for TCP is large enough, its average

congestion window is dependent only on the packet drop rate.


Dynamics of TCP/RED

Doyle et al. [29] investigated the dynamic behavior of the TCP window and the

instability of RED. They mathematically modeled and analyzed TCP's congestion

window mechanism and ran NS-2 simulations. They concluded the following.

(1) Oscillations in the occupancy of the link buffers are an inevitable outcome of

the TCP protocol. This also leads to oscillations in the TCP transmission rate.

(2) TCP/RED becomes unstable when the delay, or capacity, is large or the

number of TCP connections is small.

Control theoretic analysis

Hollot et al. [30, 31, 32] investigated the stability of AQM from a control theoretic

standpoint. A TCP/AQM system can be modeled as a feedback control system, where

the AQM is the controller or compensator. The compensator should be designed to

provide a "stable" closed-loop system. One of the results from this analysis is that the

average queue size mechanism in *RED works as a low pass filter in the control system.

This low pass filter introduces a lag in the feedback congestion information, so the

system becomes unstable. A method to improve *RED's stability is to remove the low

pass filter. In the *RED context, this corresponds to obtaining the loss probability from

the instantaneous queue size q instead of the average queue size avg.

2.3 Other AQM Algorithms

RED has problems, so many researchers are seeking alternatives and

improvements to *RED. In this section, we describe seven proposed AQM algorithms.

Refined RED and Adaptive RED


Refined RED [23] is a variant of RED. In contrast to RED, which has preset and

fixed q_w and max_p, Refined RED monitors the actual queue size and dynamically

changes the value of q_w according to the difference between actual queue size and

average queue size avg. It also dynamically adjusts the value of max_p based on avg.

Refined RED aims to effectively prevent buffer overflow, suppress queue size oscillation,

and reduce packet delay.

Another similar variant is Adaptive RED [21], which adaptively changes max_p

depending on avg, but q_w in Adaptive RED is kept constant.
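As an illustration of the adaptive idea, a periodic max_p adjustment might look like the following sketch (the target band, step sizes, and bounds are ours for illustration, not the published Adaptive RED constants):

```python
def adapt_max_p(max_p, avg, min_th, max_th, alpha=0.01, beta=0.9):
    """One periodic Adaptive RED-style adjustment of max_p.

    If avg sits above a target band between the thresholds, drop more
    aggressively (additive increase); if it sits below, back off
    (multiplicative decrease). Constants are illustrative.
    """
    target_lo = min_th + 0.4 * (max_th - min_th)
    target_hi = min_th + 0.6 * (max_th - min_th)
    if avg > target_hi:
        return min(max_p + alpha, 0.5)    # avg too high: raise max_p
    if avg < target_lo:
        return max(max_p * beta, 0.01)    # avg too low: lower max_p
    return max_p                          # avg inside the target band
```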

Flow Random Early Drop (FRED)

Lin and Morris [24] proposed FRED to address fairness issues in RED. They

observed that TCP connections sharing a common *RED queue could use different

amounts of bandwidth if they have different congestion window sizes and/or round trip

times (RTTs). The unfairness comes from the fact that at any given time *RED enforces

the same drop rate upon all connections regardless of the bandwidth they use. Therefore,

the TCP connections with a small RTT and/or large congestion window size can recover

quickly and use more of the bandwidth. FRED improves the fairness of sharing

bandwidth by keeping track of all connections through "per-active-flow accounting". It

enforces a drop rate on each connection that is dependent on that connection's buffer

occupancy. As a result, FRED is supposed to be more effective in isolating ill-behaved

connections and provide protection for slow-responsive connections.

Stabilized RED (SRED)

SRED [25] stabilizes the queue size over a wide range of loads by having the drop

probability depend on three components: (i) instantaneous queue size, (ii) the estimated


number of TCP connections, and (iii) whether the queue already holds a packet belonging to the same connection as the arriving packet. A "zombie list" records the

recent packet information on connections that are currently using the queue. A "hit" is

declared when two packets are compared and they are from the same connection.

Whenever a packet arrives, it is compared with a randomly chosen packet from the

zombie list. The number of active connections is estimated from the average hit rate.

Thus, SRED statistically estimates the number of connections and checks the packets in

queue.
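The hit-rate estimate can be sketched as follows. The zombie-list size, the averaging constant, and all variable names here are illustrative choices of ours, not the values from [25]:

```python
import random

def estimate_active_flows(flow_ids, zombie_size=1000, alpha=0.001):
    """SRED-style estimate: compare each arrival against a random zombie-
    list entry; with N equally active flows the hit probability is about
    1/N, so the reciprocal of the averaged hit rate estimates N."""
    zombie, hit_rate = [], 0.0
    for fid in flow_ids:
        if zombie:
            hit = 1.0 if random.choice(zombie) == fid else 0.0
            hit_rate = (1.0 - alpha) * hit_rate + alpha * hit
        if len(zombie) < zombie_size:
            zombie.append(fid)                           # fill the list first
        else:
            zombie[random.randrange(zombie_size)] = fid  # then overwrite randomly
    return 1.0 / hit_rate if hit_rate > 0 else float("inf")

random.seed(1)
arrivals = [random.randrange(20) for _ in range(50000)]  # 20 active flows
print(estimate_active_flows(arrivals))                   # comes out near 20
```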

Random Exponential Marking (REM)

REM [26] differs from *RED in the congestion measurement and the drop

probability function. The key idea of REM is called "match rate clear buffer", which

means that REM tries to stabilize the input rate around link capacity and the queue size

around a small target value.

REM maintains a variable called "price" as a congestion measure, and price is

used to determine the drop probability. Price is updated periodically based on the rate

mismatch (the difference between input rate and link capacity) and the queue mismatch

(the difference between queue size and target queue size). Price is incremented if the

weighted sum of these mismatches is positive and is decremented otherwise. The drop

probability will increase as price increases and decrease as price decreases.

It is hard to sample the input rate in practice. Therefore, the authors use current and previous queue sizes to approximate the input rate. As a result, price is a linear function of

current, previous, and target queue sizes.
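A sketch of this price update, using queue sizes in place of the input rate as described above. The gains gamma and alpha and the constant phi are hypothetical values of ours, not those from [26]:

```python
def rem_price_update(price, q, q_prev, q_target, gamma=0.001, alpha=0.1):
    """One REM price step.  (q - q_prev) approximates the rate mismatch
    over an update interval, (q - q_target) is the queue mismatch; price
    rises when their weighted sum is positive and is kept nonnegative."""
    return max(0.0, price + gamma * ((q - q_prev) + alpha * (q - q_target)))

def rem_drop_prob(price, phi=1.05):
    """REM-style exponential marking: p = 1 - phi**(-price), increasing in price."""
    return 1.0 - phi ** (-price)

# A queue that stays above target and keeps growing drives price, and
# hence the drop probability, upward.
price = 0.0
for q_prev, q in [(50, 60), (60, 70), (70, 80)]:
    price = rem_price_update(price, q, q_prev, q_target=20)
print(price > 0.0 and rem_drop_prob(price) > 0.0)
```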


It is claimed that REM can achieve high link utilization and negligible loss and

delay.

BLUE

BLUE [22] focuses on improving packet loss rates. The key idea behind BLUE is to perform AQM based on packet loss and link utilization. BLUE maintains a single probability variable p_m, which it uses to drop/mark incoming packets. The mechanism of BLUE is as follows: upon a packet loss due to buffer overflow, BLUE increments p_m; conversely, upon the queue becoming empty, BLUE decrements p_m. It is presumed that this mechanism effectively allows BLUE to "learn" the correct rate at which it needs to send back congestion notification.

BLUE uses three other parameters to control the dropping/marking probability. The parameter freeze_time determines the minimum time interval between two successive updates of p_m. The parameters d1 and d2 determine the amount by which p_m is incremented or decremented.

BLUE is found to behave like *RED when q_w is extremely small.
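The update rule above can be sketched as follows; the parameter values and the simplified time-keeping are our own illustrative choices, not values from [22]:

```python
class Blue:
    """BLUE's single marking probability p_m, adjusted on loss and idle
    events; freeze_time limits how often p_m may change (sketch only)."""
    def __init__(self, d1=0.02, d2=0.002, freeze_time=0.1):
        self.p_m = 0.0
        self.d1, self.d2 = d1, d2          # d1 > d2: react faster to loss
        self.freeze_time = freeze_time
        self.last_update = float("-inf")

    def _frozen(self, now):
        return now - self.last_update < self.freeze_time

    def on_buffer_overflow(self, now):
        """Packet lost to overflow: send congestion notification more often."""
        if not self._frozen(now):
            self.p_m = min(1.0, self.p_m + self.d1)
            self.last_update = now

    def on_queue_empty(self, now):
        """Link going idle: marking was too aggressive, so back off."""
        if not self._frozen(now):
            self.p_m = max(0.0, self.p_m - self.d2)
            self.last_update = now

b = Blue()
b.on_buffer_overflow(0.00)   # p_m rises to 0.02
b.on_buffer_overflow(0.05)   # within freeze_time: no change
b.on_queue_empty(0.20)       # p_m falls to 0.018
```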

GREEN

GREEN [28] adjusts its dropping/marking rate in response to the congestion measure x_est, the estimated data arrival rate. If a link's x_est value is above the target link capacity c_t, the dropping/marking rate P is incremented by ΔP, and if x_est is below c_t, P is decremented by ΔP.

The target link capacity c_t is typically 97% of the actual link capacity, so that the queue size converges to 0. The data rate estimation is performed using exponential


averaging, which is similar to EWMA. It is claimed that GREEN leads to high link

utilization while maintaining low delay and packet loss.

The seven AQM algorithms above differ from *RED in either the way they measure congestion or the way they map the congestion measure into a drop probability. Each tries to improve *RED in one or two aspects; still, most of them ultimately use queue size as the measure of congestion.

Our investigation method is different. Instead of proposing an AQM algorithm directly, we first investigate how the three parts of the *RED algorithm affect its performance, and we evaluate that performance in more aspects. We then present our improved AQM algorithm.


CHAPTER 3

INFORMATION ON QUEUE PERFORMANCE

Our investigations on RED's occupancy and performance are mainly based on

NS-2 simulations. In this chapter, we describe the methods that we use to obtain queue

occupancy and performance information. In Section 3.1, we introduce the trace file and

record file of NS-2. In Section 3.2, we describe the information that we use to evaluate

queue performance.

3.1 Trace file and record file

NS-2 can dump its simulation results into trace files, which record each packet's

activities and the time of the activities. We can extract information from the trace files to

compute network performance. The following is an example of a trace file.

+ 1.84375 0 2 cbr 210 ------- 0 0.0 3.1 225 610
- 1.84375 0 2 cbr 210 ------- 0 0.0 3.1 225 610
r 1.84471 2 1 cbr 210 ------- 1 3.0 1.0 195 600
r 1.84566 2 0 ack 40 ------- 2 3.2 0.1 82 602
+ 1.84566 0 2 tcp 1000 ------- 2 0.1 3.2 102 611
- 1.84566 0 2 tcp 1000 ------- 2 0.1 3.2 102 611
r 1.84609 2 3 cbr 210 ------- 0 0.0 3.1 225 610
+ 1.84609 2 3 cbr 210 ------- 0 0.0 3.1 225 610
d 1.84609 2 3 cbr 210 ------- 0 0.0 3.1 225 610

Each line has 18 trace fields (columns), which are described below.

• Column 1: A packet event, where "+" indicates an enqueue operation, "-" indicates a dequeue operation, "r" indicates a receive event, and "d" indicates a drop event.

• Column 2: The simulated time (in seconds) when the event occurred.


• Columns 3 and 4: The two nodes between which the event is happening.

• Column 5: A descriptive name for the type of packet.

• Column 6: The packet's size, as encoded in its IP header.

• Column 7 to 13: The flags for ECN mechanism and priority.

• Column 14: The IP flow identifier.

• Columns 15 and 16: The packet's source and destination node addresses.

• Column 17: The sequence number of a connection.

• Column 18: A unique packet identifier.
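The extraction the thesis performs with awk can equally be done with a short Python function. The field names below are our own labels for the columns listed above; note that in the file the seven flag characters of columns 7-13 form a single whitespace-delimited token:

```python
def parse_trace_line(line):
    """Split one NS-2 trace line into a dictionary (our own field names)."""
    (event, time, from_node, to_node, ptype, size,
     flags, flow_id, src, dst, seq, pkt_id) = line.split()
    return {
        "event": event,              # "+", "-", "r", or "d"
        "time": float(time),         # simulated seconds
        "from_node": int(from_node),
        "to_node": int(to_node),
        "type": ptype,               # e.g. "cbr", "tcp", "ack"
        "size": int(size),           # bytes, from the IP header
        "flags": flags,              # ECN/priority flags
        "flow_id": int(flow_id),
        "src": src, "dst": dst,      # node.port addresses
        "seq": int(seq),
        "pkt_id": int(pkt_id),
    }

rec = parse_trace_line("d 1.84609 2 3 cbr 210 ------- 0 0.0 3.1 225 610")
# rec["event"] == "d": this cbr packet was dropped between nodes 2 and 3
```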

Besides the trace file, NS-2 also provides a trace object that can be used to trace a

parameter. Whenever the traced parameter is changed, the changes will be put into a

record file.

3.2 Information on queue performance

In our research, we evaluate a queue's performance by measuring queue size oscillation, file transfer delay, packet drop rate, and link utilization. We measure them by simulating a network scenario in NS-2. The network scenario is

shown in Figure 3.1. In this scenario, there are multiple connections (either TCP or

UDP) that go through a bottleneck link. The connections transfer single files of the same

size. Many researchers have used this scenario to study AQM algorithms [1, 2, 14, 18,

19, 20, 22, 23, 24, 25]. In this scenario, we use N to indicate the number of connections,

C to indicate the bottleneck link capacity, T to indicate the file size to be transferred in

one connection, and M to indicate the TCP maximum congestion window size. We will

use this scenario later in this thesis.


Figure 3.1: Network simulation scenario (N senders and N receivers joined by a bottleneck link with an AQM queue)

From the trace and record files generated by the NS-2 simulator, we use the Unix text

filter awk to extract the AQM performance information. Then we write the results into

files in Xgraph format to create graphs. Figures 3.2 to 3.5 are sample graphs from one

simulation.

Figure 3.2 shows the instantaneous and average queue sizes over time. In this

graph, we can see an oscillation in the queue occupancy. We measure the amplitude of

the oscillation as the difference between the maximum and minimum queue occupancy in

the "steady state". The "transient" periods are at the beginning and end of the simulation

run. We define the transient period at the beginning to be from time 0 until the queue

size peaks for the second time. The transient period at the end is defined to be from a

time t to the end of the simulation, where t is calculated by the following equation:

t = N × (T − 2 × M) / C                (3.1)

The time between these two transient periods is the "steady state" period.

Figure 3.3 shows the TCP file transfer delays. These are the delays for each TCP

connection to transfer its file. The vertical axis is for the connection's id number and


horizontal axis shows the delay. Each line in the figure indicates the delay for a certain

connection.

Figure 3.4 shows the packet drops. The vertical axis is for the connection's id

number and the horizontal axis is time. Each packet drop event is represented by a dot in

the graph. Therefore, we can find when and from which connections packets are

dropped. In addition, the total drop number for all connections is calculated.

Figure 3.2: Sample simulation result of queue size


Figure 3.3: Sample simulation result of flow delay

Figure 3.4: Sample simulation result of packet drop


Figure 3.5: Sample simulation result of link utilization

Figure 3.5 shows the link utilization. In calculating the link utilization, we also

ignore the "transient times" at the beginning and at the end of the simulation. However,

the transient periods are defined differently here. We consider the time before t_b and the time after t_e as the transient times, where

t_b = N × M / C                (3.2)

t_e = N × (T − M) / C                (3.3)

Times t_b and t_e are shown in the graph by the two vertical lines.
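As a worked example with hypothetical values (N = 25 connections, window M = 25 packets, file size T = 500 packets, capacity C = 1250 packets/s; none of these are the thesis's actual settings), Equations (3.1)-(3.3) give:

```python
# Hypothetical example values, chosen only to exercise the formulas.
N = 25        # number of connections
M = 25        # TCP maximum congestion window, packets
T = 500       # file size, packets
C = 1250.0    # bottleneck link capacity, packets per second

t   = N * (T - 2 * M) / C  # Eq. (3.1): start of final transient (queue-size plots)
t_b = N * M / C            # Eq. (3.2): end of initial transient (utilization plots)
t_e = N * (T - M) / C      # Eq. (3.3): start of final transient (utilization plots)
print(t, t_b, t_e)         # 9.0 0.5 9.5
```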


CHAPTER 4

QUEUE SIZE AVERAGING MECHANISM

In this chapter, we investigate how *RED's queue size averaging mechanism

affects its occupancy and performance through analysis and simulations. Parameter q_w

in *RED represents the degree of queue size averaging. In Section 4.1, we give an upper

bound on the queue size for a given q_w and check the accuracy of the result by a

simulation. In Section 4.2, we investigate how the value of q_w affects *RED's

performance.

4.1 An upper bound on *RED's maximum buffer size

We will derive an upper bound for the queue size of GRED. It should be noted that the analysis also works for RED. Let q(n) and avg(n) denote the values of q and avg, respectively, when the nth packet arrives. Assume that avg(0) = 0 and q(0) = 0.

Theorem 1: avg(n) ≥ q(n) − C, where C = −1 + 1/q_w.

Proof: We prove this theorem by induction. The assumption avg(0) = 0 and q(0) = 0 implies the theorem for n = 0. We assume that the theorem is true for n = 0, 1, ..., k, for some integer k. Now we proceed to show the theorem is true for n = k + 1.

Recall that when the (k + 1)th packet arrives, avg is updated by GRED as follows:

avg(k+1) = (1 − q_w) × avg(k) + q_w × q(k+1).

Applying the theorem to avg(k), we have


avg(k+1) ≥ (1 − q_w) × (q(k) − C) + q_w × q(k+1)                (4.1)

Next notice that q(k+1) ≤ q(k) + 1, because q(k) and q(k+1) are the queue sizes at the times of two consecutive packet arrivals. Then q(k) ≥ q(k+1) − 1. We apply this to Inequality (4.1) to get

avg(k+1) ≥ (1 − q_w) × (q(k+1) − 1 − C) + q_w × q(k+1)
        = (1 − q_w) × q(k+1) − (1 − q_w) × (1 + C) + q_w × q(k+1)
        = q(k+1) − (1 − q_w) × (1 + C)                (4.2)

Inequality (4.2) implies the theorem because C = (1 − q_w) × (1 + C).

Q.E.D.

Theorem 2: q(n) ≤ 2 × max_th + ⌈C⌉ for all n = 0, 1, 2, ..., where C = −1 + 1/q_w.

Proof: We prove this theorem by contradiction. Suppose for some n, q(n) > 2 × max_th + ⌈C⌉. Since the value of q(n) can increase by at most 1, without loss of generality, we can assume q(n) = 2 × max_th + ⌈C⌉ + 1 and q(n−1) = 2 × max_th + ⌈C⌉.

Note that from Theorem 1, avg(n−1) ≥ q(n−1) − C. Then we get avg(n−1) ≥ 2 × max_th, since ⌈C⌉ ≥ C. This implies that when the nth packet arrives, Pb = 1 for GRED. Therefore, the packet was dropped, so q(n) ≤ q(n−1) = 2 × max_th + ⌈C⌉. This contradicts our assumption, so the theorem is true.

Q.E.D.

Theorem 2 implies that the buffer size for GRED is only required to be

2 × max_th + ⌈−1 + 1/q_w⌉                (4.3)

Thus, the assumptions that buffer sizes are infinite or very large [14, 18] are unnecessary. Though our analysis is for GRED, it also works for RED by replacing 2 × max_th with max_th in Equation (4.3). We can generalize the upper bound for *RED as follows: the upper bound is the sum of two terms. One is the average queue size just before *RED begins to drop all arriving packets. The other is 1/q_w, which represents the number of bursty packets that the *RED queue can accept.
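The burst scenario simulated below can also be checked numerically. The following sketch is our own, with the packet-granularity details simplified: it accepts every arrival until avg reaches 2 × max_th (which can only overestimate the peak, since GRED may also drop probabilistically before then), and the peak indeed stays below the Theorem 2 bound:

```python
import math

def gred_burst_peak(q_w, max_th):
    """Worst case for GRED: an infinite-rate burst into an empty queue with
    max_p = 0 and zero link capacity.  Every packet is enqueued until the
    average reaches 2*max_th, where Pb = 1 and all arrivals are dropped."""
    q, avg = 0, 0.0
    while avg < 2 * max_th:
        q += 1                               # enqueue the arriving packet
        avg = (1 - q_w) * avg + q_w * q      # EWMA update
    return q

C = -1 + 1 / 0.005                           # C = 199
bound = 2 * 200 + math.ceil(C)               # Theorem 2 bound: 599 packets
peak = gred_burst_peak(q_w=0.005, max_th=200)
print(peak <= bound)                         # True
```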

To check if our upper bound is tight, we simulate a GRED queue using the

simulation topology shown in Figure 3.1. There are N = 5 UDP/CBR (User Datagram

Protocol/Constant Bit Rate) connections, and each UDP sender sends packets at a rate of 400 kbps. The GRED queue is in the bottleneck link. We set the parameters q_w = 0.005, max_th = 200 packets, min_th = 400 packets, max_p = 0, and link capacity = 0 kbps.

This scenario is for the case when a large burst of packets arrives at an empty queue. The

arrival rate of the burst is essentially infinite since C is zero. Figure 4.1 shows the

simulation results. The upper bound of Theorem 2 is q(n) ≤ 599 when q_w = 0.005. We

see that the maximum occupancy of queue size is close to the upper bound 599.

We run more simulations for different values of q_w. When max_p = 0 and link capacity = 0 kbps, the maximum occupancies are close to the upper bounds. When max_p > 0 and/or link capacity > 0 kbps, the maximum occupancies are below the upper bounds.


Figure 4.1: Maximum buffer size for queue weight = 0.005

4.2 The effect of queue size averaging mechanism

In this section, we investigate the queue size averaging mechanism's effect on

*RED's performance through simulations.

Our simulation network topology is the same topology shown in Figure 3.1. We

set the simulation scenario as follows: the TCP connections are the Reno version (currently the most commonly deployed), the TCP maximum congestion window size is 25 packets, each packet is 1000 bytes, the bottleneck link capacity is 10 Mbps, and the bottleneck link propagation delay is 50 ms (about a 100 ms RTT for each connection). We use GRED for queue management. The queue size limit is 100 packets, the parameter max_th is 50 packets, the parameter min_th is 25 packets, and the value of max_p is 0.05. We

use three different connection numbers N = 25, 40, and 75 for simulations.

Since the parameter q_w of *RED represents the degree of queue size averaging,

in the simulations, we increase q_w from 0.002 (the number recommended by Sally


Floyd [1, 3]) to 0.02, 0.08, 0.2, 0.5, and 1.0. Note that when q_w equals 1, the average queue size avg is the instantaneous queue size. Results are shown in Figures 4.2A to 4.2E. Figure 4.2A shows the maximum and minimum queue occupancy in "steady

state" for N = 25, 40, and 75 connections. There are two curves for each N, where the

higher curve is for the maximum queue occupancy and the lower one is for the minimum

queue occupancy. A large difference between the maximum and minimum occupancy

means the existence of large-amplitude oscillations. A zero value of minimum occupancy means that the queue is empty for some time during the file transfers. Therefore, in Figure 4.2A, we observe that oscillations grow as the value of q_w decreases below 0.08.

Figures 4.2B, 4.2C and 4.2D clearly show that large oscillations lead to lower link

utilization, longer file transfer delay, and more packet drops. This agrees with C.V.

Hollot's analysis [30, 31, 32] that small q_w will introduce oscillations due to the

feedback delay of the congestion information. One phenomenon we noticed in Figure

4.2A is that when q_w is very small, oscillation increases with the number

of connections. This agrees with the simulation results in [15] that with many TCP

connections, RED behaves like Drop Tail. In summary, we agree with the suggestion in

[30,31,32] that AQM algorithms should remove the queue size averaging mechanism by

using instantaneous queue size as the congestion information.


Figure 4.2A: Maximum and minimum queue occupancy

Figure 4.2B: Link utilization

Figure 4.2C: Average file transfer delay


Figure 4.2D: Maximum file transfer delay

Figure 4.2E: Number of packets dropped


CHAPTER 5

DROP PROBABILITY FUNCTION

In this chapter, we investigate the effect of the drop probability function in detail. In

Section 5.1, we define an extended RED (ERED) model, where RED, GRED, and Drop

Tail are special cases. In Section 5.2, we present our simulation results and discuss how

ERED's performance is affected by the parameter settings of its drop probability

function. In Section 5.3, we propose a modified RED (MRED) drop probability function,

and show that MRED's performance is better than Drop Tail and GRED through

simulations in a wide range of network scenarios.

5.1 An extended RED model

In order to investigate how AQM parameters affect performance, we first define an extended RED (ERED) model. Under this model, RED, GRED, and Drop Tail are special cases for certain parameter settings. We run simulations to observe how ERED's performance is affected by each of its parameters.

ERED is the same as RED or GRED but with a different drop probability function Pb. It inherits RED's three control parameters on its drop probability function, min_th, max_th, and max_p, and adds two more control parameters, limit and limit_p. Parameter limit is the average queue size where the AQM begins to drop all packets, and limit_p is the packet drop probability just before the average queue size reaches limit. The function is shown in Figure 5.1. Note that ERED is GRED when we set limit = 2max_th and limit_p


= 1. RED is a special case of ERED with max_th = limit. In addition, note that ERED is Drop Tail when q_w = 1, max_p = 0, and max_th = limit.

Figure 5.1: ERED drop probability function

5.2 Effect of ERED's drop probability function

Before we present our simulation results, we describe our simulation setup. Our

simulation network topology is the same topology shown in Figure 3.1. We set a general

simulation scenario as in Section 4.2. In addition, the parameter limit_p is set to 1. The value of max_p is 0.05 in Subsection 5.2.4, and it is zero in Subsections 5.2.1 and 5.2.2. We will give an explanation in Subsection 5.2.1 for the reason that we set max_p to zero.

Based on this general scenario, we will change one parameter at a time and run a series of

simulations to see how it affects ERED performance. We use three different connection

numbers N = 25, 40, and 75 for the simulations. The parameters we change are max_th, limit_p, max_p, and min_th. We do not change limit since limit is the fixed queue length for an output link in an IP router. We set q_w to 1, corresponding to the result of Section

4.2. We present the simulation results in the following four subsections.


5.2.1 Parameter max_th

In this subsection, we present the effect of changing max_th on the performance

of ERED. As shown in Figure 5.1, ERED's drop probability function has two positive

slopes. The first starts from min_th and ends at max_th. The second starts from max_th and ends at limit. Both of these slopes affect the performance of ERED. We will consider the effect of the first slope by changing max_p in Subsection 5.2.3. Now we consider the second slope. To fully understand the effect of this second slope, we set max_p to 0 in the simulations in this and the next subsection. This effectively eliminates the effect of the first slope. We increase the value of max_th from 0 to 100 in intervals of 10 packets. Note that when max_th equals 100, ERED is Drop Tail.

Just as in Figure 4.2A, Figure 5.2A shows the maximum and minimum occupancy

for N = 25, 40, and 75 connections. When max_th is close to zero, queue size will be

suppressed to zero and the queue is empty a large fraction of time. Then there will be

lower link utilization, as shown in Figure 5.2B; and much longer maximum file transfer

delay, as shown in Figure 5.2D. Thus, there is unfairness in using the bandwidth if the

value ofmax th is too small.

On the other hand, when max_th is too large, then ERED performance also

suffers. Note that the maximum occupancy increases linearly with max_th, and the

minimum occupancy is still zero for N = 25 and 40, as shown in Figure 5.2A. Then we

observe lower link utilization for N = 25 and 40 in Figure 5.2B, relatively longer file

transfer delays in Figure 5.2C, and relatively more packet drops in Figure 5.2E.

From these figures, we believe that a good max_th value should be somewhere between 0 and limit, depending on the network scenario.


Figure 5.2A: Maximum and minimum queue occupancy

Figure 5.2B: Link utilization

Figure 5.2C: Average file transfer delay


Figure 5.2D: Maximum file transfer delay

Figure 5.2E: Number of packets dropped

5.2.2 Parameter limit_p

In this subsection, we present the effect of limit_p on the performance of ERED. We change limit_p from 0 to 1 in increments of 0.1 in our simulations. Notice that when limit_p equals 0, ERED is essentially Drop Tail. The simulation results are shown in Figures 5.3A through 5.3E. Just as in Figure 4.2A, Figure 5.3A shows both maximum and minimum occupancy for N = 25, 40, and 75 connections.


We observe that the best values for limit_p are between 0.02 and 0.1. For very small values of limit_p (less than 0.02), ERED has undesirable behavior. Figure 5.3A shows that the queue oscillations are large for these values of limit_p. This leads to low link utilization, as shown in Figure 5.3B, and a larger number of packets dropped, as shown in Figure 5.3E. The average file transfer delay is also a little higher for these small values of limit_p.

For values of limit_p larger than 0.1, link utilization decreases, as shown in Figure 5.3B. Figure 5.3A explains this to some degree: notice that the minimum occupancy of the queue goes to zero when limit_p becomes large enough.

Figures 5.3C and 5.3D show that the average and maximum file transfer delays are fairly constant when limit_p is beyond 0.1. The exception is when N = 75. When limit_p is larger than 0.8, the blocking probabilities become high for large numbers of connections and ERED throttles many connections. Then the maximum file transfer delay becomes quite large, as shown in Figure 5.3D.

[Plot omitted: queue size vs. limit_p; curves for 25, 40, and 75 connections]

Figure 5.3A: Maximum and minimum queue occupancy


[Plot omitted: link utilization vs. limit_p; curves for 25, 40, and 75 connections]

Figure 5.3B: Link utilization

[Plot omitted: average file transfer delay vs. limit_p; curves for 25, 40, and 75 connections]

Figure 5.3C: Average file transfer delay

[Plot omitted: maximum file transfer delay vs. limit_p; curves for 25, 40, and 75 connections]

Figure 5.3D: Maximum file transfer delay


[Plot omitted: number of packets dropped vs. limit_p; curves for 25, 40, and 75 connections]

Figure 5.3E: Number of packets dropped

5.2.3 Parameter max_p

The purpose of this subsection is to show the effect of parameter max_p on ERED's performance. We let max_p take the values 0, 1/50, 1/20, 1/10, 1/5, and 1/3 in our simulations. Results are shown in Figures 5.4A to 5.4E. As in Figure 4.2A, Figure 5.4A shows the maximum and minimum queue occupancy for N = 25, 40, and 75 connections.

We see that the good values for max_p are between 0.02 and 0.05. When max_p increases beyond 0.05, the minimum occupancy decreases, as shown in Figure 5.4A. Especially when max_p is larger than 0.1, the minimum occupancy is zero. We then observe lower link utilization in this region, as shown in Figure 5.4B. Also in this region, we observe longer maximum file transfer delays for N = 75 in Figure 5.4D, relatively longer average file transfer delays in Figure 5.4C, and a relatively larger number of packets dropped for N = 25 and 40 connections in Figure 5.4E.

When max_p is smaller than 0.02, ERED also shows undesirable behavior. Queue size shows larger oscillation, as in Figure 5.4A; link utilization is low for N = 25, as shown in Figure 5.4B; maximum file transfer delay is longer for N = 75, as shown in Figure 5.4D; the number of packets dropped is relatively large for N = 25 and 40, as shown in Figure 5.4E; and the average file transfer delay becomes longer again, as shown in Figure 5.4C.

Our preferred max_p value (between 0.02 and 0.05) is inside Sally Floyd's suggested region (0.02, and not larger than 0.1) [1,2,4].

[Plot omitted: queue size vs. max_p; curves for 25, 40, and 75 connections]

Figure 5.4A: Maximum and minimum queue occupancy

[Plot omitted: link utilization vs. max_p; curves for 25, 40, and 75 connections]

Figure 5.4B: Link utilization


[Plot omitted: average file transfer delay vs. max_p; curves for 25, 40, and 75 connections]

Figure 5.4C: Average file transfer delay

[Plot omitted: maximum file transfer delay vs. max_p; curves for 25, 40, and 75 connections]

Figure 5.4D: Maximum file transfer delay

[Plot omitted: number of packets dropped vs. max_p; curves for 25, 40, and 75 connections]

Figure 5.4E: Number of packets dropped


5.2.4 Parameter min_th

In this subsection, we investigate the effect of min_th on ERED's performance. We let min_th take the values 0, 5, 15, 25, 35, and 45 packets, and show the performance in Figures 5.5A to 5.5E. Also as in Figure 4.2A, Figure 5.5A shows the maximum and minimum occupancy for N = 25, 40, and 75 connections.

Examining the five figures of performance, we see that the effect of min_th is minor. We cannot observe a clear tendency in file transfer delay or number of packets dropped in Figures 5.5C, 5.5D, and 5.5E. However, we notice the change in performance for queue size occupancy in Figure 5.5A and link utilization in Figure 5.5B.

When min_th is larger than 25 packets, the maximum queue occupancy continues to increase but the minimum queue occupancy begins to decrease. This leads to heavy oscillation, as shown in Figure 5.5A, and low link utilization, as shown in Figure 5.5B. On the other hand, when min_th is smaller than 15 packets, both the maximum and minimum queue occupancy decrease. Therefore, we see low link utilization in Figure 5.5B in this region also.

In our simulation scenario, the best value for min_th is between 15 and 25 packets. It is about 1/4 of the full queue length.

5.3 Discussion of AQM algorithms

Based on the simulation results in Sections 4.2 and 5.2, we propose a modified RED (MRED) drop probability function, which is also a special case of ERED. As in the discussion in Section 4.2, we use the instantaneous queue size as the measure of congestion in our drop probability function. This corresponds to using q_w = 1 in our ERED model.
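The averaging rule in question is RED's standard exponentially weighted moving average. A minimal sketch (the function name is ours) shows that q_w = 1 reduces the average to the instantaneous queue size:

```python
def update_avg(avg, q, q_w):
    """RED's exponentially weighted moving average of the queue size."""
    return (1.0 - q_w) * avg + q_w * q

# With q_w = 1, the average is just the current queue size q.
assert update_avg(avg=30.0, q=80.0, q_w=1.0) == 80.0

# With a small weight such as Floyd's q_w = 0.002, avg tracks q slowly.
assert abs(update_avg(avg=30.0, q=80.0, q_w=0.002) - 30.1) < 1e-9
```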


[Plot omitted: queue size vs. min_th; curves for 25, 40, and 75 connections]

Figure 5.5A: Maximum and minimum queue occupancy

[Plot omitted: link utilization vs. min_th; curves for 25, 40, and 75 connections]

Figure 5.5B: Link utilization

[Plot omitted: average file transfer delay vs. min_th; curves for 25, 40, and 75 connections]

Figure 5.5C: Average file transfer delay


[Plot omitted: maximum file transfer delay vs. min_th; curves for 25, 40, and 75 connections]

Figure 5.5D: Maximum file transfer delay

[Plot omitted: number of packets dropped vs. min_th; curves for 25, 40, and 75 connections]

Figure 5.5E: Number of packets dropped

Based on the results in Subsections 5.2.2 and 5.2.3, we use limit_p = 0.1 and max_p = 0.05. Based on the results in Subsections 5.2.1 and 5.2.4, we set min_th = 1/4 limit and max_th = 1/2 limit. This parameter setting happens to be the GRED drop function with limit = 2 max_th and q_w = 1. It also looks like the RED drop function because both slopes in the GRED are the same and appear as one slope. The drop probability function of MRED is shown in Figure 5.6.


To illustrate the performance of MRED, we ran simulations to compare it with the performance of GRED and Drop Tail. The simulation scenario is the same as the general simulation scenario in Section 5.2, but we change the network parameters in our simulations, which are the number of connections N, the link capacity C, and the link propagation delay. Thus, we observe these three AQMs' performance not only in the network scenarios that we used to investigate the ERED parameters in Section 5.2 but also in a wide range of network scenarios. For a fair comparison, we give the three AQMs the same queue length of 100 packets. Therefore, the parameter limit in MRED is 100 packets, Drop Tail drops all incoming packets when the queue length exceeds 100 packets, and 2 max_th in GRED is set to this queue length. According to Sally Floyd's suggested parameter setting for GRED [3,4], we set q_w = 0.002, min_th = 25, max_p = 0.05, and max_th = 50. We use four measures to show the performance: queue size occupancy, link utilization, average file transfer delay, and number of packets dropped. We show one measure in each of the following subsections.

[Diagram omitted: MRED drop probability vs. queue size q (packets); zero below min_th = 1/4 limit, rising to limit_p = 0.1 at limit]

Figure 5.6: Drop probability function of MRED
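As a sketch, the piecewise-linear shape in Figure 5.6 can be written out as follows. The function name and the assumption of a single straight slope from min_th to limit are our illustration (with limit = 100 packets, min_th = 1/4 limit, limit_p = 0.1), not the thesis code itself:

```python
def mred_drop_prob(q, limit=100.0, limit_p=0.1):
    """Drop probability with the shape of Figure 5.6: zero below
    min_th = limit/4, one straight slope up to limit_p at limit,
    and a forced drop once the queue reaches limit."""
    min_th = limit / 4.0
    if q < min_th:
        return 0.0
    if q >= limit:
        return 1.0
    return limit_p * (q - min_th) / (limit - min_th)

assert mred_drop_prob(10) == 0.0                  # below min_th: never drop
assert mred_drop_prob(25) == 0.0                  # probability starts at min_th
assert abs(mred_drop_prob(62.5) - 0.05) < 1e-12   # midpoint of the slope
assert mred_drop_prob(100) == 1.0                 # at limit: drop everything
```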


5.3.1 Queue size

Figures 5.7A, 5.7B, and 5.7C show the queue size occupancy over the number of connections, link capacity, and link propagation delay, respectively. Similar to Figure 4.2A, these figures show the maximum and minimum queue occupancy for Drop Tail, GRED, and MRED. We observe that the maximum occupancy of MRED is always lower than that of GRED and Drop Tail, and the minimum occupancy of MRED is higher than or equal to that of GRED and Drop Tail. This means MRED is more stable and robust than Drop Tail and GRED for the given network scenario and its parameter setting.

[Plot omitted: queue size vs. number of connections; curves for Drop Tail, GRED, and MRED]

Figure 5.7A: Maximum and minimum queue occupancy over number of connections

[Plot omitted: queue size vs. link capacity; curves for Drop Tail, GRED, and MRED]

Figure 5.7B: Queue size over link capacity


[Plot omitted: queue size vs. link propagation delay; curves for Drop Tail, GRED, and MRED]

Figure 5.7C: Queue size over link propagation delay

5.3.2 Link utilization

Figure 5.8A shows the link utilization as a function of the number of connections. We see that MRED always shows higher link utilization than GRED and Drop Tail in our simulation scenario. In Figure 5.8B, we see that when the link capacity is smaller than 50 Mbps, MRED's link utilization is higher, whereas when the link capacity exceeds 50 Mbps, MRED shows lower link utilization than the other two algorithms. This happens because when the link capacity is high, packets can be dequeued quickly, so a rise in queue size due to bursty traffic may not cause the queue to overflow. MRED nevertheless drops packets to slow down the TCP sending rate, and the queue is then easily emptied. In Figure 5.8C, we observe that when the link propagation delay is smaller than 100 ms, MRED has the highest link utilization, but when the link propagation delay is over 100 ms, Drop Tail shows the highest link utilization. The reason is that if the link propagation delay is long, TCP responds slowly to recover its transmission rate after a packet is dropped. MRED may drop packets when the queue is not full due to the low transmission rate, whereas Drop Tail drops packets only when the transmission rate is high enough to cause queue overflow. Notice that MRED approaches Drop Tail as max_p and limit_p decrease and as min_th and max_th increase, so we can set a smaller drop probability to improve MRED in cases with high link capacity and long link propagation delay.

[Plot omitted: link utilization vs. number of connections; curves for Drop Tail, GRED, and MRED]

Figure 5.8A: Link utilization over number of connections

[Plot omitted: link utilization vs. link capacity; curves for Drop Tail, GRED, and MRED]

Figure 5.8B: Link utilization over link capacity


[Plot omitted: link utilization vs. link propagation delay; curves for Drop Tail, GRED, and MRED]

Figure 5.8C: Link utilization over link propagation delay

5.3.3 Average file transfer delay

In Figure 5.9A, we see that MRED always produces the shortest delays in the given network scenario. MRED also shows the shortest delays when the link capacity and delay are small, as shown in Figures 5.9B and 5.9C. However, when the link capacity is larger than 50 Mbps, MRED shows longer delays than Drop Tail and GRED, as shown in Figure 5.9B. When the link propagation delay is longer than 100 ms, MRED shows longer delays than Drop Tail. These results are consistent with their link utilizations, and the reasons are the same as explained in Subsection 5.3.2.

[Plot omitted: average file transfer delay vs. number of connections; curves for Drop Tail, GRED, and MRED]

Figure 5.9A: Average file transfer delay over number of connections


[Plot omitted: average file transfer delay vs. link capacity; curves for Drop Tail, GRED, and MRED]

Figure 5.9B: Average file transfer delay over link capacity

[Plot omitted: average file transfer delay vs. link propagation delay; curves for Drop Tail, GRED, and MRED]

Figure 5.9C: Average file transfer delay over link propagation delay

5.3.4 Number of packets dropped

In Figures 5.10A through 5.10C, we compare the number of packets dropped. We observe the following. MRED always drops fewer packets than GRED in our simulation scenario, with the exception of when the link capacity is very large, as in Figure 5.10B. Also, Drop Tail drops fewer packets than MRED whenever one of the following conditions is true: the number of connections is very large (as shown in Figure 5.10A), the link capacity is high (as shown in Figure 5.10B), or the link propagation delay is small (as shown in Figure 5.10C). As mentioned in Subsection 5.3.2, we can improve MRED by setting smaller drop probabilities whenever one of these conditions is true.

[Plot omitted: number of packets dropped vs. number of connections; curves for Drop Tail, GRED, and MRED]

Figure 5.10A: Number of packets dropped over number of connections

[Plot omitted: number of packets dropped vs. link capacity; curves for Drop Tail, GRED, and MRED]

Figure 5.10B: Number of packets dropped over link capacity


[Plot omitted: number of packets dropped vs. link propagation delay; curves for Drop Tail, GRED, and MRED]

Figure 5.10C: Number of packets dropped over link propagation delay

5.4 Summary

Our results demonstrate that MRED works better than Drop Tail and GRED in the network scenario that we used in Section 5.2. When the network scenario is dramatically changed, especially when the link capacity and link propagation delay become very large, Drop Tail and GRED may show better performance than MRED. However, we can improve MRED by modifying the parameter settings. We believe that the optimal parameter settings for MRED are also dependent on the network scenario. MRED is more robust than Drop Tail and GRED in most situations, demonstrating that MRED's drop probability function is an improvement.


CHAPTER 6

AQM DROPPING ALGORITHMS

In this chapter, we propose two dropping algorithms and compare their

performance with the original dropping algorithm of RED through simulations. The two

proposed algorithms drop packets at the same rate as RED but the inter-drop times are

more deterministic. In Section 6.1, we review RED's dropping algorithm and then

describe the two proposed dropping algorithms. In Section 6.2, we compare their

performance.

6.1 Two different dropping algorithms

First, we review the dropping algorithm of RED. In RED, the variable count, which is the number of packets that have been enqueued since the last drop, is a uniformly distributed random number in [1, 1/p_b] if p_b is constant, as we described in Section 1.1. The detailed RED algorithm is shown in Figure 1.2. We ignore the variables wait and drop_rand, which generally are not used in practice, and show its dropping algorithm in Figure 6.1.

Determine if the packet should be dropped/marked:

    if min_th <= avg < max_th
        count++
        pb = max_p*(avg - min_th)/(max_th - min_th)
        pa = pb/(1 - count*pb)
        R = random[0,1]
        if R < pa
            drop/mark the arriving packet
            count = 0
        else
            enqueue the arriving packet
    else if max_th <= avg
        drop the arriving packet
        count = 0
    else
        enqueue the arriving packet
        count = 0

Figure 6.1: RED dropping algorithm
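For concreteness, the decision in Figure 6.1 can be sketched in Python as follows (function and variable names are ours; wait and drop_rand are omitted, as in the figure):

```python
import random

def red_should_drop(avg, count, min_th, max_th, max_p):
    """One drop/mark decision per arriving packet, per Figure 6.1.
    Returns (drop, new_count); count is the number of packets
    enqueued since the last drop."""
    if avg >= max_th:
        return True, 0                    # forced-drop region
    if avg >= min_th:
        count += 1
        pb = max_p * (avg - min_th) / (max_th - min_th)
        # The 1/(1 - count*pb) correction spreads drops out so the
        # inter-drop packet count is roughly uniform when pb is constant.
        pa = pb / (1.0 - count * pb) if count * pb < 1.0 else 1.0
        if random.random() < pa:
            return True, 0
        return False, count
    return False, 0                       # below min_th: always enqueue

assert red_should_drop(avg=120, count=3, min_th=25, max_th=100, max_p=0.05) == (True, 0)
assert red_should_drop(avg=10, count=3, min_th=25, max_th=100, max_p=0.05) == (False, 0)
```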

Next, we transform RED's dropping algorithm by using MRED's drop probability function. Figure 6.2 shows the MRED dropping algorithm. The differences between RED and MRED are that the variable max_th in RED is replaced by the variable limit in MRED, and avg in MRED is the real queue size since q_w is set to 1.

Determine if the packet should be dropped/marked:

    if min_th <= avg < limit
        count++
        pb = max_p*(avg - min_th)/(limit - min_th)
        pa = pb/(1 - count*pb)
        R = random[0,1]
        if R < pa
            drop/mark the arriving packet
            count = 0
        else
            enqueue the arriving packet
    else if limit <= avg
        drop the arriving packet
        count = 0
    else
        enqueue the arriving packet
        count = 0

Figure 6.2: MRED dropping algorithm

Third, we introduce the two new AQM algorithms, called ARED and NRED. They use the MRED drop probability function, but they employ different methods to decide packet drops. Therefore, the only difference among MRED, ARED, and NRED is their dropping algorithms.


The ARED dropping algorithm is an accumulated-summation algorithm. It maintains two variables, psum and u. When a packet arrives and the queue size is between min_th and limit, psum is updated by adding the corresponding drop probability p_b to itself. When the queue size is smaller than min_th, or just after a packet is dropped, psum is reset. The psum is then compared with the variable u, which is a threshold that decides packet drops and the packet dropping rate. If psum is greater than or equal to u, then the arriving packet is dropped and psum is decremented by u. Otherwise, ARED enqueues the packet. The detailed dropping algorithm of ARED is given in Figure 6.3. The variable avg in ARED corresponds to the queue size in our explanation, and the other variables have the same meaning as in MRED.

Determine if the packet should be dropped/marked:

    if avg <= min_th
        enqueue the arriving packet
        psum = 0
    else if avg > min_th
        if min_th < avg <= limit
            pb = max_p*(avg - min_th)/(limit - min_th)
        else if avg >= limit
            pb = 1
        psum = psum + pb
        if psum >= u
            drop/mark the arriving packet
            psum = psum - u
        else
            enqueue the arriving packet

Saved variables:
    psum: accumulated drop probability
    u: threshold for packet drop, equal to 1/2 for an equivalent dropping rate with MRED

Figure 6.3: ARED dropping algorithm
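A runnable sketch of the accumulated-summation rule in Figure 6.3 (names are ours). For the demonstration we pick a constant p_b = 0.0625 so the accumulation is exact in floating point; with u = 1/2, a packet is then dropped on exactly every u/p_b = 8th arrival:

```python
def ared_should_drop(avg, psum, min_th, limit, max_p, u=0.5):
    """One drop/mark decision per arriving packet, per Figure 6.3.
    Returns (drop, new_psum)."""
    if avg <= min_th:
        return False, 0.0
    if avg <= limit:
        pb = max_p * (avg - min_th) / (limit - min_th)
    else:
        pb = 1.0
    psum += pb
    if psum >= u:
        return True, psum - u             # deterministic drop point
    return False, psum

# Constant pb = max_p = 0.0625 (queue pinned at limit): 1000 arrivals
# produce exactly 1000/8 = 125 evenly spaced drops.
psum, drops = 0.0, 0
for _ in range(1000):
    drop, psum = ared_should_drop(avg=100, psum=psum, min_th=25,
                                  limit=100, max_p=0.0625, u=0.5)
    drops += drop
assert drops == 125
```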

The NRED dropping algorithm is a fixed-counting algorithm. NRED also maintains two variables, count and u. The variable count of NRED has the same meaning as MRED's count, which is the number of packets enqueued since the last drop. The variable u is a threshold that decides packet drops and the dropping rate, just as it is in ARED. When a packet arrives and the queue size is between min_th and limit, the value of count is incremented by 1 and multiplied by the drop probability p_b, which corresponds to the queue size. The result of this multiplication is compared with the threshold u. If the result is bigger than u, then the arriving packet is dropped; otherwise, NRED enqueues the arriving packet. When a packet arrives and the queue size is smaller than min_th, or just after a packet is dropped, count is set to zero. The detailed NRED algorithm is shown in Figure 6.4.

Determine if the packet should be dropped/marked:

    if avg <= min_th
        enqueue the arriving packet
        pb = 0
        count = 0
    else if avg > min_th
        count++
        if min_th < avg <= limit
            pb = max_p*(avg - min_th)/(limit - min_th)
        else if avg >= limit
            pb = 1
        if count*pb >= u
            drop/mark the arriving packet
            count = 0
        else
            enqueue the arriving packet

Saved variable:
    u: threshold for packet drop, equal to 1/2 for an equivalent dropping rate with MRED

Figure 6.4: NRED dropping algorithm
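Correspondingly, a sketch of the fixed-counting rule in Figure 6.4 (names are ours). With the same constant p_b = 0.0625 and u = 1/2, count·p_b first reaches u at count = 8, so NRED drops every 8th packet, matching ARED's rate:

```python
def nred_should_drop(avg, count, min_th, limit, max_p, u=0.5):
    """One drop/mark decision per arriving packet, per Figure 6.4.
    Returns (drop, new_count)."""
    if avg <= min_th:
        return False, 0
    count += 1
    if avg <= limit:
        pb = max_p * (avg - min_th) / (limit - min_th)
    else:
        pb = 1.0
    if count * pb >= u:
        return True, 0                    # drop once count*pb reaches u
    return False, count

count, drops = 0, 0
for _ in range(1000):
    drop, count = nred_should_drop(avg=100, count=count, min_th=25,
                                   limit=100, max_p=0.0625, u=0.5)
    drops += drop
assert drops == 125                       # same rate as the ARED example
```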

Fourth, we explain why we change our drop probability function from RED to MRED and not to another AQM algorithm. The first reason is that we have shown MRED is more stable, so we want to further improve it by investigating modifications to its dropping algorithm. Now, if the drop probability is a constant p_b, the value of count for MRED between two packet drops is uniformly distributed in [1, 1/p_b] and its expectation is 1/(2p_b) + 1/2. If we set u to 1/2 for ARED and NRED, then the number of enqueued packets between two drops in ARED is 1/(2p_b), and the value of count in NRED is 1/(2p_b). Thus, the dropping rates of MRED, ARED, and NRED are approximately the same. In addition, the rates become closer as p_b becomes smaller.

For ARED and NRED, the value of count between packet drops is not random at all. Therefore, their drops are less bursty but occur at about the same rate as MRED. We expect that this may improve TCP performance, because TCP behaves badly when packets are dropped in bursts.
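The expectation claim above is easy to check numerically. The sketch below samples the inter-drop count for the randomized (MRED-style) rule with a constant p_b = 0.0625, treating count as the number of packets already enqueued since the last drop (our convention); the sample mean lands near 1/(2p_b) + 1/2 = 8.5, while the deterministic ARED/NRED gap with u = 1/2 is 1/(2p_b) = 8:

```python
import random

def mred_gap(pb):
    """Sample one inter-drop packet count under the randomized rule
    pa = pb / (1 - count*pb), with count = packets already enqueued
    since the last drop; the result is uniform on {1, ..., 1/pb}."""
    count = 0
    while True:
        pa = pb / (1.0 - count * pb) if count * pb < 1.0 else 1.0
        count += 1
        if random.random() < pa:
            return count

random.seed(1)
pb = 0.0625
gaps = [mred_gap(pb) for _ in range(100000)]
mean_gap = sum(gaps) / len(gaps)
assert 1 <= min(gaps) and max(gaps) <= int(1 / pb)   # support is [1, 16]
assert abs(mean_gap - (1 / (2 * pb) + 0.5)) < 0.1    # mean near 8.5
```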

6.2 Comparison of the three dropping algorithms

In this section, we present the performance of MRED, ARED, and NRED. The network topology is the same as in Section 3.2, and the general simulation network scenario is the same as in Section 5.2. As in the comparison of MRED, Drop Tail, and GRED in Section 5.3, we changed the same three network parameters in the following simulations: the number of connections N, the link capacity C, and the link propagation delay. Thus, we compare the three dropping algorithms in a wide range of network scenarios. Note that since the drop rates of the three algorithms are almost the same, their performance differences are quite small. We show one aspect of their performance in each of the following four subsections.


6.2.1 Queue size

As in Figure 4.2A, Figures 6.5A, 6.5B, and 6.5C show the maximum and minimum queue occupancy for MRED, ARED, and NRED. We observe that NRED is overall more stable than the other two algorithms, with exceptions for the case when C equals 50 Mbps, as shown in Figure 6.5B, and the case when the link propagation delay is above 100 ms, as shown in Figure 6.5C. In addition, we see that MRED is overall more stable than ARED. We will discuss this further after we compare more aspects of their performance.

[Plot omitted: queue size vs. number of connections; curves for MRED, ARED, and NRED]

Figure 6.5A: Queue size over number of connections

[Plot omitted: queue size vs. link capacity; curves for MRED, ARED, and NRED]

Figure 6.5B: Queue size over link capacity


[Plot omitted: queue size vs. link propagation delay; curves for MRED, ARED, and NRED]

Figure 6.5C: Queue size over link propagation delay

6.2.2 Link utilization

Figure 6.6A shows that NRED has higher or equal link utilization compared to MRED and ARED. In Figure 6.6B, we see that NRED shows the highest link utilization except for the case when the link capacity is 50 Mbps. In Figure 6.6C, we observe that NRED shows the highest link utilization when the link propagation delay is shorter than 100 ms. These results are consistent with the queue occupancy results in the last subsection. Between MRED and ARED, we do not observe clear evidence of which one is better.

[Plot omitted: link utilization vs. number of connections; curves for MRED, ARED, and NRED]

Figure 6.6A: Link utilization over number of connections


[Plot: link utilization vs. link capacity; curves for MRED, ARED, and NRED.]

Figure 6.6B: Link utilization over link capacity

[Plot: link utilization vs. link propagation delay (0 to 350 ms); curves for MRED, ARED, and NRED.]

Figure 6.6C: Link utilization over link propagation delay

6.2.3 Average file transfer delay

In Figures 6.7A, 6.7B, and 6.7C, we observe that the file transfer delays of the three algorithms are very close. On closer examination, however, we find that NRED has the shortest file transfer delay except for two cases: (i) when the link capacity is larger than 50 Mbps, shown in Figure 6.7B, where ARED shows smaller delay, and (ii) when the link propagation delay is larger than 200 ms, shown in Figure 6.7C, where MRED shows smaller delay. These results are broadly consistent with the results of the last two subsections. We cannot tell which of ARED or MRED has a smaller delay.


[Plot: average file transfer delay vs. number of connections (20 to 100); curves for MRED, ARED, and NRED.]

Figure 6.7A: Average file transfer delay over number of connections

[Plot: average file transfer delay vs. link capacity; curves for MRED, ARED, and NRED.]

Figure 6.7B: Average file transfer delay over link capacity

[Plot: average file transfer delay vs. link propagation delay (20 to 270 ms); curves for MRED, ARED, and NRED.]

Figure 6.7C: Average file transfer delay over link propagation delay


6.2.4 Number of packets dropped

In Figures 6.8A, 6.8B, and 6.8C, we see that, among the three dropping algorithms, NRED almost always drops fewer packets than the other two algorithms in our simulation scenario. Between MRED and ARED, MRED more frequently drops fewer packets than ARED. This implies that NRED drops packets in a way that is less bursty and more effective at regulating traffic than the other two algorithms.
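For reference, classic RED reduces drop burstiness by scaling its base drop probability with the number of packets accepted since the last drop, which makes the gaps between consecutive drops roughly uniform rather than geometric. A minimal sketch of this uniform dropping rule (the function name and the zero-division guard are our own):

```python
import random

def uniform_red_drop(p_b, count):
    """Classic RED uniform dropping: scale the base probability p_b by
    `count`, the number of packets accepted since the last drop, so that
    inter-drop gaps are roughly uniformly distributed instead of bursty."""
    if p_b <= 0.0:
        return False
    # As count grows, 1 - count * p_b shrinks and a drop becomes certain.
    p_a = p_b / max(1.0 - count * p_b, 1e-9)  # guard against division by zero
    return random.random() < min(p_a, 1.0)
```

Once `count` reaches `1/p_b`, the effective probability saturates at 1 and a drop is forced, which is exactly what bounds the gap between drops.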

[Plot: number of packets dropped vs. number of connections (20 to 110); curves for MRED, ARED, and NRED.]

Figure 6.8A: Number of packets dropped over number of connections

[Plot: number of packets dropped vs. link capacity; curves for MRED, ARED, and NRED.]

Figure 6.8B: Number of packets dropped over link capacity


[Plot: number of packets dropped vs. link propagation delay (50 to 300 ms); curves for MRED, ARED, and NRED.]

Figure 6.8C: Number of packets dropped over link propagation delay

6.3 Summary

After comparing the performance of the three dropping algorithms, we conclude that RED's uniformly distributed random dropping algorithm does not show a clear advantage over the ARED and NRED dropping algorithms. Furthermore, the NRED dropping algorithm shows overall better performance than MRED in our simulation scenario.


CHAPTER 7

CONCLUSION

In this thesis, we thoroughly investigate *RED's occupancy and performance by testing the effect of its queue averaging mechanism, drop probability function, and dropping algorithm through simulations. We conclude as follows:

The maximum buffer size that can be used by a *RED algorithm is limited by an upper bound. For a given q_w, the upper bound is the sum of two terms. One is the average queue size just before *RED begins to drop all arriving packets. The other term is 1/q_w, which represents the number of bursty packets that the queue can accept.
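This bound can be sketched numerically; the function name and the example values below are illustrative, not taken from the simulations:

```python
def red_buffer_upper_bound(avg_at_cutoff, q_w):
    """Upper bound on the buffer space a *RED queue can actually use.
    avg_at_cutoff: average queue size just before *RED drops every arrival
    q_w:           weight used by the queue-averaging mechanism
    The 1/q_w term approximates the burst of packets the instantaneous
    queue can still accept while the slow-moving average catches up."""
    return avg_at_cutoff + 1.0 / q_w

# Example: with avg_at_cutoff = 100 packets and q_w = 0.002,
# the bound is 100 + 1/0.002 = 600 packets.
```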

The queue averaging mechanism in the *RED algorithm introduces a feedback delay of the congestion information. Setting the q_w value equal to or close to 1 can significantly improve the stability of the queue.
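The feedback delay comes from the exponentially weighted moving average that *RED maintains; a minimal sketch of the standard update (variable names ours):

```python
def ewma_update(avg, q, q_w):
    """RED queue averaging: avg <- (1 - q_w) * avg + q_w * q.
    A small q_w smooths heavily, so the average lags the real queue length
    (delayed congestion information); q_w = 1 makes the average equal the
    instantaneous queue length, removing the feedback delay."""
    return (1.0 - q_w) * avg + q_w * q
```

With q_w = 1 the update simply returns q itself, which is why values at or near 1 stabilize the queue in our experiments.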

To regulate traffic effectively, we need to set a proper early-drop probability and drop starting position. A very high or low drop probability will damage AQM's ability to regulate unstable traffic. Similarly, a very early or late drop starting position will also reduce AQM's ability.
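The standard RED drop probability function exposes both of these knobs: min_th sets the drop starting position and max_p sets how aggressive the early drop is. A sketch with our own variable names:

```python
def red_drop_probability(avg, min_th, max_th, max_p):
    """Early-drop probability as a function of the average queue size:
    0 below min_th (the drop starting position), rising linearly to max_p
    at max_th (the early-drop aggressiveness), and 1 at or beyond max_th,
    where every arriving packet is dropped."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)
```

Shifting min_th moves the drop starting position earlier or later; raising max_p steepens the ramp, i.e. makes early dropping more aggressive.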

MRED overall shows better performance than Drop Tail and GRED in our simulation scenarios. It is also more stable and robust than Drop Tail and GRED.


*RED's uniformly distributed random dropping algorithm is not optimal. The NRED dropping algorithm shows overall better performance than *RED's dropping algorithm in our simulation scenarios.

The optimal parameter setting of MRED or NRED depends on the network

scenario, which is a topic for future research.

