
Performance Analysis of TCP Variants (Using NS-2)

Anirudh Mittal

College of Engineering

Northeastern University

Boston, MA 02115

[email protected]

Sahil Jain

College of Engineering

Northeastern University

Boston, MA 02115

[email protected]

I. INTRODUCTION

Transmission Control Protocol (TCP) is a connection-oriented transport-layer protocol that provides flow control, reliability, and congestion control.

In this paper we analyze the performance of different TCP variants under different load conditions. The main motive behind this experiment is to find the most suitable TCP variant for a given scenario. Using the statistics and graphs produced in this experiment, we can easily compare the performance of the different TCP variants. TCP variants have improved considerably since TCP first evolved, and it is interesting to study the shortcomings of one variant that led to the development of the next.

Now the question is: was the performance of these new variants satisfactory, or did the improvement in one parameter result in a trade-off with the performance of another?

This question led us to perform experiments analyzing the trade-offs in the performance of different TCP variants. We studied TCP performance under congestion, fairness between TCP variants, and the influence of queuing. These experiments provided packet-by-packet data for studying the performance of the TCP variants.

II. METHODOLOGY

We used NS-2 to conduct these experiments, firstly because the simulation environment needs both run-time speed for detailed protocol simulation and fast iteration when parameters are varied. C++ provides the former and Tcl the latter, and NS-2 incorporates both languages, making it an ideal simulator. In addition, NS-2 logs an entry whenever an event is generated, so we can perform a packet-by-packet analysis of the network.

A Tcl script is written that starts by creating a Simulator object instance, which performs tasks such as creating compound objects like nodes and links, creating connections between agents, and so on. Different topologies were designed in Tcl under the conditions given below (a minimal Tcl sketch of this setup appears after Fig 2.1):

1. Each link between two nodes: 10 Mbps

2. CBR rate: varied from 1 Mbps to 10 Mbps in steps of 0.5 Mbps

3. Delay on each link: 2 ms

Fig 2.1 – Topology

We specifically chose these values because we wanted a comparable variation between the two plots. The CBR rate was varied because we needed to analyze performance as congestion changed. A delay of 2 ms was chosen because it gave the best comparative performance by generating a large number of packets.
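The following minimal Tcl sketch illustrates this setup. The node names, the 20 s run time, and the fixed CBR value shown are illustrative assumptions, not the authors' exact script, which sweeps the CBR rate and swaps the TCP class per run.

set ns [new Simulator]
set tf [open out.tr w]
$ns trace-all $tf

# Four nodes in a chain; node 2 - node 3 carries the shared bottleneck traffic.
set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]
$ns duplex-link $n1 $n2 10Mb 2ms DropTail
$ns duplex-link $n2 $n3 10Mb 2ms DropTail
$ns duplex-link $n3 $n4 10Mb 2ms DropTail

# TCP flow from node 1 to node 4 (swap the class per variant:
# Agent/TCP for Tahoe, Agent/TCP/Reno, Agent/TCP/Newreno, Agent/TCP/Vegas).
set tcp [new Agent/TCP/Vegas]
$ns attach-agent $n1 $tcp
set sink [new Agent/TCPSink]
$ns attach-agent $n4 $sink
$ns connect $tcp $sink
set ftp [new Application/FTP]
$ftp attach-agent $tcp

# Competing CBR-over-UDP flow on the bottleneck, node 2 to node 3.
set udp [new Agent/UDP]
$ns attach-agent $n2 $udp
set null [new Agent/Null]
$ns attach-agent $n3 $null
$ns connect $udp $null
set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp
$cbr set rate_ 5Mb        ;# swept from 1Mb to 10Mb in 0.5Mb steps per run

proc finish {} {
    global ns tf
    $ns flush-trace
    close $tf
    exit 0
}
$ns at 0.0 "$ftp start; $cbr start"
$ns at 20.0 "finish"      ;# 20 s run time is an assumed value
$ns run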

Running the simulation creates a trace file with the *.tr extension, which records the simulation traces. We also implemented TCP and UDP agents in the simulation script and scheduled events for them.

To parse and analyze the results in the trace file, AWK was used because its associative arrays are convenient for storing linked data. An *.awk script reads the trace file, calculates the desired values, and writes the final output to a *.dat file. The outputs in the *.dat file are then used to generate graphs; Microsoft Excel, being well suited to analyzing statistical data, was used for the plotting.
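The parser itself is written in AWK; as a rough sketch of the same logic in Tcl (the language already used above), the loop below walks the *.tr file, whose lines follow the standard ns-2 trace format "event time from to type size flags fid src dst seq id", and totals the bytes received at the sink. The sink node id and file name are assumptions tied to the sketch above.

set fp [open out.tr r]
set bytes 0
set lastTime 0.0
while {[gets $fp line] >= 0} {
    set f [regexp -all -inline {\S+} $line]
    lassign $f event time from to type size
    # "r" events at the sink node (id 3 in the sketch above) are
    # successfully received packets.
    if {$event eq "r" && $type eq "tcp" && $to == 3} {
        incr bytes $size
        set lastTime $time
    }
}
close $fp
# Same formula as Section III: throughput in Kbps.
puts "Throughput: [expr {$bytes * 8.0 / ($lastTime * 1024)}] Kbps"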

III. SIMULATION AND RESULTS

A. Experiment 1 – TCP Performance Under Congestion

The CBR flow and the TCP flow were started at the same time instant. The TCP packets travel between node 1 and node 4, and the CBR rate is varied between node 2 and node 3 from 1 Mbps to 10 Mbps (the bottleneck capacity) in increments of 0.5 Mbps. TCP packets start from node 1 and enter the queue; they are removed from the queue before being received at node 2, and the same process repeats when a packet is transmitted to node 3. However, due to congestion in the network a packet may sometimes be dropped. If node 1 does not receive an ACK for the sequence number it transmitted, it retransmits the packet until the ACK is received. Once the packet reaches node 3, it is sent on to node 4, and an ACK leaves node 4 for node 3 on its way back to the source.

1) Throughput: The successfully received TCP packets were observed at the TCP sink. The delay was calculated for every packet, and the total bytes received were computed up to the last packet received at that node:

Throughput (Kbps) = (Bytes Received × 8) / (Total run-time × 1024)
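As an illustrative sanity check of the units (the numbers are hypothetical, not measured values): a sink that receives 2,560,000 bytes over a 20 s run gives (2,560,000 × 8) / (20 × 1024) = 1000 Kbps, i.e. roughly 1 Mbps.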

Fig 3.1.1 – Throughput

Result: TCP Vegas > NewReno > Tahoe > Reno

Reason: TCP Vegas predicts congestion well in advance and hence shows better performance; NewReno comes next because it handles multiple packet losses better than Reno.

2) Latency: The successfully received TCP packets at the TCP sink were observed, and their time-stamps and sequence numbers were saved in associative arrays. The acknowledgment packets for these received packets were then observed at the TCP generator, and the same parameters were saved in associative arrays. In this way the RTT of each packet was obtained, and latency was calculated as:

Average Latency = (Sum of RTTs) / (Total number of acknowledged packets)
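A sketch of this bookkeeping, again in Tcl standing in for the authors' AWK (the source node id 0 and the exact meaning of the trace's seq field for ACKs are assumptions):

set fp [open out.tr r]
array set sendTime {}
set rttSum 0.0
set acked 0
while {[gets $fp line] >= 0} {
    set f [regexp -all -inline {\S+} $line]
    lassign $f event time from to type size flags fid src dst seq
    if {$event eq "+" && $type eq "tcp" && $from == 0} {
        # First enqueue at the source: remember the send time-stamp,
        # keyed by sequence number in an associative array.
        if {![info exists sendTime($seq)]} { set sendTime($seq) $time }
    } elseif {$event eq "r" && $type eq "ack" && $to == 0} {
        # Matching ACK back at the source: one completed round trip.
        if {[info exists sendTime($seq)]} {
            set rttSum [expr {$rttSum + ($time - $sendTime($seq))}]
            incr acked
            unset sendTime($seq)
        }
    }
}
close $fp
puts "Average Latency: [expr {$rttSum / $acked}] s"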

Fig 3.1.2 – Latency

Result: TCP Vegas > NewReno > Tahoe > Reno (Performance)

Reason: The time taken for TCP Vegas's acknowledgment packets to reach the source is very small, so its RTT is low, making its performance much better. Reno comes last because it detects congestion only after three duplicate ACKs, so its RTT is optimized late.

3) Packet Drop Rate: The total number of TCP packets dropped at all the nodes was counted using a counter:

Packet Drop Rate = Dropped packets / Total run-time
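Correspondingly, ns-2 logs a "d" event for every dropped packet, so the counter is a simple test over the trace (a sketch under the same assumptions as the earlier snippets):

set fp [open out.tr r]
set drops 0
set endTime 0.0
while {[gets $fp line] >= 0} {
    set f [regexp -all -inline {\S+} $line]
    lassign $f event time from to type
    if {$event eq "d" && $type eq "tcp"} { incr drops }
    set endTime $time
}
close $fp
puts "Packet Drop Rate: [expr {$drops / $endTime}] packets/s"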

Fig 3.1.3 – Packet Drop Rate

Result: TCP Vegas > Reno > Tahoe > NewReno (Performance)

Reason: Since Vegas predicts congestion much better than the others, the packet drops (which happen because of congestion) do not occur. Reno performs worst, as it only comes to know about congestion after three repeated ACKs.

B. Experiment 2 – Fairness between TCP Variants

The CBR flow and the TCP flows were started at the same time instant. The packets of TCP variant 1 travel between node 1 and node 4, while the packets of TCP variant 2 travel between node 5 and node 6. The CBR rate is varied between node 2 and node 3 from 1 Mbps to 10 Mbps (the bottleneck capacity) in increments of 0.5 Mbps. A sketch of attaching the two variants follows.
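A minimal sketch of how the two flows might be attached (node variables follow the Experiment 1 sketch, with two extra nodes n5 and n6 assumed to be linked into the topology):

# Variant 1, e.g. NewReno, from node 1 to node 4.
set tcp1 [new Agent/TCP/Newreno]
$tcp1 set fid_ 1                  ;# flow id, used later to separate flows
$ns attach-agent $n1 $tcp1
set sink1 [new Agent/TCPSink]
$ns attach-agent $n4 $sink1
$ns connect $tcp1 $sink1

# Variant 2, e.g. Reno, from node 5 to node 6.
set tcp2 [new Agent/TCP/Reno]
$tcp2 set fid_ 2
$ns attach-agent $n5 $tcp2
set sink2 [new Agent/TCPSink]
$ns attach-agent $n6 $sink2
$ns connect $tcp2 $sink2

The fid_ values appear in the trace file's flow-id field, which is how the per-variant drop counting described below separates the two flows.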

1) Reno/Reno – Two Reno variants were attached to nodes 1 and 5 respectively.

a) Throughput: The successfully received TCP packets were observed at the TCP sinks of the respective variants. The delay was calculated for every packet, and the total bytes received were computed up to the last packet received at the nodes.

Fig 3.2.1.1 – Throughput


Result: Either Reno flow can start with the higher throughput, and the two will try to be fair to each other, hence moving toward equilibrium.

b) Latency: The successfully received TCP packets were observed at the TCP sinks of the respective variants. The time-stamps and sequence numbers were saved in associative arrays at nodes 4 and 6. The acknowledgment packets for these received packets were observed at the TCP generators, and the same parameters were saved in the associative arrays.

Fig 3.2.1.2 – Latency

Result: Both Reno flows try to reach equilibrium.

c) Packet Drop Rate: The total number of TCP packets dropped at all the nodes was counted using a counter and the flow_id. The flow id was used to keep track of the number of dropped packets for each TCP variant.

Fig 3.2.1.3 – Packet Drop Rate

Result: It depends on which flow's packet happens to be dropped first; more duplicate ACKs then become associated with that flow id, letting the other variant obtain the better throughput.

2) NewReno/Reno – NewReno was attached at node 1 and Reno at node 5.

a) Throughput:

Fig 3.2.2.1 – Throughput

Result – NewReno > Reno

Reason – NewReno leads in throughput because it does not wait for the retransmission timer to expire.

b) Latency:

Fig 3.2.2.2 – Latency

Result – NewReno > Reno (Performance)

Reason – In Reno, a lot of ACKs accumulate before a retransmission takes place, increasing the net RTT and thereby giving high latency.

c) Packet Drop Rate:

Fig 3.2.2.3 – Packet Drop Rate


Result – Reno > NewReno (Performance)

Reason – Since NewReno unnecessarily sends packets without waiting for the retransmission timer to expire, packets that have already arrived successfully may be sent again and dropped at the sink.

3) Vegas/Vegas – Two Vegas variants were attached to nodes 1 and 5 respectively.

a) Throughput:

Fig 3.2.3.1 – Throughput

Result – Both Vegas flows try to get the best throughput and, in pursuit of this, move toward equilibrium.

b) Latency –

Fig 3.2.3.2 – Latency

Result – Both Vegas flows try to reach equilibrium.

Shocking Result – One variant suddenly increases its latency while the other maintains it.

c) Packet Drop Rate –

Fig 3.2.3.3 – Packet Drop Rate

Result – Under similar conditions, both Vegas flows have almost the same packet drop rate.

4) NewReno/Vegas – NewReno was attached at node 1 and Vegas at node 5.

a) Throughput –

Fig 3.2.4.1 – Throughput

Result – NewReno > Vegas

Reason – Vegas tries to predict congestion and hence keeps its window size small, resulting in less data being transferred, whereas NewReno increases its window continuously.


b) Latency –

Fig 3.2.4.2 – Latency

Result – Vegas > NewReno (Performance)

Reason – NewReno waits for many ACKs, which results in high latency.

c) Packet Drop Rate –

Fig 3.2.4.3 – Packet Drop Rate

Result – Vegas > NewReno (Performance)

Reason – Congestion control is better in Vegas, so packet drops (which generally occur due to congestion) are observed less often.

C. Experiment 3 – Influence of Queuing

For this experiment, the Tcl script is changed to select either DropTail or RED queuing, as sketched below.
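In ns-2 the queuing discipline is chosen when a link is created, so the change amounts to one word in the link definition (a sketch; node names follow the earlier topology and the queue size is an assumed value):

# The queuing discipline is the last argument of the link definition.
set qtype DropTail                ;# or RED
$ns duplex-link $n2 $n3 10Mb 2ms $qtype
$ns queue-limit $n2 $n3 50        ;# assumed queue size in packets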

1) Reno DropTail/RED Performance –

Fig 3.3.1 – Throughput

Result – DropTail > RED

Reason – RED does not let the whole queue fill, thereby maintaining a particular throughput, whereas DropTail lets the whole queue fill before it starts dropping. Hence, because a larger number of packets are queued, the throughput increases.

2) SACK DropTail/RED Performance –

Fig 3.3.2 – Throughput

Result – DropTail > RED

Reason – Same reason as above.

3) Reno DropTail/RED Latency –

Fig 3.3.3 – Latency

Result – RED > DropTail (Performance)

Reason – RED reduces congestion better than DropTail, so there is less traffic, making the RTT more optimized.


4) SACK DropTail/RED Latency –

Fig 3.3.4 – Latency

Result – DropTail ~ RED (Performance)

Reason – Since SACK sends only selective ACKs, the RTT remains almost the same in both cases, making the latencies almost equal.

IV. CONCLUSION

The experiments were successfully performed using NS-2. From the analysis of the above experiments we conclude that TCP Vegas performed best overall because of its congestion-prediction behavior, and that NewReno also gave good results.
