Final Phase 1 Document
7/28/2019 Final Phase 1 Document
http://slidepdf.com/reader/full/final-phase-1-document 1/31
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
Many applications nowadays require fast data transfer over high-speed
wireless networks. However, because of its conservative congestion control
algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the
network capacity in lossy wireless networks. Here a receiver-assisted congestion
control mechanism (RACC) is proposed, in which the sender performs loss-based control
while the receiver performs delay-based control. The receiver measures the
network bandwidth from the packet inter-arrival interval and uses it to
compute a congestion window size deemed appropriate for the sender. After
receiving the advertised value feedback from the receiver, the sender then uses
the additive increase and multiplicative decrease (AIMD) mechanism to compute
the correct congestion window size to be used. By integrating loss-based
and delay-based congestion control, this mechanism can mitigate the
effect of wireless losses, alleviate the timeout effect, and therefore make better
use of the network bandwidth. Here the congestion window determines the number of
bytes that can be outstanding at any time; its size is calculated by
estimating how much congestion there is between the two endpoints.
1.2 RECEIVER-ASSISTED CONGESTION CONTROL
Transmitting bulk data over high-speed links is a requirement for many
applications. Sometimes an ordinary wired network cannot satisfy this
requirement, and it is better to migrate to wireless networks.
However, the delay-throughput performance is then often unsatisfactory,
because the Transmission Control Protocol (TCP) used for bulk data transfer
suffers significant throughput degradation and very high interactive
delay over wireless networks. In addition, there is an increasing
interest in, and demand for, accessing the Internet via high-bandwidth wireless networks
anytime and anywhere. Thus, it is very desirable to improve TCP performance in
wireless networks, because TCP traffic accounts for about 90% of all
Internet traffic nowadays.
To understand the poor performance of TCP in wireless networks,
one needs to understand how TCP operates. Upon a timeout, or upon receiving
several consecutive duplicate acknowledgments (ACKs), TCP treats the event
as a packet loss and reduces its congestion
window. This technique is very efficient in a traditional wired network.
Unfortunately, packet loss in a wireless network may also be due to
transmission problems such as a high link error probability, fading, and
interference. Therefore, packet loss is no longer an appropriate indication for
network congestion. Acting on this incorrect inference, TCP may
reduce its congestion window unnecessarily, resulting in poor
performance over wireless networks.
Another problem of TCP is its poor capability to utilize the network
bandwidth efficiently, especially in networks with a high bandwidth-delay product
(BDP). The bandwidth-delay product is the product of a data link's capacity (in bits per second) and its end-to-end delay (in seconds). The result, an amount
of data measured in bits (or bytes), is the maximum amount of data that can be
on the network circuit at any given time, i.e. data that has been transmitted but
not yet received. Sometimes it is calculated as the data link's capacity times its
round-trip time. For example, a 40 Mb/s link with a 100 ms round-trip time has a
BDP of 4 Mbit, i.e. about 500 kB in flight. The standard TCP congestion avoidance algorithm employs
an additive increase and multiplicative decrease (AIMD) scheme.
When there is no packet loss detected, the congestion window (cwnd)
is increased by one maximum segment size (MSS) every round-trip time (RTT).
Otherwise the TCP sender reduces cwnd by half if the packet loss is detected by
three duplicate ACKs or reduces cwnd to one if the packet loss is detected by
timeout. In a high-speed network with a large RTT, TCP requires a very large
window to efficiently utilize the network resources. What is worse, upon a
retransmission timeout the sender has to wait for a time duration before sending
any new packets, and this waiting time greatly reduces the TCP throughput.
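The AIMD reactions just described can be sketched as a single window-update rule (a simplified model for illustration only, not the actual TCP implementation; the function name and the segment-based units are assumptions):

```python
def update_cwnd(cwnd, event, mss=1):
    """Simplified standard-TCP congestion-avoidance reaction (window in segments).

    event is one of:
      "ack"     - a full window of ACKs arrived: additive increase by 1 MSS
      "dupack3" - loss detected by three duplicate ACKs: multiplicative decrease
      "timeout" - loss detected by retransmission timeout: collapse to 1 MSS
    """
    if event == "ack":
        return cwnd + mss          # +1 MSS per RTT
    if event == "dupack3":
        return max(cwnd / 2, mss)  # halve, but never below one segment
    if event == "timeout":
        return mss                 # back to one segment
    raise ValueError(event)
```

The timeout branch is exactly why a large-RTT path hurts: a whole window of capacity is surrendered and must be rebuilt one segment per RTT.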
Unlike regular TCP, where the receiver only performs flow control, here
the receiver is allowed to participate in congestion control. A timer is used at the
receiver to time the arrival of the next packet and thereby detect a packet
drop if a timeout occurs. Since the receiver always detects a packet drop earlier
than the sender, it can send an ACK earlier to inform the sender about the
timeout the sender is going to see. This would greatly reduce the waiting time of
the sender to retransmit a lost packet.
The receiver can then estimate the rate the sender should adopt in order
to make the best use of this measured bandwidth. The rate is advertised to the
sender by embedding the rate information in the ACKs returned to the sender.
The sender can then adjust its congestion window based on both the receiver
advertised sending rate and the AIMD mechanism. Through this rate-based
control, our mechanism can make better use of the available bandwidth, and
through the window-based control, our mechanism can make good use of the
network buffer. Consequently, the afore-described mechanism can improve the
throughput performance in lossy high-speed wireless networks. Simulation and
preliminary experimental results suggest that our mechanism is a promising
algorithm to achieve high link utilization while remaining friendly with regular
TCP.
CHAPTER 2
LITERATURE SURVEY
2.1 P2P - BASED DATA SYSTEM FOR THE EAST EXPERIMENT
Peer-to-Peer (P2P) file-sharing applications are a popular way of exchanging files
directly between end users across the Internet. Many applications such as
Objectivity/DB have been operating in a peer-to-peer environment at the Stanford Linear
Accelerator Center (SLAC) and CERN. SLAC is currently using Objectivity/DB to store
over 200 Terabytes of online data that is fully distributed over 250 Linux servers. The
new design for the EAST data system is instead based on the P2P networking
environment, to exploit enhanced capabilities from this configuration.
P2P networking is the utilization of the relatively powerful computers (PCs) for more
than just client-based computing tasks. The modern PC has a very fast processor, vast
memory, and a large hard disk, none of which are being fully utilized when
performing common computing tasks. The modern PC can easily act as both a client
and server for many types of applications.
2.2 BINARY INCREASE CONGESTION CONTROL (BIC) FOR FAST LONG-DISTANCE NETWORKS
High-speed networks with large delays present a unique environment where TCP
may have a problem utilizing the full bandwidth. Several congestion control
proposals have been suggested to remedy this problem. The existing protocols
consider mainly two properties: TCP friendliness and bandwidth scalability. That is, a protocol should not take away too much bandwidth from standard TCP flows while
utilizing the full bandwidth of high-speed networks. This paper presents another
important constraint, namely, RTT (round trip time) unfairness where competing
flows with different RTTs may consume vastly unfair bandwidth shares. Existing
schemes have a severe RTT unfairness problem because the congestion window
increase rate gets larger as the window grows – ironically the very reason that makes
them more scalable. RTT unfairness for high-speed networks occurs distinctly with
drop tail routers for flows with large congestion windows where packet loss can be
highly synchronized. After identifying the RTT unfairness problem of existing
protocols, this paper presents a new congestion control scheme that alleviates RTT
unfairness while supporting TCP friendliness and bandwidth scalability.
2.3 FAST TCP: MOTIVATION, ARCHITECTURE, ALGORITHMS, PERFORMANCE
FAST TCP is a congestion control algorithm for high-speed, long-latency networks,
covered from design to implementation. The paper describes the approach taken by
FAST TCP to address, at both the packet and flow levels, the four difficulties that
the current TCP implementation has at large windows. It presents the architecture,
summarizes some of the algorithms implemented in the prototype, and characterizes
the equilibrium and stability properties of FAST TCP.
2.4 A COMPOUND TCP APPROACH FOR HIGH-SPEED AND LONG-DISTANCE NETWORKS
TCP provides reliable data transmission with embedded congestion control algorithm
which effectively removes congestion collapses in the Internet by adjusting the
sending rate according to the available bandwidth of the network. However, although
TCP achieves remarkable success (maximizing the utilization of the link and fairly
sharing bandwidth between competing flows) in today's Internet environment, it
has been reported that TCP substantially underutilizes network bandwidth over high-
speed and long distance networks.
2.5 TCP WESTWOOD: BANDWIDTH ESTIMATION FOR ENHANCED TRANSPORT OVER WIRELESS LINKS
Effective error and congestion control for heterogeneous (wired and wireless)
networks has been an active area of research recently. End-to-end, Link Layer, and
Split Connection approaches have been suggested and their relative merits
extensively assessed in recent studies. One conclusion drawn from these studies is
that while end-to-end schemes are not as effective as local recovery techniques in
handling wireless losses, they are promising, since significant gains can be achieved
without extensive support at the network layer in routers and base stations.
CHAPTER 3
SYSTEM IMPLEMENTATION
Ever since the poor performance of TCP in high-speed, high link error
probability environments was pointed out, numerous solutions
have been proposed to deal with it. These approaches can be generally classified
into several categories based on the location of the control center. They are the
sender-centric methods, the receiver-centric methods, and the mobile-host-
centric methods.
Unlike the three types of congestion control mechanisms classified above,
here a hybrid congestion control method is proposed, combining
sender-centric and receiver-centric methods as well as delay-based and
loss-based control. We can improve TCP performance in lossy
high-speed networks using the following two approaches. The first one is to
reduce the waiting time of a sender to alleviate the impact of timeout. The second
one is to combine the window-based control of the sender and the delay-based
control of the receiver in order to lower the impact of packet loss on TCP
performance and to improve the bandwidth utilization.
3.1 MECHANISM DESCRIPTION
In this system, the receiver not only performs the function of flow
control, but also participates in the congestion control. It first measures the
bandwidth, and then computes an appropriate congestion window size based on
the measured bandwidth and the RTT. To perform these functions, the receiver
has to maintain two timers: one for recording the packet inter-arrival
interval and the other for measuring the RTT. The sender makes use of this
information from its receiver to adjust the congestion window. The RTT is the
length of time it takes for a signal to be sent plus the length of time it takes for
an acknowledgment of that signal to be received. In the context of computer
networks, the signal is generally a data packet, and the RTT is also known
as the ping time. An Internet user can determine the RTT by using the
ping command.
3.1.1 RECEIVER FUNCTION

The receiver measures the bandwidth from the packet inter-arrival
interval. This method can remedy the oscillation in the bandwidth estimation
of TCP Westwood mentioned earlier. Let Bw be the measured bandwidth, L the
data packet size, and t_int the packet inter-arrival interval. Then one can
estimate the available bandwidth as Bw = L / t_int for each packet arrival.
A moving-average method is used within each congestion window. Let Bw_i be
the i-th measured bandwidth. Then the bandwidth estimate is continuously
updated by

Bw = α Bw + (1 − α) Bw_i,

where α is an exponential filter coefficient. From our experiments, α = 0.9
is a good value, because the earlier averaged values should carry a higher
weight to smooth out measurement variations. The receiver measures the RTT
based on the DATA packet arrival times; the algorithm is described in the
next section.

3.1.2 SENDER’S FUNCTION

As the receiver can promptly help the sender to increase the
congestion window according to the instantaneous available bandwidth, the
sender only needs to maintain the AIMD mechanism in the congestion avoidance
stage, and the slow start stage can be eliminated.
Upon a timeout event (whether it is detected by the sender's timer or
signaled by the receiver's ACK), the sender will decrease the congestion window
to one, considering that the network is congested and will need some
time to recover. If congestion is mitigated after one RTT, the sender will
recover/adjust the congestion window in the next window by using the receiver
advertised window.
During fast retransmission, the sender sets the congestion window size
to the lesser value of the receiver advertised window size and the current size.
Since the packet loss may also indicate congestion, we should reduce the
congestion window. On the other hand, if the congestion window is less than
the receiver advertised window, the network may only be mildly congested. Therefore, it is unnecessary for the sender to reduce the congestion window.
In a normal state, when the sender receives an ACK, it will compare
the current congestion window with the receiver advertised congestion window.
If the receiver advertised window is larger than the sender's congestion
window, and the difference is larger than a predefined threshold, the sender
will set the congestion window size to the receiver advertised window.
Otherwise, the sender will ignore the receiver advertised congestion window by
just performing the additive increase mechanism.
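The sender-side rules of the last three paragraphs (timeout, fast retransmission, and the normal ACK case) can be collected into one sketch. The function name is illustrative, and since the document does not specify the predefined threshold, `threshold=4` segments is an assumed value:

```python
def sender_adjust(cwnd, adv_wnd, event, threshold=4, mss=1):
    """Sketch of the RACC sender rules (window in segments).

    adv_wnd is the receiver-advertised congestion window carried in the ACK.
    """
    if event == "timeout":
        return mss                 # assume congestion: collapse to one segment
    if event == "fast_retransmit":
        return min(adv_wnd, cwnd)  # lesser of advertised and current window
    if event == "ack":
        if adv_wnd - cwnd > threshold:
            return adv_wnd         # jump to the receiver-advertised window
        return cwnd + mss          # otherwise plain additive increase
    raise ValueError(event)
```

Note how the advertised window only ever helps: it can lift a too-small window quickly, but a mild difference falls back to the conservative additive increase.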
3.2 ALGORITHM
i) When an ACK is sent, if rcv.rtt is zero, set rcv.rtt to 1 and record the
corresponding sequence number (rtseq) of the data packet (equal to the sum of the
ACK sequence number and the current congestion window). Otherwise, just send
the ACK.
ii) If a data packet with a sequence number larger than rtseq arrives in
order and the newly measured packet inter-arrival interval is larger than two
times the pre-estimated packet inter-arrival interval, set the new measured
RTT to the value of rcv.rtt, and let rcv.rtt be zero.
iii) On each clock cycle, if rcv.rtt is not zero, rcv.rtt is incremented by 1.
Then, the retransmission timer is set based on the measured RTT. Next,
the receiver uses the RTT to convert the bandwidth to the receiver congestion window
value rwnd using

rwnd = Bw * RTT.

Then, it compares rwnd with the available receiver buffer and
deposits the lesser value in the advertised window field (adv_wnd) of an ACK
going back to the sender. In other words, the receiver advertised window
adv_wnd = min(r_abuf, rwnd) not only serves the original flow control function,
but also takes on the congestion control function.
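The receiver-side pipeline (per-packet bandwidth samples smoothed by the exponential filter of Section 3.1.1, then rwnd = Bw * RTT clamped by the free receiver buffer) can be sketched as follows; the function and parameter names are illustrative, the document gives only the formulas:

```python
def receiver_adv_wnd(intervals, rtt, packet_size, r_abuf, alpha=0.9):
    """Compute the advertised window the receiver feeds back to the sender.

    intervals  : packet inter-arrival intervals t_int (seconds)
    rtt        : receiver-measured round-trip time (seconds)
    packet_size: data packet size L (bytes)
    r_abuf     : available receiver buffer (bytes)
    """
    bw = None
    for t_int in intervals:
        bw_i = packet_size / t_int  # per-packet bandwidth sample Bw_i = L / t_int
        # exponential filter: Bw = alpha*Bw + (1 - alpha)*Bw_i
        bw = bw_i if bw is None else alpha * bw + (1 - alpha) * bw_i
    rwnd = bw * rtt                 # receiver congestion window (bytes)
    return min(r_abuf, rwnd)        # adv_wnd = min(r_abuf, rwnd)
```

With α = 0.9 the estimate favors history, which is exactly the smoothing the text asks for to damp per-packet measurement noise.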
3.3 SIMULATION
The algorithm is implemented in ns-2 software and simulated with 15
nodes. The simulated time is 190 seconds; the network topology is depicted in
Fig. 7.1 in Appendix. The nodes 1, 2, 3 and 4 are transmitters and are wired to
node 6, a TCP sender. Nodes 10, 11, 12 and 13 are receiving nodes, wired with the
TCP receiver node 14. Both end-links are wired, while the intermediate bottleneck
link is wireless with 40 Mb/s bandwidth. The nodes 6 and 14 are two routers
with a finite buffer capacity equal to the BDP. The packet error probability used
in this study below is due to the wireless channel error and not the packet loss rate from congestion.
Testing the impact of the packet error probability on network throughput
(Figure 7.3), it is observed that the throughput of standard TCP degrades as
the packet error probability increases, while this algorithm appears to be
robust to packet errors, maintaining a high throughput. Since
RACC can detect a congestion packet loss earlier through the receiver's
function and can use the receiver's advertised rate to adjust the congestion
window, it can better utilize the bandwidth and achieve better performance.
CHAPTER 4
PROJECT DESCRIPTION
4.1 TCP CONGESTION CONTROL
The main task of TCP congestion control is to adjust the sending rate of
the source in accordance with the state of the network. For this purpose, TCP
limits the amount of outstanding data. The congestion window (cwnd) represents
the maximum amount of data a sender can have sent and for which no
acknowledgment was yet received. In particular, when the source starts sending
data, TCP conservatively initializes cwnd to one packet and then doubles cwnd for
every window worth of ACKs received until congestion is detected. This
behavior is referred to as slow start (SS). Network congestion is identified by
packet losses. When loss is detected, cwnd is reduced to react to congestion and
to reduce the risk of losing more packets. The congestion avoidance algorithm is
then used to slowly increase cwnd in a linear manner by one packet for every
window worth of packets that are acknowledged. This is called the congestion
avoidance (CA) phase.
TCP uses two error recovery mechanisms. The first one is timeout
based and relies on the re-transmission timeout (RTO) to decide when to
retransmit. At the beginning, the RTO is initialized to a conservatively chosen
value Initial Timeout (ITO), which is often set to a value as high as 3 seconds, and
is later updated using the measured round-trip time (RTT) samples. When a source
does not receive the ACK for a packet within the RTO period, the timer expires.
The source then retransmits the packet, goes into slow start, and sets cwnd
to one packet. The second error recovery mechanism in TCP is called fast
retransmit. A packet is considered lost if the sender receives the same
acknowledgment four times in a row (i.e., three duplicate ACKs). The sender
then immediately retransmits the missing packet, sets cwnd to half its
previous value, and goes into congestion
avoidance. The detail of the subsequent behavior of TCP depends on the version
of TCP.
4.1.1 SLOW-START FOR CONGESTION CONTROL
Slow-start is part of the congestion control strategy used by
TCP, the data transmission protocol used by many Internet applications, such as
HTTP and Secure Shell. Slow-start is used in conjunction with other algorithms
to avoid sending more data than the network is capable of transmitting, that is,
network congestion. Slow-start is one of the algorithms that TCP uses to control
congestion inside the network. It is also known as the exponential growth phase.
During the exponential growth phase, slow-start works by increasing the TCP
congestion window each time an acknowledgment is received; it increases the
window size by the number of segments acknowledged. This continues until either
an acknowledgment is not received for some segment or a predetermined threshold
value is reached. If a loss event occurs, TCP assumes it is due to network
congestion and takes steps to reduce the offered load on the network. Once a
loss event has occurred or the threshold has been reached, TCP enters the
linear growth (congestion avoidance) phase. At this point, the window is
increased by one segment for each RTT. This continues until a loss event
occurs.
4.1.2 BASIC SLOW-START FOR CONGESTION CONTROL ALGORITHM
The algorithm begins in the exponential growth phase initially with a
congestion window size (cwnd) of 1 or 2 segments and increases it by 1 Segment
Size (SS) for each ACK received. This behavior effectively doubles the window
size each round trip of the network. This behavior continues until the congestion
window size (cwnd) reaches the size of the receiver's advertised window or until
a loss occurs.
When a loss occurs, half of the current cwnd is saved as the Slow Start
Threshold (SSThresh) and slow start begins again from its initial cwnd.
Once the cwnd reaches the SSThresh TCP goes into congestion avoidance
mode where each ACK increases the cwnd by SS*SS/cwnd. This results in a
linear increase of the cwnd.
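The two growth regimes above can be written as a per-ACK update (a simplified sketch in units of segments; function names are illustrative):

```python
def on_ack(cwnd, ssthresh, ss=1.0):
    """Per-ACK window growth: exponential below ssthresh, linear above.

    In slow start each ACK adds one segment, doubling cwnd every RTT;
    in congestion avoidance each ACK adds SS*SS/cwnd, which works out
    to roughly one segment per RTT.
    """
    if cwnd < ssthresh:
        return cwnd + ss            # slow start: +1 segment per ACK
    return cwnd + ss * ss / cwnd    # congestion avoidance: +1/cwnd per ACK

def on_loss(cwnd, ss=1.0):
    """On loss: save half the window as ssthresh, restart slow start."""
    return ss, max(cwnd / 2, 2 * ss)   # (new cwnd, new ssthresh)
```

Because a full window of cwnd ACKs arrives per RTT, the SS*SS/cwnd increment sums to about one segment per RTT, which is the linear phase the text describes.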
4.2 CONGESTION CONTROL
Congestion control concerns controlling traffic entry into a
telecommunications network, so as to avoid congestive collapse by attempting to
avoid oversubscription of any of the processing or link capabilities of the
intermediate nodes and networks and taking resource reducing steps, such as
reducing the rate of sending packets. It should not be confused with flow control,
which prevents the sender from overwhelming the receiver.
There are many ways to classify congestion control algorithms:
• By the type and amount of feedback received from the network: loss, delay,
single-bit or multi-bit explicit signals
• By incremental deployability on the current Internet: only the sender needs
modification; sender and receiver need modification; only the router needs
modification; sender, receiver and routers need modification
• By the aspect of performance it aims to improve: high bandwidth-delay
product networks; lossy links; fairness; advantage to short flows;
variable-rate links
• By the fairness criterion it uses: max-min, proportional, "minimum potential
delay"
4.2.1 CONGESTION WINDOW
In TCP, the congestion window determines the number of bytes
that can be outstanding at any time. This is a means of stopping the link
between two places from getting overloaded with too much traffic. The size of
this window is calculated by estimating how much congestion there is between
the two places. The sender maintains the congestion window. When a connection
14
7/28/2019 Final Phase 1 Document
http://slidepdf.com/reader/full/final-phase-1-document 15/31
is set up, the congestion window is set to the maximum segment size (MSS)
allowed on that connection. Further variation of the congestion window is dictated
by an additive increase/multiplicative decrease approach. This means that if all
segments are received and the acknowledgments reach the sender on time, some
constant is added to the window size. The window keeps growing linearly until
a timeout occurs or the receiver reaches its limit. If a timeout occurs, the
window size is halved.
4.2.2 ADDITIVE INCREASE/MULTIPLICATIVE DECREASE ALGORITHM
The Additive Increase/Multiplicative-Decrease (AIMD) algorithm is
a feedback control algorithm used in TCP congestion avoidance. Basically, AIMD
combines linear growth of the congestion window with a multiplicative
reduction when congestion takes place.
The approach taken is to increase the transmission rate (window size),
probing for usable bandwidth, until loss occurs. The policy of additive increase
basically says to increase the congestion window by 1 MSS (Maximum segment
size) every RTT (round-trip time) until a loss is detected.
15
7/28/2019 Final Phase 1 Document
http://slidepdf.com/reader/full/final-phase-1-document 16/31
When loss is detected, the policy changes to multiplicative decrease: the
congestion window is cut in half after the loss. The result is a sawtooth
behavior that represents the probe for bandwidth. A loss event is generally
defined as either a timeout or the receipt of three duplicate ACKs. Also
related to TCP congestion control is the slow start mechanism. Other policies
or algorithms for fairness in congestion control are Additive Increase Additive
Decrease (AIAD), Multiplicative Increase Additive Decrease (MIAD) and
Multiplicative Increase Multiplicative Decrease (MIMD).
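A toy trace shows the sawtooth described above, under the assumption of a fixed bottleneck capacity of 8 segments at which every loss occurs (purely illustrative, not a network simulation):

```python
def aimd_trace(rounds, capacity=8, cwnd=1.0):
    """Toy AIMD sawtooth: +1 MSS per RTT, halve when the assumed
    capacity is exceeded. Returns the cwnd value after each round."""
    trace = []
    for _ in range(rounds):
        cwnd += 1.0            # additive increase: one MSS per RTT
        if cwnd > capacity:    # loss event at the assumed bottleneck
            cwnd /= 2.0        # multiplicative decrease: cut in half
        trace.append(cwnd)
    return trace
```

Plotting a longer trace, e.g. `aimd_trace(40)`, reproduces the familiar sawtooth: the window climbs to the capacity, is halved, and climbs again.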
CHAPTER 5
SOFTWARE DESCRIPTION
The main software used for the simulation in this work is ns-2 (Network Simulator 2).
5.1 NS-2

NS-2 is an object-oriented, discrete-event network simulator
developed at UC Berkeley. It is written in C++ and OTcl (object-oriented
Tcl) and primarily uses OTcl as its command and configuration language. NS is
mainly used for simulating local and wide area networks, and it simulates a wide
variety of IP networks. NS is a discrete-event simulator targeted at networking
research; it provides substantial support for the simulation of TCP, routing, and
multicast protocols over wired and wireless (local and satellite) networks. NS is
not a polished and finished product, but the result of an ongoing effort of
research and development. NS began as a variant of the REAL network simulator
in 1989 and has evolved substantially over the past few years. REAL is a network
simulator originally intended for studying the dynamic behavior of flow and
congestion control schemes in packet-switched data networks.
NS covers a very large number of applications, protocols, network types,
network elements and traffic models, collectively called "simulated
objects". NS is based on two languages: an object-oriented
simulator written in C++, and an OTcl (an object-oriented extension of Tcl)
interpreter, used to execute user command scripts. NS has a rich library of
network and protocol objects. There are two class hierarchies: the compiled C++
hierarchy and the interpreted OTcl one, with a one-to-one correspondence between
them.
NS-2 can simulate the following:
1. Topology: Wired, wireless
2. Scheduling algorithms: RED, Drop Tail
3. Transport Protocols: TCP, UDP
4. Routing: Static and dynamic routing
5. Application: FTP, HTTP, Telnet, Traffic generators
5.2 USER’S VIEW OF NS-2
From the user's perspective, NS-2 is an OTcl interpreter that takes an
OTcl script as input and produces a trace file as output, as shown in Figure 5.2.

[Block diagram: an OTcl simulation script is passed to the OTcl interpreter,
which, backed by the C++ libraries, produces the simulation results.]

Figure 5.2 Block diagram of NS-2
5.3 NETWORK COMPONENTS
This section explains the NS-2 components, mostly
compound network components. Figure 1.1 shows a partial OTcl class
hierarchy of NS, which will help in understanding the basic network
components.
The root of the hierarchy is the TclObject class, which is the superclass
of all OTcl library objects (schedulers, network components, timers and
other objects, including NAM-related ones). As a child class of TclObject,
the NsObject class is the superclass of all basic network component objects that
handle packets, which may compose compound network objects such as nodes
and links.
[Figure: partial OTcl class hierarchy - TclObject at the root, with NsObject
and other objects below it; NsObject branches into Connector and Classifier,
which in turn cover objects such as SnoopQueue, Queue, Delay, Agent, Trace,
AddrClassifier and McastClassifier.]

The basic network components are further divided into two subclasses,
Connector and Classifier, based on the number of possible output data paths.
The basic network objects that have only one output data path are under the
Connector class, and switching objects that have multiple possible output
data paths are under the Classifier class.
5.4 CLASS TCL

The class Tcl encapsulates the actual instance of the OTcl interpreter and
provides the methods to access and communicate with that interpreter code.
The class provides methods for the following operations:

1. Obtain a reference to the Tcl instance
2. Invoke OTcl procedures through the interpreter
3. Retrieve, or pass back, results to the interpreter
4. Report error situations and exit in a uniform manner
5. Store and lookup "TclObject"s
6. Acquire direct access to the interpreter
Obtain a Reference to the class Tcl instance

A single instance of the class is declared in ~tclcl/Tcl.cc as a static
member variable. The statement required to access this instance is
Tcl& tcl = Tcl::instance();

Invoking OTcl Procedures
There are four different methods to invoke an OTcl command through
the instance tcl. They differ essentially in their calling arguments. Each
function passes a string to the interpreter, which then evaluates the string in a
global context. These methods will return to the caller if the interpreter
returns TCL_OK. On the other hand, if the interpreter returns TCL_ERROR, the
methods will call tkerror{}. The user can overload this procedure to selectively
disregard certain types of errors. When the interpreter invokes a C++ method,
it expects the result back in the private member variable tcl->result.

Error Reporting and Exit

This method provides a uniform way to report errors in the compiled code.
5.5 COMMAND METHODS: DEFINITION AND INVOCATION
For every Tcl Object that is created, ns establishes the instance
procedure, cmd {}, as a hook to executing methods through the compiled
shadow object. The procedure cmd {} invokes the method command () of the
shadow object automatically, passing the arguments to cmd {} as an argument
vector to the command () method. The user can invoke the cmd {} method in
one of two ways, by explicitly invoking the procedure, specifying the
desired operation as the first argument, or implicitly, as if there were an instance
procedure of the same name as the desired operation. Most simulation scripts will
use the latter form.
Consider the distance computation in SRM: it is done by the compiled
object but often used by the interpreted object. It is usually invoked as
$srmObject distance (Agent Address). If there is no instance procedure called
distance, the interpreter will invoke the instance procedure unknown {},
defined in the base class TclObject. The unknown procedure then invokes
$srmObject cmd distance? (Agent Address) to execute the operation
through the compiled object's command() procedure. The user could explicitly
invoke the operation directly. One reason for this might be to overload the
operation by using an instance procedure of the same name.
5.6 NETWORK ANIMATOR
Network animator (NAM) is an animation tool for viewing network
simulation traces and real-world packet traces. It supports topology layout,
packet-level animation and various data inspection tools.
Before starting to use NAM, a trace file needs to be created. This trace
file is usually generated by NS. It contains topology information, e.g. nodes and
links, as well as packet traces. During a simulation, the user can produce
topology configurations, layout information and packet traces using tracing
events in NS. Once the trace file is generated, NAM can be used to animate it.
Upon startup, NAM will read the trace file, create the topology, pop up a window,
do layout if necessary and then pause at time 0. Through its user interface, NAM
provides control over many aspects of the animation.
In Figure 5.6 a screen shot of a NAM window is shown, where the
most important functions are explained. Although the NAM software contains
bugs, as does the NS software, it works fine most of the time and
causes only little trouble. NAM is an excellent first step to check that the
scenario works as expected. NS and NAM can also be used together for
educational purposes and to easily demonstrate different networking issues.
In this thesis NS-2 is used to simulate and test the various approaches
and NAM is used to produce the graphical output.
Figure 5.6 Screen shot of a NAM window explaining the most important functions.
CHAPTER 6
SYSTEM REQUIREMENTS
6.1 HARDWARE SPECIFICATION
Main processor     : Pentium IV processor, 1.13 GHz
Hard disk capacity : 40 GB
Cache memory       : 512 MB
Monitor            : LG Digital Color Monitor
Keyboard           : Standard
CHAPTER 7
CONCLUSION AND ENHANCEMENTS
This work presented a sender- and receiver-combined congestion
control mechanism for wireless networks. The receiver estimates a congestion
window deemed to be appropriate from the measured bandwidth and RTT, and then
advertises the window size (feeds this information back) to the sender. The
sender then adjusts its congestion window according to the advertised window
of the receiver. Through this receiver-assisted method, the sender can increase
the congestion window quickly to the available bandwidth, thus improving the
network utilization. On the other hand, when a timeout happens, the receiver can
feed this information back to the sender quickly to relieve the impact of the
timeout on TCP performance. Compared to other receiver-centric
or mobile-host-centric mechanisms, this mechanism has a simple
implementation with little modification required for the sender and receiver.
Moreover, when only the receiver end protocol is modified without changing the
sender functions, it can still improve the network throughput and fairness
performance.
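The window computation summarized above can be sketched as follows (an illustrative sketch only; the function names, the MSS-based rounding, and the AIMD parameters are assumptions, not the exact formulas of this work): the receiver derives an advertised window from its bandwidth and RTT estimates, and the sender takes the minimum of that advertisement and its own AIMD-updated congestion window.

```python
def receiver_advertised_window(bandwidth_bps, rtt_s, mss_bytes=1460):
    """Receiver side: window ~ bandwidth-delay product, in segments."""
    bdp_bytes = bandwidth_bps / 8.0 * rtt_s
    return max(1, int(bdp_bytes // mss_bytes))

def sender_aimd_update(cwnd, advertised, loss_detected, alpha=1, beta=0.5):
    """Sender side: one AIMD step, capped by the receiver's advertisement."""
    if loss_detected:
        cwnd = max(1, int(cwnd * beta))  # multiplicative decrease
    else:
        cwnd = cwnd + alpha              # additive increase
    return min(cwnd, advertised)

# Example: 10 Mb/s measured bandwidth, 50 ms RTT -> ~42-segment window.
adv = receiver_advertised_window(10e6, 0.05)
cwnd = sender_aimd_update(16, adv, loss_detected=False)  # grows toward adv
```

Because the advertised window already reflects the measured bandwidth-delay product, the sender can ramp its congestion window toward the available capacity without waiting for losses to probe it.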
This system can be further improved in the case of a lossy wireless
network, such as an IEEE 802.11 wireless LAN (WLAN), in multi-hop
scenarios using the medium access control (MAC) layer protocol in the place of
the TCP congestion control. It can be shown how its aggressive behavior can
throttle the spatial reuse and reduce bandwidth efficiency. An adaptive,
layer-2, distributed coordination scheme for 802.11 using explicit MAC
feedback is then proposed to pace the transmissions on adjacent nodes, thereby
assisting the MAC protocol to operate around its saturation state while
minimizing resource contention. Again, the sender can increase the congestion
window quickly to the available bandwidth, thus improving the network
utilization.
CHAPTER 8
APPENDIX
8.1 SCREEN SHOTS
Network Topology Used
Fig 8.1 Network topology for simulation
Fig 8.2 Simulation of the Network
Fig 8.3 Throughput Analysis
Fig 8.4 Data Packet Reception
Fig 8.5 Energy Consumption
CHAPTER 9
REFERENCES
1. C. Casetti et al., “TCP Westwood: Bandwidth estimation for enhanced transport
over wireless links,” in Proc. 7th ACM Annu. Int. Conf. Mobile Comput. Netw.,
Rome, Italy, 2001, pp. 287– 297.
2. D. X. Wei, C. Jin, S. H. Low, and S. Hedge, “FAST TCP: Motivation,
architecture, algorithm and performance,” IEEE/ACM Trans. Netw., vol. 14, no. 6,
pp. 1246–1259, Dec. 2006.
3. H. Hsieh et al., “A receiver-centric transport protocol for mobile hosts with
heterogeneous wireless interfaces,” in Proc. 9th ACM Annu. Int. Conf. Mobile
Comput. Netw., San Diego, CA, 2003, pp. 1–15.
4. J. S. Ahn, P. B. Danzig, Z. Liu, and L. Yan, “Evaluation of TCP Vegas:
Emulation and experiment,” Comput. Commun. Rev., vol. 25, no. 4, pp. 185–195,
Oct. 1995.
5. K. Shi, Y. Shu, O. Yang, and J. Luo, “Receiver-assisted congestion control
to achieve high throughput in lossy wireless networks,” IEEE Trans. Nucl. Sci.,
vol. 57, no. 2, Apr. 2010.
6. L. Xu, K. Harfoush, and I. Rhee, “Binary increase congestion control (BIC) for
fast long- distance networks,” in Proc. 23rd IEEE INFOCOM, Hong Kong, China,
2004, pp. 2514–2524.
7. L. S. Brakmo and L. L. Peterson, “TCP Vegas: End to end congestion avoidance
on a global internet,” IEEE J. Sel. Areas Commun., vol. 13, no. 8, pp. 1465–1480,
Oct. 1995.
8. V. Paxson and M. Allman, “Computing TCP’s retransmission timer,” IETF, RFC
2988, 2000.
9. W. Stevens, “TCP slow start, congestion avoidance, fast retransmit”, IETF,
RFC 2001, Jan. 1997.
10. Y. Hong and O. Yang, “Design of adaptive PI rate controller for best-effort
traffic in the internet based on phase margin,” IEEE Trans. Parallel Distrib. Syst.,
vol. 18, no. 4, pp. 550–561, Apr. 2007.
11. Y. Shu et al., “Mobile-host-centric transport protocol for EAST experiment,”
IEEE Trans. Nucl. Sci., vol. 55, no. 1, pp. 209–216, Feb. 2008.
12. Y. Shu, L. Zhang, W. Zhao, H. Chen, and J. Luo, “P2P-based data system for
the EAST experiment,” IEEE Trans. Nucl. Sci., vol. 53, no. 3, pp. 694–699, Jun.
2006.