Data Communication and Networks
Lecture 7
Congestion in Data Networks
October 16, 2003
Joseph Conron
Computer Science Department
New York University
Principles of Congestion Control
Congestion, informally: "too many sources sending too much data too fast for the network to handle"
Different from flow control!
Manifestations:
  lost packets (buffer overflow at routers)
  long delays (queueing in router buffers)
A top-10 problem!
What Is Congestion?
Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network.
Congestion control aims to keep the number of packets below the level at which performance falls off dramatically.
A data network is a network of queues
Generally 80% utilization is critical
Finite queues mean data may be lost
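The "80% utilization is critical" point can be made concrete with the standard M/M/1 queueing result, under which mean delay grows as 1/(1 - utilization). This is my own illustration, not from the slides:

```python
# Sketch (illustrative, not from the slides): in an M/M/1 queue, mean
# time in the system is service_time / (1 - rho), so delay explodes as
# utilization rho approaches 1 -- which is why roughly 80% utilization
# is often cited as the practical limit for a network of queues.

def mm1_mean_delay(service_time: float, rho: float) -> float:
    """Mean time in system for an M/M/1 queue at utilization rho (0 <= rho < 1)."""
    if not 0 <= rho < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - rho)

if __name__ == "__main__":
    for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
        print(f"rho={rho:.2f}  delay={mm1_mean_delay(1.0, rho):6.1f}x service time")
```

At 50% utilization delay is only 2x the service time; at 80% it is 5x, and at 99% it is 100x, which is the "falls off dramatically" region the slide refers to.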
Queues at a Node
Effects of Congestion
Packets arriving are stored at input buffers
Routing decision made
Packet moves to output buffer
Packets queued for output are transmitted as fast as possible (statistical time division multiplexing)
If packets arrive too fast to be routed, or to be output, buffers will fill
Can use flow control
Can discard packets
Can propagate congestion through the network
Interaction of Queues
Ideal Network Utilization
Causes/costs of congestion: scenario 1
two senders, two receivers
one router, infinite buffers
no retransmission
large delays when congested
maximum achievable throughput
Practical Performance
Ideal assumes infinite buffers and no overhead
Buffers are finite
Overheads occur in exchanging congestion control messages
Effects of Congestion - No Control
Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of lost packet
Causes/costs of congestion: scenario 2
always: λin = λout (goodput)
"perfect" retransmission only when loss: λ'in > λout
retransmission of delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout
"costs" of congestion:
  more work (retransmissions) for a given "goodput"
  unneeded retransmissions: link carries multiple copies of a packet
Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit
Q: what happens as λin and λ'in increase?
Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted!
Mechanisms for Congestion Control
Backpressure
If a node becomes congested, it can slow down or halt the flow of packets from other nodes
May mean that other nodes have to apply control on incoming packet rates
Propagates back to the source
Can restrict to the logical connections generating the most traffic in a VC system
Backpressure is of limited utility:
  Used in connection-oriented networks that allow hop-by-hop congestion control (e.g. X.25)
  Not used in ATM
Choke Packet
Control packet generated at a congested node and sent to the source node, e.g. ICMP Source Quench
Sent from a router or the destination
Source cuts back until it receives no more Source Quench messages
Sent for every discarded packet, or for anticipated congestion
Rather crude mechanism
Implicit Congestion Signaling
Transmission delay may increase with congestion
A packet may be discarded
Source can detect these as implicit indications of congestion
Useful on connectionless (datagram) networks, e.g. IP-based
(TCP includes congestion and flow control - see chapter 17)
Used in frame relay LAPF
Explicit Congestion Signaling
Network alerts end systems of increasing congestion
End systems take steps to reduce the offered load
Backward: congestion avoidance notification in the direction opposite to the packet that triggered it (the node receiving the notification must slow its sending)
Forward: congestion avoidance notification in the same direction as the packet that triggered it (the node receiving the notification must tell its peer to slow down)
Used in ATM by the ABR service
Explicit Signaling Approaches
Binary: a bit in the packet indicates to the receiver that congestion exists - reduce send rate
Credit based: sender is given "credit" for a number of octets or packets it may send before it must stop and wait for additional credit
Rate based: sender may transmit at a rate up to some limit; the rate can be reduced by a control message
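The credit-based approach can be sketched in a few lines. This is a hypothetical illustration (class and method names are my own, not from the lecture): the sender holds a credit counter, spends one credit per packet, and stalls at zero until the receiver grants more.

```python
# Hypothetical sketch of credit-based signaling: the sender may
# transmit one packet per credit and must stop and wait at zero
# credit until the receiver grants additional credit.

class CreditSender:
    def __init__(self, initial_credit: int):
        self.credit = initial_credit
        self.sent = 0

    def try_send(self) -> bool:
        """Send one packet if credit remains; otherwise stall."""
        if self.credit == 0:
            return False          # must wait for additional credit
        self.credit -= 1
        self.sent += 1
        return True

    def grant(self, packets: int) -> None:
        """Receiver returns credit, allowing more packets."""
        self.credit += packets

if __name__ == "__main__":
    s = CreditSender(initial_credit=2)
    print([s.try_send() for _ in range(3)])  # third send stalls: [True, True, False]
    s.grant(1)
    print(s.try_send())                      # True again after new credit
```

The same pattern underlies TCP's sliding window, where the advertised window plays the role of the credit.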
Traffic Shaping
Smooth out traffic flow and reduce cell clumping
Used in rate-based schemes
Token bucket algorithm
Token Bucket for Traffic Shaping
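A minimal token bucket sketch (parameter names are my own choices): tokens accumulate at a fixed rate up to the bucket capacity, and a packet may be sent only if enough tokens are available. This bounds bursts to the bucket depth while enforcing the long-term rate.

```python
# Token bucket traffic shaper sketch: tokens refill continuously at
# `rate` per second, capped at `capacity`; sending a packet of `size`
# spends that many tokens, so bursts are limited to `capacity`.

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # token refill rate (tokens/sec)
        self.capacity = capacity    # bucket depth (max burst size)
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0             # time of last update

    def allow(self, size: float, now: float) -> bool:
        """Refill tokens for elapsed time, then try to spend `size` tokens."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False                # packet must wait (or be marked/dropped)

if __name__ == "__main__":
    tb = TokenBucket(rate=1.0, capacity=3.0)
    print([tb.allow(1.0, now=0.0) for _ in range(4)])  # burst of 3 passes, 4th is shaped
    print(tb.allow(1.0, now=1.0))                      # one token refilled by t=1
```

In ATM the same idea smooths out cell clumping; in a real shaper the non-conforming packet would be queued until enough tokens arrive rather than rejected.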
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss and delay
  approach taken by TCP - this is an IMPLICIT approach
Network-assisted congestion control:
  routers provide feedback to end systems
  single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  explicit rate at which the sender should send
  This is an EXPLICIT approach
Case study: ATM ABR congestion control
ABR (available bit rate): "elastic service"
if the sender's path is "underloaded": sender should use the available bandwidth
if the sender's path is congested: sender throttled to a minimum guaranteed rate
RM (resource management) cells:
  sent by sender, interspersed with data cells
  bits in RM cell set by switches ("network-assisted")
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells returned to sender by receiver, with bits intact
EXPLICIT Case study: ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell, so the sender's send rate becomes the minimum supportable rate on the path
EFCI bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
ABR Cell Rate Feedback Rules
IF CI == 1
  Reduce ACR to a value >= MCR
ELSE IF NI == 0
  Increase ACR to a value <= PCR
IF ACR > ER
  Set ACR = max(ER, MCR)
ACR = Allowed Cell Rate
MCR = Minimum Cell Rate
PCR = Peak Cell Rate
ER = Explicit Rate
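These rules translate directly into code. A sketch follows; the halving and 1.5x growth factors are my own illustrative choices (the real increase/decrease amounts are negotiated connection parameters), but the ordering and clamping match the rules above:

```python
# Sketch of the ABR cell-rate feedback rules: CI forces a decrease
# (floored at MCR), NI == 0 permits an increase (capped at PCR), and
# the explicit rate ER clamps the result. Growth/decay factors here
# are illustrative stand-ins for the negotiated parameters.

def update_acr(acr: float, mcr: float, pcr: float, er: float,
               ci: int, ni: int) -> float:
    """Apply one round of ABR feedback to the allowed cell rate (ACR)."""
    if ci == 1:
        acr = max(mcr, acr * 0.5)       # reduce ACR, but never below MCR
    elif ni == 0:
        acr = min(pcr, acr * 1.5)       # increase ACR, but never above PCR
    if acr > er:
        acr = max(er, mcr)              # clamp to explicit rate, floored at MCR
    return acr

if __name__ == "__main__":
    # congestion indicated: rate halves toward MCR
    print(update_acr(acr=100, mcr=10, pcr=200, er=150, ci=1, ni=0))   # 50
    # no CI, no NI: rate grows toward PCR
    print(update_acr(acr=100, mcr=10, pcr=200, er=150, ci=0, ni=0))   # 150
```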
Implicit Case Study: TCP Congestion Control
end-end control (no network assistance)
transmission rate limited by congestion window size, Congwin, over segments:
w segments, each with MSS bytes, sent in one RTT:
throughput = (w * MSS) / RTT bytes/sec
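A worked instance of the formula above, with illustrative numbers of my choosing:

```python
# Window-limited TCP throughput: w segments of MSS bytes per RTT
# gives throughput = w * MSS / RTT bytes/sec.

def tcp_throughput(w: int, mss_bytes: int, rtt_sec: float) -> float:
    """Throughput in bytes/sec for a window of w MSS-sized segments per RTT."""
    return w * mss_bytes / rtt_sec

if __name__ == "__main__":
    # e.g. a 10-segment window of 1460-byte segments over a 100 ms RTT
    print(tcp_throughput(w=10, mss_bytes=1460, rtt_sec=0.1))  # 146000.0 bytes/sec
```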
TCP congestion control: two "phases"
  slow start
  congestion avoidance
important variables:
  Congwin
  threshold: defines the boundary between the slow start phase and the congestion avoidance phase
"probing" for usable bandwidth:
  ideally: transmit as fast as possible (Congwin as large as possible) without loss
  increase Congwin until loss (congestion)
  loss: decrease Congwin, then begin probing (increasing) again
TCP Slowstart
exponential increase (per RTT) in window size (not so slow!)
loss event: timeout (Tahoe TCP) and/or three duplicate ACKs (Reno TCP)
Slowstart algorithm:
  initialize: Congwin = 1
  for (each segment ACKed)
    Congwin++
  until (loss event OR Congwin > threshold)
[Figure: slow start between Host A and Host B - one segment in the first RTT, then two segments, then four]
TCP Congestion Avoidance
/* slowstart is over */
/* Congwin > threshold */
Until (loss event) {
  every w segments ACKed:
    Congwin++
}
threshold = Congwin/2
Congwin = 1
perform slowstart (1)

(1) TCP Reno skips slowstart (fast recovery) after three duplicate ACKs
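The slow start and congestion avoidance pseudocode can be combined into a per-RTT simulation. This is a Tahoe-style sketch of my own (the loss schedule and names are illustrative): below threshold the window doubles each RTT, above it the window grows by one, and a loss resets the window to 1 and halves the threshold.

```python
# Tahoe-style congestion window evolution, one value per RTT:
# slow start doubles Congwin each RTT until it reaches threshold,
# congestion avoidance then adds 1 per RTT, and a loss event sets
# threshold = Congwin/2 and restarts slow start from Congwin = 1.

def tahoe_trace(threshold: int, loss_rtts: set, rtts: int) -> list:
    """Return Congwin (in segments) at the start of each RTT."""
    congwin, trace = 1, []
    for t in range(rtts):
        trace.append(congwin)
        if t in loss_rtts:                  # loss event (timeout)
            threshold = max(congwin // 2, 1)
            congwin = 1                     # restart slow start
        elif congwin < threshold:
            congwin *= 2                    # slow start: double per RTT
        else:
            congwin += 1                    # congestion avoidance: +1 per RTT
    return trace

if __name__ == "__main__":
    # loss in RTT 6: window climbs 1,2,4,8,... then collapses to 1
    print(tahoe_trace(threshold=8, loss_rtts={6}, rtts=10))
    # -> [1, 2, 4, 8, 9, 10, 11, 1, 2, 4]
```

The trace shows the characteristic sawtooth: exponential growth, linear probing, collapse on loss, repeat.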
TCP congestion avoidance (AIMD):
Additive Increase, Multiplicative Decrease
  increase window by 1 per RTT
  decrease window by a factor of 2 on a loss event
TCP Fairness
Fairness goal: if N TCP sessions share the same bottleneck link, each should get 1/N of the link capacity
[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
Why is TCP fair?
Two competing sessions:
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally
[Figure: Connection 1 throughput vs Connection 2 throughput, both axes from 0 to R; congestion avoidance gives additive increase along slope 1, each loss halves both windows, so the trajectory oscillates ever closer to the equal-bandwidth-share line]
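The convergence argument in the figure can be simulated directly. A sketch of my own (initial rates and round count are illustrative): two AIMD flows share a bottleneck of capacity R; each adds 1 per round, and when their sum exceeds R both halve. Additive increase preserves the gap between the flows while multiplicative decrease halves it, so an unequal start converges toward the equal share.

```python
# Two AIMD flows sharing one bottleneck: additive increase keeps the
# difference x1 - x2 constant, while each multiplicative decrease
# halves it, so the flows converge toward the equal share capacity/2.

def aimd_two_flows(x1: float, x2: float, capacity: float, rounds: int):
    """Iterate additive-increase/multiplicative-decrease for two flows."""
    for _ in range(rounds):
        if x1 + x2 > capacity:        # bottleneck overloaded: both see loss
            x1, x2 = x1 / 2, x2 / 2   # multiplicative decrease
        else:
            x1, x2 = x1 + 1, x2 + 1   # additive increase
    return x1, x2

if __name__ == "__main__":
    # start very unequal (60 vs 10); after many rounds the gap is tiny
    x1, x2 = aimd_two_flows(x1=60.0, x2=10.0, capacity=100.0, rounds=200)
    print(round(x1, 2), round(x2, 2))
```

With more than two flows the same dynamics drive each toward the 1/N share, which is the fairness goal stated above.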