1
Week 11: TCP Congestion Control
2
Principles of Congestion Control
Congestion: informally, "too many sources sending too much data too fast for the network to handle"
different from flow control
manifestations
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem
3
Causes/costs of congestion: scenario 1
two senders, two receivers
one router, infinite buffers
no retransmission
large delays when congested
maximum achievable throughput
[Figure: Host A and Host B send λin original data into a router with unlimited shared output link buffers; λout is the throughput]
4
Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of lost packet
[Figure: Host A sends λin original data plus λ'in retransmitted data into a router with finite shared output link buffers; λout is the goodput delivered to Host B]
5
Causes/costs of congestion: scenario 2
always: λin = λout (goodput)
"perfect" retransmission only when loss: λ'in > λout
retransmission of delayed (not lost) packet makes λ'in larger (than the perfect case) for the same λout
"costs" of congestion:
more work (retransmissions) for a given "goodput"
unneeded retransmissions: link carries multiple copies of a packet
[Graphs (a), (b), (c): λout vs. λin; throughput is capped at R/2, and with retransmissions the goodput falls below R/2 (toward R/3 and R/4 in the illustrated cases)]
6
Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit
Q: what happens as λin and λ'in increase?
[Figure: Host A sends λin original data plus λ'in retransmitted data over multihop paths with finite shared output link buffers toward Host B; λout is the goodput]
7
Example
What happens when each demand peaks at unity rate? Throughput = 1.52 (How?) With twice the unity rate, T = 1.07
8
Max-min fair allocation
Given a network and a set of sessions, we would like to find a maximal flow that is fair
We will see different definitions of max-min fairness and will learn a flow control algorithm
The tutorial will give an understanding of what max-min fairness is
9
How to define fairness?
Any session is entitled to as much network use as is any other
Allocating the same share to all
10
Max-Min Flow Control Rule
The rule: maximize the network use allocated to the sessions with the minimum allocation
An alternative definition: maximize the allocation of each session i under the constraint that an increase in i's allocation doesn't cause a decrease in some other session's allocation whose rate is the same as or smaller than i's
11
Example
The maximal fair flow division is to give sessions 0, 1, 2 a flow rate of 1/3 each and session 3 a flow rate of 2/3
C=1 C=1
Session 1
Session 2 Session 3
Session 0
12
Notation
G = (N, A) – directed network graph (N is the set of vertices and A is the set of edges)
Ca – the capacity of link a
Fa – the flow on link a
P – the set of sessions; rp – the rate of session p
We assume a fixed single-path routing method
13
Definitions
We have the following constraints on the vector r = {rp | p ∈ P}:
  rp ≥ 0 for all p ∈ P
  Fa = Σ (over all sessions p crossing link a) rp
  Fa ≤ Ca for all a ∈ A
A vector r satisfying these constraints is said to be feasible
14
Definitions
A vector of rates r is said to be max-min fair if it is feasible and, for each p ∈ P, rp cannot be increased while maintaining feasibility without decreasing rp′ for some session p′ for which rp′ ≤ rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r, we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ≥ rp′ for all sessions p′ crossing link a
[Figure: five sessions (1–5) over links a, b, c, d; resulting rates are r1 = 2/3, r2 = r3 = r5 = 1/3, r4 = 1]
All link capacities are 1. Bottlenecks for sessions 1, 2, 3, 4, 5 respectively are c, a, a, d, a. Note that c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck
Theorem: A feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all the sessions to a state in which they have a bottleneck link; by the theorem, the result is then the max-min fair flow
We start with an all-zero rate vector and increase the rates on all paths together until Fa = Ca for one or more links a
At this point, each session using a saturated link has the same rate as every other session using that link. Thus, these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors
At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1, Fa0 = 0, rp0 = 0, P1 = P, A1 = A
1. For all a ∈ Ak: nak = number of sessions p ∈ Pk crossing link a
2. Δr = min over a ∈ Ak of (Ca − Fak−1)/nak (find increment size)
3. For all p ∈ Pk: rpk = rpk−1 + Δr (increment); for other p: rpk = rpk−1
4. Fak = Σ (over p crossing a) rpk (update flows)
5. Ak+1 = the set of unsaturated links
6. Pk+1 = all p such that p crosses only links in Ak+1
7. k = k + 1
8. If Pk is empty then stop, else go to 1
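The steps above can be sketched in Python. The two-link topology in the usage example (session 0 crossing both unit-capacity links, sessions 1–2 on the first, session 3 on the second) is an assumed reading of the earlier example figure, chosen because it reproduces its rates of 1/3 and 2/3:

```python
def max_min_fair(capacity, paths):
    """capacity: {link: C_a}; paths: {session: set of links crossed}.
    Returns the max-min fair rate vector {session: r_p} by water-filling."""
    rates = {p: 0.0 for p in paths}
    flow = {a: 0.0 for a in capacity}
    active = set(paths)                  # P^k: sessions not yet bottlenecked
    unsaturated = set(capacity)          # A^k
    while active:
        # n_a^k: number of active sessions crossing each unsaturated link
        n = {a: sum(1 for p in active if a in paths[p]) for a in unsaturated}
        # step 2: largest equal increment before some link saturates
        dr = min((capacity[a] - flow[a]) / n[a] for a in unsaturated if n[a] > 0)
        for p in active:                 # step 3: increment active sessions
            rates[p] += dr
        for a in capacity:               # step 4: recompute link flows
            flow[a] = sum(rates[p] for p in paths if a in paths[p])
        unsaturated = {a for a in unsaturated if capacity[a] - flow[a] > 1e-9}
        active = {p for p in active if paths[p] <= unsaturated}  # step 6
    return rates

rates = max_min_fair(
    {'l1': 1.0, 'l2': 1.0},
    {0: {'l1', 'l2'}, 1: {'l1'}, 2: {'l1'}, 3: {'l2'}})
# sessions 0, 1, 2 each get 1/3; session 3 gets 2/3
```

In the first pass all four sessions rise together until l1 saturates at 1/3 each; only session 3 remains active and fills l2 up to 2/3, matching the example slide.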
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3 (because of link a), and link a is saturated
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is now saturated
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated
End
[Figure: the same five sessions (1–5) over links a, b, c, d; final rates r1 = 2/3, r2 = r3 = r5 = 1/3, r4 = 1]
All link capacities are 1
21
Example revisited
Max-min fair vector: if Tij = ∞, r = (½, ½, ½, ½); T = 2 > 1.52
What if the demands are T13 = T31 = ¼, T24 = ½, T42 = ∞? Then r = (¼, ½, ¼, ¾)
22
Causes/costs of congestion: scenario 3
Another "cost" of congestion:
when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted
[Figure: Host A to Host B; λout]
23
Approaches towards congestion control
End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate the sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR: available bit rate, "elastic service"
if sender's path is "underloaded":
sender should use available bandwidth
if sender's path is congested:
sender throttled to minimum guaranteed rate
RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches ("network-assisted")
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell
sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
[Figure: congestion window sawtooth over time, oscillating through 8, 16, and 24 KBytes]
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSS*SMSS/cwnd (*)
This adjustment is executed on every incoming non-duplicate ACK
Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
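A quick numeric check of equation (*), with illustrative values: applying the per-ACK adjustment over the cwnd/SMSS ACKs that arrive in one round trip grows cwnd by a little less than one full-sized segment, as the approximation claims.

```python
# Numeric check of cwnd += SMSS*SMSS/cwnd: summed over one RTT's
# worth of ACKs, the increments add up to just under one SMSS.
SMSS = 1460.0                      # illustrative sender MSS, in bytes
cwnd = 10 * SMSS                   # congestion window at start of RTT
acks_in_rtt = int(cwnd // SMSS)    # one ACK per full-sized segment

for _ in range(acks_in_rtt):
    cwnd += SMSS * SMSS / cwnd     # the (*) adjustment, once per ACK

growth = cwnd - 10 * SMSS
print(growth)                      # slightly below one SMSS
```

The shortfall (about 1394 bytes here instead of 1460) comes from cwnd growing during the round trip, which shrinks each subsequent increment.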
29
TCP Slow Start
When connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more)
When connection begins, increase rate exponentially until first loss event: double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A–Host B time diagram: one segment sent in the first RTT, two segments in the second, four in the third]
31
Refinement
After 3 dup ACKs:
CongWin is cut in half; Threshold is set to CongWin
window then grows linearly
But after timeout event:
Threshold set to CongWin/2 and CongWin instead set to 1 MSS
window then grows exponentially to a threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout
Implementation: Variable Threshold
At loss event, Threshold is set to 1/2 of CongWin just before the loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially
When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS
34
TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS*(MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: Timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
Action: Increment duplicate ACK count for segment being ACKed
Commentary: CongWin and Threshold not changed
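The per-event actions in the table can be condensed into an event-driven sketch; the class and variable names below are illustrative, not taken from any real TCP stack.

```python
MSS = 1000  # illustrative segment size in bytes

class TcpSender:
    """Sketch of the sender actions from the table above."""
    def __init__(self):
        self.cong_win = MSS
        self.threshold = 64 * MSS
        self.state = "SS"      # "SS" = slow start, "CA" = congestion avoidance

    def on_new_ack(self):      # ACK for previously unacked data
        if self.state == "SS":
            self.cong_win += MSS                     # doubles CongWin each RTT
            if self.cong_win > self.threshold:
                self.state = "CA"
        else:                                        # CA: ~1 MSS per RTT
            self.cong_win += MSS * MSS / self.cong_win

    def on_triple_dup_ack(self):  # fast recovery: multiplicative decrease
        self.threshold = self.cong_win / 2
        self.cong_win = self.threshold
        self.state = "CA"

    def on_timeout(self):         # "more alarming": back to slow start
        self.threshold = self.cong_win / 2
        self.cong_win = MSS
        self.state = "SS"
```

Feeding it ten new ACKs, then a triple-dup-ACK, then a timeout walks it through all three transitions in the table.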
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: throughput = 1.22·MSS / (RTT·√p)
→ requires p = 2·10⁻¹⁰
New versions of TCP needed for high speed
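The slide's two numbers can be checked directly from the formula throughput = 1.22·MSS/(RTT·√p):

```python
# Check: 1500-byte segments, 100 ms RTT, 10 Gbps target.
MSS_bits = 1500 * 8              # segment size in bits
RTT = 0.100                      # seconds
target = 10e9                    # bits/sec

W = target * RTT / MSS_bits      # in-flight segments needed
# invert throughput = 1.22 * MSS / (RTT * sqrt(p)) for the loss rate p
p = (1.22 * MSS_bits / (RTT * target)) ** 2

print(round(W))                  # ~83333 segments
print(p)                         # ~2e-10
```

A loss rate of one segment in five billion is far below what real paths deliver, which is the argument for new high-speed TCP variants.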
36
Macroscopic TCP model
Deterministic packet losses
1/p packets transmitted in a cycle
loss/success
37
TCP Model Contd
Equate the trapezoid area (3/8)W² under the sawtooth to 1/p: W = √(8/(3p)), so the average rate (3/4)W·MSS/RTT ≈ 1.22·MSS/(RTT·√p)
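The cycle argument behind this constant can be written out in full. Between losses the window climbs linearly from W/2 to W over W/2 round trips, so one cycle carries about the trapezoid area of (3/8)W² packets, and with one loss per cycle:

```latex
\[
\underbrace{\tfrac{3}{8}W^2}_{\text{packets per cycle}} = \frac{1}{p}
\;\Rightarrow\; W = \sqrt{\frac{8}{3p}},
\qquad
\text{rate} \approx \frac{\tfrac{3}{4}W \cdot MSS}{RTT}
= \sqrt{\tfrac{3}{2}}\,\frac{MSS}{RTT\sqrt{p}}
\approx \frac{1.22\,MSS}{RTT\sqrt{p}}
\]
```

The 1.22 is exactly (3/4)·√(8/3) = √(3/2), the average window over a cycle times the window implied by the loss rate.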
38
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connections 1 and 2 share a bottleneck router of capacity R]
TCP Fairness
39
Why is TCP fair?
Two competing sessions:
Additive increase gives slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
[Figure: phase plot of connection 1 vs. connection 2 throughput (each axis up to R); congestion avoidance's additive increase moves along a 45° line, each loss halves both throughputs, and the trajectory converges toward the equal-bandwidth-share line]
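The convergence argument can be simulated with a toy AIMD model (the capacity, starting rates, and step size are illustrative):

```python
# Two AIMD flows sharing a link of capacity R: additive increase
# preserves their rate gap, multiplicative decrease halves it, so
# the gap shrinks toward zero and the shares equalize.
R = 100.0
x1, x2 = 80.0, 10.0              # deliberately unequal starting rates

for _ in range(200):
    if x1 + x2 > R:              # overload: both flows see loss (MD)
        x1, x2 = x1 / 2, x2 / 2
    else:                        # underload: both add one unit (AI)
        x1, x2 = x1 + 1, x2 + 1

print(abs(x1 - x2))              # gap has collapsed from 70 toward 0
```

Each sawtooth cycle leaves the gap halved, which is why the phase-plot trajectory spirals onto the equal-share line.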
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
Instead use UDP: pump audio/video at a constant rate, tolerate packet loss
Research area: TCP friendly, DCCP
Fairness and parallel TCP connections
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections
new app asks for 1 TCP, gets rate R/10
new app asks for 10 TCPs, gets ~R/2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies a single class of traffic
Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility for congestion control completely to the edges (e.g., TCP)
Does not distinguish between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
TCP detects congestion from loss
• forces the network to have long standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
• traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior:
routers see actual queue occupancy (distinguishing queueing delay from propagation delay)
routers can detect transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect the ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving when the queue is full causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before the queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD):
• if qlen > drop_level, drop each new packet with fixed probability p
• does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain a running average of the queue length
If avg < minth, do nothing: low queuing, send packets through
If avg > maxth, drop the packet: protection from misbehaving sources
Else, mark (drop) the packet with a probability proportional to the average queue length: notify sources of incipient congestion
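The three cases above can be sketched directly; the threshold and weight values below are illustrative, not RED's recommended defaults.

```python
# Sketch of the RED drop decision. avg is an exponentially weighted
# moving average (EWMA) of the instantaneous queue length.
import random

min_th, max_th, max_p = 5, 15, 0.10   # thresholds in packets (illustrative)
w = 0.002                              # EWMA weight

def red_update(avg, qlen):
    """Fold the current queue length into the running average."""
    return (1 - w) * avg + w * qlen

def red_drop(avg):
    """Decide whether to drop/mark an arriving packet."""
    if avg < min_th:
        return False                   # low queuing: admit
    if avg >= max_th:
        return True                    # protect against persistent overload
    # in between: drop probability rises linearly from 0 to max_p
    p = max_p * (avg - min_th) / (max_th - min_th)
    return random.random() < p
```

Because drops are randomized rather than tail-synchronized, competing TCP flows back off at different times instead of in lockstep.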
51
RED Operation
[Figure: drop probability P(drop) vs. average queue length: 0 below minth, rising linearly to maxP at maxth, then jumping to 1.0]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: 1 Mbps IP phone and FTP share a 1.5 Mbps link
bursts of FTP can congest the router and cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)?
policing: force source adherence to bandwidth allocations
marking and policing at the network edge, similar to the ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling And Policing Mechanisms
scheduling: choose the next packet to send on the link
FIFO (first in first out) scheduling: send in order of arrival to the queue; real-world example?
discard policy: if a packet arrives to a full queue, which to discard?
• tail drop: drop the arriving packet
• priority: drop/remove on a priority basis
• random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit the highest-priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g. IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing:
generalized Round Robin
each class gets a weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty, the value of DeficitCounter is set to zero
66
DWRR Example
[Figure: three queues holding packets of sizes 600, 400, 300 / 400, 400, 300 / 600, 300, 400 bytes]
Queue 1: 50% BW, quantum[1] = 1000
Queue 2: 25% BW, quantum[2] = 500
Queue 3: 25% BW, quantum[3] = 500
Modified Deficit Round Robin gives priority to one class, say VoIP
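The DWRR service loop can be sketched on the example's quanta and packet sizes. This is a simplified model: all packets are enqueued up front and the scheduler makes a fixed number of rounds, with no arrivals during service.

```python
from collections import deque

def dwrr(queues, quanta, rounds):
    """queues: deques of packet sizes (bytes). Returns the transmit
    order as (queue_index, packet_size) pairs."""
    deficit = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                continue                      # scheduler skips empty queues
            deficit[i] += quanta[i]           # add this queue's quantum
            while q and q[0] <= deficit[i]:   # transmit while credit suffices
                pkt = q.popleft()
                deficit[i] -= pkt
                sent.append((i, pkt))
            if not q:
                deficit[i] = 0                # empty queue forfeits its credit
    return sent

order = dwrr([deque([600, 400, 300]),
              deque([400, 400, 300]),
              deque([600, 300, 400])],
             quanta=[1000, 500, 500], rounds=3)
# round 1: queue 1 sends 600 and 400; queue 2 sends 400; queue 3's
# 600-byte head packet exceeds its 500-byte deficit and must wait
```

Note how queue 3's unspent 500-byte credit carries over, letting it send both its 600- and 300-byte packets in round 2.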
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly-used criteria:
(Long term) Average Rate: how many packets can be sent per unit time (in the long run)
crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 1500 ppm average with 6000 pkts per min (ppm) peak rate
(Max) Burst Size: max number of packets sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
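A token-bucket sketch showing the r·t + b bound; tokens are counted per packet here for simplicity, and the rate, bucket size, and arrival pattern are illustrative.

```python
def token_bucket(arrivals, r, b):
    """arrivals: packets arriving at each unit time step. Returns the
    number admitted at each step (one token buys one packet)."""
    tokens = b                      # bucket starts full
    admitted = []
    for n in arrivals:
        ok = min(n, int(tokens))    # admit while tokens remain
        tokens -= ok
        tokens = min(b, tokens + r) # refill at rate r, capped at b
        admitted.append(ok)
    return admitted

out = token_bucket([10, 0, 0, 10, 10], r=2, b=5)
# an initial burst of up to b=5 is admitted; after that, sustained
# input is clipped to roughly r=2 per step
```

Over the 5-step run, the total admitted can never exceed r·t + b = 2·5 + 5 = 15, matching the bound on the slide.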
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee
[Figure: arriving traffic passes through a token bucket (rate r, bucket size b) into WFQ with per-flow rate R]
Dmax = b/R
2
Principles of Congestion Control
Congestion informally ldquotoo many sources sending too
much data too fast for network to handlerdquo
different from flow control
manifestations
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem
3
Causescosts of congestion scenario 1
two senders two receivers
one router infinite buffers
no retransmission
large delays when congested
maximum achievable throughput
unlimited shared output link buffers
Host Ain original data
Host B
out
4
Causescosts of congestion scenario 2
one router finite buffers
sender retransmission of lost packet
finite shared output link buffers
Host A in original data
Host B
out
in original data plus retransmitted data
5
Causescosts of congestion scenario 2 always (goodput)
ldquoperfectrdquo retransmission only when loss
retransmission of delayed (not lost) packet makes
larger (than perfect case) for same
in
out
=
in
out
gt
in
out
ldquocostsrdquo of congestion
more work (retrans) for given ldquogoodputrdquo
unneeded retransmissions link carries multiple copies of pkt
R2
R2in
ou
t
b
R2
R2in
ou
t
a
R2
R2in
ou
t
c
R4
R3
6
Causescosts of congestion scenario 3 four senders
multihop paths
timeoutretransmit
inQ what happens as
and increase in
finite shared output link buffers
Host Ain original data
Host B
out
in original data plus retransmitted data
7
Example
What happens when each demand peaks at unity rateThroughput = 152 (How) twice the unity rate T = 107
8
Max-min fair allocation
Given a network and a set of sessions we would like to find a maximal flow that it is fair
We will see different definitions for max-min fairness and will learn a flow control algorithm
The tutorial will give understanding what is max-min fairness
9
How define fairness
Any session is entitled to as much network use as is any other
Allocating the same share to all
10
Max-Min Flow Control Rule
The rule is maximizing the network use allocated to the sessions with the minimum allocation
An alternative definition is to maximize the allocation of each session i under constraint that an increase in irsquos allocation doesnrsquot cause a decrease in some other session allocation with the same or smaller rate than i
11
Example
Maximal fair flow division will be to give for the sessions 012 a flow rate of 13 and for the session 3 a flow rate of 23
C=1 C=1
Session 1
Session 2 Session 3
Session 0
12
Notation
G=(NA) - Directed network graph (N is set of vertexes and A is set of edges)
Ca ndash the capacity of a link a
Fa ndash the flow on a link a
P ndash a set of the sessions rp ndash the rate of a session p
We assume a fixed single-path routing method
13
Definitions
We have following constraints on the vector r= rp | p Є P
A vector r satisfying these constraints is said to be feasible
alink crossing
p sessions allpa r F
allfor
allfor 0
AaCF
Ppr
aa
p
14
Definitions
A vector of rates r is said to be max-min
fair if is a feasible and for each p Є P rp
can not be increased while maintaining
feasibility without decreasing rprsquo for
some session prsquo for which rprsquo le rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ge rprsquo for all sessions prsquo crossing link a
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1 Bottlenecks for 12345 respectively are caadaNote c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck Theorem A feasible rate vector r is
max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm Bring all the sessions to the state that they
have a bottleneck link and then according to theorem it will be the maximal fair flow
We start with all-zero rate vector and to increase rates on all paths together until Fa = Ca for one or more links a
At this point each session using a saturated link has the same rate as every other session using this link Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step always equally incrementing all sessions not passing through any saturated link until all session pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate VectorsInit k=1 Fa
0=0 rp0=0 P1=P and A1=A
1 For all aA nak= num of sessions pPk
crossing link a2 Δr=minaA
k(Ca-Fak-1)na
k (find inc size)3 For all p Pk rp
k=rpk-1+ Δr (increment)
for other p rpk=rp
k-1
4 Fak=Σp crossing arp
k (Update flow)5 Ak+1= The set of unsaturated links6 Pk+1=all prsquos such that p cross only links in
Ak+1
7 k=k+18 If Pk is empty then stop else goto 1
20
Example of Algorithm Running
Step 1 All sessions get a rate of 13 because of a and the link a is saturated
Step 2 Sessions 1 and 4 get an additional rate increment of 13 for a total of 23 Link c is saturated now
Step 3 Session 4 gets an additional rate increment of 13 for a total of 1 Link d is saturated
End
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1
21
Example revisited
Max-min fair vector if Tij = infin r = (frac12 frac12 frac12 frac12 ) T = 2 gt 152
What if the demands T13
and T31 = frac14
T24 = frac12 T42 = infin r = (frac14 frac12 frac14 frac34)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when packet dropped any ldquoupstream transmission capacity used for that packet was wasted
Host A
Host B
o
u
t
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA DECbit TCPIP ECN ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: Results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS*(MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS

Event: Timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
Action: Increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = (1.22 · MSS) / (RTT · √p)
→ to sustain 10 Gbps, need loss rate p = 2·10^−10
New versions of TCP for high-speed needed
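Plugging the slide's numbers into the macroscopic relation T ≈ 1.22·MSS/(RTT·√p) reproduces both figures (a back-of-the-envelope check, with MSS taken in bits):

```python
# Check the slide's numbers against T = 1.22 * MSS / (RTT * sqrt(p)).
MSS = 1500 * 8        # segment size in bits
RTT = 0.100           # seconds
target = 10e9         # desired throughput: 10 Gbps

W = target * RTT / MSS                    # required in-flight segments
p = (1.22 * MSS / (RTT * target)) ** 2    # loss rate that sustains the target
# W is about 83,333 segments; p is about 2e-10
```

The required loss rate is so tiny (one loss per ~5 billion segments) that standard Reno cannot realistically sustain 10 Gbps, which is the slide's point.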
36
Macroscopic TCP model
Deterministic packet losses: 1/p packets transmitted in a cycle (each cycle ends with one loss after 1/p successes)
37
TCP Model Cont'd
Equate the trapezoid area (3/8)W² under the sawtooth to 1/p:
(3/8)W² = 1/p  ⇒  W = √(8/(3p))
Average throughput = (3/4)·W·MSS/RTT = C·MSS/(RTT·√p), where C = √(3/2) ≈ 1.22
38
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
39
Why is TCP fair?
Two competing sessions: additive increase gives a slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally
[Figure: phase plot of connection 1 throughput vs. connection 2 throughput, each axis bounded by R; congestion-avoidance additive increase moves along a 45° line, each loss halves the window (decrease by factor of 2), so the trajectory converges toward the equal-bandwidth-share line]
40
Fairness (more)
Fairness and UDP:
Multimedia apps often do not use TCP: do not want rate throttled by congestion control
Instead use UDP: pump audio/video at constant rate, tolerate packet loss
Research area: TCP-friendly congestion control, DCCP
Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections; new app asks for 1 TCP, gets rate R/10; new app asks for 10 TCPs, gets ~R/2
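The arithmetic of the example can be sketched as follows (assuming the link is split equally among all open connections, so ten new connections out of nineteen get just over half):

```python
def app_share(existing_conns, new_conns, R=1.0):
    """Bandwidth the new app receives if rate R is split equally per connection."""
    total = existing_conns + new_conns
    return new_conns * (R / total)

one_tcp = app_share(9, 1)    # 1 of 10 connections -> R/10
ten_tcp = app_share(9, 10)   # 10 of 19 connections -> just over R/2
```

The "R/2" on the slide is the usual rounding of 10/19; the point is that per-connection fairness rewards opening more connections.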
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies single class of traffic
Drop-tail: arriving packets get dropped when queue is full, regardless of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (e.g., TCP)
Does not separate between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
TCP detects congestion from loss: forces the network to have long standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily; allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior
Routers see actual queue occupancy (can distinguish queueing delay from propagation delay)
Routers can decide on transient congestion, based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving when the queue is full causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem, but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD):
If qlen > drop_level, drop each new packet with fixed probability p
Does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < min_th, do nothing: low queuing, send packets through
If avg > max_th, drop packet: protection from misbehaving sources
Else, mark the packet with probability proportional to the queue length: notify sources of incipient congestion
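A minimal sketch of these three cases (the parameter values are assumptions; real RED also applies a count-based correction to the marking probability, omitted here):

```python
import random

class Red:
    """Minimal RED sketch: EWMA queue average, linear marking between thresholds."""
    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, w=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
        self.avg = 0.0

    def on_arrival(self, qlen):
        """Return True if the arriving packet should be dropped/marked."""
        self.avg = (1 - self.w) * self.avg + self.w * qlen  # running average
        if self.avg < self.min_th:
            return False          # low queuing: let packets through
        if self.avg >= self.max_th:
            return True           # protect against misbehaving sources
        frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < frac * self.max_p   # probabilistic early mark
```

Because decisions use the EWMA average rather than the instantaneous queue, short bursts pass through unmarked.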
51
RED Operation
[Figure: P(drop) vs. average queue length — 0 below min_th, rising linearly to max_P at max_th, then jumping to 1.0]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: 1 Mbps IP phone and FTP share a 1.5 Mbps link; bursts of FTP can congest the router and cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
what if applications misbehave (audio sends at higher than declared rate)? Policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: Call Admission: flow declares its needs; network may block call (e.g., busy signal) if it cannot meet the needs
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this ...
58
Scheduling and Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue; real-world example?
discard policy: if a packet arrives to a full queue, who to discard?
Tail drop: drop arriving packet
priority: drop/remove on priority basis
random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest-priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, then the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, then the variable DeficitCounter is reduced by the number of bytes in the packet, and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
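The service loop described above can be sketched as one DWRR round (queue contents and quanta are supplied by the caller; the example packet lists follow the next slide, though their mapping to queues is an assumption):

```python
from collections import deque

def dwrr_round(queues, quanta, deficits):
    """Serve each non-empty queue once; return (queue index, pkt size) transmitted."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            continue                     # skip empty queues this round
        deficits[i] += quanta[i]         # add the queue's quantum of credits
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()            # head packet fits within the deficit
            deficits[i] -= pkt
            sent.append((i, pkt))
        if not q:
            deficits[i] = 0              # an emptied queue loses leftover credits
    return sent

# Round 1 with quanta 1000 / 500 / 500 (bytes):
queues = [deque([600, 400, 300]), deque([400, 400, 300]), deque([600, 300, 400])]
deficits = [0, 0, 0]
first_round = dwrr_round(queues, [1000, 500, 500], deficits)
```

In the first round, queue 1 sends 600 and 400, queue 2 sends 400, and queue 3 sends nothing (its 600-byte head exceeds the 500-byte deficit), saving its credits for the next round.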
66
DWRR Example
[Figure: three queues holding packets of sizes 600, 400, 300 / 400, 400, 300 / 600, 300, 400 bytes]
Queue 1: 50% BW, quantum[1] = 1000
Queue 2: 25% BW, quantum[2] = 500
Queue 3: 25% BW, quantum[3] = 500
Modified Deficit Round Robin gives priority to one class, say VoIP
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly-used criteria:
(Long-term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 1500 ppm average rate, 6000 pkts per min (ppm) peak rate
(Max.) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over interval of length t, the number of packets admitted is less than or equal to (r·t + b)
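A token-bucket policer sketch (assuming, for simplicity, that admitting one packet costs one token):

```python
class TokenBucket:
    """Policer with bucket size b and token rate r; one token per packet."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b          # bucket starts full
        self.last = 0.0

    def admit(self, now):
        # refill tokens accrued since the last call, capped at bucket size b
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1     # spend one token on the admitted packet
            return True
        return False             # non-conforming: drop or queue the packet

# Over any interval of length t, admissions <= r*t + b:
# an initial burst of b packets, then at most r packets/sec.
```

For example, with r = 2 and b = 3, a burst of five packets at t = 0 admits only the first three; by t = 1 two fresh tokens have accrued.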
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee
[Figure: arriving traffic passes through a token bucket (token rate r, bucket size b) into WFQ with per-flow rate R; maximum delay D_max = b/R]
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
8
Max-min fair allocation
Given a network and a set of sessions, we would like to find a maximal flow that is fair
We will see different definitions for max-min fairness and will learn a flow control algorithm
The tutorial will give an understanding of what max-min fairness is
9
How to define fairness?
Any session is entitled to as much network use as is any other
Allocating the same share to all
10
Max-Min Flow Control Rule
The rule is: maximize the network use allocated to the sessions with the minimum allocation
An alternative definition: maximize the allocation of each session i under the constraint that an increase in i's allocation doesn't cause a decrease in the allocation of some other session with the same or smaller rate than i
11
Example
The max-min fair flow division is to give sessions 0, 1, 2 a flow rate of 1/3 each, and session 3 a flow rate of 2/3
[Figure: two links in tandem, each of capacity C = 1, shared by sessions 0, 1, 2, 3]
12
Notation
G = (N, A) — directed network graph (N is the set of vertices and A is the set of edges/links)
C_a — the capacity of link a
F_a — the flow on link a
P — the set of sessions; r_p — the rate of session p
We assume a fixed, single-path routing method
13
Definitions
We have the following constraints on the vector r = {r_p | p ∈ P}:
r_p ≥ 0 for all p ∈ P
F_a = Σ_{p crossing a} r_p for all a ∈ A
F_a ≤ C_a for all a ∈ A
A vector r satisfying these constraints is said to be feasible
14
Definitions
A vector of rates r is said to be max-min fair if it is feasible and, for each p ∈ P, r_p cannot be increased while maintaining feasibility without decreasing r_p′ for some session p′ for which r_p′ ≤ r_p
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r, we say that a is a bottleneck link with respect to r for a session p crossing a if F_a = C_a and r_p ≥ r_p′ for all sessions p′ crossing link a
[Figure: five sessions routed over links a, b, c, d, all of capacity 1. Bottlenecks for sessions 1, 2, 3, 4, 5 are c, a, a, d, a respectively. Note: c is not a bottleneck for 5, and b is not a bottleneck for 1]
16
Max-Min Fairness: Definition Using Bottleneck
Theorem: A feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all the sessions to the state where they have a bottleneck link; then, according to the theorem, the result is the max-min fair flow
We start with an all-zero rate vector and increase the rates on all paths together until F_a = C_a for one or more links a
At this point, each session using a saturated link has the same rate as every other session using that link. Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors
At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1, F_a^0 = 0, r_p^0 = 0, P^1 = P, A^1 = A
1. For all a ∈ A^k: n_a^k = number of sessions p ∈ P^k crossing link a
2. Δr = min_{a ∈ A^k} (C_a − F_a^{k−1}) / n_a^k   (find increment size)
3. For all p ∈ P^k: r_p^k = r_p^{k−1} + Δr   (increment); for other p: r_p^k = r_p^{k−1}
4. F_a^k = Σ_{p crossing a} r_p^k   (update flow)
5. A^{k+1} = the set of unsaturated links
6. P^{k+1} = all p such that p crosses only links in A^{k+1}
7. k = k + 1
8. If P^k is empty, stop; else go to 1
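The progressive-filling steps above can be sketched directly (sessions are given as lists of hypothetical link ids; the saturation tolerance is an implementation detail):

```python
def max_min_fair(sessions, capacity):
    """Progressive filling: equally raise all unblocked session rates until
    every session crosses at least one saturated link."""
    rates = [0.0] * len(sessions)
    flow = {a: 0.0 for a in capacity}
    active = set(range(len(sessions)))
    while active:
        # step 1: number of still-active sessions crossing each link
        n = {a: sum(1 for p in active if a in sessions[p]) for a in capacity}
        # step 2: largest equal increment that keeps every link within capacity
        dr = min((capacity[a] - flow[a]) / n[a] for a in capacity if n[a] > 0)
        # step 3: increment active sessions; step 4: recompute link flows
        for p in active:
            rates[p] += dr
        for a in capacity:
            flow[a] = sum(rates[p] for p in range(len(sessions)) if a in sessions[p])
        # steps 5-6: drop sessions that now cross a saturated link
        saturated = {a for a in capacity if flow[a] >= capacity[a] - 1e-12}
        active = {p for p in active if not (set(sessions[p]) & saturated)}
    return rates

# Two tandem links of capacity 1: sessions 0-2 share link1, session 2 also
# crosses link2, which it shares with session 3 (an assumed reading of the
# earlier two-link example).
rates = max_min_fair([["link1"], ["link1"], ["link1", "link2"], ["link2"]],
                     {"link1": 1.0, "link2": 1.0})
# rates is approximately [1/3, 1/3, 1/3, 2/3]
```

On this topology the first round saturates link1 at rate 1/3 per session, and the second round raises only session 3 to 2/3, matching the earlier example.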
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3, because of link a; link a is saturated
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is saturated now
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated
End
[Figure: the same five-session network over links a, b, c, d; all link capacities are 1]
21
Example revisited
Max-min fair vector if T_ij = ∞: r = (1/2, 1/2, 1/2, 1/2), T = 2 > 1.52
What if the demands T13 and T31 = 1/4, T24 = 1/2, T42 = ∞? r = (1/4, 1/2, 1/4, 3/4)
22
Causes/costs of congestion: scenario 3
Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted
[Figure: Hosts A and B; dropped packets waste the upstream capacity already spent on them]
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate at which sender should send
24
Case study ATM ABR congestion control
ABR: available bit rate: "elastic service"
if sender's path "underloaded": sender should use available bandwidth
if sender's path congested: sender throttled to minimum guaranteed rate
RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches ("network-assisted")
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell: congested switch may lower ER value in cell
sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin
Roughly: rate = CongWin / RTT bytes/sec
CongWin is a dynamic function of perceived network congestion
How does sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after loss event
three mechanisms:
AIMD
slow start
conservative after timeout events
27
TCP AIMD
multiplicative decrease: cut CongWin in half after loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)
[Figure: long-lived TCP connection; congestion window sawtooths between roughly 8, 16, and 24 Kbytes over time]
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket is full
over an interval of length t: number of packets admitted is less than or equal to (r·t + b)
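The admission bound r·t + b can be illustrated with a small sketch (the class and parameter names are hypothetical; one token per packet, matching the slide's packet-count formulation, whereas real policers usually count bytes):

```python
class TokenBucket:
    """Token-bucket policer: holds at most b tokens, refilled at r tokens/sec."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b      # bucket starts full
        self.last = 0.0      # time of the last arrival

    def admit(self, now):
        # add tokens generated since the last arrival, capped at bucket size b
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if self.tokens >= 1:         # admit the packet by spending one token
            self.tokens -= 1
            return True
        return False                 # no token: the packet is policed (dropped)

# flood the policer: 2000 packet arrivals over t = 2 seconds
tb = TokenBucket(r=10, b=5)          # 10 tokens/sec average, burst of 5
admitted = sum(tb.admit(i / 1000) for i in range(2000))
```

However fast the packets arrive, at most r·t + b = 10·2 + 5 = 25 of them are admitted over the 2-second interval.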
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee.
Arriving traffic is policed by a token bucket (token rate r, bucket size b) and then served by WFQ at a guaranteed per-flow rate R.
Maximum delay: D_max = b/R
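Plugging sample numbers (our own, purely illustrative) into the bound: a flow policed with bucket size b and served by WFQ at a guaranteed rate R never waits longer than b/R, since at most a full bucket of b bytes can be queued ahead of an arriving packet and it drains at rate at least R:

```python
b = 10_000 * 8       # bucket size: a 10 kB burst, expressed in bits
R = 1_000_000        # guaranteed WFQ rate for this flow: 1 Mb/s
d_max = b / R        # worst-case delay bound in seconds: 0.08 s
```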
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion: scenario 1
- Causes/costs of congestion: scenario 2
- Slide 5
- Causes/costs of congestion: scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
5
Causescosts of congestion scenario 2 always (goodput)
ldquoperfectrdquo retransmission only when loss
retransmission of delayed (not lost) packet makes
larger (than perfect case) for same
in
out
=
in
out
gt
in
out
ldquocostsrdquo of congestion
more work (retrans) for given ldquogoodputrdquo
unneeded retransmissions link carries multiple copies of pkt
R2
R2in
ou
t
b
R2
R2in
ou
t
a
R2
R2in
ou
t
c
R4
R3
6
Causescosts of congestion scenario 3 four senders
multihop paths
timeoutretransmit
inQ what happens as
and increase in
finite shared output link buffers
Host Ain original data
Host B
out
in original data plus retransmitted data
7
Example
What happens when each demand peaks at unity rateThroughput = 152 (How) twice the unity rate T = 107
8
Max-min fair allocation
Given a network and a set of sessions we would like to find a maximal flow that it is fair
We will see different definitions for max-min fairness and will learn a flow control algorithm
The tutorial will give understanding what is max-min fairness
9
How define fairness
Any session is entitled to as much network use as is any other
Allocating the same share to all
10
Max-Min Flow Control Rule
The rule is maximizing the network use allocated to the sessions with the minimum allocation
An alternative definition is to maximize the allocation of each session i under constraint that an increase in irsquos allocation doesnrsquot cause a decrease in some other session allocation with the same or smaller rate than i
11
Example
Maximal fair flow division will be to give for the sessions 012 a flow rate of 13 and for the session 3 a flow rate of 23
C=1 C=1
Session 1
Session 2 Session 3
Session 0
12
Notation
G=(NA) - Directed network graph (N is set of vertexes and A is set of edges)
Ca ndash the capacity of a link a
Fa ndash the flow on a link a
P ndash a set of the sessions rp ndash the rate of a session p
We assume a fixed single-path routing method
13
Definitions
We have following constraints on the vector r= rp | p Є P
A vector r satisfying these constraints is said to be feasible
alink crossing
p sessions allpa r F
allfor
allfor 0
AaCF
Ppr
aa
p
14
Definitions
A vector of rates r is said to be max-min
fair if is a feasible and for each p Є P rp
can not be increased while maintaining
feasibility without decreasing rprsquo for
some session prsquo for which rprsquo le rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ge rprsquo for all sessions prsquo crossing link a
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1 Bottlenecks for 12345 respectively are caadaNote c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck Theorem A feasible rate vector r is
max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm Bring all the sessions to the state that they
have a bottleneck link and then according to theorem it will be the maximal fair flow
We start with all-zero rate vector and to increase rates on all paths together until Fa = Ca for one or more links a
At this point each session using a saturated link has the same rate as every other session using this link Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step always equally incrementing all sessions not passing through any saturated link until all session pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate VectorsInit k=1 Fa
0=0 rp0=0 P1=P and A1=A
1 For all aA nak= num of sessions pPk
crossing link a2 Δr=minaA
k(Ca-Fak-1)na
k (find inc size)3 For all p Pk rp
k=rpk-1+ Δr (increment)
for other p rpk=rp
k-1
4 Fak=Σp crossing arp
k (Update flow)5 Ak+1= The set of unsaturated links6 Pk+1=all prsquos such that p cross only links in
Ak+1
7 k=k+18 If Pk is empty then stop else goto 1
20
Example of Algorithm Running
Step 1 All sessions get a rate of 13 because of a and the link a is saturated
Step 2 Sessions 1 and 4 get an additional rate increment of 13 for a total of 23 Link c is saturated now
Step 3 Session 4 gets an additional rate increment of 13 for a total of 1 Link d is saturated
End
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1
21
Example revisited
Max-min fair vector if Tij = infin r = (frac12 frac12 frac12 frac12 ) T = 2 gt 152
What if the demands T13
and T31 = frac14
T24 = frac12 T42 = infin r = (frac14 frac12 frac14 frac34)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when packet dropped any ldquoupstream transmission capacity used for that packet was wasted
Host A
Host B
o
u
t
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA DECbit TCPIP ECN ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout
Implementation: variable Threshold
At a loss event, Threshold is set to 1/2 of CongWin just before the loss event
33
Summary: TCP Congestion Control
When CongWin is below Threshold, sender is in slow-start phase: window grows exponentially
When CongWin is above Threshold, sender is in congestion-avoidance phase: window grows linearly
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS
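The four rules collapse into a small event handler. A sketch under assumptions of ours (windows in MSS units, an arbitrary initial threshold of 64 MSS, and no fast-recovery window inflation):

```python
class TcpCC:
    """Toy model of the slide's congestion-control state rules."""

    def __init__(self):
        self.congwin = 1.0       # congestion window, in MSS
        self.threshold = 64.0    # slow-start threshold, in MSS (assumed initial value)

    def on_new_ack(self):
        if self.congwin < self.threshold:
            self.congwin += 1.0                  # slow start: exponential growth
        else:
            self.congwin += 1.0 / self.congwin   # congestion avoidance: linear growth

    def on_triple_dup_ack(self):
        self.threshold = self.congwin / 2        # multiplicative decrease
        self.congwin = self.threshold            # continue in congestion avoidance

    def on_timeout(self):
        self.threshold = self.congwin / 2
        self.congwin = 1.0                       # restart from slow start
```

For example, ten new ACKs from the initial state grow the window to 11 MSS; a triple duplicate ACK then halves it to 5.5 MSS.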
34
TCP sender congestion control

Event: ACK receipt for previously unacked data | State: Slow Start (SS)
TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold), set state to "Congestion Avoidance"
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data | State: Congestion Avoidance (CA)
TCP Sender Action: CongWin = CongWin + MSS*(MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK | State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: Timeout | State: SS or CA
TCP Sender Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK | State: SS or CA
TCP Sender Action: Increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22 * MSS / (RTT * sqrt(p))
=> p = 2 * 10^-10
New versions of TCP for high-speed needed
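Both numbers on the slide follow from the formula; a quick check (units: bits and seconds):

```python
# Reproduce the slide's 10 Gbps example: required window, and the loss rate
# that Throughput = 1.22*MSS/(RTT*sqrt(p)) would demand.

MSS = 1500 * 8     # segment size in bits
RTT = 0.1          # round-trip time, seconds
target = 10e9      # desired throughput, bits/sec

W = target * RTT / MSS                    # in-flight segments needed
p = (1.22 * MSS / (RTT * target)) ** 2    # loss rate the formula permits

print(round(W))    # 83333 segments
print(p)           # about 2e-10
```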
36
Macroscopic TCP model
Deterministic packet losses: 1/p packets transmitted in a cycle
[figure: sawtooth congestion-window cycle between one loss and the next]
37
TCP Model Cont'd
Equate the trapezoid area (3/8)W^2 under the sawtooth to 1/p: W = sqrt(8/(3p))
Average rate = (3/4) * W * MSS/RTT = C * MSS/(RTT * sqrt(p)), with C = sqrt(3/2) ≈ 1.22
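A quick numeric check of the constant (any sample loss rate works):

```python
from math import sqrt

p = 1e-4                       # sample loss probability
W = sqrt(8 / (3 * p))          # cycle peak, from (3/8) W^2 = 1/p
avg_window = 0.75 * W          # mean of the sawtooth between W/2 and W
C = avg_window * sqrt(p)       # throughput constant in MSS/(RTT*sqrt(p)) units

print(C)                       # 1.2247..., i.e. sqrt(3/2)
```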
38
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
39
Why is TCP fair?
Two competing sessions:
additive increase gives slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
[figure: Connection 2 throughput vs. Connection 1 throughput (each axis up to R); repeated additive-increase/loss-halving steps converge toward the equal-bandwidth-share line]
congestion avoidance: additive increase; loss: decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: do not want rate throttled by congestion control
Instead use UDP: pump audio/video at constant rate, tolerate packet loss
Research area: TCP-friendly congestion control, DCCP
Fairness and parallel TCP connections
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections
new app asks for 1 TCP, gets rate R/10
new app asks for 10 TCPs, gets R/2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies single class of traffic
Drop-tail: arriving packets get dropped when queue is full, regardless of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (e.g., TCP)
Does not separate between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
TCP detects congestion from loss
• forces network to have long standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily, which allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior
Routers see actual queue occupancy (distinguishing queuing delay from propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect the ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving at a full queue causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem, but not the full-queues problem
48
Full Queues Problem
Drop packets before the queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD):
• if qlen > drop_level, drop each new packet with fixed probability p
• does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain a running average of the queue length
If avg < min_th, do nothing: low queuing, send packets through
If avg > max_th, drop packet: protection from misbehaving sources
Else, mark the packet with probability proportional to the queue length: notify sources of incipient congestion
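The three cases map directly onto a small function (a sketch; the threshold values and the EWMA weight below are illustrative, not mandated constants):

```python
def update_avg(avg, qlen, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - w) * avg + w * qlen

def red_mark_probability(avg, min_th, max_th, max_p):
    """RED drop/mark probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0                    # low queuing: admit everything
    if avg >= max_th:
        return 1.0                    # protect against misbehaving sources
    # between the thresholds: probability rises linearly up to max_p
    return max_p * (avg - min_th) / (max_th - min_th)

print(red_mark_probability(10, 5, 15, 0.1))   # ~0.05, halfway up the ramp
```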
51
RED Operation
[figure: P(drop) vs. average queue length — 0 below min_th, rising linearly to max_P at max_th, then 1.0 above max_th]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort" — a simple model for sharing and congestion studies
Future: next-generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
53
Principles for QoS Guarantees
Example: 1 Mbps IP phone and FTP share a 1.5 Mbps link: bursts of FTP can congest the router and cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
what if applications misbehave (audio sends at higher than declared rate)? policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to the ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this…
58
Scheduling And Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue; real-world example?
discard policy: if packet arrives to full queue, who to discard?
• tail drop: drop arriving packet
• priority: drop/remove on priority basis
• random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest-priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue
A DeficitCounter that specifies the total number of bytes the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, then the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
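One service round of this algorithm can be sketched as follows (queues hold packet sizes in bytes; the quantum values are illustrative):

```python
from collections import deque

def dwrr_round(queues, quanta, deficits, sent):
    """Visit each queue once, transmitting while the head packet fits the deficit."""
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0                # an empty queue forfeits its credits
            continue
        deficits[i] += quanta[i]           # add this queue's quantum of bytes
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()
            deficits[i] -= pkt             # spend credits on the transmitted bytes
            sent.append((i, pkt))          # "transmit" (queue index, packet size)
        if not q:
            deficits[i] = 0                # emptied during service: reset credits

# Example: three queues with quanta 1000/500/500 bytes.
queues = [deque([600, 400, 300]), deque([400, 400, 300]), deque([600, 300, 400])]
deficits = [0, 0, 0]
sent = []
dwrr_round(queues, [1000, 500, 500], deficits, sent)
print(sent)   # [(0, 600), (0, 400), (1, 400)] — queue 3's 600-byte head exceeds its 500
```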
66
DWRR Example
Queue 1: 50% BW, quantum[1] = 1000; queued packets 600, 400, 300
Queue 2: 25% BW, quantum[2] = 500; queued packets 400, 400, 300
Queue 3: 25% BW, quantum[3] = 500; queued packets 600, 300, 400
Modified Deficit Round Robin gives priority to one class, say VoIP
67
Policing Mechanisms
Goal: limit traffic so it does not exceed declared parameters
Three commonly used criteria:
(Long-term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 6000 pkts per min (ppm) avg; 1500 ppm peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec, unless bucket full
over an interval of length t: number of packets admitted ≤ (r*t + b)
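A token-bucket policer is a few lines (a sketch; the rate and depth values are illustrative, and time is passed in explicitly rather than read from a clock):

```python
class TokenBucket:
    """Admit a packet only if a token is available; refill at rate r, depth b."""

    def __init__(self, rate, burst):
        self.rate = rate          # r: tokens generated per second
        self.burst = burst        # b: maximum tokens the bucket can hold
        self.tokens = burst       # start with a full bucket
        self.last = 0.0           # time of the previous call

    def allow(self, now, need=1.0):
        # add tokens for the elapsed interval, capped at the bucket depth
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= need:
            self.tokens -= need
            return True           # conforming packet
        return False              # non-conforming: drop (or delay)

tb = TokenBucket(rate=10, burst=5)
# 100 back-to-back arrivals over ~1 second: at most r*t + b packets can pass
admitted = sum(tb.allow(t * 0.01) for t in range(100))
print(admitted)   # the initial burst of b, then roughly r per second
```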
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee:
D_max = b/R
[figure: arriving traffic -> token bucket (token rate r, bucket size b) -> WFQ (per-flow rate R)]
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
This tutorial will give an understanding of what max-min fairness is
9
How to define fairness?
Any session is entitled to as much network use as any other
Allocate the same share to all
10
Max-Min Flow Control Rule
The rule: maximize the network use allocated to the sessions with the minimum allocation
An alternative definition: maximize the allocation of each session i, under the constraint that an increase in i's allocation doesn't cause a decrease in the allocation of some other session with the same or smaller rate than i
11
Example
The max-min fair flow division gives sessions 0, 1, and 2 a flow rate of 1/3 each, and session 3 a flow rate of 2/3
[figure: two links, each of capacity C=1, shared by sessions 0, 1, 2, and 3]
12
Notation
G = (N, A) - directed network graph (N is the set of vertices and A is the set of edges)
Ca - the capacity of a link a
Fa - the flow on a link a
P - the set of sessions; rp - the rate of a session p
We assume a fixed, single-path routing method
13
Definitions
We have the following constraints on the vector r = (rp | p ∈ P):
rp ≥ 0 for all p ∈ P
Fa = Σ rp, summed over all sessions p crossing link a
Fa ≤ Ca for all a ∈ A
A vector r satisfying these constraints is said to be feasible
14
Definitions
A vector of rates r is said to be max-min fair if it is feasible and, for each p ∈ P, rp cannot be increased while maintaining feasibility without decreasing rp' for some session p' for which rp' ≤ rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r, we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ≥ rp' for all sessions p' crossing link a
[figure: sessions 1-5 routed over links a, b, c, d]
All link capacities are 1. Bottlenecks for sessions 1-5, respectively, are c, a, a, d, a. Note that c is not a bottleneck for 5, and b is not a bottleneck for 1
16
Max-Min Fairness: Definition Using Bottleneck
Theorem: A feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all sessions to the state where they have a bottleneck link; then, according to the theorem, the flow is max-min fair
We start with an all-zero rate vector and increase the rates on all paths together until Fa = Ca for one or more links a
At this point, each session using a saturated link has the same rate as every other session using that link. Thus, these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors
At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k=1, Fa^0 = 0, rp^0 = 0, P^1 = P, A^1 = A
1. For all a in A^k: na^k = number of sessions p in P^k crossing link a
2. Δr = min over a in A^k of (Ca - Fa^(k-1)) / na^k   (find increment size)
3. For all p in P^k: rp^k = rp^(k-1) + Δr (increment); for all other p: rp^k = rp^(k-1)
4. Fa^k = Σ rp^k over sessions p crossing a   (update flows)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p such that p crosses only links in A^(k+1)
7. k = k+1
8. If P^k is empty, stop; else go to 1
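The steps above can be implemented directly. A sketch (the two-link example wiring below is our own, chosen to reproduce the earlier 1/3, 1/3, 1/3, 2/3 allocation):

```python
# Progressive filling for max-min fair rates, following the slide's algorithm:
# equally increment all non-bottlenecked sessions until some link saturates,
# then freeze every session that crosses a saturated link.

def max_min_fair(capacities, session_links):
    rates = {p: 0.0 for p in session_links}
    active = set(session_links)                  # sessions not yet bottlenecked
    open_links = set(capacities)                 # unsaturated links
    while active:
        counts = {a: sum(1 for p in active if a in session_links[p])
                  for a in open_links}
        flows = {a: sum(rates[p] for p in session_links if a in session_links[p])
                 for a in open_links}
        # largest equal increment before some open link saturates (step 2)
        dr = min((capacities[a] - flows[a]) / counts[a]
                 for a in open_links if counts[a] > 0)
        for p in active:                         # step 3: equal increment
            rates[p] += dr
        flows = {a: sum(rates[p] for p in session_links if a in session_links[p])
                 for a in open_links}
        open_links = {a for a in open_links
                      if capacities[a] - flows[a] > 1e-9}    # step 5
        active = {p for p in active
                  if set(session_links[p]) <= open_links}    # step 6
    return rates

# Hypothetical wiring: sessions 0, 1, 2 share link L1; session 0 continues over
# L2, which session 3 also uses; both links have capacity 1.
rates = max_min_fair({"L1": 1.0, "L2": 1.0},
                     {0: {"L1", "L2"}, 1: {"L1"}, 2: {"L1"}, 3: {"L2"}})
print(rates)   # sessions 0-2 get 1/3 each; session 3 gets 2/3
```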
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3 (because of link a), and link a is saturated
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3; link c is now saturated
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1; link d is saturated
End
[figure: same network as on the bottleneck slide; all link capacities are 1]
21
Example revisited
Max-min fair vector if Tij = ∞: r = (1/2, 1/2, 1/2, 1/2), T = 2 > 1.52
What if the demands T13 and T31 = 1/4, T24 = 1/2, T42 = ∞? r = (1/4, 1/2, 1/4, 3/4)
22
Causes/costs of congestion: scenario 3
Another "cost" of congestion:
when a packet is dropped, any upstream transmission capacity used for that packet was wasted
[figure: multihop path from Host A to Host B; goodput λout]
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate sender should send at
24
Case study: ATM ABR congestion control
ABR: available bit rate: "elastic service"
if sender's path "underloaded": sender should use available bandwidth
if sender's path congested: sender throttled to minimum guaranteed rate
RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches ("network-assisted")
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
25
Case study: ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell
the sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 in a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
[Figure: arriving traffic → token bucket (rate r, bucket size b) → WFQ with guaranteed per-flow rate R; maximum delay D_max = b/R]
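The D_max = b/R bound follows in two lines from the token-bucket arrival constraint and the WFQ rate guarantee, assuming the flow's WFQ share satisfies R ≥ r:

```latex
\begin{align*}
A(t) &\le r\,t + b                           && \text{token-bucket arrival bound}\\
S(t) &\ge R\,t                               && \text{WFQ service guarantee}\\
Q(t) &= A(t) - S(t) \le (r-R)\,t + b \le b   && \text{since } R \ge r\\
d_{\max} &= b / R                            && \text{backlog} \le b \text{ drained at rate} \ge R
\end{align*}
```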
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
9
How to define fairness?
Any session is entitled to as much network use as is any other
Allocating the same share to all
10
Max-Min Flow Control Rule
The rule is: maximize the network use allocated to the sessions with the minimum allocation.

An alternative definition: maximize the allocation of each session i under the constraint that an increase in i's allocation doesn't cause a decrease in the allocation of some other session whose rate is the same as or smaller than i's.
11
Example
The max-min fair flow division gives sessions 0, 1, 2 a flow rate of 1/3 each, and session 3 a flow rate of 2/3.
[Figure: two links, each of capacity C = 1, carrying sessions 0, 1, 2, and 3.]
12
Notation
G = (N, A): directed network graph (N is the set of vertices and A is the set of links)

C_a: the capacity of link a

F_a: the flow on link a

P: the set of sessions; r_p: the rate of session p

We assume a fixed, single-path routing method.
13
Definitions
We have the following constraints on the vector r = {r_p | p Є P}:

r_p ≥ 0 for all p Є P

F_a = Σ_{p crossing a} r_p for every link a

F_a ≤ C_a for all a Є A

A vector r satisfying these constraints is said to be feasible.
14
Definitions
A vector of rates r is said to be max-min fair if it is feasible and, for each p Є P, r_p cannot be increased while maintaining feasibility without decreasing r_p' for some session p' for which r_p' ≤ r_p.
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r, we say that a is a bottleneck link with respect to r for a session p crossing a if F_a = C_a and r_p ≥ r_p' for all sessions p' crossing link a.
[Figure: five sessions (1–5) routed over links a, b, c, d.]

All link capacities are 1. The bottlenecks for sessions 1, 2, 3, 4, 5 are c, a, a, d, a respectively. Note: c is not a bottleneck for session 5, and b is not a bottleneck for session 1.
16
Max-Min Fairness Definition Using Bottleneck

Theorem: A feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r.
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all the sessions to a state in which they have a bottleneck link; then, by the theorem, the result is the max-min fair flow.

We start with an all-zero rate vector and increase the rates on all paths together until F_a = C_a for one or more links a.

At this point, each session using a saturated link has the same rate as every other session using that link. Thus these saturated links serve as bottleneck links for all sessions using them.
18
Algorithm for Computing Max-Min Fair Rate Vectors

At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated.

Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate).

The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link.
19
Algorithm for Computing Max-Min Fair Rate Vectors

Init: k = 1, F_a^0 = 0, r_p^0 = 0, P^1 = P, and A^1 = A

1. For all a Є A^k: n_a^k = number of sessions p Є P^k crossing link a
2. Δr = min_{a Є A^k} (C_a − F_a^(k−1)) / n_a^k   (find the increment size)
3. For all p Є P^k: r_p^k = r_p^(k−1) + Δr (increment); for other p: r_p^k = r_p^(k−1)
4. F_a^k = Σ_{p crossing a} r_p^k   (update the flow)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p such that p crosses only links in A^(k+1)
7. k = k + 1
8. If P^k is empty then stop, else go to 1
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3 (because of link a), and link a is saturated.

Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is now saturated.

Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated.

End.
[Figure: the same five-session network over links a, b, c, d, all of capacity 1.]
21
Example revisited
Max-min fair vector if T_ij = ∞: r = (1/2, 1/2, 1/2, 1/2), T = 2 > 1.52.

What if the demands are T13 = T31 = 1/4, T24 = 1/2, T42 = ∞? Then r = (1/4, 1/2, 1/4, 3/4).
22
Causes/costs of congestion: scenario 3

Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.

[Figure: Host A and Host B sending over a multihop path; delivered throughput collapses as offered load grows.]
23
Approaches towards congestion control
Two broad approaches towards congestion control:

End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
approach taken by TCP

Network-assisted congestion control:
routers provide feedback to end systems:
a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM), or
an explicit rate at which the sender should send
24
Case study ATM ABR congestion control
ABR: available bit rate, an "elastic service":
if the sender's path is "underloaded": the sender should use the available bandwidth
if the sender's path is congested: the sender is throttled to its minimum guaranteed rate

RM (resource management) cells:
sent by the sender, interspersed with data cells
bits in the RM cell are set by switches ("network-assisted"):
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells are returned to the sender by the receiver, with the bits intact
25
Case study ATM ABR congestion control
A two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path.

EFCI bit in data cells: set to 1 by a congested switch. If the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.
26
TCP Congestion Control
end-end control (no network assistance)

sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin

Roughly: rate = CongWin / RTT bytes/sec

CongWin is a dynamic function of perceived network congestion.

How does the sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
the TCP sender reduces its rate (CongWin) after a loss event

Three mechanisms: AIMD, slow start, conservative behavior after timeout events.
27
TCP AIMD
multiplicative decrease: cut CongWin in half after a loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)

[Figure: the congestion window of a long-lived TCP connection traces a sawtooth over time, oscillating between levels such as 8, 16, and 24 Kbytes.]
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events: probing

cwnd += SMSS*SMSS/cwnd   (*)

This adjustment is executed on every incoming non-duplicate ACK. Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
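To see why (*) approximates 1 MSS per RTT: with cwnd/SMSS segments in flight, roughly that many non-duplicate ACKs arrive per RTT, and the per-ACK increments sum to slightly under one full segment. A quick numeric check (the starting window of 10 segments is an arbitrary choice for illustration):

```python
SMSS = 1460.0                    # sender maximum segment size, in bytes
cwnd = 10 * SMSS                 # congestion window at the start of an RTT

# roughly one non-duplicate ACK per in-flight segment during the RTT
for _ in range(int(cwnd / SMSS)):
    cwnd += SMSS * SMSS / cwnd   # the per-ACK adjustment (*)

growth = cwnd - 10 * SMSS        # close to, but not above, one full segment
```

The growth per RTT comes out just under one SMSS (the window grows slightly during the round, so later increments are slightly smaller).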
29
TCP Slow Start
When the connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec, so the initial rate is 20 kbps.

The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.

When the connection begins, increase the rate exponentially fast until the first loss event.
30
TCP Slow Start (more)

When the connection begins, increase the rate exponentially until the first loss event:
double CongWin every RTT
done by incrementing CongWin for every ACK received

Summary: the initial rate is slow, but it ramps up exponentially fast.

[Figure: Host A sends one segment, then two, then four, one round per RTT, to Host B.]
31
Refinement

After 3 dup ACKs:
CongWin is cut in half; Threshold is set to CongWin
the window then grows linearly

But after a timeout event:
Threshold is set to CongWin/2, and CongWin is instead set to 1 MSS
the window then grows exponentially to the threshold, then grows linearly

Philosophy: 3 dup ACKs indicate the network is capable of delivering some segments; a timeout before 3 dup ACKs is "more alarming".
32
Refinement (more)

Q: When should the exponential increase switch to linear?

A: When CongWin gets to 1/2 of its value before timeout.

Implementation: a variable Threshold. At a loss event, Threshold is set to 1/2 of the CongWin value just before the loss event.
33
Summary: TCP Congestion Control

When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.

When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.

When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
34
TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP Sender Action: CongWin = CongWin + MSS; if (CongWin > Threshold), set state to "Congestion Avoidance"
Commentary: Results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP Sender Action: CongWin = CongWin + MSS * (MSS / CongWin)
Commentary: Additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
TCP Sender Action: Threshold = CongWin / 2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: Timeout
State: SS or CA
TCP Sender Action: Threshold = CongWin / 2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
TCP Sender Action: Increment the duplicate ACK count for the segment being acked
Commentary: CongWin and Threshold not changed
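The event/action rows above amount to a small state machine over (CongWin, Threshold). A minimal Python sketch, with the window counted in MSS units and made-up method names (`on_new_ack`, `on_triple_dup_ack`, `on_timeout` are illustrative, not from any real stack):

```python
MSS = 1  # count the window in MSS units for simplicity

class TcpSender:
    def __init__(self):
        self.cong_win = 1 * MSS
        self.threshold = 64 * MSS
        self.state = "SS"                # slow start

    def on_new_ack(self):
        if self.state == "SS":
            self.cong_win += MSS         # doubles CongWin every RTT
            if self.cong_win > self.threshold:
                self.state = "CA"
        else:                            # congestion avoidance
            self.cong_win += MSS * MSS / self.cong_win  # +1 MSS per RTT overall

    def on_triple_dup_ack(self):
        self.threshold = self.cong_win / 2
        self.cong_win = self.threshold   # fast recovery: multiplicative decrease
        self.state = "CA"

    def on_timeout(self):
        self.threshold = self.cong_win / 2
        self.cong_win = 1 * MSS          # back to one segment
        self.state = "SS"
```

Feeding it a few events reproduces the table's transitions: growth in SS, halving on a triple duplicate ACK, reset to 1 MSS on timeout.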
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput.

Requires a window size of W = 83,333 in-flight segments.

Throughput in terms of loss rate: Throughput = 1.22 · MSS / (RTT · √p).

Achieving 10 Gbps requires p = 2 · 10^-10, an extremely small loss rate.

New versions of TCP are needed for high-speed operation.
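The two numbers on this slide follow directly from the formula; a quick check (solving the throughput relation for p):

```python
MSS = 1500 * 8    # segment size in bits
RTT = 0.1         # round-trip time in seconds
target = 10e9     # desired throughput: 10 Gbps

# required window: target * RTT bits in flight, in segments
W = target * RTT / MSS

# Throughput = 1.22 * MSS / (RTT * sqrt(p))  =>  solve for the loss rate p
p = (1.22 * MSS / (RTT * target)) ** 2
```

W comes out at about 83,333 segments and p at about 2 · 10^-10, matching the slide.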
36
Macroscopic TCP model
Deterministic packet losses: one loss per cycle, so 1/p packets are transmitted in a cycle (a loss/success ratio of p).
37
TCP Model Cont'd

Equate the trapezoid area (3/8) · W² under the sawtooth to 1/p; this yields Throughput = C · MSS / (RTT · √p), with C = √(3/2) ≈ 1.22.
38
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair?

Two competing sessions: additive increase gives a slope of 1 as throughput increases; multiplicative decrease cuts throughput proportionally.

[Figure: Connection 1 throughput vs. Connection 2 throughput, both axes up to R. The trajectory converges toward the "equal bandwidth share" line as congestion avoidance additively increases the windows and each loss decreases a window by a factor of 2.]
40
Fairness (more)
Fairness and UDP:
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss.
Research area: TCP-friendly rate control, DCCP.

Fairness and parallel TCP connections:
Nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this.
Example: a link of rate R supporting 9 connections. A new app asking for 1 TCP connection gets rate R/10; a new app asking for 10 TCP connections gets roughly R/2.
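The arithmetic behind the parallel-connection example, assuming each of the N connections sharing the link gets roughly R/N:

```python
R = 1.0          # link rate (normalized)
existing = 9     # TCP connections already sharing the link

# one new connection: 10 equal shares in total
share_one = R / (existing + 1)

# the app opens 10 parallel connections: it holds 10 of 19 shares
share_ten = 10 * R / (existing + 10)
```

share_one is R/10, while share_ten is 10R/19 ≈ R/2, which is why per-connection fairness does not imply per-application fairness.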
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next.
Buffer space: which packet to drop next (when required).
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail

Simplest choice; used widely in the Internet.

FIFO (first-in-first-out) implies a single class of traffic.

Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance.

Important distinction: FIFO is the scheduling discipline; drop-tail is the drop policy.
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not distinguish between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization.
TCP detects congestion from loss, which forces the network to carry long standing queues in steady state.

Lock-out problem: drop-tail routers treat bursty traffic poorly.
Traffic gets synchronized easily, allowing a few flows to monopolize the queue space.
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior.
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving at a full queue causes some random packet to be dropped.

Drop front: on a full queue, drop the packet at the head of the queue.
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion.

Example: early random drop (ERD):
If qlen > drop-level, drop each new packet with fixed probability p.
Does not control misbehaving users.
49
Random Early Detection (RED)

Detect incipient congestion: assume hosts respond to lost packets.

Avoid window synchronization: randomly mark packets.

Avoid bias against bursty traffic.
50
RED Algorithm
Maintain a running average of the queue length.

If avg < minth, do nothing (low queuing: send packets through).

If avg > maxth, drop the packet (protection from misbehaving sources).

Else, mark/drop the packet with probability proportional to the queue length (notify sources of incipient congestion).
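The three cases above define a piecewise drop-probability curve; a minimal sketch (the function name and parameter order are my own):

```python
def red_drop_prob(avg, min_th, max_th, max_p):
    """RED marking probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0        # low queuing: let packets through
    if avg >= max_th:
        return 1.0        # protect against misbehaving sources
    # linear ramp from 0 up to max_p between the two thresholds
    return max_p * (avg - min_th) / (max_th - min_th)
```

For example, with minth = 5, maxth = 15, and maxP = 0.1, an average queue of 10 packets sits halfway up the ramp and gives a drop probability of 0.05.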
51
RED Operation

[Figure: drop probability P(drop) vs. average queue length — zero below minth, rising linearly to maxP at maxth, then jumping to 1.0.]
52
Improving QoS in IP Networks

Thus far: "making the best of best effort".

Future: a next-generation Internet with QoS guarantees:
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees

A simple model for sharing and congestion studies.
53
Principles for QoS Guarantees

Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link; bursts of FTP traffic can congest the router and cause audio loss, so we want to give priority to audio over FTP.

Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly.
54
Principles for QoS Guarantees (more)

What if applications misbehave (audio sends at higher than its declared rate)? Policing: force source adherence to bandwidth allocations.

Marking and policing at the network edge: similar to the ATM UNI (User Network Interface).

Principle 2: provide protection (isolation) for one class from others.
55
Principles for QoS Guarantees (more)

Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation.
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QoS Guarantees (more)

Basic fact of life: traffic demands beyond the link capacity cannot be supported.

Call admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them.
Principle 4
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling and Policing Mechanisms

Scheduling: choose the next packet to send on the link.

FIFO (first-in-first-out) scheduling: send in order of arrival to the queue.

Discard policy: if a packet arrives to a full queue, which one to discard?
Tail drop: drop the arriving packet.
Priority: drop/remove on a priority basis.
Random: drop/remove randomly.
59
Scheduling Policies: more

Priority scheduling: transmit the highest-priority queued packet.

Multiple classes with different priorities; the class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more

Round-robin scheduling: multiple classes; cyclically scan the class queues, serving one from each class (if available).
61
Scheduling Policies still more
Weighted Fair Queuing: generalized round robin; each class gets a weighted amount of service in each cycle.
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:

A weight that defines the percentage of the output-port bandwidth allocated to the queue.

A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter is incremented by the value of the quantum. If the size of the packet at the head of the queue is greater than the DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR

A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
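The scheduler behavior described above can be condensed into one round of service; this is an illustrative sketch (the function `dwrr_round` and the list-based state are my own encoding, with queues holding packet sizes in bytes):

```python
from collections import deque

def dwrr_round(queues, quanta, deficits):
    """One DWRR scheduler round; returns the packets sent as (queue_id, size)."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0              # an empty queue forfeits its credit
            continue
        deficits[i] += quanta[i]         # add the queue's quantum on each visit
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()            # head-of-queue packet fits: transmit it
            deficits[i] -= pkt
            sent.append((i, pkt))
        if not q:
            deficits[i] = 0              # queue drained: reset its credit
    return sent
```

With quanta of 1000/500/500 bytes (the 50%/25%/25% split from the example below), a queue whose head packet exceeds its remaining deficit keeps the credit for the next round, which is exactly how DWRR stays fair with variable-length packets.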
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets of 600, 400, 300 bytes

Queue 2 (25% BW, quantum[2] = 500): packets of 400, 400, 300 bytes

Queue 3 (25% BW, quantum[3] = 500): packets of 600, 300, 400 bytes

Modified Deficit Round Robin gives priority to one class, say VoIP.
67
Policing Mechanisms

Goal: limit traffic so that it does not exceed its declared parameters. Three commonly used criteria:

(Long-term) average rate: how many packets can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average.

Peak rate: e.g., 6000 pkts per min (ppm) avg.; 1500 ppm peak rate.

(Max) burst size: the max number of packets sent consecutively (with no intervening idle).
68
Policing Mechanisms
Token bucket: limit input to a specified burst size and average rate.

The bucket can hold b tokens.

Tokens are generated at rate r tokens/sec unless the bucket is full.

Over an interval of length t, the number of packets admitted is less than or equal to (r · t + b).
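A minimal token-bucket policer sketch (the class name and `admit` method are illustrative; one token is spent per admitted packet, so at most b back-to-back packets pass, and over any interval t at most r·t + b):

```python
class TokenBucket:
    """Admit a packet if a token is available; tokens accrue at rate r, capped at b."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b                  # bucket starts full
        self.last = 0.0                  # time of the previous admit() call

    def admit(self, now):
        # refill tokens for the elapsed time, never beyond the bucket size b
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1             # spend one token for this packet
            return True
        return False                     # out of tokens: the packet is policed
```

With r = 1 token/sec and b = 3, an instantaneous burst of 4 packets has its last packet rejected, and after 2 idle seconds two more tokens have accrued.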
69
Policing Mechanisms (more)
A token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee.

[Figure: arriving traffic passes through a token bucket (token rate r, bucket size b) into a WFQ scheduler with per-flow rate R; the maximum delay is D_max = b/R.]
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion: scenario 1
- Causes/costs of congestion: scenario 2
- Slide 5
- Causes/costs of congestion: scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
10
Max-Min Flow Control Rule
The rule is maximizing the network use allocated to the sessions with the minimum allocation
An alternative definition is to maximize the allocation of each session i under constraint that an increase in irsquos allocation doesnrsquot cause a decrease in some other session allocation with the same or smaller rate than i
11
Example
Maximal fair flow division will be to give for the sessions 012 a flow rate of 13 and for the session 3 a flow rate of 23
C=1 C=1
Session 1
Session 2 Session 3
Session 0
12
Notation
G=(NA) - Directed network graph (N is set of vertexes and A is set of edges)
Ca ndash the capacity of a link a
Fa ndash the flow on a link a
P ndash a set of the sessions rp ndash the rate of a session p
We assume a fixed single-path routing method
13
Definitions
We have following constraints on the vector r= rp | p Є P
A vector r satisfying these constraints is said to be feasible
alink crossing
p sessions allpa r F
allfor
allfor 0
AaCF
Ppr
aa
p
14
Definitions
A vector of rates r is said to be max-min fair if it is feasible and, for each p Є P, rp cannot be increased while maintaining feasibility without decreasing rp′ for some session p′ for which rp′ ≤ rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r, we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ≥ rp′ for all sessions p′ crossing link a
[Figure: five sessions (1–5) routed over links a, b, c, d]
All link capacities are 1. Bottlenecks for sessions 1, 2, 3, 4, 5 respectively are c, a, a, d, a. Note c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck
Theorem: A feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all the sessions to the state where they have a bottleneck link; then, by the theorem, the result is the max-min fair flow
We start with an all-zero rate vector and increase the rates on all paths together until Fa = Ca for one or more links a
At this point, each session using a saturated link has the same rate as every other session using this link. Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1, Fa^0 = 0, rp^0 = 0, P^1 = P, and A^1 = A
1. For all a Є A^k: na^k = number of sessions p Є P^k crossing link a
2. Δr = min over a Є A^k of (Ca − Fa^(k−1)) / na^k (find increment size)
3. For all p Є P^k: rp^k = rp^(k−1) + Δr (increment); for other p: rp^k = rp^(k−1)
4. Fa^k = Σ rp^k over sessions p crossing a (update flow)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p such that p crosses only links in A^(k+1)
7. k = k + 1
8. If P^k is empty then stop, else go to 1
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3 because of a, and link a is saturated
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is saturated now
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated
End
[Figure: the same network as before, sessions 1–5 over links a, b, c, d]
All link capacity is 1
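The progressive filling steps above can be sketched in Python. The topology below is an assumption reconstructed from the example's bottleneck note (sessions 2, 3, 5 cross link a; session 1 crosses b and c; session 5 also crosses c; session 4 crosses d); the algorithm itself follows steps 1–8.

```python
from fractions import Fraction

def max_min_fair(capacities, paths):
    """Progressive filling: equally increment all unfrozen sessions
    until some link saturates; freeze sessions on saturated links."""
    rates = {p: Fraction(0) for p in paths}
    active = set(paths)                      # P^k: sessions not yet bottlenecked
    while active:
        # F^(k-1): current flow on each link (frozen sessions included)
        flow = {a: sum(rates[p] for p in paths if a in paths[p])
                for a in capacities}
        # n^k: active sessions crossing each link
        n = {a: sum(1 for p in active if a in paths[p]) for a in capacities}
        # Step 2: largest equal increment before some link saturates
        dr = min((capacities[a] - flow[a]) / n[a]
                 for a in capacities if n[a] > 0)
        for p in active:                     # Step 3: increment
            rates[p] += dr
        # Steps 5-6: drop sessions that now cross a saturated link
        flow = {a: sum(rates[p] for p in paths if a in paths[p])
                for a in capacities}
        saturated = {a for a in capacities if flow[a] == capacities[a]}
        active = {p for p in active if not (set(paths[p]) & saturated)}
    return rates

# Assumed topology from the slide's figure: all capacities 1
caps = {'a': Fraction(1), 'b': Fraction(1), 'c': Fraction(1), 'd': Fraction(1)}
paths = {1: ['b', 'c'], 2: ['a'], 3: ['a'], 4: ['d'], 5: ['a', 'c']}
print(max_min_fair(caps, paths))
# matches the example: sessions 2, 3, 5 get 1/3; session 1 gets 2/3; session 4 gets 1
```

Exact fractions avoid the floating-point comparisons that would otherwise make the saturation test unreliable.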
21
Example revisited
Max-min fair vector if Tij = ∞: r = (1/2, 1/2, 1/2, 1/2); T = 2 > 1.52
What if the demands T13 and T31 = 1/4, T24 = 1/2, T42 = ∞? Then r = (1/4, 1/2, 1/4, 3/4)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted
[Figure: Host A's packets cross multiple hops toward Host B; throughput λout]
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate at which sender should send
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR: available bit rate, "elastic service"
if sender's path "underloaded": sender should use available bandwidth
if sender's path congested: sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches ("network-assisted")
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell: congested switch may lower ER value in cell
sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 in congested switch; if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin / RTT bytes/sec
27
TCP AIMD
[Figure: congestion window sawtooth over time, oscillating between 8, 16, and 24 Kbytes]
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSS*SMSS/cwnd (*)
This adjustment is executed on every incoming non-duplicate ACK
Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
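A short simulation (a sketch, not any particular stack's code; SMSS value illustrative) shows that applying the per-ACK adjustment (*) grows cwnd by roughly one segment per RTT:

```python
SMSS = 1460  # sender maximum segment size, bytes (illustrative)

def congestion_avoidance_rtt(cwnd):
    """Apply the per-ACK adjustment (*) across one RTT's worth of ACKs."""
    acks = int(cwnd / SMSS)          # roughly one ACK per in-flight segment
    for _ in range(acks):
        cwnd += SMSS * SMSS / cwnd   # equation (*)
    return cwnd

cwnd = 10 * SMSS
for rtt in range(3):
    new = congestion_avoidance_rtt(cwnd)
    print(f"RTT {rtt}: cwnd grew by {(new - cwnd) / SMSS:.3f} MSS")
    cwnd = new
```

The growth per RTT comes out slightly under 1 MSS because each ACK's increment shrinks as cwnd grows within the round, which is why (*) is called an approximation.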
29
TCP Slow Start
When connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to respectable rate
When connection begins, increase rate exponentially fast until first loss event
30
TCP Slow Start (more)
When connection begins, increase rate exponentially until first loss event:
double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one segment to Host B, then two, then four, doubling each RTT]
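The doubling follows directly from incrementing CongWin by 1 MSS per ACK; a toy sketch (window counted in segments):

```python
MSS = 1
cwnd = 1                  # congestion window, in segments
windows = []              # segments sent in each RTT
for rtt in range(4):
    windows.append(cwnd)
    # one ACK returns per segment sent; each ACK adds 1 MSS,
    # so the window doubles every RTT during slow start
    cwnd += cwnd * MSS
print(windows)  # [1, 2, 4, 8]
```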
31
Refinement
After 3 dup ACKs:
CongWin is cut in half (Threshold is set to CongWin)
window then grows linearly
But after timeout event:
Threshold set to CongWin/2 and CongWin instead set to 1 MSS
window then grows exponentially to a threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout
Implementation: variable Threshold
At loss event, Threshold is set to 1/2 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs: Threshold set to CongWin/2 and CongWin set to Threshold
When timeout occurs: Threshold set to CongWin/2 and CongWin is set to 1 MSS
34
TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS·(MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS

Event: Timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
Action: Increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
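The table's actions condense into a toy sender model (a sketch in units of MSS; the initial threshold and the event sequence are illustrative, not from the slides):

```python
class TcpSender:
    """Toy model of the event/action table; cwnd and threshold in MSS units."""
    def __init__(self):
        self.cwnd, self.threshold, self.state = 1.0, 8.0, "SS"

    def on_new_ack(self):
        if self.state == "SS":
            self.cwnd += 1.0                 # doubles per RTT in slow start
            if self.cwnd > self.threshold:
                self.state = "CA"
        else:
            self.cwnd += 1.0 / self.cwnd     # additive increase: +1 MSS per RTT

    def on_triple_dup_ack(self):
        self.threshold = self.cwnd / 2       # multiplicative decrease
        self.cwnd = self.threshold           # fast recovery
        self.state = "CA"

    def on_timeout(self):
        self.threshold = self.cwnd / 2
        self.cwnd = 1.0                      # back to 1 MSS
        self.state = "SS"

s = TcpSender()
for _ in range(12):                          # 12 new ACKs in a row
    s.on_new_ack()
print(s.state, round(s.cwnd, 2))             # crossed threshold 8, so "CA"
s.on_timeout()
print(s.state, s.cwnd)                       # SS 1.0
```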
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate:
Throughput = 1.22 · MSS / (RTT · √p)
→ requires p = 2·10^-10, an extremely small loss rate
New versions of TCP for high-speed needed
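Plugging the slide's numbers into the formula gives both figures (a quick check, nothing assumed beyond the stated values):

```python
from math import sqrt

MSS = 1500 * 8            # bits per segment
RTT = 0.100               # seconds
target = 10e9             # desired throughput: 10 Gbps

W = target * RTT / MSS    # in-flight segments needed
# Throughput = 1.22 * MSS / (RTT * sqrt(p))  =>  solve for loss rate p
p = (1.22 * MSS / (RTT * target)) ** 2
print(round(W), p)        # ~83333 segments, p on the order of 2e-10
```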
36
Macroscopic TCP model
Deterministic packet losses: 1/p packets transmitted in a cycle
[Figure: periodic sawtooth; each cycle ends with a loss, then successful transmission resumes]
37
TCP Model Contd
Equate the trapezoid area 3/8 W² under the sawtooth to 1/p:
3/8 W² = 1/p → throughput = C · MSS / (RTT · √p), where C = √(3/2) ≈ 1.22
38
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]
TCP Fairness
39
Why is TCP fair
Two competing sessions: additive increase gives slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally
[Figure: Connection 2 throughput vs. Connection 1 throughput, both bounded by R; repeated cycles of congestion avoidance (additive increase) followed by loss (decrease window by factor of 2) move the operating point toward the equal bandwidth share line]
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections:
nothing prevents app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections;
new app asks for 1 TCP, gets rate R/10
new app asks for 10 TCPs, gets R/2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues: Routers are forced to have large queues to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD)
• If qlen > drop level, drop each new packet with fixed probability p
• Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < minth, do nothing: low queuing, send packets through
If avg > maxth, drop packet: protection from misbehaving sources
Else, mark packet in a manner proportional to queue length: notify sources of incipient congestion
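A minimal RED sketch of the three cases (the EWMA weight, thresholds, and maxP below are illustrative values, not from the slides):

```python
import random

class Red:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight   # EWMA weight for the average queue length
        self.avg = 0.0

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped/marked."""
        # Maintain running (exponentially weighted) average of queue length
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False                     # low queuing: send through
        if self.avg > self.max_th:
            return True                      # protect against misbehaving sources
        # In between: mark with probability rising linearly toward max_p
        frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < self.max_p * frac

red = Red()
print(red.on_arrival(queue_len=2))   # average still tiny -> False
```

Averaging over the queue length (rather than reacting to the instantaneous value) is what lets RED absorb bursts without marking them.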
51
RED Operation
[Figure: drop probability P(drop) vs. average queue length: 0 below minth, rising linearly to maxP at maxth, then jumping to 1.0 above maxth]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees
Example: 1 Mbps IP phone and FTP share a 1.5 Mbps link; bursts of FTP can congest the router, cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
54
Principles for QOS Guarantees (more)
what if applications misbehave (audio sends higher than declared rate)?
policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QOS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QOS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission: flow declares its needs; network may block the call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy: if packet arrives to a full queue, which packet to discard?
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still more
round robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets 600, 400, 300
Queue 2 (25% BW, quantum[2] = 500): packets 400, 400, 300
Queue 3 (25% BW, quantum[3] = 500): packets 600, 300, 400
Modified Deficit Round Robin gives priority to one class, say VoIP
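The classic DRR loop can be sketched on the example's queues (head-of-queue order assumed left-to-right; packet sizes in bytes):

```python
from collections import deque

def drr(queues, quanta):
    """Classic deficit round robin. Returns (queue_id, pkt_size) pairs in
    transmission order. Deficit is reset to 0 when a queue empties."""
    qs = [deque(q) for q in queues]
    deficit = [0] * len(qs)
    sent = []
    while any(qs):
        for i, q in enumerate(qs):
            if not q:
                continue
            deficit[i] += quanta[i]          # add the queue's quantum
            while q and q[0] <= deficit[i]:  # send while credit remains
                pkt = q.popleft()
                deficit[i] -= pkt
                sent.append((i, pkt))
            if not q:
                deficit[i] = 0               # empty queue forfeits its credit
    return sent

order = drr([[600, 400, 300], [400, 400, 300], [600, 300, 400]],
            quanta=[1000, 500, 500])
print(order)
```

In the first round, queue 1's quantum of 1000 covers two packets, queue 2's covers one, and queue 3's 500 cannot cover its 600-byte head packet, so queue 3 banks the credit and transmits in round two, exactly the "saved credit" behavior described above.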
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly used criteria:
(Long term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 6000 pkts per min (ppm) avg; 1500 ppm peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over interval of length t: number of packets admitted ≤ (r·t + b)
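A token-bucket sketch illustrating the (r·t + b) admission bound (discrete one-second steps; the arrival pattern and parameters are illustrative):

```python
def token_bucket(arrivals, r, b):
    """Police a stream: `arrivals` maps integer second -> packets arriving.
    Returns how many packets conform (one token per packet)."""
    tokens = b                       # bucket starts full
    admitted = 0
    horizon = max(arrivals) + 1
    for t in range(horizon):
        for _ in range(arrivals.get(t, 0)):
            if tokens >= 1:
                tokens -= 1          # conforming packet consumes a token
                admitted += 1
            # else: non-conforming (dropped or queued by the policer)
        tokens = min(b, tokens + r)  # replenish r tokens/sec, cap at b
    return admitted

offered = {t: 5 for t in range(10)}  # 5 packets/sec offered for 10 seconds
admitted = token_bucket(offered, r=2, b=10)
print(admitted)                      # bounded by r*t + b = 2*10 + 10 = 30
```

After the initial burst drains the bucket, admissions settle to the token rate r, so the long-run average is policed even though a burst of up to b packets was allowed through at the start.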
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide guaranteed upper bound on delay, i.e., QoS guarantee
[Figure: arriving traffic passes through a token bucket (rate r, bucket size b) into WFQ with per-flow rate R; maximum delay Dmax = b/R]
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
11
Example
Maximal fair flow division will be to give for the sessions 012 a flow rate of 13 and for the session 3 a flow rate of 23
C=1 C=1
Session 1
Session 2 Session 3
Session 0
12
Notation
G=(NA) - Directed network graph (N is set of vertexes and A is set of edges)
Ca ndash the capacity of a link a
Fa ndash the flow on a link a
P ndash a set of the sessions rp ndash the rate of a session p
We assume a fixed single-path routing method
13
Definitions
We have following constraints on the vector r= rp | p Є P
A vector r satisfying these constraints is said to be feasible
alink crossing
p sessions allpa r F
allfor
allfor 0
AaCF
Ppr
aa
p
14
Definitions
A vector of rates r is said to be max-min
fair if is a feasible and for each p Є P rp
can not be increased while maintaining
feasibility without decreasing rprsquo for
some session prsquo for which rprsquo le rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ge rprsquo for all sessions prsquo crossing link a
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1 Bottlenecks for 12345 respectively are caadaNote c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck Theorem A feasible rate vector r is
max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm Bring all the sessions to the state that they
have a bottleneck link and then according to theorem it will be the maximal fair flow
We start with all-zero rate vector and to increase rates on all paths together until Fa = Ca for one or more links a
At this point each session using a saturated link has the same rate as every other session using this link Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step always equally incrementing all sessions not passing through any saturated link until all session pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate VectorsInit k=1 Fa
0=0 rp0=0 P1=P and A1=A
1 For all aA nak= num of sessions pPk
crossing link a2 Δr=minaA
k(Ca-Fak-1)na
k (find inc size)3 For all p Pk rp
k=rpk-1+ Δr (increment)
for other p rpk=rp
k-1
4 Fak=Σp crossing arp
k (Update flow)5 Ak+1= The set of unsaturated links6 Pk+1=all prsquos such that p cross only links in
Ak+1
7 k=k+18 If Pk is empty then stop else goto 1
20
Example of Algorithm Running
Step 1 All sessions get a rate of 13 because of a and the link a is saturated
Step 2 Sessions 1 and 4 get an additional rate increment of 13 for a total of 23 Link c is saturated now
Step 3 Session 4 gets an additional rate increment of 13 for a total of 1 Link d is saturated
End
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1
21
Example revisited
Max-min fair vector if Tij = infin r = (frac12 frac12 frac12 frac12 ) T = 2 gt 152
What if the demands T13
and T31 = frac14
T24 = frac12 T42 = infin r = (frac14 frac12 frac14 frac34)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when packet dropped any ldquoupstream transmission capacity used for that packet was wasted
Host A
Host B
o
u
t
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA DECbit TCPIP ECN ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QoS Guarantees. Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link:
bursts of FTP traffic can congest the router and cause audio loss
we want to give priority to audio over FTP
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more). What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations
marking and policing at the network edge, similar to the ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more). Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
While providing isolation, it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QoS Guarantees (more). Basic fact of life: we cannot support traffic demands beyond link capacity
Call Admission: a flow declares its needs; the network may block the call (e.g., a busy signal) if it cannot meet them
Principle 4
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling and Policing Mechanisms. Scheduling: choose the next packet to send on the link
FIFO (first in first out) scheduling: send in order of arrival to the queue
discard policy: if a packet arrives at a full queue, which packet to discard?
• tail drop: drop the arriving packet
• priority: drop/remove on a priority basis
• random: drop/remove randomly
59
Scheduling Policies: more. Priority scheduling: transmit the highest-priority queued packet
multiple classes with different priorities; a packet's class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more. Round-robin scheduling:
multiple classes
cyclically scan the class queues, serving one packet from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing:
generalized round robin
each class gets a weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR). DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware. This allows DWRR to support the arbitration of output-port bandwidth on high-speed interfaces in both the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR: In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR: A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of each transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
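The service loop described above can be sketched as a single scheduler round (a simplified teaching sketch; a real scheduler runs continuously, and the function name is an assumption):

```python
from collections import deque

def dwrr_round(queues, quantum, deficit):
    """One DWRR round. queues: list of deques of packet sizes (bytes);
    quantum: bytes credited per visit; deficit: per-queue DeficitCounter.
    Returns the (queue index, packet size) pairs transmitted this round."""
    sent = []
    for i, q in enumerate(queues):
        if not q:                       # empty queue: reset its credits
            deficit[i] = 0
            continue
        deficit[i] += quantum[i]        # credit this visit's quantum
        while q and q[0] <= deficit[i]:
            pkt = q.popleft()           # head packet fits: transmit it
            deficit[i] -= pkt
            sent.append((i, pkt))
    return sent
```

With the slide's example quanta (1000, 500, 500) and queues [600, 400, 300], [400, 400, 300], [600, 300, 400], the first round transmits the 600- and 400-byte packets of queue 1 and one 400-byte packet of queue 2; queue 3 sends nothing (600 > 500) but keeps its deficit of 500 for the next round.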
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets of 600, 400, 300 bytes
Queue 2 (25% BW, quantum[2] = 500): packets of 400, 400, 300 bytes
Queue 3 (25% BW, quantum[3] = 500): packets of 600, 300, 400 bytes
Modified Deficit Round Robin gives priority to one class, say VoIP
67
Policing Mechanisms. Goal: limit traffic so that it does not exceed declared parameters
Three commonly used criteria:
(Long-term) Average Rate: how many packets can be sent per unit time (in the long run); the crucial question is the interval length: 100 packets per sec and 6000 packets per min have the same average!
Peak Rate: e.g., 6000 pkts per min (ppm) average; 1500 ppm peak rate
(Max) Burst Size: the maximum number of packets sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to a specified burst size and average rate
bucket can hold b tokens
tokens are generated at rate r tokens/sec unless the bucket is full
over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
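The r/b policer above can be sketched as follows (a minimal sketch with an explicit clock argument so it is deterministic; the class name is an assumption):

```python
class TokenBucket:
    """Token-bucket policer: at most b tokens held, refilled at r tokens/sec;
    one token admits one packet, so any interval t admits at most r*t + b."""
    def __init__(self, r, b, now=0.0):
        self.r, self.b = r, b
        self.tokens = b          # start with a full bucket
        self.last = now

    def admit(self, now):
        # refill at rate r since the last call, capped at bucket size b
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1     # consume a token: packet conforms
            return True
        return False             # empty bucket: packet is non-conforming
```

For example, TokenBucket(r=2, b=3) admits an initial burst of 3 packets at time 0, refuses the 4th, and by t = 1 s has accrued two fresh tokens.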
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee:
arriving traffic is policed by a token bucket (rate r, bucket size b) and served by WFQ at per-flow rate R
maximum delay D_max = b/R
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
12
Notation
G = (N, A) – directed network graph (N is the set of vertices and A is the set of edges)
Ca – the capacity of link a
Fa – the flow on link a
P – the set of sessions; rp – the rate of session p
We assume a fixed, single-path routing method
13
Definitions
We have the following constraints on the vector r = {rp | p ∈ P}:
rp ≥ 0 for all p ∈ P
Fa = Σ rp (sum over all sessions p crossing link a)
Fa ≤ Ca for all a ∈ A
A vector r satisfying these constraints is said to be feasible
14
Definitions
A vector of rates r is said to be max-min fair if it is feasible and, for each p ∈ P, rp cannot be increased while maintaining feasibility without decreasing rp′ for some session p′ with rp′ ≤ rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r, we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ≥ rp′ for all sessions p′ crossing link a
[figure: sessions 1–5 routed over links a, b, c, d, each of capacity 1; the max-min rates shown are 2/3, 1/3, 1/3, 1, 1/3 for sessions 1–5]
All link capacities are 1. The bottlenecks for sessions 1, 2, 3, 4, 5 are c, a, a, d, a respectively. Note that c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck. Theorem: a feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all the sessions to the state where they have a bottleneck link; then, by the theorem, the result is the max-min fair flow
We start with an all-zero rate vector and increase the rates on all paths together until Fa = Ca for one or more links a
At this point, each session using a saturated link has the same rate as every other session using that link. Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors. At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1, Fa^0 = 0, rp^0 = 0, P^1 = P, A^1 = A
1. For all a ∈ A: na^k = number of sessions p ∈ P^k crossing link a
2. Δr = min over a ∈ A^k of (Ca − Fa^(k−1)) / na^k   (find increment size)
3. For all p ∈ P^k: rp^k = rp^(k−1) + Δr (increment); for all other p: rp^k = rp^(k−1)
4. Fa^k = Σ rp^k over all p crossing a   (update flow)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p such that p crosses only links in A^(k+1)
7. k = k + 1
8. If P^k is empty, stop; else go to 1
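The eight steps above can be sketched directly in code (a compact sketch; the dictionary-based topology encoding is an assumption, not part of the slides):

```python
def max_min_fair(links, sessions):
    """Compute max-min fair rates.
    links: {link: capacity}; sessions: {session: set of links it crosses}.
    Returns {session: rate}."""
    rate = {p: 0.0 for p in sessions}
    flow = {a: 0.0 for a in links}
    active = set(sessions)              # sessions not yet behind a saturated link
    unsat = set(links)                  # unsaturated links
    while active:
        # number of active sessions crossing each unsaturated link
        n = {a: sum(1 for p in active if a in sessions[p]) for a in unsat}
        # largest equal increment before some link saturates
        dr = min((links[a] - flow[a]) / n[a] for a in unsat if n[a] > 0)
        for p in active:
            rate[p] += dr
        for a in links:                 # update flows on every link
            flow[a] = sum(rate[p] for p in sessions if a in sessions[p])
        unsat = {a for a in unsat if links[a] - flow[a] > 1e-9}
        active = {p for p in active if sessions[p] <= unsat}
    return rate
```

On the slide's five-session example (sessions 2, 3, 5 crossing link a; session 1 crossing b and c; session 5 also crossing c; session 4 crossing d; all capacities 1) this returns the rates (2/3, 1/3, 1/3, 1, 1/3).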
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3 (because of a), and link a is saturated
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is now saturated
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated
End
[figure: the same five-session network over links a, b, c, d; all link capacities are 1]
21
Example revisited
Max-min fair vector: if Tij = ∞, r = (1/2, 1/2, 1/2, 1/2), T = 2 > 1.52
What if the demands T13 and T31 = 1/4, T24 = 1/2, T42 = ∞? Then r = (1/4, 1/2, 1/4, 3/4)
22
Causescosts of congestion scenario 3
Another "cost" of congestion:
when a packet is dropped, any upstream transmission capacity used for that packet was wasted
[figure: Hosts A and B; delivered throughput λout]
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate at which the sender should send
24
Case study ATM ABR congestion control
ABR: available bit rate, "elastic service"
if sender's path "underloaded": sender should use the available bandwidth
if sender's path congested: sender throttled to minimum guaranteed rate
RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches ("network-assisted")
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin
Roughly: rate ≈ CongWin / RTT bytes/sec
CongWin is a dynamic function of perceived network congestion
How does the sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after a loss event
three mechanisms:
AIMD
slow start
conservative behavior after timeout events
27
TCP AIMD
multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
[figure: congestion window of a long-lived TCP connection, a sawtooth oscillating across 8, 16, and 24 Kbytes over time]
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events: probing
cwnd += SMSS*SMSS/cwnd   (*)
This adjustment is executed on every incoming non-duplicate ACK.
Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
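Equation (*) can be checked numerically: applying the per-ACK update once per in-flight segment grows cwnd by just under one SMSS over a round trip (a small sketch; the helper name is an assumption):

```python
def ca_ack(cwnd, smss):
    """Congestion-avoidance update on one non-duplicate ACK: cwnd += SMSS*SMSS/cwnd."""
    return cwnd + smss * smss / cwnd

# one RTT: roughly one non-duplicate ACK per segment currently in flight
cwnd, smss = 10 * 1460.0, 1460.0
for _ in range(10):
    cwnd = ca_ack(cwnd, smss)
# cwnd has grown by slightly less than one SMSS, since the divisor grows as we go
```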
29
TCP Slow Start
When a connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec, so the initial rate is 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
When the connection begins, increase the rate exponentially fast until the first loss event
30
TCP Slow Start (more): When a connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: the initial rate is slow but ramps up exponentially fast
[figure: Host A sends one segment to Host B, then two, then four, one batch per RTT]
31
Refinement: After 3 dup ACKs:
CongWin is cut in half; Threshold is set to CongWin
window then grows linearly
But after a timeout event:
Threshold is set to CongWin/2 and CongWin is instead set to 1 MSS
window then grows exponentially to the threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicate the network is capable of delivering some segments
• a timeout before 3 dup ACKs is "more alarming"
32
Refinement (more). Q: When should the exponential increase switch to linear?
A: When CongWin reaches 1/2 of its value before the timeout.
Implementation: a variable Threshold
At a loss event, Threshold is set to 1/2 of the CongWin value just before the loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS
34
TCP sender congestion control (Event | State | TCP Sender Action | Commentary):
ACK receipt for previously unacked data | Slow Start (SS) | CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" | results in a doubling of CongWin every RTT
ACK receipt for previously unacked data | Congestion Avoidance (CA) | CongWin = CongWin + MSS*(MSS/CongWin) | additive increase, raising CongWin by 1 MSS every RTT
Loss event detected by triple duplicate ACK | SS or CA | Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" | fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS
Timeout | SS or CA | Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" | enter slow start
Duplicate ACK | SS or CA | increment the duplicate ACK count for the segment being acked | CongWin and Threshold not changed
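The table above can be condensed into a single transition function (a teaching sketch in units of MSS; it omits the per-segment duplicate-ACK bookkeeping and is not a full TCP implementation):

```python
def tcp_event(state, cwnd, threshold, event, mss=1.0):
    """Apply one event from the sender table. state is 'SS' or 'CA';
    event is 'new_ack', 'triple_dup_ack', or 'timeout'."""
    if event == 'new_ack':
        if state == 'SS':
            cwnd += mss                  # doubles CongWin every RTT
            if cwnd > threshold:
                state = 'CA'
        else:
            cwnd += mss * mss / cwnd     # additive increase: ~1 MSS per RTT
    elif event == 'triple_dup_ack':      # fast recovery: multiplicative decrease
        threshold = cwnd / 2
        cwnd = threshold
        state = 'CA'
    elif event == 'timeout':             # "more alarming": back to slow start
        threshold = cwnd / 2
        cwnd = mss
        state = 'SS'
    return state, cwnd, threshold
```

Feeding it a stream of events reproduces the familiar sawtooth: exponential growth up to the threshold, linear growth beyond it, halving on triple dup ACKs, and a reset to 1 MSS on timeout.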
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: throughput = 1.22 · MSS / (RTT · √p)
→ requires p = 2·10^-10
New versions of TCP are needed for high speed
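The slide's numbers can be checked against the formula (a small worked check; the function name is an assumption):

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, p):
    """Macroscopic TCP throughput in bits/sec: 1.22 * MSS / (RTT * sqrt(p))."""
    return 8 * 1.22 * mss_bytes / (rtt_s * sqrt(p))

# 1500-byte segments, 100 ms RTT: a loss rate of 2e-10 sustains roughly 10 Gbps
rate = tcp_throughput_bps(1500, 0.1, 2e-10)
```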
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapezoid area (3/8)W² under the sawtooth to 1/p: W = √(8/(3p)), so throughput = (3/4)·W·MSS/RTT = C·MSS/(RTT·√p), where C = √(3/2) ≈ 1.22
38
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
TCP Fairness
39
Why is TCP fair
Two competing sessions:
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
[figure: phase plot of connection 1 throughput vs. connection 2 throughput; under congestion-avoidance additive increase and loss-triggered halving of the window, the trajectory converges toward the equal-bandwidth-share line]
40
Fairness (more)
Fairness and UDP:
multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
research area: TCP-friendly rate control, DCCP
Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: a link of rate R supports 9 connections:
a new app asking for 1 TCP gets rate R/10
a new app asking for 10 TCPs gets R/2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
bandwidth: which packet to serve (transmit) next
buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet queuing: FIFO + drop-tail
simplest choice; used widely in the Internet
FIFO (first-in-first-out): implies a single class of traffic
Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
Important distinction: FIFO is the scheduling discipline; drop-tail is the drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility for congestion control entirely to the edges (e.g., TCP)
Does not distinguish between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to maintain large queues to sustain high utilization
TCP detects congestion from loss
• forces the network to have long standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
• traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
13
Definitions
We have following constraints on the vector r= rp | p Є P
A vector r satisfying these constraints is said to be feasible
alink crossing
p sessions allpa r F
allfor
allfor 0
AaCF
Ppr
aa
p
14
Definitions
A vector of rates r is said to be max-min
fair if is a feasible and for each p Є P rp
can not be increased while maintaining
feasibility without decreasing rprsquo for
some session prsquo for which rprsquo le rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ge rprsquo for all sessions prsquo crossing link a
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1 Bottlenecks for 12345 respectively are caadaNote c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck Theorem A feasible rate vector r is
max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm Bring all the sessions to the state that they
have a bottleneck link and then according to theorem it will be the maximal fair flow
We start with all-zero rate vector and to increase rates on all paths together until Fa = Ca for one or more links a
At this point each session using a saturated link has the same rate as every other session using this link Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step always equally incrementing all sessions not passing through any saturated link until all session pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate VectorsInit k=1 Fa
0=0 rp0=0 P1=P and A1=A
1 For all aA nak= num of sessions pPk
crossing link a2 Δr=minaA
k(Ca-Fak-1)na
k (find inc size)3 For all p Pk rp
k=rpk-1+ Δr (increment)
for other p rpk=rp
k-1
4 Fak=Σp crossing arp
k (Update flow)5 Ak+1= The set of unsaturated links6 Pk+1=all prsquos such that p cross only links in
Ak+1
7 k=k+18 If Pk is empty then stop else goto 1
20
Example of Algorithm Running
Step 1 All sessions get a rate of 13 because of a and the link a is saturated
Step 2 Sessions 1 and 4 get an additional rate increment of 13 for a total of 23 Link c is saturated now
Step 3 Session 4 gets an additional rate increment of 13 for a total of 1 Link d is saturated
End
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1
21
Example revisited
Max-min fair vector if Tij = infin r = (frac12 frac12 frac12 frac12 ) T = 2 gt 152
What if the demands T13
and T31 = frac14
T24 = frac12 T42 = infin r = (frac14 frac12 frac14 frac34)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when packet dropped any ldquoupstream transmission capacity used for that packet was wasted
Host A
Host B
o
u
t
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA DECbit TCPIP ECN ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins, CongWin = 1 MSS. Example: MSS = 500 bytes & RTT = 200 msec, so initial rate = 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to respectable rate
When connection begins, increase rate exponentially fast until first loss event
30
TCP Slow Start (more)
When connection begins, increase rate exponentially until first loss event: double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one segment, then two, then four to Host B, one round per RTT]
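Incrementing CongWin by one segment per ACK yields the doubling per RTT described above; a minimal sketch (window counted in whole segments, losses ignored):

```python
def slow_start_rtt(cwnd_segments: int) -> int:
    """One RTT of slow start: each of the cwnd segments sent is ACKed,
    and each ACK adds one segment, so the window doubles."""
    cwnd = cwnd_segments
    for _ in range(cwnd_segments):   # one ACK per segment in flight
        cwnd += 1
    return cwnd

cwnd, history = 1, [1]
for _ in range(4):                   # four round-trip times
    cwnd = slow_start_rtt(cwnd)
    history.append(cwnd)
print(history)                       # [1, 2, 4, 8, 16]
```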
31
Refinement
After 3 dup ACKs:
CongWin is cut in half; Threshold is set to CongWin
window then grows linearly
But after timeout event:
Threshold set to CongWin/2 and CongWin instead set to 1 MSS
window then grows exponentially to a threshold, then grows linearly
Philosophy:
- 3 dup ACKs indicates network capable of delivering some segments
- timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout
Implementation: variable Threshold
At loss event, Threshold is set to 1/2 of CongWin just before loss event
33
Summary: TCP Congestion Control
When CongWin is below Threshold, sender in slow-start phase, window grows exponentially
When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly
When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold
When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS
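The four rules above can be collected into one transition function (a simplified model in units of segments, with MSS = 1; not a full TCP implementation):

```python
def tcp_event(state, cwnd, threshold, event, mss=1):
    """One step of the slide's state machine.

    state: 'SS' (slow start) or 'CA' (congestion avoidance)
    event: 'ack' (new data ACKed), 'dup3' (triple duplicate ACK),
           or 'timeout'. Returns the new (state, cwnd, threshold).
    """
    if event == 'ack':
        if state == 'SS':
            cwnd += mss                    # exponential growth
            if cwnd > threshold:
                state = 'CA'
        else:
            cwnd += mss * mss / cwnd       # additive increase
    elif event == 'dup3':                  # fast recovery
        threshold = cwnd / 2
        cwnd = threshold
        state = 'CA'
    elif event == 'timeout':
        threshold = cwnd / 2
        cwnd = 1 * mss
        state = 'SS'
    return state, cwnd, threshold

print(tcp_event('SS', 8, 16, 'dup3'))      # ('CA', 4.0, 4.0)
print(tcp_event('CA', 10, 4, 'timeout'))   # ('SS', 1, 5.0)
```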
34
TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold), set state to "Congestion Avoidance"
Commentary: resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS*(MSS/CongWin)
Commentary: additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
Action: increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
35
TCP Futures
Example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22 * MSS / (RTT * sqrt(p))
=> need loss rate p = 2*10^-10
New versions of TCP for high-speed needed
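Both numbers on this slide follow from the formula; a quick check (units: MSS in bytes, throughput in bytes/sec):

```python
import math

MSS = 1500            # bytes
RTT = 0.1             # seconds
target = 10e9 / 8     # 10 Gbps expressed in bytes/sec

# Window needed to fill the pipe: bandwidth-delay product / MSS.
W = target * RTT / MSS

# Invert Throughput = 1.22 * MSS / (RTT * sqrt(p)) for the loss rate.
p = (1.22 * MSS / (RTT * target)) ** 2

print(f"W = {W:,.0f} segments, p = {p:.2e}")   # W = 83,333 segments, p = 2.14e-10
```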
36
Macroscopic TCP model
Deterministic packet losses: 1/p packets transmitted in a cycle (each cycle of successes ends with one loss)
37
TCP Model Cont'd
Equate the trapezoid area (3/8)W^2 under the sawtooth to 1/p, giving W = sqrt(8/(3p))
Average throughput = (3/4) * W * MSS / RTT = C * MSS / (RTT * sqrt(p)), with C = sqrt(3/2) ≈ 1.22
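The derivation can be checked numerically for an assumed loss probability p:

```python
import math

p = 1e-4                        # assumed loss probability
W = math.sqrt(8 / (3 * p))      # from (3/8) * W^2 = 1/p
assert abs(3 / 8 * W ** 2 - 1 / p) < 1e-6   # one loss per 1/p packets

# Average window over a cycle (W/2 up to W) is (3/4) W, so
# throughput = 0.75 * W * MSS / RTT = C * MSS / (RTT * sqrt(p)).
C = 0.75 * W * math.sqrt(p)
print(round(C, 3), round(math.sqrt(3 / 2), 3))   # 1.225 1.225
```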
38
TCP Fairness
Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K
[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
39
Why is TCP fair?
Two competing sessions: additive increase gives slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally
[Figure: phase plot of connection 1 throughput vs connection 2 throughput, both bounded by R; congestion-avoidance segments (additive increase, slope 1) alternate with losses (window decreased by factor of 2), converging toward the equal-bandwidth-share line]
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: do not want rate throttled by congestion control
Instead use UDP: pump audio/video at constant rate, tolerate packet loss
Research area: TCP friendly, DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections; new app asks for 1 TCP, gets rate R/10; new app asks for 10 TCPs, gets R/2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies single class of traffic
Drop-tail: arriving packets get dropped when queue is full, regardless of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not distinguish between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
TCP detects congestion from loss
- forces network to have long standing queues in steady-state
Lock-out problem: drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily; allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: packet arriving when queue is full causes some random packet to be dropped
Drop front: on full queue, drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD):
- if qlen > drop-level, drop each new packet with fixed probability p
- does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion: assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < min_th, do nothing: low queuing, send packets through
If avg > max_th, drop packet: protection from misbehaving sources
Else drop/mark packet with probability proportional to queue length: notify sources of incipient congestion
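A sketch of the decision rule (simplified: real RED also spaces marks out by counting packets since the last mark, which is omitted here):

```python
def ewma(avg, sample, w=0.002):
    """Running (exponentially weighted) average of the queue length."""
    return (1 - w) * avg + w * sample

def red_decision(avg_qlen, min_th, max_th, max_p, rand):
    """RED action for one arriving packet.

    rand is a uniform [0, 1) draw; returns 'pass', 'mark', or 'drop'.
    """
    if avg_qlen < min_th:
        return 'pass'                      # low queuing: send through
    if avg_qlen >= max_th:
        return 'drop'                      # protect from misbehaving sources
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return 'mark' if rand < p else 'pass'  # probability grows with queue
```

For example, with min_th = 5, max_th = 15, and max_p = 0.1, an average queue of 10 packets is marked with probability 0.05.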
51
RED Operation
[Figure: drop probability P(drop) vs average queue length - 0 below min_th, rising linearly to max_P at max_th, then jumping to 1.0 above max_th]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: 1 Mbps IP phone and FTP share a 1.5 Mbps link; bursts of FTP can congest router, cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
what if applications misbehave (audio sends higher than declared rate)? policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission: flow declares its needs; network may block call (e.g., busy signal) if it cannot meet needs
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling and Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue; real-world example?
discard policy: if packet arrives to full queue, which to discard?
- tail drop: drop arriving packet
- priority: drop/remove on priority basis
- random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest-priority queued packet
multiple classes with different priorities; class may depend on marking or other header info, e.g. IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
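The scheduler described on the last three slides can be sketched as one service round (a minimal model; the packet sizes and quanta below reuse the example on the next slide, with the assignment of size lists to queues assumed):

```python
from collections import deque

def dwrr_round(queues, quanta, deficits, out):
    """One round of classic DWRR over per-queue deques of packet sizes."""
    for i, q in enumerate(queues):
        if not q:
            continue
        deficits[i] += quanta[i]           # add the queue's quantum
        while q and q[0] <= deficits[i]:   # head packet fits the credit?
            pkt = q.popleft()
            deficits[i] -= pkt             # spend credit, transmit packet
            out.append((i, pkt))
        if not q:
            deficits[i] = 0                # an emptied queue forfeits credit

queues = [deque([600, 400, 300]),          # 50% BW, quantum 1000
          deque([400, 400, 300]),          # 25% BW, quantum 500
          deque([600, 300, 400])]          # 25% BW, quantum 500
quanta, deficits, sent = [1000, 500, 500], [0, 0, 0], []
dwrr_round(queues, quanta, deficits, sent)
print(sent)    # [(0, 600), (0, 400), (1, 400)]
```

Queue 3's 600-byte head exceeds its 500-byte credit, so it transmits nothing this round but carries the 500 forward: exactly the "saved credits" behavior described above.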
66
DWRR Example
[Figure: three queues holding packets of sizes 300-600 bytes]
Queue 1: 50% BW, quantum[1] = 1000
Queue 2: 25% BW, quantum[2] = 500
Queue 3: 25% BW, quantum[3] = 500
Modified Deficit Round Robin gives priority to one class, say VoIP
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly-used criteria:
(Long term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 1500 ppm average rate with 6000 pkts per min (ppm) peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over interval of length t: number of packets admitted less than or equal to (r*t + b)
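A slot-by-slot sketch (a simplified packet-counting model: one token admits one packet, the bucket starts full, and the arrival pattern is assumed for illustration):

```python
def token_bucket_admit(arrivals, r, b, dt=1.0):
    """Admit packets through a token bucket (rate r tokens/sec, depth b).

    arrivals[i] = packets arriving in time slot i of length dt seconds.
    Over any interval of length t, total admitted <= r*t + b.
    """
    tokens = b                             # bucket starts full
    admitted = []
    for n in arrivals:
        tokens = min(b, tokens + r * dt)   # refill, capped at b
        ok = min(n, int(tokens))
        tokens -= ok
        admitted.append(ok)
    return admitted

out = token_bucket_admit([25, 25, 0, 5], r=10, b=20)
print(out, sum(out))    # [20, 10, 0, 5] 35
```

The 35 admitted packets over t = 4 slots respect the bound r*t + b = 60; combined with WFQ guaranteeing a per-flow rate R, this shaping is what yields the delay bound b/R on the next slide.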
69
Policing Mechanisms (more)
token bucket and WFQ combine to provide guaranteed upper bound on delay, i.e., QoS guarantee
arriving traffic -> token bucket (rate r, bucket size b) -> WFQ (per-flow rate R)
maximum delay D_max = b/R
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
14
Definitions
A vector of rates r is said to be max-min fair if it is feasible and, for each p ∈ P, rp cannot be increased while maintaining feasibility without decreasing rp' for some session p' for which rp' ≤ rp
We want to find a rate vector that is max-min fair
15
Bottleneck Link for a Session
Given some feasible flow r, we say that a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ≥ rp' for all sessions p' crossing link a
[Figure: network with links a, b, c, d and sessions 1-5; all link capacities are 1]
Bottlenecks for sessions 1, 2, 3, 4, 5 respectively are c, a, a, d, a. Note that c is not a bottleneck for 5 and b is not a bottleneck for 1
16
Max-Min Fairness Definition Using Bottleneck Theorem A feasible rate vector r is
max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all the sessions to the state where they have a bottleneck link; then, according to the theorem, the flow is max-min fair
We start with an all-zero rate vector and increase rates on all paths together until Fa = Ca for one or more links a
At this point, each session using a saturated link has the same rate as every other session using this link. Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step always equally incrementing all sessions not passing through any saturated link until all session pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1, Fa^0 = 0, rp^0 = 0, P^1 = P, A^1 = A
1. For all a ∈ A: na^k = number of sessions p ∈ P^k crossing link a
2. Δr = min over a ∈ A^k of (Ca - Fa^(k-1)) / na^k   (find increment size)
3. For all p ∈ P^k: rp^k = rp^(k-1) + Δr (increment); for other p: rp^k = rp^(k-1)
4. Fa^k = Σ over p crossing a of rp^k   (update flow)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p such that p crosses only links in A^(k+1)
7. k = k + 1
8. If P^k is empty then stop, else goto 1
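The algorithm above (progressive filling) can be sketched in a few lines. The link-session incidence used in the example is an assumption: the figure itself is not recoverable, so the sets below are chosen to be consistent with the stated bottlenecks c, a, a, d, a:

```python
def max_min_rates(sessions, capacity):
    """Progressive filling, as in the algorithm above.

    sessions: session -> set of links it crosses
    capacity: link -> capacity; returns session -> max-min fair rate.
    """
    rates = {p: 0.0 for p in sessions}
    flow = {a: 0.0 for a in capacity}
    active = set(sessions)
    while active:
        # Largest equal increment before some link saturates (step 2).
        dr = min((capacity[a] - flow[a]) /
                 sum(1 for p in active if a in sessions[p])
                 for a in capacity
                 if any(a in sessions[p] for p in active))
        for p in active:                     # step 3: increment rates
            rates[p] += dr
            for a in sessions[p]:            # step 4: update link flows
                flow[a] += dr
        saturated = {a for a in capacity if capacity[a] - flow[a] < 1e-9}
        active = {p for p in active          # steps 5-6: drop sessions
                  if not (sessions[p] & saturated)}
    return rates

# 5-session example, all capacities 1 (incidence assumed as noted above).
sessions = {1: {'b', 'c'}, 2: {'a'}, 3: {'a'}, 4: {'d'}, 5: {'a', 'c'}}
caps = {'a': 1.0, 'b': 1.0, 'c': 1.0, 'd': 1.0}
print({p: round(v, 3) for p, v in max_min_rates(sessions, caps).items()})
# {1: 0.667, 2: 0.333, 3: 0.333, 4: 1.0, 5: 0.333}
```

The run reproduces the three steps of the next slide: link a saturates first at rate 1/3, then c at 2/3, then d at 1.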
20
Example of Algorithm Running
Step 1: all sessions get a rate of 1/3 (because of link a), and link a is saturated
Step 2: sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is saturated now
Step 3: session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated
End
[Figure: the same network - links a, b, c, d, sessions 1-5, all link capacities 1]
21
Example revisited
Max-min fair vector if Tij = ∞: r = (1/2, 1/2, 1/2, 1/2), T = 2 > 1.52
What if the demands T13 and T31 = 1/4, T24 = 1/2, T42 = ∞? Then r = (1/4, 1/2, 1/4, 3/4)
22
Causes/costs of congestion: scenario 3
Another "cost" of congestion: when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted
[Figure: Host A and Host B traffic across a multihop path]
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate at which sender should send
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
15
Bottleneck Link for a Session
Given a feasible flow r, we say that link a is a bottleneck link with respect to r for a session p crossing a if Fa = Ca and rp ≥ rp′ for all sessions p′ crossing link a.
(Figure: five sessions, 1 through 5, routed over links a, b, c, d.)
All link capacities are 1. The bottleneck links for sessions 1, 2, 3, 4, 5 are c, a, a, d, a, respectively. Note that c is not a bottleneck for session 5 and b is not a bottleneck for session 1.
16
Max-Min Fairness Definition Using Bottleneck
Theorem: A feasible rate vector r is max-min fair if and only if each session has a bottleneck link with respect to r.
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring all the sessions to a state in which each has a bottleneck link; by the theorem, the resulting flow is then max-min fair.
We start with an all-zero rate vector and increase the rates on all paths together until Fa = Ca for one or more links a.
At this point, each session using a saturated link has the same rate as every other session using that link, so these saturated links serve as bottleneck links for all sessions that use them.
18
Algorithm for Computing Max-Min Fair Rate Vectors
At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated.
Note that sessions using the previously saturated links might also be using these newly saturated links (at a lower rate).
The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until every session passes through at least one saturated link.
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1, Fa^0 = 0, rp^0 = 0, P^1 = P, A^1 = A
1. For all a in A: na^k = the number of sessions p in P^k crossing link a
2. Δr = min over a in A^k of (Ca − Fa^(k−1)) / na^k   (find the increment size)
3. For all p in P^k: rp^k = rp^(k−1) + Δr   (increment); for all other p: rp^k = rp^(k−1)
4. Fa^k = Σ over p crossing a of rp^k   (update the flows)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p such that p crosses only links in A^(k+1)
7. k = k + 1
8. If P^k is empty then stop, else go to 1
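The eight steps above translate almost line-for-line into a water-filling routine. This sketch uses my own names; the topology in the usage below is a plausible reconstruction of the five-session example (paths chosen to be consistent with its stated bottlenecks), not something the slides spell out:

```python
def max_min_fair(cap, paths):
    """Water-filling: equally raise the rates of all sessions not yet
    crossing a saturated link; stop when every session is blocked."""
    rates = [0.0] * len(paths)
    saturated = set()
    active = set(range(len(paths)))
    while active:
        # n[a]: number of still-active sessions crossing each link a
        n = {a: sum(1 for p in active if a in paths[p]) for a in cap}
        flow = {a: sum(rates[p] for p in range(len(paths)) if a in paths[p])
                for a in cap}
        # step 2: the largest equal increment before some link saturates
        dr = min((cap[a] - flow[a]) / n[a] for a in cap if n[a] > 0)
        for p in active:                     # step 3: increment
            rates[p] += dr
        flow = {a: sum(rates[p] for p in range(len(paths)) if a in paths[p])
                for a in cap}                # step 4: update flows
        saturated |= {a for a in cap if cap[a] - flow[a] < 1e-9}   # step 5
        active = {p for p in active if not (paths[p] & saturated)} # step 6
    return rates
```

On session paths {b,c}, {a}, {a}, {d}, {a,c} with all capacities 1, this yields r = (2/3, 1/3, 1/3, 1, 1/3), matching the example run on the next slide.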
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3 (because of link a), and link a is saturated.
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is now saturated.
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated.
End
(Figure: the same five-session network over links a, b, c, d; all link capacities are 1.)
21
Example revisited
Max-min fair vector if Tij = ∞: r = (½, ½, ½, ½), so T = 2 > 1.52, the throughput of the earlier example.
What if the demands T13 and T31 = ¼, T24 = ½, T42 = ∞? Then r = (¼, ½, ¼, ¾).
22
Causescosts of congestion scenario 3
Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
no explicit feedback from the network
congestion inferred from end-system observed loss and delay
the approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
an explicit rate at which the sender should send
24
Case study ATM ABR congestion control
ABR (available bit rate): an "elastic service"
if the sender's path is "underloaded": the sender should use the available bandwidth
if the sender's path is congested: the sender is throttled to its minimum guaranteed rate
RM (resource management) cells:
sent by the sender, interspersed with data cells
bits in the RM cell are set by switches ("network-assisted")
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells are returned to the sender by the receiver, with the bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
the sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin
Roughly: rate = CongWin / RTT bytes/sec
CongWin is a dynamic function of perceived network congestion
How does the sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
the TCP sender reduces its rate (CongWin) after a loss event
three mechanisms: AIMD, slow start, conservative behavior after timeout events
27
TCP AIMD
(Figure: congestion window sawtooth over time, oscillating between roughly 8, 16, and 24 Kbytes.)
multiplicative decrease: cut CongWin in half after a loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events: probing
cwnd += SMSS*SMSS/cwnd   (*)
This adjustment is executed on every incoming non-duplicate ACK.
Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
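A quick sanity check of the approximation (plain Python, SMSS value assumed): applying the per-ACK update over one RTT's worth of ACKs grows cwnd by roughly one segment:

```python
SMSS = 1460.0                     # assumed sender MSS, bytes
cwnd = 10 * SMSS                  # 10 segments currently in flight
acks_per_rtt = int(cwnd // SMSS)  # roughly one ACK per delivered segment
start = cwnd
for _ in range(acks_per_rtt):
    cwnd += SMSS * SMSS / cwnd    # equation (*) on every non-dup ACK
growth = cwnd - start             # close to 1 SMSS, slightly under,
                                  # since cwnd grows during the round
```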
29
TCP Slow Start
When a connection begins, CongWin = 1 MSS.
Example: MSS = 500 bytes & RTT = 200 msec gives an initial rate of 20 kbps.
The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.
When the connection begins, increase the rate exponentially fast until the first loss event.
30
TCP Slow Start (more)
When the connection begins, increase the rate exponentially until the first loss event:
double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: the initial rate is slow but ramps up exponentially fast.
(Figure: Host A sends one segment, then two, then four to Host B, one round per RTT.)
31
Refinement
After 3 dup ACKs:
CongWin is cut in half
the window then grows linearly
But after a timeout event:
Threshold is set to CongWin/2 and CongWin is instead set to 1 MSS
the window then grows exponentially up to the threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicate the network is capable of delivering some segments
• a timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin reaches 1/2 of its value before the timeout.
Implementation:
a variable Threshold
at a loss event, Threshold is set to 1/2 of the CongWin value just before the loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
34
TCP sender congestion control
Event | State | TCP Sender Action | Commentary
ACK receipt for previously unacked data | Slow Start (SS) | CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance" | Results in a doubling of CongWin every RTT
ACK receipt for previously unacked data | Congestion Avoidance (CA) | CongWin = CongWin + MSS*(MSS/CongWin) | Additive increase, resulting in an increase of CongWin by 1 MSS every RTT
Loss event detected by triple duplicate ACK | SS or CA | Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance" | Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS
Timeout | SS or CA | Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start" | Enter slow start
Duplicate ACK | SS or CA | Increment duplicate ACK count for the segment being ACKed | CongWin and Threshold not changed
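The whole table can be folded into a small event-driven sketch (class and method names are mine; the transition rules are the table's):

```python
class TcpSender:
    """Event handlers mirroring the congestion-control table above."""

    def __init__(self, mss=1460.0):
        self.mss = mss
        self.cwnd = mss              # CongWin
        self.ssthresh = 64_000.0     # Threshold (assumed initial value)
        self.state = "SS"            # slow start
        self.dup_acks = 0

    def on_new_ack(self):            # ACK for previously unacked data
        self.dup_acks = 0
        if self.state == "SS":
            self.cwnd += self.mss    # doubles CongWin every RTT
            if self.cwnd > self.ssthresh:
                self.state = "CA"
        else:                        # congestion avoidance
            self.cwnd += self.mss * (self.mss / self.cwnd)

    def on_dup_ack(self):            # duplicate ACK
        self.dup_acks += 1
        if self.dup_acks == 3:       # triple dup ACK: fast recovery
            self.ssthresh = self.cwnd / 2
            self.cwnd = max(self.ssthresh, self.mss)
            self.state = "CA"

    def on_timeout(self):            # timeout: back to slow start
        self.ssthresh = self.cwnd / 2
        self.cwnd = self.mss
        self.state = "SS"
```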
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT; we want 10 Gbps throughput.
This requires a window of W = 83,333 in-flight segments.
Throughput in terms of the loss rate: Throughput = 1.22 · MSS / (RTT · √p)
Achieving 10 Gbps requires p = 2·10^−10, an extremely small loss rate.
New versions of TCP are needed for high-speed operation.
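Plugging the slide's numbers into the formula confirms the claim (plain Python):

```python
from math import sqrt

MSS = 1500 * 8      # segment size in bits
RTT = 0.1           # round-trip time in seconds
p = 2e-10           # loss rate required by the formula
throughput = 1.22 * MSS / (RTT * sqrt(p))   # bits/sec
# throughput comes out near 1.0e10 bits/sec, i.e. about 10 Gbps
```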
36
Macroscopic TCP model
Deterministic packet losses: 1/p packets are transmitted in each cycle, which ends with a single loss.
(Figure: repeating loss/success cycles of the congestion window.)
37
TCP Model Contd
Equate the trapezoid area (3/8)·W² under the sawtooth to 1/p: (3/8)·W² = 1/p, so W = √(8/(3p)). With an average window of (3/4)·W per RTT, Throughput = (3/4)·W·MSS/RTT = √(3/2) · MSS / (RTT·√p) ≈ 1.22 · MSS / (RTT·√p), i.e., the constant C = √(3/2) ≈ 1.22.
38
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
TCP Fairness
(Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R.)
39
Why is TCP fair?
Two competing sessions:
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
(Figure: throughput of connection 1 vs. connection 2, both axes from 0 to R. Additive increase moves the operating point along a 45° line; each loss halves both rates, so the point converges toward the equal-bandwidth-share line.)
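The convergence argument can be simulated in a few lines (R and the starting rates are arbitrary; loss is assumed synchronized, with one additive-increase step per RTT):

```python
R = 100.0                 # bottleneck capacity (arbitrary units)
x, y = 80.0, 10.0         # two flows start far from the fair share
for _ in range(200):      # 200 RTTs
    x += 1.0              # additive increase: slope-1 motion
    y += 1.0
    if x + y > R:         # shared loss event at the capacity line
        x /= 2.0          # multiplicative decrease halves both rates,
        y /= 2.0          # which also halves the gap x - y
# after many cycles the two rates are nearly equal
```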
40
Fairness (more)
Fairness and UDP:
multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
research areas: TCP-friendly rate control, DCCP
Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: a link of rate R supports 9 connections:
a new app asking for 1 TCP connection gets rate R/10
a new app asking for 10 TCP connections gets about R/2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
bandwidth: which packet to serve (transmit) next
buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
the simplest choice, used widely in the Internet
FIFO (first-in-first-out): implies a single class of traffic
drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
Important distinction:
FIFO: the scheduling discipline
drop-tail: the drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility for congestion control completely to the edges (e.g., TCP)
Does not differentiate between flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to keep large queues to maintain high utilization.
TCP detects congestion from loss, which forces the network to carry long standing queues in steady state.
Lock-out problem: drop-tail routers treat bursty traffic poorly.
Traffic gets synchronized easily, allowing a few flows to monopolize the queue space.
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior:
routers see actual queue occupancy (distinguishing queuing delay from propagation delay)
routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving at a full queue causes some random packet to be dropped.
Drop front: on a full queue, drop the packet at the head of the queue.
Random drop and drop front solve the lock-out problem but not the full-queues problem.
48
Full Queues Problem
Drop packets before the queue becomes full (early drop).
Intuition: notify senders of incipient congestion.
Example: early random drop (ERD):
• if qlen > drop_level, drop each new packet with fixed probability p
• does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain a running average of the queue length.
If avg < minth, do nothing: low queuing, send packets through.
If avg > maxth, drop the packet: protection from misbehaving sources.
Else mark/drop the packet with a probability proportional to the queue length: notify sources of incipient congestion.
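The three cases above, written as a drop/mark probability function (the linear ramp is the standard RED choice; minth, maxth, and max_p are configuration parameters):

```python
def red_drop_prob(avg, minth, maxth, max_p):
    """RED drop/mark probability as a function of average queue length."""
    if avg < minth:
        return 0.0                                  # low queuing: admit
    if avg >= maxth:
        return 1.0                                  # protect: always drop
    # proportional region between the two thresholds
    return max_p * (avg - minth) / (maxth - minth)
```

For example, with minth = 5, maxth = 15, max_p = 0.1, an average queue of 10 is dropped with probability 0.05.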
51
RED Operation
(Figure: drop probability P(drop) versus average queue length: 0 below minth, rising linearly to maxP at maxth, then jumping to 1.0 beyond maxth.)
52
Improving QOS in IP Networks
Thus far: "making the best of best effort".
Future: a next-generation Internet with QoS guarantees:
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
a simple model for sharing and congestion studies
53
Principles for QOS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link; bursts of FTP traffic can congest the router and cause audio loss, so we want to give priority to audio over FTP.
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy is needed to treat the packets accordingly.
54
Principles for QOS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)? Policing: force the source to adhere to its bandwidth allocation.
Marking and policing are done at the network edge, similar to the ATM UNI (User Network Interface).
Principle 2: provide protection (isolation) for one class from others.
55
Principles for QOS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow makes inefficient use of bandwidth if the flow doesn't use its allocation.
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible.
56
Principles for QOS Guarantees (more)
Basic fact of life: we cannot support traffic demands beyond link capacity.
Principle 4: call admission: a flow declares its needs, and the network may block the call (e.g., with a busy signal) if it cannot meet them.
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms
scheduling: choose the next packet to send on the link
FIFO (first in first out) scheduling: send in order of arrival to the queue; real-world example?
discard policy: if a packet arrives to a full queue, which packet to discard?
• tail drop: drop the arriving packet
• priority: drop/remove on a priority basis
• random: drop/remove randomly
59
Scheduling Policies more
Priority scheduling: transmit the highest-priority queued packet
multiple classes with different priorities; a packet's class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies still more
round robin scheduling:
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round, because the packet at its head was larger than the value of DeficitCounter, to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to DeficitCounter, then DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
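Putting the DWRR description together, one service round can be sketched as follows (function name is mine; the usage below reuses the packet sizes from the earlier DWRR example, with the row-to-queue mapping being my assumption):

```python
from collections import deque

def dwrr_round(queues, quantum, deficit):
    """One DWRR round over all queues; returns (queue, pkt_size) sends."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficit[i] = 0            # empty queue: credit is reset
            continue
        deficit[i] += quantum[i]      # grant this queue its quantum
        # serve while the head packet fits in the accumulated deficit
        while q and q[0] <= deficit[i]:
            pkt = q.popleft()
            deficit[i] -= pkt
            sent.append((i, pkt))
    return sent
```

With quanta 1000/500/500, repeated rounds drain the three queues in roughly a 50/25/25 bandwidth ratio; in the first round, queue 3's 600-byte head packet exceeds its 500-byte deficit, so that queue banks its credit for the next round.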
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
16
Max-Min Fairness Definition Using Bottleneck Theorem A feasible rate vector r is
max-min fair if and only if each session has a bottleneck link with respect to r
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm Bring all the sessions to the state that they
have a bottleneck link and then according to theorem it will be the maximal fair flow
We start with all-zero rate vector and to increase rates on all paths together until Fa = Ca for one or more links a
At this point each session using a saturated link has the same rate as every other session using this link Thus these saturated links serve as bottleneck links for all sessions using them
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step always equally incrementing all sessions not passing through any saturated link until all session pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate VectorsInit k=1 Fa
0=0 rp0=0 P1=P and A1=A
1 For all aA nak= num of sessions pPk
crossing link a2 Δr=minaA
k(Ca-Fak-1)na
k (find inc size)3 For all p Pk rp
k=rpk-1+ Δr (increment)
for other p rpk=rp
k-1
4 Fak=Σp crossing arp
k (Update flow)5 Ak+1= The set of unsaturated links6 Pk+1=all prsquos such that p cross only links in
Ak+1
7 k=k+18 If Pk is empty then stop else goto 1
20
Example of Algorithm Running
Step 1 All sessions get a rate of 13 because of a and the link a is saturated
Step 2 Sessions 1 and 4 get an additional rate increment of 13 for a total of 23 Link c is saturated now
Step 3 Session 4 gets an additional rate increment of 13 for a total of 1 Link d is saturated
End
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1
21
Example revisited
Max-min fair vector if Tij = infin r = (frac12 frac12 frac12 frac12 ) T = 2 gt 152
What if the demands T13
and T31 = frac14
T24 = frac12 T42 = infin r = (frac14 frac12 frac14 frac34)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when packet dropped any ldquoupstream transmission capacity used for that packet was wasted
Host A
Host B
o
u
t
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA DECbit TCPIP ECN ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation.
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible.
56
Principles for QoS Guarantees (more)
Basic fact of life: traffic demands beyond link capacity cannot be supported.
Principle 4: call admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them.
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling And Policing Mechanisms
Scheduling: choose the next packet to send on the link.
FIFO (first in first out) scheduling: send in order of arrival to the queue.
Discard policy: if a packet arrives to a full queue, which one to discard?
• Tail drop: drop the arriving packet
• Priority: drop/remove on a priority basis
• Random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit the highest-priority queued packet.
Multiple classes with different priorities; class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
Round-robin scheduling: multiple classes; cyclically scan the class queues, serving one packet from each class (if available).
61
Scheduling Policies: still more
Weighted Fair Queuing (WFQ): generalized round robin; each class gets a weighted amount of service in each cycle.
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline with lower computational complexity that can be implemented in hardware. This allows DWRR to support the arbitration of output-port bandwidth on high-speed interfaces, both in the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
• A weight that defines the percentage of the output-port bandwidth allocated to the queue.
• A DeficitCounter that specifies the total number of bytes the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The DeficitCounter is incremented by the quantum. If the size of the packet at the head of the queue is greater than the DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the DeficitCounter, the DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
• A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the DeficitCounter by the size of each transmitted packet until either the packet at the head of the queue is larger than the DeficitCounter or the queue is empty.
If the queue is empty, the DeficitCounter is reset to zero.
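The service loop described above can be sketched in a few lines. This is an illustrative model of one DWRR round, not a hardware implementation; the `send` callback is a stand-in for actual transmission.

```python
from collections import deque

def dwrr_round(queues, quantum, deficit, send):
    """One DWRR service round over `queues` (deques of packet sizes, bytes).

    quantum[i] is the per-round byte credit for queue i (proportional to
    its weight); deficit[i] carries unused credit between rounds.
    send(i, size) is called for each transmitted packet.
    """
    for i, q in enumerate(queues):
        if not q:
            continue
        deficit[i] += quantum[i]          # grant this round's credit
        # Transmit while the head packet fits in the accumulated credit.
        while q and q[0] <= deficit[i]:
            pkt = q.popleft()
            deficit[i] -= pkt
            send(i, pkt)
        if not q:
            deficit[i] = 0                # an emptied queue forfeits credit
```

Run against the example on the next slide (quanta 1000/500/500): in round 1, queue 3's 600-byte head exceeds its 500-byte quantum, so it sends nothing but keeps the 500 bytes of credit and transmits in round 2.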
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets of 600, 400, 300 bytes
Queue 2 (25% BW, quantum[2] = 500): packets of 400, 400, 300 bytes
Queue 3 (25% BW, quantum[3] = 500): packets of 600, 300, 400 bytes
Modified Deficit Round Robin (MDRR) gives priority to one class, say VoIP.
67
Policing Mechanisms
Goal: limit traffic so it does not exceed declared parameters. Three commonly used criteria:
(Long-term) average rate: how many packets can be sent per unit time (in the long run). The crucial question is the interval length: 100 packets per second and 6000 packets per minute have the same average rate.
Peak rate: e.g., 6000 packets per minute (ppm) average; 1500 ppm peak rate.
(Max) burst size: max number of packets sent consecutively (with no intervening idle).
68
Policing Mechanisms
Token bucket: limit input to a specified burst size and average rate.
The bucket can hold b tokens; tokens are generated at rate r tokens/sec unless the bucket is full.
Over an interval of length t, the number of packets admitted is ≤ (r·t + b).
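A token-bucket policer can be sketched directly from that description. This is a minimal single-threaded model (one token per packet); a real policer would typically count bytes and take timestamps from a clock.

```python
class TokenBucket:
    """Token-bucket policer: a bucket of b tokens, refilled at r tokens/sec.

    A packet is admitted only if a token is available, so over any interval
    of length t at most (r*t + b) packets get through.
    """
    def __init__(self, r: float, b: float):
        self.r, self.b = r, b
        self.tokens = b          # bucket starts full: burst of b allowed
        self.last = 0.0

    def admit(self, now: float) -> bool:
        # Accrue tokens for the elapsed time, capped at the bucket size b.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With r = 1 token/sec and b = 3, a burst of 4 back-to-back packets gets only its first 3 through; the bucket then refills at the average rate.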
69
Policing Mechanisms (more)
Token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee:
arriving traffic → token bucket (rate r, bucket size b) → WFQ (guaranteed per-flow rate R)
D_max = b/R
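A quick sanity check of the bound with made-up numbers (the 8 kbit bucket and 1 Mbps guaranteed rate below are illustrative, not from the slides):

```python
# Worst case: a full bucket of b bits drains through WFQ at the flow's
# guaranteed rate R, so the last bit waits at most b/R seconds.
def max_delay(b_bits: float, R_bps: float) -> float:
    return b_bits / R_bps

d = max_delay(8_000, 1_000_000)   # 8 kbit bucket, 1 Mbps rate -> 0.008 s
```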
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion: scenario 1
- Causes/costs of congestion: scenario 2
- Slide 5
- Causes/costs of congestion: scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
17
Algorithm for Computing Max-Min Fair Rate Vectors
The idea of the algorithm: bring every session to a state in which it has a bottleneck link; then, by the bottleneck theorem, the resulting flow is max-min fair.
We start with an all-zero rate vector and increase the rates on all paths together until F_a = C_a for one or more links a.
At this point, each session using a saturated link has the same rate as every other session using that link. Thus these saturated links serve as bottleneck links for all sessions using them.
18
Algorithm for Computing Max-Min Fair Rate Vectors
At the next step, all sessions not using the saturated links are incremented equally in rate until one or more new links become saturated.
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate).
The algorithm continues from step to step, always equally incrementing all sessions not passing through any saturated link, until all sessions pass through at least one such link.
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1; F_a^0 = 0; r_p^0 = 0; P^1 = P; A^1 = A
1. For all a ∈ A^k: n_a^k = number of sessions p ∈ P^k crossing link a
2. Δr = min over a ∈ A^k of (C_a − F_a^(k−1)) / n_a^k   (find increment size)
3. For all p ∈ P^k: r_p^k = r_p^(k−1) + Δr   (increment); for all other p: r_p^k = r_p^(k−1)
4. F_a^k = Σ_{p crossing a} r_p^k   (update flow)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p that cross only links in A^(k+1)
7. k = k + 1
8. If P^k is empty, stop; else go to step 1
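The progressive-filling steps above translate almost directly into code. This is a sketch; exact fractions are used so that saturation tests (F_a = C_a) are exact rather than floating-point comparisons.

```python
from fractions import Fraction

def max_min_rates(sessions, capacity):
    """Progressive filling: sessions maps id -> set of links crossed,
    capacity maps link -> capacity. Returns id -> max-min fair rate."""
    rate = {p: Fraction(0) for p in sessions}
    flow = {a: Fraction(0) for a in capacity}
    active = set(sessions)          # sessions not yet behind a bottleneck
    while active:
        # n_a: active sessions crossing each link (0 on saturated links,
        # since their sessions have already left `active`).
        n = {a: sum(1 for p in active if a in sessions[p]) for a in capacity}
        # Largest equal increment before some link saturates.
        dr = min((capacity[a] - flow[a]) / n[a] for a in capacity if n[a] > 0)
        for p in active:
            rate[p] += dr
            for a in sessions[p]:
                flow[a] += dr
        saturated = {a for a in capacity if flow[a] == capacity[a]}
        active = {p for p in active if not (sessions[p] & saturated)}
    return rate
```

Run on a hypothetical topology reconstructed to match the example on the next slide (sessions 2, 3, 5 crossing link a; sessions 1, 2 on link c; session 4 alone on link d; all capacities 1), it reproduces the stated outcome: rates 1/3 for sessions 2, 3, 5, then 2/3 for session 1, then 1 for session 4.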
20
Example of Algorithm Running
Step 1: All sessions get a rate of 1/3 (because of link a), and link a is saturated.
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is now saturated.
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated.
End
[figure: sessions 1–5 routed over links a, b, c, d; all link capacities are 1. Final rates: sessions 2, 3, 5 get 1/3; session 1 gets 2/3; session 4 gets 1.]
21
Example revisited
Max-min fair vector: if all T_ij = ∞, then r = (½, ½, ½, ½) and total T = 2 > 1.52.
What if the demands are T13 = T31 = ¼, T24 = ½, T42 = ∞? Then r = (¼, ½, ¼, ¾).
22
Causes/costs of congestion: scenario 3
Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted.
[figure: Hosts A and B sending through a multihop network; y-axis: λ_out]
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
  no explicit feedback from the network
  congestion inferred from end-system observed loss and delay
  approach taken by TCP
Network-assisted congestion control:
  routers provide feedback to end systems
  a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM), or
  an explicit rate the sender should send at
24
Case study: ATM ABR congestion control
ABR (available bit rate): "elastic service"
  if the sender's path is "underloaded": sender should use the available bandwidth
  if the sender's path is congested: sender is throttled to a minimum guaranteed rate
RM (resource management) cells:
  sent by the sender, interspersed with data cells
  bits in the RM cell are set by switches ("network-assisted"):
    NI bit: no increase in rate (mild congestion)
    CI bit: congestion indication
  RM cells are returned to the sender by the receiver, with the bits intact
25
Case study: ATM ABR congestion control
Two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell; the sender's send rate is thus the minimum supportable rate on the path.
EFCI bit in data cells: set to 1 by a congested switch. If the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.
26
TCP Congestion Control
End-end control (no network assistance).
Sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin
Roughly: rate = CongWin / RTT  bytes/sec
CongWin is a dynamic function of perceived network congestion.
How does the sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  the TCP sender reduces its rate (CongWin) after a loss event
Three mechanisms:
  AIMD
  slow start
  conservative behavior after timeout events
27
TCP AIMD
[figure: congestion-window sawtooth over time, oscillating through 8, 16, and 24 Kbytes]
Multiplicative decrease: cut CongWin in half after a loss event.
Additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing.
Long-lived TCP connection.
28
Additive Increase
Increase CongWin by 1 MSS every RTT in the absence of loss events: probing.
  cwnd += SMSS*SMSS/cwnd   (*)
This adjustment is executed on every incoming non-duplicate ACK.
Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
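To see why the per-ACK rule approximates +1 MSS per RTT: with cwnd = N segments, about N ACKs arrive per RTT, each adding SMSS/N-ish bytes. A small sketch (SMSS value is an example):

```python
SMSS = 1460  # sender maximum segment size in bytes (example value)

def on_ack(cwnd: float) -> float:
    """Congestion-avoidance growth, applied per non-duplicate ACK."""
    return cwnd + SMSS * SMSS / cwnd

# One RTT at cwnd = 10 segments delivers ~10 ACKs; applying the update
# for each grows cwnd by just under one full-sized segment (slightly
# less, because cwnd grows during the round).
cwnd = 10 * SMSS
for _ in range(10):
    cwnd = on_ack(cwnd)
```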
29
TCP Slow Start
When the connection begins, CongWin = 1 MSS.
  Example: MSS = 500 bytes & RTT = 200 msec → initial rate = 20 kbps
The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.
When the connection begins, increase the rate exponentially fast until the first loss event.
30
TCP Slow Start (more)
When the connection begins, increase the rate exponentially until the first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received
Summary: the initial rate is slow but ramps up exponentially fast.
[figure: time diagram between Host A and Host B; one segment in the first RTT, then two segments, then four]
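The doubling and the slide's 20 kbps starting point can be checked numerically (a loss-free sketch over the first four RTTs):

```python
MSS = 500          # bytes, matching the slide's example
RTT = 0.2          # seconds

cwnd = MSS         # slow start begins at 1 MSS
rates = []
for _ in range(4):                 # no losses yet: double every RTT
    rates.append(cwnd * 8 / RTT)   # send rate in bits/sec
    cwnd *= 2                      # one extra MSS per ACK -> doubling per RTT

# rates: 20 kbps, 40 kbps, 80 kbps, 160 kbps
```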
31
Refinement
After 3 dup ACKs:
  CongWin is cut in half; Threshold is set to CongWin
  the window then grows linearly
But after a timeout event:
  Threshold is set to CongWin/2, and
  CongWin is instead set to 1 MSS
  the window then grows exponentially to the threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicate the network is capable of delivering some segments
• a timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.
Implementation: a variable Threshold.
At a loss event, Threshold is set to 1/2 of the CongWin just before the loss event.
33
Summary TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase; the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase; the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
34
TCP sender congestion control

Event: ACK receipt for previously unacked data
  State: Slow Start (SS)
  Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
  Commentary: results in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
  State: Congestion Avoidance (CA)
  Action: CongWin = CongWin + MSS*(MSS/CongWin)
  Commentary: additive increase, resulting in an increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
  Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
  State: SS or CA
  Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
  Commentary: enter slow start

Event: duplicate ACK
  State: SS or CA
  Action: increment the duplicate ACK count for the segment being ACKed
  Commentary: CongWin and Threshold not changed
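The event table above is a small state machine; a sketch of it (in units of segments, with the duplicate-ACK counting omitted):

```python
def tcp_event(state, cwnd, thresh, event, mss=1):
    """One transition of the sender table. state is 'SS' or 'CA'.
    Returns the new (state, cwnd, thresh)."""
    if event == "new_ack":
        if state == "SS":
            cwnd += mss                      # doubles cwnd every RTT
            if cwnd > thresh:
                state = "CA"
        else:
            cwnd += mss * mss / cwnd         # +1 MSS per RTT overall
    elif event == "triple_dup_ack":          # fast recovery
        thresh = cwnd / 2
        cwnd = thresh
        state = "CA"
    elif event == "timeout":                 # back to slow start
        thresh = cwnd / 2
        cwnd = mss
        state = "SS"
    return state, cwnd, thresh
```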
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT; want 10 Gbps throughput.
Requires a window size of W = 83,333 in-flight segments.
Throughput in terms of the loss rate: Throughput = 1.22 · MSS / (RTT · √p)
⇒ p = 2·10^-10
New versions of TCP are needed for high speed.
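Both numbers follow from the formula; a quick check:

```python
from math import isclose

MSS = 1500 * 8          # segment size in bits
RTT = 0.1               # seconds
target = 10e9           # 10 Gbps

# Segments in flight needed to sustain the target rate:
W = target * RTT / MSS                  # ~83,333 segments

# Invert throughput = 1.22 * MSS / (RTT * sqrt(p)) for the loss rate p:
p = (1.22 * MSS / (RTT * target)) ** 2  # ~2e-10: about one loss
                                        # per 5 billion segments
```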
36
Macroscopic TCP model
Deterministic packet losses: 1/p packets are transmitted in each cycle.
[figure: window sawtooth; each cycle of successes ends in a single loss]
37
TCP Model Cont'd
Equate the trapezoid area (3/8)W² under the sawtooth to 1/p, so W = √(8/(3p)); the average rate is then C·MSS/(RTT·√p) with C = √(3/2) ≈ 1.22.
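Filling in the algebra behind the constant (a standard sawtooth argument, with W the window just before a loss: the window averages (3/4)W over a cycle of W/2 RTTs):

```latex
\frac{3}{8}W^2 = \frac{1}{p}
\;\Rightarrow\;
W = \sqrt{\frac{8}{3p}},
\qquad
\bar{T} = \frac{\tfrac{3}{4}W \cdot MSS}{RTT}
        = \sqrt{\tfrac{3}{2}}\cdot\frac{MSS}{RTT\sqrt{p}}
        \approx \frac{1.22\,MSS}{RTT\sqrt{p}}
```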
38
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
[figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
39
Why is TCP fair?
Two competing sessions:
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease cuts throughput proportionally
[figure: phase plot of Connection 1 throughput vs. Connection 2 throughput, each axis bounded by R; the trajectory alternates congestion-avoidance additive increase with loss-triggered halving, converging toward the equal-bandwidth-share line]
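The convergence can be demonstrated with a toy simulation (a sketch under idealized assumptions: synchronized loss events, additive step of 1 unit per round, made-up starting rates):

```python
def aimd_converge(x, y, capacity=100.0, rounds=200):
    """Two flows doing AIMD against a shared link: +1 per round while
    under capacity, both halving on each (shared) loss event."""
    for _ in range(rounds):
        if x + y > capacity:
            x, y = x / 2, y / 2    # multiplicative decrease
        else:
            x, y = x + 1, y + 1    # additive increase, slope 1
    return x, y

# Start far from the fair share; each halving halves the gap |x - y|,
# while additive increase leaves it unchanged, so the gap shrinks to ~0.
x, y = aimd_converge(5.0, 80.0)
```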
40
Fairness (more)
Fairness and UDP:
  multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
  instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
  research area: TCP-friendly rate control, DCCP
Fairness and parallel TCP connections:
  nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
  example: a link of rate R supports 9 connections
    a new app asking for 1 TCP connection gets rate R/10
    a new app asking for 10 TCP connections gets R/2
41
Queuing Disciplines
Each router must implement some queuing discipline.
Queuing allocates both bandwidth and buffer space:
  bandwidth: which packet to serve (transmit) next
  buffer space: which packet to drop next (when required)
Queuing also affects latency.
42
Typical Internet Queuing: FIFO + drop-tail
The simplest choice; used widely in the Internet.
FIFO (first-in-first-out): implies a single class of traffic.
Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance.
Important distinction:
  FIFO: scheduling discipline
  Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves the responsibility for congestion control completely to the edges (e.g., TCP).
Does not separate between different flows.
No policing: send more packets, get more service.
Synchronization: end hosts react to the same events.
44
FIFO + Drop-tail Problems
Full queues:
  routers are forced to keep large queues to maintain high utilization
  TCP detects congestion from loss, which forces the network into long standing queues in steady state
Lock-out problem:
  drop-tail routers treat bursty traffic poorly
  traffic gets synchronized easily, which allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control.
Why? The router has a unified view of queuing behavior:
  routers see the actual queue occupancy (distinguishing queuing delay from propagation delay)
  routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay).
Accommodate bursts: queue size should reflect the ability to accept bursts rather than steady-state queuing.
Improve TCP performance with minimal hardware changes.
47
Lock-out Problem
Random drop: a packet arriving at a full queue causes some random packet to be dropped.
Drop front: on a full queue, drop the packet at the head of the queue.
Random drop and drop front solve the lock-out problem, but not the full-queues problem.
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
18
Algorithm for Computing Max-Min Fair Rate Vectors At the next step all sessions not using the
saturated links are incremented equally in rate until one or more new links become saturated
Note that the sessions using the previously saturated links might also be using these newly saturated links (at a lower rate)
The algorithm continues from step to step always equally incrementing all sessions not passing through any saturated link until all session pass through at least one such link
19
Algorithm for Computing Max-Min Fair Rate VectorsInit k=1 Fa
0=0 rp0=0 P1=P and A1=A
1 For all aA nak= num of sessions pPk
crossing link a2 Δr=minaA
k(Ca-Fak-1)na
k (find inc size)3 For all p Pk rp
k=rpk-1+ Δr (increment)
for other p rpk=rp
k-1
4 Fak=Σp crossing arp
k (Update flow)5 Ak+1= The set of unsaturated links6 Pk+1=all prsquos such that p cross only links in
Ak+1
7 k=k+18 If Pk is empty then stop else goto 1
20
Example of Algorithm Running
Step 1 All sessions get a rate of 13 because of a and the link a is saturated
Step 2 Sessions 1 and 4 get an additional rate increment of 13 for a total of 23 Link c is saturated now
Step 3 Session 4 gets an additional rate increment of 13 for a total of 1 Link d is saturated
End
1
2 3
4
5
213313513
12341
a
b
c
d
All link capacity is 1
21
Example revisited
Max-min fair vector if Tij = infin r = (frac12 frac12 frac12 frac12 ) T = 2 gt 152
What if the demands T13
and T31 = frac14
T24 = frac12 T42 = infin r = (frac14 frac12 frac14 frac34)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when packet dropped any ldquoupstream transmission capacity used for that packet was wasted
Host A
Host B
o
u
t
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA DECbit TCPIP ECN ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
Research area: TCP-friendly rate control, DCCP
Fairness and parallel TCP connections
Nothing prevents an app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections; a new app asking for 1 TCP connection gets rate R/10, but a new app asking for 10 TCP connections gets roughly R/2
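The arithmetic in that example can be sketched as follows, assuming the link is split equally per TCP connection rather than per application:

```python
# Per-connection fair sharing rewards an app that opens more connections.
R = 1.0          # normalized link rate
existing = 9     # connections already on the link

one_conn = R / (existing + 1)          # new app with 1 connection -> R/10
ten_conn = 10 * R / (existing + 10)    # new app with 10 connections -> 10R/19 ~ R/2

print(one_conn, ten_conn)   # 0.1 and ~0.526
```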
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet queuing: FIFO + drop-tail
Simplest choice; used widely in the Internet
FIFO (first-in-first-out): implies a single class of traffic
Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
Important distinction: FIFO is a scheduling discipline; drop-tail is a drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility for congestion control completely to the edges (e.g., TCP)
Does not separate between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
TCP detects congestion from loss
This forces the network to have long standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior:
Routers see actual queue occupancy (distinguishing queuing delay from propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect the ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving when the queue is full causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before the queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD)
If qlen > drop_level, drop each new packet with fixed probability p
Does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain a running average of the queue length
If avg < min_th: do nothing (low queuing; send packets through)
If avg > max_th: drop the packet (protection from misbehaving sources)
Else: mark/drop the packet with probability proportional to the queue length (notify sources of incipient congestion)
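The three cases can be sketched in Python; the thresholds, max_p, and EWMA weight below are illustrative values I chose, not values prescribed by RED:

```python
import random

# Minimal RED sketch based on the slide's three cases.
MIN_TH, MAX_TH, MAX_P, W_Q = 5.0, 15.0, 0.1, 0.002   # illustrative parameters

def red_decision(avg_qlen):
    """Return 'enqueue', 'mark', or 'drop' for the current average queue length."""
    if avg_qlen < MIN_TH:
        return "enqueue"               # low queuing: send packets through
    if avg_qlen > MAX_TH:
        return "drop"                  # protect against misbehaving sources
    # in between: mark with probability rising linearly toward MAX_P
    p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    return "mark" if random.random() < p else "enqueue"

def update_avg(avg, qlen, w=W_Q):
    """EWMA of the instantaneous queue length (the 'running average')."""
    return (1 - w) * avg + w * qlen
```

Deciding on the average rather than the instantaneous queue length is what lets RED absorb short bursts without marking them.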
51
RED Operation
[Figure: drop probability P(drop) vs average queue length: 0 below min_th, rising linearly to max_P at max_th, then jumping to 1.0 above max_th]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
A simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link
Bursts of FTP can congest the router and cause audio loss
Want to give priority to audio over FTP
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)?
Policing: force sources to adhere to bandwidth allocations
Marking and policing at the network edge: similar to the ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission; a flow declares its needs, and the network may block the call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this ...
58
Scheduling and Policing Mechanisms
Scheduling: choose the next packet to send on the link
FIFO (first in first out) scheduling: send in order of arrival to the queue
Discard policy: if a packet arrives to a full queue, which to discard?
Tail drop: drop the arriving packet
Priority: drop/remove on a priority basis
Random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit the highest-priority queued packet
Multiple classes with different priorities; class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
Round robin scheduling:
Multiple classes
Cyclically scan the class queues, serving one packet from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing (WFQ):
Generalized round robin
Each class gets a weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware. This allows DWRR to support the arbitration of output-port bandwidth on high-speed interfaces in both the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output-port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes: the DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets of 600, 400, 300 bytes
Queue 2 (25% BW, quantum[2] = 500): packets of 400, 400, 300 bytes
Queue 3 (25% BW, quantum[3] = 500): packets of 600, 300, 400 bytes
Modified Deficit Round Robin (MDRR) gives strict priority to one class, say VoIP
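A toy simulation of the example above, assuming the packet lists as read from the figure (one possible DWRR variant: credit is added on each visit and forfeited when a queue empties):

```python
from collections import deque

# DWRR sketch with the example's quanta (1000/500/500 bytes).
queues = {
    1: deque([600, 400, 300]),   # 50% BW, quantum 1000
    2: deque([400, 400, 300]),   # 25% BW, quantum 500
    3: deque([600, 300, 400]),   # 25% BW, quantum 500
}
quantum = {1: 1000, 2: 500, 3: 500}
deficit = {q: 0 for q in queues}
sent = []                        # (queue, packet_size) in transmission order

while any(queues.values()):
    for q in queues:
        if not queues[q]:
            continue
        deficit[q] += quantum[q]                      # earn this round's credit
        while queues[q] and queues[q][0] <= deficit[q]:
            pkt = queues[q].popleft()                 # head packet fits: send it
            deficit[q] -= pkt
            sent.append((q, pkt))
        if not queues[q]:
            deficit[q] = 0                            # empty queue forfeits credit

print(sent)
```

Over a long run the byte totals track the 50/25/25 weights; unused credit carried in DeficitCounter is what lets a queue with a large head packet catch up in the next round.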
67
Policing Mechanisms
Goal: limit traffic so that it does not exceed declared parameters
Three commonly used criteria:
(Long-term) Average Rate: how many packets can be sent per unit time (in the long run); the crucial question is the interval length: 100 packets per second and 6000 packets per minute have the same average
Peak Rate: e.g., a 6000 packets-per-minute (ppm) peak rate with a 1500 ppm average rate
(Max) Burst Size: max number of packets sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and Average Rate
Bucket can hold b tokens
Tokens generated at rate r tokens/sec unless the bucket is full
Over an interval of length t, the number of packets admitted is less than or equal to (r*t + b)
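A minimal token-bucket policer sketch (one token per packet, matching the packet-count bound r*t + b; the class and method names are mine):

```python
# Token bucket: holds up to b tokens, refilled at r tokens/sec;
# a packet is admitted iff a token is available when it arrives.
class TokenBucket:
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b                 # bucket starts full
        self.last = 0.0                 # time of last arrival

    def admit(self, now):
        # accrue tokens since the last arrival, capped at bucket size b
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(r=2.0, b=3)            # 2 tokens/sec, burst of 3
# a burst of 5 back-to-back packets at t=0: only the first 3 get through
results = [tb.admit(0.0) for _ in range(5)]
print(results)                          # [True, True, True, False, False]
```

Over any interval of length t the policer admits at most r*t + b packets, which is exactly the slide's bound.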
69
Policing Mechanisms (more)
Token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee: with token rate r, bucket size b, and per-flow WFQ rate R, the maximum delay is D_max = b/R
[Figure: arriving traffic passes through a token bucket (rate r, size b) into a WFQ scheduler with per-flow rate R]
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
19
Algorithm for Computing Max-Min Fair Rate Vectors
Init: k = 1, F_a^0 = 0, r_p^0 = 0, P^1 = P, and A^1 = A
1. For all a in A^k: n_a^k = number of sessions p in P^k crossing link a
2. Δr = min over a in A^k of (C_a - F_a^(k-1)) / n_a^k (find the increment size)
3. For all p in P^k: r_p^k = r_p^(k-1) + Δr (increment); for all other p: r_p^k = r_p^(k-1)
4. F_a^k = sum over p crossing a of r_p^k (update flow)
5. A^(k+1) = the set of unsaturated links
6. P^(k+1) = all p such that p crosses only links in A^(k+1)
7. k = k + 1
8. If P^k is empty, stop; else go to 1
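The algorithm can be sketched as a water-filling loop. The topology below is a hypothetical one of my own (link a shared by three sessions, session 4 continuing over c and d), not the slide's figure; each session is assumed to cross at least one link:

```python
# Water-filling computation of max-min fair rates.
# sessions: session id -> list of links it crosses; capacity: link -> C_a.
def max_min_rates(sessions, capacity):
    rates = {p: 0.0 for p in sessions}
    flow = {a: 0.0 for a in capacity}
    active = set(sessions)                       # P^k: unfrozen sessions
    while active:
        # n_a: number of active sessions crossing link a
        n = {a: sum(1 for p in active if a in sessions[p]) for a in capacity}
        # increment size: smallest equal raise that saturates some link
        dr = min((capacity[a] - flow[a]) / n[a] for a in capacity if n[a] > 0)
        for p in active:
            rates[p] += dr                       # raise all active sessions
        for a in capacity:
            flow[a] = sum(rates[p] for p in sessions if a in sessions[p])
        saturated = {a for a in capacity if capacity[a] - flow[a] < 1e-9}
        # freeze every session that crosses a saturated link
        active = {p for p in active if not set(sessions[p]) & saturated}
    return rates

sessions = {1: ["a", "c"], 2: ["a"], 3: ["a", "b"], 4: ["c", "d"]}
capacity = {a: 1.0 for a in "abcd"}
rates = max_min_rates(sessions, capacity)
print(rates)   # sessions 1-3 bottlenecked at 1/3 by link a; session 4 gets 2/3
```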
20
Example of Algorithm Running
[Figure: five sessions (1 through 5) routed over links a, b, c, d; all link capacities are 1]
Step 1: All sessions get a rate of 1/3 because of link a, and link a is saturated.
Step 2: Sessions 1 and 4 get an additional rate increment of 1/3, for a total of 2/3. Link c is saturated now.
Step 3: Session 4 gets an additional rate increment of 1/3, for a total of 1. Link d is saturated.
End
21
Example revisited
Max-min fair vector if T_ij = infinity: r = (1/2, 1/2, 1/2, 1/2), total T = 2 > 1.52
What if the demands T13 and T31 = 1/4, T24 = 1/2, T42 = infinity? Then r = (1/4, 1/2, 1/4, 3/4)
22
Causes/costs of congestion: scenario 3
Another "cost" of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted
[Figure: multihop topology with Hosts A and B; throughput lambda_out]
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
No explicit feedback from the network
Congestion inferred from end-system observed loss and delay
Approach taken by TCP
Network-assisted congestion control:
Routers provide feedback to end systems
Single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
Explicit rate the sender should send at
24
Case study ATM ABR congestion control
ABR: available bit rate, an "elastic service"
If the sender's path is "underloaded": sender should use the available bandwidth
If the sender's path is congested: sender throttled to a minimum guaranteed rate
RM (resource management) cells:
Sent by sender, interspersed with data cells
Bits in RM cell set by switches ("network-assisted"):
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
25
Case study ATM ABR congestion control
Two-byte ER (explicit rate) field in RM cell: a congested switch may lower the ER value in the cell
The sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 in a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell
26
TCP Congestion Control
End-end control (no network assistance)
Sender limits transmission: LastByteSent - LastByteAcked <= CongWin
Roughly: rate = CongWin/RTT bytes/sec
CongWin is a dynamic function of perceived network congestion
How does the sender perceive congestion?
Loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after a loss event
Three mechanisms:
AIMD
Slow start
Conservative after timeout events
27
TCP AIMD
[Figure: long-lived TCP connection; sawtooth of the congestion window (8, 16, 24 Kbytes) over time]
Multiplicative decrease: cut CongWin in half after a loss event
Additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
28
Additive Increase
Increase CongWin by 1 MSS every RTT in the absence of loss events: probing
cwnd += SMSS*SMSS/cwnd   (*)
This adjustment is executed on every incoming non-duplicate ACK. Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
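A quick sketch showing that the per-ACK rule adds roughly one SMSS over a window's worth of ACKs, i.e. ~1 MSS per RTT (the numbers are illustrative):

```python
# Apply cwnd += SMSS*SMSS/cwnd once per ACK for one RTT's worth of ACKs.
SMSS = 1000
cwnd = 10 * SMSS                        # 10 segments in flight

acks_in_window = cwnd // SMSS           # ~10 ACKs arrive in one RTT
for _ in range(acks_in_window):
    cwnd += SMSS * SMSS / cwnd          # per-ACK additive increase

print(cwnd)   # close to 11000: grew by roughly one SMSS over the RTT
```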
29
TCP Slow Start
When the connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes and RTT = 200 msec gives an initial rate of only 20 kbps
Available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
When the connection begins, increase the rate exponentially fast until the first loss event
30
TCP Slow Start (more)
When the connection begins, increase the rate exponentially until the first loss event:
Double CongWin every RTT
Done by incrementing CongWin for every ACK received
Summary: the initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one segment, then two, then four, one round per RTT, to Host B]
31
Refinement
After 3 dup ACKs:
Threshold is set to CongWin/2 and CongWin is cut in half
Window then grows linearly
But after a timeout event:
Threshold set to CongWin/2, and
CongWin instead set to 1 MSS
Window then grows exponentially to the threshold, then grows linearly
Philosophy:
3 dup ACKs indicate the network is capable of delivering some segments
A timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.
Implementation:
Variable Threshold
At a loss event, Threshold is set to 1/2 of CongWin just before the loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase: the window grows exponentially
When CongWin is above Threshold, the sender is in the congestion-avoidance phase: the window grows linearly
When a triple duplicate ACK occurs: Threshold set to CongWin/2 and CongWin set to Threshold
When a timeout occurs: Threshold set to CongWin/2 and CongWin is set to 1 MSS
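The four rules above can be sketched as a tiny state machine (CongWin counted in MSS units; the class name and the initial Threshold of 8 are illustrative choices, not from the slides):

```python
# Minimal sketch of the summary rules: slow start, congestion avoidance,
# triple-duplicate-ACK halving, and timeout reset.
class TcpSender:
    def __init__(self):
        self.cwnd = 1.0            # CongWin, in MSS
        self.ssthresh = 8.0        # Threshold, in MSS (illustrative)
        self.state = "SS"          # slow start

    def on_new_ack(self):
        if self.state == "SS":
            self.cwnd += 1.0                 # +1 MSS per ACK: doubles per RTT
            if self.cwnd > self.ssthresh:
                self.state = "CA"            # cross Threshold -> linear growth
        else:
            self.cwnd += 1.0 / self.cwnd     # ~+1 MSS per RTT

    def on_triple_dup_ack(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = self.ssthresh            # halve, stay in avoidance
        self.state = "CA"

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1.0                      # restart from 1 MSS
        self.state = "SS"

s = TcpSender()
for _ in range(12):
    s.on_new_ack()
s.on_triple_dup_ack()
print(s.state, s.cwnd)
```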
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
21
Example revisited
Max-min fair vector: if Tij = ∞ for all sessions, r = (1/2, 1/2, 1/2, 1/2), T = 2 > 1.52
What if the demands T13 and T31 = 1/4, T24 = 1/2, T42 = ∞? Then r = (1/4, 1/2, 1/4, 3/4)
22
Causes/costs of congestion: scenario 3
Another "cost" of congestion:
when a packet is dropped, any "upstream" transmission capacity used for that packet was wasted
[Figure: multihop topology with Host A and Host B; goodput λout]
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss and delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR: available bit rate, "elastic service"
if sender's path "underloaded":
sender should use available bandwidth
if sender's path congested:
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches ("network-assisted")
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell: congested switch may lower ER value in cell
sender's send rate is thus the minimum supportable rate on the path
EFCI bit in data cells: set to 1 in congested switch; if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission: LastByteSent − LastByteAcked ≤ CongWin
Roughly: rate = CongWin/RTT Bytes/sec
CongWin is a dynamic function of perceived network congestion
How does sender perceive congestion?
loss event = timeout or 3 duplicate ACKs
TCP sender reduces rate (CongWin) after loss event
three mechanisms:
AIMD
slow start
conservative after timeout events
27
TCP AIMD
[Figure: congestion window (8, 16, 24 Kbytes) vs. time: the sawtooth of a long-lived TCP connection]
multiplicative decrease: cut CongWin in half after loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events (probing)
cwnd += SMSS*SMSS/cwnd   (*)
This adjustment is executed on every incoming non-duplicate ACK. Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
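The per-ACK update above can be sketched as follows (a minimal illustration; the SMSS value and function name are assumptions, not part of the slides):

```python
SMSS = 1460  # sender maximum segment size in bytes (assumed value)

def on_nondup_ack(cwnd: float) -> float:
    """Congestion-avoidance update applied on each non-duplicate ACK."""
    return cwnd + SMSS * SMSS / cwnd

# One RTT delivers roughly cwnd/SMSS ACKs, so the window grows by
# about one full-sized segment per RTT:
cwnd = 10 * SMSS
for _ in range(10):  # one window's worth of ACKs
    cwnd = on_nondup_ack(cwnd)
print(cwnd / SMSS)   # just under 11 segments after one RTT
```

Because the increment shrinks slightly as cwnd grows within the RTT, the result is a little less than exactly 11 segments, which is why the RFC calls the formula an acceptable approximation.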
29
TCP Slow Start
When connection begins, CongWin = 1 MSS
Example: MSS = 500 bytes & RTT = 200 msec, so initial rate = 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to respectable rate
When connection begins, increase rate exponentially fast until first loss event
30
TCP Slow Start (more)
When connection begins, increase rate exponentially until first loss event: double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A sends one, two, then four segments in successive RTTs; Host B ACKs each]
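The doubling-per-RTT behaviour can be checked with a toy loop (CongWin counted in MSS units; the function name is hypothetical):

```python
def slow_start_rtts(target_mss: int) -> int:
    """RTTs until CongWin (in MSS units) reaches target, doubling per RTT,
    since incrementing CongWin once per ACK doubles it every RTT."""
    congwin, rtts = 1, 0
    while congwin < target_mss:
        congwin *= 2
        rtts += 1
    return rtts

print(slow_start_rtts(64))  # 6 RTTs to grow from 1 MSS to 64 MSS
```

With the slide's example (MSS = 500 bytes, RTT = 200 ms) the connection starts at only 20 kbps, but six round trips later the window is already 64 segments.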
31
Refinement
After 3 dup ACKs:
Threshold is set to CongWin/2; CongWin is cut in half
window then grows linearly
But after timeout event:
Threshold set to CongWin/2 and CongWin instead set to 1 MSS
window then grows exponentially to the threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout
Implementation: Variable Threshold
At loss event, Threshold is set to 1/2 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold, sender in slow-start phase: window grows exponentially
When CongWin is above Threshold, sender is in congestion-avoidance phase: window grows linearly
When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold
When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS
34
TCP sender congestion control
Event: ACK receipt for previously unacked data; State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: Resulting in a doubling of CongWin every RTT
Event: ACK receipt for previously unacked data; State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS*(MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT
Event: Loss event detected by triple duplicate ACK; State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS
Event: Timeout; State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start
Event: Duplicate ACK; State: SS or CA
Action: Increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
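The table above condenses into a small event-driven model (a simplified sketch, not a real TCP implementation; CongWin and Threshold are in MSS units, and the initial Threshold of 64 is an assumed value):

```python
class TcpCc:
    """Toy model of the sender actions in the table above."""
    def __init__(self):
        self.congwin, self.threshold, self.state = 1.0, 64.0, "SS"

    def on_new_ack(self):
        if self.state == "SS":
            self.congwin += 1.0                  # exponential growth
            if self.congwin > self.threshold:
                self.state = "CA"
        else:
            self.congwin += 1.0 / self.congwin   # additive increase

    def on_triple_dup_ack(self):
        self.threshold = self.congwin / 2        # multiplicative decrease
        self.congwin = self.threshold
        self.state = "CA"

    def on_timeout(self):
        self.threshold = self.congwin / 2
        self.congwin = 1.0
        self.state = "SS"                        # enter slow start

cc = TcpCc()
for _ in range(80):          # 80 new ACKs: slow start, then avoidance
    cc.on_new_ack()
cc.on_triple_dup_ack()
print(cc.state, cc.congwin)  # CA, about half the pre-loss window
```

Driving the model with ACKs and one triple-duplicate-ACK event reproduces the table row by row: the window climbs past the threshold, switches to linear growth, then halves on the loss signal.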
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22·MSS / (RTT·√p)
⇒ requires p = 2·10^-10, an extremely small loss rate
New versions of TCP for high-speed needed
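Plugging the slide's numbers into Throughput = 1.22·MSS/(RTT·√p) confirms both the window size and the required loss rate (a quick check; variable names are illustrative):

```python
mss = 1500 * 8      # segment size in bits
rtt = 0.100         # round-trip time in seconds
target = 10e9      # desired throughput: 10 Gbps

w = target * rtt / mss                  # in-flight segments needed
p = (1.22 * mss / (rtt * target)) ** 2  # solve the formula for p
print(round(w), f"{p:.1e}")
```

The window works out to about 83,333 segments and p to roughly 2·10^-10, i.e. at most one loss in billions of segments, which motivates the high-speed TCP variants.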
36
Macroscopic TCP model
Deterministic packet losses
1/p packets transmitted in a cycle
[Figure: sawtooth of the congestion window; each cycle ends in one loss after 1/p successful transmissions]
37
TCP Model Contd
Equate the trapezoid area (3/8)W² under the sawtooth to 1/p, so W = √(8/(3p)); average throughput = C·MSS/(RTT·√p), where C = √(3/2) ≈ 1.22
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneck router, capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions:
Additive increase gives slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
[Figure: Connection 1 throughput (x-axis) vs. Connection 2 throughput (y-axis), both 0 to R; repeated congestion avoidance (additive increase) and loss (window decreased by factor of 2) move the operating point toward the equal bandwidth share line]
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: do not want rate throttled by congestion control
Instead use UDP: pump audio/video at constant rate, tolerate packet loss
Research area: TCP-friendly congestion control, DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections
new app asks for 1 TCP, gets rate R/10
new app asks for 10 TCPs, gets R/2
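The arithmetic behind this example, assuming the link splits equally among all TCP connections (the function name is hypothetical):

```python
def new_app_share(R: float, existing: int, new_conns: int) -> float:
    """Equal per-connection sharing: the new app's total share of rate R."""
    return R * new_conns / (existing + new_conns)

print(new_app_share(1.0, 9, 1))   # 0.1: one connection among ten gets R/10
print(new_app_share(1.0, 9, 10))  # ~0.53: ten of nineteen, about R/2
```

So per-connection fairness is not per-application fairness: an application can grab a larger share simply by opening more connections.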
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies single class of traffic
Drop-tail: arriving packets get dropped when queue is full, regardless of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not distinguish between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilizations
TCP detects congestion from loss
• Forces network to have long standing queues in steady-state
Lock-out problem: drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily: allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? Router has a unified view of queuing behavior
Routers see actual queue occupancy (can distinguish queueing delay from propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: packet arriving when queue is full causes some random packet to be dropped
Drop front: on full queue, drop packet at head of queue
Random drop and drop front solve the lock-out problem, but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD)
• If qlen > drop level, drop each new packet with fixed probability p
• Does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < minth, do nothing: low queuing, send packets through
If avg > maxth, drop packet: protection from misbehaving sources
Else, mark packet with probability proportional to queue length: notify sources of incipient congestion
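A minimal sketch of the rule above (threshold values are illustrative; real RED also uses an exponentially weighted moving average and a count-based correction omitted here):

```python
import random

def red_decision(avg: float, minth: float, maxth: float, maxp: float) -> str:
    """Per-packet RED decision given the averaged queue length avg."""
    if avg < minth:
        return "enqueue"   # low queuing: send packets through
    if avg >= maxth:
        return "drop"      # protection from misbehaving sources
    # between thresholds: mark with probability rising linearly to maxp
    p = maxp * (avg - minth) / (maxth - minth)
    return "mark" if random.random() < p else "enqueue"

print(red_decision(3, 5, 15, 0.1))   # enqueue (below minth)
print(red_decision(20, 5, 15, 0.1))  # drop (above maxth)
```

Because marking is random rather than tail-synchronized, different flows see losses at different times, which is what breaks window synchronization.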
51
RED Operation
[Figure: P(drop) vs. average queue length: 0 below minth, rising linearly to maxP at maxth, then jumping to 1.0 beyond maxth]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: 1 Mbps IP phone and FTP share a 1.5 Mbps link
bursts of FTP can congest router, cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
what if applications misbehave (audio sends at higher than declared rate)?
policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: Call Admission: flow declares its needs; network may block call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this ...
58
Scheduling And Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue; real-world example?
discard policy: if packet arrives to full queue, who to discard?
• Tail drop: drop arriving packet
• priority: drop/remove on priority basis
• random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g. IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware. This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to DeficitCounter, then DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement DeficitCounter by the size of each transmitted packet until either the size of the packet at the head of the queue is greater than DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets of 600, 400, 300 bytes
Queue 2 (25% BW, quantum[2] = 500): packets of 400, 400, 300 bytes
Queue 3 (25% BW, quantum[3] = 500): packets of 600, 300, 400 bytes
Modified Deficit Round Robin gives priority to one class, say VoIP
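The example can be traced with a short DWRR sketch (queue contents and quanta as on the slide; this simplified loop ignores packets arriving during service):

```python
from collections import deque

def dwrr(queues, quanta):
    """Serve queues round-robin; each visit adds the queue's quantum to its
    DeficitCounter and sends head packets while they fit in the credit."""
    qs = [deque(q) for q in queues]
    deficit = [0] * len(qs)
    order = []                        # (queue number, packet size) sent
    while any(qs):
        for i, q in enumerate(qs):
            if not q:
                continue
            deficit[i] += quanta[i]   # earn this visit's quantum
            while q and q[0] <= deficit[i]:
                pkt = q.popleft()
                deficit[i] -= pkt
                order.append((i + 1, pkt))
            if not q:
                deficit[i] = 0        # an emptied queue forfeits its credit

    return order

order = dwrr([[600, 400, 300], [400, 400, 300], [600, 300, 400]],
             [1000, 500, 500])
print(order)
```

With these quanta, queue 1 sends two packets on its first visit (600 + 400 ≤ 1000), while queue 3's 600-byte head packet must wait for a second round's accumulated credit: exactly the saved-credit behaviour the DeficitCounter exists for.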
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly used criteria:
(Long term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 1500 pkts per min (ppm) avg rate with 6000 ppm peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over interval of length t, number of packets admitted is less than or equal to (r·t + b)
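A token-bucket policer matching this description might look like the following (one token per packet assumed; names are hypothetical):

```python
class TokenBucket:
    """Admit a packet only if a token is available; tokens accrue at rate r
    up to bucket size b, so any interval t admits at most r*t + b packets."""
    def __init__(self, r: float, b: float):
        self.r, self.b = r, b
        self.tokens, self.last = b, 0.0   # bucket starts full

    def admit(self, now: float) -> bool:
        # refill for the time elapsed since the last packet, capped at b
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(r=2.0, b=3.0)            # 2 tokens/sec, burst of 3
burst = [tb.admit(0.0) for _ in range(4)]
print(burst)  # [True, True, True, False]: burst of 3 passes, 4th rejected
```

The bucket size b bounds the burst, while the refill rate r bounds the long-term average: exactly the two declared parameters being policed.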
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide guaranteed upper bound on delay, i.e., QoS guarantee
[Figure: arriving traffic → token bucket (token rate r, bucket size b) → WFQ with per-flow rate R; maximum delay Dmax = b/R]
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion: scenario 1
- Causes/costs of congestion: scenario 2
- Slide 5
- Causes/costs of congestion: scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
22
Causescosts of congestion scenario 3
Another ldquocostrdquo of congestion
when packet dropped any ldquoupstream transmission capacity used for that packet was wasted
Host A
Host B
o
u
t
23
Approaches towards congestion control
End-end congestion control
no explicit feedback from network
congestion inferred from end-system observed loss delay
approach taken by TCP
Network-assisted congestion control
routers provide feedback to end systems
single bit indicating congestion (SNA DECbit TCPIP ECN ATM)
explicit rate sender should send at
Two broad approaches towards congestion control
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement
After 3 dup ACKs:
CongWin is cut in half; Threshold is set to CongWin
window then grows linearly
But after timeout event:
Threshold set to CongWin/2 and CongWin instead set to 1 MSS
window then grows exponentially to the threshold, then grows linearly
Philosophy:
3 dup ACKs indicate network capable of delivering some segments
timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.
Implementation: variable Threshold
At loss event, Threshold is set to 1/2 of CongWin just before the loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold, sender is in slow-start phase: window grows exponentially
When CongWin is above Threshold, sender is in congestion-avoidance phase: window grows linearly
When a triple duplicate ACK occurs: Threshold set to CongWin/2 and CongWin set to Threshold
When timeout occurs: Threshold set to CongWin/2 and CongWin set to 1 MSS
34
TCP sender congestion control (Event / State / TCP Sender Action / Commentary)

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: Timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
Action: Increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
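The table above can be condensed into a small event handler. This is a sketch in MSS units, not a faithful TCP implementation; the class name, the 64-MSS initial threshold, and the >= test when leaving slow start are illustrative assumptions.

```python
# Reno-style sender sketch of the event table (windows in MSS units).
MSS = 1

class TcpSender:
    def __init__(self):
        self.congwin = 1 * MSS
        self.threshold = 64 * MSS   # illustrative initial threshold
        self.state = "SS"           # "SS" = slow start, "CA" = congestion avoidance

    def on_new_ack(self):
        if self.state == "SS":
            self.congwin += MSS                       # doubles CongWin every RTT
            if self.congwin >= self.threshold:        # leave slow start
                self.state = "CA"
        else:
            self.congwin += MSS * MSS / self.congwin  # +1 MSS per RTT

    def on_triple_dup_ack(self):
        # Fast recovery: multiplicative decrease, stay above 1 MSS.
        self.threshold = max(self.congwin / 2, 1 * MSS)
        self.congwin = self.threshold
        self.state = "CA"

    def on_timeout(self):
        # More alarming: restart from 1 MSS in slow start.
        self.threshold = max(self.congwin / 2, 1 * MSS)
        self.congwin = 1 * MSS
        self.state = "SS"
```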
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22 · MSS / (RTT · √p)
⇒ p ≈ 2·10^-10
New versions of TCP for high-speed needed!
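A quick check of the slide's numbers: the window follows from W = Throughput · RTT / MSS, and inverting the throughput formula gives the required loss rate.

```python
# Verify the TCP Futures example: 1500-byte segments, 100 ms RTT, 10 Gbps.
MSS = 1500 * 8   # segment size in bits
RTT = 0.1        # seconds
target = 10e9    # desired throughput, bits/sec

# In-flight segments needed to fill the pipe:
W = target * RTT / MSS                     # ~83333 segments

# Throughput = 1.22 * MSS / (RTT * sqrt(p))  =>  p = (1.22*MSS/(RTT*T))^2
p = (1.22 * MSS / (RTT * target)) ** 2     # ~2e-10: one loss per ~5e9 segments
```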
36
Macroscopic TCP model
Deterministic packet losses
1/p packets transmitted in a cycle
[Figure: sawtooth window between W/2 and W; one loss ends each cycle of 1/p successes]
37
TCP Model Contd
Equate the trapezoid area under one sawtooth cycle, (3/8)W², to 1/p ⇒ W = √(8/(3p))
Average throughput = (3/4) · W · MSS / RTT = C · MSS / (RTT · √p), with C = √(3/2) ≈ 1.22
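A worked check of the derivation, assuming the deterministic sawtooth (window oscillates between W/2 and W, one loss per cycle) and reusing the loss rate from the previous slide's example:

```python
from math import sqrt

# Macroscopic TCP model: (3/8)W^2 packets per cycle equals 1/p.
p = 2.14e-10     # loss rate from the 10 Gbps example (assumed here)
MSS = 1500 * 8   # bits
RTT = 0.1        # seconds

W = sqrt(8 / (3 * p))                  # peak window, in segments
avg_throughput = 0.75 * W * MSS / RTT  # average window is (3/4)W

# The same number via the closed form C*MSS/(RTT*sqrt(p)), C = sqrt(3/2):
closed_form = sqrt(1.5) * MSS / (RTT * sqrt(p))
# Both come out near the 10 Gbps target of the previous slide.
```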
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
TCP Fairness
39
Why is TCP fair
Two competing sessions: additive increase gives slope of 1 as throughput increases; multiplicative decrease decreases throughput proportionally
[Figure: Connection 2 throughput versus Connection 1 throughput, both axes 0 to R; under congestion avoidance, additive increase moves the operating point along a slope-1 line, each loss halves both throughputs, and the trajectory converges to the equal-bandwidth-share line]
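The convergence argument can be simulated directly: whatever the starting split, additive increase preserves the gap between the two flows while each shared loss halves it, so the allocations approach the fair share.

```python
# Sketch of why AIMD is fair: two flows additively increase until
# their sum exceeds the bottleneck capacity R, then both halve.
R = 100.0
x, y = 80.0, 10.0   # deliberately unequal starting throughputs

for _ in range(200):
    x += 1.0        # additive increase, slope 1 in the phase plot
    y += 1.0
    if x + y > R:   # both flows see the shared loss event
        x /= 2.0    # multiplicative decrease
        y /= 2.0

# The gap |x - y| halves at every loss event, so the flows converge
# toward the equal bandwidth share x == y.
```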
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: do not want rate throttled by congestion control
Instead use UDP: pump audio/video at constant rate, tolerate packet loss
Research area: TCP-friendly rate control, DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections; new app asks for 1 TCP, gets rate R/10; new app asks for 10 TCPs, gets roughly R/2 (10 of the 19 connections)
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies single class of traffic
Drop-tail: arriving packets get dropped when queue is full, regardless of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not distinguish between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilizations
TCP detects congestion from loss, forcing the network to have long standing queues in steady-state
Lock-out problem: drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior:
Routers see actual queue occupancy (distinguishing queueing delay from propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving when the queue is full causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem, but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD): if qlen > drop level, drop each new packet with fixed probability p
Does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < minth: do nothing (low queuing, send packets through)
If avg > maxth: drop packet (protection from misbehaving sources)
Else: mark packet with probability proportional to queue length (notify sources of incipient congestion)
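The three cases above, plus the running average, fit in a short sketch. The parameter values (minth, maxth, maxp, and the EWMA weight wq) are illustrative assumptions, and the "gentle" RED variant is omitted.

```python
import random

# RED marking decision sketch (illustrative parameters).
minth, maxth, maxp = 5.0, 15.0, 0.10
wq = 0.002  # EWMA weight for the average queue length

def update_avg(avg, qlen):
    """Running average of queue length: avg <- (1-wq)*avg + wq*qlen."""
    return (1 - wq) * avg + wq * qlen

def red_drop(avg):
    """Return True if the arriving packet should be dropped/marked."""
    if avg < minth:
        return False        # low queuing: send packet through
    if avg >= maxth:
        return True         # protect against misbehaving sources
    # Linear ramp from 0 at minth up to maxp at maxth.
    pdrop = maxp * (avg - minth) / (maxth - minth)
    return random.random() < pdrop
```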
51
RED Operation
[Figure: drop probability P(drop) versus average queue length: zero below minth, rising linearly to maxP at maxth, then jumping to 1.0]
52
Improving QOS in IP Networks
Thus far: "making the best of best effort"
Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees
Example: 1 Mbps IP phone and FTP share a 1.5 Mbps link; bursts of FTP can congest the router and cause audio loss; want to give priority to audio over FTP
Principle 1: packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
54
Principles for QOS Guarantees (more)
what if applications misbehave (audio sends higher than declared rate)? policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QOS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QOS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission: flow declares its needs; network may block call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue; real-world example?
discard policy: if packet arrives to full queue, who to discard?
tail drop: drop arriving packet
priority: drop/remove on priority basis
random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest-priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
Queue 1: 50% BW, quantum[1] = 1000; packets 600, 400, 300
Queue 2: 25% BW, quantum[2] = 500; packets 400, 400, 300
Queue 3: 25% BW, quantum[3] = 500; packets 600, 300, 400
Modified Deficit Round Robin gives priority to one class, say VoIP
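The scheduler described on the previous slides can be sketched and run on the example's quanta. The packet lists mirror the example's sizes, but which end of each queue is the head is an assumption of this sketch.

```python
from collections import deque

# DWRR sketch: per visit, a queue gains its quantum of credit and
# transmits head packets while it can afford them; an emptied queue
# forfeits leftover credit, per the slides.
def dwrr(queues, quanta, rounds):
    """queues: list of deques of packet sizes; returns transmit order."""
    deficit = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                continue
            deficit[i] += quanta[i]              # add this queue's quantum
            while q and q[0] <= deficit[i]:      # head packet affordable?
                pkt = q.popleft()
                deficit[i] -= pkt
                sent.append((i, pkt))
            if not q:
                deficit[i] = 0                   # empty queue loses credit
    return sent

queues = [deque([600, 400, 300]),   # Queue 1, quantum 1000
          deque([400, 400, 300]),   # Queue 2, quantum 500
          deque([600, 300, 400])]   # Queue 3, quantum 500
order = dwrr(queues, quanta=[1000, 500, 500], rounds=3)
# Over the three rounds, bytes served track the 50/25/25 weights.
```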
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly-used criteria:
(Long term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average!
Peak Rate: e.g., 6000 pkts per min (ppm) avg.; 1500 ppm peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over interval of length t, number of packets admitted is less than or equal to (r·t + b)
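A minimal token-bucket policer sketch (one token per packet; the class name, rate, and burst values are illustrative): tokens accrue at rate r up to depth b, and a packet is admitted only if a token is available, which enforces the (r·t + b) bound above.

```python
# Token-bucket policer: at most burst packets back-to-back, and at
# most rate*t + burst packets over any interval of length t.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate      # token generation rate r (tokens/sec)
        self.burst = burst    # bucket depth b (tokens)
        self.tokens = burst   # bucket starts full
        self.last = 0.0       # time of last arrival

    def admit(self, now):
        """Return True if a packet arriving at time `now` is admitted."""
        elapsed = now - self.last
        self.last = now
        # Refill, capping at the bucket depth.
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(rate=10, burst=5)
# A burst of 5 back-to-back packets passes; the 6th is rejected.
results = [tb.admit(0.0) for _ in range(6)]
```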
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee!
[Figure: arriving traffic passes through a token bucket (token rate r, bucket size b) into WFQ with per-flow rate R; worst-case delay Dmax = b/R]
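A worked instance of the bound, with illustrative numbers: the worst case is a full bucket released at once, and WFQ drains that burst at the flow's guaranteed rate R, so the last bit waits at most b/R.

```python
# Delay bound for token bucket + WFQ (illustrative units: a burst of
# b "units" drains at guaranteed per-flow rate R units/sec).
b = 100          # bucket size
R = 500          # WFQ-guaranteed per-flow service rate, R >= token rate r
d_max = b / R    # worst-case queueing delay: full-bucket burst / R
# Here d_max = 100/500 = 0.2 seconds.
```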
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
23
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP
Network-assisted congestion control:
routers provide feedback to end systems
single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
explicit rate sender should send at
24
Case study ATM ABR congestion control
ABR: available bit rate: "elastic service"
if sender's path "underloaded": sender should use available bandwidth
if sender's path congested: sender throttled to minimum guaranteed rate
RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches ("network-assisted"):
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
24
Case study ATM ABR congestion control
ABR available bit rate ldquoelastic servicerdquo
if senderrsquos path ldquounderloadedrdquo
sender should use available bandwidth
if senderrsquos path congested
sender throttled to minimum guaranteed rate
RM (resource management) cells
sent by sender interspersed with data cells
bits in RM cell set by switches (ldquonetwork-assistedrdquo)
NI bit no increase in rate (mild congestion)
CI bit congestion indication
RM cells returned to sender by receiver with bits intact
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement
After 3 dup ACKs:
CongWin is cut in half; Threshold is set to the new CongWin
window then grows linearly
But after timeout event:
Threshold set to CongWin/2 and CongWin instead set to 1 MSS
window then grows exponentially to the threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout
Implementation: variable Threshold
At loss event, Threshold is set to 1/2 of CongWin just before the loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold, sender is in slow-start phase: window grows exponentially
When CongWin is above Threshold, sender is in congestion-avoidance phase: window grows linearly
When a triple duplicate ACK occurs: Threshold set to CongWin/2 and CongWin set to Threshold
When timeout occurs: Threshold set to CongWin/2 and CongWin is set to 1 MSS
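The loss-event rules above condense into one small event handler; the function name and the 1-MSS floor on Threshold are illustrative choices, not from the slides.

```python
# Sketch of the two loss-event rules; windows are measured in MSS.

def on_event(event, congwin, threshold):
    """Return (new_congwin, new_threshold) after a loss-related event."""
    if event == "triple_dup_ack":            # fast recovery
        threshold = max(congwin // 2, 1)     # floor of 1 MSS is illustrative
        congwin = threshold                  # multiplicative decrease
    elif event == "timeout":                 # "more alarming": restart slow start
        threshold = max(congwin // 2, 1)
        congwin = 1
    return congwin, threshold

print(on_event("triple_dup_ack", 16, 8))   # (8, 8)
print(on_event("timeout", 16, 8))          # (1, 8)
```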
34
TCP sender congestion control (Event / State / TCP Sender Action / Commentary):

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold), set state to "Congestion Avoidance"
Commentary: resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS * (MSS/CongWin)
Commentary: additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

Event: duplicate ACK
State: SS or CA
Action: increment duplicate ACK count for segment being ACKed
Commentary: CongWin and Threshold not changed
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = (1.22 · MSS) / (RTT · √p)
→ requires p = 2·10⁻¹⁰, an extremely low loss rate
New versions of TCP for high-speed networks needed
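Plugging the slide's numbers into the throughput formula Throughput = 1.22 · MSS / (RTT · √p) reproduces both the window size and the loss rate; variable names below are illustrative.

```python
# Check the 10 Gbps example: required window and implied loss rate.

MSS = 1500 * 8        # segment size, bits
RTT = 0.100           # round-trip time, seconds
T = 10e9              # target throughput, bits/sec

W = T * RTT / MSS                   # required in-flight window, segments
p = (1.22 * MSS / (RTT * T)) ** 2   # loss rate solved from the formula

print(round(W))       # 83333
print(p)              # about 2.1e-10, the slide's p = 2*10^-10
```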
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
[Figure: window sawtooth; each cycle of successes ends with one loss]
37
TCP Model Contd
Equate the trapezoid area (3/8)·W² under the sawtooth to 1/p, so W = √(8/(3p))
Average throughput = (C · MSS) / (RTT · √p), where C = √(3/2) ≈ 1.22
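A small numerical check of where the 1.22 constant comes from, assuming the sawtooth model: peak window W from (3/8)·W² = 1/p, average window 3W/4; the example loss rate is arbitrary.

```python
# Derive the coefficient C in Throughput = C * MSS / (RTT * sqrt(p)).
import math

p = 1e-6                          # example loss rate (arbitrary)
W = math.sqrt(8 / (3 * p))        # peak window: solve (3/8)*W**2 = 1/p
avg_window = 0.75 * W             # mean of the W/2..W sawtooth
C = avg_window * math.sqrt(p)     # coefficient of MSS/(RTT*sqrt(p))

print(round(C, 2))                # 1.22, i.e. sqrt(3/2)
```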
38
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
TCP Fairness
39
Why is TCP fair
Two competing sessions:
additive increase gives slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally
[Figure: Connection 2 throughput vs. Connection 1 throughput, both axes bounded by R; repeated additive-increase (slope 1) and multiplicative-decrease (halving) steps under congestion avoidance converge toward the equal bandwidth share line]
40
Fairness (more)
Fairness and UDP:
Multimedia apps often do not use TCP: do not want rate throttled by congestion control
Instead use UDP: pump audio/video at constant rate, tolerate packet loss
Research area: TCP-friendly congestion control, DCCP
Fairness and parallel TCP connections:
nothing prevents app from opening parallel connections between 2 hosts
Web browsers do this
Example: link of rate R supporting 9 connections
new app asks for 1 TCP, gets rate R/10
new app asks for 10 TCPs, gets R/2
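The R/10 vs. R/2 figures follow from simple per-connection fairness arithmetic; the helper below is illustrative.

```python
# Under per-connection fairness, an app's share of link rate R is
# (its connections) / (total connections).

def app_share(app_conns, other_conns, R=1.0):
    """Illustrative share of link rate R for one app."""
    return R * app_conns / (app_conns + other_conns)

print(app_share(1, 9))               # 0.1, i.e. R/10
print(round(app_share(10, 9), 3))    # 0.526, roughly R/2
```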
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies a single class of traffic
Drop-tail: arriving packets get dropped when queue is full, regardless of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not distinguish between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
TCP detects congestion from loss
• forces network to have long standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? Routers have a unified view of queuing behavior
Routers see actual queue occupancy (and can distinguish queueing delay from propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (power = throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving at a full queue causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD):
• if qlen > drop_level, drop each new packet with fixed probability p
• does not control misbehaving users
49
Random Early Detection (RED): detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < min_th, do nothing: low queueing, send packets through
If avg > max_th, drop packet: protection from misbehaving sources
Else, mark the packet with probability proportional to the queue length: notify sources of incipient congestion
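The three rules map to a small piecewise function; the parameter values in the calls are illustrative, and real RED computes the average as an EWMA of the instantaneous queue length.

```python
# Sketch of the RED marking rule with thresholds min_th, max_th and
# ceiling probability max_p.

def red_drop_prob(avg, min_th, max_th, max_p):
    """Drop/mark probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0                 # low queueing: let packets through
    if avg >= max_th:
        return 1.0                 # protect against misbehaving sources
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg - min_th) / (max_th - min_th)

print(red_drop_prob(5, 10, 30, 0.1))     # 0.0
print(red_drop_prob(20, 10, 30, 0.1))    # 0.05
print(red_drop_prob(35, 10, 30, 0.1))    # 1.0
```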
51
RED Operation
[Figure: P(drop) vs. average queue length: probability 0 below min_th, rising linearly to max_P at max_th, then 1.0 above max_th]
52
Improving QOS in IP Networks
Thus far: "making the best of best effort"
Future: next-generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link: bursts of FTP can congest the router and cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more)
what if applications misbehave (audio sends at a higher rate than declared)? policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QOS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Call Admission: flow declares its needs; network may block call (e.g., busy signal) if it cannot meet them
Principle 4
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling And Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue; real-world example?
discard policy: if a packet arrives to a full queue, who to discard?
• tail drop: drop arriving packet
• priority: drop/remove on priority basis
• random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest-priority queued packet
multiple classes with different priorities; class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output port bandwidth allocated to the queue
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR
A quantum of service that is proportional to the weight of the queue and is expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty, the value of DeficitCounter is set to zero
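One DWRR service round as described above can be sketched compactly; the queue contents and quanta mirror the example that follows, and the function name is illustrative.

```python
# One DWRR round: each visit adds the queue's quantum to its DeficitCounter,
# head packets are sent while they fit, and an emptied queue's counter resets.
from collections import deque

def dwrr_round(queues, quanta, deficits):
    """Serve each non-empty queue once; return list of (queue_idx, size) sent."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            continue
        deficits[i] += quanta[i]
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()          # packet sizes in bytes
            deficits[i] -= pkt
            sent.append((i, pkt))
        if not q:
            deficits[i] = 0            # idle queues do not hoard credits
    return sent

queues = [deque([600, 400, 300]), deque([400, 400, 300])]
deficits = [0, 0]
print(dwrr_round(queues, quanta=[1000, 500], deficits=deficits))
# [(0, 600), (0, 400), (1, 400)]: queue 0's leftover 300 and queue 1's
# second 400 wait for the next round's quantum.
```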
66
DWRR Example
[Figure: three queues holding packets of 600, 400, 300 bytes; 400, 400, 300 bytes; and 600, 300, 400 bytes]
Queue 1: 50% BW, quantum[1] = 1000
Queue 2: 25% BW, quantum[2] = 500
Queue 3: 25% BW, quantum[3] = 500
Modified Deficit Round Robin gives strict priority to one class, say VoIP
67
Policing Mechanisms
Goal: limit traffic so it does not exceed declared parameters
Three commonly used criteria:
(Long-term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 6000 pkts per min (ppm) avg.; 1500 ppm peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket is full
over an interval of length t, the number of packets admitted is at most (r·t + b)
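A token-bucket policer matching the description above can be sketched as follows; charging one token per packet is an illustrative simplification (real policers usually charge by bytes).

```python
# Token bucket: at most b tokens, refilled at r tokens/sec; a packet is
# admitted only if a token is available, so any interval of length t
# admits at most r*t + b packets.

def token_bucket(arrivals, r, b):
    """arrivals: sorted packet arrival times (sec). Returns admit decisions."""
    tokens, last = b, 0.0            # bucket starts full
    admitted = []
    for t in arrivals:
        tokens = min(b, tokens + r * (t - last))   # refill, capped at b
        last = t
        if tokens >= 1:
            tokens -= 1
            admitted.append(True)
        else:
            admitted.append(False)
    return admitted

# burst of 3 at t=0 against b=2, r=1: the third packet finds no token
print(token_bucket([0.0, 0.0, 0.0, 1.0], r=1, b=2))
# [True, True, False, True]
```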
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee
[Figure: arriving traffic enters a token bucket (token rate r, bucket size b), then a WFQ scheduler with per-flow rate R; maximum delay D_max = b/R]
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
25
Case study ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell congested switch may lower ER value in cell
senderrsquo send rate thus minimum supportable rate on path
EFCI bit in data cells set to 1 in congested switch if data cell preceding RM cell has EFCI set sender sets CI bit
in returned RM cell
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
26
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission LastByteSent-LastByteAcked
CongWin
Roughly
CongWin is dynamic function of perceived network congestion
How does sender perceive congestion
loss event = timeout or 3 duplicate acks
TCP sender reduces rate (CongWin) after loss event
three mechanisms AIMD
slow start
conservative after timeout events
rate = CongWin
RTT Bytessec
27
TCP AIMD
8 Kbytes
16 Kbytes
24 Kbytes
time
congestionwindow
multiplicative decrease cut CongWin in half after loss event
additive increase increase CongWin by 1 MSS every RTT in the absence of loss events probing
Long-lived TCP connection
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires window size W = 83,333 in-flight segments
Throughput in terms of loss rate: Throughput = 1.22 * MSS / (RTT * sqrt(p))
=> p = 2*10^-10
New versions of TCP for high-speed needed!
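The slide's figures can be checked directly; a 1500-byte segment carries 12,000 bits:

```python
MSS = 1500 * 8    # bits per 1500-byte segment
RTT = 0.1         # seconds
target = 10e9     # 10 Gbps

# Window needed: throughput = W * MSS / RTT  =>  W = throughput * RTT / MSS
W = target * RTT / MSS

# Invert Throughput = 1.22 * MSS / (RTT * sqrt(p)) for the loss rate p
p = (1.22 * MSS / (RTT * target)) ** 2

print(round(W))   # 83333 in-flight segments
print(p)          # about 2e-10
```

A loss rate of one segment in five billion is far below what real links deliver, hence the need for new high-speed TCP variants.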
36
Macroscopic TCP model
Deterministic packet losses: the window follows a loss/success sawtooth
1/p packets transmitted in a cycle (one loss per cycle)
37
TCP Model Cont'd
Equate the trapezoid area under the window curve, (3/8) W^2, to 1/p
=> Throughput = C * MSS / (RTT * sqrt(p)), with C = sqrt(3/2) ~ 1.22
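Written out, the standard derivation behind the constant 1.22 is as follows (AIMD sawtooth between W/2 and W, one loss per cycle):

```latex
\frac{3}{8}W^2 = \frac{1}{p}
\;\Rightarrow\;
W = \sqrt{\frac{8}{3p}},
\qquad
\text{Throughput} = \frac{\tfrac{3}{4}W \cdot \mathrm{MSS}}{\mathrm{RTT}}
= \sqrt{\frac{3}{2p}}\,\frac{\mathrm{MSS}}{\mathrm{RTT}}
\approx \frac{1.22\,\mathrm{MSS}}{\mathrm{RTT}\sqrt{p}}
```

The average window over a cycle is 3W/4, and the cycle lasts W/2 RTTs, which is where the trapezoid area (3/8) W^2 comes from.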
38
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
TCP Fairness
39
Why is TCP fair?
Two competing sessions: additive increase gives a slope of 1 as throughput increases; multiplicative decrease reduces throughput proportionally.
[Figure: Connection 2 throughput vs. Connection 1 throughput; repeated additive increase (slope 1) and loss-triggered halving move the operating point toward the equal-bandwidth-share line, R/2 each]
congestion avoidance: additive increase; loss: decrease window by factor of 2
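The convergence argument can be seen numerically. This toy simulation (units and starting rates are arbitrary) runs two synchronized AIMD flows sharing capacity R: each adds 1 unit per step, and both halve when the sum exceeds R.

```python
R = 100.0
x1, x2 = 80.0, 10.0          # deliberately unfair starting point
for _ in range(200):
    x1 += 1.0                # additive increase, slope 1
    x2 += 1.0
    if x1 + x2 > R:          # shared loss event: multiplicative decrease
        x1 /= 2
        x2 /= 2
print(abs(x1 - x2))          # gap shrinks toward 0: fair share
```

Each halving cuts the difference x1 - x2 in half while additive increase leaves it unchanged, so the allocation converges to equal shares regardless of the starting point.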
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
Instead they use UDP: pump audio/video at constant rate, tolerate packet loss.
Research area: TCP-friendly congestion control, DCCP
Fairness and parallel TCP connections
Nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this.
Example: link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10; a new app asking for 10 TCPs gets R/2.
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction:
FIFO: scheduling discipline
Drop-tail: drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility for congestion control completely to the edges (e.g., TCP)
Does not distinguish between different flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization; TCP detects congestion from loss, forcing the network to have long standing queues in steady state.
Lock-out problem: drop-tail routers treat bursty traffic poorly; traffic gets synchronized easily, allowing a few flows to monopolize the queue space.
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior:
Routers see actual queue occupancy (distinguishing queuing delay from propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (power = throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving when the queue is full causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before the queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD):
- if qlen > drop_level, drop each new packet with fixed probability p
- does not control misbehaving users
49
Random Early Detection (RED)
Detect incipient congestion; assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain a running average of queue length
If avg < min_th: do nothing (low queuing, send packets through)
If avg > max_th: drop packet (protection from misbehaving sources)
Else: mark (drop) packet with probability proportional to queue length (notify sources of incipient congestion)
51
RED Operation
[Figure: drop probability P(drop) vs. average queue length; zero below min_th, rising linearly to max_P at max_th, then jumping to 1.0]
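The piecewise-linear marking rule from the figure can be sketched directly (parameter names are illustrative; the "gentle RED" variant and the EWMA averaging of queue length are omitted):

```python
def red_drop_probability(avg, min_th, max_th, max_p):
    """RED marking probability as a function of the averaged queue length."""
    if avg < min_th:
        return 0.0       # low queuing: pass packets through
    if avg >= max_th:
        return 1.0       # protection from misbehaving sources
    # linear ramp from 0 at min_th up to max_p at max_th
    return max_p * (avg - min_th) / (max_th - min_th)
```

In a real implementation `avg` would be the exponentially weighted moving average of the instantaneous queue length, not the raw occupancy.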
52
Improving QoS in IP Networks
Thus far: "making the best of best effort" (a simple model for sharing and congestion studies)
Future: next-generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
53
Principles for QoS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link; bursts of FTP can congest the router and cause audio loss; we want to give priority to audio over FTP.
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly.
54
Principles for QoS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations.
Marking and policing happen at the network edge, similar to the ATM UNI (User Network Interface).
Principle 2: provide protection (isolation) for one class from others.
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation.
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible.
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity.
Principle 4: Call Admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them.
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this ...
58
Scheduling and Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue; real-world example?
discard policy: if a packet arrives to a full queue, which to discard?
- Tail drop: drop the arriving packet
- priority: drop/remove on priority basis
- random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest-priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g. IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
- A weight that defines the percentage of the output port bandwidth allocated to the queue.
- A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to DeficitCounter, then DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement DeficitCounter by the size of each transmitted packet until either the size of the packet at the head of the queue is greater than DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
66
DWRR Example
Queue 1: 50% BW, quantum[1] = 1000; packets of 600, 400, 300 bytes
Queue 2: 25% BW, quantum[2] = 500; packets of 400, 400, 300 bytes
Queue 3: 25% BW, quantum[3] = 500; packets of 600, 300, 400 bytes
Modified Deficit Round Robin gives priority to one class, say VoIP.
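A minimal sketch of one classic-DWRR service round, which can be checked against the example above (the function name and data layout are mine, not from any particular router implementation):

```python
from collections import deque

def dwrr_round(queues, quantum, deficit):
    """One DWRR service round.
    queues: list of deques of packet sizes in bytes;
    quantum, deficit: per-queue byte credits (deficit is updated in place)."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            continue
        deficit[i] += quantum[i]          # add this queue's quantum
        while q and q[0] <= deficit[i]:   # send while head packet fits
            pkt = q.popleft()
            deficit[i] -= pkt
            sent.append((i, pkt))
        if not q:
            deficit[i] = 0                # empty queue forfeits leftover credit
    return sent
```

With the example's quanta (1000, 500, 500), the first round serves queue 1's 600- and 400-byte packets and queue 2's 400-byte packet, while queue 3 banks its 500-byte credit because its 600-byte head packet does not yet fit.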
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters. Three commonly used criteria:
(Long term) Average Rate: how many pkts can be sent per unit time (in the long run); the crucial question is the interval length: 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 6000 pkts per min (ppm) avg; 1500 ppm peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec, unless bucket full
over an interval of length t: number of packets admitted is less than or equal to (r t + b)
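The bullet points above can be sketched as a small policer. This is an illustrative model (one token per packet, caller supplies timestamps); a real shaper would also decide whether non-conforming packets are queued or marked rather than dropped.

```python
class TokenBucket:
    """Token-bucket policer: bucket of b tokens, refilled at r tokens/sec.
    Over any interval t, admissions are bounded by r*t + b."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b          # start with a full bucket
        self.last = 0.0

    def admit(self, now, size=1):
        # refill for elapsed time, capped at the bucket size b
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True          # conforming packet
        return False             # non-conforming: drop (or queue/mark)
```

A burst at time 0 drains at most b tokens at once; afterwards admissions are limited to the refill rate r.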
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee:
[Figure: arriving traffic is shaped by a token bucket (token rate r, bucket size b) and served by WFQ at per-flow rate R; maximum delay D_max = b/R]
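The bound follows directly: in the worst case a full bucket of b bits arrives in one burst, and WFQ guarantees the flow a service rate of at least R, so the last bit of the burst waits at most

```latex
D_{\max} = \frac{b}{R}
```

(a sketch that ignores packetization effects; r must be at most R for the queue to stay bounded).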
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion: scenario 1
- Causes/costs of congestion: scenario 2
- Slide 5
- Causes/costs of congestion: scenario 3
- Example
- Max-min fair allocation
- How to define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness: Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study: ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary: TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Cont'd
- TCP Fairness
- Why is TCP fair?
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QoS in IP Networks
- Principles for QoS Guarantees
- Principles for QoS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling and Policing Mechanisms
- Scheduling Policies: more
- Scheduling Policies: still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
27
TCP AIMD
[Figure: congestion window (8, 16, 24 KBytes) vs. time for a long-lived TCP connection; the classic sawtooth]
multiplicative decrease: cut CongWin in half after loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events: probing
cwnd += SMSS*SMSS/cwnd   (*)
This adjustment is executed on every incoming non-duplicate ACK.
Equation (*) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT.
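Why the per-ACK update approximates +1 MSS per RTT: roughly cwnd/SMSS non-duplicate ACKs arrive each RTT, and each adds SMSS*SMSS/cwnd. A quick check with illustrative values:

```python
SMSS = 1460.0            # sender MSS in bytes (illustrative)
cwnd = 10 * SMSS         # window during congestion avoidance

# One RTT's worth of ACKs, each applying equation (*)
acks_per_rtt = int(cwnd // SMSS)
for _ in range(acks_per_rtt):
    cwnd += SMSS * SMSS / cwnd

print(cwnd / SMSS)       # close to 11: about one full segment gained per RTT
```

The growth comes out slightly under one full segment because cwnd in the denominator rises as the ACKs arrive, which is why the slide calls it an approximation.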
29
TCP Slow Start
When connection begins, CongWin = 1 MSS.
Example: MSS = 500 bytes & RTT = 200 msec; initial rate = 20 kbps
available bandwidth may be >> MSS/RTT: desirable to quickly ramp up to a respectable rate
When connection begins, increase rate exponentially fast until first loss event.
30
TCP Slow Start (more)
When connection begins, increase rate exponentially until first loss event:
double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast.
[Figure: Host A sends one segment, then two, then four; the window doubles each RTT as ACKs arrive from Host B]
31
Refinement
After 3 dup ACKs:
Threshold is set to CongWin/2; CongWin is cut in half
window then grows linearly
But after timeout event:
Threshold set to CongWin/2, and CongWin instead set to 1 MSS
window then grows exponentially to the threshold, then grows linearly
Philosophy:
- 3 dup ACKs indicate the network is capable of delivering some segments
- timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
28
Additive Increase
increase CongWin by 1 MSS every RTT in the absence of loss events probing
cwnd += SMSSSMSScwnd () This adjustment is executed on every
incoming non-duplicate ACK Equation () provides an acceptable
approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT
29
TCP Slow Start
When connection begins CongWin = 1 MSS Example MSS = 500
bytes amp RTT = 200 msec
initial rate = 20 kbps
available bandwidth may be gtgt MSSRTT desirable to quickly ramp
up to respectable rate
When connection begins increase rate exponentially fast until first loss event
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before the queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD)
• If qlen > drop level, drop each new packet with fixed probability p
• Does not control misbehaving users
49
Random Early Detection (RED): detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain a running average of the queue length
If avg < minth, do nothing: low queuing, send packets through
If avg > maxth, drop the packet: protection from misbehaving sources
Else mark the packet with a probability proportional to the queue length: notify sources of incipient congestion
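A minimal sketch of this algorithm in Python, assuming a standard exponentially weighted moving average for the queue length and the usual linear marking ramp between the thresholds (the parameter names and the EWMA weight are generic choices, not from any particular router):

```python
import random

class Red:
    """Illustrative RED sketch (not a production AQM implementation)."""
    def __init__(self, min_th, max_th, max_p, weight=0.002):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight = weight          # EWMA weight for the average queue length
        self.avg = 0.0                # running average queue length

    def on_arrival(self, qlen):
        """Return 'enqueue', 'mark', or 'drop' for the arriving packet."""
        # Maintain the running average of the instantaneous queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * qlen
        if self.avg < self.min_th:
            return "enqueue"          # low queuing: send packets through
        if self.avg > self.max_th:
            return "drop"             # protection from misbehaving sources
        # Between the thresholds: mark with probability rising
        # linearly from 0 at min_th to max_p at max_th.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "mark" if random.random() < p else "enqueue"
```

Using the average rather than the instantaneous queue length is what lets RED absorb short bursts without marking them.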
51
RED Operation
[Figure: drop probability P(drop) as a function of average queue length: zero below minth, rising linearly to maxP at maxth, then 1.0 above maxth]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next-generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link; bursts of FTP can congest the router and cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations
Marking and policing at the network edge: similar to the ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission; a flow declares its needs, and the network may block the call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this ...
58
Scheduling And Policing Mechanisms
Scheduling: choose the next packet to send on the link
FIFO (first in first out) scheduling: send in order of arrival to the queue; real-world example?
Discard policy: if a packet arrives to a full queue, who to discard?
• Tail drop: drop the arriving packet
• Priority: drop/remove on a priority basis
• Random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit the highest-priority queued packet
Multiple classes with different priorities: a class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
Round-robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing:
generalized round robin
each class gets a weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware. This allows DWRR to support the arbitration of output-port bandwidth on high-speed interfaces in both the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output-port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to DeficitCounter, then DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement DeficitCounter by the size of each transmitted packet until either the size of the packet at the head of the queue is greater than DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
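The visit logic above can be sketched as a single service round in Python (a simplified model; the queue contents and quantum values used in the usage below are illustrative, borrowing the quanta from the example slide):

```python
from collections import deque

def dwrr_round(queues, quanta, deficits):
    """Run one DWRR service round over per-class FIFO queues of packet sizes.
    Returns a list of (queue_index, packet_size) in transmission order."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0            # empty queue: reset its DeficitCounter
            continue
        deficits[i] += quanta[i]       # add the queue's quantum on each visit
        # Transmit head packets while they fit within the deficit;
        # an oversized head packet leaves its credit saved for next round.
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()
            deficits[i] -= pkt
            sent.append((i, pkt))
    return sent

queues   = [deque([600, 400, 300]), deque([400, 400, 300]), deque([600, 300, 400])]
quanta   = [1000, 500, 500]            # ~50% / 25% / 25% of the output bandwidth
deficits = [0, 0, 0]
print(dwrr_round(queues, quanta, deficits))  # [(0, 600), (0, 400), (1, 400)]
```

In the first round, queue 3's 600-byte head packet exceeds its 500-byte deficit, so it transmits nothing but carries the 500 credits into the next round.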
66
DWRR Example
Queue 1: 50% BW, quantum[1] = 1000; packets of 600, 400, 300 bytes
Queue 2: 25% BW, quantum[2] = 500; packets of 400, 400, 300 bytes
Queue 3: 25% BW, quantum[3] = 500; packets of 600, 300, 400 bytes
Modified Deficit Round Robin gives priority to one class, say VoIP
67
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly-used criteria:
(Long-term) Average Rate: how many pkts can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g., 6000 pkts per min (ppm) avg; 1500 ppm peak rate
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
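A sketch of a token-bucket policer, assuming tokens accumulate continuously at rate r and each packet costs one token (the class name and the one-token-per-packet simplification are illustrative; real policers typically charge per byte):

```python
class TokenBucket:
    """Toy token-bucket policer: rate r tokens/sec, bucket size b."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b               # bucket starts full: allows an initial burst of b
        self.last = 0.0               # time of the previous refill

    def allow(self, now, need=1):
        # Refill at rate r since the last check, capped at bucket size b.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= need:
            self.tokens -= need       # conforming packet: admit and spend tokens
            return True
        return False                  # non-conforming: drop (or delay) the packet

tb = TokenBucket(r=2, b=3)            # 2 tokens/sec, burst of at most 3
print(sum(tb.allow(0.0) for _ in range(5)))  # 3: burst limited by bucket size b
```

Over any interval of length t the policer admits at most r·t + b packets, matching the bound on the slide: b from the initial bucket contents plus r·t newly generated tokens.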
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee
[Figure: arriving traffic enters a token bucket (token rate r, bucket size b) feeding a WFQ scheduler with guaranteed per-flow rate R; maximum delay D_max = b/R]
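A quick numeric check of the D_max = b/R bound, with made-up values for b and R:

```python
# Worked check of the token-bucket + WFQ delay bound D_max = b / R.
# Illustrative values: a 4000-byte bucket, WFQ guaranteeing 100,000 bytes/sec.
b = 4000        # bucket size, bytes
R = 100_000     # guaranteed per-flow WFQ service rate, bytes/sec
D_max = b / R
print(D_max)    # worst case: a full burst of b bytes arrives at once and drains at rate R
```

Intuitively, the token bucket caps any instantaneous backlog at b bytes, and WFQ drains that backlog at no less than R bytes/sec, so no conforming packet waits longer than b/R seconds.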
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion scenario 1
- Causes/costs of congestion scenario 2
- Slide 5
- Causes/costs of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
30
TCP Slow Start (more) When connection
begins increase rate exponentially until first loss event double CongWin every
RTT
done by incrementing CongWin for every ACK received
Summary initial rate is slow but ramps up exponentially fast
Host A
one segment
RTT
Host B
time
two segments
four segments
31
Refinement After 3 dup ACKs
CongWin is cut in half Threshold is set to CongWin
window then grows linearly
But after timeout event
Threshold set to CongWin2 and
CongWin instead set to 1 MSS
window then grows exponentially
to a threshold then grows linearly
bull 3 dup ACKs indicates network capable of delivering some segmentsbull timeout before 3 dup ACKs is ldquomore alarmingrdquo
Philosophy
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service is proportional to the weight of the queue and is expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter, or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
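The per-visit procedure above can be sketched as a minimal Python simulation (the function name and queue representation are illustrative, not taken from any router implementation):

```python
from collections import deque

def dwrr(queues, quanta, rounds):
    """Serve queues of packet sizes (bytes) by Deficit Weighted Round Robin."""
    deficit = [0] * len(queues)
    sent = []                           # (queue index, packet size) in service order
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficit[i] = 0          # an empty queue forfeits its saved credits
                continue
            deficit[i] += quanta[i]     # add the per-round quantum
            # transmit while the head packet fits within the deficit
            while q and q[0] <= deficit[i]:
                pkt = q.popleft()
                deficit[i] -= pkt
                sent.append((i, pkt))
    return sent
```

Running this on queues of 600/400/300, 400/400/300 and 600/300/400-byte packets with quanta 1000/500/500 drains all nine packets in three rounds, with the quantum-1000 queue served first in each round.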
66
DWRR Example
[Figure: three queues holding variable-length packets of 600/400/300, 400/400/300, and 600/300/400 bytes]
Queue 1: 50% BW, quantum[1] = 1000
Queue 2: 25% BW, quantum[2] = 500
Queue 3: 25% BW, quantum[3] = 500
Modified Deficit Round Robin: gives priority to one class, say VoIP.
67
Policing Mechanisms
Goal: limit traffic so it does not exceed declared parameters.
Three commonly used criteria:
- (Long-term) Average Rate: how many packets can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average.
- Peak Rate: e.g., 6000 pkts per min (ppm) avg.; 1500 ppm peak rate.
- (Max) Burst Size: max number of packets sent consecutively (with no intervening idle).
68
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and Average Rate.
- bucket can hold b tokens
- tokens generated at rate r tokens/sec, unless bucket is full
- over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
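The admission rule can be sketched as a small simulation (the function name and the (time, size) event-list interface are assumptions for illustration):

```python
def token_bucket_admit(arrivals, r, b):
    """Police a stream: arrivals is a list of (time_sec, size_in_tokens) pairs,
    r the token generation rate, b the bucket depth. Starts with a full bucket."""
    tokens, last_t = b, 0.0
    admitted = []
    for t, need in arrivals:
        tokens = min(b, tokens + r * (t - last_t))   # refill at rate r, cap at b
        last_t = t
        if need <= tokens:
            tokens -= need
            admitted.append((t, need))
        # else: the policer drops (or marks) the non-conforming packet
    return admitted
```

A burst of three back-to-back unit packets against b = 2 admits only two; once the bucket refills, later packets conform again. Over any interval of length t the admitted volume is bounded by r·t + b, as the slide states.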
69
Policing Mechanisms (more)
Token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee:
[Figure: arriving traffic → token bucket (token rate r, bucket size b) → WFQ (per-flow rate R)]
D_max = b/R
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion: scenario 1
- Causes/costs of congestion: scenario 2
- Slide 5
- Causes/costs of congestion: scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
31
Refinement
After 3 dup ACKs:
- CongWin is cut in half; Threshold is set to CongWin
- window then grows linearly
But after timeout event:
- Threshold set to CongWin/2 and CongWin instead set to 1 MSS
- window then grows exponentially to the threshold, then grows linearly
Philosophy:
- 3 dup ACKs indicates the network is capable of delivering some segments
- timeout before 3 dup ACKs is "more alarming"
32
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.
Implementation:
- Variable Threshold
- At a loss event, Threshold is set to 1/2 of CongWin just before the loss event
33
Summary: TCP Congestion Control
- When CongWin is below Threshold, sender is in slow-start phase; window grows exponentially.
- When CongWin is above Threshold, sender is in congestion-avoidance phase; window grows linearly.
- When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
- When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
34
TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
Action: CongWin = CongWin + MSS·(MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: Fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

Event: Timeout
State: SS or CA
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
Action: Increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
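The event table above can be sketched as a small state machine. This is a minimal illustration, assuming CongWin is measured in MSS units and an arbitrary initial Threshold; it is not any real TCP stack:

```python
MSS = 1  # measure CongWin in MSS units for simplicity

class TcpSender:
    """Sketch of the sender actions in the table (illustrative only)."""
    def __init__(self, threshold=64 * MSS):
        self.cong_win = 1 * MSS
        self.threshold = threshold
        self.state = "SS"                      # Slow Start

    def on_new_ack(self):                      # ACK for previously unacked data
        if self.state == "SS":
            self.cong_win += MSS               # doubles CongWin every RTT
            if self.cong_win > self.threshold:
                self.state = "CA"              # Congestion Avoidance
        else:
            self.cong_win += MSS * MSS / self.cong_win  # +1 MSS per RTT

    def on_triple_dup_ack(self):               # fast recovery
        self.threshold = self.cong_win / 2
        self.cong_win = self.threshold         # multiplicative decrease
        self.state = "CA"

    def on_timeout(self):                      # "more alarming": restart slow start
        self.threshold = self.cong_win / 2
        self.cong_win = 1 * MSS
        self.state = "SS"
```

Driving it with a stream of new ACKs shows the SS → CA transition once CongWin crosses Threshold, and the two loss events produce the two different decreases in the table.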
35
TCP Futures
- Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
- Requires window size W = 83,333 in-flight segments
- Throughput in terms of loss rate: Throughput = (1.22 · MSS) / (RTT · √p)
- ⇒ p = 2·10⁻¹⁰
- New versions of TCP for high-speed networks needed
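A quick check of the slide's numbers using the macroscopic throughput formula (variable names are just for illustration):

```python
MSS = 1500 * 8        # segment size in bits
RTT = 0.1             # round-trip time in seconds
target = 10e9         # desired throughput: 10 Gbps

# throughput = W * MSS / RTT  =>  window needed to sustain 10 Gbps
W = target * RTT / MSS                    # ~83,333 segments in flight

# macroscopic model: throughput = 1.22 * MSS / (RTT * sqrt(p))
# solving for the loss rate p that permits the target throughput:
p = (1.22 * MSS / (RTT * target)) ** 2    # ~2e-10
```

A loss rate of 2·10⁻¹⁰ is roughly one loss per five billion segments, which is why the slide concludes that new TCP variants are needed at these speeds.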
36
Macroscopic TCP model
- Deterministic packet losses
- 1/p packets transmitted in a cycle
[Figure: sawtooth of window size over time, one loss event per cycle]
37
TCP Model Cont'd
Equate the trapezoid area (3/8)·W² under the sawtooth to 1/p, the number of packets per cycle:
(3/8)·W² = 1/p  ⇒  W = √(8/(3p))
Average throughput ≈ (3/4)·W·MSS/RTT = C·MSS/(RTT·√p), with C = √(3/2) ≈ 1.22.
38
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
39
Why is TCP fair
Two competing sessions:
- Additive increase gives a slope of 1 as throughput increases
- Multiplicative decrease decreases throughput proportionally
[Figure: Connection 2 throughput vs. Connection 1 throughput, axes 0..R; the trajectory alternates congestion-avoidance additive increase with loss-triggered halving of the window, converging toward the equal-bandwidth-share line]
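The convergence argument can be illustrated with a toy simulation of two AIMD flows (capacity, starting rates, and step size are arbitrary assumptions):

```python
def aimd(x1, x2, capacity, steps, alpha=1.0):
    """Two flows sharing one bottleneck: additive increase each step,
    both halve their rate when the combined load exceeds capacity."""
    for _ in range(steps):
        x1 += alpha                 # slope-1 movement in the (x1, x2) plane
        x2 += alpha
        if x1 + x2 > capacity:      # loss event at the bottleneck
            x1 /= 2                 # multiplicative decrease is proportional,
            x2 /= 2                 #   so the gap x1 - x2 also halves
    return x1, x2
```

Starting far apart (1 vs. 99 on a capacity-100 link), the gap between the two rates halves at every loss event while additive increase leaves it unchanged, so the allocations end up essentially equal.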
40
Fairness (more)
Fairness and UDP:
- Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
- Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss.
- Research area: TCP-friendly congestion control, DCCP.
Fairness and parallel TCP connections:
- Nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this.
- Example: a link of rate R supporting 9 connections. A new app asking for 1 TCP gets rate R/10; a new app asking for 10 TCPs gets R/2.
41
Queuing Disciplines
- Each router must implement some queuing discipline.
- Queuing allocates both bandwidth and buffer space:
  - Bandwidth: which packet to serve (transmit) next
  - Buffer space: which packet to drop next (when required)
- Queuing also affects latency.
42
Typical Internet Queuing: FIFO + drop-tail
- Simplest choice; used widely in the Internet
- FIFO (first-in-first-out): implies a single class of traffic
- Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
- Important distinction: FIFO is a scheduling discipline; drop-tail is a drop policy
43
FIFO + Drop-tail Problems
- Leaves responsibility for congestion control completely to the edges (e.g., TCP)
- Does not distinguish between different flows
- No policing: send more packets, get more service
- Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues:
- Routers are forced to have large queues to maintain high utilization
- TCP detects congestion from loss, which forces the network to have long standing queues in steady state
Lock-out problem:
- Drop-tail routers treat bursty traffic poorly
- Traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
- Design active router queue management to aid congestion control.
- Why? The router has a unified view of queuing behavior:
  - Routers see actual queue occupancy (can distinguish queuing delay from propagation delay)
  - Routers can decide on transient congestion, based on workload
46
Design Objectives
- Keep throughput high and delay low: high power (throughput/delay)
- Accommodate bursts
- Queue size should reflect the ability to accept bursts rather than steady-state queuing
- Improve TCP performance with minimal hardware changes
47
Lock-out Problem
- Random drop: a packet arriving when the queue is full causes some random packet to be dropped.
- Drop front: on a full queue, drop the packet at the head of the queue.
- Random drop and drop front solve the lock-out problem, but not the full-queues problem.
48
Full Queues Problem
- Drop packets before the queue becomes full (early drop).
- Intuition: notify senders of incipient congestion.
- Example: early random drop (ERD):
  • If qlen > drop_level, drop each new packet with fixed probability p
  • Does not control misbehaving users
49
Random Early Detection (RED)
- Detect incipient congestion
- Assume hosts respond to lost packets
- Avoid window synchronization: randomly mark packets
- Avoid bias against bursty traffic
50
RED Algorithm
- Maintain a running average of the queue length.
- If avg < min_th, do nothing: low queuing, send packets through.
- If avg > max_th, drop the packet: protection from misbehaving sources.
- Else, mark/drop the packet with a probability proportional to the average queue length: notify sources of incipient congestion.
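The per-packet decision can be sketched as follows (the EWMA weight w = 0.002 is a typical value, an assumption rather than part of the slide):

```python
import random

def update_avg(avg, qlen, w=0.002):
    """EWMA of the instantaneous queue length (w is a typical value)."""
    return (1 - w) * avg + w * qlen

def red_action(avg, min_th, max_th, max_p):
    """RED decision for one arriving packet, given the average queue length."""
    if avg < min_th:
        return "enqueue"                    # low queuing: send packets through
    if avg >= max_th:
        return "drop"                       # protection from misbehaving sources
    # in between: mark/drop with probability rising linearly toward max_p
    p = max_p * (avg - min_th) / (max_th - min_th)
    return "mark" if random.random() < p else "enqueue"
```

Using the smoothed average rather than the instantaneous queue length is what lets RED absorb short bursts without marking them.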
51
RED Operation
[Figure: drop probability P(drop) vs. average queue length. P(drop) is 0 below min_th, rises linearly to max_P at max_th, and jumps to 1.0 beyond max_th.]
52
Improving QOS in IP Networks
Thus far: "making the best of best effort."
Future: next-generation Internet with QoS guarantees:
- RSVP: signaling for resource reservations
- Differentiated Services: differential guarantees
- Integrated Services: firm guarantees
- simple model for sharing and congestion studies
53
Principles for QOS Guarantees
- Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link.
- Bursts of FTP can congest the router and cause audio loss; we want to give priority to audio over FTP.
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly.
54
Principles for QOS Guarantees (more)
- What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations.
- Marking and policing at the network edge: similar to the ATM UNI (User Network Interface).
Principle 2: provide protection (isolation) for one class from others.
55
Principles for QOS Guarantees (more)
- Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation.
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible.
56
Principles for QOS Guarantees (more)
- Basic fact of life: cannot support traffic demands beyond link capacity.
Principle 4: Call Admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them.
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this…
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
32
Refinement (more)Q When should the
exponential increase switch to linear
A When CongWin gets to 12 of its value before timeout
Implementation Variable Threshold
At loss event Threshold is set to 12 of CongWin just before loss event
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
33
Summary TCP Congestion Control
When CongWin is below Threshold sender in slow-start phase window grows exponentially
When CongWin is above Threshold sender is in congestion-avoidance phase window grows linearly
When a triple duplicate ACK occurs Threshold set to CongWin2 and CongWin set to Threshold
When timeout occurs Threshold set to CongWin2 and CongWin is set to 1 MSS
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss.
Research area: TCP-friendly protocols, DCCP.
Fairness and parallel TCP connections
Nothing prevents an app from opening parallel connections between 2 hosts.
Web browsers do this.
Example: a link of rate R supports 9 connections; a new app asking for 1 TCP connection gets rate R/10; asking for 10 TCP connections, it gets R/2.
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space:
Bandwidth: which packet to serve (transmit) next
Buffer space: which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out): implies a single class of traffic
Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
Important distinction: FIFO is a scheduling discipline; drop-tail is a drop policy
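The FIFO + drop-tail combination above fits in a few lines (class name is hypothetical):

```python
from collections import deque

class FifoDropTail:
    """FIFO scheduling discipline with a drop-tail drop policy."""

    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False             # drop-tail: arriving packet is dropped,
        self.q.append(pkt)           # regardless of flow or importance
        return True

    def dequeue(self):
        """Serve in order of arrival."""
        return self.q.popleft() if self.q else None
```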
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not distinguish between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
• TCP detects congestion from loss
• This forces the network to have long-standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
• Traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior:
Routers see actual queue occupancy (can distinguish queuing delay from propagation delay)
Routers can detect transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high power (throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving when the queue is full causes some random packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion
Example: early random drop (ERD)
• If qlen > drop-level, drop each new packet with a fixed probability p
• Does not control misbehaving users
49
Random Early Detection (RED): detect incipient congestion
Assumes hosts respond to lost packets
Avoids window synchronization: randomly mark packets
Avoids bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < min_th: do nothing (low queuing, send packets through)
If avg > max_th: drop the packet (protection from misbehaving sources)
Else: mark the packet with a probability proportional to the queue length (notify sources of incipient congestion)
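The three cases above can be sketched directly (threshold names min_th/max_th/max_p follow the slides; function names are my own). RED computes avg as an EWMA of the instantaneous queue length, sketched in the second function:

```python
import random

def red_action(avg, min_th, max_th, max_p):
    """Return 'enqueue', 'mark', or 'drop' for one arriving packet."""
    if avg < min_th:
        return "enqueue"                 # low queuing: send packets through
    if avg > max_th:
        return "drop"                    # protection from misbehaving sources
    # Between thresholds: mark with probability rising linearly to max_p.
    p = max_p * (avg - min_th) / (max_th - min_th)
    return "mark" if random.random() < p else "enqueue"

def update_avg(avg, qlen, w=0.002):
    """Running (exponentially weighted) average of the queue length."""
    return (1 - w) * avg + w * qlen
```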
51
RED Operation
(figure: P(drop) vs. average queue length; the drop probability is 0 below min_th, rises linearly to max_P at max_th, then jumps to 1.0)
52
Improving QoS in IP Networks
Thus far: "making the best of best effort" (a simple model for sharing and congestion studies)
Future: a next-generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
53
Principles for QoS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link; bursts of FTP can congest the router and cause audio loss, so we want to give priority to audio over FTP.
Principle 1: packet marking is needed for the router to distinguish between different classes, along with a new router policy to treat packets accordingly.
54
Principles for QoS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations.
Marking and policing are done at the network edge, similar to the ATM UNI (User Network Interface).
Principle 2: provide protection (isolation) for one class from others.
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow makes inefficient use of bandwidth if the flow doesn't use its allocation.
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible.
56
Principles for QoS Guarantees (more)
Basic fact of life: we cannot support traffic demands beyond link capacity.
Principle 4: call admission. A flow declares its needs; the network may block the call (e.g., with a busy signal) if it cannot meet them.
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling And Policing Mechanisms
Scheduling: choose the next packet to send on the link
FIFO (first-in-first-out) scheduling: send in order of arrival to the queue (real-world example?)
Discard policy: if a packet arrives to a full queue, which one to discard?
• Tail drop: drop the arriving packet
• Priority: drop/remove on a priority basis
• Random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit the highest-priority queued packet
Multiple classes with different priorities; a class may depend on marking or other header info, e.g. IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
Round robin scheduling:
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware. This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces, both in the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
• A weight that defines the percentage of the output port bandwidth allocated to the queue.
• A DeficitCounter that specifies the total number of bytes the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to DeficitCounter, then DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
• A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
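A minimal sketch of one DWRR service round under the rules just described (my own function name; packets are represented only by their byte sizes, and the queue contents and quanta follow the 600/400/300, 400/400/300, 600/300/400 example with head-of-queue order assumed):

```python
from collections import deque

def dwrr_round(queues, quanta, deficits, sent):
    """One DWRR scheduler round over all queues."""
    for i, q in enumerate(queues):
        if not q:
            continue
        deficits[i] += quanta[i]              # credit the quantum on each visit
        while q and q[0] <= deficits[i]:      # head packet fits the deficit?
            pkt = q.popleft()
            deficits[i] -= pkt                # spend credits, transmit packet
            sent[i].append(pkt)
        if not q:
            deficits[i] = 0                   # an emptied queue forfeits credits

# Quanta 1000/500/500 correspond to 50%/25%/25% of the output bandwidth.
queues = [deque([600, 400, 300]), deque([400, 400, 300]), deque([600, 300, 400])]
quanta = [1000, 500, 500]
deficits = [0, 0, 0]
sent = [[], [], []]
dwrr_round(queues, quanta, deficits, sent)
```

After one round, queue 1 has sent 1000 bytes and queues 2 and 3 save their unused credits (100 and 500 bytes) for the next round, which is exactly the "deficit" mechanism.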
66
DWRR Example
(figure: three queues holding packets of sizes 600/400/300, 400/400/300, and 600/300/400 bytes)
Queue 1: 50% BW, quantum[1] = 1000
Queue 2: 25% BW, quantum[2] = 500
Queue 3: 25% BW, quantum[3] = 500
Modified Deficit Round Robin gives priority to one class, say VoIP.
67
Policing Mechanisms
Goal: limit traffic so it does not exceed declared parameters. Three commonly used criteria:
(Long-term) Average Rate: how many packets can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per second and 6000 packets per minute have the same average.
Peak Rate: e.g., 6000 packets per minute (ppm) average; 1500 ppm peak rate.
(Max) Burst Size: the maximum number of packets sent consecutively (with no intervening idle).
68
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and Average Rate
The bucket can hold b tokens
Tokens are generated at rate r tokens/sec (unless the bucket is full)
Over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
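A token-bucket policer following the bullets above, counting one token per packet (class name hypothetical):

```python
class TokenBucket:
    """Bucket of depth b tokens, refilled at r tokens/sec."""

    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b              # bucket starts full: allows a burst of b
        self.t = 0.0

    def conforms(self, now):
        """Admit one packet (one token) arriving at time `now`."""
        # Refill for the elapsed time, capped at the bucket depth b.
        self.tokens = min(self.b, self.tokens + (now - self.t) * self.r)
        self.t = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                 # non-conformant: drop or delay the packet

tb = TokenBucket(r=2.0, b=3)         # 2 tokens/sec, burst size 3
burst = [tb.conforms(0.0) for _ in range(4)]
print(burst)                         # the burst of 3 is admitted, the 4th is not
```

Over any interval of length t this admits at most r·t + b packets, which is the bound stated on the slide.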
69
Policing Mechanisms (more)
Token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee.
(figure: arriving traffic passes through a token bucket with token rate r and bucket size b, then into WFQ with per-flow rate R; maximum delay D_max = b/R)
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion scenario 1
- Causes/costs of congestion scenario 2
- Slide 5
- Causes/costs of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Cont'd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QoS in IP Networks
- Principles for QoS Guarantees
- Principles for QoS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
34
TCP sender congestion controlEvent State TCP Sender Action Commentary
ACK receipt for
previously unacked
data
Slow Start (SS) CongWin = CongWin + MSS
If (CongWin gt Threshold)
set state to ldquoCongestion
Avoidancerdquo
Resulting in a doubling of
CongWin every RTT
ACK receipt for
previously unacked
data
Congestion
Avoidance (CA)
CongWin = CongWin+MSS
(MSSCongWin)
Additive increase resulting in
increase of CongWin by 1 MSS
every RTT
Loss event detected
by triple duplicate
ACK
SS or CA Threshold = CongWin2
CongWin = Threshold
Set state to ldquoCongestion
Avoidancerdquo
Fast recovery implementing
multiplicative decrease CongWin
will not drop below 1 MSS
Timeout SS or CA Threshold = CongWin2
CongWin = 1 MSS
Set state to ldquoSlow Startrdquo
Enter slow start
Duplicate ACK SS or CA Increment duplicate ACK count
for segment being acked
CongWin and Threshold not
changed
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
35
TCP Futures
Example 1500 byte segments 100ms RTT want 10 Gbps throughput
Requires window size W = 83333 in-flight segments
Throughput in terms of loss rate
p = 210-10
New versions of TCP for high-speed needed
pRTT
MSS221
36
Macroscopic TCP model
Deterministic packet losses
1p packets transmitted in a cycle
losssuccess
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing Mechanisms. Goal: limit traffic so that it does not exceed declared parameters.
Three commonly used criteria:
(Long-term) Average Rate: how many packets can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per second and 6000 packets per minute
have the same average.
Peak Rate: e.g., 6000 pkts per min (ppm) average; 1500 ppm peak rate.
(Max) Burst Size: max number of packets sent consecutively (with no intervening idle).
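To make the interval-length point concrete, here is a small illustrative check (the arrival pattern is hypothetical): a source that conforms to "6000 packets per minute" can still burst far above "100 packets per second".

```python
# Hypothetical arrival pattern: 6000 packets in the first second, then 59 s idle.
arrivals = [6000] + [0] * 59          # packets per second over one minute

# Policer A: "100 packets per second", checked every second.
ok_per_sec = all(a <= 100 for a in arrivals)

# Policer B: "6000 packets per minute", checked over the whole minute.
ok_per_min = sum(arrivals) <= 6000

# Same long-term average rate, very different tolerance for bursts.
```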
68
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and
Average Rate.
bucket can hold b tokens
tokens generated at rate r tokens/sec unless the bucket is full
over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
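A minimal token-bucket policer can be sketched as follows. This is an illustration, not an implementation from the slides: the parameters r = 10 tokens/sec and b = 5 are assumed, one token admits one packet, and the bucket is assumed to start full.

```python
class TokenBucket:
    """Token-bucket policer sketch: the bucket holds at most b tokens,
    refilled at r tokens/sec; one token admits one packet.
    (Assumption: the bucket starts full.)"""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = float(b)
        self.last = 0.0

    def admit(self, now):
        # Add tokens accrued since the last call, capped at the bucket size b.
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Offer one packet every millisecond for 2 seconds against r = 10, b = 5.
bucket = TokenBucket(r=10, b=5)
admitted = sum(bucket.admit(ms / 1000) for ms in range(2000))
# Over any interval t, admissions cannot exceed r*t + b = 10*2 + 5 = 25.
```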
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee:
arriving traffic is shaped by a token bucket (token rate r, bucket size b) and served by WFQ at per-flow rate R
maximum delay: D_max = b/R
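The delay bound is easy to check numerically; the numbers below are illustrative, not from the slides.

```python
# Illustrative parameters: bucket size b = 8000 bits,
# guaranteed per-flow WFQ rate R = 1 Mb/s.
b_bits = 8000
R_bps = 1_000_000

# Worst case: a full bucket of backlog drains at exactly the guaranteed rate R.
D_max = b_bits / R_bps
# D_max = 0.008 s, i.e. an 8 ms delay bound for this flow.
```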
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
37
TCP Model Contd
Equate the trapozeid area 38 W2 under to 1p
22123 C
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
38
Fairness goal if K TCP sessions share same bottleneck link of bandwidth R each should have average rate of RK
TCP connection 1
bottleneckrouter
capacity R
TCP connection 2
TCP Fairness
39
Why is TCP fair
Two competing sessions Additive increase gives slope of 1 as throughout increases
multiplicative decrease decreases throughput proportionally
R
R
equal bandwidth share
Connection 1 throughputConnect
ion 2
th
roughput
congestion avoidance additive increaseloss decrease window by factor of 2
congestion avoidance additive increaseloss decrease window by factor of 2
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue. The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the DeficitCounter, the scheduler moves on to service the next queue. If the size of the packet at the head of the queue is less than or equal to the DeficitCounter, the DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
- A quantum of service that is proportional to the weight of the queue, expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
- The scheduler continues to dequeue packets and decrement the DeficitCounter by the size of each transmitted packet until either the size of the packet at the head of the queue is greater than the DeficitCounter or the queue is empty.
- If the queue is empty, the value of DeficitCounter is set to zero.
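The per-visit bookkeeping described above can be sketched in Python. This is a minimal illustrative model, not from the slides: queues are deques of packet sizes in bytes, and `transmit` is an assumed callback.

```python
from collections import deque

def dwrr_round(queues, quanta, deficits, transmit):
    """One DWRR service round over all queues.

    queues   -- list of deques of packet sizes (bytes), head at the left
    quanta   -- per-queue quantum (bytes), proportional to the queue's weight
    deficits -- per-queue DeficitCounter, carried over between rounds
    transmit -- callback invoked with (queue_index, packet_size)
    """
    for i, q in enumerate(queues):
        if not q:                      # scheduler visits only non-empty queues
            continue
        deficits[i] += quanta[i]       # add the quantum on each visit
        # dequeue head packets while the deficit counter covers them
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()
            deficits[i] -= pkt
            transmit(i, pkt)
        if not q:                      # an emptied queue forfeits leftover credits
            deficits[i] = 0
```

With the quanta from the example slide (1000, 500, 500), the first round sends the 600- and 400-byte packets of queue 1 and one 400-byte packet of queue 2, while queue 3's 600-byte head must wait for a second quantum.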
66
DWRR Example
[figure: three queues holding packets of 600, 400, 300 bytes; 400, 400, 300 bytes; and 600, 300, 400 bytes]
- Queue 1: 50% BW, quantum[1] = 1000
- Queue 2: 25% BW, quantum[2] = 500
- Queue 3: 25% BW, quantum[3] = 500
Modified Deficit Round Robin: gives priority to one class, say VoIP.
67
Policing Mechanisms
Goal: limit traffic so it does not exceed declared parameters. Three commonly used criteria:
- (Long-term) Average Rate: how many pkts can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average.
- Peak Rate: e.g., 6000 pkts per min (ppm) average; 1500 ppm peak rate
- (Max.) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and Average Rate.
- bucket can hold b tokens
- tokens generated at rate r tokens/sec unless bucket full
- over an interval of length t: number of packets admitted is less than or equal to (r·t + b)
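A token-bucket policer with the parameters above can be sketched as follows (an illustrative model where one token admits one packet; the arrival times are assumptions):

```python
class TokenBucket:
    """Token-bucket policer: bucket depth b, fill rate r tokens/sec."""

    def __init__(self, r, b):
        self.r = float(r)        # token generation rate (tokens/sec)
        self.b = float(b)        # bucket depth (max tokens)
        self.tokens = float(b)   # bucket starts full
        self.last = 0.0          # time of last update (sec)

    def admit(self, now):
        """Return True if a packet arriving at time `now` conforms."""
        # accrue tokens since the last arrival, capped at the bucket depth
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False             # non-conforming: drop (or mark) the packet
```

Because the bucket never holds more than b tokens and refills at rate r, any interval of length t admits at most r·t + b packets — exactly the bound stated above.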
69
Policing Mechanisms (more)
Token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee:
arriving traffic → token bucket (token rate r, bucket size b) → WFQ (per-flow rate R)
D_max = b/R
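A quick worked instance of the D_max = b/R bound (the numbers are illustrative assumptions, not from the slides):

```python
# Worst case: a full bucket of b bits arrives in a burst and must drain
# through a WFQ flow that is guaranteed a service rate of R bits/sec.
b = 8000         # bucket size, bits (assumed)
R = 1_000_000    # guaranteed per-flow rate, bits/sec (assumed)

D_max = b / R    # maximum queuing delay, seconds
print(D_max)     # 0.008 s, i.e. 8 ms
```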
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causes/costs of congestion: scenario 1
- Causes/costs of congestion: scenario 2
- Slide 5
- Causes/costs of congestion: scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model, Cont'd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
39
Why is TCP fair?
Two competing sessions:
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease decreases throughput proportionally
[figure: Connection 1 throughput vs. Connection 2 throughput, both axes up to the link rate R, with the "equal bandwidth share" diagonal; the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by factor of 2), converging toward the equal-share line]
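The convergence argument above can be checked with a toy simulation (starting rates, capacity, and step size are assumptions):

```python
def aimd_two_flows(x1, x2, capacity, rounds=200, alpha=1.0):
    """Two AIMD flows sharing one link.

    Each round both flows add `alpha` (additive increase); when their sum
    exceeds `capacity`, both halve (multiplicative decrease).
    """
    for _ in range(rounds):
        if x1 + x2 > capacity:       # loss event: both back off by factor of 2
            x1, x2 = x1 / 2, x2 / 2
        else:                        # congestion avoidance: slope-1 increase
            x1, x2 = x1 + alpha, x2 + alpha
    return x1, x2

# Wildly unequal start; each halving halves the gap while additive
# increase leaves it unchanged, so the shares converge.
a, b = aimd_two_flows(1.0, 80.0, capacity=100.0)
```

After 200 rounds the two rates differ by a fraction of a unit, as the equal-bandwidth-share diagram predicts.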
40
Fairness (more)
Fairness and UDP
- Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
- Instead they use UDP: pump audio/video at a constant rate, tolerate packet loss.
- Research area: TCP-friendly protocols, DCCP.
Fairness and parallel TCP connections
- Nothing prevents an app from opening parallel connections between 2 hosts.
- Web browsers do this.
- Example: link of rate R supporting 9 connections:
  - new app asks for 1 TCP, gets rate R/10
  - new app asks for 10 TCPs, gets R/2
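The example's arithmetic, under the idealized assumption that every competing TCP connection gets an equal per-connection share:

```python
R = 1.0          # link rate (normalized)
existing = 9     # connections already on the link

# New app opens 1 connection: 10 connections total, it holds 1 of them.
one_conn = R * 1 / (existing + 1)       # R/10

# New app opens 10 connections: 19 total, it holds 10 of them.
ten_conn = R * 10 / (existing + 10)     # 10R/19, slightly more than R/2
```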
41
Queuing Disciplines
Each router must implement some queuing discipline.
Queuing allocates both bandwidth and buffer space:
- Bandwidth: which packet to serve (transmit) next
- Buffer space: which packet to drop next (when required)
Queuing also affects latency.
42
Typical Internet Queuing: FIFO + drop-tail
- Simplest choice
- Used widely in the Internet
FIFO (first-in, first-out): implies a single class of traffic.
Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance.
Important distinction:
- FIFO: scheduling discipline
- Drop-tail: drop policy
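The scheduling-discipline/drop-policy distinction is easy to see in code. A minimal sketch (capacity in packets is an assumption):

```python
from collections import deque

class FifoDropTail:
    """FIFO scheduling discipline + drop-tail drop policy."""

    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity     # buffer space, in packets

    def enqueue(self, pkt):
        """Drop policy: tail-drop the arrival when the buffer is full."""
        if len(self.q) >= self.capacity:
            return False             # dropped, regardless of flow or importance
        self.q.append(pkt)
        return True

    def dequeue(self):
        """Scheduling discipline: serve packets in order of arrival."""
        return self.q.popleft() if self.q else None
```

Swapping the body of `enqueue` (e.g. evicting a random queued packet instead) changes the drop policy while leaving the FIFO scheduling discipline untouched.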
43
FIFO + Drop-tail Problems
- Leaves responsibility for congestion control completely to the edges (e.g., TCP)
- Does not separate between different flows
- No policing: send more packets, get more service
- Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues
- Routers are forced to have large queues to maintain high utilization.
- TCP detects congestion from loss, forcing the network to have long-standing queues in steady state.
Lock-out problem
- Drop-tail routers treat bursty traffic poorly.
- Traffic gets synchronized easily, allowing a few flows to monopolize the queue space.
45
Active Queue Management
Design active router queue management to aid congestion control.
Why? The router has a unified view of queuing behavior:
- routers see actual queue occupancy (distinguishing queuing delay from propagation delay)
- routers can decide on transient congestion, based on workload
46
Design Objectives
- Keep throughput high and delay low: high power (throughput/delay)
- Accommodate bursts
- Queue size should reflect the ability to accept bursts rather than steady-state queuing
- Improve TCP performance with minimal hardware changes
47
Lock-out Problem
- Random drop: a packet arriving when the queue is full causes some random queued packet to be dropped.
- Drop front: on a full queue, drop the packet at the head of the queue.
Random drop and drop front solve the lock-out problem, but not the full-queues problem.
48
Full Queues Problem
Drop packets before the queue becomes full (early drop).
Intuition: notify senders of incipient congestion.
Example: early random drop (ERD):
- if qlen > drop-level, drop each new packet with fixed probability p
- does not control misbehaving users
49
Random Early Detection (RED)
- Detect incipient congestion
- Assume hosts respond to lost packets
- Avoid window synchronization: randomly mark packets
- Avoid bias against bursty traffic
50
RED Algorithm
- Maintain a running average of the queue length.
- If avg < min_th, do nothing: low queuing, send packets through.
- If avg > max_th, drop the packet: protection from misbehaving sources.
- Else, mark the packet with a probability proportional to the queue length: notify sources of incipient congestion.
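The three branches can be sketched as below. The exponentially weighted moving average and its weight are common RED choices assumed here, not prescribed by the slides:

```python
import random

def red_action(q_len, avg, min_th, max_th, max_p, weight=0.002):
    """One RED decision for an arriving packet.

    Returns (updated average, action), where action is
    'enqueue', 'mark', or 'drop'.
    """
    avg = (1 - weight) * avg + weight * q_len    # running average of queue length
    if avg < min_th:
        return avg, 'enqueue'                    # low queuing: send through
    if avg > max_th:
        return avg, 'drop'                       # protect against misbehavers
    # between thresholds: mark with probability rising linearly to max_p
    p = max_p * (avg - min_th) / (max_th - min_th)
    return avg, 'mark' if random.random() < p else 'enqueue'
```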
51
RED Operation
[figure: drop probability P(drop) vs. average queue length; P(drop) is 0 below min_th, rises linearly from 0 to max_P between min_th and max_th, and jumps to 1.0 above max_th]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort."
Future: a next-generation Internet with QoS guarantees:
- RSVP: signaling for resource reservations
- Differentiated Services: differential guarantees
- Integrated Services: firm guarantees
- a simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: a 1 Mbps IP phone and FTP share a 1.5 Mbps link:
- bursts of FTP can congest the router, causing audio loss
- want to give priority to audio over FTP
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly.
54
Principles for QoS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations.
Marking and policing at the network edge: similar to the ATM UNI (User Network Interface).
Principle 2: provide protection (isolation) for one class from others.
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation.
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible.
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity.
Principle 4: call admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them.
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
40
Fairness (more)
Fairness and UDP
Multimedia apps often do not use TCP do not want rate
throttled by congestion control
Instead use UDP pump audiovideo at
constant rate tolerate packet loss
Research area TCP friendly DCCP
Fairness and parallel TCP connections
nothing prevents app from opening parallel cnctions between 2 hosts
Web browsers do this
Example link of rate R supporting 9 cnctions new app asks for 1 TCP gets
rate R10
new app asks for 10 TCPs gets R2
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
41
Queuing Disciplines
Each router must implement some queuing discipline
Queuing allocates both bandwidth and buffer space Bandwidth which packet to serve (transmit)
next
Buffer space which packet to drop next (when required)
Queuing also affects latency
42
Typical Internet Queuing FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in-first-out) Implies single class of traffic
Drop-tail Arriving packets get dropped when queue is full regardless
of flow or importance
Important distinction FIFO scheduling discipline
Drop-tail drop policy
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR
• A quantum of service that is proportional to the weight of the queue and is expressed in bytes. The DeficitCounter for a queue is incremented by its quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement DeficitCounter by the size of each transmitted packet until either the size of the packet at the head of the queue is greater than DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is reset to zero.
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets of 600, 400, 300 bytes
Queue 2 (25% BW, quantum[2] = 500): packets of 400, 400, 300 bytes
Queue 3 (25% BW, quantum[3] = 500): packets of 600, 300, 400 bytes
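One service round of the algorithm described above can be sketched in Python (an illustrative model, not from the slides: each queue is a FIFO of packet sizes, and `drr_round` is a made-up name):

```python
from collections import deque

def drr_round(queues, quanta, deficits):
    """One DWRR service round.

    queues   : list of deques of packet sizes in bytes, head at the left
    quanta   : per-queue quantum in bytes
    deficits : per-queue DeficitCounter, updated in place
    Returns the list of (queue_index, packet_size) transmissions this round.
    """
    sent = []
    for i, q in enumerate(queues):
        if not q:                     # empty queue: reset deficit, skip
            deficits[i] = 0
            continue
        deficits[i] += quanta[i]      # earn this round's quantum
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()         # head packet fits: transmit it
            deficits[i] -= pkt
            sent.append((i, pkt))
        if not q:
            deficits[i] = 0           # drained: no credits carried over
    return sent
```

Running this on the example (quanta 1000/500/500), the first round transmits 600 and 400 bytes from Queue 1 and 400 bytes from Queue 2; Queue 3 cannot send its 600-byte head packet against a deficit of 500 and must save its credits for the next round.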
Modified Deficit Round Robin (MDRR) additionally gives strict priority to one class, say, VoIP.
67
Policing Mechanisms
Goal: limit traffic so that it does not exceed declared parameters
Three commonly used criteria:
(Long-term) average rate: how many pkts can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak rate: e.g., 6000 pkts per min (ppm) avg; 1500 ppm peak rate
(Max) burst size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token bucket: limit input to a specified burst size and average rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket is full
over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
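A minimal policer along these lines (an illustrative sketch assuming one token per packet; `TokenBucket` and its method names are invented here, not from the slides):

```python
class TokenBucket:
    """Police a flow to average rate r (tokens/sec) and burst size b (tokens)."""

    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b          # bucket starts full: an initial burst of b is allowed
        self.last = 0.0

    def admit(self, now):
        """Admit or reject a one-token packet arriving at time `now` (seconds)."""
        # accumulate tokens at rate r, but never beyond the bucket depth b
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Because the bucket never holds more than b tokens and refills at rate r, any interval of length t admits at most r·t + b packets, which is exactly the bound stated above.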
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee
[Figure: arriving traffic passes through a token bucket (token rate r, bucket size b) into WFQ with guaranteed per-flow rate R; maximum delay D_max = b/R.]
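The bound is direct to compute: a flow shaped by a (b, r) token bucket and served by WFQ at a guaranteed rate R ≥ r waits at most b/R behind its own worst-case burst. A trivial helper (hypothetical name; units must agree, e.g., bits and bits/sec):

```python
def wfq_delay_bound(b, R):
    """Worst-case queuing delay D_max = b / R (seconds) for a
    token-bucket-shaped flow (bucket size b) served by WFQ at rate R."""
    return b / R
```

For example, b = 80,000 bits with R = 1 Mbps gives a bound of 80 ms.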
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
42
Typical Internet Queuing: FIFO + drop-tail
Simplest choice
Used widely in the Internet
FIFO (first-in, first-out): implies a single class of traffic
Drop-tail: arriving packets get dropped when the queue is full, regardless of flow or importance
Important distinction: FIFO is the scheduling discipline; drop-tail is the drop policy
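The distinction can be made concrete with a toy model (an illustrative class, not from the slides): the scheduling discipline lives in the dequeue order, while the drop policy decides what happens when a packet arrives to a full queue.

```python
from collections import deque

class DropTailFIFO:
    """FIFO scheduling + drop-tail policy: arrivals are dropped when the
    queue is full, regardless of flow or importance."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:  # drop policy: discard the arriving packet
            self.drops += 1
            return False
        self.q.append(pkt)                # scheduling discipline: first-in, first-out
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

Swapping `popleft()` for another selection rule changes the scheduling discipline; changing the branch in `enqueue` (e.g., evicting a random queued packet instead) changes the drop policy.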
43
FIFO + Drop-tail Problems
Leaves responsibility for congestion control completely to the edges (e.g., TCP)
Does not differentiate between flows
No policing: send more packets, get more service
Synchronization: end hosts react to the same events
44
FIFO + Drop-tail Problems
Full queues: routers are forced to have large queues to maintain high utilization
TCP detects congestion from loss
• This forces the network to carry long-standing queues in steady state
Lock-out problem: drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily, allowing a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why? The router has a unified view of queuing behavior
Routers see actual queue occupancy (and can distinguish queuing delay from propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low: high "power" (power = throughput/delay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop: a packet arriving at a full queue causes some randomly chosen queued packet to be dropped
Drop front: on a full queue, drop the packet at the head of the queue
Random drop and drop front solve the lock-out problem, but not the full-queues problem
48
Full Queues Problem
Drop packets before the queue becomes full (early drop)
Intuition: notify senders of incipient congestion. Example: early random drop (ERD):
• If qlen > drop_level, drop each new packet with fixed probability p
• Does not control misbehaving users
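The ERD rule above is a one-liner in practice (a sketch with hypothetical names):

```python
import random

def erd_admit(qlen, drop_level, p, rng=random.random):
    """Early random drop: once the queue length exceeds drop_level,
    drop each new packet with fixed probability p."""
    if qlen > drop_level and rng() < p:
        return False   # dropped early, before the queue is actually full
    return True        # admitted
```

The fixed probability p is exactly why ERD does not control misbehaving users: a source sending twice as fast still gets twice the throughput, just with proportionally more drops.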
49
Random Early Detection (RED): detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain a running average of the queue length
If avg < minth, do nothing: low queuing, send packets through
If avg > maxth, drop the packet: protection from misbehaving sources
Else, mark/drop the packet with probability proportional to the average queue length: notify sources of incipient congestion
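A minimal sketch of these rules (illustrative names; full RED also adjusts the probability by the count of packets since the last drop, omitted here):

```python
def red_drop_prob(avg, minth, maxth, maxp):
    """RED marking/drop probability as a function of average queue length."""
    if avg < minth:
        return 0.0                                   # low queuing: pass through
    if avg >= maxth:
        return 1.0                                   # protect against misbehaving sources
    return maxp * (avg - minth) / (maxth - minth)    # linear ramp between thresholds

def red_update_avg(avg, qlen, w=0.002):
    """EWMA of the instantaneous queue length (w is the RED weight parameter)."""
    return (1 - w) * avg + w * qlen
```

Using the average rather than the instantaneous queue length is what avoids bias against bursty traffic: a short burst barely moves the EWMA, so its packets are unlikely to be marked.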
51
RED Operation
[Figure: P(drop) vs. average queue length: 0 below minth, rising linearly to maxP at maxth, then jumping to 1.0 above maxth.]
52
Improving QoS in IP Networks
Thus far: "making the best of best effort"
Future: next-generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QoS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link; bursts of FTP can congest the router and cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly
54
Principles for QoS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations
marking and policing at the network edge: similar to the ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QoS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QoS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this…
58
Scheduling And Policing Mechanisms
scheduling: choose the next packet to send on the link
FIFO (first-in, first-out) scheduling: send in order of arrival to the queue (real-world example?)
discard policy: if a packet arrives to a full queue, which packet to discard?
• Tail drop: drop the arriving packet
• Priority: drop/remove on a priority basis
• Random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit the highest-priority queued packet
multiple classes with different priorities; class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling: multiple classes; cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing (WFQ): generalized round robin; each class gets a weighted amount of service in each cycle
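The WFQ idea can be sketched with finish tags (a simplified approximation assuming all flows are backlogged from time 0, so no virtual-clock updates on arrivals are needed; `wfq_order` and the flow names are illustrative):

```python
import heapq

def wfq_order(flows, weights):
    """Transmission order under (simplified) WFQ.

    flows   : dict flow_id -> list of packet sizes, in arrival order
    weights : dict flow_id -> weight
    Always serves the packet with the smallest finish tag
    F = F_prev_of_flow + size / weight.
    """
    heap, order = [], []
    tags = {f: 0.0 for f in flows}
    nxt = {f: 0 for f in flows}
    for f in flows:                              # seed with each flow's head packet
        if flows[f]:
            tags[f] += flows[f][0] / weights[f]
            heapq.heappush(heap, (tags[f], f))
    while heap:
        _, f = heapq.heappop(heap)
        order.append((f, flows[f][nxt[f]]))
        nxt[f] += 1
        if nxt[f] < len(flows[f]):               # push the flow's next packet
            tags[f] += flows[f][nxt[f]] / weights[f]
            heapq.heappush(heap, (tags[f], f))
    return order
```

With equal weights, a flow queuing one large packet does not delay a competing flow's small packets, which is the fairness property plain FIFO lacks.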
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
43
FIFO + Drop-tail Problems
Leaves responsibility of congestion control completely to the edges (eg TCP)
Does not separate between different flows
No policing send more packets get more service
Synchronization end hosts react to same events
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
44
FIFO + Drop-tail Problems
Full queues Routers are forced to have have large queues
to maintain high utilizations
TCP detects congestion from lossbull Forces network to have long standing queues in
steady-state
Lock-out problem Drop-tail routers treat bursty traffic poorly
Traffic gets synchronized easily allows a few flows to monopolize the queue space
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
45
Active Queue Management
Design active router queue management to aid congestion control
Why Router has unified view of queuing behavior
Routers see actual queue occupancy (distinguish queue delay and propagation delay)
Routers can decide on transient congestion based on workload
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition: notify senders of incipient congestion. Example: early random drop (ERD):
• if qlen > drop-level, drop each new packet with fixed probability p
• does not control misbehaving users
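The ERD rule above can be sketched in a few lines of Python; the drop_level and p values here are hypothetical, not from the slides:

```python
import random

def erd_admit(qlen, drop_level=10, p=0.02, rng=random.random):
    """Early random drop: above drop_level, drop the arriving packet
    with the fixed probability p; otherwise admit it."""
    return not (qlen > drop_level and rng() < p)
```

Because p is fixed rather than scaled with a sender's share of the load, a persistent heavy sender is dropped no more aggressively than anyone else, which is exactly the "does not control misbehaving users" limitation.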
49
Random Early Detection (RED)
Detect incipient congestion; assume hosts respond to lost packets
Avoid window synchronization: randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg < min_th: do nothing (low queueing; send packets through)
If avg > max_th: drop packet (protection from misbehaving sources)
Else: mark the packet with probability proportional to queue length (notify sources of incipient congestion)
51
RED Operation
[Figure: drop probability P(drop) vs. average queue length: zero below min_th, rising linearly to max_P at max_th, then 1.0 beyond max_th]
52
Improving QOS in IP Networks
Thus far: "making the best of best effort"
Future: next-generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link:
bursts of FTP can congest the router and cause audio loss
want to give priority to audio over FTP
Principle 1: packet marking needed for router to distinguish between different classes, and new router policy to treat packets accordingly
54
Principles for QOS Guarantees (more)
What if applications misbehave (audio sends at higher than its declared rate)? Policing: force source adherence to bandwidth allocations
Marking and policing at the network edge: similar to ATM UNI (User Network Interface)
Principle 2: provide protection (isolation) for one class from others
55
Principles for QOS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation
Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible
56
Principles for QOS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Principle 4: call admission: a flow declares its needs; the network may block the call (e.g., busy signal) if it cannot meet them
57
Summary of QoS Principles
Let's next look at mechanisms for achieving this …
58
Scheduling And Policing Mechanisms
scheduling: choose next packet to send on link
FIFO (first in first out) scheduling: send in order of arrival to queue. Real-world example?
discard policy: if a packet arrives to a full queue, which packet to discard?
• Tail drop: drop the arriving packet
• Priority: drop/remove on priority basis
• Random: drop/remove randomly
59
Scheduling Policies: more
Priority scheduling: transmit highest-priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g., IP source/dest, port numbers, etc.
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing:
generalized round robin
each class gets a weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware. This allows DWRR to support the arbitration of output-port bandwidth on high-speed interfaces in both the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output-port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue and is expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets, decrementing the variable DeficitCounter by the size of each transmitted packet, until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
66
DWRR Example
Queue 1: 50% BW, quantum[1] = 1000
Queue 2: 25% BW, quantum[2] = 500
Queue 3: 25% BW, quantum[3] = 500
Queued packet sizes in bytes (from the slide figure): 600 400 300; 400 400 300; 600 300 400
Modified Deficit Round Robin gives priority to one class, say VoIP.
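One DWRR service round, as described on the preceding slides, can be sketched as follows; the assignment of the three packet lists to queues 1–3 is an assumption read off the figure's layout:

```python
from collections import deque

def dwrr_round(queues, quanta, deficits):
    """Serve one DWRR round; return the (queue_index, packet_size) pairs sent."""
    sent = []
    for i, q in enumerate(queues):
        if not q:                        # scheduler visits only non-empty queues
            continue
        deficits[i] += quanta[i]         # add this queue's quantum of credits
        # transmit while the head packet fits within the DeficitCounter
        while q and q[0] <= deficits[i]:
            deficits[i] -= q[0]
            sent.append((i, q.popleft()))
        if not q:
            deficits[i] = 0              # per the slides: an emptied queue resets its counter
    return sent

# One round over the example's three queues
queues = [deque([600, 400, 300]), deque([400, 400, 300]), deque([600, 300, 400])]
quanta = [1000, 500, 500]                # 50% / 25% / 25% of the bandwidth
deficits = [0, 0, 0]
first = dwrr_round(queues, quanta, deficits)
# first == [(0, 600), (0, 400), (1, 400)]: queue 1 sends 1000 bytes, queue 2
# sends 400 and banks 100 credits, queue 3's 600-byte head must wait a round
```

Note how the deficit counters carry credits across rounds: queue 3 sends nothing in round one but enters round two with 500 banked bytes, so its 600-byte head can go out then.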
67
Policing Mechanisms
Goal: limit traffic so it does not exceed declared parameters. Three commonly used criteria:
(Long-term) Average Rate: how many pkts can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average, but permit very different bursts.
Peak Rate: e.g., a 6000 pkts per min (ppm) peak rate for a flow averaging 1500 ppm
(Max) Burst Size: max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over an interval of length t, the number of packets admitted is at most (r·t + b)
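A minimal token-bucket policer implementing the (r·t + b) bound; counting one token per packet is a simplification (real policers usually count bytes and use integer token math):

```python
class TokenBucket:
    """Token-bucket policer sketch: over any interval of length t it admits
    at most r*t + b units (here, one token per packet)."""

    def __init__(self, rate, burst):
        self.rate = rate        # r: tokens generated per second
        self.burst = burst      # b: bucket capacity in tokens
        self.tokens = burst     # start with a full bucket
        self.last = 0.0         # time of the previous refill

    def admit(self, now, need=1):
        # refill: tokens accumulate at rate r unless the bucket is full
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= need:
            self.tokens -= need
            return True
        return False
```

For example, TokenBucket(rate=10, burst=5) admits a back-to-back burst of 5 packets at t = 0, rejects the 6th, and admits again once enough time has passed for fresh tokens to accrue.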
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e., a QoS guarantee
[Figure: arriving traffic passes through a token bucket (token rate r, bucket size b) into WFQ, which serves the flow at per-flow rate R; maximum delay D_max = b/R]
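The D_max = b/R bound is a one-line calculation; the numbers below (a 4000-byte bucket drained at a guaranteed WFQ rate of 125,000 B/s, i.e. 1 Mbps) are hypothetical:

```python
def wfq_delay_bound(b, R):
    """Worst-case queueing delay for a flow policed by a (b, r) token bucket
    and served by WFQ at guaranteed rate R (assumes R >= r): D_max = b / R."""
    return b / R

print(wfq_delay_bound(4000, 125_000))  # 0.032 s, i.e. 32 ms
```

Intuitively: the token bucket guarantees at most b bytes of backlog can ever be waiting for this flow, and WFQ guarantees they drain at rate at least R, so no packet waits longer than b/R.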
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
46
Design Objectives
Keep throughput high and delay low High power (throughputdelay)
Accommodate bursts
Queue size should reflect ability to accept bursts rather than steady-state queuing
Improve TCP performance with minimal hardware changes
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
47
Lock-out Problem
Random drop Packet arriving when queue is full causes
some random packet to be dropped
Drop front On full queue drop packet at head of queue
Random drop and drop front solve the lock-out problem but not the full-queues problem
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
48
Full Queues Problem
Drop packets before queue becomes full (early drop)
Intuition notify senders of incipient congestionExample early random drop (ERD)
bull If qlen gt drop level drop each new packet with fixed probability p
bull Does not control misbehaving users
49
Random Early Detection (RED) Detect incipient congestion
Assume hosts respond to lost packets
Avoid window synchronization Randomly mark packets
Avoid bias against bursty traffic
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
50
RED Algorithm
Maintain running average of queue length
If avg lt minth do nothing Low queuing send packets through
If avg gt maxth drop packet Protection from misbehaving sources
Else mark packet in a manner proportional to queue length Notify sources of incipient congestion
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
51
RED OperationMin threshMax thresh
Average Queue Length
minth maxth
maxP
10
Avg queue length
P(drop)
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
52
Improving QOS in IP NetworksThus far ldquomaking the best of best effortrdquo
Future next generation Internet with QoS guarantees
RSVP signaling for resource reservations
Differentiated Services differential guarantees
Integrated Services firm guarantees
simple model for sharing and congestion studies
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
53
Principles for QOS Guarantees Example 1Mbps IP phone FTP share 15 Mbps
link bursts of FTP can congest router cause audio loss
want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes and new router policy to treat packets accordingly
Principle 1
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling and Policing Mechanisms
scheduling: choose the next packet to send on the link
FIFO (first in, first out) scheduling: send in order of arrival to the queue (real-world example?)
discard policy: if a packet arrives to a full queue, which packet to discard?
• tail drop: drop the arriving packet
• priority: drop/remove on a priority basis
• random: drop/remove randomly
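As a minimal sketch of FIFO with tail drop (in Python, with made-up capacity and packet names):

```python
from collections import deque

class DropTailFIFO:
    """Bounded FIFO: serve packets in arrival order; tail-drop when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False               # tail drop: discard the arriving packet
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

fifo = DropTailFIFO(capacity=2)
admitted = [fifo.enqueue(p) for p in ("a", "b", "c")]
print(admitted)        # [True, True, False] -- third arrival is tail-dropped
print(fifo.dequeue())  # a -- first in, first out
```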
59
Scheduling Policies: more
Priority scheduling: transmit the highest-priority queued packet
multiple classes with different priorities; class may depend on marking or other header info, e.g. IP source/dest, port numbers, etc.
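The priority discipline can be sketched with a heap; the class numbers and packet names below are illustrative, not from the slides:

```python
import heapq

# Priority scheduling sketch: lower number = higher priority; the highest-
# priority queued packet is always transmitted first.
pq = []
for prio, pkt in [(2, "ftp-1"), (0, "voip-1"), (1, "video-1"), (0, "voip-2")]:
    heapq.heappush(pq, (prio, pkt))   # heap orders by (priority, name)

order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
print(order)   # ['voip-1', 'voip-2', 'video-1', 'ftp-1']
```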
60
Scheduling Policies: still more
round robin scheduling:
multiple classes
cyclically scan class queues, serving one from each class (if available)
61
Scheduling Policies: still more
Weighted Fair Queuing:
generalized round robin
each class gets a weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR)
DWRR addresses the limitations of the WRR model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets.
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware. This allows DWRR to support the arbitration of output-port bandwidth on high-speed interfaces in both the core and at the edges of the network.
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
A weight that defines the percentage of the output-port bandwidth allocated to the queue.
A DeficitCounter that specifies the total number of bytes the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
A quantum of service that is proportional to the weight of the queue and is expressed in bytes (the third per-queue parameter, alongside the weight and the DeficitCounter). The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
66
DWRR Example
Queue 1 (50% BW, quantum[1] = 1000): packets 600, 400, 300
Queue 2 (25% BW, quantum[2] = 500): packets 400, 400, 300
Queue 3 (25% BW, quantum[3] = 500): packets 600, 300, 400
Modified Deficit Round Robin gives priority to one class, say VoIP.
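The classic DWRR loop described on slides 64–65 can be sketched directly. A minimal Python sketch follows; note that the assignment of the example packet sizes to queues is an assumption, since the slide layout does not make the mapping explicit:

```python
from collections import deque

def dwrr(queues, quanta, rounds=10):
    """Classic DWRR: on each visit, add the quantum to the queue's
    DeficitCounter, transmit head packets while they fit, and reset the
    counter when the queue drains."""
    deficit = [0] * len(queues)
    sent = []                          # (queue index, packet size), in order
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                continue               # scheduler skips empty queues
            deficit[i] += quanta[i]
            while q and q[0] <= deficit[i]:
                deficit[i] -= q[0]
                sent.append((i, q.popleft()))
            if not q:
                deficit[i] = 0         # an emptied queue forfeits leftover credit
    return sent

# Packet sizes (bytes) per queue; quanta match the 50%/25%/25% weights.
queues = [deque([600, 400, 300]), deque([400, 400, 300]), deque([600, 300, 400])]
order = dwrr(queues, quanta=[1000, 500, 500])
# Queue 1 sends 600 then 400 in round 1; Queue 3 must wait a round before
# its 600-byte head packet fits within its accumulated deficit.
print(order)
```

In this run each queue's transmitted total (1300, 1100, and 1300 bytes) tracks its weight share over the short backlog, which is exactly what carrying the deficit across rounds buys over plain WRR.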
67
Policing Mechanisms
Goal: limit traffic so that it does not exceed declared parameters.
Three commonly used criteria:
(Long-term) Average Rate: how many packets can be sent per unit time (in the long run); crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average
Peak Rate: e.g. 6000 pkts per min (ppm) avg; 1500 ppm peak rate
(Max) Burst Size: max number of packets sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and Average Rate
bucket can hold b tokens
tokens generated at rate r tokens/sec unless the bucket is full
over an interval of length t, the number of packets admitted is less than or equal to (r·t + b)
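A minimal token-bucket policer sketch (one token per packet; the values of r, b, and the timestamps are illustrative, not from the slides):

```python
class TokenBucket:
    """(r, b) policer: bucket holds at most b tokens, refilled at r tokens/sec;
    a packet is admitted only if a whole token is available."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b           # start with a full bucket
        self.last = 0.0

    def admit(self, now):
        # Accrue tokens for the elapsed time, capped at the bucket size b.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(r=2, b=3)                      # 2 tokens/sec, burst of 3
burst = [tb.admit(now=0.0) for _ in range(5)]   # 5 back-to-back arrivals
later = tb.admit(now=1.0)                       # 2 tokens accrued over 1 s
print(burst, later)   # [True, True, True, False, False] True
```

Over any interval of length t the sketch admits at most r·t + b packets, matching the bound on the slide: here 4 packets over [0, 1] s, against the bound 2·1 + 3 = 5.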
69
Policing Mechanisms (more)
token bucket + WFQ combine to provide a guaranteed upper bound on delay, i.e. a QoS guarantee
[figure: arriving traffic passes through a token bucket (token rate r, bucket size b) into WFQ with guaranteed per-flow rate R; maximum delay D_max = b/R]
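The D_max = b/R bound comes from the worst case: a full bucket of b bits arrives at once and drains at the flow's guaranteed WFQ rate R. With illustrative numbers (not from the slides):

```python
# Worst case for a token-bucket-shaped flow served by WFQ: a burst of b bits
# queues up at once and is drained at the guaranteed rate R, so D_max = b / R.
b = 8000.0      # bucket size, bits (illustrative)
R = 1.0e6       # guaranteed per-flow WFQ rate, bits/sec (illustrative)
d_max = b / R
print(d_max)    # 0.008 -> 8 ms worst-case queueing delay
```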
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
54
Principles for QOS Guarantees (more) what if applications misbehave (audio sends higher than
declared rate) policing force source adherence to bandwidth allocations
marking and policing at network edge similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from othersPrinciple 2
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
55
Principles for QOS Guarantees (more) Allocating fixed (non-sharable) bandwidth to flow
inefficient use of bandwidth if flows doesnrsquot use its allocation
While providing isolation it is desirable to use resources as efficiently as possible
Principle 3
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
56
Principles for QOS Guarantees (more) Basic fact of life can not support traffic demands
beyond link capacity
Call Admission flow declares its needs network may block call (eg busy signal) if it cannot meet needs
Principle 4
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
57
Summary of QoS Principles
Letrsquos next look at mechanisms for achieving this hellip
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
58
Scheduling And Policing Mechanisms scheduling choose next packet to send on link
FIFO (first in first out) scheduling send in order of arrival to queue real-world example
discard policy if packet arrives to full queue who to discard
bull Tail drop drop arriving packet
bull priority dropremove on priority basis
bull random dropremove randomly
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing, each queue is configured with a number of parameters:
- A weight that defines the percentage of the output-port bandwidth allocated to the queue.
- A DeficitCounter that specifies the total number of bytes that the queue is permitted to transmit each time it is visited by the scheduler. The DeficitCounter allows a queue that was not permitted to transmit in the previous round (because the packet at the head of the queue was larger than the value of the DeficitCounter) to save transmission "credits" and use them during the next service round.
64
DWRR
In the classic DWRR algorithm, the scheduler visits each non-empty queue and determines the number of bytes in the packet at the head of the queue.
The variable DeficitCounter is incremented by the value quantum. If the size of the packet at the head of the queue is greater than the variable DeficitCounter, the scheduler moves on to service the next queue.
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter, the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port.
65
DWRR
The remaining configured parameter is a quantum of service that is proportional to the weight of the queue and is expressed in bytes. The DeficitCounter for a queue is incremented by the quantum each time the queue is visited by the scheduler.
The scheduler continues to dequeue packets, decrementing the variable DeficitCounter by the size of each transmitted packet, until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty.
If the queue is empty, the value of DeficitCounter is set to zero.
66
DWRR Example
Queue 1: 50% BW, quantum[1] = 1000; queued packets of 600, 400, 300 bytes
Queue 2: 25% BW, quantum[2] = 500; queued packets of 400, 400, 300 bytes
Queue 3: 25% BW, quantum[3] = 500; queued packets of 600, 300, 400 bytes
Modified Deficit Round Robin gives priority to one class, say VoIP.
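The DWRR loop from the preceding slides can be sketched in Python and run on this example. A minimal sketch, assuming the packet-size lists are head-of-queue first and pairing them with the queues in the order shown (the original figure is not available, so that pairing is an assumption):

```python
from collections import deque

def dwrr_schedule(queues, quanta, rounds):
    """Classic DWRR sketch: queues are deques of packet sizes in bytes."""
    deficit = [0] * len(queues)
    order = []  # (queue index, packet size) in transmission order
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                continue                  # the scheduler visits only non-empty queues
            deficit[i] += quanta[i]       # earn one quantum per visit
            # Transmit while the head packet fits within the accumulated deficit.
            while q and q[0] <= deficit[i]:
                deficit[i] -= q[0]
                order.append((i, q.popleft()))
            if not q:
                deficit[i] = 0            # an emptied queue forfeits leftover credit
    return order

queues = [deque([600, 400, 300]), deque([400, 400, 300]), deque([600, 300, 400])]
order = dwrr_schedule(queues, quanta=[1000, 500, 500], rounds=3)
print(order[:3])  # [(0, 600), (0, 400), (1, 400)]: in round 1, queue 3's 600-byte
                  # head exceeds its 500-byte deficit, so it sends nothing yet
```

In round 2 queue 3's deficit grows to 1000 and it sends its 600- and 300-byte packets; queue 1's 1000-byte quantum gives it roughly twice the per-round service of queues 2 and 3, matching the 50/25/25 weights.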
67
Policing Mechanisms
Goal: limit traffic so that it does not exceed declared parameters.
Three commonly used criteria:
- (Long-term) Average Rate: how many packets can be sent per unit time (in the long run). The crucial question is the length of the averaging interval: 100 packets per second and 6000 packets per minute have the same average.
- Peak Rate: e.g. 6000 packets per minute (ppm) average; 1500 packets per second peak rate.
- (Max) Burst Size: the maximum number of packets sent consecutively (with no intervening idle time).
68
Policing Mechanisms
Token Bucket: limits input to a specified Burst Size and Average Rate.
- The bucket can hold up to b tokens.
- Tokens are generated at rate r tokens/sec unless the bucket is full.
- Over any interval of length t, the number of packets admitted is at most (r·t + b).
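The three bullets above can be sketched as a simple policer; the names and the cost of one token per packet are assumptions:

```python
class TokenBucket:
    """Token bucket policer sketch: rate r tokens/sec, bucket depth b tokens."""

    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b       # bucket starts full
        self.last = 0.0       # time of the last update, in seconds

    def conforms(self, t, size=1):
        # Refill at rate r since the last update, capped at the bucket size b.
        self.tokens = min(self.b, self.tokens + self.r * (t - self.last))
        self.last = t
        if self.tokens >= size:
            self.tokens -= size
            return True       # packet admitted
        return False          # non-conforming packet (drop or mark it)

tb = TokenBucket(r=2.0, b=3.0)   # 2 tokens/sec, burst of up to 3 packets
print([tb.conforms(t=0.0) for _ in range(4)])  # [True, True, True, False]: burst limited to b
print(tb.conforms(t=1.0))  # True: 2 tokens accrue over 1 s
```

Because at most b tokens are stored and at most r·t accrue over an interval of length t, admissions over that interval never exceed r·t + b, exactly the bound on the slide.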
69
Policing Mechanisms (more)
A token bucket and WFQ combine to provide a guaranteed upper bound on delay, i.e. a QoS guarantee:
arriving traffic → token bucket (token rate r, bucket size b) → WFQ (per-flow rate R)
d_max = b/R
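The d_max = b/R bound follows from the policer and scheduler together: the token bucket caps any burst at b, and WFQ drains the flow's queue at a rate of at least R. A hypothetical numeric check (the figures below are illustrative, not from the slides):

```python
# Worst case: a full burst of b arrives at once and drains at no less than R,
# so the last bit of the burst waits at most b / R seconds.
b = 100_000      # hypothetical bucket depth: 100 kbit
R = 1_000_000    # hypothetical guaranteed WFQ rate for the flow: 1 Mbit/s
d_max = b / R
print(d_max)     # 0.1 second worst-case queueing delay
```

Shrinking the bucket or raising the flow's guaranteed rate tightens the bound, which is why the two mechanisms are provisioned together.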
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
59
Scheduling Policies morePriority scheduling transmit highest priority
queued packet
multiple classes with different priorities class may depend on marking or other header info eg
IP sourcedest port numbers etc
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
60
Scheduling Policies still moreround robin scheduling
multiple classes
cyclically scan class queues serving one from each class (if available)
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
61
Scheduling Policies still more
Weighted Fair Queuing
generalized Round Robin
each class gets weighted amount of service in each cycle
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
62
Deficit Weighted Round Robin (DWRR) DWRR addresses the limitations of the WRR
model by accurately supporting the weighted fair distribution of bandwidth when servicing queues that contain variable-length packets
DWRR addresses the limitations of the WFQ model by defining a scheduling discipline that has lower computational complexity and that can be implemented in hardware This allows DWRR to support the arbitration of output port bandwidth on high-speed interfaces in both the core and at the edges of the network
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
63
DRR
In DWRR queuing each queue is configured with a number of parameters A weight that defines the percentage of the output
port bandwidth allocated to the queue A DeficitCounter that specifies the total number of
bytes that the queue is permitted to transmit each time that it is visited by the scheduler The DeficitCounter allows a queue that was not permitted to transmit in the previous round because the packet at the head of the queue was larger than the value of the DeficitCounter to save transmission ldquocreditsrdquo and use them during the next service round
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic
- Week 11 TCP Congestion Control
- Principles of Congestion Control
- Causescosts of congestion scenario 1
- Causescosts of congestion scenario 2
- Slide 5
- Causescosts of congestion scenario 3
- Example
- Max-min fair allocation
- How define fairness
- Max-Min Flow Control Rule
- Slide 11
- Notation
- Definitions
- Slide 14
- Bottleneck Link for a Session
- Max-Min Fairness Definition Using Bottleneck
- Algorithm for Computing Max-Min Fair Rate Vectors
- Slide 18
- Slide 19
- Example of Algorithm Running
- Example revisited
- Slide 22
- Approaches towards congestion control
- Case study ATM ABR congestion control
- Slide 25
- TCP Congestion Control
- TCP AIMD
- Additive Increase
- TCP Slow Start
- TCP Slow Start (more)
- Refinement
- Refinement (more)
- Summary TCP Congestion Control
- TCP sender congestion control
- TCP Futures
- Macroscopic TCP model
- TCP Model Contd
- TCP Fairness
- Why is TCP fair
- Fairness (more)
- Queuing Disciplines
- Typical Internet Queuing
- FIFO + Drop-tail Problems
- Slide 44
- Active Queue Management
- Design Objectives
- Lock-out Problem
- Full Queues Problem
- Random Early Detection (RED)
- RED Algorithm
- RED Operation
- Improving QOS in IP Networks
- Principles for QOS Guarantees
- Principles for QOS Guarantees (more)
- Slide 55
- Slide 56
- Summary of QoS Principles
- Scheduling And Policing Mechanisms
- Scheduling Policies more
- Scheduling Policies still more
- Slide 61
- Deficit Weighted Round Robin (DWRR)
- DRR
- DWRR
- Slide 65
- DWRR Example
- Policing Mechanisms
- Slide 68
- Policing Mechanisms (more)
-
64
DWRR In the classic DWRR algorithm the scheduler
visits each non-empty queue and determines the number of bytes in the packet at the head of the queue
The variable DeficitCounter isincremented by the value quantum If the size of the packet at the head of the queue is greater than the variable DeficitCounter then the scheduler moves on to service the next queue
If the size of the packet at the head of the queue is less than or equal to the variable DeficitCounter then the variable DeficitCounter is reduced by the number of bytes in the packet and the packet is transmitted on the output port
65
DWRR A quantum of service that is proportional to the
weight of the queue and is expressed in terms of bytes The DeficitCounter for a queue is incremented by the quantum each time that the queue is visited by the scheduler
The scheduler continues to dequeue packets and decrement the variable DeficitCounter by the size of the transmitted packet until either the size of the packet at the head of the queue is greater than the variable DeficitCounter or the queue is empty
If the queue is empty the value of DeficitCounter is set to zero
66
DWRR Example
600400300
400 400300
600300400
Queue 1 50 BW quantum[1] =1000
Queue 2 25 BW quantum[2] =500
Queue 3 25 BW quantum[3] =500
Modified Deficit Round Robin gives priority to one say VoIP class
67
Policing MechanismsGoal limit traffic to not exceed declared parameters
Three common-used criteria
(Long term) Average Rate how many pkts can be sent per unit time (in the long run) crucial question what is the interval length 100 packets per sec or 6000 packets per min
have same average
Peak Rate eg 6000 pkts per min (ppm) avg 1500 ppm peak rate
(Max) Burst Size max number of pkts sent consecutively (with no intervening idle)
68
Policing Mechanisms
Token Bucket limit input to specified Burst Size and
Average Rate
bucket can hold b tokens
tokens generated at rate r tokensec unless bucket full
over interval of length t number of packets admitted less than or equal to (r t + b)
69
Policing Mechanisms (more)
token bucket WFQ combine to provide guaranteed upper bound on delay ie QoS guarantee
WFQ
token rate r
bucket size b
per-flowrate R
D = bRmax
arrivingtraffic