Project 2 is out!
Goal: implement reliable transport protocol
We give you the receiver, you implement the sender
Our receiver uses sliding window and cumulative acks
Grading Policy
The grades will be based on correctness and performance, not adherence to a specified algorithm.
Correctness
Is the file sent the same as file received?
You need to handle…
Loss
Corrupted packets
Re-ordering of packets
What are corner cases associated with each of these?
Performance
Is the transfer timely, with a reasonable number of packets sent?
How long to wait before sending new packets / resending old packets? How many packets to send?
Stop-and-wait probably waits too long…
Go-back-n probably sends too much…
Think scale.
Grading Policy
We provide you with a testing framework, including one test case
You need to implement further tests!
We have a whole suite of tests full of juicy corner cases…
But I have questions…
General questions about reliable transport: ask your TA
Project-specific questions: Andrew (andrewor@berkeley), Colin ([email protected]), Radhika ([email protected])
Collaboration Policy
Projects are designed to be solved independently, but you may work with a partner if you wish (but at most two people can work together). Grading will remain the same whether you choose to work alone or with a partner; both partners will receive the same grade regardless of the distribution of work between the two partners (so choose a partner wisely!).
Collaboration Policy (continued)
You may not share code with any classmates other than your partner. You may discuss the assignment requirements or general programming decisions (e.g., what data structures were used to store routing tables) - away from a computer and without sharing code - but you should not discuss the detailed nature of your solution (e.g., what algorithm was used to compute the routing table).
TCP: Congestion Control
EE 122, Fall 2013
Sylvia Ratnasamy
http://inst.eecs.berkeley.edu/~ee122/
Material thanks to Ion Stoica, Scott Shenker, Jennifer Rexford, Nick McKeown, and many other colleagues
Last lecture: flow control, adjusting the sending rate to keep from overwhelming a slow receiver
Today: congestion control, adjusting the sending rate to keep from overloading the network
Statistical Multiplexing → Congestion
If two packets arrive at the same time, a router can only transmit one, and either buffers or drops the other
If many packets arrive in a short period of time, the router cannot keep up with the arriving traffic: it delays traffic, and the buffer may eventually overflow
Internet traffic is bursty
Who Takes Care of Congestion?
Network? End hosts? Both?
TCP's approach: end hosts adjust the sending rate, based on implicit feedback from the network
Not the only approach; a consequence of history rather than planning
Some History: TCP in the 1980s
Sending rate only limited by flow control
Packet drops → senders (repeatedly!) retransmit a full window's worth of packets
Led to "congestion collapse" starting Oct. 1986: throughput on the NSF network dropped from 32 Kbit/s to 40 bit/s
“Fixed” by Van Jacobson’s development of TCP’s congestion control (CC) algorithms
Jacobson’s Approach
Extend TCP's existing window-based protocol, but adapt the window size in response to congestion: required no upgrades to routers or applications, just a patch of a few lines of code to TCP implementations
A pragmatic and effective solution, but many other approaches exist
Extensively improved upon; the topic now sees less activity in ISP contexts but is making a comeback in datacenter environments
Three Issues to Consider
Discovering the available (bottleneck) bandwidth
Adjusting to variations in bandwidth
Sharing bandwidth between flows
Abstract View
Ignore internal structure of router and model it as having a single queue for a particular input-output pair
[Diagram: sending host A, buffer in router, receiving host B]
Discovering available bandwidth
Pick sending rate to match bottleneck bandwidth, without any a priori knowledge. Could be a gigabit link, could be a modem.
[Diagram: A sends to B across a 100 Mbps bottleneck link]
Adjusting to variations in bandwidth
Adjust rate to match instantaneous bandwidth, assuming you have a rough idea of the bandwidth.
[Diagram: A sends to B across a link with time-varying bandwidth BW(t)]
Multiple flows and sharing bandwidth
Two issues: adjust the total sending rate to match bandwidth, and allocate bandwidth between flows.
[Diagram: three flows, A1 to B1, A2 to B2, A3 to B3, sharing a bottleneck link of bandwidth BW(t)]
Reality
Congestion control is a resource allocation problem involving many flows, many links, and complicated global dynamics
[Diagram: a multi-link topology with 1 Gbps and 600 Mbps links]
Sender’s viewpoint
Knee: the point after which throughput increases slowly and delay increases fast
Cliff: the point after which throughput starts to drop to zero (congestion collapse) and delay approaches infinity
[Plots: throughput vs. load and delay vs. load, with the knee, the cliff, packet loss, and congestion collapse marked]
General Approaches
(0) Send without care: many packet drops
General Approaches
(0) Send without care
(1) Reservations: pre-arrange bandwidth allocations; requires negotiation before sending packets; low utilization
General Approaches
(0) Send without care
(1) Reservations
(2) Pricing: don't drop packets for the high bidders; requires a payment model
General Approaches
(0) Send without care
(1) Reservations
(2) Pricing
(3) Dynamic Adjustment: hosts probe the network, infer the level of congestion, and adjust; or the network reports the congestion level to hosts, which adjust; or combinations of the above. Simple to implement but suboptimal, with messy dynamics
General Approaches
(0) Send without care
(1) Reservations
(2) Pricing
(3) Dynamic Adjustment
All three techniques have their place, but the generality of dynamic adjustment has proven powerful: it doesn't presume a business model, traffic characteristics, or application requirements, though it does assume good citizenship
TCP’s Approach in a Nutshell
A TCP connection has a window that controls the number of packets in flight
Sending rate: ~Window/RTT
Vary window size to control sending rate
All These Windows…
Congestion Window (CWND): how many bytes can be sent without overflowing routers; computed by the sender using the congestion control algorithm
Flow control window, AdvertisedWindow (RWND): how many bytes can be sent without overflowing the receiver's buffers; determined by the receiver and reported to the sender
Sender-side window = minimum{CWND, RWND}; assume for this lecture that RWND >> CWND
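As a small sketch (in Python, with illustrative function names), the sender-side window rule and the resulting sending rate look like this:

```python
def effective_window(cwnd: int, rwnd: int) -> int:
    # The sender may keep at most min(CWND, RWND) unacknowledged bytes.
    return min(cwnd, rwnd)

def sending_rate(window_bytes: int, rtt_seconds: float) -> float:
    # Approximate sending rate: ~Window/RTT, in bytes per second.
    return window_bytes / rtt_seconds
```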
Note
This lecture will talk about CWND in units of MSS (recall MSS: Maximum Segment Size, the amount of payload data in a TCP packet). This is only for pedagogical purposes; keep in mind that real implementations maintain CWND in bytes.
Two Basic Questions
How does the sender detect congestion?
How does the sender adjust its sending rate? To address three issues
Finding available bottleneck bandwidth Adjusting to bandwidth variations Sharing bandwidth
Detecting Congestion
Packet delays: tricky, a noisy signal (delay often varies considerably)
Routers tell end hosts when they're congested
Packet loss: a fail-safe signal that TCP already has to detect; complication: non-congestive loss (checksum errors)
Not All Losses the Same
Duplicate ACKs: isolated loss; still getting ACKs
Timeout: much more serious; not enough dupACKs, so the sender must have suffered several losses
Will adjust rate differently for each case
Rate Adjustment
Basic structure: upon receipt of an ACK (of new data), increase the rate; upon detection of loss, decrease the rate
How we increase/decrease the rate depends on the phase of congestion control we’re in: Discovering available bottleneck bandwidth vs. Adjusting to bandwidth variations
Bandwidth Discovery with Slow Start
Goal: estimate available bandwidth; start slow (for safety) but ramp up quickly (for efficiency)
Consider RTT = 100 ms, MSS = 1000 bytes: the window size to fill 1 Mbps of BW is 12.5 packets; to fill 1 Gbps, 12,500 packets. Either is possible!
“Slow Start” Phase
Sender starts at a slow rate but increases exponentially until first loss
Start with a small congestion window: initially CWND = 1, so the initial sending rate is MSS/RTT
Double the CWND for each RTT with no loss
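A minimal simulation of this doubling, assuming one ACK per packet and no loss (the function name is illustrative):

```python
def slow_start(rtts: int) -> list:
    # CWND (in MSS) at the start of each RTT. With no loss, each RTT
    # delivers CWND ACKs, and each ACK does CWND += 1, so CWND doubles.
    cwnd = 1
    history = [cwnd]
    for _ in range(rtts):
        acks_this_rtt = cwnd        # one ACK per packet sent this RTT
        for _ in range(acks_this_rtt):
            cwnd += 1               # per-ACK increment
        history.append(cwnd)
    return history
```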
Slow Start in Action
• For each RTT: double CWND
• Simpler implementation: for each ACK, CWND += 1
[Timeline: Src sends 1, 2, 4, and then 8 data packets in successive RTTs, as each returning ACK bumps CWND]
Adjusting to Varying Bandwidth
Slow start gave an estimate of available bandwidth
Now, want to track variations in this available bandwidth, oscillating around its current value: repeated probing (rate increase) and backoff (rate decrease)
TCP uses: “Additive Increase Multiplicative Decrease” (AIMD) We’ll see why shortly…
AIMD
Additive increase: the window grows by one MSS for every RTT with no loss. For each successful RTT, CWND = CWND + 1; simple implementation: for each ACK, CWND = CWND + 1/CWND
Multiplicative decrease: on loss of a packet, divide the congestion window in half: CWND = CWND/2
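The two rules can be sketched as small update functions (CWND in units of MSS, as in this lecture; the 1-MSS floor on the decrease is an added assumption, not from the slides):

```python
def ack_increase(cwnd: float) -> float:
    # Additive increase: CWND += 1/CWND per ACK. Since ~CWND ACKs
    # arrive per RTT, CWND grows by roughly one MSS per RTT.
    return cwnd + 1.0 / cwnd

def loss_decrease(cwnd: float) -> float:
    # Multiplicative decrease: halve CWND on loss (assumed 1-MSS floor).
    return max(1.0, cwnd / 2.0)
```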
Leads to the TCP “Sawtooth”
[Plot: window vs. time, showing the exponential "slow start" ramp followed by repeating additive-increase ramps, each cut in half at a loss]
Slow-Start vs. AIMD
When does a sender stop Slow-Start and start Additive Increase?
Introduce a "slow start threshold" (ssthresh): initialized to a large value; on timeout, ssthresh = CWND/2
When CWND = ssthresh, sender switches from slow-start to AIMD-style increase
Why AIMD?
Recall: Three Issues
Discovering the available (bottleneck) bandwidth: slow start
Adjusting to variations in bandwidth: AIMD
Sharing bandwidth between flows
Goals for bandwidth sharing
Efficiency: high utilization of link bandwidth
Fairness: each flow gets an equal share
Why AIMD?
Some rate adjustment options (every RTT, we can):
Multiplicative increase or decrease: CWND ← a·CWND
Additive increase or decrease: CWND ← CWND + b
Four alternatives:
AIAD: gentle increase, gentle decrease
AIMD: gentle increase, drastic decrease
MIAD: drastic increase, gentle decrease
MIMD: drastic increase and decrease
Simple Model of Congestion Control
Two users' rates x1 and x2
Congestion when x1 + x2 > 1
Unused capacity when x1 + x2 < 1
Fair when x1 = x2
[Plot: user 1's rate (x1) against user 2's rate (x2); the efficiency line x1 + x2 = 1 separates the inefficient region from the congested region, and the fairness line x1 = x2 cuts across both]
Example
[Plot of rate pairs (x1, x2) against the fairness and efficiency lines:]
Inefficient: (0.2, 0.5), x1 + x2 = 0.7
Congested: (0.7, 0.5), x1 + x2 = 1.2
Efficient, not fair: (0.7, 0.3), x1 + x2 = 1
Efficient and fair: (0.5, 0.5), x1 + x2 = 1
AIAD
Increase: x + aI
Decrease: x - aD
Does not converge to fairness
[Plot: from (x1, x2), a decrease moves to (x1 - aD, x2 - aD) and the next increase to (x1 - aD + aI, x2 - aD + aI); the trajectory stays parallel to the fairness line, so the gap between the users never closes]
MIMD
Increase: x·bI
Decrease: x·bD
Does not converge to fairness
[Plot: from (x1, x2), a decrease moves to (bD·x1, bD·x2) and the next increase to (bI·bD·x1, bI·bD·x2); the trajectory stays on the line through the origin, so the ratio between the users never changes]
AIMD
Increase: x + aI
Decrease: x·bD
Converges to fairness
[Plot: from (x1, x2), a decrease moves to (bD·x1, bD·x2) and the next increase to (bD·x1 + aI, bD·x2 + aI); each multiplicative decrease shrinks the gap between the users, so the trajectory converges toward the fairness line]
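The convergence argument can be checked with a toy simulation of two flows sharing a unit-capacity link (a sketch; the parameter values aI = 0.01 and bD = 0.5 and the function name are arbitrary choices, not from the slides):

```python
def aimd(x1: float, x2: float, capacity: float = 1.0,
         a: float = 0.01, b: float = 0.5, rounds: int = 2000):
    # Both flows add `a` each round; when their combined rate exceeds
    # the link capacity, both multiply by `b`. Additive steps preserve
    # the gap x1 - x2, while every multiplicative decrease halves it,
    # so the rates converge toward the fairness line x1 = x2.
    for _ in range(rounds):
        x1 += a
        x2 += a
        if x1 + x2 > capacity:
            x1 *= b
            x2 *= b
    return x1, x2
```

Starting from an unfair split such as (0.8, 0.1), the two rates end up nearly equal while staying at or below capacity.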
AIMD Sharing Dynamics
[Plot: rates of two AIMD flows x1 (A to B) and x2 (D to E) through a 50 pkts/sec bottleneck over time; the rates oscillate and equalize at the fair share]
AIAD Sharing Dynamics
[Plot: rates of two AIAD flows x1 and x2 through the same bottleneck over time; the rates oscillate but the initial gap between them never closes]
TCP Congestion Control Details
Implementation
State at sender: CWND (initialized to a small constant); ssthresh (initialized to a large constant); [also dupACKcount and timer, as before]
Events: ACK (new data); dupACK (duplicate ACK for old data); timeout
Event: ACK (new data)
If CWND < ssthresh: CWND += 1
CWND packets are sent per RTT; hence after one RTT with no drops: CWND = 2 × CWND
Event: ACK (new data)
If CWND < ssthresh: CWND += 1 (slow start phase)
Else: CWND = CWND + 1/CWND ("congestion avoidance" phase: additive increase)
CWND packets are sent per RTT; hence after one RTT with no drops in congestion avoidance: CWND = CWND + 1
Event: TimeOut
On timeout: ssthresh ← CWND/2, CWND ← 1
Event: dupACK
dupACKcount++
If dupACKcount = 3 (fast retransmit): ssthresh = CWND/2, CWND = CWND/2
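Putting the three event handlers together, a sketch of the sender state in Python (units of MSS as in this lecture; fast recovery is deliberately omitted here, and the initial ssthresh of 64 is an arbitrary "large constant"):

```python
class CongestionControlState:
    # Sketch of the per-event sender updates (units of MSS).
    def __init__(self):
        self.cwnd = 1.0
        self.ssthresh = 64.0     # arbitrary "large constant"
        self.dup_acks = 0

    def on_new_ack(self):
        self.dup_acks = 0
        if self.cwnd < self.ssthresh:
            self.cwnd += 1                 # slow start
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:             # fast retransmit
            self.ssthresh = self.cwnd / 2
            self.cwnd = self.cwnd / 2

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1.0
```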
Example
Slow-start restart: on a timeout, go back to CWND = 1 MSS, but take advantage of knowing the previous value of CWND: slow start operates until CWND reaches half the previous CWND, i.e., ssthresh
[Plot: window vs. time, with a timeout and a fast retransmission marked, and ssthresh set to half the window at the time of loss]
One Final Phase: Fast Recovery
The problem: congestion avoidance too slow in recovering from an isolated loss
Example (in units of MSS, not bytes)
Consider a TCP connection with: CWND=10 packets Last ACK was for packet # 101
i.e., receiver expecting next packet to have seq. no. 101
10 packets [101, 102, 103,…, 110] are in flight Packet 101 is dropped What ACKs do they generate? And how does the sender respond?
Timeline
ACK 101 (due to 102) cwnd=10 dupACK#1 (no xmit)
ACK 101 (due to 103) cwnd=10 dupACK#2 (no xmit)
ACK 101 (due to 104) cwnd=10 dupACK#3 (no xmit)
RETRANSMIT 101 ssthresh=5 cwnd=5
ACK 101 (due to 105) cwnd=5 + 1/5 (no xmit)
ACK 101 (due to 106) cwnd=5 + 2/5 (no xmit)
ACK 101 (due to 107) cwnd=5 + 3/5 (no xmit)
ACK 101 (due to 108) cwnd=5 + 4/5 (no xmit)
ACK 101 (due to 109) cwnd=5 + 5/5 (no xmit)
ACK 101 (due to 110) cwnd=6 + 1/5 (no xmit)
ACK 111 (due to 101): only now can we transmit new packets
Plus no packets in flight, so ACK "clocking" (to increase CWND) stalls for another RTT
Solution: Fast Recovery
Idea: Grant the sender temporary “credit” for each dupACK so as to keep packets in flight
If dupACKcount = 3: ssthresh = cwnd/2, cwnd = ssthresh + 3
While in fast recovery: cwnd = cwnd + 1 for each additional duplicate ACK
Exit fast recovery after receiving a new ACK: set cwnd = ssthresh
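These rules reproduce the window sizes in the timeline that follows. A sketch for the running example (CWND = 10, packet 101 lost, six further dupACKs after the third; the function name is illustrative):

```python
def fast_recovery_windows(cwnd: float = 10.0, extra_dup_acks: int = 6) -> list:
    # On the 3rd dupACK: ssthresh = cwnd/2 and cwnd = ssthresh + 3.
    ssthresh = cwnd / 2
    cwnd = ssthresh + 3
    trace = [cwnd]
    # Each additional dupACK inflates cwnd by 1 (credit to keep
    # packets in flight).
    for _ in range(extra_dup_acks):
        cwnd += 1
        trace.append(cwnd)
    # A new ACK deflates cwnd back to ssthresh, exiting fast recovery.
    cwnd = ssthresh
    trace.append(cwnd)
    return trace
```

With the defaults this yields the window sequence 8, 9, 10, 11, 12, 13, 14 and then 5 on exit, matching the trace below.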
Example
Consider a TCP connection with: CWND=10 packets Last ACK was for packet # 101
i.e., receiver expecting next packet to have seq. no. 101
10 packets [101, 102, 103,…, 110] are in flight Packet 101 is dropped
Timeline
ACK 101 (due to 102) cwnd=10 dup#1
ACK 101 (due to 103) cwnd=10 dup#2
ACK 101 (due to 104) cwnd=10 dup#3
REXMIT 101 ssthresh=5 cwnd=8 (5+3)
ACK 101 (due to 105) cwnd=9 (no xmit)
ACK 101 (due to 106) cwnd=10 (no xmit)
ACK 101 (due to 107) cwnd=11 (xmit 111)
ACK 101 (due to 108) cwnd=12 (xmit 112)
ACK 101 (due to 109) cwnd=13 (xmit 113)
ACK 101 (due to 110) cwnd=14 (xmit 114)
ACK 111 (due to 101) cwnd=5 (xmit 115), exiting fast recovery
Packets 111-114 already in flight
ACK 112 (due to 111) cwnd=5 + 1/5, back in congestion avoidance
Putting it all together: The TCP State Machine (partial)
How are ssthresh, CWND and dupACKcount updated for each event that causes a state transition?
[State machine over three states: slow start, congestion avoidance, fast recovery]
slow start → congestion avoidance: when cwnd > ssthresh
slow start or congestion avoidance → fast recovery: on the 3rd dupACK
fast recovery → congestion avoidance: on a new ACK
any state → slow start: on a timeout
(new ACKs and further dupACKs otherwise keep the machine in its current state)
TCP Flavors
TCP-Tahoe: cwnd = 1 on triple dupACK
TCP-Reno: cwnd = 1 on timeout; cwnd = cwnd/2 on triple dupACK
TCP-NewReno: TCP-Reno + improved fast recovery
TCP-SACK: incorporates selective acknowledgements
TCP Throughput Equation
A Simple Model for TCP Throughput
[Plot: cwnd vs. time (in RTTs); the sawtooth oscillates between Wmax/2 and Wmax, with a timeout at each peak]
½ Wmax RTTs between drops
Avg. ¾ Wmax packets per RTT
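From this sawtooth model (one drop every Wmax/2 RTTs, average window ¾ Wmax), the well-known throughput relation can be derived:

```latex
% One drop per sawtooth cycle; W oscillates between W_max/2 and W_max.
% Packets per cycle = avg window times cycle length:
%   (3/4) W_max \cdot (1/2) W_max = 3 W_max^2 / 8 = 1/p
%   => W_max = \sqrt{8/(3p)}
\[
  \text{Throughput} \approx \frac{\frac{3}{4} W_{\max} \cdot \text{MSS}}{\text{RTT}}
  = \sqrt{\frac{3}{2}} \cdot \frac{\text{MSS}}{\text{RTT}\,\sqrt{p}}
\]
```

Here p is the per-packet drop probability; the 1/√p dependence is what the implications on the following slides build on.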
Some implications: (1) Fairness
Flows get throughput inversely proportional to RTT Is this fair?
Some Implications: (2) How does this look at high speed?
Assume that RTT = 100 ms, MSS = 1500 bytes
What value of p is required to go 100 Gbps? Roughly 2 × 10⁻¹²
How long between drops? Roughly 16.6 hours
How much data has been sent in this time? Roughly 6 petabits
These are not practical numbers!
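The loss-rate figure can be reproduced by inverting the throughput relation Throughput ≈ (MSS/RTT)·sqrt(3/(2p)) (a sketch; the function name is illustrative):

```python
def required_loss_rate(throughput_bps: float, rtt_s: float, mss_bytes: int) -> float:
    # Average window (in packets) needed to sustain the target rate:
    #   w = Throughput * RTT / MSS
    # Inverting Throughput ≈ (MSS/RTT) * sqrt(3/(2p)) gives p = 3/(2 w^2).
    w = throughput_bps * rtt_s / (mss_bytes * 8)
    return 3.0 / (2.0 * w ** 2)
```

For 100 Gbps, RTT = 100 ms, MSS = 1500 bytes this gives p on the order of 2 × 10⁻¹², matching the slide.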
Some implications: (3) Rate-based Congestion Control
One can dispense with TCP and just match the equation: equation-based congestion control. Measure the drop percentage p and set the rate accordingly. Useful for streaming applications.
Some Implications: (4) Lossy Links
TCP assumes all losses are due to congestion
What happens when the link is lossy?
Throughput ~ 1/sqrt(p) where p is loss prob.
This applies even for non-congestion losses!
Other Issues: Cheating
Cheating pays off
Some favorite approaches to cheating: increasing CWND faster than 1 per RTT; using a large initial CWND; opening many connections
Increasing CWND Faster
[Diagram: flow x from A to D and flow y from B to E share bottleneck link C; x increases its window by 2 per RTT, y by 1 per RTT; their limit rates satisfy x = 2y]
Next Time
Closer look at problems with TCP CC
Router-assisted congestion control