1
CSE 524: Lecture 7
Transport layer functions
2
Administrative
• Homework #2 out
• See
http://www.cse.ogi.edu/class/cse524/homework.html
3
Where we’re at…
• Internet architecture and history
• Internet protocols in practice
• Application layer
• Transport layer
• Network layer
• Data-link layer
• Physical layer
4
Transport layer outline
• Transport layer functions
• Specific Internet transport layers
5
Transport Layer
• provide logical communication between application processes running on different hosts
• transport protocols run in end systems
• transport vs network layer services:
• network layer: data transfer between end systems
• transport layer: data transfer between processes – relies on, and enhances, network layer services
[Figure: end hosts run the full stack (application, transport, network, data link, physical); intermediate routers run only network, data link, and physical layers. A logical end-end transport channel connects the two hosts’ transport layers.]
6
Transport Layer Functions
• Demux to upper layer
• Quality of service
• Security
• Delivery semantics
• Flow control
• Congestion control
• Reliable data transfer
7
TL: Demux to upper layer (application)
Recall: segment – unit of data exchanged between transport layer entities, aka TPDU (transport protocol data unit)
Demultiplexing: delivering received segments to the correct application-layer processes
[Figure: a segment (segment header Ht plus application-layer data M, wrapped in a network-layer header Hn) arrives at the receiver and is demultiplexed to one of the processes P1–P4.]
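As a rough illustration (my own sketch, not the lecture’s code), demultiplexing is a lookup from the destination port in the segment header to the queue of the process bound to that port; the `Segment` type and port numbers here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    dst_port: int      # demux key carried in the transport-layer header
    payload: bytes     # application-layer data

def demultiplex(segment, process_queues):
    """Deliver a received segment's payload to the process bound to its port."""
    queue = process_queues.get(segment.dst_port)
    if queue is None:
        return False   # no process bound to this port: segment is dropped
    queue.append(segment.payload)
    return True

# Hypothetical processes bound to ports 80 and 53.
queues = {80: [], 53: []}
demultiplex(Segment(80, b"GET /"), queues)
demultiplex(Segment(53, b"query"), queues)
```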
8
TL: Quality of service
• Provide predictability and guarantees in transport layer
– Operating system issues
• Protocol handler scheduling
• Buffer resource allocation
• Process/application scheduling
• Support for signaling (setup, management, teardown)
– In-network issues
• L4 (transport) switches, L5 (application) switches, and NAT devices
9
TL: Security
• Provide at the transport level
– Secrecy
• No eavesdropping
– Integrity
• No man-in-the-middle attacks
– Authenticity
• Ensure identity of source
• What is the difference between transport layer security and network layer security?
• Does the end-to-end principle apply?
10
TL: Delivery semantics
• Reliable vs. unreliable
• Unicast vs. multicast
• Ordered vs. unordered
• Any others?
11
TL: Flow control
• Do not allow sender to overrun receiver’s buffer resources
– Similar to data-link layer flow control, but done on an end-to-end basis
12
TL: Congestion control
Congestion:
• informally: “too many sources sending too much data too fast for network to handle”
• sources compete for resources inside network
• different from flow control!
• manifestations:
– lost packets (buffer overflow at routers)
– long delays (queueing in router buffers)
13
TL: Congestion
• Why is it a problem?
– Sources are unaware of current state of resource
– Sources are unaware of each other
– In many situations will result in < 1.5 Mbps of “goodput” (more later)
[Figure: a 10 Mbps source and a 100 Mbps source share a 1.5 Mbps bottleneck link.]
14
TL: Causes/costs of congestion: scenario 1
• two senders, two receivers
• one router, infinite buffers
• no retransmission
• large delays when congested
• maximum achievable throughput
[Figure: Hosts A and B send original data at rate λin into a router with unlimited shared output link buffers; the delivery rate is λout.]
15
TL: Causes/costs of congestion: scenario 2
• one router, finite buffers
• sender retransmission of lost packet
[Figure: Hosts A and B send original data at rate λin, plus retransmitted data, for a total offered load λ'in, into a router with finite shared output link buffers; the delivery rate is λout.]
16
TL: Causes/costs of congestion: scenario 2
• no loss: λin = λout (goodput)
• “perfect” retransmission only when loss: λ'in > λout
• retransmission of delayed (not lost) packet makes λ'in larger (than perfect case) for same λout
“costs” of congestion:
• more work (retrans) for given “goodput”
• unneeded retransmissions: link carries multiple copies of pkt
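To make the “more work for the same goodput” cost concrete, here is a small sketch (my own illustration, under a simplified assumption that a fraction p of transmissions is lost and every loss is retransmitted): the offered load λ'in needed to sustain a goodput λout grows as λout / (1 − p):

```python
def offered_load(goodput, loss_rate):
    """Offered load (lambda'_in) needed to sustain a given goodput
    (lambda_out) when a fraction loss_rate of transmissions is lost
    and each loss is retransmitted."""
    assert 0 <= loss_rate < 1
    return goodput / (1.0 - loss_rate)

# With no loss, offered load equals goodput (lambda'_in = lambda_out).
# With 25% loss, the link must carry a third more traffic for the same goodput.
no_loss = offered_load(1.5, 0.0)
with_loss = offered_load(1.5, 0.25)
```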
17
TL: Causes/costs of congestion: scenario 3
• four senders
• multihop paths
• timeout/retransmit
Q: what happens as λin and λ'in increase?
[Figure: Hosts A and B send original data at rate λin plus retransmitted data (λ'in) over multihop paths through routers with finite shared output link buffers; the delivery rate is λout.]
18
TL: Causes/costs of congestion: scenario 3
Another “cost” of congestion:
• when a packet is dropped, any upstream transmission capacity used for that packet was wasted!
[Figure: as the offered load λ'in grows, the delivered throughput λout eventually collapses.]
19
TL: Congestion Collapse
• Increase in network load results in decrease of useful work done
– Spurious retransmissions of packets still in flight
• Classical congestion collapse
• Solution: better timers and congestion control
– Undelivered packets
• Packets consume resources and are dropped elsewhere in network
• Solution: congestion control for ALL traffic
– Fragments
• Mismatch of transmission and retransmission units
• Solutions:
– Make network drop all fragments of a packet (early packet discard in ATM)
– Do path MTU discovery
– Control traffic
• Large percentage of traffic is for control
• Headers, routing messages, DNS, etc.
– Stale or unwanted packets
• Packets that are delayed on long queues
• Solution: better congestion control and active queue management
20
TL: Preventing Congestion Collapse
• End-host vs. network controlled
– Trust hosts to do the right thing
• Hosts adjust rate based on detected congestion (TCP)
– Don’t trust hosts and enforce within network
• Network adjusts rates at congestion points
– Scheduling
– Queue management
• Hard to prevent global collapse conditions locally
• Implicit vs. explicit rate control
– Infer congestion from packet loss or delay
• Increase rate in absence of loss, decrease on loss (TCP Tahoe/Reno)
• Increase rate based on delay behavior (TCP Vegas, Packet pair)
– Explicit signaling from network
• Congestion notification (DECbit, ECN)
• Rate signaling (ATM ABR)
21
TL: Goals for congestion control mechanisms
• Use network resources efficiently
– 100% link utilization, 0% packet loss, low delay
– Maximize network power: throughput/delay
– Efficiency/goodput: X_knee = Σ x_i(t)
• Preserve fair network resource allocation
– Fairness index: (Σ x_i)² / (n · Σ x_i²)
– Max-min fair sharing
• Small flows get all of the bandwidth they require
• Large flows evenly share leftover
– Example
• 100 Mbps link
• S1 and S2 are 1 Mbps streams, S3 and S4 are infinite greedy streams
• S1 and S2 each get 1 Mbps, S3 and S4 each get 49 Mbps
• Convergence and stability
• Distributed operation
• Simple router and end-host behavior
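The max-min sharing example above can be checked with a short sketch (my own code, not from the lecture); it also evaluates the fairness index (Σx_i)² / (n·Σx_i²) from the slide:

```python
def max_min_share(capacity, demands):
    """Max-min fair allocation: satisfy small demands fully,
    then split what remains evenly among the greedy flows."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active:
        share = remaining / len(active)
        satisfied = {i for i in active if demands[i] <= share}
        if not satisfied:            # everyone left is greedy: split evenly
            for i in active:
                alloc[i] = share
            break
        for i in satisfied:          # small flows get exactly what they ask for
            alloc[i] = demands[i]
            remaining -= demands[i]
        active -= satisfied
    return alloc

def fairness_index(x):
    """(Sum x_i)^2 / (n * Sum x_i^2): equals 1.0 when all shares are equal."""
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

# Slide example: 100 Mbps link, two 1 Mbps streams, two greedy streams.
alloc = max_min_share(100, [1, 1, float("inf"), float("inf")])
```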
22
TL: Congestion Control vs. Avoidance
• Avoidance keeps the system performing at the knee/cliff
• Control kicks in once the system has reached a congested state
[Figure: two plots, throughput vs. load and delay vs. load, showing the knee and the cliff.]
23
TL: Basic Control Model
• Of all the ways to do congestion control, the Internet chooses…
– Mainly end-host, window-based congestion control
• Only place to really prevent collapse is at end-host
• Reduce sender window when congestion is perceived
• Increase sender window otherwise (probe for bandwidth)
– Congestion signaling and detection
• Mark/drop packets when queues fill, overflow
• Will cover this separately in later lecture
• Given this, how does one design a windowing algorithm which best meets the goals of congestion control?
24
TL: Linear Control
• Many different possibilities for reaction to congestion and probing
– Examine simple linear controls
– Window(t + 1) = a + b · Window(t)
– Different a_i/b_i for increase and a_d/b_d for decrease
• Supports various reactions to signals
– Increase/decrease additively
– Increase/decrease multiplicatively
– Which of the four combinations is optimal?
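The linear control rule can be written directly; the parameter values below are illustrative placeholders, not from the lecture (b = 1, a > 0 gives additive increase; a = 0, b < 1 gives multiplicative decrease):

```python
def linear_update(window, a, b):
    """One step of the linear control W(t+1) = a + b * W(t)."""
    return a + b * window

w = 10.0
# Additive increase: a_i = 1, b_i = 1 -> window grows by 1 (probe for bandwidth).
w_inc = linear_update(w, a=1.0, b=1.0)
# Multiplicative decrease: a_d = 0, b_d = 0.5 -> window halves on congestion.
w_dec = linear_update(w, a=0.0, b=0.5)
```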
25
TL: Phase plots
• Simple way to visualize behavior of competing connections over time
[Figure: phase plot with User 1’s allocation x1 on one axis and User 2’s allocation x2 on the other, showing the efficiency line and the fairness line.]
26
TL: Phase plots
• What are desirable properties?
• What if flows are not equal?
[Figure: same phase plot of x1 vs. x2; the optimal point is the intersection of the efficiency and fairness lines, with the overload region beyond the efficiency line and the underutilization region below it.]
27
TL: Additive Increase/Decrease
• Both X1 and X2 increase/decrease by the same amount over time
– Additive increase improves fairness and additive decrease reduces fairness
[Figure: phase plot trajectory from T0 to T1 moving parallel to the 45° line.]
28
TL: Multiplicative Increase/Decrease
• Both X1 and X2 increase by the same factor over time
– Extension from origin – constant fairness
[Figure: phase plot trajectory from T0 to T1 along a line through the origin.]
29
TL: Convergence to Efficiency & Fairness
• From any point, want to converge quickly to intersection of fairness and efficiency lines
[Figure: phase plot showing a starting point xH converging toward the intersection of the efficiency and fairness lines.]
30
TL: What is the Right Choice?
• Constraints limit us to AIMD
– AIMD moves towards optimal point
[Figure: phase plot trajectory x0 → x1 → x2 converging toward the optimal point at the intersection of the efficiency and fairness lines.]
31
TL: Reliable data transfer
• Error detection, correction
• Retransmission
• Duplicate detection
• Connection integrity
32
TL: Principles of Reliable data transfer
• important in app., transport, link layers
• characteristics of unreliable channel will determine complexity of reliable data transfer protocol (rdt)
33
TL: Reliable data transfer: getting started
[Figure: send side and receive side of the rdt interface.]
• rdt_send(): called from above (e.g., by app.); passes data to deliver to receiver upper layer
• udt_send(): called by rdt to transfer packet over unreliable channel to receiver
• rdt_rcv(): called when packet arrives on rcv-side of channel
• deliver_data(): called by rdt to deliver data to upper layer
34
TL: Reliable data transfer: getting started
We’ll:
• incrementally develop sender, receiver sides of reliable data transfer protocol (rdt)
• consider only unidirectional data transfer
– but control info will flow in both directions!
• use finite state machines (FSM) to specify sender, receiver
[FSM notation: an arrow from state 1 to state 2 is labeled with the event causing the state transition and the actions taken on the transition. When in a “state”, the next state is uniquely determined by the next event.]
35
TL: Rdt1.0: reliable transfer over a reliable channel
• underlying channel perfectly reliable
– no bit errors
– no loss of packets
• separate FSMs for sender, receiver:
– sender sends data into underlying channel
– receiver reads data from underlying channel
36
TL: Rdt2.0: channel with bit errors
• underlying channel may flip bits in packet
• the question: how to recover from errors:
– acknowledgements (ACKs): receiver explicitly tells sender that pkt received OK
– negative acknowledgements (NAKs): receiver explicitly tells sender that pkt had errors
– sender retransmits pkt on receipt of NAK
• new mechanisms in rdt2.0 (beyond rdt1.0):
– error detection
– receiver feedback: control msgs (ACK, NAK) rcvr -> sender
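A minimal sketch of the rdt2.0 mechanisms (my own illustration, using a toy additive checksum rather than the real Internet checksum): the sender attaches a checksum, the receiver replies ACK or NAK, and the sender retransmits on NAK:

```python
def checksum(data):
    """Toy checksum: sum of bytes mod 256 (illustration only)."""
    return sum(data) % 256

def make_pkt(data):
    return {"data": data, "checksum": checksum(data)}

def receive(pkt):
    """Receiver: deliver data and ACK if intact, otherwise NAK."""
    if checksum(pkt["data"]) == pkt["checksum"]:
        return "ACK", pkt["data"]
    return "NAK", None

def send_rdt20(data, channel):
    """Sender: retransmit the current packet until it is ACKed."""
    while True:
        reply, delivered = receive(channel(make_pkt(data)))
        if reply == "ACK":
            return delivered

# A channel that flips a bit in the first transmission, then behaves.
flips = [True]
def flaky_channel(pkt):
    if flips and flips.pop():
        corrupted = bytes([pkt["data"][0] ^ 0xFF]) + pkt["data"][1:]
        pkt = dict(pkt, data=corrupted)
    return pkt

result = send_rdt20(b"hello", flaky_channel)
```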
37
TL: rdt2.0: FSM specification
[Figure: sender FSM and receiver FSM for rdt2.0.]
38
TL: rdt2.0: operation with no errors
sender FSM:
• In “Wait for call from above”: on rdt_send(data) → sndpkt = make_pkt(data, checksum); udt_send(sndpkt); go to “Wait for ACK or NAK”
• In “Wait for ACK or NAK”:
– on rdt_rcv(rcvpkt) && isNAK(rcvpkt) → udt_send(sndpkt)
– on rdt_rcv(rcvpkt) && isACK(rcvpkt) → return to “Wait for call from above”
receiver FSM:
• In “Wait for call from below”:
– on rdt_rcv(rcvpkt) && corrupt(rcvpkt) → udt_send(NAK)
– on rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) → extract(rcvpkt, data); deliver_data(data); udt_send(ACK)
39
TL: rdt2.0: error scenario
[Same rdt2.0 FSMs as the previous slide, traced through an error scenario: a corrupted packet triggers udt_send(NAK) at the receiver and retransmission of sndpkt at the sender.]
40
TL: rdt2.0 has a fatal flaw!
What happens if ACK/NAK corrupted?
• sender doesn’t know what happened at receiver!
• can’t just retransmit: possible duplicate
What to do?
• sender ACKs/NAKs receiver’s ACK/NAK? What if sender ACK/NAK lost?
• retransmit, but this might cause retransmission of correctly received pkt!
Handling duplicates:
• sender adds sequence number to each pkt
• sender retransmits current pkt if ACK/NAK garbled
• receiver discards (doesn’t deliver up) duplicate pkt
stop and wait: sender sends one packet, then waits for receiver response
41
TL: rdt2.1: sender, handles garbled ACK/NAKs
42
TL: rdt2.1: receiver, handles garbled ACK/NAKs
43
TL: rdt2.1: discussion
Sender:
• seq # added to pkt
• two seq. #’s (0, 1) will suffice. Why?
• must check if received ACK/NAK corrupted
• twice as many states
– state must “remember” whether “current” pkt has 0 or 1 seq. #
Receiver:
• must check if received packet is duplicate
– state indicates whether 0 or 1 is expected pkt seq #
• note: receiver can not know if its last ACK/NAK received OK at sender
44
TL: rdt2.2: a NAK-free protocol
• same functionality as rdt2.1, using ACKs only
• instead of NAK, receiver sends ACK for last pkt received OK
– receiver must explicitly include seq # of pkt being ACKed
• duplicate ACK at sender results in same action as NAK: retransmit current pkt
45
rdt2.2: sender, receiver fragments
sender FSM fragment:
• In “Wait for call 0 from above”: on rdt_send(data) → sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); go to “Wait for ACK 0”
• In “Wait for ACK 0”:
– on rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)) → udt_send(sndpkt)
– on rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0) → proceed to the seq-1 states
receiver FSM fragment:
• In “Wait for 0 from below”:
– on rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt) → extract(rcvpkt,data); deliver_data(data); sndpkt = make_pkt(ACK0, chksum); udt_send(sndpkt)
– on rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)) → udt_send(sndpkt)
46
TL: rdt3.0: channels with errors and loss
New assumption: underlying channel can also lose packets (data or ACKs)
– checksum, seq. #, ACKs, retransmissions will be of help, but not enough
Q: how to deal with loss?
Approach: sender waits “reasonable” amount of time for ACK
• retransmits if no ACK received in this time
• if pkt (or ACK) just delayed (not lost):
– retransmission will be duplicate, but use of seq. #’s already handles this
– receiver must specify seq # of pkt being ACKed
• requires countdown timer
47
TL: rdt3.0 sender
• In “Wait for call 0 from above”: on rdt_send(data) → sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer; go to “Wait for ACK0”
• In “Wait for ACK0”:
– on rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)) → (do nothing)
– on timeout → udt_send(sndpkt); start_timer
– on rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0) → stop_timer; go to “Wait for call 1 from above”
• In “Wait for call 1 from above”: on rdt_send(data) → sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer; go to “Wait for ACK1”
• In “Wait for ACK1”:
– on rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)) → (do nothing)
– on timeout → udt_send(sndpkt); start_timer
– on rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1) → stop_timer; go to “Wait for call 0 from above”
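The sender FSM above can be sketched as a loop (my own simulation; a retry count stands in for the countdown timer, and a hypothetical channel that loses the first copy of every packet stands in for loss):

```python
def rdt30_send(messages, channel, max_tries=10):
    """Alternating-bit stop-and-wait sender over a lossy channel.
    channel(seq, data) returns the ACKed seq number, or None on loss."""
    acked = []
    seq = 0
    for data in messages:
        for _ in range(max_tries):      # retry count stands in for the timer
            ack = channel(seq, data)
            if ack == seq:              # correct ACK: stop timer, flip seq bit
                acked.append(data)
                break
            # otherwise: timeout -> udt_send(sndpkt) again on next iteration
        else:
            raise RuntimeError("too many retransmissions")
        seq ^= 1
    return acked

# Channel that loses the first transmission of every packet, then ACKs.
seen = set()
def lossy_channel(seq, data):
    if (seq, data) not in seen:
        seen.add((seq, data))
        return None                      # packet lost: sender will time out
    return seq

result = rdt30_send(["a", "b", "c"], lossy_channel)
```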
48
TL: rdt3.0 in action
49
TL: rdt3.0 in action
50
TL: Performance of rdt3.0
• rdt3.0 works, but performance stinks
• example: 1 Gbps link, 15 ms e-e prop. delay, 1KB packet:
T_transmit = L / R = 8000 bits / 10⁹ b/sec = 8 microsec
(L = packet length in bits, R = transmission rate in bps)
• U_sender: utilization – fraction of time sender busy sending
U_sender = (L / R) / (RTT + L / R) = .008 / 30.008 = 0.00027
• 1KB pkt every 30 msec -> 33kB/sec thruput over 1 Gbps link
• network protocol limits use of physical resources!
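The slide’s arithmetic can be reproduced directly (a sketch; L, R, and RTT are the slide’s values):

```python
def sender_utilization(L_bits, R_bps, rtt_s):
    """U_sender = (L/R) / (RTT + L/R): fraction of time the sender is busy."""
    t_transmit = L_bits / R_bps
    return t_transmit / (rtt_s + t_transmit)

# 1 KB packet (8000 bits), 1 Gbps link, 30 ms RTT (2 x 15 ms prop. delay).
u = sender_utilization(8000, 1e9, 0.030)   # approx 0.00027
```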
51
TL: rdt3.0: stop-and-wait operation
[Timeline: sender transmits the first packet bit at t = 0 and the last bit at t = L/R; the first bit arrives at the receiver, the last bit arrives and the receiver sends an ACK; the ACK arrives and the sender sends the next packet at t = RTT + L/R.]
U_sender = (L / R) / (RTT + L / R) = .008 / 30.008 = 0.00027
52
TL: Pipelined protocols
Pipelining: sender allows multiple, “in-flight”, yet-to-be-acknowledged pkts
– range of sequence numbers must be increased
– buffering at sender and/or receiver
• Two generic forms of pipelined protocols: Go-Back-N, selective repeat
53
TL: Pipelining: increased utilization
[Timeline: sender transmits three packets back-to-back starting at t = 0; the ACKs for the 1st, 2nd, and 3rd packets arrive in turn, and the sender continues at t = RTT + L/R.]
U_sender = (3 · L / R) / (RTT + L / R) = .024 / 30.008 = 0.0008
Increase utilization by a factor of 3!
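Generalizing the stop-and-wait formula, with n packets in flight the utilization scales by n until the pipe is full (my sketch, capped at 1):

```python
def pipelined_utilization(n, L_bits, R_bps, rtt_s):
    """U_sender with n packets in flight: min(1, n*(L/R) / (RTT + L/R))."""
    t_transmit = L_bits / R_bps
    return min(1.0, n * t_transmit / (rtt_s + t_transmit))

# Three packets in flight triples the stop-and-wait utilization.
u3 = pipelined_utilization(3, 8000, 1e9, 0.030)   # approx 0.0008
```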
54
TL: Go-Back-N
Sender:
• k-bit seq # in pkt header
• “window” of up to N consecutive unack’ed pkts allowed
• ACK(n): ACKs all pkts up to, including seq # n – “cumulative ACK”
– may receive duplicate ACKs (see receiver)
• timer for each in-flight pkt
• timeout(n): retransmit pkt n and all higher seq # pkts in window
55
TL: GBN: sender extended FSM
56
TL: GBN: receiver extended FSM
receiver simple:
• ACK-only: always send ACK for correctly-received pkt with highest in-order seq #
– may generate duplicate ACKs
– need only remember expectedseqnum
• out-of-order pkt:
– discard (don’t buffer) -> no receiver buffering!
– ACK pkt with highest in-order seq #
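The sender and receiver rules above can be sketched in a few lines (my own simulation; a hypothetical channel drops one data packet once to exercise the timeout path):

```python
class GBNSender:
    def __init__(self, window):
        self.base, self.nextseq, self.window = 0, 0, window
        self.buffer = {}                 # unacked packets, keyed by seq #

    def send(self, data):
        if self.nextseq >= self.base + self.window:
            return None                  # window full: refuse new data
        self.buffer[self.nextseq] = data
        pkt = (self.nextseq, data)
        self.nextseq += 1
        return pkt

    def on_ack(self, n):                 # cumulative ACK covers all pkts <= n
        self.base = max(self.base, n + 1)

    def on_timeout(self):                # retransmit pkt base..nextseq-1
        return [(i, self.buffer[i]) for i in range(self.base, self.nextseq)]

class GBNReceiver:
    def __init__(self):
        self.expected, self.delivered = 0, []

    def on_pkt(self, seq, data):
        if seq == self.expected:         # in order: deliver up
            self.delivered.append(data)
            self.expected += 1
        return self.expected - 1         # ACK highest in-order seq # (else dup ACK)

sender, receiver = GBNSender(window=4), GBNReceiver()
drop = {2}                               # channel loses packet 2 once
for d in ["a", "b", "c", "d"]:
    seq, data = sender.send(d)
    if seq in drop:
        drop.discard(seq)                # lost: no ACK comes back
    else:
        sender.on_ack(receiver.on_pkt(seq, data))
for seq, data in sender.on_timeout():    # timeout: go back to base
    sender.on_ack(receiver.on_pkt(seq, data))
```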
57
TL: GBN in action