Transport Layer
Moving Segments
Transport Layer Protocols
• Provide a logical communication link between processes running on different hosts as if directly connected
• Implemented in end systems but not in network routers
• (Possibly) break messages into smaller units, adding transport layer header to create segment
• We have two protocols: TCP and UDP
An Analogy
• Ann, her brothers and sisters on the West Coast; Bill and family on the East Coast
– Application messages = letters in envelopes
– Processes = cousins
– Hosts (end systems) = houses
– Transport layer protocol = Ann and Bill
– Network layer protocol = Postal Service
UDP
• User Datagram Protocol
• Provides an unreliable, connectionless service to a process
• Provides integrity checking by including error detection fields in segment header
TCP
• Transmission Control Protocol
• Provides reliable data transfer
– Flow control
– Sequence numbers
– Acknowledgments
– Timers
• Provides congestion control
• Provides integrity through error checking
Multiplexing and Demultiplexing
• Multiplexing is the job of gathering data chunks from sockets and creating segments
• Demultiplexing is delivering data chunks (segments minus the transport header) to the correct socket
Segment Identification
• UDP: destination IP address and port number
• TCP: source IP, source port, destination IP and destination port
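These lookup rules can be sketched as dictionary lookups (socket names, addresses, and table layout here are illustrative, not a real OS API):

```python
# Hypothetical demux tables; keys follow the slide's identification rules
udp_sockets = {("10.0.0.5", 53): "dns_socket"}                 # (dest IP, dest port)
tcp_sockets = {("10.0.0.9", 4321, "10.0.0.5", 80): "conn_A"}   # full 4-tuple

def demux_udp(dest_ip, dest_port):
    # UDP: only the destination identifies the socket; source is ignored
    return udp_sockets.get((dest_ip, dest_port))

def demux_tcp(src_ip, src_port, dest_ip, dest_port):
    # TCP: all four values must match a connection's socket
    return tcp_sockets.get((src_ip, src_port, dest_ip, dest_port))
```

Note that two UDP segments from different senders to the same destination port land on the same socket, while two TCP segments from different sources go to different connection sockets.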
TCP Handshake
• Server application has a “welcome socket” that waits for connection requests
• Client generates a connection-establishment request (includes source IP and port at client)
• Server creates new port (socket) for client
• Both sides allocate resources for connection
UDP
• Defined in RFC 768
• Does about as little as a transport protocol can do
• Attaches source and destination port numbers and passes segment to network layer
• No handshaking before segment is sent
• DNS uses UDP
Why use UDP?
• Finer application-level control over what data is sent and when
• No connection establishment (thus no delays)
• No connection state information
• Only 8 bytes of header overhead per segment
• Out-of-order segments can be discarded
• Lack of congestion control can lead to high loss rates if the network is busy
UDP Checksum
• For error detection only; can't fix error(s)
• Add the segment's 16-bit words (with wrap-around of any carry)
• Take the 1's complement (invert each bit)
• Send this value in the checksum field
• At the receiver, all words are added (including the checksum); the result should be 1111111111111111 (all sixteen bits set)
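The add-with-wrap-around and complement steps above can be sketched in Python (a minimal illustration, not an optimized implementation):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum with carry wrap-around, then inverted."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # wrap the carry around
    return ~total & 0xFFFF       # 1's complement of the running sum
```

A receiver that sums all words plus this checksum (with the same wrap-around) gets 0xFFFF, i.e., all sixteen bits set, when no error is detected.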
Principles of Reliable Data Transfer
• No transferred data bits are corrupted
• All are delivered in the order sent
• This gets complicated because lower layer (IP) is a best-effort (no guarantees) delivery service
Stop and Wait protocol
• Sender sends packet
• Receiver gets packet, checks for accuracy
• Receiver sends acknowledgement back
• If the sender times out, it presumes loss (an implicit NAK) and resends the packet
• Use sequence number to identify packets sent/resent
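The sender side of these steps can be sketched as an alternating-bit state machine (class and method names are illustrative):

```python
class StopAndWaitSender:
    """Alternating-bit sender: at most one packet outstanding at a time."""
    def __init__(self):
        self.seq = 0                 # current sequence bit (0 or 1)

    def send(self, data):
        return (self.seq, data)      # packet tagged with the sequence bit

    def on_ack(self, ack_seq):
        if ack_seq == self.seq:      # ACK matches the outstanding packet
            self.seq ^= 1            # flip the bit; ready for next packet
            return True
        return False                 # stale/duplicate ACK: keep waiting

    def on_timeout(self, data):
        return (self.seq, data)      # presume loss and resend same packet
```

The single sequence bit is enough here because only one packet can ever be in flight.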
A little math
• West coast to East coast transfer; RTT = 30 msec; channel with 1 Gbps rate; packet size of 1000 bytes (8000 bits)
• Time needed to transmit the packet is 8 microsec
• 15.008 msec for the packet to reach the East coast
• ACK arrives back at the sender after 30.008 msec
• Utilization is 0.00027; effective throughput is about 267 kbps
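The arithmetic behind these numbers, worked out step by step:

```python
# Parameters from the slide
R = 1e9            # channel rate: 1 Gbps
L = 8000           # packet size: 1000 bytes = 8000 bits
RTT = 0.030        # round-trip time: 30 msec

t_transmit = L / R                    # 8 microsec to push the packet out
t_ack = RTT + t_transmit              # 30.008 msec until the ACK returns
utilization = t_transmit / t_ack      # fraction of time the sender is busy
throughput = utilization * R          # effective bits per second (~267 kbps)
```

A 1 Gbps link used at 0.027% is the motivation for pipelining in the next section.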
Pipelining
• Range of sequence numbers must increase
• May have to buffer packets on both sides of link
• Error response either Go-Back-N or Selective Repeat
Go-Back-N (GBN)
• Sender allowed to transmit multiple packets but is constrained to have no more than some maximum, N, not ACK’ed
• N is window size and GBN is sliding window protocol
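The sender-side bookkeeping for the sliding window can be sketched as follows (a simplified model; real implementations track timers and buffers too):

```python
class GBNSender:
    """Go-Back-N bookkeeping: at most `window` (N) unACKed packets."""
    def __init__(self, window=4):
        self.base = 0            # oldest unACKed sequence number
        self.next_seq = 0        # next sequence number to assign
        self.window = window     # N

    def can_send(self):
        return self.next_seq < self.base + self.window

    def send(self):
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def on_ack(self, n):
        # cumulative ACK: everything through n is confirmed
        self.base = max(self.base, n + 1)

    def on_timeout(self):
        # go back N: resend every unACKed packet, in order
        return list(range(self.base, self.next_seq))
```

Each cumulative ACK slides `base` forward, which is what makes the window "sliding."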
Selective Repeat
• Avoids unnecessary retransmission by having the sender retransmit only those packets that it suspects were in error
• Big difference is that we will buffer (keep) out-of-order packets
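The receiver-side buffering of out-of-order packets can be sketched like this (an illustrative model, ignoring window limits):

```python
class SRReceiver:
    """Selective Repeat receiver: holds out-of-order packets in a buffer."""
    def __init__(self):
        self.expected = 0        # next in-order sequence number
        self.buffer = {}         # out-of-order packets held back

    def receive(self, seq, data):
        if seq >= self.expected:
            self.buffer[seq] = data      # keep it rather than discard it
        delivered = []
        # release the in-order prefix to the application
        while self.expected in self.buffer:
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered
```

Contrast with Go-Back-N, where the receiver would simply discard packet 1 if packet 0 had not yet arrived.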
TCP
• Client first sends a TCP segment
• Server responds with a second segment
• Client responds with a third segment (that can optionally have message data)
• Connection is point-to-point, not one-to-many
• Can be full duplex
TCP Timer
• We need to know when data is lost
• We can measure round trip time
• Timer expiration could be due to congestion in the network, so…
• On a timeout, we double the timeout value for the next interval, and revert to the estimated value once an ACK is received
Flow Control
• The receiver has a finite receive buffer; if the application is slow to read from it, the buffer can overflow
• Receiver sends value of receive window to sender (with each ACK)
• Sender makes sure un-ACK’ed data does not exceed receive window size.
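The sender's check can be sketched as a single invariant (variable names are illustrative):

```python
def bytes_allowed(last_byte_sent, last_byte_acked, rcv_window):
    """How much more data the sender may transmit right now."""
    in_flight = last_byte_sent - last_byte_acked   # unACKed data
    # flow-control invariant: in-flight data must fit the receive window
    return max(0, rcv_window - in_flight)
```

When the advertised window shrinks to the amount already in flight, the sender must pause until new ACKs (carrying a fresh window value) arrive.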
Closing a connection
• Client issues a close command (FIN=1)
• Server sends ACK
• Server sends a close command (FIN=1)
• Client sends ACK
• Resources are now deallocated
Congestion Control
• Theory 1: as the supply (feed) rate increases, output increases to the limit of the output line and then levels off
• Theory 2: as the feed rate increases, delay grows exponentially
• As the feed rate grows, we start losing packets at the router, forcing retransmission
• TCP has to infer that this is congestion
TCP Congestion Control
• Additive-increase, multiplicative-decrease
• Slow start
• Reaction to timeout events
Speed Control
• Congestion window = amount of data “in the pipeline”
• With congestion (lost packet) we halve the window for each occurrence
• With ACKs, we increase window by set amount (Maximum Segment Size)
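The additive-increase, multiplicative-decrease rule above can be sketched as one update function (the MSS value is an assumed typical figure):

```python
MSS = 1460  # assumed maximum segment size in bytes

def aimd_update(cwnd, loss):
    """One congestion-window update per RTT under AIMD."""
    if loss:
        return max(MSS, cwnd // 2)   # multiplicative decrease: halve
    return cwnd + MSS                # additive increase: one MSS per RTT
```

This gives the familiar sawtooth pattern: the window ramps up linearly, then halves on each loss.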
Slow Start
• Start at one MSS – send one packet
• Double that value each time an ACK comes back – send two, then four, then …
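The doubling pattern can be sketched as follows (a simplified model that ignores the threshold where slow start ends; MSS is an assumed typical value):

```python
MSS = 1460  # assumed maximum segment size in bytes

def slow_start_windows(rtts):
    """Congestion window per RTT during slow start: doubles each round."""
    cwnd = MSS
    history = [cwnd]
    for _ in range(rtts):
        cwnd *= 2            # each fully ACKed window doubles the next one
        history.append(cwnd)
    return history
```

Despite the name, this is exponential growth: 1, 2, 4, 8, ... segments per RTT.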