
Transcript of: Computer Networks - Theory and Practice, Sourav Bhattacharya, Computer Science & Engineering, Arizona State University, CSE 434 / 598, Spring 2001

Page 1:

Computer Networks - Theory and Practice

Sourav Bhattacharya

Computer Science & Engineering

Arizona State University

CSE 434 / 598, Spring 2001

Page 2:

Class Objectives

Technical Goals:
- Provide basic training in the area of "Computer and Communication Networks"
- A comprehensive protocol/algorithm level understanding of the "essentials" of a network
- Concept driven, not implementation/package driven
- Focus on core communication aspects, and not on cosmetics
- Achieve a level where you are ready to learn about specific network implementations

Other Goals: Learn to learn, job well done, intellectual honesty, mutual "good wish", promote research careers, …

Page 3:

Success Criteria

At the end of the class:
- Class does well in the tests and projects
- Class has learnt the subject matter from the instructor
- Instructor has inspired a few (at least !!) career advancements
- Instructor has improved the class material …

Don't Do List:
- Instructor: Demonstration of "research"
- Class: Inhibitions, being shy to ask questions or interrupt...

Page 4:

Text and Syllabus

Computer Networks, by Andrew S. Tanenbaum, 3rd ed., Prentice Hall, 1996

Flow of Discussion:
- Chapters 1 and 2 - background, assumed !! (You are graduate students or undergrad seniors !!)
- Chapter 4 - Medium Access Sublayer
- Chapter 3 - Data Link Layer
- Chapter 5 - Network Layer
- Chapter 6 - Transport Layer
- Sporadic coverage: Security and Encryption, Network Management, Multimedia, WWW, ... (as time permits)

Page 5:

References

High-Speed Networks: TCP/IP and ATM Design Principles, by William Stallings, Prentice Hall

Network Analysis with Applications, by William D. Stanley, Prentice Hall

Local and Metropolitan Area Networks, by William Stallings, Prentice Hall

Protocol Design for Local and Metropolitan Area Networks, by Pawel Gburzynski, Prentice Hall

Introduction to Data Communications: A Practical Approach, by Larry Hughes, Jones and Bartlett Publishers.

High-Speed LANs Handbook, by Stephen Saunders, McGraw-Hill

Page 6:

The Network Design Problem: At A Glance

Design Analogy: N persons can successfully and efficiently communicate amongst themselves, sharing individual, group and global views

- Step 1: Two remote persons can communicate
- Step 2: Three or more remote persons can efficiently share a common medium to exchange distinct views (but these people have to do the entire co-ordination by themselves)
- Step 3: Increasingly convenient ways of doing Step 2 - abstraction, Quality of Service, value-added features...

Page 7:

Layered Protocol Hierarchies

Basic data transfer occurs at the lowest layer

The rest is merely solving "human problems":
- Abstraction and convenience of access
- Inter-operability
- Making sure that multiple users do not fight
- Or, if they do, at least gracefully, and with a recourse

[Diagram: two protocol stacks (Layer 1, Layer 2, Layer 3, ..., Layer N) connected through the Physical Medium]

Page 8:

Quality of Connections

Issues:
- Layered Protocol Interfaces
- Protocol Header, and Body
- Network Architecture

Connection Types:
- Simplex vs. Duplex
- Connection-oriented vs. Connection-less (datagram)
- Life of connection vs. Delay of setting up a new connection
- QoS of Connections

Page 9:

OSI Model

Open System Interconnection (OSI) Model

Data header at each layer

Real data transfer at the lowest layer

Logical data flow at upper layers

[Diagram: two OSI stacks - Physical, Data Link, Network, Transport, Session, Presentation, Application - connected through the Physical Medium]

Page 10:

TCP / IP Model

Application layer controls everything above the Transport Layer (theme: “reduce the overhead”)

[Diagram: two TCP/IP stacks - Physical, Data Link, Network (IP), Transport (TCP or UDP), Application (Telnet, FTP, SMTP, DNS, ...) - over the Physical Medium; underlying networks: ARPANET, NSFNet, various LANs, ...]

Page 11:

Network Standardization

- International Standards Organization (ISO): various TCs and Working Groups
- ANSI (American National Standards Institute)
- NIST
- IEEE
- Internet Engineering Task Force (IETF): produces a stream of RFCs

Page 12:

Medium Access Control

Chapter 4 of the Text

Page 13:

Problem Introduction

Two or more contenders for a common medium
- Contenders: independent nodes or stations, each with its own data/information to distribute
- Distribute: one-to-one, one-to-many, one-to-all (routing, multicast, broadcast)
- Data/Information: anything from a bit to a long message stream

Common medium
- Fiber, cable, radio frequency channel, ...
- Characteristics of the media -- refer to Chapter 2

Page 14:

The Most Obvious Solution

N cars to share a common road. Two approaches:
- Slice the road width into N parallel parts, i.e., lanes (hopefully each part will still be wide enough for a car). Each car drives in its own lane.
- Regulate the cars to drive on a rotation basis, i.e., one after the other. Careful co-ordination is critical. No width restriction - each car can enjoy the entire road width !!

Problems: Naive and simplistic; opportunity for resource wastage

Page 15:

The Two Naive Solutions...

Frequency Division Multiplexing (FDM)
- For N user stations, partition the bandwidth into N (equally sized ?) frequency bands
- Each user transmits onto a particular bandwidth slot
- No contention. But, likely under-utilization of bandwidth.

Time Division Multiplexing (TDM)
- For N user stations, create a cycle of N (equally sized ?) time slots
- Each user takes its turn, and transmits only during the corresponding time slot
- No contention. But, likely under-utilization of the time slots

Page 16:

Channel Allocation on “as needed” Basis

Instead of a priori partitioning of the channel resource (bandwidth, time) - employ dynamic resource management

Advantages include: reduced channel resource wastage
Disadvantages: requires explicit (or implicit) co-ordination of transmission schedules

Co-ordination can be of several categories:
- Detection and Correction
- Avoidance
- Prevention (contention-free !)

Page 17:

Model and Assumptions

User stations or Nodes
- Probability of a frame being generated in an interval T is L*T, where L is a constant for a particular user
- Independent in their transmissions; can transmit a frame at any time
- Concern: this model is not valid for correlated transmissions (e.g., performance analysis for a set of parallel/distributed programs or threads)

Single Channel Assumption
- No second medium is available among the stations to communicate (data, and/or control information)
- Concern: this assumption is not true for many environments, where the control information may be carried on a second channel

Page 18:

Model (contd...)

Carrier Sense, or No Carrier Sense
- Before transmission, nodes can (or cannot) sense whether the channel is currently busy due to another user's message
- Protocols can be a lot more efficient if "Carrier Sense" is true
- Issue: it is hardware and analog device specific

Activation Instances
- Continuous time: a message can be attempted for transmission at any time. There is no master clock.
- Slotted time: a message can be delivered only at a fixed set of points in time. The time axis is discretized. Requires a master clock.

Page 19:

ALOHA - A Simple Multiple Access Protocol

N user stations, randomly generating data frames
1. Anytime data is ready ---> transmit on the media (without care for collisions)
2. Listen to the channel, and find out if there is/was a collision
3. If collision, then wait for a random time and go to step 1

Collision vulnerability period
- If frame time = t, then vulnerability period = 2t
- Reason: two frames can collide (head, tail) or (tail, head) at the extreme ends
- Refer to Figure 4-2

Page 20:

insert figure 4-2 here

Page 21:

Performance of ALOHA

A lot of nodes are suddenly jumping onto the shared, common channel - what can you expect about the performance ?

- G = # frame transmission attempts per frame time (including new frames and re-transmissions)
- Thus, during a 2-frame vulnerability period (refer Fig 4-2) there will be 2G frames generated on average
- Probability that k frames are generated during a given vulnerability period = ((2G)^k * e^(-2G)) / k!
- Probability that no frame is generated, i.e., k = 0 => e^(-2G)
- Successful transmissions, or throughput S = rate * prob(no one else transmits) = G * e^(-2G) (a quick numerical check follows below)
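A minimal numerical sketch (mine, not from the slides) of the formula above: it evaluates S = G * e^(-2G) over a range of offered loads and confirms the maximum of about 0.18 near G = 0.5.

    import math

    def pure_aloha_throughput(G):
        # S = G * e^(-2G): offered load times the probability that no other
        # frame starts anywhere in the 2-frame vulnerability period
        return G * math.exp(-2 * G)

    best = max((pure_aloha_throughput(g / 100), g / 100) for g in range(1, 301))
    print("best throughput %.3f at G = %.2f" % best)   # ~0.184 at G = 0.50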

Page 22:

ALOHA => Slotted ALOHA

Best case performance of ALOHA
- G = 0.5, Throughput = 1/(2e), nearly 18%
- What else can you expect from purely random, no-carrier-sense protocols ?

Slotted ALOHA
- Like ALOHA in every sense, except when a transmission request can originate
- Discretize the time axis into slots, 1 slot = 1 frame width
- A node can only transmit a frame at a slot beginning
- Requires a master clock, typically one node transmitting a special control signal at the beginning of each slot
- Issue: is clock synchronization that easy ?

Page 23:

Performance of Slotted ALOHA

Effect of restricted transmission request time instants
- Vulnerability period is reduced from 2t to t, where t is the frame width (refer Figure 4-2, and explain why ?)
- Probability of no other transmission during one frame = e^(-G)
- Thus, Throughput S = G * e^(-G)
- Best throughput is for G = 1, with nearly 37% throughput
- 37% utilization, 37% empty slots and 26% collisions
- About twice as good as pure ALOHA

Exercise
- Increasing G would reduce the # of empty slots. Why will that not increase the throughput ? Work out a few examples (see the table sketched below)...
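A small table (my own sketch, not from the slides) for the exercise: as G grows past 1, empty slots do become rarer, but collisions grow even faster, so the success fraction G * e^(-G) falls.

    import math

    # Slotted ALOHA with Poisson arrivals, mean G frames per slot:
    #   P(empty) = e^-G, P(success) = G * e^-G, P(collision) = 1 - (1 + G) * e^-G
    for G in (0.5, 1.0, 2.0, 4.0):
        empty = math.exp(-G)
        success = G * math.exp(-G)
        collision = 1 - empty - success
        print(f"G={G:.1f}  empty={empty:.2f}  success={success:.2f}  collision={collision:.2f}")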

Page 24:

ALOHA ==> Slotted ALOHA

Insert Fig 4-3 here

Page 25:

Carrier Sense Protocols

Best performance of Slotted ALOHA = 1/e
- Since nodes cannot sense the carrier prior to transmission
- In other words, they cannot avoid collisions, they can only detect them

Carrier Sense Protocols
- Can listen for the carrier, i.e., the shared channel, to become idle and then transmit
- Carrier Sense Multiple Access (CSMA) class of protocols

Persistent CSMA
- Also called 1-persistent, since it transmits with probability = 1
- A node with ready data:
  1. Listen for an idle channel; if the line is busy then WAIT persistently
  2. When the channel is free, transmit the packet, and then listen for a collision
  3. If collision, then sleep for a random time and go to Step 1

Page 26:

Persistent CSMA

How does contention resolution occur ?
- Depends on the "randomness" of the wait periods
- If a set of random wait periods, one from each user, is in effect then eventually everyone will get through...

Role of Propagation Delay
- Collision detection time depends on the propagation delay
- If d is the propagation delay, then worst case collision detection time = 2d
- Even if d = 0, there may still be some collisions
- Analogous to round table conference discussions among human users

Improvement over ALOHA
- Nodes do not jump in in the middle of another node's transmission

Page 27:

Non-Persistent CSMA

Persistent CSMA
- When looking for an idle channel, it keeps a continuous wait - a greedy "seize asap" mode
- Consequence: multiple contenders, each in the "seize asap" mode, will lead to follow-up collisions

Non-Persistent CSMA
- If an idle channel is not found, the node desiring to transmit does not wait in a "grab as soon as available" mode
- Instead, the node attempting to transmit goes into a random wait period. It wakes up at the end of the random wait, and re-tries for an idle channel
- Benefit: reduced contention (note: it includes a 2-level randomness)
  - Random wait, if an idle channel was not found
  - Random wait, if an idle channel was found and the frame transmitted, but there was a collision

Page 28:

Non-Persistent CSMA => p-Persistent CSMA

Contention reduction strategy
- Involve more and more random delays in each user's activities
- Throughput will increase, but individual user delays will also increase

p-Persistent CSMA
- Channel is time slotted, similar to Slotted ALOHA
- A node with ready data (see the sketch after this list):
  1. Look for an idle channel; if the channel is busy then wait for the next slot
  2. If an idle channel is found, then transmit with probability = p (i.e., defer until the next slot with prob = 1-p)
  3. If the next slot is also idle, then transmit with prob = p, and defer to the second next slot with prob = 1-p
  4. Continue until the data is transmitted, or some other node starts transmitting; if so, wait for a random time and go to Step 1
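A minimal sketch of the per-slot decision described above (my own Python, not from the slides); channel_idle and send_frame are hypothetical stand-ins for the real channel primitives.

    import random

    def p_persistent_slot(p, channel_idle, send_frame):
        """One time slot of p-persistent CSMA for a station holding a ready frame.
        Returns True if the frame was sent; False means the station defers and the
        caller re-invokes this on the next slot (or backs off after a collision)."""
        if not channel_idle():
            return False               # busy: wait for the next slot
        if random.random() < p:
            send_frame()               # idle, and the coin says transmit
            return True
        return False                   # idle, but defer with probability 1 - p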

Page 29:

Why p-Persistent CSMA ?

The more probabilistic events and randomness => the less contention and the higher the throughput

Degrees of uncertainty
- Persistent CSMA = 1: random delay when a collision occurs
- Non-Persistent CSMA = 2: random delay both at the channel seek, and at the collision
- p-Persistent CSMA = 2 (but a different kind from Non-Persistent)
  - Random delay at collision (as in Non-Persistent)
  - Deterministic seizure attitude at channel seek time (like Persistent)
  - Slotted time (like Slotted ALOHA)
  - But, non-deterministic transmission even when the channel is idle (an additional level of uncertainty beyond Persistent CSMA)

Page 30:

Performance of CSMA Class of Protocols

Throughput and individual user delays are against each other

Throughput
- Non-persistent is better than Persistent
- Non-Persistent vs. p-Persistent: depends on the value of p; both have 2 degrees of uncertainty, but of different kinds
- Refer to Figure 4-4 for an aggregate performance depiction

In increasing order of throughput:
1. Pure ALOHA
2. Slotted ALOHA
3. 1-Persistent (or Persistent) CSMA
4. 0.5-Persistent CSMA
5. Non-Persistent, 0.1-Persistent CSMA
6. 0.01-Persistent CSMA

Page 31:

include figure 4-4 here

Page 32:

CSMA with Collision Detection

CSMA does not abort a transmission when a collision occurs
- Colliding transmissions will continue (until the frame completes)
- A fair (!!) amount of garbage is generated once a collision occurs
- Why not abort the transmission as soon as a collision is detected ?

CSMA with Collision Detection
- IEEE 802.3, Ethernet protocol
- Quickly terminate damaged frames
- Contention periods are a single slot each, not a frame width (Fig 4-5)
- Resource wastage = width of the slots (and not that of the frames)
- Slot width = worst case signal propagation delay
  - Actually, twice that; includes the delay of the analog devices as well

Page 33:

include fig 4-5 here

Page 34:

Collision-Free Protocols

Channel co-ordination can be of several categories
- Detection and Correction
- Avoidance
- Prevention (contention-free !)

Static MAC Policies: collision-free by design, i.e., avoidance; resource utilization may be questionable

Dynamic MAC with Collision Detection: like CSMA/CD

Dynamic MAC with contention prevention: the protocol does a few extra steps at run-time to prevent collisions

Page 35:

Reservation-Based Dynamic MAC Protocols

Protocols consist of two phases
- Reservation or bidding process
- Actual usage, after the bidding process

Reservation phase
- All nodes with data to transmit go through the reservation phase
- Result: one or more winners ==> implicit reservations

Transmission phase
- The winning station(s) transmit (one after another)

Bit-Map Protocol - One Reservation Policy
- Basic idea stems from the linked list approach
- Refer to Figure 4-6

Page 36:

include fig 4-6 here

Page 37:

Bit-Map Protocol

N Contention Slots for N stations
- Node i transmits a "1" in Slot i iff node i has data to send
- The collection of 1's in the Contention Slots indicates which stations have data (to transmit)
- Followed by the Transmission Phase: allocate frames only for those nodes with a 1 in the Contention Slots

Performance at low load:
- Frames' time << Contention Slot time
- Contention Slot delay for a low-numbered station -- 1.5N (why ?)
- Contention Slot delay for a high-numbered station -- 0.5N (why ?)
- Average wait = N slots (sloppy analysis !!)
- For d-bit data frames, efficiency = d / (d + N)

Page 38:

Performance of Bit-Map Protocol

At high load
- Multiple (k) frames per group of N Contention Slots
- Efficiency = k*d / (N + k*d)
- For k ==> N, efficiency = d / (d + 1) (a quick check follows below)

Questions
- Is this a realistic analysis ?
- Can you do a queueing analysis for this protocol ?
- Is there any fundamental bottleneck ?
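A tiny check (mine, not from the slides) of the efficiency formula above, contrasting the low-load and saturated cases; N and d are arbitrary illustrative values.

    def bitmap_efficiency(d, N, k):
        # k data frames of d bits each are sent per round of N contention (bit-map) bits
        return (k * d) / (N + k * d)

    N, d = 32, 1000
    print(bitmap_efficiency(d, N, k=1))   # low load, one frame per bit map -> d / (d + N)
    print(bitmap_efficiency(d, N, k=N))   # saturated, every station sends  -> ~ d / (d + 1)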

Page 39:

Binary Countdown Protocol

2-phase protocol: Reservation followed by Transmission

Reservation phase
- Each station with ready data transmits its bit address in msb-to-lsb order
- At each bit position, binary OR of the respective bits from all nodes
- If a node with a 0-bit observes a 1 after the OR operation, it withdraws from the competition. The last surviving node is the winner.

Transmission phase: the (single) winner transmits the data

Example: nodes 3, 4 and 6 have data to transmit (see the sketch after this list)
- Node ids (0011), (0100) and (0110) get transmitted
- First bit: 0, 0, and 0
- Second bit: 0, 1 and 1 ==> Node 3 withdraws
- Third bit: none, 0, and 1 ==> Node 4 withdraws
- Node 6 is the winner. Node 6 transmits its data frame.
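A small Python sketch of the arbitration just described (assumptions: 4-bit station addresses and an idealized wired-OR channel); it reproduces the slide's example with stations 3, 4 and 6.

    def binary_countdown(ready_ids, width=4):
        """Return the winner among stations with ready data (highest id wins)."""
        alive = set(ready_ids)
        for bit in reversed(range(width)):            # transmit address bits msb -> lsb
            sent_one = {sid for sid in alive if (sid >> bit) & 1}
            if sent_one:                              # wired-OR of this bit position is 1:
                alive = sent_one                      # stations that sent a 0 withdraw
        return max(alive)

    print(binary_countdown({3, 4, 6}))   # -> 6, as in the example above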

Page 40:

Performance of Binary Countdown Protocol

Note: only a single winner in this approach
- The node with the highest bit address
- This approach may starve the lower-numbered users

For N nodes, log2(N) address bits will be transmitted
- d-bit frames ==> efficiency = d / (d + log2(N))

Enhancements:
- Bit ordering different from the (msb --> lsb) type
- Parallelized version of binary countdown, instead of serial
- Efficiency can reach up to 100%

Page 41:

Limited Contention Protocols

Design features:
- Low traffic load - collision detection approaches are better; they offer low delay, and not many collisions occur anyway
- High traffic load - collision-free protocols are better; they have higher delay, but at least the channel efficiency is much better...
- What if we combine the advantages of the two ?

Limited Contention Protocols
- Idea: do not let every station compete for the channel with equal probability. Allow different groups of nodes to compete at different times...
- Refer to Figure 4-8, for Success Probability = f(# ready stations)
- Question: give an analogy of this idea using the car/road domain...

Page 42:

include fig 4-8 here

Page 43:

Adaptive Tree Walk - Limited Contention Protocol

Group the N nodes as a log(N)-height binary tree
- Tree leaves are the N nodes

Starting phase, or immediately after a successful transmission (see the sketch after this list)
- All N nodes can compete for the channel
- If one of the nodes acquires the channel, then repeat with all "N nodes" as the contenders' list
- Else, if collision, then narrow the contenders' list to the left subgroup of nodes
  - If one of the nodes acquires the channel, then shift to the right sibling group of nodes for the next slot
  - Else, if there is a further collision, narrow down the contenders' list to the leftward children subtree (repeat...)

Refer to Figure 4-9 - essentially walk around with various subgroups of the tree leaves as the contenders' list at each time
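An idealized sketch (my own simplification, not from the slides) of one resolution round: given the set of stations that currently hold frames, walk the tree depth-first, enabling a subtree per slot and splitting it only when that slot collides.

    def tree_walk(ready, stations):
        """One contention period of the adaptive tree walk.
        ready: station ids holding a frame; stations: the leaves under the enabled node.
        Returns the sequence of slots: 'idle', a station id (success), or 'collision'."""
        active = [s for s in stations if s in ready]
        if not active:
            return ["idle"]
        if len(active) == 1:
            return [active[0]]                        # exactly one contender: success
        mid = len(stations) // 2                      # collision: enable each half in turn
        return ["collision"] + tree_walk(ready, stations[:mid]) + tree_walk(ready, stations[mid:])

    print(tree_walk({2, 5}, list(range(8))))
    # ['collision', 2, 5] : one collision at the root, then each half succeeds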

Page 44:

Figure 4-9

Page 45:

Wavelength Division Multiplexed MAC Protocol

Analogous to FDM; used popularly for optical networks
- Partition the wavelength spectrum into (equal ?) slices, one slice for each node / user
- Can apply TDM in conjunction as well
- Useful for implementation of broadcast topologies

Refer to Figure 4-10: each wavelength slice has two parts - one for control information, and one for data values

Can also implement point-to-point network topologies (how ?)
- Collectively it is called the TWDM (time-wavelength-division multiplexed) MAC protocol

Key design issue: # transmitters and # receivers at each node; frequencies and tunability of the transceivers...

Page 46:

Figure 4-10

Page 47:

WDMA - A Particular WDM MAC

WDMA - a broadcast-based protocol
- Each node is assigned two channels, one for Control and one for Data
- The data channel is slotted
  - One slot for every other node
  - One slot for status information of the host node itself
- The control channel is also slotted

Supports three classes of traffic
- Constant data rate connection-oriented traffic
- Variable data rate connection-oriented traffic
- Datagram traffic, e.g., UDP packets

Each node has two receivers (one fixed frequency, one tunable) and two transmitters (one fixed frequency, one tunable)

Page 48:

Arbitrary Topology Configurations using WDM and TDM

- Consider any graph topology
- Replace every bi-directional edge with two back-to-back simplex edges
- Assign each simplex edge of the graph topology to one slot in the (frequency, time) grid
- Select # time slots just adequate so that # frequencies * # time slots >= # simplex edges
- Work out an example (a sketch follows below)
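A minimal worked example (mine, not from the slides): directed (simplex) edges of a small triangle topology are mapped round-robin onto a (wavelength, time-slot) grid sized so that #frequencies * #slots covers all edges.

    import math

    edges = [("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"), ("A", "C"), ("C", "A")]
    num_freq = 2
    num_slots = math.ceil(len(edges) / num_freq)      # just enough: freq * slots >= edges

    assignment = {e: (i % num_freq, i // num_freq) for i, e in enumerate(edges)}
    for (src, dst), (f, t) in assignment.items():
        print(f"{src}->{dst}: wavelength {f}, time slot {t}")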

Page 49:

Wireless LAN Protocols

Consider a Cellular Network, with cell sizes anywhere from a few meters to several miles
- Frequency reuse is adopted, as a feature of the cellular system
- What could be a typical MAC ? Can CSMA work ?
  - No, since there is no common broadcast channel which everyone eventually listens to

Refer to Figure 4-11
- Design difficulty: how to detect interference at the Receiver ?
- Hidden station problem: two nodes transmit to a common receiver located in the middle; the competing station is too far away to be heard
- Exposed station problem: two adjacent nodes transmitting in opposite directions; a false sense of competition...

Page 50:

figure 4-11

Page 51:

MACA - Multiple Access with Collision Avoidance

Idea: have both the sender and the receiver announce to each other the length of the upcoming transmission
- Consequently, neighbors around both the sender and the receiver will be aware of the transmission activity and its duration (from the # bits in the transmission)
- Figure 4-12

Protocol
1. Sender: send a request-to-send (RTS) signal to the receiver with the # bits in the upcoming data frame
2. Receiver: acknowledge to the sender using a clear-to-send (CTS), if no collision
3. Sender: start transmitting upon receiving the CTS

Where is the catch ? Both the sender's and the receiver's neighbors can hear the message initiation along with its size !!

Page 52:

figure 4-12

Page 53:

MACA and MACAW

Collisions in MACA
- Still possible, but the chances are much reduced
- E.g., if two nodes initiate an RTS simultaneously
- Collision ==> backoff and re-try later (like CSMA); the backoff approach is based upon a binary exponential scheme

MACAW - an enhanced MACA protocol
- ACK signal at the MAC layer, after each data frame
- Include carrier sensing to further reduce collisions (although the carrier can only be sensed locally)
- Random wait and re-try transmission at every message level, instead of at every node level
- Congestion information exchange between pairwise stations, leading to better congestion control and backoff approaches

Page 54:

Protocols for Digital Cellular Radio

Significant usage for mobile telephony
- Each connection lasts much longer than a few msec; hence, channel allocation per Call is better than per Frame (why ?)
- Preferably use digital coding, instead of analog
  - Allows compression of data/speech
  - Allows integrating voice, data, fax, ...
  - Can include error-correcting codes (for reliability) and encryption (for security)

GSM - Global System for Mobile Communication
- Allocated in the 900 MHz band, later re-shuffled to the 1800 MHz range as well (called DCS 1800)
- Employs 124 bi-directional frequency channels within each cell
- Refer to Figure 4-13

Page 55:

figure 4-13

Page 56:

GSM - Details

- Each cell has 124 (base station --> user nodes) frequency channels + 124 (user nodes --> base station) frequency channels
- These are used for Data In/Out and Control signals
- Each frequency channel is 200 kHz wide, allowing a fair bit rate !!
- Each frequency channel is 8-way TDM slotted
- Thus, a total of 992 (= 124 * 8) logical connections are possible
- Not all of the 992 connections are implemented
  - To avoid frequency conflicts with neighboring cells
  - Also, to enhance the bps within each logical connection

Format of the TDM slots
- 148 bits in each slot, 8 slots per frame for time division multiplexing, and 26 frames create a multiframe

Page 57:

Data Format of GSM Frames

Refer to Fig. 4-14. Each TDM slot, of 148 bits, consists of:
- 3 start bits
- 57-bit Information
- 1-bit Voice/Data toggle
- 26-bit synchronization information
- 1-bit Voice/Data toggle
- 57-bit Information
- 3 stop bits

8 TDM slots create a TDM frame
- Slots are separated by a 30-microsec guard time (worth 8.25 bits)
- Guard times accommodate lack of sync, and data overflow

Page 58:

figure 4-14

Page 59:

GSM (contd...)

26 TDM frames constitute a TDM multiframe
- 24 frames are for data use, 1 frame for control, 1 left for future use
- Time spent for a TDM multiframe is 120 millisec
- Effective data rate in each logical connection is 9600 bps (arithmetic check below)
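A back-of-the-envelope check (my arithmetic, using the slot format above) of where the per-connection rate comes from: each slot carries 2 * 57 information bits, and a user gets 24 data frames per 120-ms multiframe.

    info_bits_per_slot = 57 + 57           # two information fields in each 148-bit slot
    data_frames_per_multiframe = 24        # out of 26 frames every 120 ms
    gross_bps = info_bits_per_slot * data_frames_per_multiframe / 0.120
    print(gross_bps)                       # 22800.0 gross; coding overhead leaves ~9600 bps usable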

Other GSM channels
- Apart from the GSM framing structure, it also supports other specific-purpose channels
- Broadcast Control Channel
  - Continuous stream of outputs from the Base Station to all the nodes describing the Base Station id
  - Mobile nodes check the strength of this signal to detect their cellular parenthood

Page 60:

Other GSM Channel (contd...)

Dedicated Control Channel
- For location updating, registration and call setup
- Each base station maintains a data structure with all intra-cell mobile nodes; the control channel exchanges information to keep this data structure updated

Common Control Channel
- Paging Channel
  - Base station uses this for announcing Incoming Calls
  - Mobile nodes listen to this for answering Incoming Calls
- Random Access Channel
  - Slotted ALOHA to set up a call in the Dedicated Control Channel
  - A node can set up a Call using this Channel
- Access Grant Channel
  - Response to the Random Access Channel

Page 61:

GSM vs. CDPD = Cellular Digital Packet Data

GSM
- Circuit switched, not packet switched
- Not friendly to cellular handoffs; each handoff can miss some data
- Increased error rate

CDPD
- A packet-switched, digital datagram service
- Using 30 kHz channels, it can offer 19.2 Kbps links (excluding protocol overhead ==> 9600 bps data channels)

CDPD System Architecture
- Three kinds of nodes: mobile end systems, base stations, and base interface stations (which connect between base stations and to the Internet)
- Refer to Figure 4-15

Page 62:

figure 4-15

Page 63:

CDPD Details

Uses three types of interfaces
- E-Interface: connects a CDPD network to the outside-world networks, e.g., the Internet
- I-Interface: connects between multiple CDPD areas (basically, between multiple cells)
- A-Interface: between base station and mobile nodes
  - One Downlink part, from Base Station to Mobile Nodes
    - Not difficult to manage, since it has only one user (the Base Station)
  - One Uplink channel, shared by all the mobile end users
    - Digital Sense Multiple Access protocol adopted by the mobile end nodes
    - Similar to slotted, p-persistent CSMA
    - Data is packetized, the time axis is slotted, and re-entry attempts are spread out to non-consecutive time slots
    - Combines the benefits of Slotted ALOHA and p-Persistent CSMA

Page 64:

Collision in CDPD

- Possible, when two or more mobile end nodes start on a time slot together
- Mobile hosts may not immediately detect a collision (sensing delay due to RF propagation)
- Microblock transmission is faster than the rate of detection of a failure
  - Correct/incorrect reception of microblock n is not reported until microblock n+2
  - In between, the mobile node just goes ahead and continues transmission
  - If a failure is detected (later), it stops - otherwise transmission continues
- Voice data has higher priority; data transmission is next

Page 65:

Code Division Multiple Access

CDMA - a completely new line of MAC approach
- MAC approaches so far: TDM, FDM, WDMA, slotted ALOHA, ...
- CDMA - each user transmits across the entire spectrum
- However, nobody collides with anyone else
- Each node has a unique code, called a Chip sequence, using which it transmits
- The uniqueness of the Chips ensures no eventual collision

Analogy - multiple people speaking in a room
- TDM: everyone takes turns speaking
- FDM: separate clusters of people, each speaking within its cluster, yet not being overheard by other clusters
- CDMA: everybody speaks loud and clear to everybody else, but using different languages

Page 66:

CDMA - Summary

Each node has a unique sequence, called a Chip sequence
- Usually it's a 64- or 128-bit pattern, but we demonstrate using an 8-bit Chip
- Example: A's chip = (0, 0, 0, 1, 1, 0, 1, 1)
- If A wants to transmit a "1", it will send the above chip
- If A wants to transmit a "0", then it will send the 1's complement of the chip

Another node, B, will have a different Chip sequence
- Orthogonal to every other node's Chip
- Normalized inner product of any pair of Chip sequences = 0
- Thus, A's Chip <normalized inner product> B's Chip = 0
- By definition, A's Chip <norm. inner prod.> Complement(B's Chip) = 0

The bit sequences within the Chips are transmitted across the entire spread spectrum

Page 67:

CDMA - Bandwidth Usage

Consider 100 nodes, and a 1 MHz spectrum carrying 1 Mbps
- FDM allocates 10 kHz per station; each station has a 10 Kbps data rate
- CDMA, with m-bit Chips, allocates the entire 1 MHz to each station
  - Thus, each station's data rate = 1000/m Kbps
  - When m is smaller than 100, CDMA is a better bandwidth utilization

Where is the catch ?
- CDMA expects to treat the RF medium in an analog fashion
- Voltages (RF transmission powers) are expected to be additive in value
- It can get more noisy, and is likely to be more erroneous

Page 68:

CDMA - Example (refer Figure 4-16)

Four nodes, A, B, C, and D, each with a unique 8-bit Chip
- 0-bits in the Chip sequence are treated as -1 from a voltage or transmission-power point of view
- Two or more nodes transmitting together simply add their voltages (addition of negative values indicates voltage or RF power reduction --> this is a major source of error in analog handling)
- The design of the Chip sequences ensures that
  - A <norm. inner prod.> B = 0
  - A <norm. inner prod.> (complement of B) = 0

Suppose A and C transmit a 1, while B transmits a 0
- T = (A + not(B) + C) is transmitted. Everyone receives this.
- Receiver node D, trying to listen to C, computes C <norm. inner prod.> T = C.A + C.(not(B)) + C.C = 1, where "1" is what C transmitted (a sketch follows below)
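A compact sketch of this encode/decode arithmetic (chip A below is the slide's 00011011 written as +1/-1 values; B and C are my own illustrative orthogonal companions, not necessarily those of Figure 4-16).

    A = [-1, -1, -1, +1, +1, -1, +1, +1]
    B = [-1, -1, +1, -1, +1, +1, +1, -1]
    C = [-1, +1, -1, +1, +1, +1, -1, -1]

    def encode(chip, bit):                 # send the chip for a 1, its complement for a 0
        return chip if bit == 1 else [-v for v in chip]

    def decode(chip, signal):              # normalized inner product with the received sum
        return sum(c * s for c, s in zip(chip, signal)) / len(chip)

    # A and C send a 1, B sends a 0; the channel simply adds the analog signals.
    T = [a + b + c for a, b, c in zip(encode(A, 1), encode(B, 0), encode(C, 1))]
    print(decode(C, T))   #  1.0 -> the bit C sent
    print(decode(B, T))   # -1.0 -> the bit B sent (a 0)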

Page 69:

figure 4-16

Page 70:

CDMA Example (contd...)

Suppose C transmitted a 0 in the previous example
- T = (A + not(B) + not(C))
- The receiving node D will compute C . T = C.A + C.(not(B)) + C.(not(C)) = 0 + 0 + (-1) = -1
- A 0-bit is assumed to have a value = -1

Efficiency of CDMA
- Theoretically, can be arbitrarily large
- In practice, the noise level, analog value handling and # bits/Chip pose limitations
- Design rule: if you want to enhance b/w, and can live with some noise - go for CDMA (Korean Telecom)
- Question: why the name "Chip" ?

Page 71:

Theory to Practice

CSMA/CD MAC protocol with various degrees of persistency
- IEEE 802.3 is a specific implementation
- Random delay, if a collision occurs, is based on a Binary Exponential Backoff algorithm
- Average case performance: moderate
- However, no worst case delay guarantee for individual stations

Token Bus and Token Ring protocols
- Worst case bounded delay; may be useful for Real-Time applications
- IEEE 802.4 and 802.5 LAN standards

LAN to MAN and fairness issues
- Distributed Queue Dual Bus (DQDB), IEEE 802.6

IEEE 802.2: Logical Link Control

Page 72:

Ethernet 802.3

Essentially, it is a 1-persistent CSMA/CD protocol
1. Look for an idle channel
2. If not found, i.e., channel = busy, the station waits in a greedy mode
3. If channel = idle, the station immediately attempts to transmit data
4. If no collision, then successful transmission
5. If collision, stop the transmission immediately and go into a random delay wait mode

Requires a broadcast-mode cable topology
- Linear, Backbone, Tree, Segments with Repeaters - Figure 4-19
- Worst case delay in broadcast transmission affects performance (efficiency, for example)

Page 73:

figure 4-19

Page 74:

Binary Exponential Backoff Algorithm for Random Delay Wait

Motivation: random delay to ensure that collisions will eventually be resolved
- Minimize the probability that two (or more) colliding stations will keep colliding again and again
- Once done so, minimize the absolute ranges of the delay periods during these random wait cycles

If few stations compete, the range of random delays should be smaller
- The chance of consecutive collisions is less, hence minimize the random delay period

If collisions occur in consecutive attempts, then the range of random delays should be increased (perhaps rapidly) to quickly resolve the colliding stations
- Here, two or more stations are repeatedly colliding. Hence, the most immediate priority is to resolve the conflict between them.

Page 75:

Binary Exponential Backoff (contd...)

- After the first collision, the random wait period (in slot times) is either 0 (i.e., re-try next slot) or 1
- After the second consecutive collision, the random wait period is in the range {0, 1, 2, 3}
- After the i-th consecutive collision, i <= 10, the random wait period is in the range {0, 1, 2, ..., 2^i - 1}
- For 11 <= i <= 15, the random wait period range is frozen at {0, 1, ..., 1023}
- For i = 16, an abnormal transmission event interrupt is sent to the message source

Features (a sketch of the wait selection follows)
- For fewer stations and fewer collisions ==> the average random wait is small
- For many stations and a lot of collisions ==> the collision gets resolved quickly
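A minimal sketch (not from the slides) of picking the wait after the i-th consecutive collision, following the ranges listed above.

    import random

    def backoff_slots(i):
        """Slot times to wait after the i-th consecutive collision (802.3-style)."""
        if i >= 16:
            raise RuntimeError("too many collisions - report an error to the source")
        k = min(i, 10)                     # the range stops growing after 10 collisions
        return random.randint(0, 2 ** k - 1)

    print([2 ** min(i, 10) - 1 for i in range(1, 16)])   # max wait per attempt: 1, 3, 7, ..., 1023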

Page 76:

Ethernet Addressing

Frame Format
- Transmission in frame quanta (viz., for the collision detection advantage)
- Preamble: 7 Bytes
  - Each byte = 10101010 => a 10 MHz square wave for 5.6 microsec
  - Used for clock synchronization
- Start Delimiter: 1 Byte (10101011)
- Destination (and Source) Address: 2 or 6 Bytes
- Data Length: 2 Bytes; Actual Data: 0 to 1500 Bytes
- Pad: 0 to 46 Bytes (used to ensure >= 64 bytes after dest. addr.)
- Checksum: 4 Bytes (32-bit CRC + 8-bit end-delimiter)

Field: Preamble | Start | Dest. Addr. | Source Addr. | Length | Data   | Pad  | Checksum
Bytes: 7        | 1     | 2 or 6      | 2 or 6       | 2      | 0-1500 | 0-46 | 4

Page 77:

802.3 Frame Format

Insert Figure 4-21 here

Page 78:

Ethernet Addressing (contd.)

Data length: 0 to 1500 Bytes

Effects of short data frames
- Too small a data length can confuse the receiver: is it a collided frame, or real (short) data ?
- Also, two frames may start at distant ends of the cable
- Answer: each frame must be at least 64 bytes after the destination address
- If the actual data size is small, then create a Pad (up to 46 bytes)

Why 64 Bytes ? (arithmetic check below)
- 10-Mbps LAN, 2.5 km cable (specs), and a 2*tau collision window
- Minimum frame width = 51.2 microsec ==> 64 Bytes length
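A quick arithmetic check (mine, not from the slides, assuming ~2e8 m/s signal propagation) that 51.2 microseconds at 10 Mbps corresponds to 64 bytes, and that such a frame outlasts a cable round trip.

    bit_time = 1 / 10e6                  # seconds per bit at 10 Mbps
    min_frame_bits = 64 * 8
    print(min_frame_bits * bit_time)     # 5.12e-05 s = 51.2 microsec

    # one-way 2.5 km at ~2e8 m/s, doubled for the round trip; repeaters and analog
    # devices add more, and the 51.2-microsec minimum frame still covers the window
    print(2 * 2500 / 2e8)                # 2.5e-05 s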

Page 79:

Broadcast and Multicast Addresses

Destination Address
- Msb: 1 for "group" (multicast or broadcast), 0 for "unicast"
- Address = all 1's: indication of Broadcast
- How does multicast work ? Group address ids are programmed to be listened for at individual nodes
- 2nd msb: Local vs. Global addresses; useful for address filtering and flooding control

Uniqueness of Node Addresses
- Total 46-bit addressing (6 bytes - 2 msb)
- Approx. 7 * 10^13 addresses
- Can provide a unique address to every node !!
- Manufacturers procure bulk address ranges

Page 80:

Broadcast, Multicast, and Unicast

Each transmitted frame is listened to by every adapter
- Adapters act as filters
- Frames that are ok-ed by the filter are sent to the backend host computer

Filter Modes
- Listen to self-address only: Unicast
- Promiscuous: listen to all addresses (useful for gateway design)
- Listen to the all-1's address: Broadcast
- Listen to a specific group-ID: Multicast

Page 81:

ARP vs. RARP

Issue: Upper Layer Address vs. Ethernet Address; forward and reverse mapping

Address Resolution Protocol (ARP)
- 32-bit IP address => 48-bit Ethernet address
- Naive approach: configuration files (IP address vs. Ethernet address)
- ARP algorithm: broadcast the IP address and seek a response
- ARP records can be cached, optimized for locality

Reverse Address Resolution Protocol (RARP)
- Host machine (at boot time) transmits its Ethernet address and seeks its IP address (from a RARP server)

Page 82:

Ethernet Connectors

10Base 5 (Thick Ethernet) Vampire Tap

10Base 2 (Thin Ethernet) Flexible Connector

10Base T (Central Hub) Nodes connect twisted pair cable to a switch

10Base F Version for optical fiber

Page 83:

Worst Case Collision Detection

Insert Fig 4-22 here

Page 84:

Performance of 802.3

Simplistic analysis
- Assume a fixed number, k, of stations always with data to transmit
- p = probability with which each station transmits during a contention slot
- Then the probability that one of those k stations successfully acquires the channel is A = k * p * (1-p)^{k-1}
  - k terms, one for each station being the channel winner
  - (k-1) stations did not transmit, while the winner station did transmit ==> probability = p * (1-p)^{k-1}
- Probability that the contention interval is exactly j slots = A * (1-A)^{j-1}
  - No success in the first (j-1) slots ==> (1-A)^{j-1}
  - Success at the j-th slot ==> A * (1-A)^{j-1}

Page 85:

Performance of 802.3 (contd...)

Mean number of slots per contention = sum (from j=1 to infinity) [ j * A * (1-A)^{j-1} ] = 1/A

Each slot has a duration of 2*tau, where tau is the worst case propagation (broadcast) delay
- Hence, mean contention interval = 2*tau / A

If the average frame takes P time units to transmit, then the total time taken to transmit = P + mean contention interval = P + 2*tau/A
- Hence, Channel efficiency = P / (P + 2*tau/A) (a numerical sketch follows)
- Refer to Figure 4-23 for channel efficiency as a function of the # stations trying to send data
- Large P ==> higher efficiency, but increased frame fragmentation
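A small sketch (not from the slides) evaluating the efficiency expression above with the optimal per-slot transmit probability p = 1/k; the frame time and slot time (2*tau) are arbitrary illustrative values.

    def efficiency(k, frame_time, slot_time):
        p = 1.0 / k                                   # optimal per-slot transmit probability
        A = k * p * (1 - p) ** (k - 1)                # P(some station wins a contention slot)
        return frame_time / (frame_time + slot_time / A)

    for k in (2, 8, 32, 128):
        print(k, round(efficiency(k, frame_time=1000e-6, slot_time=51.2e-6), 3))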

Page 86:

figure 4-23

Page 87:

Switched Ethernet

Switched Ethernet
- Intelligent processing allows packet filtering
- Useful for traffic reduction and containment
- Example: multicast filtering, broadcast filtering, …
- Other usage: security, workgroup establishment

Design Paradox
- Ethernet had not initially been meant to be point-to-point
- However, design needs led it to becoming point-to-point
- It's still called Ethernet, and behaves like Ethernet - for compliance, and the ability to (still !!) use existing Ethernet adapter cards
- Sometimes, it is an expensive mistake to carry one !!

Page 88:

Full Duplex Ethernet

Design Rationale
- Ethernet does not scale well: # connect points, also bandwidth...
- Solution: several 802.3 LANs connected via a faster switch
- Each 802.3 LAN is in reality a plug-in card at the switch ==> Full Duplex Switched Ethernet

FDSE Architecture
- Not a shared-bus LAN; instead, a point-to-point protocol around a fast switch
- The switch has several (<= 32) "plug-in cards"
- Each plug-in card has a few (<= 8) connectors
- Each connector is a 10BaseT link to a host computer

Page 89:

FDSE Block Diagram

Insert Fig 4-24 here

Page 90:

FDSE Structure

[Diagram: an FDSE switch with directly attached hosts, a link to other FDSE switches, and an 802.3 LAN of hosts attached through a Hub]

Page 91:

FDSE Design

Identical frame format, addressing, ....

On-Card LAN
- If a frame is addressed to another node on the same card, then the frame is locally copied
- Else, it is transmitted over the high-speed backbone bus to another on-card LAN

Input Buffering
- Collision resolution within an on-card LAN (btw, collisions never occur across multiple cards)
- Approach 1: adopt CSMA/CD within each card
- Approach 2: input packet buffering + scheduling
  - Whoa !! Feasibility for packet prioritization, periodic traffic support...

Page 92:

Packet Priority in FDSE LAN

802.3 has no support for priority
- 802.4 and 802.5 evolved precisely for these reasons

However, FDSE is a much-digressed version of the initial "Ethernet"
- It is point-to-point, instead of shared media
- It is input buffered and scheduled, instead of collide and re-try

Hence, packet priority establishment is feasible in FDSE
- Priority implementation in the scheduling of the input buffer
- Still, the Ethernet frame format does not accommodate priority values
- One way to accommodate priority is as part of the data field
- Priority support from upper OSI layers (e.g., TCP) is always feasible

Page 93:

Periodic Traffic Support in FDSE LAN

Not directly supported, but can always be implemented from the TCP or IPX layer
- Admission Control Stage: at the TCP or upper application layer
- Dynamic Scheduling Stage: at the FDSE input buffer scheduling algorithm
- An upper-OSI-layer Connection-Oriented Virtual Circuit can solve this problem

Aperiodic RT Traffic Support
- Use placeholder (i.e., stub) periodic traffic

Page 94:

FDSE LAN of LANs

FDSE as a switch easily lends itself to hierarchical construction as a LAN or MAN / WAN (as a LAN of LANs)

Page 95:

FDSE Flow Control

Prevent over-bandwidth situations, and recover from congestion and hot spots
Objective: forward packets from input to output ports with no packet loss and minimum (ideally zero) latency
TCP or window-based protocols: several packets may be transmitted before a "destination port overloaded" message can be acknowledged back to the sender
Solution: a modestly sized buffer, so that the time to fill the buffer (due to destination-port jamming) is enough to inform the sender node
Disadvantage: a large buffer ==> large (per-packet) latency
Another solution: reduce the window size of the upper-layer protocol (e.g., TCP or IPX); a rough buffer-sizing sketch follows
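A back-of-envelope sketch of the buffer-sizing trade-off: the output-port buffer must absorb roughly one sender window's worth of frames before the "slow down" indication takes effect. All parameters (window size, frame size, RTT, port rate) are illustrative assumptions.

```python
def min_buffer_bytes(window_frames, frame_bytes, rtt_s, rate_bps):
    """Rough sizing sketch: the port buffer must hold whatever the sender can
    emit before a reverse notification throttles it -- roughly one window,
    but never less than a round-trip's worth of data at line rate."""
    window_bytes = window_frames * frame_bytes
    rtt_bytes = int(rate_bps / 8 * rtt_s)
    return max(window_bytes, rtt_bytes)

# 8-frame window of 1500-byte frames, 0.5 ms round trip, 10 Mbps port
print(min_buffer_bytes(8, 1500, 0.0005, 10_000_000))   # 12000 bytes
```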


Learn Table (Address Mapping)

Learn table: a table of information associating 48-bit Ethernet addresses with ports

New frame arrival: look up the output port from the destination's Ethernet address
If the port is unknown, broadcast (an unfortunate but unavoidable situation) - an "are you out there, please respond" query
The learn table is updated by current lookup information:
Recent lookup failures, and their eventual resolution (by broadcast)
Old entries are flushed in a cache-page-replacement manner: LRU or FIFO (an LRU sketch follows)
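A minimal sketch of such a learn table with LRU flushing, using an ordered dictionary as the cache. The capacity and method names are assumptions for illustration, not part of any switch's actual firmware.

```python
from collections import OrderedDict

class LearnTable:
    """Learn-table sketch: associates 48-bit MAC addresses with output ports
    and flushes old entries in LRU order, like a cache-page replacement."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.table = OrderedDict()          # mac -> port, oldest entry first

    def learn(self, src_mac, in_port):
        # Every arriving frame teaches the switch where its source lives.
        self.table[src_mac] = in_port
        self.table.move_to_end(src_mac)
        if len(self.table) > self.capacity:
            self.table.popitem(last=False)  # evict the least recently used entry

    def lookup(self, dst_mac):
        # Returns the port, or None -> caller must broadcast ("are you out there?").
        port = self.table.get(dst_mac)
        if port is not None:
            self.table.move_to_end(dst_mac)
        return port
```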


FDSE and Fast Ethernet Connectors

100Base-FX: specs for 100 Mbps Fast Ethernet over fiber; similar to the FDDI specs; signals are unscrambled, 4B5B encoded
100Base-T4: as above, except over category 3 or better twisted-pair cabling; full duplex is not supported under T4
100Base-TX: as above, except over category 5 twisted-pair cabling; similar to the CDDI specs; signals are scrambled, 4B5B encoded


Limitations of 802.3

No worst-case delay bound for any given station
No notion of priorities for any of the nodes/stations
Focuses on overall channel efficiency, not on individual user-station needs
Certainly not good for time-critical traffic
IEEE 802.4 evolved from 802.3:
Logically a Token Bus structure
Each of the N nodes takes a turn sending its frames
If each node takes T time units, then no node ever waits more than N*T time units
Figure 4-25 shows an example Token Bus


figure 4-25


IEEE 802.4: Token Bus

Logical linear connection: each node has a predecessor and a successor node
The Token arrives from the predecessor node and is passed on to the successor node after use by the current node
The highest-numbered station sends the first frame
If a node has no data to send, it passes the Token immediately
Logically, the nodes are organized as a ring (Fig. 4-25)
Collision avoidance by mutual exclusion in Token ownership
Physically, the nodes may be in any connection pattern: tree, bus, ...
Essentially, a broadcast transmission medium is needed
The logical ordering of the stations is independent of their physical locations


Priority in Token Bus

Worst-case response time for each node is < N*T time units, for N nodes and T time units per node (i.e., per Token hold)
This prevents unbounded response delays, yet may not satisfy hard real-time guarantees
How are priorities assigned to the traffic within each node?
Token Bus defines four priority classes: 0, 2, 4 and 6
Priority 6 is the highest, priority 0 the lowest
When a node acquires the Token, say for T time units:
First, it transmits Priority 6 messages
After the Priority 6 set is exhausted, if any time remains ==> transmit Priority 4 messages
After all Priority 4 messages are done, if time still remains, it is used for Priority 2 messages, and so on (sketched below)
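A small sketch of this class-by-class draining within the token-hold time. Time is in abstract units and the fixed per-frame cost is an assumption made for clarity; real frames would of course have varying lengths.

```python
def transmit_on_token(queues, hold_time, frame_time=1):
    """Token Bus style draining sketch: spend the token-hold time on class 6
    first, then 4, then 2, then 0.  'queues' maps class -> list of frames;
    frame_time is an assumed fixed cost per frame (abstract time units)."""
    sent, remaining = [], hold_time
    for cls in (6, 4, 2, 0):                       # highest class first
        q = queues.get(cls, [])
        while q and remaining >= frame_time:
            sent.append((cls, q.pop(0)))
            remaining -= frame_time
    return sent                                    # lower classes may be left waiting

queues = {6: ["v1", "v2"], 4: ["f1"], 0: ["bulk1", "bulk2", "bulk3"]}
print(transmit_on_token(queues, hold_time=4))
# [(6, 'v1'), (6, 'v2'), (4, 'f1'), (0, 'bulk1')]
```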


Synchronous Traffic in Token Bus

The bandwidth for at least the Priority 6 messages is guaranteed: up to T (as much as desired) time units of transmission per every N*T time units
Synchronous traffic, e.g., live video, multimedia, automated factories and production environments, is supported
Limitations:
Only certain ranges of deadline can be honored
No notion of periodic traffic support
Fault tolerance: what if a node/station goes down while holding the Token?
A max-time parameter for claiming tokens


Token Ring: IEEE 802.5

Token Bus:
Requires a broadcast channel
Large delay
Analog characteristics
Enjoys the freedom of logical predecessor/successor assignment
Token Ring:
A set of point-to-point connections, most typically digital
Suitable for most physical media, e.g., twisted pair, coax, fiber
Predecessor/successor defined by the physical topology, in contrast to 802.4, where it was a logical relationship
Refer to Fig. 4-28


figure 4-28


Token Ring Operations

The medium is no longer a broadcast bus
Each point-to-point element of the Ring must now:
Shift data bits in and out quickly, for the speedy operation of the Ring
Serve both the traffic originating at / destined to the node and the traffic merely passing through it
Circulating Token:
A 3-byte pattern, called the Token, circulates around the Ring
The endless circulation ends whenever one (or more) node has data to send
The transmitting node seizes the Token and changes a single, particular bit in it
The interpretation of the 3 bytes immediately changes from Token to Data
The station then starts to pour its bit-stream onto the Ring
The data (i.e., message frame) can be much longer than 3 bytes


Token Ring Operations (contd...)

A node can be in either of two modes
Listen: copy input bits to the output bits, with a 1-bit delay
Transmit:
Break the connection from the input to the output
Be able to do so, i.e., switch out of "Listen" mode, within a 1-bit delay
Insert the node's own data into the output bit-stream
Remove the (previously) transmitted data bits from the input bit-stream
The entire frame may never need to appear on the Ring at once, hence no limit on the frame length, unless the maximum Token-hold time is exceeded
After a node has finished transmitting all of the frame's bits:
It must regenerate the Token's 3-byte pattern
It switches back to "Listen" mode immediately after the last bit of the Token has been generated and inserted into the Ring (see the interface sketch below)
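A tiny sketch of the station interface in these two modes: a 1-bit delayed copy while listening, and a broken in-to-out path while transmitting. This is a highly simplified model (one bit per call, no token recognition) meant only to make the 1-bit-delay idea concrete.

```python
class RingInterface:
    """Token Ring station sketch: in 'listen' mode the input bit is copied to
    the output with a 1-bit delay; in 'transmit' mode the in->out path is
    broken, the station's own bits go out, and its returning bits are drained."""
    def __init__(self):
        self.mode = "listen"
        self.delay_bit = 0          # the single bit of station delay
        self.tx_bits = []           # bits the station still wants to send

    def clock(self, in_bit):
        """Advance one bit time: take the incoming bit, return the outgoing bit."""
        if self.mode == "listen":
            out, self.delay_bit = self.delay_bit, in_bit   # 1-bit delayed copy
            return out
        # Transmit mode: the incoming bit (our own frame coming back) is dropped.
        if self.tx_bits:
            return self.tx_bits.pop(0)
        self.mode = "listen"        # frame finished; fall back to listening
        return self.delay_bit
```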


Minimum Ring Delay

Consider a worst case: none of the nodes is transmitting
The 3-byte Token pattern must keep circulating
The Ring delay must be long enough to accommodate the 3 bytes
Ring delay includes:
The point-to-point transmission delay of each of the links
The 1-bit copy-and-retransmit delay introduced at each station
Ring length vs. ring data rate:
For an R Mbps data rate, one bit is transmitted every 1/R microseconds
Signal propagation is typically 200 meters per microsecond
Each bit therefore occupies about 200/R meters on the Ring
A ring with 8 nodes: eight 1-bit delays, one per node
The remaining 2 bytes of delay (NB: Token = 3 bytes) must come from the cable itself
Hence the Ring must be at least 3200/R meters long (= 3.2 km for 1 Mbps); the arithmetic is repeated below
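A worked version of the same arithmetic, parameterized so the effect of the data rate is easy to see. The default propagation speed and token size follow the slide; everything else is just the slide's numbers in code form.

```python
def min_ring_length_m(rate_mbps, stations, token_bits=24, prop_m_per_us=200):
    """Minimum-ring-delay sketch: the ring must hold the whole 3-byte Token.
    Each station contributes 1 bit of delay; the remaining token bits must fit
    on the cable, where one bit occupies (prop speed / bit rate) meters."""
    bits_on_cable = max(token_bits - stations, 0)
    meters_per_bit = prop_m_per_us / rate_mbps     # 200/R meters per bit
    return bits_on_cable * meters_per_bit

print(min_ring_length_m(rate_mbps=1, stations=8))    # 3200.0 m, i.e., 3.2 km
print(min_ring_length_m(rate_mbps=16, stations=8))   # 200.0 m at 16 Mbps
```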


Ring Delay and Contention Resolution

Effect of node withdrawal from the Ring:
What if one or more nodes withdraw from the Ring, due to failure or unwillingness to participate for the time being?
The "Listen" behavior must then be honored by passive devices:
Retain the "copy input to output" feature
Maintain the 1-bit delay
Contention resolution:
Mutually exclusive ownership of the Token by the nodes
Once a node has started to transmit, i.e., has modified the Token and is in the middle of data transmission -- no other station can acquire the Token
A higher-priority node can make a reservation in the special Reservation field of the frame being transmitted
But there is no interruption until the Token-holding time expires


Performance of Token Ring

Light traffic:
Idle circulation of the Token
Occasional seizure by a transmitting node, transmission of a frame (of arbitrary size), and regeneration/insertion of the Token
Heavy traffic load:
Nodes wait to transmit, each with its own input queue
The node currently transmitting either finishes its frame or times out; next, the immediately following waiting node (in priority order, then round-robin Ring order) acquires the Token
Can lead to nearly 100% channel efficiency under heavy traffic load
Some implementation notions:
A wire center, to better tolerate broken cables
A centralized monitor station, elected from among the nodes (i.e., the election is decentralized)


Priority in Token Ring

Supports multiple priority frames
The second byte of the 3-byte Token contains a "priority" field
A node with priority-n data to transmit:
Must wait for and capture a Token whose priority is <= n
May make a reservation on the current transmission, but only if no other higher-priority traffic has already made a reservation
When the current frame transmission is over, the new Token generated gets priority = the highest-priority reservation waiting
Fairness in priority management:
The Token's priority level would otherwise keep rising until some node explicitly lowers it
A node that raises the priority is responsible for lowering it again after it is done with its transmission (see the sketch below)
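A minimal sketch of the reservation and priority-restore rules described above. The field names and the stack used to remember old priorities are simplifications for illustration; the real 802.5 token/frame formats carry more state than this.

```python
class TokenState:
    """802.5-style priority/reservation sketch (fields simplified)."""
    def __init__(self):
        self.priority = 0       # priority of the circulating token
        self.reservation = 0    # highest reservation seen on the current frame

def try_reserve(token, my_priority):
    # A station may only raise the reservation, never lower someone else's.
    if my_priority > token.reservation:
        token.reservation = my_priority
        return True
    return False

def regenerate_token(token, stack):
    """When the transmitting station releases the token, the new token takes
    the highest reservation; the old priority is remembered so the raising
    station can lower it again later (the fairness rule in the slide)."""
    if token.reservation > token.priority:
        stack.append(token.priority)       # remember what to restore
        token.priority = token.reservation
    token.reservation = 0

tok, stack = TokenState(), []
try_reserve(tok, 4)
regenerate_token(tok, stack)
print(tok.priority, stack)    # 4 [0] -- only priority >= 4 traffic may seize it now
```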


Comparison of 802.3, 802.4 and 802.5

Expect similar performance overall: they all use similar LAN technologies
802.3 - Advantages:
Popular and simple; passive cable, no modem
Nodes can be added or deleted without any rework (i.e., scalable)
Little delay at low load
802.3 - Disadvantages:
Lots of analog machinery, including analog carrier sensing
Cable length restricted due to sensing delays (affects channel efficiency)
Minimum frame-size restriction, leading to padding and wastage
Non-deterministic, not for RT applications, no notion of priorities
Many collisions at high traffic load ==> decreasing utilization


Comparison (contd...)

802.4 - Advantages:
Reliable; uses cable-TV equipment
More deterministic than 802.3, yet may not suit tight-deadline RT applications
Can support priorities and handle fixed-bandwidth synchronous traffic
At high traffic load it behaves close to TDM ==> good throughput and efficiency
Short frames are possible
802.4 - Disadvantages:
Still analog devices, including amplifiers and modems
Substantial delay at low traffic load
Complex protocol


Comparisons (contd...)

802.5 - Advantages:
Fully digital, with flexible/cheap connectors (e.g., twisted pair)
Handles priorities, despite the fairness issue
Both short and arbitrarily long frames are possible, limited only by the max Token-holding time
Good throughput and efficiency at high traffic load (like 802.4, but unlike 802.3)
802.5 - Disadvantages:
Use of a (floating) centralized monitor
Relatively high delay at low load, due to waiting for the Token
Which one is best? It depends on your traffic model


DQDB: IEEE 802.6

Distributed Queue Dual Bus - an evolution from LAN to MAN
Two-bus structure: leftward (Bus B) and rightward (Bus A) directions
Parallel, unidirectional buses spanning the metropolitan area
Each bus has a head-end, which generates a steady stream of 53-byte cells
ATM cells? AAL compatibility...
Cells (empty, or carrying data) travel from the head-end to the tail-end
Cells fall off the bus past the tail-end
Cell format: 44-byte payload
Protocol bits: Busy (is the cell occupied or not) and Request (a third-party station with data to transmit can set this bit)


Transmission Sequence in DQDB

Station P has a data cell to send to station Q:
If Q is to the right of P, use Bus A
If Q is to the left of P, use Bus B
What a naive sequence would do:
Station P greedily grabs the first empty cell
Since cells originate at the head-end, stations near the head-end get preference in receiving empty cells
Stations far from the head-end can be starved
A key design objective of DQDB:
Implement FIFO (i.e., fairness?) among the transmitting stations
Issue: how to implement a FIFO ordering when the transmission requests are really generated in a distributed manner


Distributed FIFO Ordering

A node with data to transmit does not immediately try to seize an empty cell and proceed with the transmission
Instead, the node checks how many downstream nodes, if any, made prior transmission requests
Downstream with respect to the intended transmission direction
Why downstream? Because the downstream nodes are the likely victims of "unfairness"; note that a node can never be unfair to an upstream node
If there were k prior downstream transmission requests, then the node waits for (i.e., skips) k empty cells
Next, the node transmits its own cell (assume, for now, that the node has only one cell to transmit)
Finally, the node waits for (i.e., skips) m additional cells, where m fresh transmission requests arrived while it was waiting out the k cell skips (third bullet above)


Distributed FIFO: Implementation

How does a station S know how many downstream nodes made prior transmission requests?
Have all prior transmission requests explicitly notify every upstream node (via the Request bit)
Keep a counter, the Request Counter (RC), at each node to total up such prior requests (= parameter k in the previous slide)
When node S's data becomes ready to transmit, it begins to let k empty cells pass by downstream
This step ensures that all the prior-requesting downstream nodes are served before S is served
Will it??? (Hint: transient effects)
During this wait (for k empty cells to pass by), node S also counts how many additional requests arrive (= parameter m in the previous slide); node S moves the RC = k value into an alternate countdown counter, CD
Transmission schedule for S: skip k cells, transmit its own cell, skip m cells (a counter sketch follows)
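A sketch of the RC/CD counter behavior for one bus direction and one queued cell per station. The method names are invented for illustration; a real DQDB station keeps this state per bus and per priority level.

```python
class DQDBStation:
    """Distributed-queue sketch for one bus direction.

    RC counts requests from downstream stations seen while this station is
    idle; when its own cell becomes ready, RC is moved into CD (cells it must
    let pass first) and RC starts counting requests that arrive after its own."""
    def __init__(self):
        self.rc = 0          # request counter
        self.cd = None       # countdown counter; None = nothing queued

    def see_request_bit(self):
        self.rc += 1                     # a downstream station wants to send

    def data_ready(self):
        self.cd, self.rc = self.rc, 0    # queue behind the earlier requests

    def see_empty_cell(self):
        """Returns True if this station may fill the passing empty cell."""
        if self.cd is None:
            return False                 # nothing to send; let the cell go by
        if self.cd > 0:
            self.cd -= 1                 # let an earlier-requesting station use it
            return False
        self.cd = None                   # our turn: transmit in this cell
        return True

s = DQDBStation()
s.see_request_bit(); s.see_request_bit()        # two downstream requests pending
s.data_ready()
print([s.see_empty_cell() for _ in range(3)])   # [False, False, True]
```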


FIFO Transmission - Example

Refer to Figure 4-32
Essentially, stations operate in a "polite" mode: instead of seizing a cell as soon as possible, they let transmission requests from the potential victims go first
Node D made an earlier request, which was notified to node B
Later, when node B has a cell to send, B waits for (i.e., skips) an empty cell to let node D finish
Beyond this, node B is free to use an empty cell, but it keeps track of late-arriving requests from downstream and plans a subsequent wait period as well
Questions:
Is this truly implementing a FIFO?
How would you extend this scheme for multi-cell data from a particular node?


figure 4-32


LAN Bridges

Bridges are used to connect multiple LANs
Rationale for bridge design:
A set of previously designed LANs may need to be connected at a later date, due to evolving network infrastructure...
For geographically spread organizations, localized LANs with bridges connecting them can be a lot cheaper than a single large LAN running across the entire organization
For load sharing, it may be wise to split a LAN into multiple LANs and interconnect them using bridges
A single LAN may not cover a long-distance networking need -- multiple LANs with bridges connecting them can be a wise solution
Multiple LANs (and the bridges interconnecting them) can be more reliable than a single (large!!) LAN
Bridges can conduct information filtering/screening ==> more secure


LAN Bridges (Example)

Insert Fig. 4-38 here


LAN Bridges (Example)

Insert Fig. 4-40 here


FDDI - High-Speed LAN

802.3/4/5/6 LAN/MANs are meant for low speed and short distances
For higher speed and longer reach, fiber is recommended:
Higher bandwidth, thin and lightweight, no electromagnetic interference, and enhanced security
FDDI (Fiber Distributed Data Interface) is one such fiber-based LAN
FDDI:
A token ring LAN operating at 100 Mbps
Commonly used to interconnect LANs (refer to Fig. 4-44)
FDDI-II is an updated version that can handle synchronous traffic (with reservation, i.e., circuit switching)
NB: think of it as a blend of 802.5 with the synchronous-traffic feature of 802.4...


figure 4-44


FDDI - Physical Layer

Multimode fiber
Single-mode fiber, i.e., thinner and suited to longer distances, is not necessary here; single-mode fiber is a lot more expensive...
No lasers; ordinary-spectrum light is used (LEDs)
Safer for the environment, in case the fiber is cut open and viewed
Two fiber rings (Ring I and Ring II), running parallel to each other
Bit error rate in the < 10^(-9) range
Two classes of stations, A and B:
Type A stations connect to both rings, i.e., Ring I and Ring II
Type B stations (cheaper) connect to only one ring
Ring failure:
If one ring fails, the other serves as a backup
If both fail ==> join the two into a new ring (twice the length, Fig. 4-45)


figure 4-45


FDDI - Protocol

Similar to 802.5
A node wants to transmit (asynchronous) data:
First, capture the Token
Then transmit frame(s), and keep removing the frame(s) as they cycle back
Unlike 802.5: the Token can be regenerated immediately after transmission ends (since the ring is long, it is wasteful to wait until the last frame cycles back)
A node wants to transmit (synchronous) data:
Handled similarly to 802.4
Synchronization:
All clocks are stable (by hardware design) to within 0.005 percent
Thus, around 2000 bytes of transmission ==> 1% clock error
Re-synchronization (using a preamble bit pattern) within <= 4500 bytes


FDDI - Synchronous Data and Priority Handling

Synchronous frames are generated every 125 microseconds
Provides 8k samples per second for PCM or ISDN data
Each synchronous frame carries 96 bytes of data
Can accommodate up to 4 T1 data lines: 24 bytes * 8000 frames/s * 8 bits = roughly a T1 line's bandwidth (1.544 Mbps)
Synchronous traffic gets guaranteed bandwidth: once allocated, it stays connected until the node transmits its last frame
Remaining bandwidth (= 96 bytes * 8000/s minus the load offered by synchronous traffic) is allocated on demand
Priority assignment is similar to 802.4 (i.e., within a node); a quick check of the arithmetic follows below
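The slide's numbers in code form; the small gap between 4 x 24-byte payloads and the nominal 1.544 Mbps T1 line rate is the T1 framing overhead, which this back-of-envelope check ignores.

```python
frame_interval_s = 125e-6                      # one synchronous frame every 125 us
sync_payload_bytes = 96                        # payload per synchronous frame

sync_bandwidth = sync_payload_bytes * 8 / frame_interval_s
t1_payload = 24 * 8 / frame_interval_s         # 24 bytes per 125 us slot

print(sync_bandwidth / 1e6)           # 6.144 Mbps of guaranteed synchronous bandwidth
print(sync_bandwidth / t1_payload)    # 4.0 -> room for four T1 payloads
                                      # (nominal T1 line rate is 1.544 Mbps with framing)
```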

System parameters:
Token holding timer - maximum token holding time
Token rotation timer - checks for a long-absent token (NB: fault detection)


Switched Architecture - Way to Go !!

Insert Figure 4-48 here