Computer Networks


Queueing theory

Queueing theory is the mathematical study of waiting lines, or queues. The theory enables

mathematical analysis of several related processes, including arriving at the (back of the) queue,

waiting in the queue (essentially a storage process), and being served at the front of the queue.

The theory permits the derivation and calculation of several performance measures including the

average waiting time in the queue or the system, the expected number waiting or receiving

service, and the probability of encountering the system in certain states, such as empty, full,

having an available server or having to wait a certain time to be served.

Queueing theory has applications in diverse fields,[1] including telecommunications,[2] traffic

engineering, computing[3] and the design of factories, shops, offices and hospitals.[4]

Overview

The word queue comes, via French, from the Latin cauda, meaning tail. The spelling "queueing"

over "queuing" is typically encountered in the academic research field. In fact, one of the flagship

journals of the profession is named "Queueing Systems".

Queueing theory is generally considered a branch of operations research because the results

are often used when making business decisions about the resources needed to provide service.

It is applicable in a wide variety of situations that may be encountered in business, commerce,

industry, healthcare,[5] public service and engineering. Applications are frequently encountered

in customer service situations as well as transport and telecommunication. Queueing theory is directly applicable to intelligent transportation systems, call centers, PABXs, networks, telecommunications, server queueing, mainframe computer queueing of telecommunications terminals, advanced telecommunications systems, and traffic flow.

Notation for describing the characteristics of a queueing model was first suggested by David G.

Kendall in 1953. Kendall's notation introduced an A/B/C queueing notation that can be found in

all standard modern works on queueing theory, for example, Tijms.[6]

The A/B/C notation designates a queueing system having A as interarrival time distribution, B as

service time distribution, and C as number of servers. For example, "G/D/1" would indicate a

General (may be anything) arrival process, a Deterministic (constant time) service process and a

single server. More details on this notation are given in the article about queueing models.
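As a minimal illustration, the three fields of Kendall's A/B/C notation map onto a small data structure. The class below is purely hypothetical (nothing like it exists in a standard library); it simply makes the notation concrete in Python:

from dataclasses import dataclass

@dataclass
class KendallQueue:
    """A queueing model described in Kendall's A/B/C notation."""
    arrival: str    # A: interarrival-time distribution, e.g. "M", "D", "G"
    service: str    # B: service-time distribution
    servers: int    # C: number of servers

    def __str__(self) -> str:
        return f"{self.arrival}/{self.service}/{self.servers}"

# "G/D/1": general arrivals, deterministic service, a single server
print(KendallQueue("G", "D", 1))   # -> G/D/1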

History

Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange,

published the first paper on queueing theory in 1909.[7]

Page 2: Computer Networks

Shashank Agnihotri Computer Networks – Page 2

David G. Kendall introduced an A/B/C queueing notation in 1953. Important work on queueing

theory used in modern packet switching networks was performed in the early 1960s by Leonard

Kleinrock.

Application to telephony

The public switched telephone network (PSTN) is designed to accommodate the offered traffic

intensity with only a small loss. The performance of loss systems is quantified by their grade of

service, driven by the assumption that if sufficient capacity is not available, the call is refused

and lost.[8] Alternatively, overflow systems make use of alternative routes to divert calls via

different paths — even these systems have a finite traffic carrying capacity.[8]
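The grade of service of such a loss system is classically computed with the Erlang B formula, which gives the probability that a call is blocked when a traffic intensity of E erlangs is offered to m circuits. A small sketch using the standard recurrence (the function name is ours):

def erlang_b(traffic: float, circuits: int) -> float:
    """Erlang B blocking probability via the recurrence
    B(E, 0) = 1,  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = traffic * b / (m + traffic * b)
    return b

# 5 erlangs of offered traffic on 8 circuits
print(f"{erlang_b(5.0, 8):.4f}")   # ~0.0700: roughly 7% of calls are lost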

However, the use of queueing in PSTNs allows the systems to queue their customers' requests

until free resources become available. This means that if traffic intensity levels exceed available

capacity, customers' calls are not lost; customers instead wait until they can be served.[9] This

method is used in queueing customers for the next available operator.

A queueing discipline determines the manner in which the exchange handles calls from

customers.[9] It defines the way they will be served, the order in which they are served, and the

way in which resources are divided among the customers.[9][10] Here are details of four queueing

disciplines:

First in first out 

This principle states that customers are served one at a time and that the customer that has

been waiting the longest is served first.[10]

Last in first out  

This principle also serves customers one at a time; however, the customer with the shortest waiting time will be served first.[10] Also known as a stack.

Processor sharing  

Customers are served equally. Network capacity is shared between customers and they all

effectively experience the same delay.[10]

Priority  

Customers with high priority are served first.[10]
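Three of these disciplines map directly onto basic data structures, as the rough Python sketch below shows (processor sharing has no such simple analogue, since it divides capacity continuously among all waiting customers):

from collections import deque
import heapq

fifo = deque()                       # first in, first out
fifo.append("call-1"); fifo.append("call-2")
assert fifo.popleft() == "call-1"    # longest-waiting customer served first

lifo = []                            # last in, first out: a stack
lifo.append("call-1"); lifo.append("call-2")
assert lifo.pop() == "call-2"        # shortest-waiting customer served first

prio = []                            # priority: lower number = higher priority
heapq.heappush(prio, (2, "routine call"))
heapq.heappush(prio, (1, "emergency call"))
assert heapq.heappop(prio)[1] == "emergency call"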

Page 3: Computer Networks

Shashank Agnihotri Computer Networks – Page 3

Queueing is handled by control processes within exchanges, which can be modelled using state

equations.[9][10] Queueing systems use a particular form of state equations known as a Markov

chain that models the system in each state.[9] Incoming traffic to these systems is modelled via

a Poisson distribution and is subject to Erlang's queueing theory assumptions:[8]

Pure-chance traffic – Call arrivals and departures are random and independent events.[8]

Statistical equilibrium – Probabilities within the system do not change.[8]

Full availability – All incoming traffic can be routed to any other customer within the

network.[8]

Congestion is cleared as soon as servers are free.[8]

Classic queueing theory involves complex calculations to determine waiting time, service time,

server utilization and other metrics that are used to measure queueing performance.[9][10]

Queueing networks

Networks of queues are systems which contain an arbitrary, but finite, number m of queues.

Customers, sometimes of different classes,[11] travel through the network and are served at the nodes. The state of a network can be described by a vector $(k_1, k_2, \ldots, k_m)$, where $k_i$ is the number of customers at queue $i$. In open networks, customers can join and leave the system, whereas in closed networks the total number of customers within the system remains fixed.

The first significant results in this area were Jackson networks, for which an efficient product-form equilibrium distribution exists, and the mean value analysis, which allows average metrics such as throughput and sojourn times to be computed.[12]
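Exact mean value analysis for a closed network with a single customer class fits in a few lines. The sketch below follows the standard textbook recursion (function and variable names are ours, and every station is assumed to be a single-server FCFS queue):

def mva(visits, service, customers):
    """Exact mean value analysis for a closed queueing network.
    visits[k]  -- visit ratio of station k
    service[k] -- mean service time at station k
    Returns (throughput, mean queue length per station)."""
    K = len(visits)
    L = [0.0] * K                     # mean number of customers at each station
    X = 0.0
    for n in range(1, customers + 1):
        # residence times with n customers, by the arrival theorem
        R = [service[k] * (1.0 + L[k]) for k in range(K)]
        X = n / sum(visits[k] * R[k] for k in range(K))  # system throughput
        L = [X * visits[k] * R[k] for k in range(K)]     # Little's law
    return X, L

# two stations, equal visit ratios, five circulating customers
print(mva([1.0, 1.0], [0.5, 0.2], 5))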

Role of Poisson process, exponential distributions

A useful queueing model represents a real-life system with sufficient accuracy and is analytically

tractable. A queueing model based on the Poisson process and its companion exponential

probability distribution often meets these two requirements. A Poisson process models random

events (such as a customer arrival, a request for action from a web server, or the completion of

the actions requested of a web server) as emanating from a memoryless process. That is, the

length of the time interval from the current time to the occurrence of the next event does not

depend upon the time of occurrence of the last event. In the Poisson probability distribution, the

observer records the number of events that occur in a time interval of fixed length. In the

Page 4: Computer Networks

Shashank Agnihotri Computer Networks – Page 4

(negative) exponential probability distribution, the observer records the length of the time interval

between consecutive events. In both, the underlying physical process is memoryless.

Models based on the Poisson process often respond to inputs from the environment in a manner

that mimics the response of the system being modeled to those same inputs. The analytically

tractable models that result yield both information about the system being modeled and the form

of their solution. Even a queueing model based on the Poisson process that does a relatively

poor job of mimicking detailed system performance can be useful. The fact that such models

often give "worst-case" scenario evaluations appeals to system designers who prefer to include

a safety factor in their designs. Also, the form of the solution of models based on the Poisson

process often provides insight into the form of the solution to a queueing problem whose

detailed behavior is poorly mimicked. As a result, queueing models are frequently modeled

as Poisson processes through the use of the exponential distribution.
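The link between the two distributions is easy to check numerically: drawing exponential inter-arrival gaps yields Poisson-distributed counts per fixed interval. A small sketch (the function and its parameter names are ours):

import random

def poisson_counts(rate, interval, trials):
    """Count arrivals per interval when gaps are Exponential(rate)."""
    counts = []
    for _ in range(trials):
        t, n = random.expovariate(rate), 0
        while t <= interval:
            n += 1
            t += random.expovariate(rate)  # memoryless: past gaps are irrelevant
        counts.append(n)
    return counts

counts = poisson_counts(rate=3.0, interval=1.0, trials=10_000)
print(sum(counts) / len(counts))   # sample mean, close to rate * interval = 3.0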

Limitations of queueing theory

The assumptions of classical queueing theory may be too restrictive to be able to model real-

world situations exactly. The complexity of production lines with product-specific characteristics

cannot be handled with those models. Therefore specialized tools have been developed to

simulate, analyze, visualize and optimize time dynamic queueing line behavior.

For example, the mathematical models often assume infinite numbers of customers, infinite queue capacity, or no bounds on inter-arrival or service times, when it is quite apparent that these bounds must exist in reality. Often, although the bounds do exist, they can be safely ignored because the differences between the real world and theory are not statistically significant, as the probability that such boundary situations might occur is remote compared to the expected normal situation. Furthermore, several studies show the robustness of queueing models outside their assumptions. In other cases the theoretical solution may either prove intractable or insufficiently informative to be useful.

Alternative means of analysis have thus been devised in order to provide some insight into

problems that do not fall under the scope of queueing theory, although they are often scenario-

specific because they generally consist of computer simulations or analysis of experimental

data. See network traffic simulation.


Birth–death process

The birth–death process is a special case of a continuous-time Markov process where the states represent the current size of a population and where the transitions are limited to births and deaths. Birth–death processes have many applications in demography, queueing theory, performance engineering, and in biology, for example to study the evolution of bacteria.

When a birth occurs, the process goes from state n to n + 1. When a death occurs, the process goes from state n to state n − 1. The process is specified by birth rates $\{\lambda_i\}_{i=0}^{\infty}$ and death rates $\{\mu_i\}_{i=1}^{\infty}$.

Examples

A pure birth process is a birth–death process where $\mu_i = 0$ for all $i$.

A pure death process is a birth–death process where $\lambda_i = 0$ for all $i$.

A (homogeneous) Poisson process is a pure birth process where $\lambda_i = \lambda$ for all $i$.

The M/M/1 model and M/M/c model, both used in queueing theory, are birth–death processes used to describe customers in an infinite queue.
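A birth–death process can be simulated directly from its rates with the standard continuous-time Markov chain recipe: sample an exponential holding time at the total rate, then pick a birth or a death in proportion to their rates. The helper below is our own sketch; with constant rates it reproduces an M/M/1 queue:

import random

def time_average_state(birth, death, t_end, state=0):
    """Simulate a birth-death process with state-dependent rates
    birth(n) and death(n); return the time-average state over [0, t_end]."""
    t, area = 0.0, 0.0
    while t < t_end:
        lam = birth(state)
        mu = death(state) if state > 0 else 0.0
        total = lam + mu
        if total == 0.0:
            area += state * (t_end - t)    # absorbing state: hold to the end
            break
        dwell = min(random.expovariate(total), t_end - t)
        area += state * dwell
        t += dwell
        state += 1 if random.random() < lam / total else -1
    return area / t_end

# M/M/1 with lambda = 1.0 and mu = 1.25 in every state (rho = 0.8)
print(time_average_state(lambda n: 1.0, lambda n: 1.25, t_end=100_000.0))
# close to rho / (1 - rho) = 4.0, the M/M/1 mean queue length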

Use in queueing theory

In queueing theory the birth–death process is the most fundamental example of a queueing model, the M/M/C/K/$\infty$/FIFO queue (in complete Kendall's notation). This is a queue with Poisson arrivals drawn from an infinite population, C servers with exponentially distributed service times, and K places in the queue. Despite the assumption of an infinite population, this model is a good model for various telecommunication systems.

M/M/1 queue

The M/M/1 queue is a single-server queue with an infinite buffer size. The birth and death rates in queueing models are long-term averages, so the average arrival rate is given as $\lambda$ and the average service time as $1/\mu$. The birth–death process is an M/M/1 queue when

$$\lambda_k = \lambda \quad\text{and}\quad \mu_k = \mu \quad\text{for all } k.$$

The difference equations for the probability that the system is in state k at time t are

$$p_0'(t) = \mu\, p_1(t) - \lambda\, p_0(t),$$

$$p_k'(t) = \lambda\, p_{k-1}(t) + \mu\, p_{k+1}(t) - (\lambda + \mu)\, p_k(t), \quad k \ge 1.$$

M/M/C queue

The M/M/C queue is a multi-server queue with C servers and an infinite buffer. It differs from the M/M/1 queue only in the service rate, which now becomes

$$\mu_k = k\mu \quad (k \le C)$$

and

$$\mu_k = C\mu \quad (k > C),$$

with

$$\lambda_k = \lambda \quad\text{for all } k.$$

M/M/1/K queue

The M/M/1/K queue is a single-server queue with a buffer of size K. This queue has applications in telecommunications, as well as in biology when a population has a capacity limit. In telecommunications we again use the parameters from the M/M/1 queue, with

$$\lambda_k = \lambda \quad (0 \le k < K), \qquad \lambda_k = 0 \quad (k \ge K), \qquad \mu_k = \mu.$$

In biology, particularly the growth of bacteria, when the population is zero there is no ability to grow, so

$$\lambda_0 = 0,$$

and the capacity K can additionally represent a limit beyond which the population dies from overpopulation.

The differential equations for the probability that the system is in state k at time t are

$$p_0'(t) = \mu\, p_1(t) - \lambda_0\, p_0(t),$$

$$p_k'(t) = \lambda_{k-1}\, p_{k-1}(t) + \mu\, p_{k+1}(t) - (\lambda_k + \mu)\, p_k(t), \quad 1 \le k \le K - 1,$$

$$p_K'(t) = \lambda_{K-1}\, p_{K-1}(t) - \mu\, p_K(t).$$


Equilibrium

A queue is said to be in equilibrium if the limit $p_k = \lim_{t\to\infty} p_k(t)$ exists. For this to be the case, $p_k'(t)$ must be zero.

Using the M/M/1 queue as an example, the steady-state (equilibrium) equations are

$$\lambda_0\, p_0 = \mu_1\, p_1,$$

$$(\lambda_k + \mu_k)\, p_k = \lambda_{k-1}\, p_{k-1} + \mu_{k+1}\, p_{k+1}, \quad k \ge 1.$$

If $\lambda_k = \lambda$ and $\mu_k = \mu$ for all $k$ (the homogeneous case), this reduces to

$$p_k = p_0 \left(\frac{\lambda}{\mu}\right)^k, \quad k \ge 0,$$

with $p_0 = 1 - \lambda/\mu$ when $\lambda < \mu$.
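In code, the homogeneous equilibrium distribution and the usual metrics derived from it follow directly from the utilization $\rho = \lambda/\mu$; a quick sketch (the function name is ours):

def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics; requires lam < mu for stability."""
    assert lam < mu, "queue is unstable unless lambda < mu"
    rho = lam / mu                        # server utilization
    L = rho / (1 - rho)                   # mean number in the system
    W = L / lam                           # mean time in system (Little's law)
    p = lambda k: (1 - rho) * rho ** k    # P(system is in state k)
    return rho, L, W, p

rho, L, W, p = mm1_metrics(lam=4.0, mu=5.0)
print(rho, L, W, p(0))   # 0.8, 4.0, 1.0, 0.2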

Limit behaviour

In a small time $\delta t$, only three types of transition are possible: one death, one birth, or neither a birth nor a death. If the rate of occurrences (per unit time) of births is $\lambda$ and that for deaths is $\mu$, then the probabilities of the above transitions are $\mu\,\delta t$, $\lambda\,\delta t$, and $1 - (\lambda + \mu)\,\delta t$ respectively, up to terms of order $o(\delta t)$.

For a population process, "birth" is the transition towards increasing the population by 1 while

"death" is the transition towards decreasing the population size by 1.


Protocol

In information technology, a protocol is the special set of rules that end points in a telecommunication connection use when they communicate. Protocols specify interactions between the communicating entities.

Protocols exist at several levels in a telecommunication connection. For example, there are protocols for the data interchange at the hardware device level and protocols for data interchange at the application program level. In the standard model known as Open Systems Interconnection (OSI), there are one or more protocols at each layer in the telecommunication exchange that both ends of the exchange must recognize and observe. Protocols are often described in an industry or international standard.

The TCP/IP Internet protocols, a common example, consist of:

Transmission Control Protocol (TCP), which uses a set of rules to exchange messages with other Internet points at the information packet level

Internet Protocol (IP), which uses a set of rules to send and receive messages at the Internet address level

Additional protocols, including the Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP), each with defined sets of rules to use with corresponding programs elsewhere on the Internet

There are many other Internet protocols, such as the Border Gateway Protocol (BGP) and the Dynamic Host Configuration Protocol (DHCP).
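As a concrete illustration of this layering, a few lines of Python open a TCP connection and send an HTTP request over it: the standard-library socket module supplies the transport (TCP) connection, the operating system handles IP routing beneath it, and the request text itself is the application-layer protocol. The host name here is only a placeholder:

import socket

# TCP (transport layer) connection; IP (network layer) is handled by the OS
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # HTTP (application layer) rides on top of the TCP byte stream
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"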

The word protocol comes from the Greek protokollon, meaning a leaf of paper glued to a manuscript volume that describes the contents.


OSI model

7. Application layer
NNTP · SIP · SSI · DNS · FTP · Gopher · HTTP · NFS · NTP · SMPP · SMTP · SNMP · Telnet · DHCP · Netconf · RTP · SPDY · (more)

6. Presentation layer
MIME · XDR · TLS · SSL

5. Session layer
Named pipe · NetBIOS · SAP · PPTP · SOCKS

4. Transport layer
TCP · UDP · SCTP · DCCP · SPX

3. Network layer
IP (IPv4, IPv6) · ICMP · IPsec · IGMP · IPX · AppleTalk

2. Data link layer
ATM · SDLC · HDLC · ARP · CSLIP · SLIP · GFP · PLIP · IEEE 802.2 · LLC · L2TP · IEEE 802.3 · Frame Relay · ITU-T G.hn DLL · PPP · X.25 · Network switch

1. Physical layer
EIA/TIA-232 · EIA/TIA-449 · ITU-T V-Series · I.430 · I.431 · POTS · PDH · SONET/SDH · PON · OTN · DSL · IEEE 802.3 · IEEE 802.11 · IEEE 802.15 · IEEE 802.16 · IEEE 1394 · ITU-T G.hn PHY · USB · Bluetooth · Hubs


The Open Systems Interconnection (OSI) model is a product of the Open Systems

Interconnection effort at the International Organization for Standardization. It is a prescription of

characterizing and standardizing the functions of a communications system in terms

of abstraction layers. Similar communication functions are grouped into logical layers. A layer

serves the layer above it and is served by the layer below it.

For example, a layer that provides error-free communications across a network provides the

path needed by applications above it, while it calls the next lower layer to send and receive

packets that make up the contents of that path. Two instances at one layer are connected by a

horizontal connection on that layer.

History

Work on a layered model of network architecture started in the late 1970s, when the International Organization for Standardization (ISO) began to develop its OSI framework architecture. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols.

The concept of a seven-layer model was provided by the work of Charles Bachman of Honeywell Information Services. Various aspects of OSI design evolved from experiences with the ARPANET, the fledgling Internet, NPLNET, EIN, the CYCLADES network and the work in IFIP WG6.1. The new design was documented in ISO 7498 and its various addenda. In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it, and provided facilities for use by the layer above it.

Protocols enabled an entity in one host to interact with a corresponding entity at the same layer

in another host. Service definitions abstractly described the functionality provided to an (N)-layer

by an (N-1) layer, where N was one of the seven layers of protocols operating in the local host.


The OSI standards documents are available from the ITU-T as the X.200-series of

recommendations.[1] Some of the protocol specifications were also available as part of the ITU-T

X series. The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO,

but only some of them without fees.[2]

Description of OSI layers

According to recommendation X.200, there are seven layers, labeled 1 to 7, with layer 1 at the

bottom. Each layer is generically known as an N layer. An "N+1 entity" (at layer N+1) requests

services from an "N entity" (at layer N).

At each level, two entities (N-entity peers) interact by means of the N protocol by

transmitting protocol data units (PDU).

A Service Data Unit (SDU) is a specific unit of data that has been passed down from an OSI

layer to a lower layer, and which the lower layer has not yet encapsulated into a protocol data

unit (PDU). An SDU is a set of data that is sent by a user of the services of a given layer, and is

transmitted semantically unchanged to a peer service user.

The PDU at layer N is the SDU of layer N−1. In effect, the SDU is the 'payload' of a given PDU. That is, the process of changing an SDU into a PDU consists of an encapsulation process, performed by the lower layer. All the data contained in the SDU becomes encapsulated within the PDU. Layer N−1 adds headers or footers, or both, to the SDU, transforming it into a PDU of layer N−1. The added headers or footers are part of the process used to make it possible to get data from a source to a destination.
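A toy sketch of this encapsulation, with invented layer names and header contents purely for illustration:

def encapsulate(sdu: bytes, layer: str) -> bytes:
    """Wrap an SDU in a (fake) header and trailer to form the lower layer's PDU."""
    return f"[{layer}-hdr]".encode() + sdu + f"[{layer}-trl]".encode()

pdu = b"application data"              # the topmost payload
for layer in ("transport", "network", "data-link"):
    pdu = encapsulate(pdu, layer)      # each lower layer wraps the one above

print(pdu.decode())
# [data-link-hdr][network-hdr][transport-hdr]application data[transport-trl][network-trl][data-link-trl]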

OSI Model

Layer               Data unit          Function

Host layers
7. Application      Data               Network process to application
6. Presentation     Data               Data representation, encryption and decryption; converting machine-dependent data to machine-independent data
5. Session          Data               Interhost communication; managing sessions between applications
4. Transport        Segments           End-to-end connections, reliability and flow control

Media layers
3. Network          Packet/Datagram    Path determination and logical addressing
2. Data link        Frame              Physical addressing
1. Physical         Bit                Media, signal and binary transmission

Some orthogonal aspects, such as management and security, involve every layer.

Security services are not related to a specific layer: they can be related to a number of layers,

as defined by ITU-T X.800 Recommendation.[3]

These services aim to improve the CIA triad (confidentiality, integrity, and availability) of transmitted data. In practice, the availability of the communication service is determined by network design and/or network management protocols. Appropriate choices for these are needed to protect against denial of service.

Layer 1: physical layer

The physical layer defines electrical and physical specifications for devices. In particular, it defines the relationship between a device and a transmission medium, such as a copper or fiber optical cable. This includes the layout of pins, voltages, cable specifications, hubs, repeaters, network adapters, host bus adapters (HBA, used in storage area networks) and more.

The major functions and services performed by the physical layer are:

Establishment and termination of a connection to a communications medium.

Participation in the process whereby the communication resources are effectively shared

among multiple users. For example, contention resolution and flow control.

Modulation, or conversion between the representation of digital data in user equipment

and the corresponding signals transmitted over a communications channel. These are signals

operating over the physical cabling (such as copper and optical fiber) or over a radio link.

Parallel SCSI buses operate in this layer, although it must be remembered that the

logical SCSI protocol is a transport layer protocol that runs over this bus. Various physical-layer

Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data link

layer. The same applies to other local-area networks, such as token ring, FDDI, ITU-

T G.hn and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE

802.15.4.

Layer 2: data link layer


The data link layer provides the functional and procedural means to transfer data between

network entities and to detect and possibly correct errors that may occur in the physical layer.

Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic

of wide area media in the telephone system. Local area network architecture, which included

broadcast-capable multiaccess media, was developed independently of the ISO work in IEEE

Project 802. IEEE work assumed sublayering and management functions not required for WAN

use. In modern practice, only error detection, not flow control using a sliding window, is present in data link protocols such as the Point-to-Point Protocol (PPP); on local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on other local area networks its flow control and acknowledgment mechanisms are rarely used. Sliding-window flow control and acknowledgment are used at the transport layer by protocols such as TCP, but are still used at the data link layer in niches where X.25 offers performance advantages.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires

(power lines, phone lines and coaxial cables), includes a complete data link layer which provides

both error correction and flow control by means of a selective repeat Sliding Window Protocol.

Both WAN and LAN service arrange bits, from the physical layer, into logical sequences called

frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely

intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not

used by the layer.

WAN protocol architecture

Connection-oriented WAN data link protocols, in addition to framing, detect and may correct

errors. They are also capable of controlling the rate of transmission. A WAN data link layer might

implement a sliding window flow control and acknowledgment mechanism to provide reliable

delivery of frames; that is the case for Synchronous Data Link Control (SDLC) and HDLC, and

derivatives of HDLC such as LAPB and LAPD.

IEEE 802 LAN architecture

Practical, connectionless LANs began with the pre-IEEE Ethernet specification, which is the

ancestor of IEEE 802.3. This layer manages the interaction of devices with a shared medium,

which is the function of a media access control (MAC) sublayer. Above this MAC sublayer is the

media-independent IEEE 802.2 Logical Link Control (LLC) sublayer, which deals with

addressing and multiplexing on multiaccess media.


While IEEE 802.3 is the dominant wired LAN protocol and IEEE 802.11 the wireless LAN

protocol, obsolescent MAC layers include Token Ring and FDDI. The MAC sublayer detects but

does not correct errors.

Layer 3: network layer

The network layer provides the functional and procedural means of transferring variable

length data sequences from a source host on one network to a destination host on a different

network, while maintaining the quality of service requested by the transport layer (in contrast to

the data link layer which connects hosts within the same network). The network layer performs

network routing functions, and might also perform fragmentation and reassembly, and report

delivery errors. Routers operate at this layer, sending data throughout the extended network and

making the Internet possible. This is a logical addressing scheme – values are chosen by the

network engineer. The addressing scheme is not hierarchical.

The network layer may be divided into three sublayers:

1. Subnetwork access – that considers protocols that deal with the interface to networks,

such as X.25;

2. Subnetwork-dependent convergence – when it is necessary to bring the level of a transit

network up to the level of networks on either side

3. Subnetwork-independent convergence – handles transfer across multiple networks.

An example of this latter case is CLNP, or IPv7 ISO 8473. It manages

the connectionless transfer of data one hop at a time, from end system to ingress router, router

to router, and from egress router to destination end system. It is not responsible for reliable

delivery to a next hop, but only for the detection of erroneous packets so they may be discarded.

In this scheme, IPv4 and IPv6 would have to be classed with X.25 as subnet access protocols

because they carry interface addresses rather than node addresses.

A number of layer-management protocols, a function defined in the Management Annex, ISO

7498/4, belong to the network layer. These include routing protocols, multicast group

management, network-layer information and error, and network-layer address assignment. It is

the function of the payload that makes these belong to the network layer, not the protocol that

carries them.

Layer 4: transport layer


The transport layer provides transparent transfer of data between end users, providing reliable

data transfer services to the upper layers. The transport layer controls the reliability of a given

link through flow control, segmentation/desegmentation, and error control. Some protocols are

state- and connection-oriented. This means that the transport layer can keep track of the

segments and retransmit those that fail. The transport layer also provides the acknowledgement

of the successful data transmission and sends the next data if no errors occurred.

OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is

also known as TP0 and provides the least features) to class 4 (TP4, designed for less reliable

networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use

on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP

contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all

OSI TP connection-mode protocol classes provide expedited data and preservation of record

boundaries. Detailed characteristics of TP0-4 classes are shown in the following table:[4]

Feature                                                        TP0  TP1  TP2  TP3  TP4
Connection-oriented network                                    Yes  Yes  Yes  Yes  Yes
Connectionless network                                         No   No   No   No   Yes
Concatenation and separation                                   No   Yes  Yes  Yes  Yes
Segmentation and reassembly                                    Yes  Yes  Yes  Yes  Yes
Error recovery                                                 No   Yes  Yes  Yes  Yes
Reinitiate connection (if an excessive number of
  PDUs are unacknowledged)                                     No   Yes  No   Yes  No
Multiplexing and demultiplexing over a single virtual circuit  No   No   Yes  Yes  Yes
Explicit flow control                                          No   No   Yes  Yes  Yes
Retransmission on timeout                                      No   No   No   No   Yes
Reliable transport service                                     No   Yes  No   Yes  Yes

Perhaps an easy way to visualize the transport layer is to compare it with a Post Office, which

deals with the dispatch and classification of mail and parcels sent. Do remember, however, that

a post office manages the outer envelope of mail. Higher layers may have the equivalent of

double envelopes, such as cryptographic presentation services that can be read by the

addressee only. Roughly speaking, tunneling protocols operate at the transport layer, such as

carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end

encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets.

Although not developed under the OSI Reference Model and not strictly conforming to the OSI

definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram

Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols

within OSI.

Layer 5: session layer

The session layer controls the dialogues (connections) between computers. It establishes,

manages and terminates the connections between the local and remote application. It provides

for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment,

termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls. At this level, inter-process communication happens (SIGHUP, SIGKILL, End Process, etc.).

Layer 6: presentation layer

The presentation layer establishes context between application-layer entities, in which the

higher-layer entities may use different syntax and semantics if the presentation service provides

a mapping between them. If a mapping is available, presentation service data units are

encapsulated into session protocol data units, and passed down the stack.


This layer provides independence from data representation (e.g., encryption) by translating

between application and network formats. The presentation layer transforms data into the form

that the application accepts. This layer formats and encrypts data to be sent across a network. It

is sometimes called the syntax layer.[5]

The original presentation structure used the basic encoding rules of Abstract Syntax Notation

One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded

file, or serialization of objects and other data structures from and to XML.

Layer 7: application layer

The application layer is the OSI layer closest to the end user, which means that both the OSI

application layer and the user interact directly with the software application. This layer interacts

with software applications that implement a communicating component. Such application

programs fall outside the scope of the OSI model. Application-layer functions typically include

identifying communication partners, determining resource availability, and synchronizing

communication. When identifying communication partners, the application layer determines the

identity and availability of communication partners for an application with data to transmit. When

determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication

between applications requires cooperation that is managed by the application layer. Some

examples of application-layer implementations also include:

On OSI stack:

FTAM  File Transfer and Access Management Protocol

X.400  Mail

Common management information protocol  (CMIP)

On TCP/IP stack:

Hypertext Transfer Protocol  (HTTP),

File Transfer Protocol  (FTP),

Simple Mail Transfer Protocol  (SMTP)

Simple Network Management Protocol  (SNMP).

Cross-layer functions

This "datagram service model" reference in MPLS may be confusing or unclear to readers. Please help clarify the "datagram service model" reference in MPLS; suggestions may be found on the talk page

There are some functions or services that are not tied to a given layer, but they can affect more

than one layer. Examples include the following:


security service (telecommunication)[3] as defined by ITU-T X.800 Recommendation.

management functions, i.e. functions that permit the configuration, instantiation, monitoring and termination of the communications of two or more entities: there is a specific application-layer protocol, the common management information protocol (CMIP), and its corresponding service, the common management information service (CMIS); these need to interact with every layer in order to deal with their instances.

Multiprotocol Label Switching  (MPLS) operates at an OSI-model layer that is generally

considered to lie between traditional definitions of layer 2 (data link layer) and layer 3 (network

layer), and thus is often referred to as a "layer-2.5" protocol. It was designed to provide a unified

data-carrying service for both circuit-based clients and packet-switching clients which provide a

datagram service model. It can be used to carry many different kinds of traffic, including IP

packets, as well as native ATM, SONET, and Ethernet frames.

ARP is used to translate IPv4 addresses (OSI layer 3) into Ethernet MAC addresses (OSI

layer 2).

Interfaces

Neither the OSI Reference Model nor OSI protocols specify any programming interfaces, other

than as deliberately abstract service specifications. Protocol specifications precisely define the

interfaces between different computers, but the software interfaces inside computers, known

as network sockets, are implementation-specific.

For example, Microsoft Windows' Winsock, and Unix's Berkeley sockets and System V Transport Layer Interface, are interfaces between applications (layer 5 and above) and the transport layer (layer 4). NDIS and ODI are interfaces between the media (layer 2) and the network protocol (layer 3).

Interface standards, except for the physical layer to media, are approximate implementations of

OSI service specifications.

Examples

The following list gives, for each OSI layer, protocols from the OSI suite itself alongside examples from other protocol families (TCP/IP, Signaling System 7,[6] AppleTalk, IPX, SNA, UMTS, and miscellaneous).

7. Application
   OSI: FTAM, X.400, X.500, DAP, ROSE, RTSE, ACSE,[7] CMIP[8]
   TCP/IP: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, DHCP, SMPP, SMTP, SNMP, Telnet, RIP, BGP
   SS7: INAP, MAP, TCAP, ISUP, TUP
   AppleTalk: AFP, ZIP, RTMP, NBP
   IPX: RIP, SAP
   SNA: APPC
   Misc.: HL7, Modbus

6. Presentation
   OSI: ISO/IEC 8823, X.226, ISO/IEC 9576-1, X.236
   TCP/IP: MIME, SSL, TLS, XDR
   AppleTalk: AFP
   SNA: TDI
   Misc.: ASCII, EBCDIC, MIDI, MPEG

5. Session
   OSI: ISO/IEC 8327, X.225, ISO/IEC 9548-1, X.235
   TCP/IP: Sockets; session establishment in TCP, RTP
   AppleTalk: ASP, ADSP, PAP
   IPX: NWLink
   SNA: DLC?
   Misc.: Named pipes, NetBIOS, SAP, half duplex, full duplex, simplex, RPC, SOCKS

4. Transport
   OSI: ISO/IEC 8073, TP0, TP1, TP2, TP3, TP4 (X.224), ISO/IEC 8602, X.234
   TCP/IP: TCP, UDP, SCTP, DCCP
   AppleTalk: DDP
   IPX: SPX
   Misc.: NBF

3. Network
   OSI: ISO/IEC 8208, X.25 (PLP), ISO/IEC 8878, X.223, ISO/IEC 8473-1, CLNP, X.233
   TCP/IP: IP, IPsec, ICMP, IGMP, OSPF
   SS7: SCCP, MTP
   AppleTalk: ATP (TokenTalk or EtherTalk)
   IPX: IPX
   UMTS: RRC (Radio Resource Control), PDCP (Packet Data Convergence Protocol) and BMC (Broadcast/Multicast Control)
   Misc.: NBF, Q.931, IS-IS

2. Data Link
   OSI: ISO/IEC 7666, X.25 (LAPB), Token Bus, X.222, ISO/IEC 8802-2 LLC Type 1 and 2[9]
   TCP/IP: PPP, SBTV SLIP, PPTP
   SS7: MTP, Q.710
   AppleTalk: LocalTalk, AppleTalk Remote Access, PPP
   IPX: IEEE 802.3 framing, Ethernet II framing
   SNA: SDLC
   UMTS: LLC (Logical Link Control), MAC (Media Access Control)
   Misc.: 802.3 (Ethernet), 802.11a/b/g/n MAC/LLC, 802.1Q (VLAN), ATM, HDP, FDDI, Fibre Channel, Frame Relay, HDLC, ISL, PPP, Q.921, Token Ring, CDP, NDP, ARP (maps layer 3 to layer 2 addresses), ITU-T G.hn DLL, CRC, bit stuffing, ARQ, Data Over Cable Service Interface Specification (DOCSIS), interface bonding

1. Physical
   OSI: X.25 (X.21bis, EIA/TIA-232, EIA/TIA-449, EIA-530, G.703)[9]
   SS7: MTP, Q.710
   AppleTalk: RS-232, RS-422, STP, PhoneNet
   SNA: Twinax
   UMTS: UMTS physical layer (L1)
   Misc.: RS-232, full duplex, RJ45, V.35, V.34, I.430, I.431, T1, E1, 10BASE-T, 100BASE-TX, POTS, SONET, SDH, DSL, 802.11a/b/g/n PHY, ITU-T G.hn PHY, Controller Area Network, Data Over Cable Service Interface Specification (DOCSIS)


Comparison with TCP/IP model

In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict

layers as in the OSI model.[10] RFC 3439 contains a section entitled "Layering considered

harmful." However, TCP/IP does recognize four broad layers of functionality which are derived

from the operating scope of their contained protocols, namely the scope of the software

application, the end-to-end transport connection, the internetworking range, and the scope of the

direct links to other nodes on the local network.

Even though the concept is different from the OSI model, these layers are nevertheless often

compared with the OSI layering scheme in the following way: The Internet application

layer includes the OSI application layer, presentation layer, and most of the session layer. Its

end-to-end transport layer includes the graceful close function of the OSI session layer as well

as the OSI transport layer. The internetworking layer (Internet layer) is a subset of the OSI

network layer (see above), while the link layer includes the OSI data link and physical layers, as

well as parts of OSI's network layer. These comparisons are based on the original seven-layer

protocol model as defined in ISO 7498, rather than refinements in such things as the internal

organization of the network layer document.

The presumably strict peer layering of the OSI model as it is usually described does not present

contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy

implied in a layered model. Such examples exist in some routing protocols (e.g., OSPF), or in

the description of tunneling protocols, which provide a link layer for an application, although the

tunnel host protocol may well be a transport or even an application layer protocol in its own right.


Data Link Layer (Layer 2) 

The second-lowest layer (layer 2) in the OSI Reference Model stack is the data link layer, often abbreviated “DLL” (though that abbreviation has other meanings as well in the computer world). The data link layer, also sometimes just called the link layer, is where many wired and wireless local area networking (LAN) technologies primarily function. For example, Ethernet, Token Ring, FDDI and 802.11 (“wireless Ethernet” or “Wi-Fi”) are all sometimes called “data link layer technologies”. The set of devices connected at the data link layer is what is commonly considered a simple “network”, as opposed to an internetwork.

Data Link Layer Sublayers: Logical Link Control (LLC) and Media Access Control (MAC)

The data link layer is often conceptually divided into two sublayers: logical link control (LLC) and media access control (MAC). This split is based on the architecture used in the IEEE 802 Project, which is the IEEE working group responsible for creating the standards that define many networking technologies (including all of the ones I mentioned above except FDDI). By separating LLC and MAC functions, interoperability of different network technologies is made easier, as explained in our earlier discussion of networking model concepts.

Data Link Layer Functions

The following are the key tasks performed at the data link layer:

o Logical Link Control (LLC): Logical link control refers to the functions required for the establishment and control of logical links between local devices on a network. As mentioned above, this is usually considered a DLL sublayer; it provides services to the network layer above it and hides the rest of the details of the data link layer to allow different technologies to work seamlessly with the higher layers. Most local area networking technologies use the IEEE 802.2 LLC protocol.

o Media Access Control (MAC): This refers to the procedures used by devices to control access to the network medium. Since many networks use a shared medium (such as a single network cable, or a series of cables that are electrically connected into a single virtual medium) it is necessary to have rules for managing the medium to avoid conflicts. For example, Ethernet uses the CSMA/CD method of media access control, while Token Ring uses token passing.

o Data Framing: The data link layer is responsible for the final encapsulation of higher-level messages into frames that are sent over the network at the physical layer.

o Addressing: The data link layer is the lowest layer in the OSI model that is concerned with addressing: labeling information with a particular destination location. Each device on a network has a unique number, usually called a hardware address or MAC address, that is used by the data link layer protocol to ensure that data intended for a specific machine gets to it properly.

o Error Detection and Handling: The data link layer handles errors that occur at the lower levels of the network stack. For example, a cyclic redundancy check (CRC) field is often employed to allow the station receiving data to detect if it was received correctly; a short sketch of this follows the list.
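To make the error-detection idea concrete, here is a minimal sketch using the CRC-32 checksum from Python's standard zlib module. Real data link protocols define their own CRC polynomials and frame formats; this only illustrates the principle:

import zlib

payload = b"data link layer frame payload"
frame = payload + zlib.crc32(payload).to_bytes(4, "big")   # append a CRC field

# receiver side: recompute the CRC over the payload and compare
received_payload, received_crc = frame[:-4], frame[-4:]
ok = zlib.crc32(received_payload).to_bytes(4, "big") == received_crc
print(ok)   # True; flipping any bit of the frame would make this False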


Physical Layer Requirements Definition and Network Interconnection Device Layers

As I mentioned in the topic discussing the physical layer, that layer and the data link layer are very closely related. The requirements for the physical layer of a network are often part of the data link layer definition of a particular technology. Certain physical layer hardware and encoding aspects are specified by the DLL technology being used. The best example of this is the Ethernet standard, IEEE 802.3, which specifies not just how Ethernet works at the data link layer, but also its various physical layers.

Since the data link layer and physical layer are so closely related, many types of hardware are associated with the data link layer. Network interface cards (NICs) typically implement a specific data link layer technology, so they are often called “Ethernet cards”, “Token Ring cards”, and so on. There are also a number of network interconnection devices that are said to “operate at layer 2”, in whole or in part, because they make decisions about what to do with data they receive by looking at data link layer frames. These devices include most bridges, switches and routers, though the latter two also encompass functions performed by layer three.

Some of the most popular technologies and protocols generally associated with layer 2 are Ethernet, Token Ring, FDDI (plus CDDI), HomePNA, IEEE 802.11, ATM, and TCP/IP's Serial Line Internet Protocol (SLIP) and Point-To-Point Protocol (PPP).

Key Concept: The second OSI Reference Model layer is the data link layer. This is the place where most LAN and wireless LAN technologies are defined. Layer two is responsible for logical link control, media access control, hardware addressing, error

detection and handling, and defining physical layer standards. It is often divided into the logical link control (LLC) and media access control (MAC) sublayers, based on the IEEE 802 Project that uses that architecture.

The Data-Link layer is the protocol layer in a program that handles the moving of data in and out

across a physical link in a network. The Data-Link layer is layer 2 in the Open Systems

Interconnect (OSI) model for a set of telecommunication protocols.

The Data-Link layer contains two sublayers that are described in the IEEE-802 LAN standards:

Media Access Control (MAC)

Logical Link Control (LLC)

The Data-Link layer ensures that an initial connection has been set up, divides output data into

data frames, and handles the acknowledgements from a receiver that the data arrived

successfully. It also ensures that incoming data has been received successfully by analyzing bit

patterns at special places in the frames.


Physical Layer (Layer 1) 

The lowest layer of the OSI Reference Model is layer 1, the physical layer; it is commonly abbreviated “PHY”. The physical layer is special compared to the other layers of the model, because it is the only one where data is physically moved across the network interface. All of the other layers perform useful functions to create messages to be sent, but they must all be transmitted down the protocol stack to the physical layer, where they are actually sent out over the network.

Note: The physical layer is also “special” in that it is the only layer that really does not apply specifically to TCP/IP. Even in studying TCP/IP, however, it is still important to understand its significance and role in relation to the other layers where TCP/IP protocols

reside.

Understanding the Role of the Physical Layer

The name “physical layer” can be a bit problematic. Because of that name, and because of what I just said about the physical layer actually transmitting data, many people who study networking get the impression that the physical layer is only about actual network hardware. Some people may say the physical layer is “the network interface cards and cables”. This is not actually the case, however. The physical layer defines a number of network functions, not just hardware cables and cards.

A related notion is that “all network hardware belongs to the physical layer”. Again, this isn't strictly accurate. All hardware must have some relation to the physical layer in order to send data over the network, but hardware devices generally implement multiple layers of the OSI model, including the physical layer but also others. For example, an Ethernet network interface card performs functions at both the physical layer and the data link layer.

Physical Layer Functions

The following are the main responsibilities of the physical layer in the OSI Reference Model:

o Definition of Hardware Specifications: The details of operation of cables, connectors, wireless radio transceivers, network interface cards and other hardware devices are generally a function of the physical layer (although also partially the data link layer; see below).

o Encoding and Signaling: The physical layer is responsible for various encoding and signaling functions that transform the data from bits that reside within a computer or other device into signals that can be sent over the network. A small illustration of one classic line code follows this list.

o Data Transmission and Reception: After encoding the data appropriately, the physical layer actually transmits the data, and of course, receives it. Note that this applies equally to wired and wireless networks, even if there is no tangible cable in a wireless network!

o Topology and Physical Network Design: The physical layer is also considered the domain of many hardware-related network design issues, such as LAN and WAN topology.
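As promised above, here is a toy illustration of one classic line code: Manchester encoding, used by 10 Mbit/s Ethernet, represents every bit with a mid-bit signal transition that the receiver can also use for clock recovery. The −1/+1 signal levels are abstract values chosen for illustration:

def manchester_encode(bits):
    """Manchester encoding, IEEE 802.3 convention:
    0 -> high-then-low (+1, -1); 1 -> low-then-high (-1, +1)."""
    levels = []
    for b in bits:
        levels += [-1, +1] if b else [+1, -1]
    return levels

print(manchester_encode([1, 0, 1, 1]))
# [-1, 1, 1, -1, -1, 1, -1, 1]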


In general, then, physical layer technologies are ones that are at the very lowest level and deal with the actual ones and zeroes that are sent over the network. For example, when considering network interconnection devices, the simplest ones operate at the physical layer: repeaters, conventional hubs and transceivers. These devices have absolutely no knowledge of the contents of a message. They just take input bits and send them as output. Devices like switches and routers operate at higher layers and look at the data they receive as being more than voltage or light pulses that represent one or zero.

Relationship Between the Physical Layer and Data Link Layer

It's important to point out that while the physical layer of a network technology primarily defines the hardware it uses, the physical layer is closely related to the data link layer. Thus, it is not generally possible to define hardware at the physical layer “independently” of the technology being used at the data link layer. For example, Ethernet is a technology that describes specific types of cables and network hardware, but the physical layer of Ethernet can only be isolated from its data link layer aspects to a point. While Ethernet cables are “physical layer”, for example, their maximum length is related closely to message format rules that exist at the data link layer.

Furthermore, some technologies perform functions at the physical layer that are normally more closely associated with the data link layer. For example, it is common to have the physical layer perform low-level (bit level) repackaging of data link layer frames for transmission. Error detection and correction may also be done at layer 1 in some cases. Most people would consider these “layer two functions”.

In many technologies, a number of physical layers can be used with a data link layer. Again here, the classic example is Ethernet, where dozens of different physical layer implementations exist, each of which uses the same data link layer (possibly with slight variations.)

Physical Layer Sublayers

Finally, many technologies further subdivide the physical layer into sublayers. In order to increase performance, physical layer encoding and transmission methods have become more complex over time. The physical layer may be broken into layers to allow different network media to be supported by the same technology, while sharing other functions at the physical layer that are common between the various media. A good example of this is the physical layer architecture used for Fast Ethernet, Gigabit Ethernet and 10-Gigabit Ethernet.

Note: In some contexts, the physical layer technology used to convey bits across a network or communications line is called a transport method. Don't confuse this with the functions of the OSI transport layer (layer 4).

Key Concept: The lowest layer in the OSI Reference Model is the physical layer. It is the realm of networking hardware specifications, and is the place where technologies reside that perform data encoding, signaling, transmission and reception functions. The physical


layer is closely related to the data link layer.


The ALOHA protocol

Pure ALOHA

Pure ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which have

collided.

The first version of the protocol (now called "Pure ALOHA", and the one implemented in

ALOHAnet) was quite simple:

If you have data to send, send the data

If the message collides with another transmission, try resending "later"

Note that the first step implies that Pure ALOHA does not check whether the channel is busy

before transmitting. The critical aspect is the "later" concept: the quality of the backoff scheme

chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and

the predictability of its behavior.

To assess Pure ALOHA, we need to predict its throughput, the rate of (successful) transmission

of frames. (This discussion of Pure ALOHA's performance follows Tanenbaum.[9]) First, let's

make a few simplifying assumptions:

All frames have the same length.

Stations cannot generate a frame while transmitting or trying to transmit. (That is, if a

station keeps trying to send a frame, it cannot be allowed to generate more frames to send.)

The population of stations attempts to transmit (both new frames and old frames that

collided) according to a Poisson distribution.

Let "T" refer to the time needed to transmit one frame on the channel, and let's define "frame-

time" as a unit of time equal to T. Let "G" refer to the mean used in the Poisson distribution over

transmission-attempt amounts: that is, on average, there are G transmission-attempts per

frame-time.


Overlapping frames in the pure ALOHA protocol. Frame-time is equal to 1 for all frames.

Consider what needs to happen for a frame to be transmitted successfully. Let "t" refer to the

time at which we want to send a frame. We want to use the channel for one frame-time

beginning at t, and so we need all other stations to refrain from transmitting during this time.

Moreover, we need the other stations to refrain from transmitting between t-T and t as well,

because a frame sent during this interval would overlap with our frame.

For any frame-time, the probability of there being k transmission-attempts during that frame-time is:

$$\Pr[k \text{ attempts}] = \frac{G^k e^{-G}}{k!}$$

Comparison of Pure ALOHA and Slotted ALOHA shown on a throughput vs. traffic load plot.

The average amount of transmission-attempts for 2 consecutive frame-times is 2G. Hence, for

any pair of consecutive frame-times, the probability of there being ktransmission-attempts during

those two frame-times is:

Therefore, the probability ( ) of there being zero transmission-attempts between t-

T and t+T (and thus of a successful transmission for us) is:

Page 30: Computer Networks

Shashank Agnihotri Computer Networks – Page 30

The throughput can be calculated as the rate of transmission-attempts multiplied by the

probability of success, and so we can conclude that the throughput ( ) is:

The maximum throughput is 0.5/e frames per frame-time (reached when G = 0.5), which is approximately 0.184 frames per frame-time. This means that, in Pure ALOHA, only about 18.4% of the time is used for successful transmissions.
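As a quick numerical check (not part of the original discussion; the search grid is an arbitrary choice), the following Python sketch evaluates S = G * e^(-2G) and confirms where the maximum falls:

import math

# Numerical check of the Pure ALOHA result derived above: S = G * e^(-2G).
# The grid step (0.001) and range (0 < G <= 2) are arbitrary choices.

def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)

best_G = max((g / 1000 for g in range(1, 2001)), key=pure_aloha_throughput)
print(f"max throughput {pure_aloha_throughput(best_G):.4f} at G = {best_G:.3f}")
# prints: max throughput 0.1839 at G = 0.500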

Slotted ALOHA

Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which are in the same slots.

An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced discrete timeslots and increased the maximum throughput.[10] A station can send only at the beginning of a timeslot, and thus collisions are reduced. In this case, we only need to worry about the transmission-attempts within 1 frame-time and not 2 consecutive frame-times, since collisions can only occur during each timeslot. Thus, the probability of there being zero transmission-attempts in a single timeslot is:

P_0 = e^(-G)

The probability of k packets is:

P[k] = (G^k * e^(-G)) / k!

The throughput is:

S = G * e^(-G)

The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is approximately 0.368 frames per frame-time, or 36.8%.
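The closed form can also be checked by simulation. The Python sketch below is illustrative only: the slot count is arbitrary and the Poisson sampler follows Knuth's classic method. It draws Poisson(G) transmission attempts per slot and counts a slot as a success when exactly one attempt occurs:

import math
import random

def sample_poisson(lam: float) -> int:
    """Knuth's method for drawing a Poisson-distributed random variable."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_slotted_aloha(G: float, slots: int = 200_000) -> float:
    """A slot carries a frame successfully only if exactly one station sends."""
    successes = sum(1 for _ in range(slots) if sample_poisson(G) == 1)
    return successes / slots  # successful frames per frame-time

print(simulate_slotted_aloha(1.0))  # close to 1/e
print(1 / math.e)                   # 0.3678...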

Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military forces, in subscriber-based satellite communications networks, mobile telephony call setup, and in contactless RFID technologies.

Other Protocols

The use of a random access channel in ALOHAnet led to the development of Carrier Sense Multiple Access (CSMA), a 'listen before send' random access protocol which can be used when all nodes send and receive on the same channel. The first implementation of CSMA was Ethernet, and CSMA was extensively modeled in [11].

ALOHA and the other random-access protocols have an inherent variability in their throughput and delay performance characteristics. For this reason, applications which need highly deterministic load behavior often used polling or token-passing schemes (such as token ring) instead of contention systems. For instance, ARCNET was popular in embedded data applications in the 1980s.

Design

Network architecture

Two fundamental choices which dictated much of the ALOHAnet design were the two-channel star configuration of the network and the use of random accessing for user transmissions.

The two-channel configuration was primarily chosen to allow for efficient transmission of the relatively dense total traffic stream being returned to users by the central time-sharing computer. An additional reason for the star configuration was the desire to centralize as many communication functions as possible at the central network node (the Menehune), minimizing the cost of the original all-hardware terminal control unit (TCU) at each user node.

The random access channel for communication between users and the Menehune was designed specifically for the traffic characteristics of interactive computing. In a conventional communication system a user might be assigned a portion of the channel on either a frequency-division multiple access (FDMA) or time-division multiple access (TDMA) basis. Since it was well known that in time-sharing systems [circa 1970] computer and user data are bursty, such fixed assignments are generally wasteful of bandwidth because of the high peak-to-average data rates that characterize the traffic.

To achieve a more efficient use of bandwidth for bursty traffic, ALOHAnet developed the random access packet switching method that has come to be known as a pure ALOHA channel. This approach effectively allocates bandwidth dynamically and immediately to a user who has data to send, using the acknowledgment/retransmission mechanism described earlier to deal with occasional access collisions. While the average channel loading must be kept below about 10% to maintain a low collision rate, this still results in better bandwidth efficiency than when fixed allocations are used in a bursty traffic context.

Two 100 kHz channels in the experimental UHF band were used in the implemented system, one for the user-to-computer random access channel and one for the computer-to-user broadcast channel. The system was configured as a star network, allowing only the central node to receive transmissions in the random access channel. All user TCUs received each transmission made by the central node in the broadcast channel. All transmissions were made in bursts at 9600 bit/s, with data and control information encapsulated in packets.

Each packet consisted of a 32-bit header and a 16-bit header parity check word, followed by up to 80 bytes of data and a 16-bit parity check word for the data. The header contained address information identifying a particular user so that when the Menehune broadcast a packet, only the intended user's node would accept it.
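To make the packet layout concrete, here is a hedged Python sketch. The text does not specify the internal layout of the 32-bit header or the parity-check code, so treating the header as a user address and using a CRC-16 as the check word are assumptions for illustration only:

import struct

def crc16(data: bytes) -> int:
    """CRC-16/CCITT, used here only as a stand-in parity check word.
    The actual ALOHAnet parity code is not specified in the text above."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def build_packet(user_address: int, payload: bytes) -> bytes:
    """Pack a packet shaped like the one described above: a 32-bit header,
    a 16-bit header check word, up to 80 bytes of data, and a 16-bit data
    check word. Treating the whole header as a user address is an assumption."""
    if len(payload) > 80:
        raise ValueError("packets carry at most 80 bytes of data")
    header = struct.pack(">I", user_address)                # 32-bit header
    packet = header + struct.pack(">H", crc16(header))      # header check word
    packet += payload + struct.pack(">H", crc16(payload))   # data + data check word
    return packet

pkt = build_packet(user_address=0x0000002A, payload=b"hello, Menehune")
print(len(pkt), pkt.hex())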

Remote units

The original user interface developed for the system was an all-hardware unit called an ALOHAnet Terminal Control Unit (TCU), and was the sole piece of equipment necessary to connect a terminal into the ALOHA channel. The TCU was composed of a UHF antenna, transceiver, modem, buffer and control unit. The buffer was designed for a full line length of 80 characters, which allowed handling of both the 40 and 80 character fixed-length packets defined for the system. The typical user terminal in the original system consisted of a Teletype Model 33 or a dumb CRT user terminal connected to the TCU using a standard RS-232C interface.

Shortly after the original ALOHA network went into operation, the TCU was redesigned with one of the first Intel microprocessors, and the resulting upgrade was called a PCU (Programmable Control Unit).

Additional basic functions performed by the TCUs and PCUs were generation of a cyclic-parity-check code vector and decoding of received packets for packet error-detection purposes, and generation of packet retransmissions using a simple random interval generator. If an acknowledgment was not received from the Menehune after the prescribed number of automatic retransmissions, a flashing light was used as an indicator to the human user. Also, since the TCUs and PCUs did not send acknowledgments to the Menehune, a steady warning light was displayed to the human user when an error was detected in a received packet. Thus considerable simplification was incorporated into the initial design of the TCU as well as the PCU, making use of the fact that it was interfacing a human user into the network.

The Menehune

The central node communications processor was an HP 2100 minicomputer called the Menehune, which is the Hawaiian language word for "imp", or dwarf people,[12] and was named for its similar role to the original ARPANET Interface Message Processor (IMP) which was being deployed at about the same time. In the original system, the Menehune forwarded correctly-received user data to the UH central computer, an IBM System 360/65 time-sharing system. Outgoing messages from the 360 were converted into packets by the Menehune, which were queued and broadcast to the remote users at a data rate of 9600 bit/s. Unlike the half-duplex radios at the user TCUs, the Menehune was interfaced to the radio channels with full-duplex radio equipment.

Later developments

In later versions of the system, simple radio relays were placed in operation to connect the main network on the island of Oahu to other islands in Hawaii, and Menehune routing capabilities were expanded to allow user nodes to exchange packets with other user nodes, the ARPANET, and an experimental satellite network. More details are available in [3] and in the technical reports listed in the Further Reading section below.

Carrier sense multiple access

Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared transmission medium, such as an electrical bus, or a band of the electromagnetic spectrum.

"Carrier Sense" describes the fact that a transmitter uses feedback from a receiver that detects a carrier wave before trying to send. That is, it tries to detect the presence of an encoded signal from another station before attempting to transmit. If a carrier is sensed, the station waits for the transmission in progress to finish before initiating its own transmission. In other words, CSMA is based on the principle "sense before transmit" or "listen before talk".

"Multiple Access" describes the fact that multiple stations send and receive on the medium. Transmissions by one node are generally received by all other stations using the medium.

Protocol modifications

Carrier sense multiple access with collision detection (CSMA/CD) is a modification of CSMA. CSMA/CD is used to improve CSMA performance by terminating transmission as soon as a collision is detected, and reducing the probability of a second collision on retry.

Carrier sense multiple access with collision avoidance (CSMA/CA) is a modification of CSMA. Collision avoidance is used to improve the performance of CSMA by attempting to be less "greedy" on the channel. If the channel is sensed busy before transmission then the transmission is deferred for a "random" interval. This reduces the probability of collisions on the channel.

CSMA access modes

1-persistent 

When the sender (station) is ready to transmit data, it checks if the physical medium is busy. If so, it senses the medium continually until it becomes idle, and then it transmits a piece of data (a frame). In case of a collision, the sender waits for a random period of time and attempts to transmit again. 1-persistent CSMA is used in CSMA/CD systems including Ethernet.

P-persistent 

This is a sort of trade-off between the 1-persistent and non-persistent CSMA access modes. When the sender is ready to send data, it checks continually if the medium is busy. If the medium becomes idle, the sender transmits a frame with a probability p. If the station chooses not to transmit (the probability of this event is 1-p), the sender waits until the next available time slot and transmits again with the same probability p. This process repeats until the frame is sent or some other sender starts transmitting. In the latter case the sender monitors the channel, and when idle, transmits with a probability p, and so on. p-persistent CSMA is used in CSMA/CA systems including WiFi and other packet radio systems.

Non-persistent 

Non-persistent CSMA is less aggressive than the p-persistent protocol. In this protocol, before sending the data, the station senses the channel; if the channel is idle, it starts transmitting the data. But if the channel is busy, the station does not continuously sense it; instead, it waits for a random amount of time and repeats the algorithm. This algorithm leads to better channel utilization but also results in longer delay compared to 1-persistent CSMA.

O-persistent 

Each station is assigned a transmission order by a supervisor station. When the medium goes idle, stations wait for their time slot in accordance with their assigned transmission order. The station assigned to transmit first transmits immediately. The station assigned to transmit second waits one time slot (but by that time the first station has already started transmitting). Stations monitor the medium for transmissions from other stations and update their assigned order with each detected transmission (i.e. they move one position closer to the front of the queue).[1] O-persistent CSMA is used by CobraNet, LonWorks and the controller area network.
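To make the contrast between the persistence strategies concrete, here is a minimal Python sketch. It is illustrative only: channel_busy and transmit are stand-ins for real carrier sensing and transmission, and the timing constants are arbitrary:

import random
import time

SLOT_TIME = 0.001  # illustrative slot duration in seconds

def one_persistent(channel_busy, transmit):
    """1-persistent: sense continually; send as soon as the channel is idle."""
    while channel_busy():
        time.sleep(SLOT_TIME)
    transmit()

def non_persistent(channel_busy, transmit, max_backoff=0.01):
    """Non-persistent: if busy, wait a random time and sense again."""
    while channel_busy():
        time.sleep(random.uniform(0, max_backoff))
    transmit()

def p_persistent(channel_busy, transmit, p=0.1):
    """p-persistent: on an idle slot, send with probability p, else defer a slot."""
    while True:
        while channel_busy():
            time.sleep(SLOT_TIME)
        if random.random() < p:
            transmit()
            return
        time.sleep(SLOT_TIME)

# Toy usage: a channel that reports busy for the first few senses.
busy_count = [3]
def channel_busy():
    busy_count[0] -= 1
    return busy_count[0] > 0

one_persistent(channel_busy, lambda: print("frame sent"))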

Token bus network

Token passing in a Token bus network

Token bus is a network implementing the token ring protocol over a "virtual ring" on a coaxial cable. A token is passed around the network nodes and only the node possessing the token may transmit. If a node doesn't have anything to send, the token is passed on to the next node on the virtual ring. Each node must know the address of its neighbour in the ring, so a special protocol is needed to notify the other nodes of connections to, and disconnections from, the ring.

Token bus was standardized as IEEE standard 802.4. It is mainly used for industrial applications. Token bus was used by GM (General Motors) for their Manufacturing Automation Protocol (MAP) standardization effort. It is an application of the concepts used in token ring networks. The main difference is that the endpoints of the bus do not meet to form a physical ring. The IEEE 802.4 Working Group has been disbanded. In order to guarantee packet delay and transmission in the Token bus protocol, a modified Token bus was proposed for Manufacturing Automation Systems and flexible manufacturing systems (FMS).

Token ring


Two examples of token ring networks: a) Using a single MAU b) Using several MAUs connected to each other

Token ring network

IBM hermaphroditic connector with locking clip

An IBM 8228 MAU

Madge 4/16Mbps TokenRing ISA NIC

Token ring local area network (LAN) technology is a local area network protocol which resides at the data link layer (DLL) of the OSI model. It uses a special three-byte frame called a token that travels around the ring. Token-possession grants the possessor permission to transmit on the medium. Token ring frames travel completely around the loop.

Initially used only in IBM computers, it was eventually standardized with protocol IEEE 802.5.

Description

Stations on a token ring LAN are logically organized in a ring topology, with data being transmitted sequentially from one ring station to the next and a control token circulating around the ring controlling access. This token passing mechanism is shared by ARCNET, token bus, and FDDI, and has theoretical advantages over the stochastic CSMA/CD of Ethernet.

Physically, a token ring network is wired as a star, with 'hubs' and arms out to each station and the loop going out-and-back through each.

Cabling is generally IBM "Type-1" shielded twisted pair, with unique hermaphroditic connectors, commonly referred to as IBM data connectors. The connectors have the disadvantage of being quite bulky, requiring at least 3 x 3 cm panel space, and being relatively fragile.

Initially (in 1985) token ring ran at 4 Mbit/s, but in 1989 IBM introduced the first 16 Mbit/s token ring products and the 802.5 standard was extended to support this. In 1981, Apollo Computer introduced their proprietary 12 Mbit/s Apollo token ring (ATR), and Proteon introduced their 10 Mbit/s ProNet-10 token ring network in 1984. However, IBM token ring was not compatible with ATR or ProNet-10.

Each station passes or repeats the special token frame around the ring to its nearest downstream neighbour. This token-passing process is used to arbitrate access to the shared ring media. Stations that have data frames to transmit must first acquire the token before they can transmit them. Token ring LANs normally use differential Manchester encoding of bits on the LAN media.
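Since the text notes that token ring LANs normally use differential Manchester encoding, here is a small illustrative Python sketch of that line code. The function name and the level convention (a 0 is encoded with an extra transition at the bit boundary; every bit has a mid-bit transition) are our own choices:

def differential_manchester(bits, start_level=1):
    """Return two half-bit signal levels per input bit.
    A 0 bit adds a transition at the start of the bit cell; every bit
    cell has the mandatory mid-bit transition used for clock recovery."""
    level = start_level
    out = []
    for bit in bits:
        if bit == 0:
            level = 1 - level  # transition at the bit boundary encodes a 0
        out.append(level)      # first half of the bit cell
        level = 1 - level      # mandatory mid-bit transition
        out.append(level)      # second half of the bit cell
    return out

print(differential_manchester([1, 0, 0, 1]))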

IBM popularized the use of token ring LANs in the mid 1980s when it released its IBM token ring architecture based on active MAUs (Media Access Unit, not to be confused with Medium Attachment Unit) and the IBM Structured Cabling System. The Institute of Electrical and Electronics Engineers (IEEE) later standardized a token ring LAN system as IEEE 802.5.[1]

Token ring LAN speeds of 4 Mbit/s and 16 Mbit/s were standardized by the IEEE 802.5 working group. An increase to 100 Mbit/s was standardized and marketed during the wane of token ring's existence, while a 1000 Mbit/s speed was actually approved in 2001, but no products were ever brought to market.[2]

When token ring LANs were first introduced at 4 Mbit/s, there were widely circulated claims that they were superior to Ethernet,[3] but these claims were fiercely debated.[4][5]

With the development of switched Ethernet and faster variants of Ethernet, token ring architectures lagged behind Ethernet, and the higher sales of Ethernet allowed economies of scale which drove down prices further and added a compelling price advantage.

Token ring networks have since declined in usage and the standards activity has since come to a standstill as 100 Mbit/s switched Ethernet has dominated the LAN/layer 2 networking market.

Token frame

When no station is transmitting a data frame, a special token frame circles the loop. This special token frame is repeated from station to station until arriving at a station that needs to transmit data. When a station needs to transmit data, it converts the token frame into a data frame for transmission. Once the sending station receives its own data frame, it converts the frame back into a token. If a transmission error occurs and no token frame, or more than one, is present, a special station referred to as the Active Monitor detects the problem and removes and/or reinserts tokens as necessary (see Active and standby monitors). On 4 Mbit/s Token Ring, only one token may circulate; on 16 Mbit/s Token Ring, there may be multiple tokens.

The special token frame consists of three bytes as described below (J and K are special non-data characters, referred to as code violations).

Token priority

Token ring specifies an optional medium access scheme allowing a station with a high-priority transmission to request priority access to the token.

8 priority levels, 0–7, are used. When the station wishing to transmit receives a token or data frame with a priority less than or equal to the station's requested priority, it sets the priority bits to its desired priority. The station does not immediately transmit; the token circulates around the medium until it returns to the station. Upon sending and receiving its own data frame, the station downgrades the token priority back to the original priority.

Token ring frame format

A data token ring frame is an expanded version of the token frame that is used by stations to transmit media access control (MAC) management frames or data frames from upper layer protocols and applications.

Token Ring and IEEE 802.5 support two basic frame types: tokens and data/command frames. Tokens are 3 bytes in length and consist of a start delimiter, an access control byte, and an end delimiter. Data/command frames vary in size, depending on the size of the Information field. Data frames carry information for upper-layer protocols, while command frames contain control information and have no data for upper-layer protocols.

Data/Command Frame

SD (8 bits) | AC (8 bits) | FC (8 bits) | DA (48 bits) | SA (48 bits) | PDU from LLC, IEEE 802.2 (up to 18200 x 8 bits) | CRC (32 bits) | ED (8 bits) | FS (8 bits)

Starting Delimiter 

consists of a special bit pattern denoting the beginning of the frame. The bits from most significant to least significant are J,K,0,J,K,0,0,0. J and K are code violations. Since Manchester encoding is self-clocking and has a transition for every encoded bit 0 or 1, the J and K codings violate this and will be detected by the hardware. Both the Starting Delimiter and Ending Delimiter fields are used to mark frame boundaries.

J K 0 J K 0 0 0

1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit

Access Control 

this byte field consists of the following bits, from most significant to least significant bit order: P,P,P,T,M,R,R,R. The P bits are priority bits, T is the token bit which when set specifies that this is a token frame, M is the monitor bit which is set by the Active Monitor (AM) station when it sees this frame, and the R bits are reserved bits.

Bits 0–2: Priority | Bit 3: Token | Bit 4: Monitor | Bits 5–7: Reservation
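Because the access control byte is just eight bits, its fields can be unpacked with shifts and masks. This minimal Python sketch decodes the P,P,P,T,M,R,R,R layout shown above (the function name is ours; bit 0 is the most significant bit, as in the table):

def parse_access_control(ac: int) -> dict:
    """Split an Access Control byte into its PPP T M RRR fields."""
    return {
        "priority":    (ac >> 5) & 0b111,  # bits 0-2 (most significant)
        "token":       (ac >> 4) & 0b1,    # bit 3: token bit
        "monitor":     (ac >> 3) & 0b1,    # bit 4: set by the Active Monitor
        "reservation": ac & 0b111,         # bits 5-7 (least significant)
    }

# Example: priority 4, token bit set, monitor clear, reservation 2.
print(parse_access_control(0b100_1_0_010))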

Frame Control 

a one-byte field containing bits that describe the data portion of the frame, indicating whether the frame contains data or control information. In control frames, this byte specifies the type of control information.

Bits 0–1: Frame type | Bits 2–7: Control bits

Frame type – 01 indicates an LLC frame, IEEE 802.2 (data), and the control bits are ignored; 00 indicates a MAC frame, and the control bits indicate the type of MAC control frame.

Destination address 

a six-byte field used to specify the destination's physical address.

Source address 

Contains the physical address of the sending station. It is a six-byte field holding either the locally assigned address (LAA) or the universally assigned address (UAA) of the sending station's adapter.

Data 

a variable-length field of 0 or more bytes containing MAC management data or upper-layer information; the maximum allowable size depends on ring speed, with a maximum length of 4500 bytes.

Frame Check Sequence 

a four-byte field used to store a CRC for frame integrity verification by the receiver.

Ending Delimiter 

The counterpart to the starting delimiter, this field marks the end of the frame and consists of the following bits from most significant to least significant: J,K,1,J,K,1,I,E. I is the intermediate frame bit and E is the error bit.

J K 1 J K 1 I E

1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit

Frame Status  

a one-byte field used as a primitive acknowledgement scheme indicating whether the frame was recognized and copied by its intended receiver.

A C 0 0 A C 0 0

1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit

A = 1: address recognized; C = 1: frame copied.

Token Frame

Start Delimiter (8 bits) | Access Control (8 bits) | End Delimiter (8 bits)

Abort Frame

SD (8 bits) | ED (8 bits)

Used to abort transmission by the sending station.

Active and standby monitors

Every station in a token ring network is either an active monitor (AM) or standby monitor (SM) station. However, there can be only one active monitor on a ring at a time. The active monitor is chosen through an election or monitor contention process.

The monitor contention process is initiated when:

a loss of signal on the ring is detected.
an active monitor station is not detected by other stations on the ring.
a particular timer on an end station expires, such as when a station hasn't seen a token frame in the past 7 seconds.

When any of the above conditions take place and a station decides that a new monitor is needed, it will transmit a "claim token" frame, announcing that it wants to become the new monitor. If that token returns back to the sender, it is OK for it to become the monitor. If some other station tries to become the monitor at the same time then the station with the highest MAC address will win the election process. Every other station becomes a standby monitor. All stations must be capable of becoming an active monitor station if necessary.

The active monitor performs a number of ring administration functions. The first function is to operate as the master clock for the ring in order to provide synchronization of the signal for stations on the wire. Another function of the AM is to insert a 24-bit delay into the ring, to ensure that there is always sufficient buffering in the ring for the token to circulate. A third function for the AM is to ensure that exactly one token circulates whenever there is no frame being transmitted, and to detect a broken ring. Lastly, the AM is responsible for removing circulating frames from the ring.

Token ring insertion process

Token ring stations must go through a 5-phase ring insertion process before being allowed to participate in the ring network. If any of these phases fail, the token ring station will not insert into the ring and the token ring driver may report an error.

Phase 0 (Lobe Check) — A station first performs a lobe media check. A station is wrapped at the MSAU and is able to send 2000 test frames down its transmit pair, which will loop back to its receive pair. The station checks to ensure it can receive these frames without error.

Phase 1 (Physical Insertion) — A station then sends a 5 volt signal to the MSAU to open the relay.

Phase 2 (Address Verification) — A station then transmits MAC frames with its own MAC address in the destination address field of a token ring frame. When the frame returns, and if the Address Recognized (AR) and Frame Copied (FC) bits in the frame status are set to 0 (indicating that no other station currently on the ring uses that address), the station must participate in the periodic (every 7 seconds) ring poll process. This is where stations identify themselves on the network as part of the MAC management functions.

Phase 3 (Participation in ring poll) — A station learns the address of its Nearest Active Upstream Neighbour (NAUN) and makes its address known to its nearest downstream neighbour, leading to the creation of the ring map. The station waits until it receives an AMP or SMP frame with the AR and FC bits set to 0. When it does, the station flips both bits (AR and FC) to 1, if enough resources are available, and queues an SMP frame for transmission. If no such frames are received within 18 seconds, the station reports a failure to open and de-inserts from the ring. If the station successfully participates in a ring poll, it proceeds into the final phase of insertion, request initialization.

Phase 4 (Request Initialization) — Finally, a station sends out a special request to a parameter server to obtain configuration information. This frame is sent to a special functional address, typically a token ring bridge, which may hold timer and ring number information the new station needs to know.

IEEE 802

IEEE 802 refers to a family of IEEE standards dealing with local area networks and metropolitan area networks.

More specifically, the IEEE 802 standards are restricted to networks carrying variable-size packets. (By contrast, in cell relay networks data is transmitted in short, uniformly sized units called cells. Isochronous networks, where data is transmitted as a steady stream of octets, or groups of octets, at regular time intervals, are also out of the scope of this standard.) The number 802 was simply the next free number IEEE could assign,[1] though “802” is sometimes associated with the date the first meeting was held — February 1980.

The services and protocols specified in IEEE 802 map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. In fact, IEEE 802 splits the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media Access Control (MAC), so that the layers can be listed like this:

Data link layer

LLC Sublayer

MAC Sublayer

Physical layer

The IEEE 802 family of standards is maintained by the IEEE 802 LAN/MAN Standards Committee (LMSC). The most widely used standards are for the Ethernet family, Token Ring, Wireless LAN, Bridging and Virtual Bridged LANs. An individual Working Group provides the focus for each area.

Working groups

Name           Description                                                Note
IEEE 802.1     Bridging (networking) and Network Management
IEEE 802.2     LLC                                                        inactive
IEEE 802.3     Ethernet
IEEE 802.4     Token bus                                                  disbanded
IEEE 802.5     Defines the MAC layer for a Token Ring                     inactive
IEEE 802.6     MANs                                                       disbanded
IEEE 802.7     Broadband LAN using Coaxial Cable                          disbanded
IEEE 802.8     Fiber Optic TAG                                            disbanded
IEEE 802.9     Integrated Services LAN                                    disbanded
IEEE 802.10    Interoperable LAN Security                                 disbanded
IEEE 802.11    Wireless LAN (WLAN) & Mesh (Wi-Fi certification), a/b/g/n
IEEE 802.12    100BaseVG                                                  disbanded
IEEE 802.13    Unused[2]
IEEE 802.14    Cable modems                                               disbanded
IEEE 802.15    Wireless PAN
IEEE 802.15.1  Bluetooth certification
IEEE 802.15.2  IEEE 802.15 and IEEE 802.11 coexistence
IEEE 802.15.3  High-Rate wireless PAN
IEEE 802.15.4  Low-Rate wireless PAN (e.g., ZigBee, WirelessHART, MiWi)
IEEE 802.15.5  Mesh networking for WPAN
IEEE 802.16    Broadband Wireless Access (WiMAX certification)
IEEE 802.16.1  Local Multipoint Distribution Service
IEEE 802.17    Resilient packet ring
IEEE 802.18    Radio Regulatory TAG
IEEE 802.19    Coexistence TAG
IEEE 802.20    Mobile Broadband Wireless Access
IEEE 802.21    Media Independent Handoff
IEEE 802.22    Wireless Regional Area Network
IEEE 802.23    Emergency Services Working Group                           New (March 2010)


Unit-3 Medium Access Sublayer

Structure:

3.0 Objectives

3.1 LAN and WAN

3.2 ALOHA Protocols

3.3 LAN Protocols

3.4 IEEE 802 Standards for LANs

3.5 Fiber Optic Networks

3.6 Summary

3.7 Self Assessment Questions

3.8 Terminal Questions

3.9 Answers to Self Assessment Questions

3.10 Answers to Terminal Questions

3.0 Objectives

This unit provides the reader with the necessary theory for understanding the Medium Access Control (MAC) sublayer of the data link layer.

After completion of this unit you will be able to:

· Define LAN and MAN

· Describe the channel allocation mechanisms used in various LANs and MANs

· Describe ALOHA protocols

· Compare and Contrast various LAN protocols

· Explain various IEEE standards for LANs

3.1 LAN and WAN

i) Static Channel Allocation in LAN and MAN

ii) Dynamic Channel Allocation in LAN and MAN

Because the data link layer is overloaded, it is split into the MAC and LLC sublayers. The MAC sublayer is the bottom part of the data link layer. Medium access control is often used as a synonym for multiple access protocol, since the MAC sublayer provides the protocol and control mechanisms that are required for a certain channel access method. This unit deals with broadcast networks and their protocols.

In any broadcast network, the key issue is how to determine who gets to use the channel when there is competition for it. When only a single channel is available, determining who should get access to the channel for transmission is a complex task. Many protocols for solving the problem are known, and they form the contents of this unit.

This unit provides insight into the channel access control mechanisms that make it possible for several terminals or network nodes to communicate within a multipoint network. The MAC layer is especially important in local area networks (LANs), many of which use a multi-access channel as the basis for communication. WANs, in contrast, use point-to-point links.

To get a head start, let us define LANs and MANs.

Definition: A Local Area Network (LAN) is a network of systems spread over a small geographical area, for example a network of computers within a building or small campus.

The owner of a LAN may be the same organization within which the LAN is set up. It has higher data rates, i.e. on the scale of Mbps (the rates at which data are transferred from one system to another), because the systems to be spanned are in close proximity.

Definition: A WAN (Wide Area Network) typically spans a set of countries and has data rates less than 1 Mbps, because of the distances involved.

WANs may be owned by multiple organizations, since the spanned distance stretches across several countries.

i) Static Channel Allocation in LAN and MAN

Before going into the exact theory behind the methods of channel allocation, we need to understand the basis of this theory, which is given below:

The channel allocation problem

We can classify channels as static and dynamic. A static channel is one where the number of users is stable and the traffic is not bursty. When the number of users using the channel keeps varying, the channel is considered a dynamic channel; the traffic on dynamic channels also keeps varying. For example, in most computer systems the data traffic is extremely bursty: peak-to-mean traffic ratios of 1000:1 are common.

· Static channel allocation

The usual way of allocating a single channel among multiple users is frequency division multiplexing (FDM). If there are N users, the bandwidth is split into N equal-sized portions. FDM is a simple and efficient technique for a small number of users. However, when the number of senders is large and continuously varying, or the traffic is bursty, FDM is not suitable.

The same arguments that apply to FDM also apply to TDM. Thus, since none of the static channel allocation methods works well with bursty traffic, we explore dynamic channels.

· Dynamic channel allocation in LANs and MANs

Before discussing the channel allocation problem, that is, multiple access methods, we will state the assumptions we use so that the analysis becomes simple.

Assumptions:

1. The Station Model:

The model consists of N users or independent stations. Stations are sometimes called terminals. The probability of a frame being generated in an interval of length Δt is λΔt, where λ is a constant that defines the arrival rate of new frames. Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.

2. Single Channel Assumption:

A single channel is available for all communication. All stations can transmit on this single channel, and all can receive from it. As far as the hardware is concerned, all stations are equivalent. It is possible that the software or the protocols used may assign priorities to them.

3. Collisions:

If two frames are transmitted simultaneously, they overlap in time and the resulting signal is distorted or garbled. This event is called a collision. We assume that all stations can detect collisions. A collided frame must be retransmitted later. Here we consider no errors other than those generated by collisions.

4. Continuous Time

By the continuous time assumption we mean that frame transmission on the channel can begin at any instant of time. There is no master clock dividing time into discrete intervals.

5. Slotted Time

In the case of the slotted time assumption, time is divided into discrete slots or intervals. Frame transmission on the channel begins only at the start of a slot. A slot may contain 0, 1, or more frames: 0 frames corresponds to an idle slot, 1 frame corresponds to a successful transmission, and more than one frame corresponds to a collision.

6. Carrier Sense

Using this facility, the stations can sense the channel; that is, they can tell whether the channel is in use before trying to use it. If the channel is sensed busy, no station will attempt to transmit until it goes idle.

7. No Carrier Sense:

This assumption implies that the carrier sense facility is not available to the stations; that is, the stations cannot tell whether the channel is in use before trying to use it. They just go ahead and transmit. Only after transmitting the frame do they determine whether the transmission was successful.

The first assumption states that the stations are independent and that work is generated at a constant rate. It also assumes that each station has only one program or one user; thus, when the station is blocked, no new work is generated. The single channel assumption is the heart of this station model and of this unit. The collision assumption is also basic. Two alternative assumptions about time are discussed; for a given system only one holds, i.e. the channel is either continuous time based or slotted time based. Also, a channel can either be sensed or not sensed by the stations. Generally, LANs can sense the channel, but wireless networks cannot sense it effectively. Stations on wired carrier sense networks can terminate their transmission prematurely if they discover a collision, but in wireless networks collision detection is rarely done.

3.2 ALOHA Protocols

In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new and elegant method to solve the channel allocation problem. Their work has been extended by many researchers since then. This work is called the ALOHA system, which uses ground-based radio broadcasting. The basic idea is applicable to any system in which uncoordinated users are competing for the use of a shared channel.

Pure or Un-slotted Aloha

The ALOHA network was created at the University of Hawaii in 1970 under the leadership of Norman Abramson. The Aloha protocol is an OSI layer 2 protocol for LAN networks with broadcast topology.

The first version of the protocol was basic:

· If you have data to send, send the data

· If the message collides with another transmission, try resending it later

Figure 3.1: Pure ALOHA

Figure 3.2: Vulnerable period for the node: frame

The Aloha protocol is an OSI layer 2 protocol used for LANs. A user is assumed to be always in one of two states: typing or waiting. The station transmits a frame and checks the channel to see if it was successful. If so, the user sees the reply and continues to type. If the frame transmission is not successful, the user waits and the station retransmits the frame over and over until it has been successfully sent.

Let the frame time denote the amount of time needed to transmit a standard fixed-length frame. We assume that there are infinitely many users who generate new frames according to a Poisson distribution with mean N frames per frame time.

· If N > 1, the users are generating frames at a higher rate than the channel can handle; hence nearly every frame will suffer a collision.

· Hence the range for N is 0 < N < 1.

· Because of collisions, retransmitted frames are also added to the new frames awaiting transmission.

Let us consider the probability of k transmission attempts per frame time. Here the transmitted frames include the new frames as well as the frames given for retransmission. This total traffic is also Poisson distributed, with mean G per frame time; that is, G ≥ N.

· At low load (N ≈ 0) there will be few collisions, and hence few retransmissions; that is, G ≈ N.

· At high load (N >> 1) there are many retransmissions, and hence G > N.

· Under all loads, the throughput S is just the offered load G times the probability P0 of a successful transmission: S = G * P0.

The probability that k frames are generated during a given frame time is given by the Poisson distribution:

P[k] = (G^k * e^(-G)) / k!

So the probability of zero frames is just e^(-G). The basic throughput calculation follows a Poisson distribution with an average of 2G arrivals per two frame times; therefore, the lambda parameter in the Poisson distribution becomes 2G.

Hence P0 = e^(-2G), and the throughput is:

S = G * P0 = G * e^(-2G)

The maximum, reached at G = 0.5, is a throughput of 1/(2e) ≈ 0.184, i.e. 18.4%.

Pure ALOHA thus has a maximum throughput of about 18.4%. This means that about 81.6% of the total available bandwidth is essentially wasted due to losses from packet collisions.

Slotted ALOHA

An improvement to the original ALOHA protocol was Slotted ALOHA. In 1972, Roberts published a method to double the throughput of pure ALOHA by using discrete time-slots. His proposal was to divide time into discrete slots corresponding to one frame time. This approach requires the users to agree on the frame boundaries. To achieve synchronization, one special station emits a pip at the start of each interval, like a clock. Thus the capacity of slotted ALOHA increased to a maximum throughput of 36.8%.

The throughput for the pure and slotted ALOHA systems is shown in figure 3.3. A station can send only at the beginning of a timeslot, and thus collisions are reduced. In this case, the vulnerable period is halved to one frame time, so the average number of aggregate arrivals in the vulnerable period is G and the lambda parameter in the Poisson distribution becomes G, giving S = G * e^(-G). The maximum throughput is reached for G = 1.

Figure 3.3: Throughput versus offered load traffic

With Slotted Aloha, a centralized clock sends out small clock tick packets to the outlying stations. Outlying stations are allowed to send their packets immediately after receiving a clock tick. If there is only one station with a packet to send, this guarantees that there will never be a collision for that packet. On the other hand if there are two stations with packets to send, this algorithm guarantees that there will be a collision, and the whole of the slot period up to the next clock tick is wasted. With some mathematics, it is possible to demonstrate that this protocol does improve the overall channel utilization, by reducing the probability of collisions by a half.

It should be noted that ALOHA's characteristics are still not much different from those experienced today by Wi-Fi and similar contention-based systems that have no carrier sense capability. There is a certain amount of inherent inefficiency in these systems. It is typical to see these types of networks' throughput break down significantly as the number of users and message burstiness increase. For these reasons, applications which need highly deterministic load behavior often use token-passing schemes (such as token ring) instead of contention systems.

For instance, ARCNET is very popular in embedded applications. Nonetheless, contention-based systems also have significant advantages, including ease of management and speed in initial communication. Slotted ALOHA is used on low-bandwidth tactical satellite communications networks by the US military, subscriber-based satellite communications networks, and contactless RFID technologies.

3.3 LAN Protocols

With slotted ALOHA, the best channel utilization that can be achieved is 1/e. This is hardly surprising, since with stations transmitting at will, without paying attention to what other stations are doing, there are bound to be many collisions. In LANs, however, stations can detect what other stations are doing and adapt their behavior accordingly. Such networks can achieve a better utilization than 1/e.

CSMA Protocols:

Protocols in which stations listen for a carrier (a transmission) and act accordingly are called Carrier Sense Protocols. "Multiple Access" describes the fact that multiple nodes send and receive on the medium. Transmissions by one node are generally received by all other nodes using the medium. Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared physical medium, such as an electrical bus, or a band of electromagnetic spectrum.

The following three protocols are different implementations of the concepts discussed above:

i) Protocol 1. 1-persistent CSMA:

When a station has data to send, it first listens to the channel to see if anyone else is transmitting. If the channel is busy, the station waits until it becomes idle. When the station detects an idle channel, it transmits a frame. If a collision occurs, the station waits a random amount of time and starts retransmission.

The protocol is so called because the station transmits with a probability of 1 whenever it finds the channel idle.

ii) Protocol 2. Non-persistent CSMA:

In this protocol, a conscious attempt is made to be less greedy than in the 1-persistent CSMA protocol. Before sending a station senses the channel. If no one else is sending, the station begins doing so itself. However, if the channel is already in use, the station does not continuously sense the channel for the purpose of seizing it immediately upon detecting the end of previous transmission. Instead, it waits for a random period of time and then repeats the algorithm. Intuitively, this algorithm should lead to better channel utilization and longer delays than 1-persistent CSMA.

iii) Protocol 3. p-persistent CSMA

It applies to slotted channels and the working of this protocol is given below:

When a station becomes ready to send, it senses the channel. If it is idle, it transmits with a probability p. With a probability of q = 1 – p, it defers until the next slot. If that slot is also idle, it either transmits or defers again, with probabilities p and q. This process is repeated until either the frame has been transmitted or another station has begun transmitting. In the latter case, it acts as if there had been a collision. If the station initially senses the channel busy, it waits until the next slot and applies the above algorithm.

CSMA/CD Protocol

In computer networking, Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network control protocol in which a carrier sensing scheme is used. A transmitting data station that detects another signal while transmitting a frame stops transmitting that frame, transmits a jam signal, and then waits for a random time interval. The random time interval, also known as the "backoff delay", is determined using the truncated binary exponential backoff algorithm. This delay is used before trying to send the frame again. CSMA/CD is a modification of pure Carrier Sense Multiple Access (CSMA).

Collision detection is used to improve CSMA performance by terminating transmission as soon as a collision is detected, and reducing the probability of a second collision on retry. Methods for collision detection are media dependent, but on an electrical bus such as Ethernet, collisions can be detected by comparing transmitted data with received data. If they differ, another transmitter is overlaying the first transmitter's signal (a collision), and transmission terminates immediately. The collision recovery algorithm is a binary exponential backoff algorithm that determines the waiting time before retransmission. If the number of collisions for a frame reaches 16, the frame is considered unrecoverable.
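The truncated binary exponential backoff mentioned above can be sketched in a few lines of Python. The slot time and the caps (exponent truncated at 10, frame dropped at the 16th collision) follow classic Ethernet practice as described in the text; treat this as an illustration rather than a normative implementation:

import random

SLOT_TIME = 51.2e-6  # seconds; the classic 10 Mbps Ethernet slot time

def backoff_delay(collision_count: int) -> float:
    """Pick a random backoff delay after the nth collision on a frame."""
    if collision_count >= 16:
        raise RuntimeError("frame dropped: too many collisions")  # unrecoverable
    k = min(collision_count, 10)          # "truncated": cap the exponent at 10
    slots = random.randint(0, 2**k - 1)   # choose 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME

for n in range(1, 5):
    print(f"collision {n}: wait {backoff_delay(n) * 1e6:.1f} microseconds")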

CSMA/CD can be in any one of the following three states, as shown in figure 3.4:

1. Contention period

2. Transmission period

3. Idle period

Figure 3.4: States of CSMA/CD: Contention, Transmission, or Idle

A jam signal is sent which will cause all transmitters to back off by random intervals, reducing the probability of a collision when the first retry is attempted. CSMA/CD is a layer 2 protocol in the OSI model. Ethernet is the classic CSMA/CD protocol.

Collision Free Protocols

Although collisions do not occur with CSMA/CD once a station has unambiguously seized the channel, they can still occur during the contention period. These collisions adversely affect system performance, especially when the cable is long and the frames are short. Moreover, CSMA/CD is not universally applicable. In this section, we examine some protocols that resolve the contention for the channel without any collisions at all, not even during the contention period.

In the protocols to be described, we assume that there exist exactly N stations, each with a unique address from 0 to N-1 "wired" into it. We assume that the propagation delay is negligible.

i) A Bit Map Protocol

In this method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during the zeroth slot. No other station is allowed to transmit during this slot. Regardless of what station 0 is doing, station 1 gets the opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general, station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N stations have passed by, each station has complete knowledge of which stations wish to transmit. At that point, they begin transmitting in numerical order.

Since everyone agrees on who goes next, there will never be any collisions. After the last ready station has transmitted its frame, an event all stations can monitor, another N bit contention period is begun. If a station becomes ready just after its bit slot has passed by, it is out of luck and must remain silent until every station has had a chance and the bit map has come around again.

Protocols like this in which the desire to transmit is broadcast before the actual transmission are called Reservation Protocols.
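A one-cycle sketch of this reservation scheme in Python (the function name and data representation are ours):

def bitmap_cycle(wants_to_send: list[bool]) -> list[int]:
    """Each station j announces itself in contention slot j; the announced
    stations then transmit in numerical order, so no collisions can occur."""
    n = len(wants_to_send)
    reservations = [1 if wants_to_send[j] else 0 for j in range(n)]  # N slots
    return [j for j in range(n) if reservations[j]]                  # send order

# Stations 1, 3 and 4 (of 6) have frames queued:
print(bitmap_cycle([False, True, False, True, True, False]))  # -> [1, 3, 4]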

ii) Binary Countdown

A problem with the basic bit map protocol is that the overhead is 1 bit per station, so it does not scale well to networks with thousands of stations. We can do better by using binary station addresses.

A station wanting to use the channel now broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be of the same length. The bits in each address position from different stations are Boolean ORed together. We call this protocol binary countdown; it was used in Datakit. It implicitly assumes that the transmission delays are negligible, so that all stations see asserted bits essentially simultaneously.

To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address has been overwritten with a 1, it gives up.

Example: If stations 0010, 0100, 1001, and 1010 are all trying to get the channel for transmission, their high-order bits (0, 0, 1, 1) are ORed together to form a 1. Stations 0010 and 0100 see the 1 and know that a higher-numbered station is competing for the channel, so they give up for the current round. Stations 1001 and 1010 continue.

The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up. The winner is station 1010 because it has the highest address. After winning the bidding, it may now transmit a frame, after which another bidding cycle starts.
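The worked example above can be reproduced with a short simulation. This Python sketch (names are ours) models the wired-OR channel one bit position at a time, dropping stations that see a 1 where their own address has a 0:

def binary_countdown(addresses: list[str]) -> str:
    """Run one arbitration round; return the winning station's address."""
    contenders = list(addresses)
    width = len(addresses[0])          # all addresses have the same length
    for bit in range(width):           # high-order bit first
        channel = max(int(a[bit]) for a in contenders)  # wired-OR of the bits
        if channel == 1:
            # stations with a 0 in this position give up for this round
            contenders = [a for a in contenders if a[bit] == "1"]
    return contenders[0]

# The example from the text: 0010, 0100, 1001 and 1010 compete; 1010 wins.
print(binary_countdown(["0010", "0100", "1001", "1010"]))  # -> 1010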

This protocol has the property that higher numbered stations have a higher priority than lower numbered stations, which may be either good or bad depending on the context.

iii) Limited Contention Protocols

Until now we have considered two basic strategies for channel acquisition in a cable network: contention, as in CSMA, and collision-free methods. Each strategy can be rated as to how well it does with respect to the two important performance measures: delay at low load, and channel efficiency at high load.

Under conditions of light load, contention (i.e. pure or slotted ALOHA) is preferable due to its low delay. As the load increases, contention becomes increasingly less attractive, because the overhead associated with channel arbitration becomes greater. Just the reverse is true for collision free protocols. At low load, they have high delay, but as the load increases, the channel efficiency improves.

It would be more beneficial if we could combine the best features of contention and collision free protocols and arrive at a protocol that uses contention at low load to provide low delay, but uses a collision free technique at high load to provide good channel efficiency. Such protocols can be called Limited Contention protocols.

iv) Adaptive Tree Walk Protocol

A simple way of performing the necessary channel assignment is to use the algorithm devised by the US Army for testing soldiers for syphilis during World War II. The Army took a blood sample from N soldiers. A portion of each sample was poured into a single test tube. This mixed sample was then tested for antibodies. If none were found, all the soldiers in the group were declared healthy. If antibodies were present, two new mixed samples were prepared, one from soldiers 1 through N/2 and one from the rest. The process was repeated recursively until the infected soldiers were detected.

For the computerized version of this algorithm, let us assume that stations are arranged as the leaves of a binary tree, as shown in figure 3.5 below:

Figure 3.5: A tree for four stations

In the first contention slot following a successful frame transmission, slot 0, all stations are permitted to acquire the channel. If one of them does, fine. If there is a collision, then during slot 1 only stations falling under node 2 in the tree may compete. If one of them acquires the channel, the slot following the frame is reserved for those stations under node 3. If, on the other hand, two or more stations under node 2 want to transmit, there will be a collision during slot 1, in which case it is node 4's turn during slot 2.

In essence, if a collision occurs during slot 0, the entire tree is searched, depth first to locate all ready stations. Each bit slot is associated with some particular node in a tree. If a collision occurs, the search continues recursively with the node’s left and right children. If a bit slot is idle or if only one station transmits in it, the searching of its node can stop because all ready stations have been located.

When the load on the system is heavy, it is hardly worth the effort to dedicate slot 0 to node 1, because that makes sense only in the unlikely event that precisely one station has a frame to send.

At what level in the tree should the search begin? Clearly, the heavier the load, the farther down the tree the search should begin.
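The depth-first search can be sketched as a recursive walk over a complete binary tree. This Python sketch (the heap-style node layout and helper names are ours) polls the root first and recurses into the children only after a collision:

def leaf_is_under(leaf: int, node: int) -> bool:
    """True if tree node `leaf` lies in the subtree rooted at `node`."""
    while leaf > node:
        leaf //= 2
    return leaf == node

def tree_walk(node: int, n_leaves: int, ready: list[bool], found: list[int]):
    """Poll the stations under `node` (1-indexed, heap layout). A slot with
    two or more ready stations is a collision, so recurse into the children."""
    first_leaf = n_leaves  # leaves occupy nodes n_leaves .. 2*n_leaves - 1
    if node >= first_leaf:                       # a single station
        if ready[node - first_leaf]:
            found.append(node - first_leaf)
        return
    ready_below = [i for i in range(n_leaves)
                   if ready[i] and leaf_is_under(i + first_leaf, node)]
    if len(ready_below) <= 1:                    # idle slot or one success: stop
        found.extend(ready_below)
        return
    tree_walk(2 * node, n_leaves, ready, found)      # collision: search left...
    tree_walk(2 * node + 1, n_leaves, ready, found)  # ...then right subtree

ready = [False, True, False, True]   # stations 1 and 3 (of 4) are ready
found: list[int] = []
tree_walk(1, 4, ready, found)
print(found)                         # -> [1, 3]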

3.4 IEEE 802 standards for LANs

IEEE has standardized a number of LANs and MANs under the name IEEE 802. A few of the standards are listed in figure 3.6. The most important of the survivors are 802.3 (Ethernet) and 802.11 (wireless LAN). The two standards have different physical layers and different MAC sublayers but converge on the same logical link control sublayer, so they have the same interface to the network layer.

IEEE No    Name       Title
802.3      Ethernet   CSMA/CD Networks (Ethernet)
802.4                 Token Bus Networks
802.5                 Token Ring Networks
802.6                 Metropolitan Area Networks
802.11     WiFi       Wireless Local Area Networks
802.15.1   Bluetooth  Wireless Personal Area Networks
802.15.4   ZigBee     Wireless Sensor Networks
802.16     WiMAX      Wireless Metropolitan Area Networks

Figure 3.6: List of IEEE 802 Standards for LAN and MAN

Ethernets

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are major differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than in a radio broadcast. The common cable providing the communication channel was likened to the ether, and it was from this reference that the name "Ethernet" was derived.

From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today powers the vast majority of local computer networks. The coaxial cable was later replaced with point-to-point links connected together by hubs and/or switches in order to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network.

Above the physical layer, Ethernet stations communicate by sending each other data packets, small blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used both to specify the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.

The most common Ethernets originally operated at a data rate of 10 Mbps. Table 3.1 gives, for each one, the medium used, the maximum segment length, the number of nodes supported per segment, and its main advantage.

Table 3.1: Different 10-Mbps Ethernets

Name       Cable Type     Max Segment Length   Nodes per Segment   Advantages
10Base5    Thick coax     500 m                100                 Original cable; now obsolete
10Base2    Thin coax      185 m                30                  No hub needed
10Base-T   Twisted pair   100 m                1024                Cheapest system
10Base-F   Fiber optics   2000 m               1024                Best between buildings

Fast Ethernet

Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s. Of the 100-megabit Ethernet standards, 100BASE-TX is by far the most common and is supported by the vast majority of Ethernet hardware currently produced. Full-duplex Fast Ethernet is sometimes referred to as "200 Mbit/s", though this is somewhat misleading, as that level of improvement is achieved only if traffic patterns are symmetrical. Fast Ethernet was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by gigabit Ethernet.

A Fast Ethernet adapter can be logically divided into a medium access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC may be linked to the PHY by a 4-bit, 25-MHz synchronous parallel interface known as the MII (Media Independent Interface). Repeaters (hubs) are also allowed; they connect to multiple PHYs for their different interfaces.

· 100BASE-T is any of several Fast Ethernet standards for twisted pair cables.

· 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable),

· 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct),

· 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct).

The segment length for a 100BASE-T cable is limited to 100 metres. Most networks had to be rewired for 100-megabit speed, whether or not they supposedly already had Cat3 or Cat5 cable plants. The vast majority of 100BASE-T installations use 100BASE-TX.

100BASE-TX is the predominant form of Fast Ethernet, and runs over two pairs of category 5 or above cable. A typical category 5 cable contains 4 pairs and can therefore support two 100BASE-TX links. Each network segment can have a maximum distance of 100 metres. In its typical configuration, 100BASE-TX uses one pair of twisted wires in each direction, providing 100 Mbit/s of throughput in each direction (full-duplex).

The configuration of 100BASE-TX networks is very similar to 10BASE-T. When used to build a local area network, the devices on the network are typically connected to a hub or switch, creating a star network. Alternatively it is possible to connect two devices directly using a crossover cable.

In 100BASE-T2, the data is transmitted over two copper pairs, 4 bits per symbol. First, a 4-bit symbol is expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a linear feedback shift register.

100BASE-FX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive (RX) and transmit (TX). Maximum length is 400 metres for half-duplex connections or 2 kilometers for full-duplex.

100BASE-SX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive and transmit. It is a lower cost alternative to using 100BASE-FX, because it uses short wavelength optics which are significantly less expensive than the long wavelength optics used in 100BASE-FX. 100BASE-SX can operate at distances up to 300 meters.

100BASE-BX is a version of Fast Ethernet over a single strand of optical fiber (unlike 100BASE-FX, which uses a pair of fibers). Single-mode fiber is used, along with a special multiplexer which splits the signal into transmit and receive wavelengths.

Gigabit Ethernet

Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet packets at a rate of a gigabit per second, as defined by the IEEE 802.3-2005 standard. Half duplex gigabit links connected through hubs are allowed by the specification but in the marketplace full duplex with switches is the norm.

Gigabit Ethernet was the next iteration, increasing the speed to 1000 Mbit/s. The initial standard for gigabit Ethernet was standardized by the IEEE in June 1998 as IEEE 802.3z. 802.3z is commonly referred to as 1000BASE-X (where -X refers to either -CX, -SX, -LX, or -ZX).

IEEE 802.3ab, ratified in 1999, defines gigabit Ethernet transmission over unshielded twisted pair (UTP) category 5, 5e, or 6 cabling and became known as 1000BASE-T. With the ratification of 802.3ab, gigabit Ethernet became a desktop technology, as organizations could utilize their existing copper cabling infrastructure.

Initially, gigabit Ethernet was deployed in high-capacity backbone network links (for instance, on a high-capacity campus network). Fiber gigabit Ethernet has since been overtaken by 10 gigabit Ethernet, which was ratified by the IEEE in 2002 and provides data rates 10 times those of gigabit Ethernet. Work on copper 10 gigabit Ethernet over twisted pair has been completed, but as of July 2006 the only available adapters for 10 gigabit Ethernet over copper required specialized cabling with InfiniBand connectors and were limited to 15 m. The 10GBASE-T standard, however, specifies use of the traditional RJ-45 connectors and a longer maximum cable length. The different Gigabit Ethernet variants are listed in Table 3.2.

Table 3.2: Different Gigabit Ethernets

Name          Medium
1000BASE-T    unshielded twisted pair
1000BASE-SX   multi-mode fiber
1000BASE-LX   single-mode fiber
1000BASE-CX   balanced copper cabling
1000BASE-ZX   single-mode fiber

IEEE 802.3 Frame format

Preamble | SOF | Destination Address | Source Address | Length | Data | Pad | Checksum

Figure 3.7: Frame format of IEEE 802.3

· Preamble field

Each frame starts with a preamble of 8 bytes, each containing the bit pattern 10101010. The preamble is encoded using Manchester encoding, so the pattern produces a 10-MHz square wave for 6.4 µsec, allowing the receiver's clock to synchronize with the sender's clock.

· Address field

The frame contains two addresses, one for the destination and one for the source. Each address field is 6 bytes long. The high-order bit of the destination address is 0 for ordinary addresses and 1 for group addresses. Group addresses allow multiple stations to listen to a single address; when a frame is sent to a group address, all stations in that group receive it. This type of transmission is referred to as multicasting. The address consisting of all 1 bits is reserved for broadcasting.

· SOF: This field is 1 byte long and is used to indicate the start of the frame.

· Length:

This field is 2 bytes long. It specifies the number of data bytes present in the frame. The combination of the SOF and the length field thus makes it possible to locate the end of the frame.

· Data :

The length of this field ranges from zero to a maximum of 1500 bytes. This is where the actual message bits are placed.

· Pad:

When a transceiver detects a collision, it truncates the current frame, which means that stray bits and pieces of frames appear on the cable all the time. To make it easier to distinguish valid frames from garbage, Ethernet requires that valid frames be at least 64 bytes long, from the destination address to the checksum, both included. That means the data field must be at least 46 bytes. If the actual data is shorter than this, for example when only an acknowledgement is being transmitted, the pad field comes into play: the data and pad together must total at least 46 bytes. If the data field is 46 bytes or longer, no pad is used.

· Checksum:

It is 4 bytes long and holds a 32-bit hash code (CRC) of the data. If some data bits are received in error, the checksum will almost certainly be wrong and the error will be detected. The CRC provides error detection only, not forward error correction.
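
The padding and checksum rules above are easy to demonstrate in code. The Python sketch below is illustrative rather than byte-exact (real Ethernet transmits bits low-order first and computes the FCS with a specific bit ordering, and the function names here are invented): it pads short payloads to 46 bytes, appends a CRC-32 checksum, and also shows the group-address test. Because bits go out low-order first, the group/individual bit on the wire corresponds to the low-order bit of the first address byte in the usual byte notation.

    import struct, zlib

    def ieee802_3_frame(dst, src, data):
        """Sketch of 802.3 framing as described above (not byte-exact)."""
        length = struct.pack('!H', len(data))      # Length counts real data bytes
        if len(data) < 46:
            data = data + bytes(46 - len(data))    # Pad field fills out to 46
        body = dst + src + length + data
        fcs = struct.pack('!I', zlib.crc32(body))  # 4-byte CRC-32 checksum
        preamble = bytes([0b10101010]) * 8         # 8 preamble bytes, per the text
        sof = bytes([0b10101011])                  # 1-byte start-of-frame marker
        return preamble + sof + body + fcs

    def is_group_address(mac):
        """Group bit = first address bit on the wire = low bit of first byte."""
        return bool(mac[0] & 0x01)

    bcast = bytes([0xFF] * 6)                      # all-ones broadcast address
    src = bytes.fromhex('001122334455')
    frame = ieee802_3_frame(bcast, src, b'hello')
    print(len(frame), is_group_address(bcast))     # -> 73 True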

IEEE 802.4 Standard - Token Bus

This standard was proposed by Dirvin and Miller in 1986.

In this standard, the token bus is physically a linear or tree-shaped cable onto which the stations are attached. Logically, the stations are organized into a ring, with each station knowing the address of the stations to its "left" and "right". When the logical ring is initialized, the highest-numbered station may send the first frame. After it is done, it passes permission to its immediate neighbor by sending the neighbor a special control frame called a token. The token propagates around the logical ring, with only the token holder being permitted to transmit frames. Since only one station at a time holds the token, collisions do not occur.

Note: The physical order in which the stations are connected to the cable is not important.

Since the cable is inherently a broadcast medium, each station receives each frame, discarding those not addressed to it. When a station passes the token, it sends a token frame specifically addressed to its logical neighbor in the ring, irrespective of where the station is physically located on the cable.


Figure 3.8: Token Passing
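
A minimal sketch of this logical-ring behaviour, with invented class and field names, might look as follows in Python. Real 802.4 also bounds how long a station may hold the token; that timer is omitted here.

    class Station:
        def __init__(self, addr):
            self.addr = addr
            self.next = None          # logical neighbour, set at ring init
            self.queue = []           # frames waiting to be sent

        def on_token(self):
            while self.queue:         # the token holder may transmit
                frame = self.queue.pop(0)
                print(f"station {self.addr} sends {frame!r}")
            return self.next          # pass the token to the logical neighbour

    # Build a logical ring; the stations' physical order on the cable is
    # irrelevant, only the successor addresses matter.
    ring = [Station(a) for a in (70, 20, 50, 30)]
    for s, nxt in zip(ring, ring[1:] + ring[:1]):
        s.next = nxt

    ring[1].queue.append("hello")
    holder = ring[0]                  # in real 802.4 the highest-numbered
    for _ in range(4):                # station starts; here we start at the head
        holder = holder.on_token()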

IEEE 802.5 Standard - Token Ring

A ring is really not a broadcast medium, but a collection of individual point-to-point links that happen to form a circle. Ring engineering is almost entirely digital. A ring is also fair and has a known upper bound on channel access.

A major issue in the design and analysis of any ring network is the "physical length" of a bit. If the data rate of the ring is R Mbps, a bit is emitted every 1/R µsec. With a typical propagation speed of about 200 m/µsec, each bit occupies 200/R meters on the ring. This means, for example, that a 1-Mbps ring whose circumference is 1000 meters can contain only 5 bits on it at once.
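
This calculation is easy to check numerically; the helper below simply restates the formula from the paragraph above.

    # With a propagation speed of about 200 m/usec, a bit occupies
    # 200/R meters on a ring whose data rate is R Mbps.

    def bits_on_ring(rate_mbps, circumference_m, speed_m_per_usec=200):
        meters_per_bit = speed_m_per_usec / rate_mbps
        return circumference_m / meters_per_bit

    print(bits_on_ring(1, 1000))      # -> 5.0 bits on a 1-Mbps, 1000-m ring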

A ring really consists of a collection of ring interfaces connected by point-to-point lines. Each bit arriving at an interface is copied into a 1-bit buffer and then copied out onto the ring again. While in the buffer, the bit can be inspected and possibly modified before being written out. This copying step introduces a 1-bit delay at each interface.

In a token ring a special bit pattern, called the token, circulates around the ring whenever all stations are idle. When a station wants to transmit a frame, it is required to seize the token and remove it from the ring before transmitting. Since there is only one token, only one station can transmit at a given instant, thus solving the channel access problem the same way the token bus solves it.

3.5 Fiber Optic Networks

Fiber optics is becoming increasingly important, not only for wide area point-to-point links, but also for MANs and LANs. Fiber has high bandwidth, is thin and lightweight, is not affected by electromagnetic interference from heavy machinery, power surges or lightning, and has excellent security because it is nearly impossible to wiretap without detection.

FDDI (Fiber Distributed Data Interface)

It is a high performance fiber optic token ring LAN running at 100 Mbps over distances up to 200 km with up to 1000 stations connected. It can be used in the same way as any of the 802 LANs, but with its high bandwidth, another common use is as a backbone to connect copper LANs.

FDDI-II is a successor of FDDI, modified to handle synchronous circuit-switched PCM data for voice or ISDN traffic, in addition to ordinary data.

FDDI uses multimode fibers. It also uses LEDs rather than lasers because FDDI may sometimes be used to connect directly to workstations.

The FDDI cabling consists of two fiber rings, one transmitting clockwise and the other counterclockwise. If either one breaks, the other can be used as a backup.

FDDI defines two classes of stations A and B. Class A stations connect to both rings. The cheaper class B stations only connect to one of the rings. Depending on how important fault tolerance is, an installation can choose class A or class B stations, or some of each.

S/NET

It is another kind of fiber optic network with an active star for switching. It was designed and implemented at Bell laboratories. The goal of S/NET is very fast switching.

Each computer in the network has two 20-Mbps fibers running to the switch, one for input and one for output. The fibers terminate in a BIB (Bus Interface Board). The CPUs each have an I/O device register that acts like a one-word window into BIB memory. When a word is written to that device register, the interface board in the CPU transmits the bits serially over the fiber to the BIB, where they are reassembled as a word in BIB memory. When the whole frame to be transmitted has been copied to BIB memory, the CPU writes a command to another I/O device register to cause the switch to copy the frame to the memory of the destination BIB and interrupt the destination CPU.

Access to this bus is done by a priority algorithm. Each BIB has a unique priority. When a BIB wants access to the bus it asserts a signal on the bus corresponding to its priority. The requests are recorded and granted in priority order, with one word transferred (16 bits in parallel) at a time. When all requests have been granted, another round of bidding is started and BIBs can again request the bus. No bus cycles are lost to contention, so switching speed is 16 bits every 200 nsec, or 80 Mbps.

3.6 Summary

This unit discusses the Medium Access Sublayer. It discusses LANs and WANs in detail, describes the basic LAN protocols called the ALOHA protocols, presents the IEEE 802 standards for LANs, and discusses the importance of Fiber Optic Networks and the cabling used as a backbone for LAN connectivity.

3.7 Self Assessment Questions

1. The Data Link Layer of the ISO OSI model is divided into ______ sublayers

a) 1 b) 4 c) 3 d) 2

2. The ______ layer is responsible for resolving access to the shared media or resources.

a) physical b) MAC sublayer c) Network d) Transport

3. A WAN typically spans a set of countries that have data rates less than _______ Mbps

a) 2 b) 1 c) 4 d) 100

4. The ________ model consists of N users or independent stations.

5. The Aloha protocol is an OSI _______ protocol for LAN networks with broadcast topology

6. In ______ method, each contention period consists of exactly N slots

3.8 Terminal Questions

1. Discuss ALOHA protocols

2. Discuss various LAN protocols

3. Discuss IEEE 802 standards for LANs

3.9 Answers to Self Assessment Questions

1. d

2. b

3. b

4. Station

5. layer 2

6. A Bit Map Protocol

3.10 Answers to Terminal Questions


1. Refer to section 3.2

2. Refer to section 3.3

3. Refer to section 3.4

ATM Protocol Structure

Figure 33 shows the ATM layered architecture as described in ITU-T recommendation I.321 (1992). This is the basis on which the B-ISDN Protocol Reference Model has been defined.

Figure 33: ATM Protocol Architecture

ATM Physical Layer

The physical layer accepts or delivers payload cells at its point of access to the ATM layer. It provides for cell delineation, which enables the receiver to recover cell boundaries. It generates and verifies the HEC field. If the HEC cannot be verified or corrected, the physical layer discards the errored cell. Idle cells are inserted in the transmit direction and removed in the receive direction.

For the physical transmission of bits, 5 types of transmission frame adaptations are specified (by the ITU and the ATM Forum). Each one has its own lower or upper bound on the amount of bits it can carry (from 12.5 Mbps to 10 Gbps so far):

1. Synchronous Digital Hierarchy (SDH) - 155 Mbps;
2. Plesiochronous Digital Hierarchy (PDH) - 34 Mbps;
3. Cell Based - 155 Mbps;
4. Fibre Distributed Data Interface (FDDI) - 100 Mbps;
5. Synchronous Optical Network (SONET) - 51 Mbps.

The actual physical link could be either optical or coaxial, with the possibility of Unshielded Twisted Pair (UTP Category 3/5) and Shielded Twisted Pair (STP Category 5) in the mid range (12.5 to 51 Mbps).

ATM Layer

The ATM layer mainly performs switching, routing and multiplexing. The characteristic features of the ATM layer are independent of the physical medium. Four functions of this layer have been identified:

1. cell multiplexing (in the transmit direction);
2. cell demultiplexing (at the receiving end);
3. VPI/VCI translation;
4. cell header generation/extraction.

This layer accepts or delivers cell payloads. It adds the appropriate ATM cell header when transmitting and removes the cell header in the receiving direction, so that only the cell information field is delivered to the ATM Adaptation Layer. At ATM switching/cross-connect nodes, VPI and VCI translation occurs: at a VC switch new values of both VPI and VCI are obtained, whereas at a VP switch only new values for the VPI field are obtained (see Figure 34). Depending on the direction, either the individual VPs and VCs are multiplexed into a single cell stream, or the single cell stream is demultiplexed to recover the individual VPs and VCs.

Figure 34: VC/VP Switching in ATM

ATM Adaptation Layer (AAL)

The ATM Adaptation Layer (AAL) sits between the ATM layer and the higher layers. Its basic function is the adaptation of the services provided by the ATM layer to the requirements of the higher layers. This layer accepts and delivers data streams that are structured for use with the user's own communication protocol. It changes these protocol data structures into ATM cell payloads when transmitting and does the reverse when receiving. It also inserts into cell payloads any timing information required by users, or extracts it from them. This is done in accordance with five AAL service classes, defined as follows:

1. AAL1 - Adaptation for Constant Bit Rate (CBR) services (connection oriented, 47-byte payload);
2. AAL2 - Adaptation for Variable Bit Rate (VBR) services (connection oriented, 45-byte payload);
3. AAL3 - Adaptation for Variable Bit Rate data services (connection oriented, 44-byte payload);
4. AAL4 - Adaptation for Variable Bit Rate data services (connectionless, 44-byte payload);
5. AAL5 - Adaptation for signalling and data services (48-byte payload).

For the transfer of information in real time, AAL1 and AAL2, which support connection-oriented services, are important. AAL4, which supports a connectionless service, was originally meant for data that is sensitive to loss but not to delay. However, the introduction of AAL5, which uses a 48-byte payload with no overheads, has made AAL3/4 redundant. Frame Relay and MPEG-2 (Moving Pictures Expert Group) video are two services which will specifically use AAL5.

ATM Services

CBR Service

This supports the transfer of information between source and destination at a constant bit rate. CBR service uses AAL1. A typical example is the transfer of voice at 64 Kbps over ATM; another is the transport of fixed-rate video. This type of service over an ATM network is sometimes called circuit emulation (similar to a voice circuit on a telephone network).

VBR Service

This service is useful for sources with variable bit rates. Typical examples are variable bit rate audio and video.

ABR and UBR Services

The definition of CBR and VBR has resulted in two other service types, called Available Bit Rate (ABR) services and Unspecified Bit Rate (UBR) services. ABR services use the instantaneous bandwidth available after bandwidth has been allocated to CBR and VBR services, which makes the bandwidth of an ABR service variable. Although there is no guaranteed delivery time for data transported using ABR services, the integrity of the data is guaranteed. This is ideal for carrying time-insensitive (but loss-sensitive) data, as in LAN-LAN interconnect and IP over ATM. UBR service, as the name implies, has an unspecified bit rate, which the network can use to transport information relating to network management, monitoring, etc.

EXAMPLE NETWORKS – connection-oriented networks: X.25, Frame Relay and ATM

Since the beginning of networking, a war has been going on between the supporters of connectionless subnets (i.e., datagrams) and the supporters of connection-oriented subnets. The main proponents of the connectionless approach come from the ARPANET/Internet community.

Remember that the DoD's original desire in establishing and building the ARPANET was to have a network that could keep functioning even after multiple nuclear weapon strikes had destroyed numerous routers and transmission lines. Fault tolerance was therefore high on its list of priorities; billing customers was not. This approach led to a connectionless design in which each packet is routed independently of every other packet.

Consequently, if some routers go down during a session, no harm is done, because the system can reconfigure itself dynamically so that subsequent packets can find a route to the destination, even if it differs from the one used by previous packets.

The connection-oriented camp comes from the world of the telephone companies.

In the telephone system, a caller must dial the called party's number and wait for the connection before talking or sending data. This connection setup establishes a route through the telephone system that is maintained until the call is terminated. All words or packets follow the same route. If a line or switch on the path goes down, the call is aborted. This property is precisely what the DoD did not like.

So why do the telephone companies like it? For two reasons:

1. Quality of service.
2. Billing.

By first establishing a connection, the subnet can reserve resources such as buffer space and router CPU time. If an attempt is made to set up a call and insufficient resources are available, the call is rejected and the caller gets a busy signal. Once a connection has been established, it will get good service.

With a connectionless network, if too many packets arrive at the same router at the same moment, the router will choke and may lose packets. The sender may eventually notice this and resend them, but the quality of service will be jerky and inadequate for audio or video unless the network is lightly loaded. Needless to say, providing adequate audio quality is something the telephone companies take great care over, hence their preference for connections.

The second reason the telephone companies prefer connection-oriented service is that they are accustomed to charging for connection time. Long-distance calls, domestic or international, are charged by the minute. When data networks arrived, they were naturally drawn toward a model in which charging by the minute would be easy. If a connection must be established before any data can be sent, that is when the billing clock starts running. If there is no connection, there is no charge.

Ironically, maintaining billing records is very expensive. If a telephone company were to adopt a flat monthly rate with unlimited calls and no billing or record keeping, it would probably save a great deal of money, despite the increase in calls this policy would generate. Political, regulatory and other factors, however, weigh against doing this.

Interestingly, flat-rate service does exist in other sectors. For example, cable TV is billed at a flat monthly rate, regardless of how many programs you watch. It could have been designed with pay-per-view as the basic concept, but it was not, partly because of the high cost of billing (and given the quality of most television programs, the embarrassment factor cannot be discounted entirely). Also, many amusement parks charge a daily admission fee with unlimited rides, in contrast to carnivals, which charge per ride.

That said, it should not be surprising that all the networks designed by the telephone industry have had connection-oriented subnets. What is surprising is that the Internet is also leaning in that direction, in order to provide better audio and video service. For now, let us examine some connection-oriented networks.

X.25 and Frame Relay

Our first example of a connection-oriented network is X.25, which was the first public data network. It was deployed in the 1970s, when telephone service was a monopoly everywhere and the telephone company in each country expected to run that country's single data network itself.

To use X.25, a computer first established a connection to the remote computer, that is, placed the equivalent of a telephone call. This connection was given a connection number to be used in data transfer packets (because many connections could be open at the same time).

Data packets were very simple, consisting of a 3-byte header and up to 128 bytes of data. The header consists of a 12-bit connection number, a packet sequence number, an acknowledgement number and a few miscellaneous bits.
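
To make the header concrete, the following Python fragment packs such a 3-byte header. The exact bit layout is invented for illustration (real X.25 distributes these fields across the bytes differently); only the field widths follow the description above.

    def pack_header(conn, p_s, p_r, misc=0):
        """Pack a 12-bit connection number, 3-bit send and receive sequence
        numbers, and 6 miscellaneous bits into 3 bytes (24 bits total)."""
        assert conn < 4096 and p_s < 8 and p_r < 8 and misc < 64
        word = (conn << 12) | (p_s << 9) | (p_r << 6) | misc
        return word.to_bytes(3, 'big')

    print(pack_header(conn=7, p_s=1, p_r=2).hex())   # -> '007280'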

X.25 networks operated for almost ten years with mixed results.

In the 1980s, X.25 networks were largely replaced by a new kind of network called Frame Relay. This is a connection-oriented network with no error control and no flow control. Because it is connection-oriented, packets are delivered in order (if they are delivered at all). The properties of in-order delivery and the absence of error and flow control make Frame Relay akin to a wide-area LAN.


Its most important application is interconnecting LANs at multiple offices of a company. Frame Relay enjoyed modest success, and it is still in use in places even now.

Asynchronous Transfer Mode

Another, and far more important, kind of connection-oriented network is ATM (Asynchronous Transfer Mode). The reason for the somewhat strange name is that in the telephone system, most transmission is synchronous (closely tied to a clock), and ATM is not.

ATM was designed in the early 1990s and launched amid incredible hype (Ginsburg, 1996; Goralski, 1995; Ibe, 1997; Kimnaras et al., 1994; and Stallings, 2000). ATM was going to solve all the world's networking and telecommunications problems by merging voice, data, cable television, telex, telegraph, carrier pigeons, tin cans connected by strings, drums, smoke signals and everything else into a single integrated system that could provide all services for all needs. That did not happen.

In large part, the problems were similar to those described earlier concerning OSI: ill-timed arrival, together with misguided technology, implementation and politics. Having just knocked out the telephone companies in the first round, much of the Internet community saw ATM as round two: the Internet versus the telcos again. But it was not really so, and this time around even diehard datagram fans realized that the Internet's quality of service left much to be desired. To make a long story short, ATM was much more successful than OSI, and it is now used deep within the telephone system, often for the transport of IP packets. Because it is today used mostly by carriers for internal transport, users are often unaware of its existence, but it is definitely alive and well.

ATM virtual circuits

Since ATM networks are connection-oriented, sending data requires first sending a packet to set up the connection.

As the setup message follows its path through the subnet, all the switches on the path create an entry in their internal tables noting the existence of the connection and reserving whatever resources the connection needs. Connections are often called virtual circuits, in analogy with the physical circuits used in the telephone system.

Most ATM networks also support permanent virtual circuits, which are standing connections between two (distant) hosts. They are similar to leased lines in the telephone world. Each connection, temporary or permanent, has a unique connection identifier. Once a connection has been established, either side can begin transmitting data.

The basic idea behind ATM is to transmit all information in small, fixed-size packets called cells. The cells are 53 bytes long, of which 5 bytes are header and 48 bytes are payload. The header identifies the connection, so the sending and receiving hosts and all the intermediate switches can tell which cells belong to which connections. This information allows each switch to know how to route each incoming cell.

Cell switching is done in hardware, at high speed. In fact, the main argument for having fixed-size cells is that it is easy to build hardware switches to handle small, fixed-length cells. Variable-length IP packets have to be routed in software, which is a slower process. Another advantage of ATM is that the hardware can be set up to copy one incoming cell to multiple output lines, a property needed for handling a television program that is being broadcast to many receivers. Finally, small cells do not block any line for very long, which makes it much easier to guarantee quality of service.
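
Segmentation into cells is simple enough to sketch. The Python fragment below uses a toy 5-byte header carrying only a connection identifier; a real ATM header also carries the VPI/VCI split, payload type, cell loss priority, and the HEC byte.

    import struct

    CELL_PAYLOAD = 48

    def segment(vci, data):
        """Split 'data' into 53-byte cells: 5-byte header + 48-byte payload."""
        cells = []
        for i in range(0, len(data), CELL_PAYLOAD):
            chunk = data[i:i + CELL_PAYLOAD]
            chunk = chunk + bytes(CELL_PAYLOAD - len(chunk))  # pad last cell
            header = struct.pack('!IB', vci, 0)               # toy 5-byte header
            cells.append(header + chunk)
        return cells

    cells = segment(vci=42, data=bytes(100))
    print(len(cells), all(len(c) == 53 for c in cells))       # -> 3 True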

All cells follow the same route to the destination. Cell delivery is not guaranteed, but cell order is: if cells 1 and 2 are sent in that order, they must arrive in that order, never first 2, then 1. One or both of them can, however, be lost along the way. It is up to higher protocol levels to recover from lost cells. Note that although this guarantee is not perfect, it is better than what the Internet provides. There, packets can not only be lost, but delivered out of order as well. ATM, in contrast, guarantees that cells are never delivered out of order.

ATM networks are organized like traditional WANs, with lines and switches (routers).


The most common speeds for ATM networks are 155 Mbps and 622 Mbps, although higher speeds are also supported. The 155-Mbps speed was chosen because this is about what is needed to transmit high-definition television. The exact choice of 155.52 Mbps was made for compatibility with AT&T's SONET transmission system.

The physical layer is concerned with the physical medium: voltages, bit timing and various other aspects. ATM does not prescribe a particular set of rules here; it merely says that ATM cells may be sent on a wire or fiber by themselves, but they may also be packaged inside the payload of other carrier systems. In other words, ATM has been designed to be independent of the transmission medium.

The ATM layer deals with cells and cell transport. It defines the layout of a cell and tells what the header fields mean. It also deals with the establishment and release of virtual circuits. Congestion control is located here as well.

Because most applications do not want to work directly with cells (although some may), a layer above the ATM layer has been defined to allow users to send packets larger than a cell. The ATM interface segments these packets, transmits the cells individually and reassembles them at the other end. This layer is the AAL (ATM Adaptation Layer).

Unlike the earlier two-dimensional reference models, the ATM model is defined as being three-dimensional. The user plane deals with data transport, flow control, error correction and other user functions. In contrast, the control plane is concerned with connection management. The layer and plane management functions relate to resource management and interlayer coordination.

The physical layer and the AAL are each divided into two sublayers: one at the bottom that does the work, and a convergence sublayer on top that provides the proper interface to the layer above.

The PMD (Physical Medium Dependent) sublayer interfaces with the actual cable. It moves the bits on and off the cable and handles the bit timing, i.e., the time between bits on transmission. This layer will be different for different carriers and cables.

The other sublayer of the physical layer is the TC (Transmission Convergence) sublayer. When cells are transmitted, the TC sublayer sends them as a string of bits to the PMD sublayer. Doing this is easy. At the other end, the TC sublayer receives a stream of incoming bits from the PMD sublayer. Its job is to convert this bit stream into a cell stream for the ATM layer. It handles everything related to telling where cells begin and end in the bit stream. In the ATM model, this functionality is in the physical layer. In the OSI model, and in most other networks, the job of framing, that is, turning a raw bit stream into a sequence of frames or cells, is the data link layer's task.

As mentioned earlier, the ATM layer manages cells, including their generation and transport. Most of the interesting aspects of ATM are located here. It is a mixture of the OSI data link layer and network layer; it is not split into sublayers.

The AAL layer is split into a SAR (Segmentation and Reassembly) sublayer and a CS (Convergence Sublayer). The lower sublayer breaks packets up into cells on the transmission side and puts them back together again at the destination. The upper sublayer makes it possible to have ATM systems offer different kinds of services to different applications (e.g., file transfer and video on demand have different requirements concerning error handling, timing, etc.).

However, since ATM has a substantial installed base, it is likely to remain in use for some years to come.


Permanent and switched virtual circuits in ATM, Frame Relay and X.25

Switched virtual circuits (SVCs) are generally set up on a per-call basis and are disconnected when the call is terminated; however, a permanent virtual circuit (PVC) can be established as an option to provide a dedicated circuit link between two facilities. PVC configuration is usually preconfigured by the service provider. Unlike SVCs, PVCs are seldom broken or disconnected.

A switched virtual circuit (SVC) is a virtual circuit that is dynamically established on demand and is torn down when transmission is complete, for example after a phone call or a file download. SVCs are used in situations where data transmission is sporadic and/or not always between the same data terminal equipment (DTE) endpoints.

A permanent virtual circuit (PVC) is a virtual circuit established for repeated or continuous use between the same DTEs. In a PVC, the long-term association is identical to the data transfer phase of a virtual call. Permanent virtual circuits eliminate the need for repeated call set-up and clearing.

Frame Relay is typically used to provide PVCs. ATM provides both switched virtual connections and permanent virtual connections, as they are called in ATM terminology. X.25 provides both virtual calls and PVCs, although not all X.25 service providers or DTE implementations support PVCs, as their use was much less common than SVCs.

X.25

X.25 is a packet-switched WAN protocol. It defines the exchange of data and control information between a user device, the Data Terminal Equipment (DTE), and a network node, the Data Circuit-terminating Equipment (DCE). An X.25 network comprises physical links such as packet-switching exchange (PSE) nodes as the networking hardware, together with leased lines and telephone or ISDN connections. Its distinctive strength is its capacity to work effectively over any type of system connected to the network. X.25, although replaced by superior technology, continues to be in use. It uses a connection-oriented service that enables data packets to be transmitted in an orderly manner.


Network congestion

In data networking and queueing theory, network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either only to small increases in network throughput, or to an actual reduction in network throughput.

Network protocols which use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load has been reduced to a level which would not normally have induced network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control and network congestion avoidance techniques to try to avoid congestive collapse. These include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) local area networking over existing home wires (power lines, phone lines and coaxial cables).

RFC 2914 addresses the subject of congestion control in detail.
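
As an illustration of the first technique listed above, binary exponential backoff in the style of classic Ethernet can be sketched as follows; the constants 10 and 16 are the classic Ethernet limits, and the function name is invented.

    import random

    def backoff_slots(collisions):
        """After the n-th successive collision, wait a random number of slot
        times drawn from 0 .. 2**min(n, 10) - 1; give up after 16 attempts."""
        if collisions > 16:
            raise RuntimeError("too many collisions; abort transmission")
        return random.randrange(2 ** min(collisions, 10))

    for n in range(1, 6):
        print(n, backoff_slots(n))    # the waiting window doubles each time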

Network capacity

The fundamental problem is that all network resources are limited, including router processing time and link throughput. For example:

· Today's (2006) wireless LAN effective bandwidth throughput (15-100 Mbit/s) is easily filled by a single personal computer.
· Even on fast computer networks (e.g. 1 Gbit/s), the backbone can easily be congested by a few servers and client PCs.
· Because P2P scales very well, file transmissions by P2P have no problem filling an uplink or some other network bottleneck, particularly when nearby peers are preferred over distant peers.
· Denial-of-service attacks by botnets are capable of filling even the largest Internet backbone network links (40 Gbit/s as of 2007), generating large-scale network congestion.

Congestive collapse

Congestive collapse (or congestion collapse) is a condition which a packet-switched computer network can reach when little or no useful communication is happening due to congestion. Congestion collapse generally occurs at choke points in the network, where the total incoming traffic to a node exceeds the outgoing bandwidth. Connection points between a local area network and a wide area network are the most likely choke points.

When a network is in such a condition, it has settled (under overload) into a stable state where traffic demand is high but little useful throughput is available, there are high levels of packet delay and loss (caused by routers discarding packets because their output queues are too full), and general quality of service is extremely poor.

History

Congestion collapse was identified as a possible problem as far back as 1984 (RFC 896, dated 6 January). It was first observed on the early Internet in October 1986, when the NSFnet phase-I backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s, and this continued to occur until end nodes started implementing Van Jacobson's congestion control between 1987 and 1988.

Cause

When more packets were sent than could be handled by intermediate routers, the intermediate routers discarded many packets, expecting the end points of the network to retransmit the information. However, early TCP implementations had very bad retransmission behavior. When this packet loss occurred, the end points sent extra packets that repeated the information lost, doubling the data rate sent, exactly the opposite of what should be done during congestion. This pushed the entire network into a 'congestion collapse' where most packets were lost and the resultant throughput was negligible.

Congestion control


Congestion control concerns controlling traffic entry into a telecommunications network, so as to avoid congestive collapse by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks, and by taking resource-reducing steps, such as reducing the rate of sending packets. It should not be confused with flow control, which prevents the sender from overwhelming the receiver.

Theory of congestion control

The modern theory of congestion control was pioneered by Frank Kelly, who applied microeconomic theory and convex optimization theory to describe how individuals controlling their own rates can interact to achieve an "optimal" network-wide rate allocation. Examples of "optimal" rate allocation are max-min fair allocation and Kelly's suggestion of proportional fair allocation, although many others are possible.

The mathematical expression for optimal rate allocation is as follows. Let $x_r$ be the rate of flow $r$, $c_l$ be the capacity of link $l$, and $A_{lr}$ be 1 if flow $r$ uses link $l$ and 0 otherwise. Let $x$, $c$ and $A$ be the corresponding vectors and matrix. Let $U(x)$ be an increasing, strictly concave function, called the utility, which measures how much benefit a user obtains by transmitting at rate $x$. The optimal rate allocation then satisfies

$$\max_x \sum_r U(x_r) \quad \text{such that} \quad A x \le c.$$

The Lagrange dual of this problem decouples, so that each flow sets its own rate, based only on a "price" signalled by the network. Each link capacity imposes a constraint, which gives rise to a Lagrange multiplier $p_l$. The sum of these Lagrange multipliers, $q_r = \sum_l A_{lr} p_l$, is the price to which the flow responds.

Congestion control then becomes a distributed optimisation algorithm for solving the above problem. Many current congestion control algorithms can be modelled in this framework, with $p_l$ being either the loss probability or the queueing delay at link $l$.

A major weakness of this model is that it assumes all flows observe the same price, while sliding window flow control causes "burstiness" which causes different flows to observe different loss or delay at a given link.
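
As a concrete illustration, the following toy primal-dual iteration solves a tiny instance of this problem for logarithmic utilities $U(x) = \log x$ (which yields proportional fairness): each flow responds to its price with $x_r = 1/q_r$ (the maximizer of $\log x - q_r x$), and each link adjusts its price by a (sub)gradient step on the excess demand. The topology, step size and all names are invented for the example.

    import numpy as np

    A = np.array([[1, 1, 0],      # link 0 is used by flows 0 and 1
                  [0, 1, 1]])     # link 1 is used by flows 1 and 2
    c = np.array([1.0, 2.0])      # link capacities
    p = np.ones(2)                # link prices (Lagrange multipliers)
    step = 0.01

    for _ in range(20000):
        q = A.T @ p               # price seen by each flow: q_r = sum_l A_lr p_l
        x = 1.0 / q               # log-utility flows respond with x_r = 1/q_r
        p = np.maximum(p + step * (A @ x - c), 1e-9)  # dual gradient step

    print(np.round(x, 3))         # rates; the per-link loads A @ x approach c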

Classification of congestion control algorithms


There are many ways to classify congestion control algorithms:

· By the type and amount of feedback received from the network: loss; delay; single-bit or multi-bit explicit signals.
· By incremental deployability on the current Internet: only the sender needs modification; sender and receiver need modification; only the router needs modification; sender, receiver and routers need modification.
· By the aspect of performance it aims to improve: high bandwidth-delay product networks; lossy links; fairness; advantage to short flows; variable-rate links.
· By the fairness criterion it uses: max-min, proportional, "minimum potential delay".

Avoidance

The prevention of network congestion and collapse requires two major components:

1. A mechanism in routers to reorder or drop packets under overload,
2. End-to-end flow control mechanisms designed into the end points, which respond to congestion and behave appropriately.

The correct end point behaviour is usually still to repeat dropped information, but progressively slow the rate at which information is repeated. Provided all end points do this, the congestion lifts, good use of the network occurs, and the end points all get a fair share of the available bandwidth. Other strategies such as slow-start ensure that new connections don't overwhelm the router before congestion detection can kick in.

The most common router mechanisms used to prevent congestive collapses are fair queueing and other scheduling algorithms, and random early detection (RED), where packets are randomly dropped proactively, triggering the end points to slow transmission before congestion collapse actually occurs. Fair queueing is most useful in routers at choke points with a small number of connections passing through them. Larger routers must rely on RED.

Some end-to-end protocols are better behaved under congested conditions than others. TCP is perhaps the best behaved. The first TCP implementations to handle congestion well were developed in 1984, but it was not until Van Jacobson's inclusion of an open source solution in the Berkeley Software Distribution UNIX ("BSD") in 1988 that good TCP implementations became widespread.

UDP does not, in itself, have any congestion control mechanism. Protocols built atop UDP must handle congestion in their own way. Protocols atop UDP which transmit at a fixed rate, independent of congestion, can be troublesome. Real-time streaming protocols, including many Voice over IP protocols, have this property. Thus, special measures, such as quality-of-service routing, must be taken to keep packets from being dropped from streams.

In general, congestion in pure datagram networks must be kept out at the periphery of the network, where the mechanisms described above can handle it. Congestion in the Internet backbone is very difficult to deal with. Fortunately, cheap fiber-optic lines have reduced costs in the Internet backbone. The backbone can thus be provisioned with enough bandwidth to keep congestion at the periphery.

Practical network congestion avoidance

Implementations of connection-oriented protocols, such as the widely used TCP protocol, generally watch for packet errors, losses, or delays (see Quality of Service) in order to adjust the transmit speed. There are many different network congestion avoidance processes, since there are a number of different trade-offs available. [1]

TCP/IP congestion avoidance


The TCP congestion avoidance algorithm is the primary basis for congestion control in the Internet. [2] [3] [4] [5] [6]

Problems occur when many concurrent TCP flows experience port queue buffer tail-drops. Then TCP's automatic congestion avoidance is not enough. All flows that experience port queue buffer tail-drop will begin a TCP retrain at the same moment; this is called TCP global synchronization.

Active Queue Management (AQM)

Random early detection

One solution is to use random early detection (RED) on the network equipment's port queue buffers. [7] [8] On network equipment ports with more than one queue buffer, weighted random early detection (WRED) can be used if available.

RED indirectly signals the sender and receiver by deleting some packets, e.g. once the average queue buffer length exceeds a lower threshold (say, 50% full), and deletes linearly more packets (or, better according to the original paper, cubically more [9]) as the average rises toward an upper threshold (say, 100% full). The average queue buffer length is computed over 1-second intervals.
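
The dropping rule just described can be sketched as follows; the thresholds, maximum drop probability and averaging weight below are illustrative values, not taken from the RED paper or any standard.

    import random

    class Red:
        def __init__(self, min_th=50, max_th=100, max_p=0.1, weight=0.002):
            self.min_th, self.max_th = min_th, max_th
            self.max_p, self.weight = max_p, weight
            self.avg = 0.0

        def should_drop(self, queue_len):
            # Exponentially weighted average of the instantaneous queue length.
            self.avg += self.weight * (queue_len - self.avg)
            if self.avg < self.min_th:
                return False
            if self.avg >= self.max_th:
                return True
            # Drop probability rises linearly between the two thresholds.
            frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
            return random.random() < self.max_p * frac

    red = Red()
    drops = sum(red.should_drop(90) for _ in range(10000))
    print(drops)   # roughly max_p * 0.8 of calls once the average settles at 90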

Robust random early detection (RRED)

The Robust Random Early Detection (RRED) algorithm was proposed to improve TCP throughput against denial-of-service (DoS) attacks, particularly low-rate denial-of-service (LDoS) attacks. Experiments have confirmed that existing RED-like algorithms are notably vulnerable to low-rate denial-of-service (LDoS) attacks, due to the oscillating TCP queue size caused by the attacks [10]. The RRED algorithm can significantly improve the performance of TCP under low-rate denial-of-service attacks [10].


Flowbased-RED/WRED

Some network equipment are equipped with ports that can follow and measure each flow

(flowbased-RED/WRED) and are hereby able to signal to a too big bandwidth flow according to

some QoS policy. A policy could divide the bandwidth among all flows by some criteria.

IP ECN

Another approach is to use IP ECN (Explicit Congestion Notification). [11] ECN is only used when the two hosts signal that they want to use it. With this method, an ECN bit is used to signal that there is explicit congestion. This is better than the indirect packet-delete congestion notification performed by the RED/WRED algorithms, but it requires explicit support by both hosts to be effective. [12] Some outdated or buggy network equipment drops packets with the ECN bit set, rather than ignoring the bit. More information on the status of ECN, including the version required for Cisco IOS, is maintained by Sally Floyd, [7] one of the authors of ECN.

When a router receives a packet marked as ECN-capable and anticipates (using RED) congestion, it sets an ECN flag, notifying the sender of congestion. The sender then ought to decrease its transmission bandwidth, e.g. by decreasing the TCP window size (sending rate) or by other means.
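
The router-side behaviour just described amounts to a small decision rule: mark instead of drop when the packet is ECN-capable. The sketch below uses simplified codepoints and an invented packet representation.

    NOT_ECT, ECT, CE = 0, 1, 3             # simplified ECN codepoints

    def on_congestion(packet):
        """Called when AQM (e.g. RED) signals incipient congestion."""
        if packet["ecn"] == ECT:
            packet["ecn"] = CE             # mark rather than drop
            return packet
        return None                        # drop non-ECN-capable packets

    print(on_congestion({"ecn": ECT}))     # -> {'ecn': 3}
    print(on_congestion({"ecn": NOT_ECT})) # -> None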

Cisco AQM: Dynamic buffer limiting (DBL)


Cisco has taken a step further in their Catalyst 4000 series with engines IV and V. These engines can classify all flows as either "aggressive" (bad) or "adaptive" (good). This ensures that no flows fill the port queues for a long time. DBL can utilize IP ECN instead of packet-delete signalling. [13] [14]

TCP Window Shaping

Congestion avoidance can also be achieved efficiently by reducing the amount of traffic flowing into a network. When an application requests a large file, graphic or web page, it usually advertises a "window" of between 32K and 64K. This results in the server sending a full window of data (assuming the file is larger than the window). When many applications simultaneously request downloads, this data creates a congestion point at an upstream provider by flooding the queue much faster than it can be emptied. By using a device to reduce the window advertisement, the remote servers will send less data, thus reducing the congestion and allowing traffic to flow more freely. This technique can reduce congestion in a network by a factor of 40.

Side effects of congestive collapse avoidance

Radio links

The protocols that avoid congestive collapse are often based on the idea that data loss on the Internet is caused by congestion. This is true in nearly all cases; errors during transmission are rare on today's fiber-based Internet. However, this causes WiFi, 3G and other networks with a radio layer to have poor throughput in some cases, since wireless networks are susceptible to data loss due to interference. TCP connections running over a radio-based physical layer see the data loss and tend to believe that congestion is occurring when it is not, and erroneously reduce the data rate sent.

Short-lived connections

The slow-start protocol performs badly for short-lived connections. Older web browsers would create many consecutive short-lived connections to the web server, and would open and close the connection for each file requested. This kept most connections in the slow-start mode, which resulted in poor response time.

To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server. However, the initial performance can be poor, and many connections never get out of the slow-start regime, significantly increasing latency.