Network Notes


Transcript of Network Notes

Page 1: Network Notes

http://www.e-tutes.com/

Lesson 1: Networking Basics

Lesson 2: OSI Reference Model

Lesson 3: Introduction to TCP/IP

Lesson 4: LAN Basics

Lesson 5: Understanding Switching

Lesson 6: WAN Basics

Lesson 7: Understanding Routing

Lesson 8: What Is Layer 3 Switching?

Lesson 9: Understanding Virtual LANs

Lesson 10: Understanding Quality of Service

Lesson 11: Security Basics

Lesson 12: Understanding Virtual Private Networks

Lesson 13: Voice Technology Basics

Lesson 14: Network Management Basics

Lesson 15: The Internet

Lesson 1: Networking Basics


This lesson covers the very basics of networking. We'll start with a little history that describes how the networking industry evolved. We'll then move on to a section that describes how a LAN is built, covering the necessary components (such as NICs and cables). We then cover LAN topologies. Finally, we'll discuss the key networking devices: hubs, bridges, switches, and routers.

This module is an overview only. It will familiarize you with much of the vocabulary you hear with regard to networking. Some of these concepts are covered in more detail in later lessons.

The Agenda

- Networking History

- How a LAN Is Built

- LAN Topologies

- LAN/WAN Devices

Networking History

Early networks

From a historical perspective, electronic communication has actually been around a long time, beginning with Samuel Morse and the telegraph. He sent the first telegraph message on May 24, 1844, from Washington, DC to Baltimore, MD, 37 miles away. The message? "What hath God wrought."

About 30 years later, Alexander Graham Bell invented the telephone, beating a competitor to the patent office by only a couple of hours on Valentine's Day in 1876. This led to the development of the ultimate analog network: the telephone system.

The first bit-oriented device was the printing telegraph, developed by Emile Baudot. By bit-oriented we mean the device sent pulses of electricity that were either positive or had no voltage at all. These machines did not use Morse code. Baudot's five-level code sent five pulses down the wire for each character transmitted. The machines did the encoding and decoding, eliminating the need for operators at both ends of the wires. For the first time, electronic messages could be sent by anyone.

Telephone Network


But it's really the telephone network that has had the greatest impact on how businesses communicate and connect today. Until its breakup in 1984, AT&T (the American Telephone and Telegraph Company, successor to the Bell Telephone Company) owned the telephone network from end to end. It represented a phenomenal network, the largest then and still the largest today.

Let's take a look at some additional developments in the communications industry that had a direct impact on the networking industry today.

Developments in Communication

In 1966, an individual named Carter invented a special device that attached to a telephone receiver, allowing construction workers to talk over the telephone from a two-way radio.

Bell Telephone had a problem with this and sued, and eventually lost.

As a result, in 1975 the Federal Communications Commission ruled that third-party devices could attach to the phone system if they met certain specifications. Those specifications were approved in 1977 and became known as FCC Part 68. In fact, years ago you could look at the underside of a telephone not manufactured by Bell and see the "Part 68" stamp of approval.

This ruling eventually led to the breakup of American Telephone and Telegraph in 1984, creating seven regional Bell operating companies, with local brands such as Pacific Bell, Bell Atlantic, BellSouth, and Mountain Bell. The breakup of AT&T in 1984 opened the door for other competitors in the telecommunications market, companies like Microwave Communications, Inc. (MCI) and Sprint. Today, when you make a phone call across the country, it may go through three or four different carrier networks in order to make the connection.


Now, let's take a look at what was happening in the computer industry at about the same time.

1960s-1970s Communication

In the 1960s and 1970s, traditional computer communications centered around the mainframe host. The mainframe contained all the applications needed by the users, as well as file management and even printing. This centralized computing environment used low-speed access lines that tied terminals to the host. These large mainframes used digital signals, pulses of electricity representing zeros and ones (what is called binary), to pass information from the terminals to the host. The information processing in the host was also all digital.

Problems faced in communication

This brought about a problem. The telephone industry wanted to use computers to switch calls faster, and the computer industry wanted to connect remote users to the mainframe using the telephone service. But the telephone network speaks analog and computers speak digital. Let's take a closer look at this problem.

Digital signals are seen as ones and zeros; the signal is either on or off. Analog signals, by contrast, are like audio tones, for example the high-pitched squeal you hear when you accidentally call a fax machine. So, in order for the computer world to use the services of the telephone system, a conversion of the signal had to occur.


The solution

The solution: a modulator/demodulator, or "modem." In sending information from a desktop computer to a host using POTS (plain old telephone service), the modem takes the digital signals from the computer and modulates them into analog format to go through the telephone system. On the far side, the analog signal goes through another modem, which converts it back to digital format to be processed by the host computer. This helped solve some of the distance problems, at least to a certain extent.
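The modulate/demodulate round trip can be sketched in code. The following is a toy frequency-shift-keying (FSK) model, not how any particular modem actually works: each bit becomes a short tone, one frequency per bit value, and the receiver correlates against both tones to decide which was sent. The frequencies, sample rate, and samples-per-bit values are illustrative assumptions.

```python
import math

# Toy FSK modem sketch: each bit becomes a short audio tone (one frequency
# for 0, another for 1). All parameters here are illustrative choices.
FREQ_0 = 1070.0          # Hz used for a 0 bit
FREQ_1 = 1270.0          # Hz used for a 1 bit
SAMPLE_RATE = 8000       # samples per second
SAMPLES_PER_BIT = 40     # 5 ms of tone per bit

def modulate(bits):
    """Turn a bit string like '1011' into a list of 'analog' samples."""
    samples = []
    for bit in bits:
        freq = FREQ_1 if bit == "1" else FREQ_0
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

def demodulate(samples):
    """Recover bits by checking which tone correlates best per bit period."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        chunk = samples[i:i + SAMPLES_PER_BIT]
        score = {}
        for label, freq in (("0", FREQ_0), ("1", FREQ_1)):
            ref = [math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                   for n in range(len(chunk))]
            score[label] = abs(sum(a * b for a, b in zip(chunk, ref)))
        bits.append(max(score, key=score.get))
    return "".join(bits)

print(demodulate(modulate("10110010")))  # round-trips the original bits
```

A real modem adds framing, error handling, and far more efficient modulation schemes; the point here is only the digital-to-analog-and-back conversion the text describes.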

Multiplexing or muxing

Another problem is how to connect multiple terminals to a single cable. The technology solution is multiplexing, or muxing. With multiplexing we can take multiple remote terminals and connect them back to the single mainframe at our central site, all over a single communications channel, a single line.

So we have some new terminology here in our diagram. The link back to the central site is referred to as a broadband connection, because whenever we talk about broadband we're talking about carrying multiple communications channels over a single communication pipe. In other words, we have multiple communication channels, as in four terminals at the remote site, going back to a single central site over one common channel: four communication channels, one for each remote terminal, over a single physical path.

Now out at the end stations, at the terminals, you see we have the term baseband. What we mean by baseband is that, in our example, between the terminal and the multiplexer we have a single communication channel per wire; each of those wires leading into the multiplexer has a dedicated channel, a dedicated path.

The function of the multiplexer is to take each of those baseband paths and allocate time slots, one per terminal, across the common broadband connection between the remote terminals and the central mainframe site. On the other side, the multiplexer puts the pieces back together for delivery to the mainframe.

So muxing is our fundamental concept here. Let's look at the different ways to do our muxing.
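The time-slot idea above can be sketched as a small time-division multiplexing example. The four-terminal setup and one-byte-per-slot framing are illustrative assumptions, not from the text:

```python
# Minimal time-division multiplexing sketch: several terminal streams share
# one channel by taking fixed, repeating time slots.
def mux(streams):
    """Interleave equal-length streams into one channel, slot by slot."""
    channel = []
    for frame in zip(*streams):      # one item per terminal per frame
        channel.extend(frame)
    return channel

def demux(channel, n_streams):
    """Reassemble each terminal's stream from its repeating slot position."""
    return [channel[i::n_streams] for i in range(n_streams)]

terminals = [list("AAAA"), list("BBBB"), list("CCCC"), list("DDDD")]
line = mux(terminals)                # ['A', 'B', 'C', 'D', 'A', 'B', ...]
assert demux(line, 4) == terminals   # every terminal's data survives intact
```

Each terminal gets every fourth slot on the shared line, which is exactly the multiplexer's job as described: allocate the slots on one side, put the pieces back together on the other.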

Baseband and broadband

You see the terms again here: baseband and broadband.

In the case of baseband, we said we had a single communications channel per physical path. An example of baseband technology you're probably familiar with is Ethernet; most implementations of Ethernet use baseband technology, with a single communications channel going over a single physical path, a single physical cable.

On the other hand, on the bottom part of our diagram you see a reference to broadband, and the analogy here would be multiple trains inside a single tunnel. In the real world, the broadband technology we're probably most familiar with is cable TV. With cable TV we have multiple channels coming in over a single cable: we plug one cable into the back of the TV, and over that single cable we can get 12, 20, 40, 60, or more channels. So cable TV is a good example of broadband.

Given the addition of multiplexing and the use of the modem, let's see how we can grow our network.

How networks are growing

Example:

Using all the technology available, companies were able to team up with the phone

company and tie branch offices to the headquarters. The speeds of data transfer were often slow and were still dependent on the speed and capacity of the host computers at the headquarters site.

The phone company was also able to offer leased-line and dial-up options. With leased lines, companies paid for a continuous connection to the host computer. Companies using dial-up connections paid only for time used. Dial-up connections were perfect for the small office or branch.

Birth of the personal computer


The birth of the personal computer in 1981 really fueled the explosion of the networking marketplace. No longer were people dependent on a mainframe for

applications, file storage, processing, or printing. The PC gave users incredible freedom and power.

The Internet 1970's - 1980's

The '70s and '80s saw the beginnings of the Internet. The Internet as we know it today began as the ARPANET, the Advanced Research Projects Agency Network, built by a division of the Department of Defense starting in the late 1960s through grant-funded research by universities and companies. The first actual packet-switched network was built by BBN. It was used by universities and the federal government to exchange information and research. Many local area networks connected to the ARPANET with TCP/IP. TCP/IP was developed in 1974 and stands for Transmission Control Protocol/Internet Protocol. The ARPANET was shut down in 1990 due to newer network technology and the need for greater bandwidth on the backbone.

In the mid '80s the NSFNET, the National Science Foundation Network, was developed. This network relied on supercomputers in San Diego, Boulder, Champaign, Pittsburgh, Ithaca, and Princeton. Each of these six supercomputers had a microcomputer tied to it which spoke TCP/IP, and the microcomputer handled all of the access to the backbone of the Internet. Essentially this network was overloaded from the word "go."

Further developments in networking led to the design of the ANSNET, the Advanced Networks and Services Network. ANSNET was a joint effort by MCI, Merit, and IBM specifically for commercial purposes. This large network was sold to AOL in 1995. The National Science Foundation then awarded contracts to four major network access providers: Pacific Bell in San Francisco, Ameritech in Chicago, MFS in Washington, DC, and Sprint in New York City.

By the mid '80s the collection of networks began to be known as the "Internet" in university circles. TCP/IP remains the glue that holds it together. In January 1992 the Internet Society was formed, a misleading name since the Internet is really a place of anarchy, controlled by those who have the fastest lines and can give customers the greatest service today.

The primary Internet-related applications used today include email, news retrieval, remote login, file transfer, and World Wide Web access and development.

1990s Global Internetworking

With the growth and development of the Internet came the need for speed, and bandwidth. Companies want to take advantage of the ability to move information around the world quickly. This information comes in the form of voice, data, and video: large files which increase the demands on the network. In the future, global internetworking will provide an environment for emerging applications that will require even greater amounts of bandwidth. If you doubt the future of global internetworking, consider this: the Internet is doubling in size about every 11 months.

How a LAN Is Built

In the previous section, we discussed how networking evolved and some of the problems involved in the transmission of data, such as contention and connecting multiple terminals. In this section, some of the basic elements needed to build local area networks (LANs) will be described.

LAN (Local Area Network)

The term local area network, or LAN, describes all the devices that communicate together: printers, file servers, computers, and perhaps even a host computer. However, the LAN is constrained by distance. The transmission technologies used in LAN applications do not operate at full speed over long distances. LAN distances are in the range of 100 meters (m) to 3 kilometers (km). This range can change as new technologies emerge.

For systems from different manufacturers to interoperate, be it a printer, a PC, or a file server, they must be developed and manufactured according to industry-wide protocols and standards.

More details about protocols and standards will be given later, but for now, just keep in mind they represent rules that govern how devices on a network exchange information. These rules are developed by industry-wide special interest groups

(SIGs) and standards committees such as the Institute of Electrical and Electronics Engineers (IEEE).

Most of the network administrator's tasks deal with LANs. Major characteristics of LANs are:

- The network operates within a building or floor of a building. The geographic trend, as ever more powerful desktop devices run more powerful applications, is toward less area per LAN.

- LANs provide multiple connected desktop devices (usually PCs) with access to high-bandwidth media.

- An enterprise purchases the media and connections used in the LAN; the enterprise can privately control the LAN as it chooses.

- LANs rarely shut down or restrict access to connected workstations; local services are usually always available.

- By definition, the LAN connects physically adjacent devices on the media.

So let's look at the components of a LAN.

Components of LAN

- Network operating system (NOS)

In order for computers to be able to communicate with each other, they must first have the networking software that tells them how to do so. Without the software, the system functions simply as a "standalone," unable to use any of the resources on the network. Network operating software may be installed at the factory, eliminating the need for you to purchase it (for example, AppleTalk), or you may install it yourself.


- Network interface card (NIC)

In addition to network operating software, each network device must also have a network interface card. These cards today are also referred to as adapters, as in "Ethernet adapter card" or "Token Ring adapter card."

The NIC amplifies electronic signals, which are generally very weak within the computer system itself. The NIC is also responsible for packaging data for transmission and for controlling access to the network cable. When the data is packaged properly and the timing is right, the NIC pushes the data stream onto the cable.

The NIC also provides the physical connection between the computer and the transmission cable (also called the "media"). This connection is made through the connector port. Ethernet, Token Ring, and FDDI are examples of LAN technologies that run over such media.

- Wiring Hub

In order to have a network, you must have at least two devices that communicate with each other. In this simple model, it is a computer and a printer. The printer also has a NIC installed (for example, an HP JetDirect card), which in turn is plugged into a wiring hub. The computer system is also plugged into the hub, which facilitates communication between the two devices.

Additional components (such as a server, a few more PCs, and a scanner) may be connected to the hub. With this connection, all network components would have access to all other network components.

The benefit of building this network is that by sharing resources a company can afford higher quality components. For example, instead of providing an inkjet printer for every PC, a company may purchase a laser printer (which is faster, higher

capacity, and higher quality than the inkjet) to attach to a network. Then, all computers on that network have access to the higher quality printer.

- Cables or Transmission Media

The wires connecting the various devices together are referred to as cables.

- Cable prices range from inexpensive to very costly, and cabling can represent a significant portion of the cost of the network itself.


- Cables are one example of transmission media. Media are the various physical environments through which transmission signals pass. Common network media include twisted-pair, coaxial cable, fiber-optic cable, and the atmosphere (through which microwave, laser, and infrared transmission occurs). Another term for this is "physical media." Note that not all wiring hubs support all media types.

The other component shown in this figure is the connector.

- As the name implies, the connector is the physical location where the NIC and the cabling connect.

- Registered jack (RJ) connectors were originally used to connect telephone lines. RJ connectors are now used for telephone connections and for 10BaseT and other types of network connections. Different connectors are able to support different speeds of transmission because of their design and the materials used in their manufacture.

- RJ-11 connectors are used for telephones, faxes, and modems. RJ-45 connectors are used for NIC cards, 10BaseT cabling, and ISDN lines.

Network Cabling

Cable is the actual physical path upon which an electrical signal travels as it moves

from one component to another. Transmission protocols determine how NIC cards take turns transmitting data onto

the cable. Remember that we discussed how LAN cables (baseband) carry one signal, while WAN cables (broadband) carry multiple signals. There are three primary cable types:

- Twisted-pair (or copper)

- Coaxial cable and

- Fiber-optic cable

Twisted-pair (or copper)


Unshielded twisted-pair (UTP) is a four-pair wire medium used in a variety of networks. UTP does not require the fixed spacing between connections that is

necessary with coaxial-type connections. There are five types of UTP cabling commonly used as shown below:

- Category 1: Used for telephone communications. It is not suitable for transmitting data.

- Category 2: Capable of transmitting data at speeds up to 4 Mbps.

- Category 3: Used in 10BaseT networks and can transmit data at speeds up to 10 Mbps.

- Category 4: Used in Token Ring networks. Can transmit data at speeds up to 16 Mbps.

- Category 5: Can transmit data at speeds up to 100 Mbps.

Shielded twisted-pair (STP) is a two-pair wiring medium used in a variety of network implementations. STP cabling has a layer of shielded insulation to reduce electromagnetic interference (EMI). Token Ring runs on STP.

Using UTP and STP:

- Speed is usually satisfactory for local-area distances.

- These are the least expensive media for data communication. UTP is cheaper than STP.

- Because most buildings are already wired with UTP, many transmission standards

are adapted to use it to avoid costly re-wiring of an alternative cable type.


Coaxial cable

Coaxial cable consists of a solid copper core surrounded by an insulator, a combination shield and ground wire, and an outer protective jacket.

The shielding on coaxial cable makes it less susceptible to interference from outside sources. It requires termination at each end of the cable, as well as a single ground

connection. Coax supports 10/100 Mbps and is relatively inexpensive, although more costly than UTP.

Coaxial cable can be run over longer distances than twisted-pair cable. For example, Ethernet can run at full speed over approximately 100 m (about 330 feet) of twisted pair; using coaxial cable increases this distance to 500 m.

Fiber-optic cable

Fiber-optic cable consists of a glass fiber surrounded by layers of protection: a plastic shield, Kevlar reinforcing, and an outer jacket. Fiber-optic cable is the most expensive of the three types discussed in this section, but it supports 100+ Mbps line speeds.

There are two types of fiber cable:

- Single-mode (or mono-mode): Allows only one mode (or wavelength) of light to propagate through the fiber. It is capable of higher bandwidth and greater distances than multimode, and is often used for campus backbones. It uses lasers as the light-generating method. Single-mode is much more expensive than multimode cable; maximum cable length is around 100 km.

- Multimode: Allows multiple modes of light to propagate through the fiber. Often used for workgroup applications. It uses light-emitting diodes (LEDs) as the light-generating device; maximum cable length is around 2 km.

Throughput Needs

Super servers, high-capacity workstations, and multimedia applications have also fueled the need for higher capacity bandwidths.

The examples in the image above show that the need for throughput capacity grows as a result of the desire to transmit more voice, video, and graphics. The rate at which this information may be sent (the transmission speed) depends on how data is transmitted and on the medium used for transmission. The "how" of this equation is satisfied by a transmission protocol.

Each protocol runs at a different speed. Two terms are used to describe this speed: throughput rate and bandwidth.

The throughput rate is the rate of information arriving at, and possibly passing through, a particular point in a network.

In this chapter, the term bandwidth means the total capacity of a given network medium (twisted pair, coaxial, or fiber-optic cable) or protocol.

- Bandwidth is also used to describe the difference between the highest and the lowest frequencies available for network signals. This quantity is measured in megahertz (MHz).

- The bandwidth of a given network medium or protocol is measured in bits per second (bps).

Some of the available bandwidth specified for a given medium or protocol is used up in overhead, including control characters. This overhead reduces the capacity available for transmitting data.


This table shows the tremendous variation in transmission time with different throughput rates. In years past, megabit (Mb) rates were considered fast. In today's modern networks, gigabit (Gb) rates are possible. Nevertheless, there continues to be a focus on greater throughput rates.
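The transmission-time arithmetic behind such a table is simple: bits to send divided by bits per second. The following sketch uses example file sizes and link rates (and ignores the protocol overhead discussed above):

```python
# Back-of-the-envelope transmission-time arithmetic: size in bytes times 8
# bits per byte, divided by the link rate in bits per second.
def transfer_seconds(size_bytes, rate_bps):
    """Ideal transfer time, ignoring protocol overhead and latency."""
    return size_bytes * 8 / rate_bps

file_size = 10 * 1024 * 1024          # an example 10 MB file
for name, rate in [("56 kbps modem", 56_000),
                   ("10 Mbps Ethernet", 10_000_000),
                   ("1 Gbps Ethernet", 1_000_000_000)]:
    print(f"{name}: {transfer_seconds(file_size, rate):.2f} s")
```

The same 10 MB file that ties up a modem for roughly 25 minutes crosses a gigabit link in well under a second, which is the variation the table illustrates.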

LAN Topologies

You may hear the word topology used with respect to networks. "Topology" refers to the physical arrangement of network components and media within an enterprise networking structure. There are four primary kinds of LAN topologies: bus, tree, star, and ring.

Bus and Tree topology

Bus topology is:

- A linear LAN architecture in which transmissions from network components propagate the length of the medium and are received by all other components.

- The bus portion is the common physical signal path, composed of wires or other media, across which signals can be sent from one part of a network to another. It is sometimes called a highway.

- Ethernet/IEEE 802.3 networks commonly implement a bus topology.

Tree topology is:


- Similar to bus topology, except that tree networks can contain branches with

multiple nodes. As in bus topology, transmissions from one component propagate the length of the medium and are received by all other components.

The disadvantage of bus topology is that if the connection to any one user is broken, the entire network goes down, disrupting communication between all users. Because

of this problem, bus topology is rarely used today. The advantage of bus topology is that it requires less cabling (therefore, lower cost) than star topology.

Star topology

Star topology is a LAN topology in which endpoints on a network are connected to a common central switch or hub by point-to-point links. Logical bus and ring topologies are often implemented physically in a star topology.

- The benefit of star topology is that even if the connection to any one user is broken, the network keeps functioning, and communication between the remaining users is not disrupted.

- The disadvantage of star topology is that it requires more cabling (and therefore costs more) than bus topology.

Star topology may be thought of as a bus in a box.

Ring topology


Ring topology consists of a series of repeaters connected to one another by unidirectional transmission links to form a single closed loop.

- Each station on the network connects to the network at a repeater.

- While logically a ring, ring topologies are most often physically cabled as a closed-loop star. A ring organized this way still implements a unidirectional closed loop, rather than independent point-to-point links.

- One example of a ring topology is Token Ring.

Redundancy is used to avoid collapse of the entire ring in the event that a connection between two components fails.

LAN/WAN Devices

Let's now take a look at some of the devices that move traffic around the network.

The approach taken in this section will be simple. As networking technology continues to evolve, the actual differences between networking devices are beginning to blur. Routers today are switching packets faster, yielding the performance of switches. Switches, on the other hand, are being designed with more intelligence and are able to act more like routers. Hubs, while traditionally not intelligent in terms of the amount of software they run, are now being designed with software that allows the hub to act more like a switch. In this section, we'll keep these different types of product separate so that you can understand the basics. Let's start off with the hub.

Hub

Star topology networks generally have a hub in the center of the network that connects all of the devices together using cabling. When bits hit a networking device, be it a hub, switch, or router, the device will strengthen the signal and then send it on its way.

A hub is simply a multiport repeater. There is usually no software to load and no configuration required (i.e., network administrators don't have to tell the device what to do).

Hubs operate in much the same way as a repeater: they amplify signals and propagate them out all ports, with the exception of the port on which the data arrived.

For example, in the above image, if system 125 wanted to print on printer 128, the message would be sent to all systems on Segment 1, as well as across the hub to all systems on Segment 2. System 128 would see that the message is intended for it and would process it. Devices on the network are constantly listening for data. When a device senses a frame of information that is addressed to it (and we will talk more about addressing later), it accepts that information into memory found on the network interface card (NIC) and begins processing the data.

In fairly small networks, hubs work very well. However, in large networks the limitations of hubs create problems for network managers. In this example, Ethernet is the standard being used. The network is also baseband, so only one station can use the network at a time. If the applications and files being used on this network are large, and there are more nodes on the network, contention for bandwidth will slow down the responsiveness of the network.
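The repeat-out-every-other-port behavior described above can be sketched as a tiny function. The port numbers and frame contents are made-up illustrations:

```python
# A hub as a multiport repeater: whatever arrives on one port is repeated
# out every other port, with no filtering, learning, or addressing logic.
def hub_forward(ports, ingress_port, frame):
    """Return the list of (port, frame) copies a dumb hub would emit."""
    return [(p, frame) for p in ports if p != ingress_port]

out = hub_forward(ports=[1, 2, 3, 4], ingress_port=2, frame="125-to-128")
print(out)   # the frame floods out ports 1, 3, and 4
```

Note there is no decision-making at all: every frame consumes bandwidth on every segment, which is exactly why contention grows as the network does.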

Bridges

Bridges improve network throughput and operate at a more intelligent level than hubs do. A bridge is considered to be a store-and-forward device that uses unique hardware addresses to filter traffic that would otherwise travel from one segment to another. A bridge performs the following functions:

- Reads data frame headers and records source address/port (segment) pairs.

- Reads the destination address of incoming frames and uses the recorded addresses to determine the appropriate outbound port for the frame.

- Uses memory buffers to store frames during periods of heavy transmission, and forwards them when the medium is ready.

Let's take a look at an example.

The bridge divides this Ethernet LAN into two segments in the above image, each connecting to a hub and then to a bridge port. Stations 123-125 are on segment 1 and stations 126-128 are on segment 2.

When station 124 transmits to station 125, the frame goes into the hub (which repeats it and sends it out all connected ports) and then on to the bridge. The bridge will not forward the frame because it recognizes that stations 124 and 125 are on the same

segment. Only traffic between segments passes through the bridge. In this example, a data frame from station 123, 124, or 125 to any station on segment 2 would be

forwarded, and so would a message from any station on segment 2 to stations on segment 1. When one station transmits, all other stations must wait until the line is silent again

before transmitting. In Ethernet, only one station can transmit at a time, or data frames will collide with each other, corrupting the data in both frames.

Bridges will listen to the network and keep track of who they are hearing. For instance, the bridge in this example will know that system 127 is on Segment 2, and that 125 is on segment 1. The bridge may even have a port (perhaps out to the

Internet) where it will send all packets that it cannot identify a destination for.
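The learn-then-filter-or-forward behavior described above can be sketched as a small learning-bridge model (buffering is omitted). The station and port names are illustrative, not taken from the figure:

```python
# Learning-bridge sketch: record source-address/port pairs, then forward a
# frame only when its destination is on a different segment, flooding when
# the destination has not been heard from yet.
class Bridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}            # station address -> port it was heard on

    def receive(self, port, src, dst):
        self.table[src] = port     # learn where the sender lives
        out = self.table.get(dst)
        if out == port:
            return []              # same segment: filter, do not forward
        if out is None:
            return [p for p in self.ports if p != port]   # unknown: flood
        return [out]               # known, other segment: forward there

br = Bridge(ports=[1, 2])
br.receive(1, src="124", dst="125")   # learns 124; floods, 125 still unknown
br.receive(1, src="125", dst="124")   # learns 125; same-segment, filtered
print(br.receive(2, src="127", dst="125"))  # prints [1]: cross-segment forward
```

This mirrors the example: once the bridge has heard 124 and 125 on segment 1, their traffic to each other never crosses to segment 2, while frames from segment 2 to segment 1 are forwarded.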

Switches

Switches use bridging technology to forward traffic between ports. They provide full dedicated transmission rates between two stations that are directly connected to the switch ports. Switches also build and maintain address tables, just as bridges do. These address tables are known as "content-addressable memory."

Let's look at an example.


Replacing the two hubs and the bridge with an Ethernet switch provides the users with dedicated bandwidth. Each station has a full 10 Mbps "pipe" to the switch. With a switch at the center of the network, combined with 100 Mbps links, users have greater access to the network.

Given the size of the files and applications on this network, additional bandwidth for access to the server or to the corporate intranet is possible by using a switch that has both 10 Mbps and 100 Mbps Fast Ethernet ports. The 10 Mbps links could be used to support all the desktop devices, including the printer, while the 100 Mbps switch ports would be used for higher bandwidth needs.

Routers

A router has two basic functions: path determination using a variety of metrics, and

forwarding packets from one network to another. Routing metrics can include load on the link between devices, delay, bandwidth, and reliability, or even hop count (i.e. the

number of devices a packet must go through in order to reach its destination). In essence, routers will do all that bridges and switches will do, plus more. Routers have the capability of looking deeper into the data frame and applying network

services based on the destination IP address. Destination and Source IP addresses are a part of the network header added to a packet encapsulation at the network layer.
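As a toy illustration of path determination, the sketch below finds the route with the fewest hops through a small hypothetical four-router topology; real routing protocols also weigh metrics such as bandwidth, delay, load, and reliability.

```python
# Hop-count path determination over a made-up topology of four routers.
from collections import deque

links = {                      # adjacency list: router -> neighbors
    "A": ["B", "D"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["A", "C"],
}

def fewest_hops(source, destination):
    # Breadth-first search returns a path with the minimum hop count.
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbor in links[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None                # destination unreachable

print(fewest_hops("A", "C"))   # a two-hop path such as ['A', 'B', 'C']
```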

- SUMMARY -

* LANs are designed to operate within a limited geographic area
* Key LAN components are computers, NOS, NICs, hubs, and cables

* Common LAN topologies include bus, tree, star, and ring

* Common LAN/WAN devices are hubs, bridges, switches, and routers

Lesson 2: OSI Reference Model

This lesson covers the OSI reference model, sometimes also called the ISO or 7-layer reference model. The model was developed by the International Organization for Standardization (ISO) in the early 1980s. It describes the principles for interconnection of computer systems in an Open Systems Interconnection environment.

The Agenda

- The Layered Model

- Layers 1 & 2: Physical & Data Link Layers
- Layer 3: Network Layer

- Layers 4–7: Transport, Session, Presentation, and Application Layers

The Layered Model

The concept of layered communication is essential to ensuring interoperability of all

the pieces of a network. To introduce the process of layered communication, let‘s take a look at a simple example.

In this image, the goal is to get a message from Location A to Location B. The sender doesn't know what language the receiver speaks, so the sender passes the message on to a translator. The translator, while not concerned with the content of the message, translates it into a language understood by most, if not all, translators, so it doesn't matter what language the final recipient speaks. In this example, the language is Dutch. The translator also indicates what the language is, and then passes the message to an administrative assistant.

The administrative assistant, while not concerned with the language or the message, works to ensure the reliable delivery of the message to the destination. In this example, she attaches the fax number and then faxes the document to the destination, Location B.

The document is received by an administrative assistant at Location B. The assistant at Location B may even call the assistant at Location A to let her know the fax was

properly received. The assistant at Location B will then pass the message to the translator at her office. The translator will see that the message is in Dutch. The translator, knowing that the

person to whom the message is addressed only speaks French, will translate the message so the recipient can properly read the message. This completes the process

of moving information from one location to another.

Upon closer study of the process employed to communicate, you will notice that communication took place at different layers. At layer 1, the administrative assistants communicated with each other. At layer 2, the translators communicated with each

other. And, at layer 3 the sender was able to communicate with the recipient.

Why a Layered Network Model?

That's essentially the same thing that goes on in networking with the OSI model. This image illustrates the model.

So, why use a layered network model in the first place? A layered network model does a number of things. It reduces complexity by dividing one large problem into seven smaller ones. It allows the standardization of interfaces among devices. It also facilitates modular engineering, so engineers can work on one layer of the model without being concerned with what happens at another layer. This modularity accelerates the evolution of technology, and it simplifies teaching and learning by dividing the complexity of internetworking into discrete, more easily learned subsets of operations.

Note that a layered model does not define or constrain an implementation; it provides a framework. Implementations, therefore, do not conform to the OSI reference model,

but they do conform to the standards developed from the OSI reference model principles.

Devices Function at Layers

Let's put this in some context. You are already familiar with different networking devices such as hubs, switches, and routers. Each of these devices operates at a different layer of the OSI model.

NIC cards receive information from upper level applications and properly package data for transmission on to the network media. Essentially, NIC cards live at the lower four layers of the OSI Model.

Hubs, whether Ethernet, or FDDI, live at the physical layer. They are only concerned with passing bits from one station to other connected stations on the network. They do not filter any traffic.

Bridges and switches on the other hand, will filter traffic and build bridging and switching tables in order to keep track of what device is connected to what port.

Routers, or the technology of routing, live at layer 3. These are the layers people are referring to when they speak of "layer 2" or "layer 3" devices. Let's take a closer look at the model.

Host Layers & Media Layers

Host Layers :-

The upper four layers, Application, Presentation, Session, and Transport, are responsible for accurate data delivery between computers. The tasks or functions of

these upper four layers must "interoperate" with the upper four layers in the system being communicated with.

Media Layers :-

The lower three layers – Network, Data Link and Physical -- are called the media layers. The media layers are responsible for seeing that the information does indeed

arrive at the destination for which it was intended.

Layer Functions

- Application Layer

If we take a look at the model from the top layer, the Application Layer, down, I think

you will begin to get a better idea of what the model does for the industry.

The applications that you run on a desktop system, such as PowerPoint, Excel, and Word, work above the seven layers of the model. The application layer of the model helps to provide network services to the

applications. Some of the application processes or services that it offers are electronic mail, file transfer, and terminal emulation.

- Presentation Layer

The next layer of the seven-layer model is the presentation layer. It is responsible for the overall representation of the data from the application layer to the receiving system. It ensures that the data is readable by the receiving system.

- Session Layer

The session layer is concerned with inter-host communication. It establishes, manages and terminates sessions between applications.

- Transport Layer

Layer 4, the transport layer, is primarily concerned with end-to-end connection reliability. It is concerned with issues such as data transport, information flow, and fault detection and recovery.

- Network Layer

The network layer is layer 3. This is the layer that is associated with addressing and looking for the best path to send information on. It provides connectivity and path selection between two systems.

The network layer is essentially the domain of routing. So when we talk about a device having layer 3 capability, we mean that that device is capable of addressing

and best path selection.

- Data Link Layer

The link layer (formally referred to as the data link layer) provides reliable transit of

data across a physical link. In so doing, the link layer is concerned with physical (as opposed to network or logical) addressing, network topology, line discipline (how end systems will use the network link), error notification, ordered delivery of frames, and

flow control.

- Physical Layer

The physical layer is concerned with binary transmission. It defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and

deactivating the physical link between end systems. Such characteristics as voltage levels, physical data rates, and physical connectors are defined by physical layer

specifications. Now you know the role of all 7 layers of the OSI model.

Peer-to-Peer Communications

Let‘s see how these layers work in a Peer to Peer Communications Network. In this exercise we will package information and move it from Host A, across network lines to Host B.

Each layer uses its own layer protocol to communicate with its peer layer in the other system. Each layer‘s protocol exchanges information, called protocol data units

(PDUs), between peer layers. This peer-layer protocol communication is achieved by using the services of the layers below it: the layer below any current or active layer provides its services to the current layer. The transport layer ensures that data from different applications is kept segmented and separate. At the network layer, those segments are assembled into packets. At the data link layer, those packets become frames, and at the physical layer those frames go out on the wire from one host to the other as bits.

Data Encapsulation

This whole process of moving data from host A to host B is known as data encapsulation – the data is being wrapped in the appropriate protocol header so it

can be properly received. Let's say we compose an email that we wish to send from system A to system B. The application we are using is Eudora. We write the letter and then hit send. Now, the computer translates the characters into ASCII and then into binary (1s and 0s). If the email is a long one, it is broken up and mailed in pieces. This all happens by the time the data reaches the transport layer.

At the network layer, a network header is added to the data. This header contains

information required to complete the transfer, such as source and destination logical addresses.

The packet from the network layer is then passed to the data link layer where a frame header and a frame trailer are added thus creating a data link frame.

Finally, the physical layer provides a service to the data link layer. This service includes encoding the data link frame into a pattern of 1s and 0s for transmission on

the medium (usually a wire).
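The encapsulation steps above can be sketched as follows. The header contents are purely illustrative stand-ins, not real protocol formats; the point is simply that each layer wraps the data from the layer above, and the physical layer finally encodes the whole frame as bits.

```python
# Sketch of data encapsulation: transport segment -> network packet ->
# data link frame -> physical-layer bits. All field values are made up.

def encapsulate(data: bytes) -> str:
    segment = b"TRANSPORT|" + data                       # transport header
    packet = b"IP src=10.0.0.1 dst=10.0.0.2|" + segment  # network header
    frame = b"ETH|" + packet + b"|FCS"                   # header + trailer
    # Physical layer: encode the frame into a pattern of 1s and 0s.
    return "".join(f"{byte:08b}" for byte in frame)

bits = encapsulate(b"Hello, B!")
print(bits[:16])   # the first two encoded bytes of the frame header
```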

Layers 1 & 2: Physical & Data Link Layers

Now let‘s take a look at each of the layers in a bit more detail and with some context. For Layers 1 and 2, we‘re going to look at physical device addressing, and the

resolution of such addresses when they are unknown.

Physical and Logical Addressing

Locating computer systems on an internetwork is an essential component of any network system, and the key to this is addressing. Every NIC card on the network has its own MAC address. In this example we have a computer with the MAC address 0000.0C12.3456. The MAC address is a hexadecimal number, so the digits in this address don't run just from zero to nine: they go from zero to nine and then from "A" through "F", giving sixteen digits in this counting system. Every type of device on a network has a MAC address, whether it is a Macintosh computer, a Sun workstation, a hub

or even a router. These are known as physical addresses and they don‘t change. Logical addresses exist at Layer 3 of the OSI reference model. Unlike link-layer addresses, which usually exist within a flat address space, network-layer addresses

are usually hierarchical. In other words, they are like mail addresses, which describe a person's location by providing a country, a state, a zip code, a city, a street, an address on the street, and finally, a name. One good example of a flat address space is the U.S. Social Security numbering system, where each person has a single, unique number.

MAC Address

For multiple stations to share the same medium and still uniquely identify each other, the MAC sublayer defines a hardware or data link address called the MAC

address. The MAC address is unique for each LAN interface. On most LAN-interface cards, the MAC address is burned into ROM—hence the term,

burned-in address (BIA). When the network interface card initializes, this address is copied into RAM. The MAC address is a 48-bit address expressed as 12 hexadecimal digits. The first 6

hexadecimal digits of a MAC address contain a manufacturer identification (vendor code), also known as the organizationally unique identifier (OUI). To ensure vendor uniqueness, the Institute of Electrical and Electronics Engineers (IEEE) administers

OUIs. The last 6 hexadecimal digits are administered by each vendor and often represent the interface serial number.
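A small helper can split a 48-bit MAC address into those two halves: the first 6 hex digits (the IEEE-assigned OUI) and the last 6 (vendor-assigned). The example address is illustrative; 00000C happens to be a well-known Cisco OUI.

```python
def split_mac(mac: str):
    """Return (OUI, vendor-assigned part) for a 12-hex-digit MAC,
    accepting the common colon, dash, and dot separator styles."""
    digits = mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address has exactly 12 hex digits (48 bits)")
    return digits[:6], digits[6:]

oui, serial = split_mac("0000.0C12.3456")
print(oui, serial)    # 00000C 123456
```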

Layer 3: Network Layer

Now let's take a look at layer 3, the domain of routing.

Network Layer: Path Determination

Which path should traffic take through the cloud of networks? Path determination occurs at Layer 3. The path determination function enables a router to evaluate the available paths to a destination and to establish the preferred handling of a packet.

Data can take different paths to get from a source to a destination. At layer 3, routers really help determine which path. The network administrator configures the router

enabling it to make an intelligent decision as to where the router should send information through the cloud. The network layer sends packets from source network to destination network.

After the router determines which path to use, it can proceed with switching the packet: taking the packet it accepted on one interface and forwarding it to another

interface or port that reflects the best path to the packet‘s destination.

To be truly practical, an internetwork must consistently represent the paths of its

media connections. As the graphic shows, each line between the routers has a number that the routers use as a network address. These addresses contain information about the path of media connections used by the routing process to pass

packets from a source toward a destination. The network layer combines this information about the path of media connections (sets of links) into an internetwork by adding path determination, path switching, and route processing functions to a communications system. Using these addresses, the network layer also provides a relay capability that interconnects independent

networks. The consistency of Layer 3 addresses across the entire internetwork also improves the use of bandwidth by preventing unnecessary broadcasts, which tax the system.

Addressing—Network and Node

Each device in a local area network is given a logical address. The first part is the network number; in this example, that is a single digit, 1. The second part is a node number; in this example, we have nodes 1, 2, and 3. The router uses the network number to forward information from one network to another.

Protocol Addressing Variations

The two-part network addressing scheme extends across all the protocols covered in

this course. How do you interpret the meaning of the address parts? What authority allocates the addresses? The answers vary from protocol to protocol.

For example, in a TCP/IP address, dotted-decimal numbers show a network part and a host part. Network 10 uses the first of the four numbers as the network part and the last three numbers (8.2.48) as the host address. The mask is a companion

number to the IP address. It communicates to the router the part of the number to interpret as the network number and identifies the remainder available for host

addresses inside that network. The Novell Internetwork Packet Exchange (IPX) example uses a different variation of this two-part address. The network address 1aceb0b is a hexadecimal (base 16) number that cannot exceed a fixed maximum number of digits. The host address 0000.0c00.6e25 (also a hexadecimal number) is a fixed 48 bits long. This host address derives automatically from information in the hardware of the specific LAN device. These are the two most common Layer 3 address types.
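Using the example address (network 10, host 8.2.48), the sketch below shows how a mask tells a router which part of the address is the network number and which is the host; the 255.0.0.0 mask is the classful assumption for a network whose first octet is 10.

```python
# Applying a mask to separate the network and host parts of an IP address.
import ipaddress

address = ipaddress.ip_address("10.8.2.48")
mask = ipaddress.ip_address("255.0.0.0")

network_part = ipaddress.ip_address(int(address) & int(mask))
host_part = ipaddress.ip_address(int(address) & ~int(mask) & 0xFFFFFFFF)

print(network_part)   # 10.0.0.0 -> the network number
print(host_part)      # 0.8.2.48 -> the host portion
```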

Network Layer Protocol Operations

Let's take a look at the flow of packets through a routed network. For example's sake, let's say it is an email message from you at Station X to your mother in Michigan

who is using System Y. The message will exit Station X and travel through the corporate internal network until it gets to a point where it needs the services of an Internet service provider. The

message will bounce through their network and eventually arrive at Mom‘s Internet provider in Dearborn. Now, we have simplified this transmission to three routers,

when in actuality, it could travel through many different networks before it arrives at its destination. Let's take a look, from the OSI model's point of view, at what is happening to the message as it bounces around the Internet on its way to Mom's.

As information travels from Station X it reaches the network level where a network

address is added to the packet. At the data link layer, the information is encapsulated in an Ethernet frame. Then it goes to the router (here, Router A), which de-encapsulates and examines the frame to determine what type of network layer data is being carried. The network layer data is sent to the appropriate network layer process, and the frame itself is discarded.

The network layer process examines the header to determine the destination network. The packet is again encapsulated in the data-link frame for the selected interface and

queued for delivery. This process occurs each time the packet switches through another router. At the router connected to the network containing the destination host (in this case, Router C), the packet is again encapsulated in the destination LAN's data-link frame type for delivery to the protocol stack on the destination host, System Y.

Multiprotocol Routing

Routers are capable of understanding address information coming from many

different types of networks and maintaining associated routing tables for several routed protocols concurrently. This capability allows a router to interleave packets from several routed protocols over the same data links.

As the router receives packets from the users on the networks using IP, it builds a routing table containing the addresses of the network of these IP users. Now some Macintosh AppleTalk users are adding to the traffic on this link of the

network. The router adds the AppleTalk addresses to the routing table. Routing tables can contain address information from multiple protocol networks.

In addition to the AppleTalk and IP users, there is also some IPX traffic from some Novell NetWare networks.

Finally, we see some DEC traffic from the VAX minicomputers attached to the Ethernet networks. Routers can pass traffic from these (and other) protocols across the common Internet.

The various routed protocols operate separately. Each uses its own routing tables to determine paths and switches over addressed ports in a "ships in the night" fashion; that is, each protocol operates without knowledge of or coordination with any of the other protocol operations. Now that we have spent some time with routed protocols, let's take some time talking about routing protocols.

Routed Versus Routing Protocol

It is easy to confuse the similar terms routed protocol and routing protocol. Routed protocols are what we have been talking about so far: any network protocol suite that provides enough information in its network layer address to allow a packet to be forwarded from host to host. Routed protocols define the format and use of the fields within a packet. Packets generally are conveyed from end system to end system. The Internet Protocol (IP) and Novell's IPX are examples of routed protocols.

Routing protocols support a routed protocol by providing mechanisms for sharing

routing information. Routing protocol messages move between the routers. A routing protocol allows the routers to communicate with other routers to update and

maintain tables. Routing protocol messages do not carry end-user traffic from network to network. A routing protocol uses the routed protocol to pass information

between routers. TCP/IP examples of routing protocols are Routing Information Protocol (RIP), Interior Gateway Routing Protocol (IGRP), and Open Shortest Path

First (OSPF).

Static Versus Dynamic Routes

Routers must be aware of what links, or lines, on the network are up and running, which ones are overloaded, or which ones may even be down and unusable. There are two primary methods routers use to determine the best path to a destination:

static and dynamic. Static knowledge is administered manually: a network administrator enters it into the

router's configuration. The administrator must manually update this static route entry whenever an internetwork topology change requires an update. Static knowledge is private; it is not conveyed to other routers as part of an update process.

Dynamic knowledge works differently. After the network administrator enters configuration commands to start dynamic routing, route knowledge is updated automatically by a routing process whenever new topology information is received

from the internetwork. Changes in dynamic knowledge are exchanged between routers as part of the update process.

Static Route : Uses a protocol route that a network administrator enters into the

router

Dynamic Route : Uses a route that a network protocol adjusts automatically for topology or traffic changes

Dynamic routing tends to reveal everything known about an internetwork. For security reasons, it might be appropriate to conceal parts of an internetwork. Static

routing allows an internetwork administrator to specify what is advertised about restricted partitions. When an internetwork partition is accessible by only one path, a static route to the

partition can be sufficient. This type of partition is called a stub network. Configuring static routing to a stub network avoids the overhead of dynamic routing.
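A static routing table can be pictured as a hand-entered lookup: nothing changes unless the administrator edits the configuration. The network prefixes and next-hop names below are hypothetical, and the single entry for the stub network stands in for the "one path is enough" case described above.

```python
# Sketch of static routing: routes are entered by hand and never
# adjust themselves to topology changes.

static_routes = {
    "192.168.1.0/24": "RouterB",
    "192.168.2.0/24": "RouterD",   # e.g. the single path to a stub network
}
default_route = "RouterA"          # where to send everything else

def next_hop(destination_network):
    # Static knowledge: this lookup only changes when an administrator
    # edits static_routes, never in response to network events.
    return static_routes.get(destination_network, default_route)

print(next_hop("192.168.2.0/24"))  # RouterD
print(next_hop("172.16.0.0/16"))   # RouterA (falls back to the default)
```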

Adapting to Topology Change

The internetwork shown in the graphic adapts differently to topology changes depending on whether it uses statically or dynamically configured knowledge. Static knowledge allows the routers to properly route a packet from network to

network. The router refers to its routing table and follows the static knowledge there to relay the packet to Router D. Router D does the same and relays the packet to

Router C. Router C delivers the packet to the destination host.

But what happens if the path between Router A and Router D fails? Obviously Router

A will not be able to relay the packet to Router D. Until Router A is reconfigured to relay packets by way of Router B, communication with the destination network is impossible.

Dynamic knowledge offers more automatic flexibility. According to the routing table generated by Router A, a packet can reach its destination over the preferred route

through Router D. However, a second path to the destination is available by way of Router B. When Router A recognizes the link to Router D is down, it adjusts its routing table, making the path through Router B the preferred path to the

destination. The routers continue sending packets over this link. When the path between Routers A and D is restored to service, Router A can once

again change its routing table to indicate a preference for the counter-clockwise path through Routers D and C to the destination network.
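The failover described above can be modeled with a toy cost table for the four routers. The link costs are made up, chosen so that the path through Router D is preferred while the A-D link is up; when that link is removed, the best remaining path runs through Router B.

```python
# Toy model of dynamic routing adapting to a topology change.
links = {("A", "D"): 1, ("A", "B"): 2, ("D", "C"): 1, ("B", "C"): 2}

def best_path_to_c(available):
    # Compare the two candidate paths, A-D-C and A-B-C, where usable.
    candidates = []
    if ("A", "D") in available and ("D", "C") in available:
        cost = available[("A", "D")] + available[("D", "C")]
        candidates.append((cost, ["A", "D", "C"]))
    if ("A", "B") in available and ("B", "C") in available:
        cost = available[("A", "B")] + available[("B", "C")]
        candidates.append((cost, ["A", "B", "C"]))
    return min(candidates)[1] if candidates else None

print(best_path_to_c(links))           # ['A', 'D', 'C'] (preferred path)
# The A-D link fails: the table is adjusted and traffic shifts to B.
after_failure = {k: v for k, v in links.items() if k != ("A", "D")}
print(best_path_to_c(after_failure))   # ['A', 'B', 'C']
```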

LAN-to-LAN Routing

Example 01:-

The next two examples will bring together many of the concepts we have discussed.

The network layer must relate to and interface with various lower layers. Routers must be capable of seamlessly handling packets encapsulated into different lower-level frames without changing the packets‘ Layer 3 addressing.

Let‘s look at an example of this in a LAN-to-LAN routing situation. Packet traffic from source Host 4 on Ethernet network 1 needs a path to destination Host 5 on Token

Ring Network 2. The LAN hosts depend on the router and its consistent network addressing to find the best path. When the router checks its routing table entries, it discovers that the best path to destination Network 2 uses outgoing port To0, the interface to a Token Ring LAN.

Although the lower-layer framing must change as the router switches packet traffic from the Ethernet on Network 1 to the Token Ring on Network 2, the Layer 3 addressing for source and destination remains the same (in this example, Net 2, Host 5) despite the different lower-layer encapsulations. The packet is then reframed and sent on to the destination Token Ring network.

LAN-to-WAN Routing

Now, let‘s look at an example using a Wide Area Network.

Example 02:-

The network layer must relate to and interface with various lower layers for LAN-to-

WAN traffic, as well. As an internetwork grows, the path taken by a packet might encounter several relay points and a variety of data-link types beyond the LANs. For example, in the graphic, a packet from the top workstation at address 1.3 must

traverse three data links to reach the file server at address 2.4 shown on the bottom: The workstation sends a packet to the file server by encapsulating the packet in a Token Ring frame addressed to Router A.

When Router A receives the frame, it removes the packet from the Token Ring frame, encapsulates it in a Frame Relay frame, and forwards the frame to Router B.

Router B removes the packet from the Frame Relay frame and forwards the packet to

the file server in a newly created Ethernet frame. When the file server at 2.4 receives the Ethernet frame, it extracts the packet and passes it to the appropriate upper-layer process through de-encapsulation. The routers enable LAN-to-WAN packet flow by keeping the end-to-end source and

destination addresses constant while encapsulating the packet at the port to a data link that is appropriate for the next hop along the path.

Layers 4–7: Transport, Session, Presentation, and Application Layers

Let‘s look at the upper layers of the OSI seven layer model now. Those layers are the transport, session, presentation, and application layers.

Transport Layer

Transport services allow users to segment and reassemble data from several upper-layer applications onto the same transport-layer data stream. The transport layer also establishes the end-to-end connection, from your host to another host. As the transport layer sends its segments, it can also ensure data integrity. Essentially, the transport layer opens up the connection from your system, through a network and then through a wide-area cloud, to the receiving system at the other end.

- Segments upper-layer applications

- Establishes an end-to-end connection
- Sends segments from one end host to another

- Optionally, ensures data reliability

Transport Layer— Segments Upper-Layer Applications

The transport layer has several functions. First, it segments upper-layer application information. You might have more than one application running on your desktop at a time: you might have electronic mail open while transferring a file from the Web and running a terminal session. The transport layer helps keep straight all of the information coming from these different applications.

Transport Layer— Establishes Connection

Another function of the transport layer is to establish the connection from your

system to another system. When you are browsing the Web and double-click on a link your system tries to establish a connection with that host. Once the connection

has been established, there is some negotiation that happens between your system and the system that you are connected to in terms of how data will be transferred. Once the negotiations are completed, data will begin to transfer. As soon as the data

transfer is complete, the receiving station will send you the end message and your browser will say done. Essentially, the transport layer is responsible then for

connecting and terminating sessions from your host to another host.

Transport Layer— Sends Segments with Flow Control

Another important function of the transport layer is to send segments and maintain the sending and receiving of information with flow control. When a connection is established, the host will begin to send frames to the receiver.

When frames arrive too quickly for a host to process, it stores them in memory temporarily. If the frames are part of a small burst, this buffering solves the problem.

If the traffic continues, the host or gateway eventually exhausts its memory and must discard additional frames that arrive. Instead of losing data, the transport function can issue a not ready indicator to the

sender. Acting like a stop sign, this indicator signals the sender to discontinue sending segment traffic to its peer. After the receiver has processed sufficient segments that its buffers can handle additional segments, the receiver sends a ready

transport indicator, which is like a go signal. When it receives this indicator, the sender can resume segment transmission.
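The not-ready/ready exchange can be sketched with a small receive buffer; the three-segment buffer limit is an arbitrary choice for illustration.

```python
class Receiver:
    """Toy receiver that signals flow control based on buffer fill."""

    def __init__(self, buffer_limit=3):
        self.buffer = []
        self.buffer_limit = buffer_limit

    def accept(self, segment):
        self.buffer.append(segment)
        # "not ready" acts like a stop sign; "ready" is the go signal.
        return "ready" if len(self.buffer) < self.buffer_limit else "not ready"

    def process(self):
        # Handle one buffered segment, freeing space for more.
        return self.buffer.pop(0) if self.buffer else None

receiver = Receiver()
print(receiver.accept("s1"))   # ready
print(receiver.accept("s2"))   # ready
print(receiver.accept("s3"))   # not ready: the sender must pause
receiver.process()             # the receiver catches up...
receiver.process()
print(receiver.accept("s4"))   # ready: the sender may resume
```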

Transport Layer— Reliability with Windowing

In the most basic form of reliable connection-oriented data transfer, a sequence of data segments must be delivered to the recipient in the same sequence that they were transmitted (the protocol here represents TCP). The transfer fails if any data segments are lost, damaged, duplicated, or received in a different order. The basic solution is to have the receiving system acknowledge the receipt of every data segment.

If the sender had to wait for an acknowledgment after sending each segment, throughput would be low. Because time is available after the sender finishes

transmitting the data segment and before the sender finishes processing any received acknowledgment, the interval is used for transmitting more data. The number of data

segments the sender is allowed to have outstanding–without yet receiving an acknowledgment– is known as the window.

In this scenario, with a window size of 3, the sender can transmit three data segments before expecting an acknowledgment. Unlike this simplified graphic, there is a high probability that acknowledgments and packets will intermix as they

communicate across the network.

Transport Layer— An Acknowledgement Technique

Reliable delivery guarantees that a stream of data sent from one machine will be delivered through a functioning data link to another machine without duplication or

data loss. Positive acknowledgment with retransmission is one technique that guarantees reliable delivery of data streams. Positive acknowledgment requires a

receiving system or receiver to communicate with the source, sending back an acknowledgment message when it receives data. The sender keeps a record of each packet it sends and waits for an acknowledgment before sending the next packet.

In this example, the sender is transmitting packets 1, 2, and 3. The receiver acknowledges receipt of the packets by requesting packet number 4. The sender, upon receiving the acknowledgment sends packets 4, 5, and 6. If packet number 5

does not arrive at the destination, the receiver acknowledges with a request to resend packet number 5. The sender resends packet number 5 and must receive an

acknowledgment to continue with the transmission of packet number 7.
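The exchange in this example can be simulated with a receiver that acknowledges by naming the next packet it expects: a gap (the lost packet 5) makes it keep asking for 5 until the retransmission arrives.

```python
class AckReceiver:
    """Toy positive-acknowledgment receiver."""

    def __init__(self):
        self.expected = 1    # next packet number the receiver wants

    def deliver(self, arriving):
        # Accept packets in order; anything past a gap is discarded.
        for number in arriving:
            if number == self.expected:
                self.expected += 1
        return self.expected  # acknowledgment: "send me this one next"

receiver = AckReceiver()
print(receiver.deliver([1, 2, 3]))   # 4: acknowledges packets 1-3
print(receiver.deliver([4, 6]))      # 5: packet 5 was lost in transit
print(receiver.deliver([5, 6]))      # 7: sender may continue with 7
```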

Transport to Network Layer

The transport layer assumes it can use the network as a given "cloud" as segments cross from the sending source to the receiving destination. If we open up the functions inside the "cloud," we reveal issues like, "Which of several

paths is best for a given route?" We see the role that routers perform in this process, and we see the segments of Layer 4 transport further encapsulated into packets.

Session Layer

- Network File System (NFS)

- Structured Query Language (SQL) - Remote-Procedure Call (RPC) - X Window System

- AppleTalk Session Protocol (ASP) - DEC Session Control Protocol (SCP)

The session layer establishes, manages, and terminates sessions among applications. This layer is primarily concerned with coordinating applications as they interact on

different hosts. Some popular session layer protocols are listed here: Network File System (NFS), Structured Query Language (SQL), the X Window System; even

AppleTalk Session Protocol is part of the session layer.

Page 45: Network Notes

Presentation Layer

The presentation layer is primarily concerned with the format of the data. Data and text can be formatted as ASCII files, as EBCDIC files, or can even be encrypted. Sound may become a MIDI file. Video files can be formatted as MPEG video files or

QuickTime files. Graphics and visual images can be formatted as PICT, TIFF, JPEG, or even GIF files. So that is really what happens at the presentation layer.

Application Layer

The application layer is the highest level of the seven layer model. Computer

applications that you use on your desktop everyday, applications like word processing, presentation graphics, spreadsheets files, and database management, all sit above the application layer. Network applications and internetwork applications

allow you, as the user, to move computer application files through the network and through the internetwork.

Examples:-

COMPUTER APPLICATIONS

- Word Processor - Presentation Graphics - Spreadsheet - Database

- Design/Manufacturing - Project Planning - Others

NETWORK APPLICATIONS

- Electronic Mail - File Transfer

Page 46: Network Notes

- Remote Access - Client-Server Process - Information Location - Network Management - Others

INTERNETWORK APPLICATIONS

- Electronic Data Interchange - World Wide Web - E-Mail Gateways - Special-Interest Bulletin Boards - Financial Transaction Services - Internet Navigation Utilities - Conferencing (Voice, Video, Data) - Others

- SUMMARY -

- OSI reference model describes building blocks of functions for program-to-program

communications between similar or dissimilar hosts - Layers 4–7 (host layers) provide accurate data delivery between computers

- Layers 1–3 (media layers) control physical delivery of data over the network

The OSI reference model describes what must transpire for program-to-program

communications to occur between even dissimilar computer systems. Each layer is responsible for providing services and information to the next higher layer in the OSI reference model.

The Application Layer (which is the highest layer in the OSI model) makes available network services to actual software application programs.

The presentation layer is responsible for formatting and converting data and ensuring that the data is presentable for one application through the network to another application.

The session layer is responsible for coordinating communication interactions between applications. The reliable transport layer is responsible for segmenting and multiplexing information, keeping straight all the various applications you might be

using on your desktop, the synchronization of the connection, flow control, error recovery as well as reliability through the process of windowing. The network layer is

responsible for addressing and path determination. The link layer provides reliable transit of data across a physical link. And finally the physical layer is concerned with binary transmission.

Page 47: Network Notes

Lesson 3: Introduction to TCP/IP

This lesson provides an introduction to TCP/IP. I am sure you've heard of TCP/IP, though you may wonder why you need to understand it. Well, TCP/IP is the language

that governs communications between all computers on the Internet. A basic understanding of TCP/IP is essential to understanding Internet technology and how

it can bring benefits to an organization. We're going to explain what TCP/IP is and the different parts that make it up. We'll also discuss IP addresses.

The Agenda

- What Is TCP/IP?

- IP Addressing

What Is TCP/IP?

TCP/IP is shorthand for a suite of protocols that run on top of IP. IP is the Internet Protocol, and TCP is the most important protocol that runs on top of IP. Any

application that can communicate over the Internet is using IP, and these days most internal networks are also based on TCP/IP.

Protocols that run on top of IP include: TCP, UDP and ICMP. Most TCP/IP implementations support all three of these protocols. We‘ll talk more about them later.

Protocols that run underneath IP include: SLIP and PPP. These protocols allow IP to run across telecommunications lines. TCP/IP protocols work together to break data into packets that can be routed

efficiently by the network. In addition to the data, packets contain addressing, sequencing, and error checking information. This allows TCP/IP to accurately

reconstruct the data at the other end. Here's an analogy of what TCP/IP does. Say you're moving across the country. You pack your boxes and put your new address on them. The moving company picks

them up, makes a list of the boxes, and ships them across the country using the most efficient route. That might even mean putting different boxes on different trucks. When the boxes arrive at your new home, you check the list to make sure

everything has arrived (and in good shape), and then you unpack the boxes and "reassemble" your house.

- A suite of protocols - Rules that dictate how packets of information are sent across multiple networks

- Addressing - Error checking

IP

Let‘s start with IP, the Internet Protocol.

Page 48: Network Notes

Every computer on the Internet has at least one address that uniquely identifies it from all other computers on the Internet (aptly called its IP address!). When you send

or receive data, say an email message or a web page, the message gets divided into little chunks called packets or datagrams. Each of these packets contains both the

source IP address and the destination IP address. IP looks at the destination address to decide what to do next. If the destination is on the local network, IP delivers the packet directly. If the destination is not on the local

network, then IP passes the packet to a gateway—usually a router. Computers usually have a single default gateway. Routers frequently have several gateways from which to choose. A packet may get passed through several gateways

before reaching one that is on a local network with the destination. Along the way, any router may break the IP packet into several smaller packets based

on transmission medium. For example, Ethernet usually allows packets of up to 1500 bytes, but it is not uncommon for modem-based PPP connections to only allow packets of 256 bytes. The last system in the chain (the destination) reassembles the

original IP packet.

TCP/IP Transport Layer

- 21 FTP—File Transfer Protocol - 23 Telnet

- 25 SMTP—Simple Mail Transfer Protocol - 37 Time - 69 TFTP—Trivial File Transfer Protocol

- 79 Finger - 103 X.400

- 161 SNMP—Simple Network Management Protocol - 162 SNMPTRAP

After TCP/IP was invented and deployed, the OSI layered network model was accepted as a standard. OSI neatly divides network protocols into seven layers; the

bottom four layers are shown in this diagram. The idea was that TCP/IP was an interesting experiment, but that it would be replaced by protocols based on the OSI

Page 49: Network Notes

model. As it turned out, TCP/IP grew like wildfire, and OSI-based protocols only caught on

in certain segments of the manufacturing community. These days, while everyone uses TCP/IP, it is common to use the OSI vocabulary.

TCP/IP Applications

- Application layer - File Transfer Protocol (FTP)

- Remote Login (Telnet) - E-mail (SMTP)

- Transport layer

- Transport Control Protocol (TCP) - User Datagram Protocol (UDP)

- Network layer

- Internet Protocol (IP) - Data link & physical layer

- LAN Ethernet, Token Ring, FDDI, etc.

- WAN Serial lines, Frame Relay, X.25, etc.

Roughly, Ethernet corresponds to both the physical layer and the data link layer.

Other media (T1, Frame Relay, ATM, ISDN, analog) and other protocols (SLIP, PPP) are down here as well. Roughly, IP corresponds to the network layer.

Roughly, TCP and UDP correspond to the transport layer. TCP is the most important of all the IP protocols. Most Internet applications you can

think of use TCP, including: Telnet, HTTP (Web), POP & SMTP (email) and FTP (file transfer).

TCP Transmission Control Protocol

TCP stands for Transmission Control Protocol.

Page 50: Network Notes

TCP establishes a reliable connection between two applications over the network. This means that TCP guarantees accurate, sequential delivery of your data. If something goes wrong, TCP reports an error, so you always know whether your data

arrived at the other end. Here‘s how it works:

Every TCP connection is uniquely identified by four numbers: - source IP address

- source port - destination IP address - destination port

Typically, a client will use a random port number, but a server will use a "well-known" port number, e.g. 25=SMTP (email), 80=HTTP (Web), and so on. Because every TCP connection is unique, even though many people may be making requests to the same Web server, TCP/IP can identify your packets among the crowd.

In addition to the port information, each TCP packet has a sequence number. Packets may arrive out of sequence (they may have been routed differently, or one may have

been dropped), so the sequence numbers allow TCP to reassemble the packets in the correct order and to request retransmission of any missing packets. TCP packets also include a checksum to verify the integrity of the data. Packets that

fail checksum get retransmitted.
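A toy illustration of these ideas: packets carrying the four identifying numbers plus a sequence number can be filtered to one connection and put back into transmitted order. The dictionary-based packet format here is invented for the sketch; real TCP headers are binary.

```python
# Sketch: reassembling out-of-order TCP-like packets using the connection
# 4-tuple (source IP/port, destination IP/port) and sequence numbers.
# The packet representation is illustrative only.

def reassemble(packets, connection):
    """Pick packets for one connection and restore transmitted order."""
    mine = [p for p in packets if (p["src_ip"], p["src_port"],
                                   p["dst_ip"], p["dst_port"]) == connection]
    mine.sort(key=lambda p: p["seq"])      # sequence numbers fix the order
    return b"".join(p["data"] for p in mine)

conn = ("192.1.1.17", 49152, "128.7.7.7", 80)   # client port is "random"
packets = [
    {"src_ip": "192.1.1.17", "src_port": 49152, "dst_ip": "128.7.7.7",
     "dst_port": 80, "seq": 2, "data": b"T /"},
    {"src_ip": "192.1.1.17", "src_port": 49152, "dst_ip": "128.7.7.7",
     "dst_port": 80, "seq": 1, "data": b"GE"},
    # A packet belonging to some other connection is ignored:
    {"src_ip": "10.0.0.5", "src_port": 40000, "dst_ip": "128.7.7.7",
     "dst_port": 80, "seq": 1, "data": b"xxx"},
]
assert reassemble(packets, conn) == b"GET /"
```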

UDP User Datagram Protocol

- Unreliable - Fast

- Assumes application will retransmit on error - Often used in diskless workstations

UDP is a fast, unreliable protocol that is suitable for some applications.

Page 51: Network Notes

Unreliable means there is no sequencing, no guaranteed delivery (no automatic retransmission of lost packets) and sometimes no checksums.

Fast means there is no connection setup time, unlike TCP. In reality, once a TCP session is established, packets will go just as fast over a TCP connection as over UDP.

UDP is useful for applications such as streaming audio that don‘t care about dropped packets and for applications such as TFTP that inherently do their own sequencing and checksums. Also, applications such as NFS that usually run on very reliable

physical networks and which need fast, connectionless transactions use UDP.

ICMP Ping

Ping is an example of a program that uses ICMP rather than TCP or UDP. Ping sends

an ICMP echo request from one system to another, then waits for an ICMP echo reply. It is mostly used for testing.

IPv4 Addressing

Most IP addresses today use IP version 4; we'll talk about IP version 6 later. IPv4 addresses are 32 bits long and are usually written in "dot" notation. An example

would be 192.1.1.17. The Internet is actually a lot of small local networks connected together. Part of an IP

address identifies which local network, and part of an IP address identifies a specific system or host on that local network. Which part of an IP address is the "network" part and which part is the "host" part is

determined by the class or the subnet.

IP Addressing—Three Classes

- Class A: NET.HOST.HOST.HOST - Class B: NET.NET.HOST.HOST - Class C: NET.NET.NET.HOST

Before the introduction of subnet masks, the only way to tell the network part of an

IP address from the host part was by its class. Class A addresses have 8 bits (one octet) for the network part and 24 bits for the host

part. This allows for a small number of large networks. Class B addresses have 16 bits each for the network and host parts.

Page 52: Network Notes

Class C addresses have 24 bits for the network and 8 bits for the host. This allows for a fairly large number of networks with up to 254 systems on each.

To summarize: IPv4 addresses are 32 bits with a network part and a host part.

Unless you are using subnets, you divide an IP address into the network and host parts based on the address class.

The network part of an address is used for routing packets over the Internet. The host part is used for final delivery on the local net.

IP Addressing—Class A

Here‘s an example of a class A address. Any IPv4 address in which the first octet is less than 128 is by definition a class A address.

This address is for host #222.135.17 on network #10, although the host is always referred to by its full address.

Example:- 10.222.135.17

- Network # 10 - Host # 222.135.17

- Range of class A network IDs: 1–126 - Number of available hosts: 16,777,214

IP Addressing—Class B

Here's an example of a class B address. Any IPv4 address in which the first octet is

between 128 and 191 is by definition a class B address. Example:- 128.128.141.245

- Network # 128.128

Page 53: Network Notes

- Host # 141.245 - Range of class B network IDs: 128.1–191.254

- Number of available hosts: 65,534

IP Addressing—Class C

Here‘s an example of a class C address. Most IPv4 addresses in which the first octet is 192 or higher are class C addresses, but some of the higher ranges are reserved for

multicast applications. Example:- 192.150.12.1

- Network # 192.150.12

- Host # 1 - Range of class C network IDs: 192.0.1–223.255.254 - Number of available hosts: 254
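The classful rules from the three examples above can be captured in a short, hypothetical helper that picks the class from the first octet and splits an address into its network and host parts:

```python
# Sketch of classful address parsing: determine the class from the first
# octet, then split the dotted-quad address into network and host parts.
# The function name and return format are invented for illustration.

def classify(addr):
    octets = addr.split(".")
    first = int(octets[0])
    if first < 128:                      # class A: NET.HOST.HOST.HOST
        cls, net_octets = "A", 1
    elif first < 192:                    # class B: NET.NET.HOST.HOST
        cls, net_octets = "B", 2
    elif first < 224:                    # class C: NET.NET.NET.HOST
        cls, net_octets = "C", 3
    else:                                # 224 and up: multicast/reserved
        return ("D/E", None, None)
    return cls, ".".join(octets[:net_octets]), ".".join(octets[net_octets:])

# The three examples from the text:
assert classify("10.222.135.17") == ("A", "10", "222.135.17")
assert classify("128.128.141.245") == ("B", "128.128", "141.245")
assert classify("192.150.12.1") == ("C", "192.150.12", "1")
```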

IP Subnetting

As it turns out, dividing IP addresses into classes A, B and C is not flexible enough. In particular, it does not make efficient use of the available IP addresses and it does

not give network administrators enough control over their internal LAN configurations.

In this diagram, the class B network 131.108 is split (probably into 256 subnets), and a router connects the 131.108.2 subnet to the 131.108.3 subnet.

IP Subnet Mask

A subnet mask tells a computer or a router how to divide a range of IP addresses into the network part and the host part.

Given:

Address = 131.108.2.160

Page 54: Network Notes

Subnet Mask = 255.255.255.0

Subnet = 131.108.2.0

In this example, without a subnet mask the address would be treated as class B and

the network number would be 131.108. But because someone supplied a subnet mask of 255.255.255.0, the network number is actually 131.108.2.

These days, routers and computers always use subnet masks if they are supplied. If there is no subnet mask for an address, then the class A, B, C scheme is used.

Remember that a network mask determines which portion of an IP address identifies the network and which portion identifies the host, while a subnet mask describes

which portion of an address refers to the subnet and which part refers to the host.
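As a sketch, the mask operation is just a bitwise AND of the 32-bit address with the 32-bit mask; the helper names below are illustrative:

```python
# Sketch: a subnet mask applied with bitwise AND yields the subnet number,
# matching the 131.108.2.160 / 255.255.255.0 example in the text.

def to_int(addr):
    """Convert dotted-quad notation to a 32-bit integer."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    """Convert a 32-bit integer back to dotted-quad notation."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def subnet(addr, mask):
    """AND the address with the mask to keep only the network/subnet bits."""
    return to_dotted(to_int(addr) & to_int(mask))

assert subnet("131.108.2.160", "255.255.255.0") == "131.108.2.0"
```

With no subnet mask, the same address would be treated as class B (mask 255.255.0.0), giving network 131.108.0.0 instead.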

IP Address Assignment

- ISPs assign addresses to customers - IANA assigns addresses to ISPs

- CIDR block: bundle of addresses

Historically, an organization was assigned a class A, B or C address and carried that address around. This is no longer the case.

Usually an organization is assigned IP addresses by its ISP. If an organization changes ISPs, it changes IP addresses. This is usually not a problem, since most people refer to IP addresses using the DNS. For example, www.acme.com might point

to 192.1.1.1 today and point to 128.7.7.7 tomorrow, but nobody other than the system administrator at acme.com has to worry about it. IANA—the Internet Assigned Numbers Authority—assigns IP addresses to ISPs. These

days no one gets a class A or a class B network; they are pretty much all gone. Usually the IANA bundles 8 or 16 or 32 class C networks together and calls it a CIDR

(pronounced "cider") block. CIDR stands for Classless Inter-Domain Routing, and it greatly simplifies routing among the Internet backbones. CIDR blocks are sometimes called supernets (as opposed to subnets).

IPv6 Addressing

- 128-bit addresses

- 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses

Example 1:- 5F1B:DF00:CE3E:E200:0020:0800:5AFC:2B36 Example 2:- 0:0:0:0:0:0:192.1.1.17

With the explosive growth of the Internet, there are not enough IPv4 addresses to go

Page 55: Network Notes

around. IPv6 is now released, and many organizations are already migrating. While IPv6 has a number of nice features, its biggest claim to fame is a huge number

of IP addresses. IPv4 was only 32 bits; IPv6 is 128 bits. To ease migration, IPv6 completely contains all of IPv4, as shown in the second

example above. Most network applications will have to be modified slightly to accommodate IPv6.
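Python's standard ipaddress module can be used to confirm the embedding in Example 2: the IPv4-compatible IPv6 form carries the 32-bit IPv4 address in the low-order bits of the 128-bit address.

```python
# The ipaddress module parses both notations; an IPv4-compatible IPv6
# address has the same integer value as the IPv4 address it embeds.
import ipaddress

v4 = ipaddress.IPv4Address("192.1.1.17")
v6 = ipaddress.IPv6Address("::192.1.1.17")   # i.e. 0:0:0:0:0:0:192.1.1.17

assert v4.max_prefixlen == 32     # IPv4 addresses are 32 bits
assert v6.max_prefixlen == 128    # IPv6 addresses are 128 bits
assert int(v6) == int(v4)         # the low 32 bits are the IPv4 address
```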

- SUMMARY -

- TCP/IP is a suite of protocols

- TCP/IP defines communications between computers on the Internet

- IP determines where packets are routed based on their destination address - TCP ensures packets arrive correctly at their destination address

Lesson 4: LAN Basics

In this lesson, we will cover the fundamentals of LAN technologies. We‘ll look at

Ethernet, Token Ring, and FDDI. For each one, we‘ll look at the technology as well as its operations.

The Agenda

- Ethernet

- Token Ring

- FDDI

Common LAN Technologies

The three LAN technologies shown here account for virtually all deployed LANs: The most popular local area networking protocol today is Ethernet. Most network

administrators building a network from scratch use Ethernet as a fundamental technology.

Page 56: Network Notes

Token Ring technology is widely used in IBM networks.

FDDI networks are popular for campus LANs – and are usually built to support high

bandwidth needs for backbone connectivity.

Let‘s take a look at Ethernet in detail.

Ethernet

Ethernet and IEEE 802.3

Ethernet was initially developed by Xerox. They were later joined by Digital Equipment Corporation (DEC) and Intel to define the Ethernet 1 specification in 1980. There have been further revisions including the Ethernet standard (IEEE

Standard 802.3) which defines rules for configuring Ethernet as well as specifying how elements in an Ethernet network interact with one another. Ethernet is the most popular physical layer LAN technology because it strikes a good

balance between speed, cost, and ease of installation. These strong points, combined with wide acceptance in the computer marketplace and the ability to support

virtually all popular network protocols, make Ethernet an ideal networking technology for most computer users today. The Fast Ethernet standard (IEEE 802.3u) has been established for networks that

need higher transmission speeds. It raises the Ethernet speed limit from 10 Mbps to 100 Mbps with only minimal changes to the existing cable structure. Incorporating

Fast Ethernet into an existing configuration presents a host of decisions for the network manager. Each site in the network must determine the number of users that

Page 57: Network Notes

really need the higher throughput, decide which segments of the backbone need to be reconfigured specifically for 100BaseT and then choose the necessary hardware to

connect the 100BaseT segments with existing 10BaseT segments. Gigabit Ethernet is an extension of the IEEE 802.3 Ethernet standard. It increases

speed tenfold over Fast Ethernet, to 1000 Mbps, or 1 Gbps.

Benefits and background - Ethernet is the most popular physical layer LAN technology because it strikes a

good balance between speed, cost, and ease of installation - Supports virtually all network protocols - Xerox initiated, then joined by DEC & Intel in 1980

Revisions of Ethernet specification

- Fast Ethernet (IEEE 802.3u) raises speed from 10 Mbps to 100 Mbps - Gigabit Ethernet is an extension of IEEE 802.3 which increases speeds to 1000

Mbps, or 1 Gbps

One thing to keep in mind with Ethernet is that several framing variations exist for this common LAN technology.

These differences do not prevent manufacturers from developing network interface cards that support the common physical layer, and software that recognizes the differences between the framing formats.

Ethernet Protocol Names

Ethernet protocol names follow a fixed scheme. The number at the beginning of the name indicates the wire speed. If the word "base" appears next, the protocol is for

baseband applications. If the word "broad" appears, the protocol is for broadband applications. The alphanumeric code at the end of the name indicates the type of

cable and, in some cases, the cable length. If a number appears alone, you can determine the maximum segment length by multiplying that number by 100 meters. For example, 10Base2 is a protocol with a maximum segment length of approximately

200 meters (2 x 100 meters).
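The naming scheme just described is regular enough to decode mechanically. A hypothetical parser, for illustration only (real product names have some exceptions):

```python
# Sketch: decode an Ethernet protocol name per the scheme in the text
# (leading speed, base/broad signaling, trailing cable code or
# segment-length digit multiplied by 100 meters).
import re

def decode(name):
    m = re.fullmatch(r"(\d+)(Base|Broad)(\w+)", name, re.IGNORECASE)
    speed, band, tail = m.groups()
    info = {"speed_mbps": int(speed),
            "signaling": "baseband" if band.lower() == "base" else "broadband"}
    if tail.isdigit():                 # bare digit = max segment length / 100 m
        info["max_segment_m"] = int(tail) * 100
    else:                              # letters name the cable type (T, F, ...)
        info["cable_code"] = tail
    return info

assert decode("10Base2")["max_segment_m"] == 200    # the text's example
assert decode("100BaseT")["cable_code"] == "T"
assert decode("10Broad36")["signaling"] == "broadband"
```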

Page 58: Network Notes

Ethernet and Fast Ethernet

This chart gives you an idea of the range of Ethernet protocols including their data rate, maximum segment length, and medium.

Ethernet has survived as an essential media technology because of its tremendous flexibility and its relative simplicity to implement and understand. Although other

technologies have been touted as likely replacements, network managers have turned to Ethernet and its derivatives as effective solutions for a range of campus implementation requirements. To resolve Ethernet‘s limitations, innovators (and

standards bodies) have created progressively larger Ethernet pipes. Critics might dismiss Ethernet as a technology that cannot scale, but its underlying transmission scheme continues to be one of the principal means of transporting data for

contemporary campus applications. The most popular today is 10BaseT and 100BaseT… 10Mbps and 100Mbps

respectively using UTP wiring. Let‘s take a look at how Ethernet works.

Page 59: Network Notes

Ethernet Operation

Example:-

Let‘s say in our example here that station A is going to send information to station D. Station A will listen through its NIC card to the network. If no other users are using the network, station A will go ahead and send its message out on to the network.

Stations B and C and D will all receive the communication.

Each receiving station will inspect the MAC address at the data link layer. Upon inspection, station D will see that the MAC address matches its own and will then process the information up through the rest of the layers of the seven-layer model.

Page 60: Network Notes

As for stations B and C, they too will pull this packet up to their data link layers and inspect the MAC address. Upon inspection, they will see that the destination

MAC address does not match their own, and they will discard the packet.

Ethernet Broadcast

Broadcasting is a powerful tool that sends a single frame to many stations at the same time. Broadcasting uses a data link destination address of all 1s. In this example, station A transmits a frame with a destination address of all 1s, stations B,

C, and D all receive and pass the frame to their respective upper layers for further processing. When improperly used, however, broadcasting can seriously impact the performance

of stations by interrupting them unnecessarily. For this reason, broadcasts should be used only when the MAC address of the destination is unknown or when the

destination is all stations.

Ethernet Reliability

Ethernet is known as a very reliable local area networking protocol. In this example, A is transmitting information and B also has information to transmit. Let's

say that A and B listen to the network, hear no traffic, and transmit at the same time. A collision occurs when these two packets crash into one another on the network. Both transmissions are corrupted and unusable.

Page 61: Network Notes

When a collision occurs on the network, the NIC card sensing the collision, in this case station C's, sends out a jam signal that jams the entire network for a designated

amount of time.

Once the jam signal has been received and recognized by all of the stations on the

network, stations A and B will both back off for different amounts of time before they try to retransmit. This type of technology is known as Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
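The back-off step can be sketched with the truncated binary exponential backoff rule classically used by Ethernet: a station waits a random number of slot times, with the range doubling after each successive collision. The function below is an illustrative model, not a timing-accurate simulation.

```python
# Sketch of CSMA/CD recovery: after a collision, each station waits a
# random number of slot times before retransmitting, so the colliding
# stations usually break the tie on the next attempt.
import random

def backoff_slots(collisions_in_a_row, rng):
    """Pick a wait (in slot times) after the nth successive collision."""
    k = min(collisions_in_a_row, 10)   # classic Ethernet caps the exponent at 10
    return rng.randrange(2 ** k)       # uniform over [0, 2^k - 1] slot times

rng = random.Random(42)
a_wait = backoff_slots(1, rng)         # station A after its first collision
b_wait = backoff_slots(1, rng)         # station B after its first collision
assert a_wait in (0, 1) and b_wait in (0, 1)   # range doubles on repeat collisions
```

Because the waiting range grows with each repeated collision, heavily loaded segments back off more aggressively, which is why raw Ethernet degrades gracefully rather than livelocking.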

High-Speed Ethernet Options

Page 62: Network Notes

- Fast Ethernet - Fast EtherChannel®

- Gigabit Ethernet - Gigabit EtherChannel

We‘ve mentioned that Ethernet also has high speed options that are currently

available. Fast Ethernet is used widely at this point and provides customers with 100 Mbps performance, a ten-fold increase. Fast EtherChannel is a Cisco value-added feature that provides bandwidth up to 800 Mbps. There is now a standard for Gigabit

Ethernet as well and Cisco provides Gigabit Ethernet solutions with 1000 Mbps performance.

Let‘s look more closely at Fast EtherChannel and Gigabit Ethernet.

What Is Fast EtherChannel?

Grouping of multiple Fast Ethernet interfaces into one logical transmission path

- Scalable bandwidth up to 800+ Mbps - Using industry-standard Fast Ethernet

- Load balancing across parallel links - Extendable to Gigabit Ethernet

Fast EtherChannel provides a solution for network managers who require higher bandwidth between servers, routers, and switches than Fast Ethernet technology can

currently provide. Fast EtherChannel is the grouping of multiple Fast Ethernet interfaces into one logical transmission path providing parallel bandwidth between switches, servers,

and Cisco routers. Fast EtherChannel provides bandwidth aggregation by combining parallel 100-Mbps Ethernet links (200-Mbps full-duplex) to provide flexible, incremental bandwidth between network devices.

For example, network managers can deploy Fast EtherChannel consisting of pairs of full-duplex Fast Ethernet to provide 400+ Mbps between the wiring closet and the

data center, while in the data center bandwidths of up to 800 Mbps can be provided between servers and the network backbone to provide large amounts of scalable incremental bandwidth.

Cisco‘s Fast EtherChannel technology builds upon standards-based 802.3 full-duplex Fast Ethernet. It is supported by industry leaders such as Adaptec, Compaq, Hewlett-

Page 63: Network Notes

Packard, Intel, Micron, Silicon Graphics, Sun Microsystems, and Xircom and is scalable to Gigabit Ethernet in the future.

What Is Gigabit Ethernet?

In some cases, Fast EtherChannel technology may not be enough.

The old 80/20 rule of network traffic (80 percent of traffic was local, 20 percent was over the backbone) has been inverted by intranets and the World Wide Web. The rule

of thumb today is to plan for 80 percent of the traffic going over the backbone.

Gigabit networking is important to accommodate these evolving needs. Gigabit Ethernet builds on the Ethernet protocol but increases speed tenfold over

Fast Ethernet, to 1000 Mbps, or 1 Gbps. It promises to be a dominant player in high-speed LAN backbones and server connectivity. Because Gigabit Ethernet builds directly on Ethernet, network managers will be able to leverage their existing

knowledge base to manage and maintain Gigabit networks.

The Gigabit Ethernet spec addresses several forms of transmission media, though not all are available yet:

- 1000BaseLX: Long-wave (LW) laser over single-mode and multimode fiber - 1000BaseSX: Short-wave (SW) laser over multimode fiber

- 1000BaseCX: Transmission over balanced shielded 150-ohm 2-pair STP copper cable - 1000BaseT: Category 5 UTP copper wiring

Gigabit Ethernet allows Ethernet to scale from 10 Mbps at the desktop, to 100 Mbps to the workgroup, to 1000 Mbps in the data center. By leveraging the current Ethernet standards as well as the installed base of Ethernet and Fast Ethernet switches and routers, network

managers do not need to retrain and relearn a new technology to provide support for Gigabit Ethernet.

Token Ring (IEEE 802.5)

The Token Ring network was originally developed by IBM in the 1970s. It is still

IBM‘s primary LAN technology and is second only to Ethernet in general LAN popularity. The related IEEE 802.5 specification is almost identical to and completely

Page 64: Network Notes

compatible with IBM‘s Token Ring network. Collisions cannot occur in Token Ring networks. Possession of the token grants the

right to transmit. If a node receiving the token has no information to send, it passes the token to the next end station. Each station can hold the token for a maximum

period of time. Token-passing networks are deterministic, which means that it is possible to calculate the maximum time that will pass before any end station will be able to

transmit. This feature and several reliability features make Token Ring networks ideal for applications where delay must be predictable and robust network operation is important. Factory automation environments are examples of such applications.

Token Ring is more difficult and costly to implement. However, as the number of users in a network rises, Token Ring‘s performance drops very little. In contrast,

Ethernet‘s performance drops significantly as more users are added to the network.

Token Ring Bandwidth

Here are some of the speeds associated with Token Ring. Note that Token Ring runs at 4 Mbps or 16 Mbps. Today, most networks operate at 16 Mbps. If a network

contains even one component with a maximum speed of 4 Mbps, the whole network must operate at that speed. When Ethernet first came out, networking professionals believed that Token Ring

would die, but this has not happened. Token Ring is primarily used with IBM networks running Systems Network Architecture (SNA) networking operating

systems. Token Ring has not yet left the market because of the huge installed base of IBM mainframes being used in industries such as banking. The practical difference between Ethernet and Token Ring is that Ethernet is much

cheaper and simpler. However, Token Ring is more elegant and robust.

Page 65: Network Notes

Token Ring Topology

The logical topology of an 802.5 network is a ring in which each station receives signals from its nearest active upstream neighbor (NAUN) and repeats those signals

to its downstream neighbor. Physically, however, 802.5 networks are laid out as stars, with each station connecting to a central hub called a multistation access unit

or MAU. The stations connect to the central hub through shielded or unshielded twisted-pair wire. Typically, a MAU connects up to eight Token Ring stations. If a Token Ring network

consists of more stations than a MAU can handle, or if stations are located in different parts of a building–for example on different floors–MAUs can be chained together to create an extended ring. When installing an extended ring, you must

ensure that the MAUs themselves are oriented in a ring. Otherwise, the Token Ring will have a break in it and will not operate.

Token Ring Operation

Station access to a Token Ring is deterministic; a station can transmit only when it receives a special frame called a token. One station on a token ring network is designated as the active monitor. The active monitor will prepare a token. A token is

usually a few bits with significance to each one of the network interface cards on the network. The active monitor will pass the token into the multistation access unit. The multistation access unit then will pass the token to the first downstream neighbor.

Let‘s say in this example that station A has something to transmit. Station A will seize the token and append its data to the token. Station A will then send its token

back to the multistation access unit. The MAU will then grab the token and push it to

Page 66: Network Notes

the next downstream neighbor. This process is followed until the token reaches the destination for which it is intended.

If a station receiving the token has no information to send, it simply passes the token to the next station. If a station possessing the token has information to transmit, it

claims the token by altering one bit of the frame, the T bit. The station then appends the information it wishes to transmit and sends the information frame to the next

station on the Token Ring.

The information frame circulates the ring until it reaches the destination station, where the frame is copied by the station and tagged as having been copied. The information frame continues around the ring until it returns to the station that

originated it, and is removed. Because frames proceed serially around the ring, and because a station must claim the token before transmitting, collisions are not expected in a Token Ring network.
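The token-claiming sequence just described can be sketched as a tiny simulation. The station names, the single-bit token, and the one-frame-per-rotation rule here are simplifications for illustration only; real 802.5 tokens and frames carry priority, monitor, and addressing fields.

```python
# Simplified 802.5 token-passing sketch (illustrative only).
# A free token circulates the ring; a station with data flips the T bit,
# appends its frame, and the frame travels until the originator removes it.

class Station:
    def __init__(self, name, outbox=None):
        self.name = name
        self.outbox = outbox  # (dest, payload) waiting to send, or None

def circulate(ring):
    """Pass a free token around the ring once, letting one station transmit."""
    events = []
    token = {"t_bit": 0, "src": None, "dest": None, "data": None, "copied": False}
    for station in ring:
        if token["t_bit"] == 0 and station.outbox:
            # Claim the token by setting the T bit and appending data.
            dest, payload = station.outbox
            token.update(t_bit=1, src=station.name, dest=dest, data=payload)
            station.outbox = None
            events.append(f"{station.name} seizes token, sends to {dest}")
        elif token["t_bit"] == 1 and station.name == token["dest"]:
            token["copied"] = True  # destination copies the frame and tags it
            events.append(f"{station.name} copies frame")
    # The frame returns to its originator, which strips it off the ring.
    if token["t_bit"] == 1:
        events.append(f"{token['src']} removes frame, releases free token")
    return events

ring = [Station("A", outbox=("C", "hello")), Station("B"), Station("C")]
for e in circulate(ring):
    print(e)
```

Because only the token holder may transmit, no two stations ever send at once, which is why collisions are not expected on a Token Ring.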

Broadcasting is supported in the form of a special mechanism known as explorer packets. These are used to locate a route to a destination through one or more source

route bridges.

Page 67: Network Notes

- Token Ring Summary -

- Reliable transport, minimized collisions

- Token passing/token seizing

- 4- or 16-Mbps transport
- Little performance impact with increased number of users

- Popular at IBM-oriented sites such as banks and automated factories

FDDI - Fiber Distributed Data Interface

FDDI is an American National Standards Institute (ANSI) standard that defines a dual Token Ring LAN operating at 100 Mbps over an optical fiber medium. It is used primarily for corporate and carrier backbones.

Token Ring and FDDI share several characteristics including token passing and a ring architecture which were explored in the previous section on Token Ring.

Copper Distributed Data Interface (CDDI) is the implementation of FDDI protocols over STP and UTP cabling. CDDI transmits over relatively short distances (about 100 meters), providing data rates of 100 Mbps using a dual-ring architecture to provide

redundancy.

While FDDI is fast, reliable, and handles a lot of data well, its major drawback is the use of expensive fiber-optic cable. CDDI addresses this problem by using UTP or STP; however, notice that the maximum segment length drops significantly.

FDDI was developed in the mid-1980s to meet the needs of high-speed engineering workstations and growing demands for network reliability. Today, FDDI is frequently used as a high-speed backbone technology because of its support for high bandwidth and greater distances than copper.

FDDI Network Architecture

FDDI uses a dual-ring architecture. Traffic on each ring flows in opposite directions (called counter-rotating). The dual-rings consist of a primary and a secondary ring. During normal operation, the primary ring is used for data transmissions, and the

Page 68: Network Notes

secondary ring remains idle. The primary purpose of the dual rings is to provide superior reliability and robustness.

One of the unique characteristics of FDDI is that multiple ways exist to connect devices to the ring. FDDI defines three types of devices: single-attachment station

(SAS) such as PCs, dual attachment station (DAS) such as routers and servers, and a concentrator.

- Dual-ring architecture
- Primary ring for data transmissions
- Secondary ring for reliability and robustness
- Components
- Single attachment station (SAS)—PCs
- Dual attachment station (DAS)—servers
- Concentrator
- FDDI concentrator
- Also called a dual-attached concentrator (DAC)
- Building block of an FDDI network
- Attaches directly to both rings and ensures that any SAS failure or power-down does not bring down the ring

Example:

An FDDI concentrator (also called a dual-attachment concentrator [DAC]) is the building block of an FDDI network. It attaches directly to both the primary and

secondary rings and ensures that the failure or power-down of any single attachment station (SAS) does not bring down the ring. This is particularly useful when PCs, or similar devices that are frequently powered on and off, connect to the ring.

Page 69: Network Notes

- FDDI Summary -

- Features

- 100-Mbps token-passing network
- Single-mode fiber for longer distances (ring up to 100 km), multimode fiber (up to 2 km between stations)
- CDDI transmits at 100 Mbps over about 100 m
- Dual-ring architecture for reliability
- Optical fiber advantages versus copper
- Security, reliability, and performance are enhanced because fiber does not emit electrical signals
- Much higher bandwidth than copper
- Used for corporate and carrier backbones

- Summary -

- LAN technologies include Ethernet, Token Ring, and FDDI

- Ethernet

- Most widely used
- Good balance between speed, cost, and ease of installation
- 10 Mbps to 1000 Mbps

- Token Ring
- Primarily used with IBM networks
- 4 Mbps to 16 Mbps

- FDDI
- Primarily used for corporate backbones
- Supports longer distances
- 100 Mbps

Lesson 5: Understanding LAN Switching

This lesson covers an introduction to switching technology.

Page 70: Network Notes

The Agenda

- Shared LAN Technology

- LAN Switching Basics

- Key Switching Technologies

We'll begin by looking at traditional shared LAN technologies. We'll then look at LAN switching basics, and then some key switching technologies, such as spanning tree and multicast controls.

Let's begin our discussion by reviewing shared LAN technologies.

Shared LAN Technology

Early Local Area Networks

The earliest local area network technologies to be widely installed were thick Ethernet and thin Ethernet infrastructures, and it's important to understand some of their limitations to see where we are today with LAN switching. Thick Ethernet installations had some significant limitations, such as distance: early thick Ethernet networks were limited to only 500 meters before the signal degraded. To extend beyond that 500-meter distance, repeaters had to be installed to boost and amplify the signal. There were also limitations on the number of stations and servers we could have on the network, as well as on the placement of those workstations.

The cable itself was relatively expensive, and it was large in diameter, which made it challenging to install throughout a building as it was pulled through walls, ceilings, and so on. Adding new users, on the other hand, was relatively simple: a non-intrusive tap could be used to plug in a new station anywhere along the cable. In terms of capacity, a thick Ethernet network provided 10 megabits per second, but this was shared bandwidth, meaning that the 10 megabits was shared among all users on a given segment.

A slight improvement over thick Ethernet was thin Ethernet technology, commonly referred to as cheapernet. It was less expensive and required less space to install than thick Ethernet because it was thinner in diameter, which is where the name thin Ethernet came from. It was still relatively challenging to install, though, as it sometimes required what we call home runs: a direct run from a workstation back to a hub or concentrator. Adding users also required a momentary interruption of the network, because a cable segment had to be cut in order to add a new server or workstation. So those are some of the limitations of early thin and thick Ethernet networks. An improvement on

Page 71: Network Notes

thin and thick Ethernet technology was the addition of hubs, or concentrators, to the network. This allowed us to use something known as UTP cabling, or unshielded twisted-pair cabling.

As you can see in the diagram on the left, Ethernet is fundamentally what we call a shared technology: all users of a given LAN segment are fighting for the same amount of bandwidth. This is very similar to the cars in our diagram, all trying to get onto the freeway at once, and it's really what the frames, or packets, in our network do as we try to transmit on an Ethernet network.

This is what's actually occurring in a hub. Even though each device has its own cable segment connecting into the hub, all devices are still fighting for the same fixed amount of bandwidth. Hubs are sometimes called Ethernet concentrators or Ethernet repeaters, and they're basically self-contained Ethernet segments within a box. So while it physically looks as though every workstation has its own segment, they're all interconnected inside the hub, and it's still a shared Ethernet technology. Hubs are also passive devices, meaning they're virtually transparent to end users: the end users don't even know the devices exist. Hubs play no role in forwarding decisions and provide no segmentation within the network whatsoever, basically because they work at Layer 1 in the OSI framework.

Collisions: Telltale Signs

A by-product that we have in any Ethernet network is something called collisions.

And this is a result of the fundamental characteristic of how any Ethernet network

Page 72: Network Notes

works. Basically, in an Ethernet network many stations share the same segment, and any one of these stations can transmit at any given time. If two or more stations try to transmit at the same time, the result is what we call a collision. Collisions are one of the early telltale signs that your Ethernet network is becoming too congested, or that you simply have too many users on the same segment. When the number of collisions in the network becomes excessive, it causes sluggish network response times, and a good way to measure that is by the increasing number of user complaints reported to the network manager.

Other Bandwidth Consumers

It's also important to understand fundamentally how transmissions can occur in the network. There are basically three different ways we can communicate. The most common is the unicast transmission: one transmitter trying to reach one receiver, which is by far (or hopefully) the most common form of communication in our network.

Another way to communicate is with a mechanism known as a broadcast, in which one transmitter tries to reach all receivers in the network. As you can see in the middle diagram, the server station sends out one message, and it is received by everyone on that particular segment.

Page 73: Network Notes

The last mechanism is what is known as a multicast. A multicast is when one transmitter tries to reach not everyone, but a subset, or group, of the entire segment. As you can see in the bottom diagram, we reach two stations, but one station that doesn't need to participate is not in the multicast group. So those are the three basic ways we can communicate within our local area network.

Broadcasts Consume Bandwidth

Now, in terms of broadcasts, it's relatively easy to broadcast in a network, and broadcasting is a transmission mechanism that many different protocols use to communicate certain information, such as address resolution. Address resolution is something all protocols need in order to map logical Layer 3 addresses to Layer 2 MAC addresses. For example, in an IP network we use ARP, the Address Resolution Protocol, which allows us to map Layer 3 IP addresses down to Layer 2 MAC-layer addresses. Routing protocol information is also distributed by way of broadcasting, and some key network services in our networks rely on broadcast mechanisms as well.

And it doesn't really matter what the protocol is: whether it's AppleTalk, Novell IPX, or TCP/IP, all of these Layer 3 protocols rely on the broadcast mechanism. In other words, all of these protocols produce broadcast traffic in a network.
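The ARP-style resolution just described can be sketched as follows. The MAC and IP addresses are hypothetical; the point is that every station on the segment is interrupted by the broadcast, even though only one replies.

```python
# Minimal ARP-style broadcast sketch (hypothetical addresses).
# The requester broadcasts "who has <ip>?"; every station on the segment
# receives the broadcast (consuming its CPU), but only the owner replies.

stations = {
    "aa:aa:aa:aa:aa:aa": "10.0.0.1",
    "bb:bb:bb:bb:bb:bb": "10.0.0.2",
    "cc:cc:cc:cc:cc:cc": "10.0.0.3",
}

def arp_request(target_ip):
    """Broadcast to all stations; return (stations interrupted, replying MAC)."""
    interrupted = 0
    reply = None
    for mac, ip in stations.items():
        interrupted += 1          # every NIC passes a broadcast up to its CPU
        if ip == target_ip:
            reply = mac           # only the owner of the IP answers
    return interrupted, reply

count, mac = arp_request("10.0.0.2")
print(f"{count} stations interrupted, reply from {mac}")
```

Note that the interrupt count equals the segment size regardless of who the answer comes from, which is exactly the CPU cost the next section describes.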

Page 74: Network Notes

Broadcasts Consume Processor Performance

Now, in addition to consuming bandwidth on the network, another by-product of broadcast traffic is that it consumes CPU cycles as well. Since broadcast traffic is sent out and received by all stations on the network, every broadcast interrupts the CPU of every station connected to the network. The diagram shows the results of a study performed with several different CPUs on a network, showing the relative level of CPU degradation as the number of broadcasts on the network increases.

The study was based on a SPARC2 CPU, a SPARC5 CPU, and a Pentium CPU. As the number of broadcasts increased, the number of CPU cycles consumed simply by processing and listening to that broadcast traffic increased dramatically. The other thing to recognize is that much of the broadcast traffic in the network is not needed by the stations that receive it. So what we have in shared LAN technologies is broadcast traffic running throughout the network, needlessly consuming bandwidth and needlessly consuming CPU cycles.

Hub-Based LANs

So hubs were introduced into the network as a better way to scale thin and thick Ethernet networks. It's important to remember, though, that these are still shared Ethernet networks, even though we're using hubs.

Page 75: Network Notes

Basically, we have an individual desktop connection for each workstation or server in the network, and this allows us to centralize all of our cabling back to a wiring closet, for example. There are still security issues here, though: it's still relatively easy to tap in and monitor a network by way of a hub. In fact, it's even easier, because all of the resources are generally located centrally. If we need to scale this type of network beyond the workgroup, we're going to rely on routers.

Hubs make adds, moves, and changes easier, because we can simply go to the wiring closet and move cables around; we'll see later that it's even easier with LAN switching. Also, in a hub- or concentrator-based network, workgroups are determined simply by the physical hub we plug into. Once again, we'll see later how LAN switching improves this as well.

Bridges

Another approach is to add bridges. In order to scale our networks, we need to do something known as segmentation, and bridges provide a certain level of segmentation by adding a certain amount of intelligence to the network. Bridges operate at Layer 2, while hubs operate at Layer 1, and operating at Layer 2 gives a bridge more intelligence with which to make forwarding decisions.

That's why we say bridges are more intelligent than hubs: they can actually listen in, or eavesdrop, on the traffic going through the bridge, look at source and destination addresses, and build a table that allows them to make intelligent forwarding decisions.

Bridges collect and pass frames between two network segments, making intelligent forwarding decisions as they do so. As a result, they can provide greater control of the traffic within our network.

Page 76: Network Notes

Switches—Layer 2

To provide even better control, we're going to look to switches, which provide the most control in our network, at least at Layer 2. As you can see in the diagram, we have improved the model of traffic going through our network.

Getting back to our traffic analogy, looking at the highway, we've subdivided the main highway so that each car has its own lane to drive through the network. Fundamentally, this is what we can provide in our data networks as well. When we look at our network, we see that physically each station has its own cable into the network; conceptually, we can think of this as each workstation having its own lane through the highway. This is known as micro-segmentation, which is a fancy way of saying that each workstation gets its own dedicated segment through the network.

Switches versus Hubs

If we compare that with a hub or a bridge, those devices limit the number of simultaneous conversations we can have at a time. Remember that if two stations tried to communicate in a hubbed environment, that caused collisions. In a switched environment we don't expect collisions, because each workstation has its own dedicated path through the network. What that means in terms of bandwidth and scalability is that we have dramatically more bandwidth in the network: each station now has a dedicated 10 megabits per second of bandwidth.

Page 77: Network Notes

So when we compare switches with hubs, remember that in the top diagram we're looking at a hub, where all of our traffic is fighting for the same fixed amount of bandwidth. Looking at the bottom diagram, you can see that we've improved traffic flow through the network because we've provided a dedicated lane for each workstation.

The Need for Speed: Early Warning Signs

Now, how can you tell if you have congestion problems in your network? Some early warning signs include increased delay on file transfers: if basic file transfers are taking a very long time, we may need more bandwidth. Another thing to watch for is print jobs that take a very long time to print; if the time from queuing a job at the workstation to actually printing it is increasing, that's an indication of LAN congestion problems. Also, if your organization wants to take advantage of multimedia applications, you're going to need to move beyond basic shared LAN technologies, because they don't have the multicast controls that multimedia applications need.

Typical Causes of Network Congestion

If we're seeing those early warning signs, one cause of congestion to look for is too many users on a shared LAN segment. Remember that shared LAN segments have a fixed amount of bandwidth: as we add users, we proportionally degrade the amount of bandwidth per user. At a certain number of users there will be too much congestion, too many collisions, and too many simultaneous conversations trying to occur at the same time, and that's going to reduce performance.

Also consider the newer technologies we're using in our workstations. With early LAN technologies, workstations were relatively limited in the amount of traffic they could dump onto the network. With newer, faster CPUs, faster buses, faster peripherals, and so on, it's much easier for a single workstation to fill up a network segment. Because we have much faster PCs, we can also do more with the applications on them, so we can more quickly fill up the available bandwidth.

Page 78: Network Notes

Network Traffic Impact from Centralization of Servers

The way traffic is distributed on our network can have an impact as well. A very common practice in many networks is to build what's known as a server farm. In a server farm, we effectively centralize all of the resources on the network that need to be accessed by all of the workstations. When we do that, we cause congestion on those centralized, or backbone, segments of the network.

Servers are gradually moving into a central area (data center), rather than being located throughout the company, in order to:

- Ensure company data integrity

- Maintain the network and ensure operability
- Maintain security
- Perform configuration and administrative functions

More centralized servers increase the bandwidth demands on campus and workgroup backbones

Today’s LANs

- Mostly switched resources; few shared

- Routers provide scalability
- Groups of users determined by physical location

When we look at today's LANs, the ones most commonly implemented are mostly switched infrastructures. Because of the price point of deploying switches, many companies are bypassing shared hub technologies and

Page 79: Network Notes

moving directly to switches. Even within switched networks, at some point we still need routers to provide scalability. We also see that groupings of users are largely determined by physical location. So that's a quick look at traditional shared LAN technologies. Now that we know those limitations, we want to look at how we can fix some of these issues: how we can deploy LAN switches to take advantage of new, improved technologies.

LAN Switching Basics

- Enables dedicated access

- Eliminates collisions and increases capacity
- Supports multiple conversations at the same time

First of all, it's important to understand why we use LAN switching. Basically, switches provide what we earlier called micro-segmentation: dedicated bandwidth for each user on the network. This eliminates collisions and effectively increases the capacity for each station connected to the network. It also supports multiple simultaneous conversations at any given time, which dramatically improves both the bandwidth available and the scalability of the network.

LAN Switch Operation

So let's take a look at the fundamental operation of a LAN switch to see what it can

do for us. As you can see indicated in the diagram, we have some data that we need to transmit from Station A to Station B.

Page 80: Network Notes

Now, as we watch this traffic go through the network, remember that the switch

operates at Layer 2. What that means is the switch has the ability to look at the MAC-layer address, the Media Access Control address, that's on each frame as it goes

through the network.

And we're going to see that the switch actually looks at the traffic as it goes through, picks off that MAC address, and stores it in an address table. So, as the traffic goes through, you can see that we've made an entry in this table recording the station and the switch port it's connected to.

Page 81: Network Notes

Now, once that frame of data is in the switch, we have no choice but to flood it to all ports, because we don't know where the destination station resides.

Once that address entry is made into the table, though, when we have a response coming back from Station B, going back to Station A, we now know where Station A

is connected to the network.

So we transmit our data into the switch, but notice the switch doesn't flood the traffic this time; it sends it only out port number 3. The reason is that we know exactly where Station A is on the network because of the original transmission: on that original transmission we were able to note where that MAC address came from, which allows us to deliver the traffic more efficiently.
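The learn-flood-forward behavior just described can be sketched with a minimal MAC address table. The port numbers and station names are hypothetical, and real switches also age out entries, which this sketch omits.

```python
# Transparent-switch learning sketch: learn source MACs per port,
# flood unknown destinations, forward known ones out a single port.

class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # forward out one known port
        # Unknown destination: flood to all ports except the ingress port.
        return [p for p in range(1, self.num_ports + 1) if p != in_port]

sw = Switch(num_ports=4)
print(sw.receive(3, "A", "B"))  # B still unknown, so the frame is flooded
print(sw.receive(1, "B", "A"))  # A was learned on port 3, so forward there only
```

After the first frame, the table already knows Station A's port, which is why the reply is not flooded.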

Switching Technology: Full Duplex

Another concept in LAN switching that dramatically improves scalability is full-duplex transmission, which effectively doubles the amount of bandwidth between nodes. This can be important, for example, between high-bandwidth consumers, such as between a switch and a

Page 82: Network Notes

server connection. It provides essentially collision-free transmission in the network.

For a 10 megabit per second connection, full duplex effectively provides 10 megabits of transmit capacity and 10 megabits of receive capacity, for a total of 20 megabits of capacity on a single connection. Likewise, a 100 megabit per second connection can effectively provide 200 megabits per second of throughput.

Switching Technology: Two Methods

Another concept in switching is that there are actually two different modes of switching. This is important because it can affect the performance, or latency, of switching through our network.

Cut-through

First of all, we have something known as cut-through switching. With cut-through switching, as traffic flows through the switch, the switch simply reads the destination MAC address; in other words, it finds out where the traffic needs to go. As the data flows through the switch, we don't look at all of the data. We simply look at that destination address and then, as the name implies, cut the frame through to its destination without continuing to read the rest of the frame.

Store-and-forward

This allows cut-through switching to improve performance over the other method, known as store-and-forward. With store-and-forward switching, we actually read not only the destination address but the entire frame of data. As we read that entire

Page 83: Network Notes

frame, we then make a decision on where it needs to go and send it on its way. The obvious trade-off is that reading the entire frame takes longer.

But the reason we read the entire frame is that we can perform error detection on it, which may increase reliability if we're having problems in our switched network. So cut-through switching is faster, but the trade-off is that we can't do any error detection in our switched network.
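The trade-off between the two modes can be sketched as follows. The frame layout is simplified (destination MAC first, with a CRC32 standing in for the real Ethernet FCS), and the byte counts are illustrative.

```python
# Cut-through vs. store-and-forward sketch (simplified Ethernet frame:
# 6-byte destination MAC first, 4-byte checksum last).
import zlib

def make_frame(dst6, payload):
    body = dst6 + payload
    fcs = zlib.crc32(body).to_bytes(4, "big")   # stand-in for the real FCS
    return body + fcs

def cut_through(frame):
    """Read only the destination address, then start forwarding."""
    return {"bytes_read": 6, "dst": frame[:6], "checked": False}

def store_and_forward(frame):
    """Buffer the whole frame and verify the checksum before forwarding."""
    body, fcs = frame[:-4], frame[-4:]
    ok = zlib.crc32(body).to_bytes(4, "big") == fcs
    return {"bytes_read": len(frame), "dst": frame[:6], "checked": ok}

frame = make_frame(b"\xaa" * 6, b"some payload bytes")
print(cut_through(frame))        # low latency, no error detection
print(store_and_forward(frame))  # higher latency, corrupt frames detected
```

Cut-through's latency is fixed at the header size, while store-and-forward's grows with the frame but catches corrupted frames before they propagate.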

Key Switching Technologies

Let's look at some key technologies within LAN switching.

- 802.1d Spanning-Tree Protocol

- Multicasting

The Need for Spanning Tree

Specifically, we'll look at the Spanning Tree Protocol and some multicasting controls in our network. As we build out large networks, one of the problems at Layer 2 of the OSI model is that if we're making forwarding decisions only at Layer 2, we cannot have any Physical Layer loops in our network.

If we have a simple network like the one in the diagram, then any multicast, broadcast, or unknown traffic is going to create storms of traffic that loop endlessly through the network. So in order to prevent that situation, we need to cut out the loops.

Page 84: Network Notes

802.1d Spanning-Tree Protocol (STP)

The Spanning Tree Protocol, or STP, is an industry standard defined by the IEEE standards committee, known as the 802.1d Spanning Tree Protocol. It allows us to have physical redundancy in the network while logically disconnecting the loops.

It's important that we logically disconnect the loops, because that allows us to dynamically re-establish a connection if we need to in the event of a failure in the network. The way switches do this (and bridges can do this as well) is by communicating back and forth via a protocol: they basically exchange little hello messages.

If they stop hearing a given communication from a certain device on the network, they know that a network device has failed, and when a network failure occurs they re-establish a link in order to maintain redundancy. Technically, these little exchanges are known as BPDUs, or Bridge Protocol Data Units.

Now, the Spanning Tree Protocol works just fine, but one of the issues with spanning tree is that it can take anywhere from half a minute to a full minute for the network to fully converge, that is, for all devices to know the status of the network. To improve on this, Cisco has introduced refinements such as PortFast and UplinkFast, which allow the Spanning Tree Protocol to converge even faster.
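A heavily simplified sketch of what spanning tree accomplishes: elect a root bridge (lowest bridge ID) and logically block the redundant links. Real 802.1d exchanges BPDUs, weighs port costs, and selects designated ports; the bridge IDs and topology here are hypothetical.

```python
# Simplified spanning-tree sketch: elect the bridge with the lowest
# bridge ID as root, then keep only the links discovered on a
# breadth-first walk from the root; everything else is logically blocked.
from collections import deque

def spanning_tree(links):
    nodes = {n for link in links for n in link}
    root = min(nodes)                      # lowest bridge ID wins election
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for a, b in links:
            if node in (a, b):
                other = b if node == a else a
                if other not in seen:
                    seen.add(other)
                    active.add((a, b))     # link stays in the tree
                    queue.append(other)
    blocked = [l for l in links if l not in active]
    return root, blocked

links = [(1, 2), (2, 3), (1, 3)]           # physical loop between 3 switches
root, blocked = spanning_tree(links)
print(f"root bridge: {root}, blocked links: {blocked}")
```

The blocked link is only logically disabled: if an active link fails, the protocol can bring it back, which is exactly the redundancy the transcript describes.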

Multicasting

Now, another issue in Layer 2, or switched, networks is control of multicast traffic. Many new applications are emerging today, such as video-based applications and desktop conferencing, that take advantage of multicasting.

But without special controls in the network, multicasting is going to quickly congest the network. So what we need is to add intelligent multicasting to the network.

Multipoint Communications

Now, again, let's recall that there are a few fundamental ways to achieve multipoint communications, because that's effectively what we're trying to do with our video-based applications, or any multimedia-type applications that use this mechanism.

Page 85: Network Notes

One way is to broadcast our traffic, which effectively sends our messages everywhere. The obvious downside is that not everybody necessarily needs to hear these communications. So while broadcasting will get the job done, it's not the most efficient way to do it. The better way is multicasting:

the applications use a special group address to communicate with only those stations, or groups of stations, that need to receive the transmissions. That's what we mean by multipoint communications, and it's the more effective way to do it.

Multicast

This also needs to be done dynamically, because multicast groups change over time. To do this, we need some special protocols in the network. First of all, in the wide area, we need multicast routing protocols. We already have routing protocols in the wide area, such as RIP (the Routing Information Protocol), OSPF, or IGRP, but we need to add multicast extensions so that these routing protocols understand how to handle our multicast groups.

An example of a multicast routing protocol is PIM, or Protocol Independent Multicast, which is simply an extension of the existing routing protocols in the network. Another protocol is IGMP, the Internet Group Management Protocol. IGMP allows us to identify the group membership of the IP stations that want to participate in a given multicast conversation.

Page 86: Network Notes

So, as indicated by the red traffic in our network, channel 1 is being multicast through the network, and by way of IGMP the workstations can signal back to the video servers that they want to participate. Once the multicast routing protocols are added, we can efficiently deliver the traffic in the wide area. Now, another challenge is that once the traffic gets to the local area network, or the switch, by default it is going to be flooded to all stations in the network.

End-to-End Multicast

And that's because IGMP works at Layer 3, but our LAN switch works at Layer 2, so the switch has no concept of our Layer 3 group membership. What we need to do is add some intelligence to the switch. The intelligence we're going to add is a protocol such as CGMP, the Cisco Group Management Protocol. Another similar technology we could add is called IGMP snooping, which has the same effect in the local area network.

And that effect is, as you see in the diagram, to limit our multicast traffic to only

those stations that want to participate in the group. So now, as you can see, the red channel, or channel number 1, is delivered to only station #1 and station #3.

Page 87: Network Notes

Station 2 does not receive this content because it doesn't wish to participate. So the advantage of adding protocols such as IGMP, CGMP, IGMP Snooping, and Protocol Independent Multicast to our network is that we achieve bandwidth savings

for our multicast traffic.
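The effect of CGMP or IGMP Snooping can be pictured as a per-group port table inside the switch. A small sketch, where the port numbers and group address are invented for illustration:

```python
# Snooped group table: multicast group -> set of switch ports whose
# attached stations sent IGMP joins for that group.
GROUP_PORTS = {"239.1.1.1": {1, 3}}  # channel #1: stations 1 and 3 joined
ALL_PORTS = {1, 2, 3, 4}

def forward_ports(dst_group, ingress_port):
    # Without snooping, a multicast frame floods every port; with the
    # table, it reaches only the joined ports (never the ingress port).
    members = GROUP_PORTS.get(dst_group, ALL_PORTS)  # unknown group: flood
    return members - {ingress_port}

print(forward_ports("239.1.1.1", 4))  # channel #1 reaches ports 1 and 3 only
```

The fallback to flooding for unknown groups mirrors what a plain Layer 2 switch does before any group intelligence is added.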

Why Use Multicast?

What we see indicated in red is that as we add stations to our multicast group, the amount of bandwidth we need is going to increase in a linear fashion. But

by adding multicast controls, you can see the amount of bandwidth is reduced dramatically, because these intelligent multicast controls can make better use of the bandwidth in our network. Adding multicast controls is

also going to reduce the cost of networking, because we've reduced the bandwidth that we need, so that's going to provide a dramatic improvement to our

Local Area Network.
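That linear-versus-flat comparison is easy to quantify. A sketch, assuming a hypothetical 500 Kbps video channel:

```python
STREAM_KBPS = 500  # assumed per-channel video rate (hypothetical)

def unicast_kbps(receivers):
    # Without multicast, the server sends one copy per receiver,
    # so the load on its link grows linearly with group size.
    return receivers * STREAM_KBPS

def multicast_kbps(receivers):
    # With multicast, the server sends a single copy; routers and
    # switches replicate it only where paths diverge.
    return STREAM_KBPS if receivers else 0

for n in (1, 10, 100):
    print(n, unicast_kbps(n), multicast_kbps(n))
```

At 100 receivers the unicast server link needs 50 Mbps while the multicast source still sends only 500 Kbps, which is the bandwidth saving the red curve illustrates.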

- Summary -

- Switches provide dedicated access
- Switches eliminate collisions and increase capacity

- Switches support multiple conversations at the same time

- Switches provide intelligence for multicasting

Page 88: Network Notes

Lesson 6: WAN Basics

In this Lesson, we‘ll discuss the WAN. We‘ll start by defining what a WAN is, and then move on to talking about basic technology such as WAN devices and circuit and

packet switching. We'll also cover transmission options from POTS (plain old telephone service) to Frame

Relay, to leased lines, and more. Finally, we‘ll discuss wide area requirements including a section on minimizing WAN charges with bandwidth optimization features.

The Agenda

- WAN Basics

- Transmission Options

- WAN Requirements & Solutions

WAN Basics

What Is a WAN?

So, what is a WAN? A WAN is a data communications network that serves users

across a broad geographic area and often uses transmission facilities provided by common carriers such as telephone companies. These providers are companies like MCI, AT&T, UUNET, and Sprint. There are also many small service providers that

provide connectivity to one of the larger carriers' networks and may even have email servers to store clients' mail until it is retrieved.

- Telephone service is commonly referred to as plain old telephone service (POTS).

- WAN technologies function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

Common WAN network components include WAN switches, access servers, modems, CSU/DSUs, and ISDN Terminals.

WAN Devices

Page 89: Network Notes

A WAN switch is a multiport internetworking device used in carrier networks. These devices typically switch traffic such as Frame Relay, X.25, and SMDS and operate at

the data link layer of the OSI reference model. These WAN switches can share bandwidth among allocated service priorities, recover from outages, and provide

network design and management systems.

A modem is a device that interprets digital and analog signals, enabling data to be transmitted over voice-grade telephone lines. At the source, digital signals are converted to analog. At the destination, these analog signals are returned to their digital form.

An access server is a concentration point for dial-in and dial-out connections.

A channel service unit/digital service unit (CSU/DSU) is a digital interface device that adapts the physical interface on a data terminal equipment device (such as a

terminal) to the interface of a data circuit-terminating equipment (DCE) device (such as a switch) in a switched-carrier network. The CSU/DSU also provides signal timing for

communication between these devices.

An ISDN terminal is a device used to connect ISDN Basic Rate Interface (BRI) connections to other interfaces, such as EIA/TIA-232. A terminal adapter is essentially an ISDN modem.

WAN Terminating Equipment

The WAN physical layer describes the interface between the data terminal equipment (DTE) and the data circuit-terminating equipment (DCE). Typically, the DCE is the service provider, and the DTE is the attached device (the customer‘s device). In this

model, the services offered to the DTE are made available through a modem or channel service unit/data service unit (CSU/DSU).

CSU/DSU (Channel Service Unit / Data Service Unit): Device that connects the end-user equipment to the local digital telephone loop or to the service provider's data transmission loop. The DSU adapts the physical interface on a DTE device to a

Page 90: Network Notes

transmission facility such as T1 or E1. Also responsible for such functions as signal timing for synchronous serial transmissions.

Unless a company owns (literally) the lines over which they transport data, they must utilize the services of a Service Provider to access the wide area network.

Circuit Switching

- Dedicated physical circuit established, maintained, and terminated through a

carrier network for each communication session
- Datagram and data-stream transmissions

- Operates like a normal telephone call

- Example: ISDN

Service providers typically offer both circuit switching and packet switching services. Circuit switching is a WAN switching method in which a dedicated physical circuit is

established, maintained, and terminated through a carrier network for each communication session. Circuit switching accommodates two types of transmissions:

datagram transmissions and data-stream transmissions. Used extensively in telephone company networks, circuit switching operates much like a normal telephone call. Integrated Services Digital Network (ISDN) is an example of a circuit-

switched WAN technology.

Packet Switching

Page 91: Network Notes

Packet switching is a WAN switching method in which network devices share a single point-to-point link to transport packets from a source to a destination across a

carrier network. Statistical multiplexing is used to enable devices to share these circuits. Asynchronous Transfer Mode (ATM), Frame Relay, Switched Multimegabit

Data Service (SMDS), and X.25 are examples of packet-switched WAN technologies.

- Network devices share a point-to-point link to transport packets from a source to a destination across a carrier network

- Statistical multiplexing is used to enable devices to share these circuits
- Examples: ATM, Frame Relay, SMDS, X.25

WAN Virtual Circuits

- A logical circuit ensuring reliable communication between two devices
- Switched virtual circuits (SVCs): dynamically established on demand, torn down when transmission is complete, and used when data transmission is sporadic
- Permanent virtual circuits (PVCs): permanently established; save bandwidth for cases where certain virtual circuits must exist all the time

- Used in Frame Relay, X.25, and ATM

A virtual circuit is a logical circuit created to ensure reliable communication between two network devices. Two types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits (PVCs). Virtual circuits are used in Frame Relay, X.25, and ATM.

SVCs are dynamically established on demand and are torn down when transmission is complete. SVCs are used in situations where data transmission is sporadic.

PVCs are permanently established. PVCs save bandwidth associated with circuit establishment and tear down in situations where certain virtual circuits must exist all the time.

WAN Protocols

The OSI model provides a conceptual framework for communication between

computers, but the model itself is not a method of communication. Actual communication is made possible by using communication protocols. A protocol

implements the functions of one or more of the OSI layers. A wide variety of

Page 92: Network Notes

communication protocols exist, but all tend to fall into one of the following groups:

- LAN protocols: operate at the physical and data link layers and define communication over the various LAN media

- WAN protocols: operate at the lowest three layers and define communication over the various wide-area media.

- Network protocols: are the various upper-layer protocols in a given protocol suite.

- Routing protocols: network-layer protocols responsible for path determination and traffic switching.

SDLC:-

Synchronous Data Link Control. IBM‘s SNA data link layer communications protocol. SDLC is a bit-oriented, full-duplex serial protocol that has spawned numerous similar protocols, including HDLC and LAPB.

HDLC:-

High-Level Data Link Control. Bit-oriented synchronous data link layer protocol developed by ISO. Specifies a data encapsulation method on synchronous serial links using frame characters and checksums.

LAPB:-

Link Access Procedure, Balanced. Data link layer protocol in the X.25 protocol stack. LAPB is a bit-oriented protocol derived from HDLC.

PPP:-

Point-to-Point Protocol. Provides router-to-router and host-to-network connections over synchronous and asynchronous circuits with built-in security features. Works

with several network layer protocols, such as IP, IPX, & ARA.

X.25 PTP:-

Packet Level Protocol. Network layer protocol in the X.25 protocol stack. Defines how connections are maintained for remote terminal access and computer

Page 93: Network Notes

communications in PDNs. Frame Relay is superseding X.25.

ISDN:-

Integrated Services Digital Network. Communication protocol, offered by telephone

companies, that permits telephone networks to carry data, voice, and other source traffic.

Frame Relay:-

Industry-standard, switched data link layer protocol that handles multiple virtual circuits using HDLC encapsulation between connected devices. Frame Relay is more

efficient than X.25, and generally replaces it.

There are a number of transmission options available today. They fall into either the analog or the digital category. Next let's take a brief look at each of these transmission

types.

POTS Using Modem Dialup

Page 94: Network Notes

Analog modems using basic telephone service are asynchronous transmission-based, and have the following benefits:

- Available everywhere

- Easy to set up
- Dial anywhere on demand
- The lowest cost alternative of any wide-area service

Integrated Services Digital Network (ISDN)

ISDN is a digital service that can use asynchronous or, more commonly, synchronous transmission. ISDN can transmit data, voice, and video over existing copper phone lines. Instead of leasing a dedicated line for high-speed digital transmission, ISDN

offers the option of dialup connectivity—incurring charges only when the line is active.

ISDN provides a high-bandwidth, cost-effective solution for companies requiring light or sporadic high-speed access to either a central or branch office. Companies needing more permanent connections should evaluate leased-line

connections.

- High bandwidth: up to 128 Kbps per Basic Rate Interface
- Dial on demand
- Multiple channels
- Fast connection time
- Monthly rate plus cost-effective, usage-based billing
- Strictly digital

Page 95: Network Notes

ISDN comes in two flavors, Basic Rate Interface (BRI) and Primary Rate Interface (PRI). BRI provides two "B" or bearer channels of 64 Kbps each and one additional signaling channel called the "D" or delta channel.

While it requires only one physical connection, ISDN provides two channels that remote telecommuters use to connect to the company network.

PRI provides up to 23 bearer channels of 64 Kbps each and one D channel for signaling. That's 23 channels but with only one physical connection, which makes it an elegant solution: there's no wiring mess (PRI service typically provides 30 bearer channels outside the U.S. and Canada). You'll want to use PRI at your central site if you plan to have many ISDN dial-in clients.
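The channel arithmetic above can be checked directly; each bearer channel contributes 64 Kbps of user bandwidth, while the D channel carries only signaling:

```python
B_CHANNEL_KBPS = 64

def isdn_kbps(b_channels):
    # Usable data bandwidth comes from the bearer channels only;
    # the D channel carries signaling, not user data.
    return b_channels * B_CHANNEL_KBPS

bri = isdn_kbps(2)      # BRI: 2B+D  -> 128 Kbps
pri_na = isdn_kbps(23)  # PRI (U.S./Canada): 23B+D -> 1472 Kbps
pri_eu = isdn_kbps(30)  # PRI (elsewhere): 30B+D -> 1920 Kbps
```

This is why BRI is quoted as "up to 128 Kbps" and why a North American PRI lines up so neatly with a T1's 24 channels (23 bearer plus 1 signaling).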

Leased Line

Leased lines are most cost-effective if a customer‘s daily usage exceeds four to six

hours. Leased lines offer predictable throughput with bandwidth typically 56 Kbps to 1.544 Mbps. They require one connection per physical interface (namely, a

synchronous serial port).

- One connection per physical interface
- Bandwidth: 56 Kbps–1.544 Mbps
- T1/E1 and fractional T1/E1
- Cost effective at 4–6 hours daily usage
- Dedicated connections with predictable throughput
- Permanent
- Cost varies by distance

Page 96: Network Notes

Frame Relay

Frame Relay provides a standard interface to the wide-area network for bridges, routers, front-end processors (FEPs), and other LAN devices. A Frame Relay interface

is designed to act like a wide-area LAN: it relays data frames directly to their destinations at very high speeds. Frame Relay frames travel over predetermined virtual circuit paths, are self-routing, and arrive at their destination in the correct

order. Frame Relay is designed to handle the LAN-type bursty traffic efficiently. The guaranteed bandwidth (known as committed information rate or CIR) is typically

between 56 Kbps and 1.544 Mbps. The cost is normally not distance-sensitive.

Connecting Offices with Frame Relay

Companies that require office-to-office communications usually choose between a

dedicated leased-line connection or a packet-based service, such as Frame Relay or X.25. As a rule, higher connect times make leased-line solutions more cost-effective. Like ISDN, Frame Relay requires only one physical connection to the Frame Relay

network, but can support many Permanent Virtual Circuits, or PVCs.

Frame Relay service is often less expensive than leased lines, and the cost is based on:

- The committed information rate (CIR), which can be exceeded up to the port speed when the capacity is available on your carrier‘s network.

Page 97: Network Notes

- Port speed
- The number of permanent virtual circuits (PVCs) you require; a benefit to users

who need reliable, dedicated connections to resources simultaneously.

X.25

X.25 networks implement the internationally accepted ITU-T standard governing the operation of packet switching networks. Transmission links are used only when needed. X.25 was designed almost 20 years ago when network link quality was

relatively unstable. It performs error checking along each hop from source node to destination node.

The bandwidth is typically between 9.6 Kbps and 64 Kbps. X.25 is widely available in many parts of the world including North America, Europe, and Asia.

There is a large installed base of X.25 devices.

Digital Subscriber Line (xDSL)

- DSL is a pair of "modems" on each end of a copper wire pair
- DSL converts ordinary phone lines into high-speed data conduits
- Like dial, cable, wireless, and T1, DSL by itself is a transmission technology, not a complete solution
- End-users don't "buy" DSL; they "buy" services, such as high-speed Internet access, intranet, leased line, voice, VPN, and video on demand

- Service is limited to certain geographical areas

Digital subscriber line (DSL) technology is a high-speed service that, like ISDN, operates over ordinary twisted-pair copper wires supplying phone service to

businesses and homes in most areas. DSL is often more expensive than ISDN in markets where it is offered today. Using special modems and dedicated equipment in the phone company's switching

office, DSL offers faster data transmission than either analog modems or ISDN service, plus, in most cases, simultaneous voice communications over the same lines. This means you don't need to add lines to supercharge your data access speeds. And

Page 98: Network Notes

since DSL devotes a separate channel to voice service, phone calls are unaffected by data transmissions.

DSL Modem Technology

DSL has several flavors. ADSL delivers asymmetrical data rates (for example, data

moves faster on the way to your PC than it does on the way out to the Internet). Other DSL technologies deliver symmetrical data rates (the same speeds traveling in and out of your

PC). The type of service available to you will depend on the carriers operating in your area. Because DSL works over the existing telephone infrastructure, it should be easy to

deploy over a wide area in a relatively short time. As a result, the pursuit of market share and new customers is spawning competition between traditional phone

companies and a new breed of firms called competitive local exchange carriers (CLECs).

Asynchronous Transfer Mode (ATM)

ATM is short for Asynchronous Transfer Mode, and it is a technology capable of

transferring voice, video and data through private and public networks. It uses VLSI technology to segment data at high speeds into units called cells. Basically it carves

up Ethernet or Token ring packets and creates cells out of them.

Each cell contains 5 bytes of header information and 48 bytes of payload, for 53 bytes total

in every cell. Each cell contains identifiers that specify the data stream to which it belongs. ATM is capable of T3 speeds (E3 speeds in Europe) as well as fiber speeds, like

Page 99: Network Notes

SONET, which is synchronous optical networking, at speeds of OC-1 and up. ATM technology is primarily used in enterprise backbones or in WAN links.
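The cell arithmetic is easy to sketch: segmenting a packet means dividing it into 48-byte payloads and prepending a 5-byte header to each cell.

```python
import math

CELL_HEADER = 5    # bytes of header per cell
CELL_PAYLOAD = 48  # bytes of payload per cell
CELL_SIZE = CELL_HEADER + CELL_PAYLOAD  # 53 bytes total

def cells_needed(packet_bytes):
    # Each cell carries at most 48 bytes of the original packet.
    return math.ceil(packet_bytes / CELL_PAYLOAD)

def wire_bytes(packet_bytes):
    return cells_needed(packet_bytes) * CELL_SIZE

# A 1500-byte Ethernet frame is carved into 32 cells, or 1696 bytes on
# the wire; the extra bytes are the fixed per-cell header ("cell tax").
```

This ignores adaptation-layer padding and trailers, so treat it as a lower bound on the overhead rather than an exact figure.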

How to choose Service?

Analog services are the least expensive type of service. ISDN costs somewhat more but improves performance over even the fastest current analog offerings. Leased lines are the costliest of these three options, but offer dedicated, digital service for more

demanding situations. Which is right? You‘ll need to answer a few questions:

- Will employees use the Internet frequently?
- Will the Internet be used for conducting business (for example, inventory management, online catalog selling, account information, or bidding on new jobs)?
- Do you anticipate a large volume of traffic between branch offices of the business?
- Is there a plan to use videoconferencing or video training between locations?

- Who will use the main office‘s connection to the Internet - individual employees at the central office, telecommuting workers dialing in from home, mobile workers

dialing in from the road? The more times the answer is "yes", the more likely that leased line services are

required. It is also possible to mix and match services. For example, small branch offices or individual employees dialing in from home might connect to the central office using ISDN, while the main connection from the central office to the Internet

can be a T1. Which service you select also depends on what the Internet service provider (ISP) is using.

If the ISP‘s maximum line speed is 128K, as with ISDN, it wouldn‘t make sense to connect to that ISP with a T1 service. It is important to understand that as the bandwidth increases, so do the charges, both from the ISP and the phone company.

Keep in mind that rates for different kinds of connections vary from location to location.

Let‘s compare our technology options, assuming all services are available in our region. To summarize:

Page 100: Network Notes

- A leased-line service provides a dedicated connection with a fixed bandwidth at a

flat rate. You pay the same monthly fee regardless how much or how little you use the connection.

- A packet-switched service typically provides a permanent connection with specific, guaranteed bandwidth (Frame Relay). Temporary connections (such as X.25) may

also be available. The cost of the line is typically a flat rate, plus an additional charge based on actual usage.

- A circuit-switched service provides a temporary connection with variable bandwidth, with cost primarily based on actual usage.

Wide-Area Network Requirements

- Minimize bandwidth costs

- Maximize efficiency
- Maximize performance
- Support new/emerging applications
- Maximize availability
- Minimize management and maintenance

Manage Bandwidth to Control Cost

Because transmission costs are by far the largest portion of a network‘s cost, there

are a number of bandwidth optimization features you should be aware of that enable the cost-effective use of WAN links. These include dial-on-demand routing, bandwidth-on-demand, snapshot routing, IPX protocol spoofing, and compression.

Dial-on-demand ensures that you‘re only paying for bandwidth when it‘s needed for switched services such as ISDN and asynchronous modem (and switched 56Kb in the

U.S. and Canada only). Bandwidth-on-demand gives you the flexibility to add additional WAN bandwidth when it‘s needed to accommodate heavy network loads such as file transfers.

Snapshot routing prevents unnecessary transmissions. It inhibits your switched network from being dialed solely for the purpose of exchanging routing updates at

Page 101: Network Notes

short intervals (e.g.: 30 seconds). Many of you are familiar with compression, which is also a good method of optimization.

Let's take a closer look at a few features that will keep your WAN costs down.

- Dial-on-Demand Routing

Dial-on-demand routing allows a router to automatically initiate and close a circuit-

switched session. With dial-on-demand routing, the router dials up the WAN link only when it senses ―interesting‖ traffic. Interesting traffic might be defined as any traffic destined for the

remote network, or only traffic related to a specific host address or service. Equally important, dial-on-demand routing enables the router to take down the connection when it is no longer needed, ensuring that the user will not have

unnecessary WAN usage charges.
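One way to picture an "interesting traffic" policy is as a simple predicate over each packet's destination. A hedged sketch, where the remote subnet and port numbers are entirely hypothetical:

```python
import ipaddress

REMOTE_NET = ipaddress.ip_network("10.2.0.0/16")  # assumed remote site
ROUTING_PORTS = {520}  # e.g. RIP updates (UDP port 520): not interesting

def is_interesting(dst_ip, dst_port):
    # Routing chatter must not bring the link up on its own.
    if dst_port in ROUTING_PORTS:
        return False
    # Only traffic bound for the remote network justifies a dial.
    return ipaddress.ip_address(dst_ip) in REMOTE_NET
```

A real router expresses the same idea with access lists bound to the dialer interface; the point is simply that the dial decision is a filter, not a per-connection configuration.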

- Bandwidth-on-Demand

Bandwidth-on-demand works in a similar way. When the router senses that the traffic level on the primary link has reached a

certain threshold—say, when a user starts a large file transfer—it automatically dials up additional bandwidth through the PSTN to accommodate the increased load. For example, if you‘re using ISDN, you may decide that when the first B channel

reaches 75% saturation for more than one minute, your router will automatically dial up a second B channel. When the traffic load on the second B channel falls below

40%, the channel is automatically dropped.
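The two thresholds form a hysteresis band, which keeps the second channel from flapping up and down around a single cutoff. A simplified sketch (it deliberately ignores the one-minute dwell timer mentioned above):

```python
DIAL_THRESHOLD = 0.75  # add the second B channel above 75% load
DROP_THRESHOLD = 0.40  # drop it again below 40% load

def adjust_channels(load, channels):
    """Return the new B-channel count for a load between 0.0 and 1.0."""
    if channels == 1 and load > DIAL_THRESHOLD:
        return 2       # dial up the second B channel
    if channels == 2 and load < DROP_THRESHOLD:
        return 1       # tear the second channel back down
    return channels    # inside the hysteresis band: leave it alone
```

Between 40% and 75% load the channel count stays wherever it already is, so a load hovering near either threshold does not cause repeated dial/drop cycles.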

Page 102: Network Notes

- Snapshot Routing

By default, routing protocols such as RIP exchange routing tables every 30 seconds.

If each update is placed as a call, these routine updates will drive up WAN costs unnecessarily; Snapshot Routing limits these calls to the remote site. A remote router with this feature only requests a routing update when the WAN link

is already up for the purpose of transferring user application data. Without Snapshot Routing, your ISDN connection would be dialed every 30 seconds;

this feature ensures that the remote router always has the most up-to-date routing information but only when needed.

- IPX Protocol Spoofing

Protocol spoofing allows the user to improve performance while providing the ability to use lower line speeds over the WAN. With IPX spoofing, the router answers routine keepalive (watchdog) packets locally, so this chatter does not have to cross, or dial, the WAN link.

- Compression

Compression reduces the space required to store data, thus reducing the bandwidth required to transmit. The benefit of these compression algorithms is that users can utilize lower line speeds if needed to save costs. Compression also provides the ability

to move more data over a link than it would normally bear.

Page 103: Network Notes

- Three types of compression: header, link, and payload

- Van Jacobson header compression (RFC 1144) reduces the TCP/IP header from 40 to ~5 bytes
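The payoff is easy to compute. For small interactive packets, a 40-byte TCP/IP header can be half the bytes on the wire; compressing it to roughly 5 bytes changes that dramatically:

```python
FULL_HEADER = 40  # 20-byte IP header + 20-byte TCP header
VJ_HEADER = 5     # typical Van Jacobson (RFC 1144) result

def overhead_pct(payload_bytes, header_bytes):
    # Share of each packet that is header rather than user data.
    return 100.0 * header_bytes / (header_bytes + payload_bytes)

before = overhead_pct(40, FULL_HEADER)  # 50% overhead on a 40-byte payload
after = overhead_pct(40, VJ_HEADER)     # roughly 11% after compression
```

The savings matter most on low-speed serial links carrying many small packets, which is exactly the environment RFC 1144 was written for.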

- Dial Backup

Dial backup addresses a customer‘s need for reliability and guaranteed uptime. Dial backup capability offers users protection against WAN downtime by allowing them to

configure a backup serial line via a circuit-switched connection such as ISDN. When the software detects the loss of a signal from the primary line device or finds that the

line protocol is down, it activates the secondary line to establish a new session and continue the job of transmitting traffic over the backup line.

- Summary -

- The network operates beyond the local LAN‘s geographic scope. It uses the services

of carriers like Regional Bell Operating Companies (RBOCs), Sprint, and MCI.
- WANs use serial connections of various types to access bandwidth over wide-area

geographies. - An enterprise pays the carrier or service provider for connections used in the WAN;

the enterprise can choose which services it uses; carriers are usually regulated by tariffs.

- WANs rarely shut down, but since the enterprise must pay for services used, it might restrict access to connected workstations. Not all WAN services are available

in all locations.

Page 104: Network Notes

Lesson 7: Understanding Routing

The objective of this lesson is to explain routing. We‘ll start by first defining what routing is. We‘ll follow that with a discussion on addressing.

There is a section on routing terminology which covers subjects like routed vs. routing protocols and dynamic and static routing.

Finally, we‘ll talk about routing protocols.

The Agenda

- What Is Routing?

- Network Addressing

- Routing Protocols

What Is Routing?

Routing is the process of finding a path to a destination host and of moving information across an internetwork from a source to a destination. Along the way, at

least one intermediate node typically is encountered. Routing is very complex in large networks because of the many potential intermediate destinations a packet might

traverse before reaching its destination host. A router is a device that forwards packets from one network to another and determines the optimal path along which network traffic should be forwarded.

Routers forward packets from one network to another based on network layer information. Routers are occasionally called gateways (although this definition of gateway is becoming increasingly outdated).

Routers—Layer 3

A router is a more sophisticated device than a hub or a switch. It determines the appropriate network path to send the packet along by keeping an up-to-date network topology in memory, its routing table.

Page 105: Network Notes

A router keeps a table of network addresses and knows which path to take to get to

each network. Routers keep track of each other‘s routes by alternately listening, and periodically

sending, route information. When a router hears a routing update, it updates its routing table. Routing is often contrasted with bridging, which might seem to accomplish precisely the same thing to the casual observer. The primary difference

between the two is that bridging occurs at Layer 2 (the data link layer) of the OSI reference model, whereas routing occurs at Layer 3 (the network layer). This distinction provides routing and bridging with different information to use in the

process of moving information from source to destination, so that the two functions accomplish their tasks in different ways.

In addition, bridges can‘t block a broadcast (where a data packet is sent to all nodes on a network). Broadcasts can consume a great deal of bandwidth. Routers are able to block broadcasts, so they provide security and assist in bandwidth control.

You might ask, if bridging is faster than routing, why do companies move from a bridged/switched network to a routed network?

There are many reasons, but LAN segmentation is a key reason. Also, routers increase scalability and control broadcast transmissions.

Where are Routers Used?

A router can perform LAN-to-LAN routing through its ability to route packet traffic from one network to another. It checks its router table entries to determine the best

path to the destination network. A router can perform LAN-to-WAN and remote access routing through its ability to

route packet traffic from one network to another while handling different WAN services in between. Popular WAN service options include Integrated Services Digital Network, or ISDN, leased lines, Frame Relay, and X.25.

Let‘s look at routing in more detail.

LAN-to-LAN Connectivity

This illustrates the flow of packets through a routed network using the example of an

e-mail message being sent from system X to system Y. The message exits system X and travel through an organization‘s internal network

Page 106: Network Notes

until it gets to a point where it needs an Internet service provider. The message will bounce through their network and eventually arrive at system Y‘s

internet provider. While this example shows three routers, the message could actually travel through many different networks before it arrives at its destination.

From the OSI reference model point of view, when the e-mail is converted into packets and sent to a different network, a data-link frame is received on one of a router's interfaces.

- The router de-encapsulates and examines the frame to determine what type of network layer data is being carried. The network layer data is sent to the

appropriate network layer process, and the frame itself is discarded.

- The network layer process examines the header to determine the destination network and then references the routing table that associates networks to outgoing interfaces.

- The packet is again encapsulated in the link frame for the selected interface and

sent on. This process occurs each time the packet transfers to another router. At the router

connected to the network containing the destination host, the packet is encapsulated in the destination LAN‘s data-link frame type for delivery to the protocol stack on the destination host.

Path Determination

Routing involves two basic activities: determining optimal routing paths and transporting information groups (typically called packets) through an internetwork.

In the context of the routing process, the latter of these is referred to as switching. Although switching is relatively straightforward, path determination can be very

complex. During path determination, routers evaluate the available paths to a destination and establish the preferred handling of a packet.

- Routing services use internetwork topology information (such as metrics) when

evaluating network paths. This information can be configured by the network administrator or collected through dynamic processes running in the internetwork.

- After the router determines which path to use, it can proceed with switching the

Page 107: Network Notes

packet: taking the packet it accepted on one interface and forwarding it to another interface or port that reflects the best path to the packet's destination.

Multiprotocol Routing

Routers can support multiple independent routing algorithms and maintain

associated routing tables for several routed protocols concurrently. This capability allows a router to interleave packets from several routed protocols over the same data links.

The various routed protocols operate separately. Each uses routing tables to determine paths and switches over addressed ports in a "ships in the night" fashion;

that is, each protocol operates without knowledge of or coordination with any of the other protocol operations. In the example above, as the router receives packets from the users on the networks

using IP, it begins to build a routing table containing the addresses of the networks of these IP users. As the router receives packets from Macintosh AppleTalk users, it adds the AppleTalk addresses as well. Routing tables can contain address

information from multiple protocol networks. This process may continue with IPX traffic from Novell NetWare networks and Digital traffic from VAX minicomputers

attached to Ethernet networks.

Routing Tables

To aid the process of path determination, routing algorithms initialize and maintain routing tables, which contain route information. Route information varies depending on the routing algorithm used. Routing algorithms fill routing tables with a variety of

information. Two examples are destination/next hop associations and path desirability.

- Destination/next hop associations tell a router that a particular destination is linked to a particular router representing the "next hop" on the way to the final

destination. When a router receives an incoming packet, it checks the destination address and attempts to associate this address with a next hop.

- With path desirability, routers compare metrics to determine optimal routes.


Metrics differ depending on the routing algorithm used. A metric is a standard of measurement, such as path length, that is used by routing algorithms to determine

the optimal path to a destination.
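The destination/next-hop association described here can be sketched as a simple table lookup. This is an illustrative sketch only (real routers perform longest-prefix matching in hardware or optimized software); the networks, next hops, and metrics are invented examples.

```python
# Minimal sketch of destination/next-hop lookup in a routing table.
# Real routers perform longest-prefix matching; this sketch matches the
# destination network prefix exactly. All table entries are invented.

routing_table = {
    "10.0.0.0/8":    {"next_hop": "192.168.1.2", "metric": 1},
    "172.16.0.0/16": {"next_hop": "192.168.1.3", "metric": 3},
}

def next_hop_for(dest_network):
    """Return the next hop for a destination network, or None if unknown."""
    entry = routing_table.get(dest_network)
    return entry["next_hop"] if entry else None

print(next_hop_for("10.0.0.0/8"))    # 192.168.1.2
print(next_hop_for("192.0.2.0/24"))  # None: no route, packet is dropped
```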

Routers communicate with one another and maintain their routing tables through the transmission of a variety of messages.

- Routing update messages may include all or a portion of a routing table. By analyzing routing updates from all other routers, a router can build a detailed picture of network topology.

- Link-state advertisements inform other routers of the state of the sender's link so

that routers can maintain a picture of the network topology and continuously determine optimal routes to network destinations.

Routing Algorithm Goals

Routing tables contain information used by software to select the best route. But how, specifically, are routing tables built? What is the specific nature of the

information they contain? How do routing algorithms determine that one route is preferable to others?

Routing algorithms often have one or more of the following design goals:

Optimality - the capability of the routing algorithm to select the best route,

depending on the metrics and metric weightings used in the calculation. For example, one algorithm may use both hop count and delay, but may weight delay more

heavily in the calculation.

Simplicity and low overhead - efficient routing algorithm functionality with a

minimum of software and utilization overhead. This is particularly important when routing algorithm software must run on a computer with limited physical resources.

Robustness and stability - routing algorithm should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load

conditions, and incorrect implementations. Because routers are located at network junctions, their failures can cause extensive problems.

Rapid convergence - Convergence is the process of agreement, by all routers, on

optimal routes. When a network event causes changes in router availability, recalculations are needed to reestablish optimal routes. Routing algorithms that converge slowly can cause routing loops or network outages.

Flexibility - routing algorithm should quickly and accurately adapt to a variety of

network circumstances. Changes of consequence include router availability, changes in network bandwidth, queue size, and network delay.


Routing Metrics

Routing algorithms have used many different metrics to determine the best route. Sophisticated routing algorithms can base route selection on multiple metrics,

combining them in a single (hybrid) metric. All the following metrics have been used:

Path length - The most common metric: either the sum of assigned costs per network link, or the hop count, a metric specifying the number of passes through network devices between source and destination.

Reliability - dependability (bit-error rate) of each network link. Some network links

might go down more often than others. Also, some links may be easier or faster to repair after a failure.

Delay - The length of time required to move a packet from source to destination through the internetwork. Depends on bandwidth of intermediate links, port

queues at each router, network congestion, and physical distance. A common and useful metric.

Bandwidth - available traffic capacity of a link.

Load - Degree to which a network resource, such as a router, is busy (measured, for example, by CPU utilization or packets processed per second).

Communication cost - operating expenses of network links (private versus public lines).
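A hybrid metric of the kind mentioned above can be sketched as a weighted combination of bandwidth and delay. The weights and scaling constants below are invented for illustration and do not reproduce any vendor's actual formula.

```python
# Sketch of a hybrid (composite) routing metric that combines bandwidth
# and delay, loosely in the spirit of weighted composite metrics. The
# weights and scaling constants are invented for illustration only.

def composite_metric(bandwidth_kbps, delay_usec, w_bw=1.0, w_delay=1.0):
    """Lower is better: weighted inverse bandwidth plus weighted delay."""
    return w_bw * (10_000_000 / bandwidth_kbps) + w_delay * (delay_usec / 10)

# A fast, low-delay link scores lower (better) than a slow, high-delay one.
fast = composite_metric(bandwidth_kbps=100_000, delay_usec=100)
slow = composite_metric(bandwidth_kbps=64, delay_usec=20_000)
print(fast < slow)  # True
```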

Now let's talk a little about network addressing.

Network Addressing

Network and Node Addresses

Each network segment between routers is identified by a network address. These

addresses contain information about the path used by the router to pass packets from a source to a destination.

For some network layer protocols, a network administrator assigns network


addresses according to some preconceived internetwork addressing plan. For other network layer protocols, assigning addresses is partially or completely dynamic.

Most network protocol addressing schemes also use some form of a node address. The node address refers to the device's port on the network. The figure in this slide

shows three nodes sharing network address 1 (Router 1.1, PC 1.2, and PC 1.3). For LANs, this port or device address can reflect the real Media Access Control or MAC address of the device.

Unlike a MAC address, which has a preestablished and usually fixed relationship to a device, a network address contains a logical relationship within the network topology.

The hierarchy of Layer 3 addresses across the entire internetwork improves the use of bandwidth by preventing unnecessary broadcasts. Broadcasts invoke unnecessary

process overhead and waste capacity on any devices or links that do not need to receive the broadcast. By using consistent end-to-end addressing to represent the path of media connections, the network layer can find a path to the destination

without unnecessarily burdening the devices or links on the internetwork with broadcasts.

Examples:

For TCP/IP, dotted decimal numbers show a network part and a host part. Network

10 uses the first of the four numbers as the network part and the last three numbers, 8.2.48, as the host address. The mask is a companion number to the IP

address. It communicates to the router the part of the number to interpret as the network number and identifies the remainder available for host addresses inside that network.

For Novell IPX, the network address 1aceb0b is a hexadecimal (base 16) number that cannot exceed a fixed maximum number of digits. The host address 0000.0c00.6e25

(also a hexadecimal number) is a fixed 48 bits long. This host address derives automatically from information in the hardware of the specific LAN device.
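The TCP/IP example above (network part 10, host part 8.2.48) can be reproduced by applying the mask bitwise, here with Python's standard ipaddress module:

```python
# Applying a mask to split an IP address into network and host parts,
# using the standard-library ipaddress module. Address 10.8.2.48 with
# mask 255.0.0.0: network part 10, host part 8.2.48.
import ipaddress

addr = ipaddress.ip_address("10.8.2.48")
mask = ipaddress.ip_address("255.0.0.0")

network_part = ipaddress.ip_address(int(addr) & int(mask))
host_part = ipaddress.ip_address(int(addr) & ~int(mask) & 0xFFFFFFFF)

print(network_part)  # 10.0.0.0
print(host_part)     # 0.8.2.48
```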


Subnetwork Addressing

Subnetworks or subnets are networks arbitrarily segmented by a network administrator in order to provide a multilevel, hierarchical routing structure while

shielding the subnetwork from the addressing complexity of attached networks. Subnetting allows single routing entries to refer either to the larger block or to its individual constituents. This permits a single, general routing entry to be used

through most of the Internet, more specific routes only being required for routers in the subnetted block.

A subnet mask is a 32-bit number that determines how an IP address is split into network and host portions, on a bitwise basis. For example, 255.255.0.0 is the standard Class B subnet mask: applied to an address such as 131.108.0.0, the first two bytes identify the network and the last

two bytes identify the host. A subnet mask is a 32-bit address mask used in IP to indicate the bits of an IP address that are being used for the subnet address. It is sometimes referred to simply as a

mask. The term mask derives from the fact that the host portion of the IP address is masked by 0s (and the network portion by 1s) to form the subnet mask.

Subnetting helps to organize the network, allows rules to be developed and applied to the network, and provides security and shielding. Subnetting also enables scalability by controlling the size of links to a logical grouping of nodes that have reason to

communicate with each other (such as within Human Resources, R&D, or Manufacturing).
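The Class B example above can be carried further with a short sketch that divides 131.108.0.0/16 into /24 subnets using Python's standard ipaddress module; the /24 prefix length is an arbitrary choice for illustration.

```python
# Subnetting sketch: dividing the Class B block 131.108.0.0/16 (from the
# example above) into /24 subnets with the standard ipaddress module.
import ipaddress

block = ipaddress.ip_network("131.108.0.0/16")
subnets = list(block.subnets(new_prefix=24))

print(len(subnets))        # 256 subnets
print(subnets[0])          # 131.108.0.0/24
print(subnets[0].netmask)  # 255.255.255.0

# A single, general routing entry for 131.108.0.0/16 covers every one
# of these subnets from outside the subnetted block.
print(all(s.subnet_of(block) for s in subnets))  # True
```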

Routing Algorithm Types

Routing algorithms can be classified by type. Key differentiators include:

- Single-path versus multi-path: Multi-path routing algorithms support multiple paths


to the same destination and permit traffic multiplexing over multiple lines. Multi-path routing algorithms can provide better throughput and reliability.

- Flat versus hierarchical: In a flat routing system, the routers are peers of all others.

In a hierarchical routing system, some routers form what amounts to a routing backbone. In hierarchical systems, some routers in a given domain can communicate with routers in other domains, while others can communicate only

with routers in their own domain.

- Host-intelligent versus router-intelligent: In host-intelligent routing algorithms, the source end-node determines the entire route and routers act simply as store-and-forward devices. In router-intelligent routing algorithms, hosts are assumed to know

nothing about routes and routers determine the optimal path.

- Intradomain versus interdomain: Some routing algorithms work only within domains; others work within and between domains.

- Static versus dynamic - this classification will be discussed in the following two slides.

- Link state versus distance vector: will be discussed after static versus dynamic

routing.

Static Routing

Static routing knowledge is administered manually: a network administrator enters it into the router's configuration. The administrator must manually update this static

route entry whenever an internetwork topology change requires an update. Static knowledge is private—it is not conveyed to other routers as part of an update process.

Static routing has several useful applications when it reflects a network administrator's special knowledge about network topology. When an internetwork partition is accessible by only one path, a static route to the

partition can be sufficient. This type of partition is called a stub network. Configuring static routing to a stub network avoids the overhead of dynamic routing.


Dynamic Routing

After the network administrator enters configuration commands to start dynamic

routing, route knowledge is updated automatically by a routing process whenever new topology information is received from the internetwork. Changes in dynamic

knowledge are exchanged between routers as part of the update process. Dynamic routing tends to reveal everything known about an internetwork. For security reasons, it might be appropriate to conceal parts of an internetwork. Static

routing allows an internetwork administrator to specify what is advertised about restricted partitions. In the illustration above, the preferred path between routers A and C is through

router D. If the path between Router A and Router D fails, dynamic routing determines an alternate path from A to C. According to the routing table generated by

Router A, a packet can reach its destination over the preferred route through Router D. However, a second path to the destination is available by way of Router B. When Router A recognizes that the link to Router D is down, it adjusts its routing table,

making the path through Router B the preferred path to the destination. The routers continue sending packets over this alternate link. When the path between Routers A and D is restored to service, Router A can once

again change its routing table to indicate a preference for the counterclockwise path through Routers D and C to the destination network.
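The failover scenario above can be sketched as follows; the router names mirror the illustration, but the table structure and costs are invented for illustration.

```python
# Sketch of the failover described above: Router A prefers the path via
# Router D; when that link goes down, dynamic routing promotes the
# backup path via Router B. Costs and link states are invented examples.

routes_to_c = [
    {"via": "D", "cost": 1, "up": True},  # preferred path
    {"via": "B", "cost": 2, "up": True},  # backup path
]

def best_route(routes):
    """Pick the lowest-cost route whose link is currently up."""
    usable = [r for r in routes if r["up"]]
    return min(usable, key=lambda r: r["cost"]) if usable else None

print(best_route(routes_to_c)["via"])  # D

routes_to_c[0]["up"] = False           # link A-D fails
print(best_route(routes_to_c)["via"])  # B

routes_to_c[0]["up"] = True            # link restored
print(best_route(routes_to_c)["via"])  # D again
```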

Distance Vector versus Link State

Distance vector versus link state is another possible routing algorithm classification.

- Link state algorithms (also known as shortest path first algorithms) flood routing information about their own links to all network nodes. The link-state (also called

shortest path first) approach recreates the exact topology of the entire internetwork (or at least the partition in which the router is situated).

- Distance vector algorithms send all or some portion of their routing table only to neighbors. The distance vector routing approach determines the direction (vector)


and distance to any link in the internetwork.

- A third classification in this course, called hybrid, combines aspects of these two basic algorithms.

There is no single best routing algorithm for all internetworks. Network administrators must weigh technical and non-technical aspects of their network to

determine what's best.
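The core of the distance vector approach can be sketched as a neighbor-update (Bellman-Ford relaxation) step. This is a minimal sketch: real distance vector protocols such as RIP add timers, split horizon, and hop-count limits on top of this core step, and the destinations and costs here are invented.

```python
# Distance-vector sketch: a router merges a neighbor's advertised
# distances into its own table (Bellman-Ford relaxation). Destinations
# and costs are invented examples.

def merge_update(my_table, neighbor, cost_to_neighbor, neighbor_table):
    """Adopt any route that is cheaper via this neighbor."""
    changed = False
    for dest, dist in neighbor_table.items():
        candidate = cost_to_neighbor + dist
        if dest not in my_table or candidate < my_table[dest][0]:
            my_table[dest] = (candidate, neighbor)
            changed = True
    return changed

# Router A knows only its own network; B advertises what it can reach.
table_a = {"net1": (0, "direct")}
update_from_b = {"net2": 1, "net3": 2}
merge_update(table_a, "B", 1, update_from_b)
print(table_a["net3"])  # (3, 'B'): net3 is 3 hops away, via B
```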

Link State

Integrated IS-IS - IP link-state routing protocol based on IS-IS.

IS-IS - Intermediate System-to-Intermediate System. OSI link-state hierarchical

routing protocol based on DECnet Phase V routing, whereby ISs (routers) exchange routing information based on a single metric, to determine network topology.

Hybrid

EIGRP - Enhanced Interior Gateway Routing Protocol. Advanced version of IGRP developed by Cisco. Provides superior convergence properties and operating

efficiency, and combines the advantages of link state protocols with those of distance vector protocols.

RIP and IGRP

RIP takes the path with the least number of hops, but does not account for the speed

of the links. It only counts hops. RIP is limited to 15 hops. This creates a scalability issue when routing in large, heterogeneous networks.

IGRP was developed by Cisco and works only with Cisco products (although it has been licensed to some other vendors). It accounts for the varying speeds of each link. Additionally, IGRP can handle 224 to 252 hops, depending on the IOS version.

However, IGRP only supports IP.


OSPF and EIGRP

OSPF - Open Shortest Path First. Link-state, hierarchical IGP routing algorithm proposed as a successor to RIP in the Internet community. OSPF features include

least-cost routing, multipath routing, and load balancing. OSPF was derived from an early version of the IS-IS protocol.

EIGRP - Enhanced Interior Gateway Routing Protocol. Advanced version of IGRP developed by Cisco. Provides superior convergence properties and operating efficiency, and combines the advantages of link state protocols with those of distance

vector protocols.

- Summary -

- Routers move data across networks from a source to a destination

- Routers determine the optimal path for forwarding network traffic

- Routing protocols communicate reachability information between routers


Lesson 8: Layer 3 Switching

The term Layer 3 switching makes many people's eyes glaze over. In this module, we'll explain what Layer 3 switching is and how it compares with Layer 2 switching

and routing.

The Agenda

- What Is Layer 3 Switching?

- What is the Difference Between Layer 2 Switching, Layer 3 Switching, and Routing?

What Is Layer 3 Switching?

Recently, the industry has been bombarded with terminology such as Layer 3

switching, Layer 4 switching, multilayer switching, routing switches, switching routers, and gigabit routers. This "techno-jargon" can be confusing to customers and resellers alike.

For purposes of this discussion, all these terms essentially represent the same function, and, as such, the term Layer 3 switching is used to represent them all. While the performance aspect of Layer 3 switching makes most of the headlines,

higher performance in switching packets does not, by itself, promise that all problems are solved in a network. There must be a recognition that application

design, mix of network protocols, placement of servers, placement of networking devices, and management, as well as the implementation of end-to-end intelligent network services, are at least as important as, and maybe more important than, simply adding

more bandwidth and switching capability to the network.

Why Do We Need Layer 3 Switching?

So, why do we need Layer 3 switching? Enterprise networks face unprecedented challenges today. Desktop computing power has tripled in the past two years and

shows no sign of leveling off. The proliferation of network-dependent intranet and multimedia applications has increased traffic volumes in many campus networks by an order of magnitude over the past several years. Network managers have responded

to this need to move data at greater speeds by moving more desktops to switched 10/100 Mbps and deploying LAN switching at unprecedented levels, both in the data

center and in the wiring closets to scale their end-to-end bandwidth. To effectively utilize the increased capacity, they must scale their Layer 3 performance to handle changing traffic patterns. Conventional wisdom that 80 percent of the traffic stays

local to the subnet and 20 percent or less traverses across subnets no longer holds. More than half of the traffic volume travels across subnet boundaries. Two factors contribute to these changing traffic patterns.

With Web-based computing, a PC can be both a subscriber and a publisher of information. As a result, information can now come from anywhere in the network,


creating massive amounts of traffic that must travel across subnet boundaries. Users hop transparently between servers across the entire enterprise by using hyperlinks,

without the need to know where the data is located. The second factor leading to the loss of locality is the move toward server

consolidation. Enterprises are deploying centralized server farms because of the reduced cost of ownership and ease of management. All traffic from the client subnets to these servers must travel across the campus backbone, exacerbating

performance problems. Because of the rising levels of anywhere-to-everywhere communication, Layer 3 switching that can scale with increasing link speeds has become an imperative. Layer

3 switching is required to meet the demands of both client/server and peer-to-peer traffic on the intranet.

What Is Layer 2 Switching?

What is the difference between a Layer 2 switch, a Layer 3 switch, and a router?

A Layer 2 switch is essentially a multiport bridge. Switching and filtering are based on the Layer 2 MAC addresses, and, as such, a Layer 2 switch is completely transparent to network protocols and users' applications.

Layer 2 switching is the number one choice for providing plug-and-play performance.

What Is Routing?

In contrast to Layer 3 switches, routers make Layer 3 routing decisions by

implementing complex routing algorithms and data structures in software. Keep in mind this has little to do with the forwarding aspects of routing. Routing has two basic functions: path determination, using a variety of metrics, and

forwarding packets from one network to another. The path determination function enables a router to evaluate the available paths to a

destination and to establish the preferred handling of a packet. Data can take different paths to get from a source to a destination. At Layer 3, routers really help determine which path. The network administrator configures the


router, enabling it to make an intelligent decision about where the router should send information through the cloud.

The network layer sends packets from source network to destination network. After the router determines which path to use, it can proceed with switching the

packet: taking the packet it accepted on one interface and forwarding it to another interface or port that reflects the best path to the packet's destination.

Packet Manipulation at Layer 3

How does Layer 3 switching differ from Layer 2 switching? Layer 3 switching requires rewriting the packet. This implies decrementing the TTL field, modifying the MAC

addresses, changing the VLAN ID, and recomputing the FCS. Doing all these actions at wire speed is difficult, which is why an ASIC is necessary. True Layer 3 switching has all the advantages of routing, and is therefore rich in features

and performance. Layer 2 switching, by contrast, does not require packet rewriting. Without packet rewriting, no matter what you call it (e.g., virtual routing), it is NOT routing.
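The rewrite steps just listed can be sketched in software, with the caveat that a real Layer 3 switch performs them in ASICs at wire speed and computes a true frame check sequence; zlib.crc32 stands in for the FCS here, and the MAC addresses are shortened placeholders.

```python
# Sketch of the Layer 3 rewrite applied to each forwarded packet:
# decrement the TTL, replace the MAC addresses for the next hop, and
# recompute the frame check. A real switch does this in ASICs at wire
# speed; zlib.crc32 stands in for the true FCS computation.
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    src_mac: str
    dst_mac: str
    ttl: int
    payload: bytes

def l3_rewrite(pkt, out_port_mac, next_hop_mac):
    """Rewrite headers for the next hop; return the recomputed check value.

    Raises ValueError when the TTL expires (the packet would be dropped)."""
    if pkt.ttl <= 1:
        raise ValueError("TTL expired: drop packet")
    pkt.ttl -= 1                 # decrement TTL
    pkt.src_mac = out_port_mac   # source MAC becomes the egress port
    pkt.dst_mac = next_hop_mac   # destination MAC becomes the next hop
    return zlib.crc32(pkt.payload + bytes([pkt.ttl]))

pkt = Packet("aa:aa", "bb:bb", ttl=64, payload=b"data")
l3_rewrite(pkt, out_port_mac="cc:cc", next_hop_mac="dd:dd")
print(pkt.ttl, pkt.dst_mac)  # 63 dd:dd
```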

What Is Layer 3 Switching?


Layer 3 switching is hardware-based routing. The packet forwarding is handled by specialized hardware, usually ASICs.

A Layer 3 switch can make switching and filtering decisions on both Layer 2 and Layer 3 addresses and can dynamically decide whether to route or switch incoming

traffic. Multilayer switching combines the ease of use of Layer 2 switching with the stability and security of Layer 3 routing.

To make Layer 3 switching decisions, routing table information must be assembled and exchanged between routing entities. Route calculation is performed by one or more route processors that reside in routers

or other devices. These route processors periodically distribute their routing tables to multilayer LAN switches to allow them to make very fast switching decisions.

Layer 3 switching is the favorite for highly scalable, resilient networking.

A Layer 3 Switch Has Two Distinct Components

ASICs:

- High-performance, hardware-based Layer 3 switching and services with consistent

low latency

Routing software:

- Routing protocols to provide scalability

- Backbone redundancy

- Dynamic load balancing and fast convergence in the backbone

- Reachability information

- Multiprotocol support for the campus

What Is the Difference Between Layer 3 Switching and Routing?

Layer 3 switches tend to have packet switching throughputs in the millions of packets per second (pps), while traditional general-purpose routers have evolved from

the 100,000 pps range to over a million pps. Aggregate performance is one of the key differences between Layer 3 switches and traditional routers. Traditional routers still offer key features used typically in WAN environments.

However, many of those features, such as multicast routing, multiprotocol routing, IBM feature sets, and routing protocol stability, are still key for Layer 3 switches/campus routers.


A Layer 3 or a Layer 2 Switch?— Scalability Advantages

Let's look more closely at when a customer might choose a Layer 3 switch over a traditional Layer 2 switch. Layer 3 switches offer considerable advantages depending

on the customer's requirements.

Scalability—For customers with large networks that need increased performance to

handle the changing traffic patterns of today's new applications, Layer 3 switches offer increased scalability. Clearly a network of hubs does not scale. While bridges

helped, they were not sufficient to handle networks of many thousands of users and devices. Routers were the solution as they kept broadcasts local to a segment. Layer

3 switches avoid the problems associated with flat bridged or switched designs by using traditional routing mechanisms, allowing customers to scale their network infrastructure.

Layer 3 switches also utilize routing protocols, thus avoiding the slow convergence problem of the Spanning Tree Protocol and its lack of load balancing across multiple paths.

Advanced services—Layer 3 switches also offer the benefit of broader intelligent

network services. These services permit applications to run on the network as well as

enable the creation of a cost-effective, operational environment to support day-to-day operations and management of the enterprise intranet.

Other Advantages

Other advantages include:

Security—Layer 3 switches provide enhanced security functions to protect corporate

information while allowing appropriate access. Access control lists are supported by

Layer 3 switches with no performance degradation. Layer 3 switching can enforce the multiple levels of security traditionally found only on routers, on every packet of the flow, at wire speed.

Management—Networks that use a multilayer model are by nature hierarchical. This

type of infrastructure is easier to manage as problems are more easily isolated.


Redundancy/resiliency—Some Layer 3 switches offer significant redundancy and

resiliency options not available with Layer 2 switches. Default gateway redundancy is

provided by HSRP, which enables Cisco switches to transparently switch over to the hot standby backup router instantly when the primary router goes offline, eliminating a

single point of failure in the network. UplinkFast provides alternative paths when a primary link fails. Load balancing is achieved by intelligent Layer 3 routing protocols.

While there are obvious advantages to a Layer 3 switch over a Layer 2 switch, other factors need to be considered as well. Layer 3 switches are more expensive than Layer 2 switches and are more complex. Depending on the size of a customer's

network, the cost and complexity may not justify a Layer 3 switch. However, for customers with larger networks in need of enhanced scalability, Layer 3 switches will

actually simplify network infrastructure.

Not All Layer 3 Switches Are Created Equal

At its most basic, Layer 3 packet switching or forwarding is common across all vendors' platforms, with perhaps exceptions in their multicast or DHCP services behavior.

The more scalable, flexible, and adaptable Layer 3 switches also offer a variety of routing protocols and services for topology discovery, load balancing, and resiliency.

Buying a Layer 3 switch without the richness and depth of routing protocols is somewhat akin to a driverless car. The car can certainly travel very fast in the direction that it is pointed, but the intelligence lies in the driver, who needs to make

all the decisions about where it should go and when to stop and turn. The more flexible and resilient these capabilities, the better reliability and adaptability the

switch offers. Finally, there are services. All the queuing, filtering, classification, multiprotocol, route summarization and redistribution functions, plus additional debugging,

statistics gathering, and event logging services are what let network managers deploy solutions that rise to the future challenges of mobility, multiservice, multimedia, and service level agreements for business-critical applications.

- Summary -

- Layer 3 switching is ASIC-based routing

- Traditional routers are better for WAN aggregation

- Layer 3 switches are more appropriate for scaling Layer 3 performance

- Layer 2 switches are more appropriate when the additional cost and complexity are not warranted


Lesson 9: Understanding Virtual LANs

This lesson covers virtual LANs or VLANs. We'll start by defining what a VLAN is and then explaining how it works. We'll conclude the lesson by talking about some key

VLAN technologies such as ISL and VTP.

The Agenda

- What Is a VLAN?

- VLAN Technologies

What Is a VLAN?

Well, the reality of the work environment today is that personnel are always changing. Employees move departments; they switch projects. Keeping up with these changes

can consume significant network administration time. VLANs address the end-to-end mobility needs that businesses require. Traditionally, routers have been used to limit the broadcast domains of workgroups.

While routers provide well-defined boundaries between LAN segments, they introduce the following problems:

- Lack of scalability (e.g., restrictive addressing on subnets)

- Lack of security (e.g., within shared segments)

- Insufficient bandwidth use (e.g., extra traffic results when segmentation of the network is based upon physical location and not necessarily by workgroups or interest group)

- Lack of flexibility (e.g., costly reconfigurations are required when users are moved)

Virtual LAN, or VLAN, technology solves these problems because it enables switches and routers to configure logical topologies on top of the physical network infrastructure. Logical topologies allow any arbitrary collection of LAN segments

within a network to be combined into an autonomous user group, appearing as a single LAN.

Virtual LANs


A VLAN can be defined as a logical LAN segment that spans different physical LANs. VLANs provide traffic separation and logical network partitioning.

VLANs logically segment the physical LAN infrastructure into different subnets (broadcast domains for Ethernet) so that broadcast frames are switched only between

ports within the same VLAN. A VLAN is a logical grouping of network devices (users) connected to the port(s) on a LAN switch. A VLAN creates a single broadcast domain and is treated like a subnet.

Unlike a traditional segment or workgroup, you can create a VLAN to group users by their work functions, departments, the applications used, or the protocols shared irrespective of the users' work location (for example, an AppleTalk network that you

want to separate from the rest of the switched network). VLAN implementation is most often done in the switch software.
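The broadcast-domain behavior described above can be sketched as a per-VLAN flooding rule; the port-to-VLAN assignments below are invented examples.

```python
# Sketch of per-VLAN broadcast containment on a switch: a broadcast
# frame arriving on one port is flooded only to the other ports in the
# same VLAN. Port-to-VLAN assignments are invented examples.

port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}  # port -> VLAN ID

def flood_broadcast(in_port):
    """Return the ports that receive a broadcast arriving on in_port."""
    vlan = port_vlan[in_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != in_port]

print(flood_broadcast(1))  # [2, 5]: VLAN 10 only; VLAN 20 never sees it
```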

Remove the Physical Boundaries

Conceptually, VLANs provide greater segmentation and organizational flexibility. VLAN technology allows you to group switch ports and the users connected to them

into logically defined communities of interest. These groupings can be coworkers within the same department, a cross-functional product team, or diverse users

sharing the same network application or software (such as Lotus Notes users). Grouping these ports and users into communities of interest—referred to as VLAN organizations—can be accomplished within a single switch, or more powerfully,

between connected switches within the enterprise. By grouping ports and users together across multiple switches, VLANs can span single building infrastructures or

interconnected buildings. As shown here, VLANs completely remove the physical constraints of workgroup communications across the enterprise. Additionally, the role of the router evolves beyond the more traditional role of

firewalls and broadcast suppression to policy-based control, broadcast management, and route processing and distribution. Equally as important, routers remain vital for switched architectures configured as VLANs because they provide the communication

between logically defined workgroups (VLANs). Routers also provide VLAN access to shared resources such as servers and hosts, and connect to other parts of the

network that are either logically segmented with the more traditional subnet approach or require access to remote sites across wide-area links. Layer 3


communication, either embedded in the switch or provided externally, is an integral part of any high-performance switching architecture.

VLAN Benefits

VLANs provide many internetworking benefits that are compelling.

Reduced administrative costs—Members of a VLAN group can be geographically dispersed. Members might be related because of their job functions or type of data

that they use rather than the physical location of their workspace.

- The power of VLANs comes from the fact that adds, moves, and changes can be

achieved simply by configuring a port into the appropriate VLAN. Expensive, time-consuming recabling to extend connectivity in a switched LAN environment, or host

reconfiguration and re-addressing is no longer necessary, because network management can be used to logically "drag and drop" a user from one VLAN group to another.

Better management and control of broadcast activity—A VLAN solves the scalability problems often found in a large flat network by breaking a single broadcast domain

into several smaller broadcast domains or VLAN groups. All broadcast and multicast traffic is contained within each smaller domain.

Tighter network security with establishment of secure user groups:

- High-security users can be placed in a separate VLAN group so that non-group members do not receive their broadcasts and cannot communicate with them.

- If inter-VLAN communication is necessary, a router can be added, and the traditional security and filtering functions of a router can be used.

- Workgroup servers can be relocated into secured, centralized locations.

Scalability and performance—VLAN groups can be defined based on any criteria; therefore, you can determine a network's traffic patterns and associate users and resources logically. For example, an engineer making intensive use of a networked CAD/CAM server can be put into a separate VLAN group containing just the engineer and the server. The engineer does not affect the rest of the workgroup. The engineer's dedicated VLAN increases throughput to the CAD/CAM server and helps performance for the rest of the group by keeping this heavy traffic off their segments.

VLAN Components

There are five key components within VLANs:


Switches — For determining VLAN membership. This is where users/systems attach to the network.

Trunking — For exchanging VLAN information throughout the network. This is essential for larger environments that comprise several switches, routers, and

servers.

Multiprotocol routing — For supporting inter-VLAN communications. Remember that while all members within the same VLAN can communicate directly with one

another, routers are required for exchanging information between different VLANs.

Servers — Servers are not specifically required within VLAN environments; however, they are a staple of any network. Within a VLAN environment, users can utilize servers in several different ways, and we'll discuss them momentarily. Because VLANs are used throughout the network, users from multiple VLANs will most likely need their services.

Management — For security, control, and administration within the network. Effective management and administration is essential within any network environment, and it becomes even more imperative for networks using VLANs. The network management system must appropriately recognize and administer logical segments within the switched network. Let's look at some of these components in more detail.

Establishing VLAN Membership

Switches provide the means for users to access a network and join a VLAN. Various approaches exist for establishing VLAN membership.


Each of these methods has its positive and negative points.

Membership by Port

Let's look at the first method for determining or assigning VLAN membership:

Port-based — In this case, the port is assigned to a specific VLAN independent of the user or system attached to the port. This VLAN assignment is typically done by the network administrator and is not dynamic. In other words, the port cannot be automatically changed to another VLAN without the direct intervention of the network administrator.

This approach is quite simple and fast, in that no complex lookup tables are required to achieve this VLAN segregation. If the port-to-VLAN association is done in ASICs, performance is very good.

This approach is also very easy to manage, and a graphical user interface, or GUI, illustrating the VLAN-to-port association is normally intuitive for most users. As in other VLAN approaches, packets within the port-based method do not leak into other VLAN domains on the network. The port is assigned to one and only one VLAN at any time, and no packets from other VLANs will "bleed" into or out of this port.
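The port-based model described above amounts to a static table lookup keyed only on the ingress port. The sketch below (Python, with hypothetical port numbers and VLAN IDs, not any vendor's configuration syntax) illustrates why no complex per-packet lookup is needed:

```python
# Minimal sketch of port-based VLAN membership (hypothetical values).
# Each switch port is statically assigned to exactly one VLAN by the
# administrator; a frame's VLAN is decided solely by its ingress port.

PORT_TO_VLAN = {1: 10, 2: 10, 3: 20, 4: 20}  # port -> VLAN ID

def vlan_for_frame(ingress_port: int) -> int:
    """Classify a frame: the ingress port alone determines the VLAN."""
    return PORT_TO_VLAN[ingress_port]

def same_broadcast_domain(port_a: int, port_b: int) -> bool:
    """A broadcast entering port_a is flooded only to ports in the same VLAN."""
    return PORT_TO_VLAN[port_a] == PORT_TO_VLAN[port_b]

print(vlan_for_frame(1))              # VLAN of a frame arriving on port 1
print(same_broadcast_domain(1, 3))    # ports 1 and 3 are in different VLANs
```

Because the mapping is fixed ahead of time, it can be burned into hardware, which is exactly why the port-based approach is fast.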


Membership by MAC Addresses

The other methods for determining VLAN membership provide more flexibility and are more "user-centric" than the port-based model. However, these methods are implemented in software in the switch and require more processing power and resources within the switches and the network. These solutions require a packet-by-packet lookup that decreases the overall performance of the switch. (Software solutions do not run as fast as hardware/ASIC-based solutions.)

In the MAC-based model, the VLAN assignment is linked to the physical media address, or MAC address, of the system accessing the network. Because all MAC addresses are unique, this approach provides security benefits over the more "open" port-based approach.

From an administrative aspect, the MAC-based approach requires slightly more work, because a VLAN membership table must be created for all of the users within each VLAN on the network. As a user attaches to a switch, the switch must verify the MAC address against a central table and place the user into the proper VLAN.

The network address and user ID approaches are also more flexible than the port-based approach, but they require even more overhead than the MAC-based method, because tables must exist throughout the network for all the relevant network protocols, subnets, and user addresses. With the user ID method, another large configuration/policy table must exist containing all authorized user login IDs. With both of these methods, the switches typically do not have enough resources (CPU, memory) to accommodate such large tables. Therefore, these tables must exist on servers located elsewhere in the network. Additionally, the latencies resulting from the lookup process are more significant in these approaches.

From an administrative aspect, the network and user ID-based approaches require more resources (memory and bandwidth) to maintain distributed tables on several switches or servers throughout the network. These two approaches also require slightly more bandwidth to share this information between switches and servers.
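The MAC-based lookup described above can be sketched as a central membership table that is consulted when a station attaches. The Python below uses made-up MAC addresses and a hypothetical default VLAN for unknown stations:

```python
# Sketch of MAC-based VLAN membership (addresses are made up).
# A central table maps each known MAC address to its VLAN; when a station
# attaches, the switch looks up its MAC and places the port in that VLAN.

MAC_TO_VLAN = {
    "00:1b:2c:3d:4e:5f": 10,   # engineering workstation
    "00:1b:2c:3d:4e:60": 20,   # finance workstation
}

DEFAULT_VLAN = 1  # unknown stations fall into a default VLAN (a policy choice)

def assign_vlan(mac: str) -> int:
    """Per-station lookup: more flexible than port-based membership, but
    every attachment (and, in software switches, every packet) costs a
    table lookup against the central membership database."""
    return MAC_TO_VLAN.get(mac.lower(), DEFAULT_VLAN)

print(assign_vlan("00:1B:2C:3D:4E:5F"))  # membership follows the user, not the port
print(assign_vlan("aa:bb:cc:dd:ee:ff"))  # unknown MAC -> default VLAN
```

The table-size and lookup-latency costs the text mentions come directly from maintaining and consulting this kind of table network-wide.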


Multiple VLANs per Port

When addressing these various methods for implementing VLANs, customers often question the use of multiple VLANs per switch port. Can this be done? Does this make sense?

The means for implementing this type of design is based on using shared hubs off of switch ports. Members using the hub belong to different VLANs, and thus, the switch port must also support multiple VLANs.

While this method does offer the flexibility of making VLANs completely port independent, it also violates one of the general principles of implementing VLANs: broadcast containment. An incoming broadcast on any VLAN would be sent to all hub ports, even those belonging to a different VLAN. The switch, hub, and all end stations have to process this broadcast even if it belongs to a different VLAN. This "bleeding" of VLAN information does not provide true segmentation, nor does it effectively use resources.

Communicating Between VLANs

Another key component of VLANs is the router. Routers provide inter-VLAN communications and are essential for sharing VLAN information in large

environments. The Layer 3 routing capabilities provide additional security between networks (access lists, protocol filtering, and so on).

In general, there are two approaches to using routers as communication points for VLANs:

- Logical connection method — Using ISL within the router, a trunk can be established between the switch and the router. One high-speed port is used, and traffic for multiple VLANs runs across this trunk link. (We'll explain ISL in just a minute.)


- Physical connection method — Multiple independent links are used between the router and the switch. Each link contains its own VLAN. This scenario does not require ISL to be implemented on the router and also allows lower-speed links to be used.

The proper method to implement depends on the customer's needs and requirements. (Does the customer need to conserve router and switch ports? Does the customer need a high-speed ISL port?) In both instances, the router still supports inter-VLAN communication.

Server Connectivity

The network server is another key component of VLANs. Servers provide file, print, and storage services to users throughout the network regardless of VLANs. To optimize their network environments, many customers deploy centralized server farms in their networks.

This eases administration of the servers and Network Operating System, or NOS, significantly. These server farms contain servers that support the entire network, but each server supports a specific VLAN or number of VLANs.

As in the use of routers within VLANs, there are two approaches to using servers as common access points within a VLAN environment:

Logical connection method — Using a server adapter (NIC) running ISL, a trunk can be established between the switch and the server. One high-speed port is used, and information for multiple VLANs runs across this trunk link. This method offers greater flexibility as well as a high-performance solution that is easy to administer (that is, one NIC to set up and monitor). Note: ISL is now supported in several vendors' server NIC cards, such as Intel and CrossPoint. These adapters support up to 64 VLANs per port and cost approximately


US$500.

Physical connection method — Multiple independent links are used between the server and the switch. Each link contains its own VLAN. This method does not require ISL to be implemented on the server and also allows lower-speed links to be used.

The proper method to implement depends on the customer's needs and requirements. (Does the customer need to conserve switch ports? Does the customer need a high-speed ISL port? Does the customer want to use ISL server adapters?) In both methods, the server still supports multiple VLANs.

VLAN Technologies

Let's take a look at some technologies that are essential for VLAN implementations.

Inter-Switch Link

Cisco developed the Inter-Switch Link, or ISL, mechanism to support high-speed trunking between switches, routers, and servers in Fast Ethernet environments.

Cisco's Inter-Switch Link protocol (ISL) enables VLAN traffic to cross LAN segments. ISL is used for interconnecting multiple switches and maintaining VLAN information as traffic goes between switches. ISL uses "packet tagging" to send VLAN packets between devices on the network without impacting switching performance or requiring the use and exchange of complex filtering tables. Each packet is tagged according to the VLAN to which it belongs.


The benefits of packet tagging include manageable broadcast domains that span the campus; bandwidth management functions such as load distribution across

redundant backbone links and control over spanning tree domains; and a substantial cost reduction in the number of physical switch and router ports required to

configure multiple VLANs.

The ISL protocol enables in excess of 1000 VLANs concurrently without requiring any fragmentation or reassembly of packets. Additionally, ISL wraps a 48-byte "envelope" around the packet that handles processing, priority, and quality-of-service, or QoS, features. ISL is not limited to Fast Ethernet/Ethernet packet sizes (1518 bytes) and can accommodate packet sizes up to 16000 bytes, which is appropriate for Token Ring. It is important to understand that ISL (and 802.1Q, a format used by some other vendors, for that matter) are both just packet-tagging formats. Neither sets up a standard for administration.
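ISL's exact envelope is Cisco-proprietary, but its standardized counterpart, 802.1Q, uses a documented 4-byte tag, so packet tagging can be illustrated concretely. This sketch builds and parses an 802.1Q tag on a toy frame; the frame contents below are made up:

```python
import struct

# Sketch of 802.1Q packet tagging (the standardized counterpart to ISL).
# A 4-byte tag is inserted after the source MAC: a 0x8100 TPID, then
# 3 bits of priority, 1 DEI bit, and a 12-bit VLAN ID.

TPID = 0x8100

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag into an untagged Ethernet frame."""
    assert 0 <= vlan_id < 4096 and 0 <= priority < 8
    tci = (priority << 13) | vlan_id          # priority + DEI(0) + VLAN ID
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]      # after dst(6) + src(6) MACs

def vlan_of(frame: bytes) -> int:
    """Recover the VLAN ID from a tagged frame."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID, "frame is not 802.1Q-tagged"
    return tci & 0x0FFF

untagged = bytes(12) + b"\x08\x00" + b"payload"   # toy frame: MACs + EtherType + data
tagged = tag_frame(untagged, vlan_id=42, priority=5)
print(vlan_of(tagged))   # 42
```

The key point holds for both formats: the tag travels with the packet, so switches along the trunk need no filtering tables to know its VLAN.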

VLAN Standardization

While Cisco was first to market with its revolutionary packet tagging schemes for

Fast Ethernet and FDDI, they are proprietary solutions. Other vendors implemented their own unique methods for sharing VLAN information across the network. As a

result, a standards body was created within the IEEE to provide one common VLAN communication standard. This ultimately benefits customers using switches from various vendors in the marketplace.

Within the 802.1Q standard, packet tagging is the exchange vehicle for VLAN information.

Because ISL is so widely deployed in our installed customer base, Cisco will continue to support both ISL and 802.1Q. It is important to note that Cisco's dual-mode support of both methods will be implemented via hardware ASICs, which will provide tremendous performance.

VLAN Standard Implementation


This diagram illustrates a typical customer implementation of the 802.1Q VLAN

standard. This scenario is based upon a customer network composed of two separate campuses based on different vendors‘ technology (Cisco and vendor X).

If the customer already has Cisco switches deployed, it can maintain its use of ISL. Also, it can maintain its use of the VLAN trunking scheme used by vendor X. However, the new joined network must use the 802.1Q standard to share VLAN

information between switches within the campus.

VLAN Trunking Protocol (VTP)

In addition to the ISL packet tagging method, Cisco also created the VLAN Trunking Protocol, or VTP, for dynamically configuring VLAN information across the network regardless of media type (for example, Fast Ethernet, ATM, FDDI, and so on). The VTP protocol is the software that makes ISL usable.

VTP enables VLAN communication from a centralized network management platform,

thus minimizing the amount of administration that is required when adding or changing VLANs anywhere within the network. VTP completely eliminates the need to administer VLANs on a per-switch basis, an essential characteristic as the number of a network's switches and VLANs grows and reaches a point where changes can no longer be reliably administered on individual components. VTP allows for greater scalability because it eliminates complex VLAN administration tasks across every


switch.

Conceptually, VTP works like this: When you add a new VLAN to the network, say VLAN 1, VTP automatically goes out and configures the trunk interfaces across the backbone for that VLAN. This includes the mapping of ISL to LANE or to 802.1Q. Adding a second VLAN is just as easy. VTP sends out new advertisements and maps the VLAN across the appropriate interfaces. The important thing to remember about this second VLAN is that VTP keeps track of the VLANs that already exist and eliminates any cross-configurations between the two, which could easily occur if the configuration were done manually.

- Summary -

- VLANs enable logical (instead of physical) groupings of users on a switch
- VLANs address the needs for mobility and flexibility
- VLANs reduce administrative overhead, improve security, and provide more efficient bandwidth utilization


Lesson 10: Understanding Quality of Service

QoS is important to many network applications. Voice/data integration is not possible without it. Nor is effective multimedia, or even VPNs. In this module, we'll discuss what QoS is and some of its building blocks. We'll also look at some specific examples of how QoS can be used.

The Agenda

- What Is QoS?

- QoS Building Blocks

- QoS in Action

What Is Quality of Service (QoS)?

Basically, QoS comprises the mechanisms that give network managers the ability to

control the mix of bandwidth, delay, variances in delay (jitter), and packet loss in the network in order to deliver a network service such as voice over IP; define different service-level agreements (SLAs) for divisions, applications, or organizations; or simply

prioritize traffic across a WAN.

QoS provides the ability to prioritize traffic and allocate resources across the network to ensure the delivery of mission-critical applications, especially in heavily loaded environments. Traffic is usually prioritized according to protocol.

So what does this really mean...

An analogy is the carpool lane on the highway. For business applications, we want to give high priority to mission-critical applications. All other traffic can receive equal

treatment.


Mission-critical applications are given the right of way at all times. Multimedia applications take a lower priority. Bandwidth-consuming applications, such as file

transfers, can receive an even lower priority.

What Is Driving the Need for QoS?

There are two broad application areas that are driving the need for QoS in the network:

- Mission-critical applications need QoS to ensure delivery and that their traffic is not impacted by misbehaving applications using the network.

- Real-time applications such as multimedia and voice need QoS to guarantee

bandwidth and minimize jitter. This ensures the stability and reliability of existing applications when new applications are added.

Voice and data convergence is the first compelling application requiring delay-sensitive traffic handling on the data network. The move to save costs and add new features by converging the voice and data networks (using voice over IP, VoFR, or VoATM) has a number of implications for network management:

- Users will expect the combined voice and data network to be as reliable as the voice network: 99.999% availability

- To even approach such a level of reliability requires a sophisticated management capability; policies come into play again

So what are mission-critical applications?

Enterprise Resource Planning (ERP) applications:

- Order entry
- Finance
- Manufacturing
- Human resources
- Supply-chain management
- Sales-force automation

What else is mission critical?

- SNA applications
- Selected physical ports
- Selected hosts/clients


QoS Benefits

QoS provides tremendous benefits. It allows network managers to understand and control which resources are being used by applications, users, and departments.

It ensures the WAN is being used efficiently by the mission-critical applications and that other applications get "fair" service, but take a back seat to mission-critical traffic.

It also provides an infrastructure that delivers the service levels needed by new mission-critical applications, and lays the foundation for the "rich media" applications of today and tomorrow.

Where Is QoS Important?

QoS is required wherever there is congestion. QoS has been a critical requirement for the WAN for years. Bandwidth, delay, and delay variation requirements are at a premium in the wide area.

LAN QoS requirements are emerging with the increased reliance on mission-critical applications and the growing popularity of voice over the LAN and WAN.

The importance of end-to-end QoS is increasing due to the rapid growth of intranets and extranet applications that have placed increased demands on the entire network.

QoS Example

Hopefully this image provides a little context. It demonstrates a real example of how QoS could be used to manage network applications.


QoS Building Blocks

Let's now take a look at some of the building blocks of QoS.

There are a wide range of QoS services. Queuing, traffic shaping, and filtering are essential to traffic prioritization and congestion control, determining how a router or switch handles incoming and outgoing traffic. QoS signaling services determine how network nodes communicate to deliver the specific end-to-end service required by applications, flows, or sets of users. Let's take a look at a few of these.

Classification

- IP Precedence
- Committed Access Rate (CAR)
- Diff-Serv Code Point (DSCP)
- IP-to-ATM Class of Service
- Network-Based Application Recognition (NBAR)
- Resource Reservation Protocol (RSVP)

Policing

- Committed Access Rate (CAR)
- Class-Based Weighted Fair Queuing (CBWFQ)
- Weighted Fair Queuing (WFQ)

Shaping

- Generic Traffic Shaping (GTS)
- Distributed Traffic Shaping (DTS)
- Frame Relay Traffic Shaping (FRTS)

Congestion Avoidance

- Weighted Random Early Detection (WRED)
- Flow-Based WRED (Flow RED)

Congestion Management — Fancy Queuing

Weighted fair queuing is another queuing mechanism that ensures high priority for

sessions that are delay sensitive, while ensuring that other applications also get fair treatment.

For instance, in the Cisco network, Oracle SQLnet traffic, which consumes relatively low bandwidth, jumps straight to the head of the queue, while video and HTTP are


serviced as well. This works out very well because these applications do not require a lot of bandwidth as long as they meet their delay requirements.

A sophisticated algorithm looks at the size and frequency of packets to determine

whether a specific session has a heavy traffic flow or a light traffic flow. It then treats the respective queues of each session accordingly.

Weighted fair queuing is self-configuring and dynamic. It is also turned on by default when routers are shipped.

Other options include:

- Priority queuing assigns different priority levels to traffic according to traffic types or source and destination addresses. Priority queuing does not allow any traffic of a lower priority to pass until all packets of high priority have passed. This works

very well in certain situations. For instance, it has been very successfully implemented in Systems Network Architecture (SNA) environments, which are very sensitive to delay.

- Custom queuing provides a guaranteed level of bandwidth to each application, in

the same way that a time-division multiplexer (TDM) divides bandwidth among channels. The advantage of custom queuing is that if a specific application is not using all the bandwidth it is allotted, other applications can use it. This assures

that mission-critical applications receive the bandwidth they need to run efficiently, while other applications do not time out either.

This has been implemented especially effectively in applications where SNA leased lines have been replaced, to provide guaranteed transmission times for very time-sensitive SNA traffic.

What does "no bandwidth wasted" mean? Traffic loads are redirected when and if space becomes available. If there is space and there is traffic, the bandwidth is used.
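The custom-queuing idea above (TDM-like guaranteed shares, with unused bandwidth reclaimed by other queues) can be sketched as a byte-count round-robin. The queue names, packet sizes, and byte allowances below are purely illustrative:

```python
from collections import deque

# Sketch of custom-queuing-style scheduling (hypothetical byte counts).
# Each queue gets a byte allowance per round, which approximates a
# guaranteed bandwidth share; if a queue is empty, the round simply
# moves on, so unused bandwidth is available to the other queues.

queues = {
    "sna": deque([b"x" * 100] * 3),   # delay-sensitive SNA traffic
    "web": deque([b"y" * 400] * 3),
}
BYTE_COUNT = {"sna": 200, "web": 400}  # per-round allowance (the "weight")

def one_round(order=("sna", "web")):
    """Serve each queue up to its byte allowance; return packets sent."""
    sent = []
    for name in order:
        budget = BYTE_COUNT[name]
        while queues[name] and budget > 0:
            pkt = queues[name].popleft()
            budget -= len(pkt)          # a packet in flight is not split
            sent.append((name, len(pkt)))
        # empty or exhausted queue: remaining capacity goes to the next one
    return sent

print(one_round())  # SNA gets its share first, then web traffic
```

The ratio of the byte counts approximates the bandwidth split, yet an idle queue never wastes its share, which is the "no bandwidth wasted" property the text describes.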


Random Early Detection (RED)

Random Early Detection (RED) is a congestion avoidance mechanism designed for

packet switched networks that aims to control the average queue size by indicating to the end hosts when they should temporarily stop sending packets. RED takes

advantage of TCP's congestion control mechanism. By randomly dropping packets prior to periods of high congestion, RED tells the packet source to decrease its transmission rate.

Assuming the packet source is using TCP, it will decrease its transmission rate until all the packets reach their destination, indicating that the congestion is cleared. You

can use RED as a way to cause TCP to back off traffic. TCP not only pauses, but it also restarts quickly and adapts its transmission rate to the rate that the network

can support. RED distributes losses in time and maintains normally low queue depth while

absorbing spikes. When enabled on an interface, RED begins dropping packets when congestion occurs at a rate you select during configuration.

RED is recommended only for TCP/IP networks. RED is not recommended for protocols, such as AppleTalk or Novell NetWare, that respond to dropped packets by retransmitting the packets at the same rate.
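The RED behavior described above can be sketched as an exponentially weighted moving average (EWMA) of queue depth driving a linear drop probability. The thresholds and weight below are illustrative, not recommended values:

```python
import random

# Sketch of the RED drop decision (threshold values are illustrative).
# RED tracks an exponentially weighted average queue depth; between a
# minimum and a maximum threshold the drop probability rises linearly,
# signalling TCP senders to slow down before the queue overflows.

MIN_TH, MAX_TH, MAX_P, WEIGHT = 20, 60, 0.10, 0.002

avg = 0.0  # EWMA of the instantaneous queue depth

def should_drop(queue_depth: int) -> bool:
    global avg
    avg = (1 - WEIGHT) * avg + WEIGHT * queue_depth
    if avg < MIN_TH:
        return False                      # no congestion: never drop
    if avg >= MAX_TH:
        return True                       # severe congestion: always drop
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p            # early, probabilistic drop

print(should_drop(5))   # lightly loaded queue: False
```

Because drops are random and spread out in time, different TCP flows back off at different moments, which is how RED avoids the global synchronization mentioned later.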

Weighted RED

Cisco's implementation of RED, called Weighted Random Early Detection (WRED),

combines the capabilities of the RED algorithm with IP Precedence. This combination provides for preferential traffic handling for higher priority packets. It can selectively

discard lower priority traffic when the interface begins to get congested, and provide differentiated performance characteristics for different classes of service. WRED differs from other congestion management techniques such as queuing strategies

because it attempts to anticipate and avoid congestion rather than controlling congestion once it occurs.


WRED is useful on any output interface where you expect congestion. However, WRED is usually used in the core routers of a network, rather than at the network's edge. Edge routers assign IP precedences to packets as they enter the network, and WRED uses these precedences to determine how it treats different types of traffic. WRED provides separate thresholds and weights for different IP precedences, allowing you to provide different qualities of service for different traffic. Standard traffic may be dropped more frequently than premium traffic during periods of congestion. Let's take a look at how WRED works.

By randomly dropping packets prior to periods of high congestion, WRED tells the packet source to decrease its transmission rate. Assuming the packet source is using TCP, it decreases its transmission rate until all the packets reach their destination, indicating that the congestion is cleared. WRED generally drops packets selectively based on IP Precedence: packets with a higher IP Precedence are less likely to be dropped than packets with a lower precedence, so higher-priority traffic is delivered with a higher probability than lower-priority traffic. However, you can also configure WRED to ignore IP Precedence when making drop decisions, so that non-weighted RED behavior is achieved. WRED is also RSVP-aware, and can provide the integrated services controlled-load QoS service.

WRED reduces the chances of tail drop by selectively dropping packets when the output interface begins to show signs of congestion. By dropping some packets early

rather than waiting until the buffer is full, WRED avoids dropping large numbers of packets at once and minimizes the chances of global synchronization. Thus, WRED

allows the transmission line to be used fully at all times. In addition, WRED statistically drops more packets from large users than from small ones. Therefore, traffic sources that generate the most traffic are more likely to be slowed down than traffic sources that generate little traffic.
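WRED's weighting can be sketched as one RED profile per IP Precedence, so lower-precedence traffic hits its drop thresholds sooner. The profile numbers below are illustrative only:

```python
# Sketch of WRED's precedence weighting (threshold values illustrative).
# Each IP Precedence gets its own RED profile: lower-precedence traffic
# starts being dropped at a smaller average queue depth, so premium
# traffic is dropped less often during congestion.

PROFILES = {            # precedence: (min_th, max_th, max_p)
    0: (10, 40, 0.20),  # standard traffic: dropped earliest and hardest
    5: (30, 60, 0.05),  # premium traffic (e.g. voice): protected longest
}

def drop_probability(avg_queue: float, precedence: int) -> float:
    """RED drop curve selected by the packet's IP Precedence."""
    min_th, max_th, max_p = PROFILES[precedence]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# At the same average queue depth, standard traffic is far likelier to drop:
print(drop_probability(25, 0))   # standard packet: already being dropped
print(drop_probability(25, 5))   # premium packet: still fully protected
```

This per-precedence selection is the entire difference between WRED and plain RED; the averaging and random drop machinery is unchanged.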


QoS Signaling: Resource Reservation Protocol

RSVP is the first significant industry-standard protocol for dynamically setting up end-to-end QoS across a heterogeneous network. RSVP provides transparent operation through routers that do not support RSVP.

Explained simply, RSVP is the ability for an end station or host to request a certain level of QoS across a network. RSVP carries the request through the network, visiting each node that the network uses to carry the stream. At each node, RSVP attempts to

make a resource reservation for the data stream. RSVP is designed to utilize the robustness of current IP routing algorithms. This protocol does not perform its own

routing; instead, it uses underlying routing protocols to determine where it should carry reservation requests.

Example: No Quality of Service

Here's an example of how RSVP works. Let's first look at what the problem would be without RSVP.

In this example, the video traffic still gets through, but it is impacted by a large file transfer in progress. This degrades the quality of the video, and the picture comes out jittery.


What we need is a method to reserve bandwidth from end-to-end on a per-application basis. RSVP can do this.

This figure explains how RSVP actually works. RSVP reserves bandwidth from end-to-end on a per-application basis for each user.

This is especially important for delay-sensitive applications, such as video.

As shown here, with RSVP, the client's application requests that bandwidth be reserved at each of the network elements on the path. These elements reserve the requested bandwidth using priority and queuing mechanisms.

Once the server receives the OK, bandwidth has been reserved across the whole path, and the video stream can start being transmitted. RSVP ensures clear video reception.

The good news is that RSVP is becoming widely accepted by industry leaders, such as Microsoft and Intel, who are implementing RSVP support in their applications. These applications include Intel's ProShare and Microsoft's NetShow. To provide support on the network, Cisco routers also run RSVP.

End-to-End QoS

End-to-end QoS is essential. The following image provides a context for the different QoS features we looked at.


QoS in Action

Example 1: Prioritization of IP Telephony


Example 2: ERP Application

- SUMMARY -

The goal of QoS is to provide better and more predictable network service by

providing dedicated bandwidth, controlled jitter and latency, and improved loss characteristics. QoS achieves these goals by providing tools for managing network congestion, shaping network traffic, using expensive wide-area links more efficiently,

and setting traffic policies across the network.

- QoS provides guaranteed availability
- Prioritization of mission-critical versus noncritical applications
- Interactive and time-sensitive applications
- Voice, video, and data integration
- Key QoS building blocks:
  - Classification
  - Policing
  - Shaping
  - Congestion avoidance


Lesson 11: Security Basics

Welcome to Lesson 11. Our goal here is to give you the terminology, the words that your customers are going to expect you to know and be able to converse with.

The Agenda

- Why Security?
- Security Technology
  - Identity
  - Integrity
  - Active Audit

All Networks Need Security

Security is very important. The Internet is a wonderful tool. Meteoric growth like Cisco's, from nowhere to a multi-billion-dollar company in a decade, would not be possible without leveraging the tools available with the Internet and intranets.

But without well-defined security, the Internet can be a dangerous place. The good news is that the tools are available to make the Internet a safe place for your business. Some people think that only large sites are hacked. In reality, even small company sites are hacked.

There's a false impression among many small company owners that, "Hey, who would want to break into my company? I'm a nobody. I'm not a big corporation like IBM or the Pentagon, so why would somebody want to break into my company?" The reality is that even small companies are hacked into very, very often.

Why Security?

Why network security? There are three primary reasons to explore network security:

- Policy vulnerabilities
- Configuration vulnerabilities
- Technology vulnerabilities

And the bottom line is that there are people who are willing and eager to take advantage of these vulnerabilities.

Security Threats


So these are some of the different things that we need to protect against:

Loss of privacy: Without encryption, every message sent may be read by an unauthorized party. This is probably the largest inhibitor of business-to-business communications today.

Impersonation: You must also be careful to protect your identity on the Internet. Many security systems today rely on IP addresses to uniquely identify users. Unfortunately, this system is quite easy to fool and has led to numerous break-ins.

Denial of service: And you must ensure that your systems are available. Over the last several years, attackers have found deficiencies in the TCP/IP protocol suite that allow them to arbitrarily cause computer systems to crash.


Loss of integrity: Even for data that is not confidential, you must still take measures to ensure data integrity. For example, if you were able to securely identify yourself to your bank using digital certificates, you would still want to ensure that the transaction itself is not modified in some way, such as by changing the amount of the deposit.

Security Objective: Balance Business Needs with Risks

Objectives for security need to balance the risks of providing access with the need to protect network resources. Creating a security policy involves evaluating the risks, defining what‘s valuable, and determining whom you can trust. The security policy

plays three roles to help you specify what must be done to secure company assets.

-It specifies what is being protected and why, and the responsibility for that protection. -It provides grounds for interpreting and resolving conflicts in implementation,

without listing specific threats, machines, or individuals. A well-designed policy does not change much over time.

- It addresses scalability issues.

Employees expect access, but an enterprise requires security. It is important to plan

with scalability and deployment of layered technologies in mind. Security policies that inhibit productivity may be too restrictive.


Security Technology

Security technology typically falls into one of three categories.

Identity: Links user authentication and authorization on the network infrastructure; verifies the identity of those requesting access and prescribes what users are allowed to do.

Integrity: Provides data confidentiality through firewalls, management control, routing, privacy and encryption, and access control.

Active Audit: Provides data on network activities and assists network administrators to account for network usage, discover unauthorized activities, and scan the network for security vulnerabilities.

Identity

Let‘s start by looking at some Identity technologies. Again, identity is the recognition

of each individual user and the mapping of their identity, location, and time to policy, along with the authorization of their network services and what they can do on the network.

Why is identity important? With IP addresses no longer being static (because of exhaustion of address space) and with solutions such as NAT and DHCP, etc., people are no longer tied to addresses. Ideally, we should be able to gain appropriate access

based on who we are.

Identity can be determined by a number of technologies — user name and password, token card, digital certificate—each can be configured for a policy setting that

indicates the degree of trust. Administrators can also configure access by time of day—identity authorizations can

also include a time metric for future time-based access capability.

The key to centralized identity and security policy management is the "combination" of all key authentication mechanisms, from SecurID and DES Dial cards to MS Login, and their internetworking with one common identity repository.

To truly be centralized and configured once only, the identity mechanism must also be media independent; equally applicable to dial-users and campus users for example.

Let‘s look at some of these technologies.


Username/Password

For basic security, user IDs and passwords can be used to authenticate remote users.

First, a remote user dials into the network access server. The NAS, or network access

server, negotiates data link setup with the user using (most likely) PPP. As part of this negotiation, the user must send a password to the NAS. This is usually handled

by either the PAP or CHAP protocols, which we‘ll cover in more detail in a little bit. Next, the NAS forwards the user‘s password to a AAA server to verify that it is

legitimate. The protocol used between the NAS and AAA server is (most likely) either TACACS+ or RADIUS. I‘ll be covering these protocols in more detail in a minute.

When the AAA server gets the user id and password, it checks its database of legitimate users and looks for a match. If a match is found, the AAA server sends the

NAS a call accept message. If not, the AAA server sends the NAS a call reject message. If the call is accepted, the user is connected to the campus network.
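The accept/reject decision on the AAA server can be sketched in a few lines. This is a hypothetical illustration, not any vendor's AAA server: the salted-hash password storage is an assumed good practice, and all names and credentials are invented.

```python
import hashlib
import hmac
import os

# Hypothetical AAA user database: username -> (salt, salted password hash).
# Storing only hashes means a stolen database does not reveal the passwords.
def make_entry(password):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

USER_DB = {"mary": make_entry("s3cret")}

def authenticate(username, password):
    """Return the AAA server's answer: 'accept' or 'reject'."""
    entry = USER_DB.get(username)
    if entry is None:
        return "reject"
    salt, stored = entry
    candidate = hashlib.sha256(salt + password.encode()).digest()
    # compare_digest avoids leaking information through comparison timing.
    return "accept" if hmac.compare_digest(candidate, stored) else "reject"

print(authenticate("mary", "s3cret"))   # accept
print(authenticate("mary", "wrong"))    # reject
```

In the flow described above, the NAS would send the username/password pair to this server over TACACS+ or RADIUS and relay the accept/reject answer back to the dial-in user.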

PAP and CHAP Authentication

Now let‘s back up for a minute and explain a little more about the process of dial in

connections.

Many of you have probably heard of PPP (Point-to-Point Protocol) before. PPP is used primarily on dial-in connections since it provides a standard mechanism for passing


authentication information such as a password from a remote user to the NAS. Two protocols are supported to carry the authentication information: PAP (Password

Authentication Protocol) and CHAP (Challenge/Handshake Authentication Protocol). These protocols are well documented in IETF RFCs and widely implemented in

vendor products. PAP provides a simple password protocol. User ID and password are sent at the beginning of the call, then validated by the access server using a central PAP

database. The PAP password database is encrypted, but the password is sent in clear text through the public network. A AAA server may be used to hold the password database.

The problem with PAP is that it is subject to sniffing and replay attacks. A hacker could

intercept the communication and use the information to spoof a legitimate user. CHAP provides an improved authentication protocol. The Access Server periodically

challenges remote access devices such as a router to provide a proper password. The initial CHAP authentication is performed during login; the network administrator can

specify the rate of subsequent authentications. These repeated challenges limit the window of exposure for any single attack. The password is not sent in clear text. Both sides can use the challenge/response mechanism supported by CHAP to authenticate the device at

the other end.
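The CHAP computation itself is small enough to show directly. This sketch follows the RFC 1994 formula (MD5 over the message identifier, the shared secret, and the challenge); the secret and variable names are invented for illustration.

```python
import hashlib
import os

SHARED_SECRET = b"our-shared-secret"   # known to both NAS and remote device

def chap_response(identifier, secret, challenge):
    # Per RFC 1994, the response is MD5(identifier || secret || challenge).
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# NAS side: issue a fresh random challenge for each authentication round.
ident, challenge = 7, os.urandom(16)

# Remote device side: compute the response using the shared secret.
response = chap_response(ident, SHARED_SECRET, challenge)

# NAS side: recompute and compare; the secret itself never crosses the wire.
ok = response == chap_response(ident, SHARED_SECRET, challenge)
print(ok)  # True
```

Because each challenge is random and fresh, a sniffed response is useless for replay against a later challenge, which is exactly the weakness of PAP that CHAP closes.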

One-Time Password

For a more restrictive security policy, a one-time password would be used.

One-time passwords are a unique combination of something a person knows (like a

PIN or password) and something a person possesses (like a token card). A one-time password is more secure than a simple password since it changes every time the user tries to login, and it can only be used once—therefore, it is safe against

spoofing and replay attacks. There are three commonly used ways to create one-time passwords:

- Token cards are the most common way. The two most common token cards are the


SecurID card by Security Dynamics and the DES Gold card by Enigma Logic. In one, the user enters a PIN into the card and the card displays the one-time

password, which the user types in at their terminal. In the other, the user appends a PIN to the random number displayed on the token card, and enters this

new password at their terminal. - Soft tokens are the same as token cards except the user doesn‘t have to carry

around a physical card. Software runs on the user‘s PC that performs the same function as the token card, and the user need only enter a PIN.

- S-key is a PC application that presents a dialog box to the user upon login into which the user must enter the correct combination of six key words.

The process used to send the one-time password to the NAS is virtually the same as that used for the password example described in the previous slide. When the NAS

receives the one-time password, it forwards it to the AAA server using either TACACS+ or RADIUS protocol. When the AAA server receives the one-time password,

it forwards it to a token server for authentication. The accept or reject message flows back to the NAS through the AAA server.
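The token cards described above use vendor-specific algorithms, but the same one-time-password idea was later standardized as HOTP (RFC 4226). As an illustration of how a changing, single-use password can be derived from a shared secret and a counter, here is a minimal HOTP sketch using the secret from the RFC's published test vectors.

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    # HMAC-SHA1 over the big-endian 8-byte counter (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # RFC 4226 test secret
print(hotp(secret, 0))  # 755224 (RFC 4226 test vector)
print(hotp(secret, 1))  # 287082
```

Each counter value yields a different password, and the server advances its counter after a successful login, so a captured value cannot be replayed.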

Authentication, Authorization, and Accounting (AAA)

We‘ve mentioned AAA servers. What does this mean? AAA stands for authentication, authorization, and accounting.

Authentication provides exact end-user verification: I need to know exactly who this person is and how they can prove it to me.

Authorization is the second step. Now that I know who you are, what can you do? I need to assign IP addresses, provide routes, and block access to certain resources. All the

things I can do to a local user, I should be able to control with a remote user. Accounting is the last step. I need to create an accurate record of the transactions of

this user. How long were they connected? How much data did they FTP? What was the cause of their disconnection? This allows me not only to bill my customers

accurately, but understand my user base.


AAA Services

A AAA server provides a centralized security database that offers per-user access

control. It supports protocols such as TACACS+ and RADIUS, which we‘ll discuss in a minute, as well as services such as:

- Per-user access lists: load per-user ACLs after authentication
- Per-user static routes
- Lock-and-Key
- AutoCommand: links the user to a user profile so preferences take effect; adds efficiency and provides limits on their access and use

RADIUS

RADIUS is an access server authentication and accounting protocol that has gained wide support.

The RADIUS authentication server maintains user authentication and network access information. RADIUS clients run on access servers and send authentication requests to the RADIUS authentication server.


TACACS+ Authentication

With TACACS authentication, when a user requests to log in to a terminal server or a router, the device will ask for a user login name and password. The device will then

send a request for validation to the TACACS server in its configuration. The server will validate the login and password pair against a TACACS password file. If the name and password are validated, the login is successful.

There are two flavors of TACACS: the original TACACS and an extended version, TACACS+. The primary difference between the two is that TACACS+ provides more

information when a user logs in, thus allowing more control than the original TACACS.

Lock-and-Key Security

Lock and Key challenges users to respond to a login and password prompt before

loading a unique access list into the local or remote router.

In this example, Lock and Key security allows only authorized users to access services beyond the firewall at the corporate site.


Calling Line Identification

Caller ID is another security mechanism for dial-in access. It allows routers to look at the ISDN number of a calling device and compare it with a list of known callers. If the

number is not in the list, the call is rejected and no charges are incurred by the calling party.

User Authentication with Kerberos

Kerberos is another technology. It is one that has been broken into historically; however, it provides a good level of security. With Kerberos you create a ticket that‘s

going to have a specific time allocated to it.

So with Kerberos, once a ticket is issued to me, the knowledge that that ticket was

sent plus my login itself is going to ensure that I have access to that system. The tickets, or credentials, are issued by a trusted Kerberos server that you log on to with some specific ID that you have.
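The time window attached to each ticket is the key property here. As a purely illustrative sketch (not the real Kerberos protocol, which involves encrypted tickets and several exchanges with the key distribution center), a ticket's validity check might be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    principal: str
    issued_at: float   # time the trusted server issued the ticket (seconds)
    lifetime: float    # seconds the ticket remains valid

    def valid(self, now):
        # A ticket is only honored inside its allotted time window, so a
        # stolen ticket has a limited useful life.
        return self.issued_at <= now < self.issued_at + self.lifetime

# The trusted Kerberos server issues a ticket with a specific time allocated.
t = Ticket("mary@EXAMPLE.COM", issued_at=0.0, lifetime=8 * 3600)
print(t.valid(now=3600.0))    # True: within the 8-hour window
print(t.valid(now=9 * 3600))  # False: the ticket has expired
```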

How Public Key Works

You‘ll hear a term called a Public Key. This is how a Public Key works. A Public Key works in conjunction with something called a Private Key.


This is technology that was actually developed back in the ‘70s. The Private Key is going to be something that you‘re going to keep to yourself. The Private Key is going to be something that exists perhaps on your PC or perhaps

as a piece of code that you have. A Public Key is going to be something that you publish to the outside world. What

you‘ll do is take your document and send it out with your Public Key that‘s going to be able to be accessed by a user that‘s going to receive your document, but you‘re

going to encrypt it using your Private Key. So by using these two things together, another user that‘s going to receive your

document can utilize your Public Key to ensure that, in fact, the document that you send is the document that you thought it was.

So the two keys together, in essence, create a unique key, something that‘s uniquely known by the combination of the private and the Public Key.

Digital Signatures

Now, Digital Signatures take us a little bit further. With Digital Signatures, what we‘re going to do is take the original document and compute a small digest of it, called a hash,

and then sign that hash with the Private Key. The result is another unique, smaller document: the Digital Signature.

Now, that signature is going to be sent along with the original, and your Public Key is going to be able to be used to check it against the document. If that check

succeeds, then you know the integrity of the original document is in place.


So here we‘ve verified both the user that sent the document and the document itself, confirming it is, in fact, the document we thought was sent out. In this way, we know that the document hasn‘t been

altered.
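The hash-then-sign flow can be shown with a toy example. This uses deliberately tiny textbook RSA numbers, so it is wildly insecure and for illustration only; real systems use keys of thousands of bits and padded signature schemes.

```python
import hashlib

# Toy RSA key pair with tiny primes -- illustration only, never use in practice.
p, q = 61, 53
n = p * q            # 3233, the public modulus
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % 3120 == 1

def sign(document):
    # Hash the document, then "encrypt" the hash with the private key.
    h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(h, d, n)

def verify(document, signature):
    # Anyone holding the public key (e, n) can recover the signed hash and
    # compare it against a freshly computed hash of the document.
    h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
    return pow(signature, e, n) == h

doc = b"Deposit $100 to account 42"
sig = sign(doc)
print(verify(doc, sig))  # True
# A tampered document hashes differently, so verification then fails.
```

The two checks the lesson describes fall out of this structure: the signature proves who held the private key, and the hash comparison proves the document was not altered in transit.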

Certificate Authority

You might want to ensure that important documents come with some kind of encryption or digital signatures so you know they are exactly what the sender

intended. A Certificate Authority allows you to do just that. It relies on a trusted third party to issue certificates that ensure you are who you say you are.

Why would you want a third party to do that? Well, there‘s a number of reasons. One may be cost. Maybe it‘s more cost effective to have a third party do it rather than

issue Certificate Authority yourself. But another reason is if you‘re involved with third parties. Say I‘m a manufacturer and I have a supplier. Well, that same supplier may

issue supplies to a competitor of mine. So I don‘t want to issue certificates from my corporate database to the supplier

because it could be used maliciously by somebody at my competitor‘s site. So I want a trusted third party; somebody that everybody trusts equally. So the Certificate

Authority will verify identity. It knows who all the different players are, and it will sign the digital certificate containing the device‘s Public Key. So this becomes the equivalent of an ID card. Now, there are a number of different vendors in

this space. These include VeriSign, Entrust, Netscape, and Baltimore Technologies.


Network Address Translation

Let‘s explore another methodology of making sure that your system is safe. This is different than the other ones we‘ve been touching on. Network Address Translation

means security through obscurity. It means by not advertising my IP address to the outside world, I can ensure that nobody can come in and pretend that they‘re me or

pretend that they‘re somebody trusted to me. So the way that that would work is your device, it might be a firewall, might be a

router, is going to have a pool of IP addresses that you want to utilize to go to the outside world. So whatever the address is on the inside, it‘s never seen. It‘s always changed when it gets to whatever your perimeter device is.

So through Network Address Translation we can provide increased security.

In addition to Network Address Translation, there‘s another technology you‘ll hear about called port address translation. With port address translation, that particular

device, be it a router or a firewall, puts all of its requests out to the outside world along one single IP address, which is the only address the outside world will see.

The way it does that is by putting the different requests on a different port number, keeping track of that information, and changing the port number when it comes

back. The reason that you might want to implement port address translation is if you have difficulty getting enough IP addresses for all of the users on your network.

There can be some limitations. For example, many multimedia applications require multiple ports on a single IP address. So it may not be appropriate for every

installation.
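The port-tracking behavior described above can be sketched as a simple translation table. The public address, the port pool, and the class name here are all invented for illustration; real PAT devices also track protocol, timeouts, and much more.

```python
import itertools

PUBLIC_IP = "203.0.113.1"   # the one address the outside world ever sees

class PortAddressTranslator:
    def __init__(self):
        self._next_port = itertools.count(20000)  # pool of outside ports
        self.out_map = {}   # (inside_ip, inside_port) -> outside_port
        self.in_map = {}    # outside_port -> (inside_ip, inside_port)

    def outbound(self, inside_ip, inside_port):
        """Rewrite an outgoing packet's source to the shared public address."""
        key = (inside_ip, inside_port)
        if key not in self.out_map:
            port = next(self._next_port)
            self.out_map[key] = port
            self.in_map[port] = key
        return PUBLIC_IP, self.out_map[key]

    def inbound(self, outside_port):
        """Map a returning packet back to the inside host, or drop it."""
        return self.in_map.get(outside_port)  # None = no translation, dropped

pat = PortAddressTranslator()
src = pat.outbound("10.0.0.5", 51000)
print(src)                  # ('203.0.113.1', 20000)
print(pat.inbound(src[1]))  # ('10.0.0.5', 51000)
print(pat.inbound(9999))    # None -- unsolicited traffic has no entry
```

Because inside addresses appear nowhere in outgoing packets, this also delivers the "security through obscurity" benefit described for NAT.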

Integrity

Let's look at some of the different integrity solutions.

Integrity—Network Availability

One of the functions of integrity is making sure the network is up. You need to guarantee that data in fact gets where it‘s supposed to go. This is job one! Your network

isn‘t worth a thing if your routers go down. If network infrastructure isn‘t reliable, business doesn‘t happen. Let‘s look at a few features.


TCP Intercept

TCP Intercept is designed to prevent a SYN flooding Denial of Service attack by tracking, optionally intercepting and validating TCP connection requests. A SYN

flooding attack involves flooding a server with a barrage of requests for connection. However, since these messages have invalid return addresses, the connections can

never be established. The resulting volume of unresolved open connections eventually overwhelms the server and can cause it to deny service to valid requests. TCP Intercept is capable of operating in two different modes - intercept mode and monitor

mode. When used in intercept mode (the default setting), it checks for incoming TCP connection requests and will proxy-answer on behalf of the destination server to ensure that the request is valid before connecting to the server. In monitor mode,

TCP Intercept passively watches the connection requests flowing through, and, if a connection fails to get established in a configurable interval, it will intervene and

terminate the connection attempt.
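The monitor-mode behavior can be sketched as a table of half-open connections plus a reaper. This is a hypothetical simplification of the mechanism described above; a real router would also send a reset to the server when it tears a connection down.

```python
TIMEOUT = 30.0  # seconds a SYN may stay unanswered (configurable on routers)

half_open = {}  # (client, server) -> time the SYN was first seen

def on_syn(client, server, now):
    # A new connection request: start the clock on it.
    half_open[(client, server)] = now

def on_ack(client, server):
    # Handshake completed: the connection is no longer half-open.
    half_open.pop((client, server), None)

def reap(now):
    """Terminate attempts that never completed (likely spoofed sources)."""
    stale = [k for k, t in half_open.items() if now - t > TIMEOUT]
    for key in stale:
        del half_open[key]   # a real router would also RST the server side
    return stale

on_syn("198.51.100.9", "10.1.1.80", now=0.0)   # legitimate client
on_syn("203.0.113.66", "10.1.1.80", now=1.0)   # spoofed source, never ACKs
on_ack("198.51.100.9", "10.1.1.80")            # legitimate handshake finishes
print(reap(now=40.0))  # [('203.0.113.66', '10.1.1.80')]
```

Bounding the lifetime of half-open connections is what keeps a SYN flood from exhausting the server's connection table.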

Route Authentication

A common hacking technique is to instruct devices to send traffic along an alternate route, a less secure route, that opens up a doorway for the hacker to get in.

Route authentication enables routers to identify one another and verify each other‘s legitimacy before accepting route updates. So route authentication ensures that you have trusted devices talking to trusted devices.
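One common way to implement this kind of neighbor verification is a keyed digest attached to every routing update. Real routing protocols typically use MD5- or SHA-based authentication; the HMAC-SHA256 choice, the key, and the update format below are all illustrative.

```python
import hashlib
import hmac

# Hypothetical shared key configured on every trusted router in the domain.
SHARED_KEY = b"routing-domain-key"

def sign_update(update):
    # The sender attaches a keyed digest computed over the update contents.
    return hmac.new(SHARED_KEY, update, hashlib.sha256).digest()

def accept_update(update, digest):
    # The receiver recomputes the digest; a forged update from a device
    # without the key fails verification and is discarded.
    expected = hmac.new(SHARED_KEY, update, hashlib.sha256).digest()
    return hmac.compare_digest(expected, digest)

update = b"10.1.0.0/16 via 192.168.1.1 metric 2"
tag = sign_update(update)
print(accept_update(update, tag))                      # True: trusted peer
print(accept_update(b"10.1.0.0/16 via 6.6.6.6", tag))  # False: forged route
```

Only devices holding the shared key can produce a valid digest, so trusted devices end up talking only to trusted devices, as the lesson puts it.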

Integrity—Perimeter Security

Integrity also means ensuring the safety of the network devices and the flows of

information between them, including payload data, configuration and configuration updates.


Everyone is connecting to the Internet, so networks are vulnerable: you need to defend your perimeters. There are several kinds of network perimeter, and you may

need some kind of firewall protection at each perimeter access point to reflect your security policy. Perimeter security gives customers the ability to leverage the Internet

as a business resource, while protecting internal resources. The key to network integrity is that it be implemented across all types of devices with

full internetworking, so that every device in the network can participate and not be a weak link in the security implementation chain.

Let‘s look at some of these technologies.

Access Lists

So Access Control Lists are often the first line of defense. Security is a multi-step

process, and Access Control Lists can play an important part in it. Standard Access Control Lists can filter on addresses.

So you can say, "Hey, I don't want traffic from particular places," maybe people that are known spammers or something like that. It may be anything. It's not part of your

extranet. So you can do permits and denies on an entire protocol suite. Maybe you don't want to see a particular class of service flowing through this particular router. There are also extended Access Control Lists, where we can filter on the

source and destination address. So if you have a list of people that you don't want making connections, you can tell that to your ACL, as Access Control Lists are called.

You can apply these both inbound and outbound, and filter on port number. For example, maybe you want to create a demilitarized zone, or DMZ, and you only want traffic

on the Web port where HTTP traffic goes, which is port 80. So this would be an example of using a port number to restrict traffic to a particular

part of the network. You can have permits and denies of specific protocols. There are also reflexive Access Control Lists; in other words, Access Control Lists that can change based on certain criteria.

They can also be time-based. Maybe you have a different set of rules during business hours

as opposed to after business hours.
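The first-match evaluation that routers apply to an ACL can be sketched in a few lines. The rules, addresses, and helper name below are invented for illustration; real router ACLs have a richer syntax, but the top-down, first-match, implicit-deny behavior is the same.

```python
from ipaddress import ip_address, ip_network

# Hypothetical extended ACL: checked top-down, first match wins, with an
# implicit "deny everything" at the end, as on most routers.
ACL = [
    ("permit", "tcp", "0.0.0.0/0",    "10.1.1.80/32", 80),   # web into the DMZ
    ("deny",   "tcp", "192.0.2.0/24", "0.0.0.0/0",    None), # blocked sources
    ("permit", "udp", "10.0.0.0/8",   "0.0.0.0/0",    53),   # internal DNS out
]

def acl_check(proto, src, dst, dport):
    for action, r_proto, r_src, r_dst, r_port in ACL:
        if (proto == r_proto
                and ip_address(src) in ip_network(r_src)
                and ip_address(dst) in ip_network(r_dst)
                and (r_port is None or dport == r_port)):
            return action
    return "deny"  # the implicit deny at the end of every ACL

print(acl_check("tcp", "198.51.100.9", "10.1.1.80", 80))  # permit
print(acl_check("tcp", "192.0.2.7",   "10.1.1.80", 443))  # deny
print(acl_check("udp", "10.2.3.4",    "8.8.8.8",   53))   # permit
```

Rule order matters: a blocked source still reaches the DMZ on port 80 here because the permit rule sits first, which is exactly why complex ACLs are easy to get wrong.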

Policy Enforcement Using Access Control Lists

Now we're going to look at policy enforcement using Access Control Lists.


We want the ability to stop and reroute traffic based on packet characteristics, based on the information that's flowing across the network.

We can do this with access control lists on incoming or outgoing interfaces. In other words, depending on if this is going to be your connection to the outside world, or to

an intranet, you can define where this control is going to be. You can do this together with NetFlow to provide high-speed enforcement on network access points.

NetFlow is basically a way of making information travel faster by identifying that a lot of different packets are going to have similar characteristics. You can also do violation logging. You can keep something called a Syslog file that will keep track of violations

to your Security Policy.

If you had an Access Control List that simply dropped packets that were unacceptable but without a way of logging that and telling you about it, then you may miss some alerts today to potentially more malicious behavior in the future. And

so it's very important to have logs that you review periodically. Let‘s take a look at firewalls next.

Importance of Firewalls

What is a firewall? Why do I want one? Firewalls are used to build trusted perimeters around information and services. Your

Internet security solution must be able to allow employees to access Internet resources, while keeping out unauthorized traffic. The most common way of

protecting the internal network is by using a firewall between the intranet and the Internet.

What Is a Firewall?

So what are the basic requirements of an Internet firewall? First, a firewall needs to be able to analyze all the traffic passing between the internal user community and

the external network. In this way it can ensure that only authorized traffic, as defined by the security policy, is permitted through. It can also ensure that content which

could be potentially harmful to the internal network is filtered out.


A firewall also needs to be designed to resist attacks, since once a hacker gains

control of the firewall, the internal network could be compromised. And finally, it should be able to hide the addresses of the internal network from the outside world, making the life of a potential hacker much more difficult.

Importantly, a firewall needs to support all these requirements and have the ability to support the constantly increasing Internet connection speeds and traffic loads, so

that it doesn‘t become a bottleneck.

Packet-Filtering Routers

There are a few different types of firewalls. Here‘s a little history.

The traditional approach was access routers, using access control lists to control network access: a low-cost, high-performance solution. It didn‘t need UNIX expertise and was transparent to the user, with no requirement for users to change their behavior or

configuration.

The issue, though, was that internal addresses were exposed to the Internet. If you were logging onto servers that were subject to attacks or snooping, someone could then

see the host addresses. This is often the first step to finding holes in the network: by finding out the host address, an attacker can then start attacking the host, leaving you

vulnerable. It is important to hide the addresses. In most cases, it was also possible to spoof in. Basically, spoofing means someone

represents themselves as a trusted host in the network, thus gaining free access to the network. ACLs are also tough to maintain if they‘re complex; thus it‘s easy to

make a mistake. This brought about the development of proxy servers, which brought about statefulness, which we‘ll discuss in more detail later.

Proxy Service

Proxy servers are also sometimes known as "bastion hosts". As its name suggests, this kind of firewall acts as a "proxy" for internal computers accessing the Internet.


To the outside world, it appears as if all sessions terminate at a single host, which is carefully configured for maximum security.

Proxy servers hide IP addresses, so they are not exposed to the outside world. Certain

proxy servers can also examine content, so they can limit what can or cannot be done, such as FTP gets, or go higher in the application and determine what you can or cannot do. They can also run other services (e.g., your mail services).

The problem is that you‘re buying a box dedicated to that, plus software, plus maintaining the operating system. You must follow CERT alerts and make changes

quickly; hackers can follow the same alerts and use those techniques to break in before you make changes. This requires a lot of administration and time spent monitoring such advisories, which is difficult in today‘s busy environment.

This was also a very intrusive method for users, since users had to tell applications they were using a firewall and go through two- or three-step logins to gain access; it was not at all

transparent to the user.

Stateful Sessions

Many firewalls talk about being stateful, but what does this mean, and why is it important? If you know what traffic to expect on your network, you can ensure that

that is the only traffic you get. For example, when Mary sends a web request to a homepage (www.e-tutes.com), a stateful firewall will remember this. When a page comes back from e-tutes.com to Mary, the firewall will expect it and let the traffic

pass.

Stateful filtering, or stateful network address translation, is a security scheme that

provides very high performance with a high degree of security. Stateful means it


allows the firewall to maintain session state connection flows, tracking the source and destination ports plus addresses, TCP sequence numbers, and additional TCP

flags.

Each time a TCP connection is established from an inside host accessing the Internet through the firewall, the information about the connection is logged in a stateful session flow table. The table contains the source and destination addresses, port

numbers, TCP sequencing information, and additional flags for each TCP connection associated with that particular host.

This information temporarily creates a connection block in the firewall. Inbound packets are compared against session flows in the connection table and are permitted

through only if they can be validated. The block is then terminated until the next packet is received.
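The session-flow table described above can be sketched as a small data structure. This is a hypothetical simplification: a real stateful firewall also tracks TCP sequence numbers, flags, and timeouts, as the text notes, but the core permit/deny logic looks like this.

```python
# Sketch of a stateful firewall's session table: outbound connections create
# an entry; inbound packets are permitted only when they match a recorded
# flow with the endpoints reversed, otherwise they are dropped.
class StatefulFirewall:
    def __init__(self):
        self.sessions = set()  # (src_ip, src_port, dst_ip, dst_port)

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        # e.g. Mary's request to www.e-tutes.com is logged in the flow table.
        self.sessions.add((src_ip, src_port, dst_ip, dst_port))
        return "permit"

    def inbound(self, src_ip, src_port, dst_ip, dst_port):
        # Return traffic must match a recorded flow, endpoints reversed.
        if (dst_ip, dst_port, src_ip, src_port) in self.sessions:
            return "permit"
        return "deny"  # unsolicited inbound packet

fw = StatefulFirewall()
fw.outbound("10.0.0.5", 51000, "198.51.100.2", 80)        # Mary -> web server
print(fw.inbound("198.51.100.2", 80, "10.0.0.5", 51000))  # permit: expected
print(fw.inbound("203.0.113.9", 80, "10.0.0.5", 51000))   # deny: no flow
```

This is the difference from a plain ACL: the rule base effectively changes as traffic flows, so return traffic is admitted without permanently opening the port to everyone.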

Performance Requirements

High performance in a firewall is critical. This is driven not only by your end user community, but by some of the applications people plan to use. Today‘s performance

is being driven by the new technologies.

For instance, some of the multimedia applications like video or audio over the

Internet require a high performance firewall. In the future, as new business applications continue to place increasing demands on

networks, performance of your security system will be a critical success factor.

Integrity—Privacy

Next let's look at some of the different privacy requirements people might have. The following are some of the different methodologies that are used to ensure privacy on the

network.

- VPNs: IPSec, IKE, encryption, DES, 3DES, digital certificates, CET, CEP


Encryption and Decryption

Encryption is the masking of secret or sensitive information such that only an authorized party may view (or decrypt) it.

Encryption and authentication controls can be implemented at several layers in your computing infrastructure.

Encryption can be performed at the application layer by specific applications at client workstations and serving hosts. This has the advantage of operating on a complete

end-to-end basis, but not all applications support encryption, and it is usually subject to being invoked by individual users, so it is not reliable from a network administrator‘s perspective.

Encryption can also be performed at the network layer by general networking devices

for specific protocols. This has the advantage of operating transparently between subnet boundaries and being reliably enforceable from a network administrator‘s perspective.

Finally, encryption can be performed at the link layer by specific encryption devices for a given media or interface type. This has the advantage of being protocol

independent, but has to be performed on a link-by-link basis. Institutions such as the military have been using link-level encryption for years. With

this scheme, every communications link is protected with a pair of encrypting devices, one on each end of the link. While this system provides excellent data protection, it is quite difficult to provision and manage. It also requires that each end

of every link in the network is secure, because the data is in clear text at these points. Of course, this scheme doesn‘t work at all in the Internet, where possibly

none of the intermediate links are accessible to you or trusted.

What Is IPSec?

IPSec provides network layer encryption. IPSec is a framework of open standards for ensuring secure private communications over the Internet. Based on standards developed by the IETF, IPSec ensures confidentiality, integrity, and authenticity of


data communications across a public network. IPSec provides a necessary component of a standards-based, flexible solution for deploying a network-wide

security policy.

Privacy, integrity and authenticity technologies protect information transfer across links with network encryption, digital certification, and device authentication. Some of the benefits that you get from these are privacy, integrity, and authenticity for

network commerce. Implemented transparently in the network infrastructure. In other words, you can just set it up at the router level or the level that makes sense

to you, and your users don't necessarily have to know that they're implementing IPSec.

You can just define that all of the transactions between my company and this company that happen between, say, ordering and manufacturing are going to go across IPSec and other traffic will not. It's an end-to-end security solution that's going to

incorporate routers, firewalls, PCs and servers.

IPSec Everywhere!

IPSec can be in any device with an IP stack, as shown in the picture. This is an important point, as customers can deploy IPSec where they are most comfortable:

On the gateway/router: much easier to install and manage, as you are only dealing with a limited set of devices. The network infrastructure provides the security. On the host/server: the best end-to-end security, but the hardest to install and manage.

Good for applications that really need this level of control.

IKE—Internet Key Exchange

IPSec assumes that a security association, or SA, is in place, but does not itself have a mechanism for creating that association. The IETF chose to break the process into

two parts: IPSec provides the packet-level processing, while IKE negotiates security associations. IKE is the mechanism IPSec uses to set up SAs.


IKE can be used for more than just IPSec; IPSec is simply its first application. It can also be used with S/MIME, SSL, etc.

IKE does several things:

- Negotiates its own policy. IKE has several methods it can use for authentication and encryption; it is very flexible. Part of this is to positively identify the other side of the connection.
- Once it has negotiated an IKE policy, it will perform an exchange of key material using authenticated Diffie-Hellman.
- After the IKE SA is established, it will negotiate the IPSec SA. It can derive the IPSec key material with a new Diffie-Hellman exchange or by a permutation of existing key material.

To summarize, IKE does these three things:

- Identification
- Negotiation of policy
- Exchange of key material

How IPSec Uses IKE

This is how IPSec and IKE work together.


Sam is trying to securely communicate with Alice. Alice sends her data toward Sam.

When Alice's router sees the packet, it checks its security policy and realizes that the packet should be encrypted. The pre-configured security policy also says that Sam's router will be the other endpoint of the IPSec tunnel. Alice's router looks to see if it has an existing IPSec SA with Sam's router. If not, it requests one from IKE. If the two routers already share an IKE SA, the IPSec SA can be generated quickly and immediately. If they do not share an IKE SA, one must first be created before the IPSec SAs can be negotiated. As part of this, the two routers exchange digital certificates. The certificates must have been signed beforehand by a certificate authority that both Sam's and Alice's routers trust. Once the IKE session is active, the two routers can negotiate the IPSec security association. After the IPSec SA is set up, both routers have agreed on an encryption algorithm (e.g., DES), an authentication algorithm (e.g., MD5), and a shared session key. Now Alice's router can encrypt Alice's IP packet, place it into a new IPSec packet, and send it to Sam's router. When Sam's router receives the IPSec packet, it looks up the IPSec SA, properly processes and unpacks the original datagram, and forwards it on to Sam.

While this sounds complicated, it all happens automatically and transparently to both Alice and Sam.
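That sequence of checks, policy lookup, then IPSec SA lookup, then on-demand IKE setup, can be sketched in a few lines. Everything here (the dictionaries, the peer names, the return strings) is hypothetical and only mirrors the order of steps described above:

```python
# Hypothetical sketch of the outbound checks an IPSec gateway performs.
def handle_outbound(packet, policy, ipsec_sas, ike_sas):
    peer = policy.get(packet["dst"])       # does policy say "encrypt toward a peer"?
    if peer is None:
        return "forward-in-clear"          # no policy match: send as-is
    if peer not in ipsec_sas:              # no IPSec SA yet: ask IKE for one
        if peer not in ike_sas:
            ike_sas[peer] = "ike-sa"       # certificate exchange + authenticated DH
        ipsec_sas[peer] = "ipsec-sa"       # agree on cipher, hash, session key
    return f"encrypt-and-tunnel-to-{peer}"

# Alice's router, first packet toward Sam: both SAs get created on demand.
result = handle_outbound({"dst": "sam"}, {"sam": "sam-router"}, {}, {})
```

On later packets both lookups hit, so the cost of IKE negotiation is paid only once per peer.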

Encryption—DES and 3DES

So the encryption we're utilizing here with IPSec, DES and Triple DES, is based on widely adopted standards.

They encrypt plain text, which then becomes cipher text. DES performs 16 rounds of encryption. Triple DES does a lot more than that: it runs the DES encryption again and again, three passes in all, until you wind up with 168-bit encryption. You can do this on the client, on the server, on the router, or on the firewall.

Now, obviously, when you're doing 168 bits' worth of encryption, you're going to introduce some latency, so you need to consider the performance implications when using Triple DES.

Breaking DES Keys

How secure is DES? The common way to break it is an exhaustive search: you simply try different keys until you find the one that works.

On a general-purpose computer, this could take literally hundreds of years to break 56-bit DES. Some people speculate, though it hasn't actually been done, that you could build a specialized computer for about a million dollars that could crack DES in roughly 35 minutes. So that possibility exists.

Now, there are a lot of smart people out there, though, and one of the things that

Page 168: Network Notes

those smart people did was say, "Hey, well, if it takes one computer a long time, maybe it would take less time for a lot of computers."

So they took a big network that had some Crays on it and a whole bunch of PCs, and instead of a screen saver they installed a little program that tried keys whenever the PC was idle. The insight was that the Internet is made up of lots of computers that can all work on the problem simultaneously.

In fact, the Electronic Frontier Foundation and distributed.net did just this: they cracked a 56-bit DES challenge in just 22 hours and 15 minutes. So if DES is not insecure today, it soon will be. This is why we need to start thinking about Triple DES.
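Those numbers imply an enormous search rate. A quick back-of-the-envelope check, assuming (as is conventional) that on average half the keyspace must be searched, gives a feel for it:

```python
# Rough arithmetic: average key-test rate implied by cracking 56-bit DES
# in 22 hours and 15 minutes, assuming half the keyspace is searched.
keyspace = 2 ** 56
seconds = 22 * 3600 + 15 * 60          # 80,100 seconds
avg_rate = (keyspace / 2) / seconds    # keys tested per second, on average
print(f"about {avg_rate:.1e} keys/second")  # on the order of 10**11
```

Adding a third DES pass raises the effective key to 168 bits, multiplying the search space by 2**112, which is far beyond any brute-force effort.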

Now, does this mean that 56-bit DES isn't safe enough for your client who runs a local hardware shop? It probably is safe enough. Again, you need to weigh your particular costs against how motivated someone is going to be to break into your particular data stream.

Active Audit

Why Active Audit?

Why is active audit necessary? Many companies rely on their perimeter security, but once the perimeter has been breached, most of the network and its systems are virtually unprotected.

First, hackers are quite likely to be employees, or may have breached the security perimeter through a business partner or a modem. Because they are considered 'trusted', they have already bypassed most network security, such as firewalls, encryption, and authentication. Note: the company network is usually considered the 'trusted' network while the Internet is 'untrusted'. However, with up to 80% of security breaches occurring in the 'trusted' network, companies may want to rethink their strategies for protecting systems and data.

Second, the defense may be ineffective. Aging, mismanaged security is no match for today's hacker, who is constantly improving his techniques.

Third, most security breaks down due to human error. People make mistakes when programming firewalls, they open services to the network and forget to turn them off, they are not diligent about changing passwords, they add modems and forget about them -- the list goes on and on.

Fourth, the network is always growing and changing. Every change is a new opportunity for the patient hacker, who may spend months or even years waiting for an opening. Firewalls, authorization, and encryption provide policy enforcement, but they do not monitor behavior. And with hacking, it is the behavior that is the problem.

Page 169: Network Notes

These problems can be alleviated by creating a security process that includes visibility into the network.

Network security is often viewed in terms of point security technologies, such as firewalls, authentication and authorization, and encryption. While very necessary to a network defense, they do not have the capability to analyze and discover two items essential for network security:

1) User behaviors -- are your employees, business partners, and anyone else misusing the network?
2) System vulnerabilities -- if a 'bad guy' gets into your network, have your systems been secured to lock him out?

This is where a strong firewall gives a false sense of security. You must consider what would happen if your firewall is compromised.

The most effective security strategy for your network defense is 'defense in depth', or 'layered defense'. This means augmenting your point solutions with dynamic systems that monitor users as they use the network and measure network resources for changes and vulnerabilities. These technologies should be used to help secure the network perimeter as well as the intranet.

Often organizations have a tactical approach to network security and do not treat it with the same importance as network operations. However, more companies today

are taking a strategic approach to network security and treating it as part of the network operation. This includes development of processes that constantly measure,

monitor and improve the security posture.

Active Audit—Network Vulnerability Assessment

Active Audit is the systematic implementation of the security policy: actively auditing, verifying, detecting intrusions and anomalies, and reporting the findings. For true security policy management enterprise-wide, Active Audit capability must be in place and applicable to all access ports, devices, and media.

Proactive network auditing tools provide preventive maintenance by detecting security weak points before they can be exploited by intruders.

Active Audit—Intrusion Detection System

Intrusion detection tools recognize when the security of the network is in jeopardy. Intrusion detection provides the burglar alarms that notify you in real-time when break-in attempts are detected.

For example, you want to be able to see that a bunch of port scans are happening on your system, originating from some IP address. That is somebody who

Page 170: Network Notes

could potentially be doing bad things to your network.

You want to be able to watch suspect behavior. You also want to be able to watch things like: is that person in data entry going back into the data warehouse? Are they going into our accounting system?

An IDS architecture consists of several parts. There is an IDS engine, something analogous to a sniffer, that watches the line looking for violations of policy. There is a security management system, the place where you give the instructions about what adheres to your security policy and what doesn't. And there is real-time alarm notification, some way to tell the people within the organization: hey, this is what's going on in your network. Something bad is about to happen, or something bad is happening. It's time to take action.
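A minimal sketch of those three pieces, an engine, a rule set standing in for the security management system, and alarm notifications, might look like this. The rule names and packet fields are hypothetical, chosen only to mirror the examples in this lesson:

```python
# Toy IDS: the "engine" checks each observed packet against policy rules
# (the "management system") and emits real-time alarms.
RULES = [
    ("oversized-ping", lambda p: p.get("proto") == "icmp"
                                 and p.get("size", 0) > 65535),
    ("unauthorized-access", lambda p: p.get("src_role") == "data-entry"
                                      and p.get("dst") == "accounting"),
]

def ids_engine(packets):
    alarms = []
    for p in packets:
        for name, matches in RULES:
            if matches(p):
                alarms.append((name, p["src"]))  # what happened, and from whom
    return alarms

traffic = [
    {"src": "10.0.0.5", "proto": "icmp", "size": 70000},
    {"src": "10.0.0.8", "src_role": "data-entry", "dst": "accounting"},
    {"src": "10.0.0.9", "proto": "tcp", "size": 1500},
]
alarms = ids_engine(traffic)   # two alarms; the normal TCP packet passes
```

Real IDS engines work on live traffic and reassembled streams, but the shape, observe, match against policy, notify, is the same.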

IDS Attack Detection

Some of the different kinds of things that an Intrusion Detection System, or IDS, might detect come from looking at the context of the data: looking for attacks on your network, such as denial of service.

For example, a Ping of Death has the following parameters: it is a ping, but with a super-large packet size. So you can watch for that kind of traffic and take appropriate action against it.

Then there are things like port sweeps. Other than testing your own network, I can think of no reason to do a port sweep except trying to find ways to break into a system.

SYN attacks and TCP hijacking fall into the same category: there would be no reason to do those other than malicious activity on your network, so you want to be able to watch for those.

For the content itself, you want to be able to look at DNS attacks. Internet Explorer

Page 171: Network Notes

attacks would be another example of a content attack. And you want to do composite scans: you want to look for telnet attacks and character-mode attacks. These are all the kinds of things we can be looking for on the network.

Active Audit

Authentication and authorization occur on the front end. Equally important is the "back-end" side of security. Accounting is the systematic and dynamic verification that the security policy as defined is properly implemented. It provides assurance that the security policy is consistent and operating correctly.

Accounting enables customers to detect intrusions, network anomalies, misuse, and attacks. It also includes reporting the findings of the audit process.

Accounting should be handled by a system that is totally separate from the installed network security solutions. Currently, there aren't many tools available for active audit, which explains why many companies hire outside auditors to check their security implementations.

For true security policy management on a company-wide basis, accounting capabilities must be in place and applicable to all access ports, devices, and media.

- SUMMARY -

- Security is a mission-critical business requirement for all networks
- Security requires a global, corporate-wide policy
- Security requires a multilayered implementation

Page 172: Network Notes

VPNs are a common topic today. Just about everyone is talking about implementing one. This module explains what a VPN is and covers the basic VPN technology. We'll also go through some examples of VPNs, including a return-on-investment analysis.

The Agenda

- What Are VPNs?

- VPN Technologies
- Access, Intranet, and Extranet VPNs

- VPN Examples

What Are VPNs?

Simply defined, a VPN is an enterprise network deployed on a shared infrastructure

employing the same security, management, and throughput policies applied in a private network.

A VPN can be built on the Internet or on a service provider's IP, Frame Relay, or ATM infrastructure. Businesses that run their intranets over a VPN service enjoy the same security, QoS, reliability, and scalability as they do in their own private networks. VPNs based on IP can naturally extend the ubiquitous nature of intranets over wide-area links, to remote offices, mobile users, and telecommuters. Further, they can support extranets linking business partners, customers, and suppliers to provide better customer satisfaction and reduced manufacturing costs. Alternatively, VPNs can connect communities of interest, providing a secure forum for common topics of discussion.

Virtual Private Networks

Building a virtual private network means you use the "public" Internet (or a service provider's network) as your "private" wide-area network.

Page 173: Network Notes

Since it's generally much less expensive to connect to the Internet than to lease your own data circuits, a VPN may allow you to connect remote offices or employees who wouldn't ordinarily justify the cost of a regular WAN connection.

VPNs may also be useful for conducting secure transactions, or for transferring highly confidential data between offices that have a WAN connection.

Some of the technologies that make VPNs possible are:

- Tunneling

- Encryption
- QoS
- Comprehensive security

Why Build a VPN?

Why should customers consider a VPN?

- Company information is secured
- VPNs allow vital company information to be secured against unwanted intrusion
- Reduced costs
- Internet-based VPNs offer low-cost connectivity from anywhere in the world, and can be considered a viable replacement for leased-line or Frame Relay services. Using the Internet as a replacement for expensive WAN services can cut costs by as much as 60 percent, according to Forrester Research.
- VPNs also lower remote-access costs by connecting mobile users over the Internet (often referred to as virtual private dial-up networking, or VPDN)

- Wider connectivity options for users

- A VPN can provide more connectivity options (for example, over cable, DSL,

telephone, or Ethernet)

Page 174: Network Notes

- Increased speed of deployment

- Extranets can be created more easily (you don't wait for suppliers). This keeps the customer in control of their own destiny.

However, for an Internet-based VPN to be considered a viable replacement for leased-line or Frame Relay service, it must be able to offer a comparable level of security, quality of service, and reliability.

What’s Driving VPN Offerings?

The strain on today's corporate networks is greater than ever before. Network managers must continually find ways to connect geographically dispersed work

groups in an efficient, cost-effective manner. Increasing demands from feature-rich applications used by a widely dispersed workforce are causing businesses of all sizes to rethink their networking strategies. As companies expand their networks to link

up with partners, and as the number of telecommuters and remote users continues to grow, building a distributed enterprise becomes ever more challenging. To meet this challenge, VPNs have emerged, enabling organizations to outsource

network resources on a shared infrastructure. Access VPNs in particular appeal to a highly mobile work force, enabling users to connect to the corporate network

whenever, wherever, or however they require.

Networked Applications

The traditional drivers of network deployment are also driving the deployment of VPNs.

New networked applications, such as videoconferencing, distance learning, advanced publishing, and voice applications, offer businesses the promise of improved

productivity and reduced costs. As these networked applications become more prevalent, businesses are increasingly looking for intelligent services that go beyond transport to optimize the security, quality of service, management and

scalability/reliability of applications end to end.

Example of a VPN

Page 175: Network Notes

This is what a VPN might look like for a company with offices in Munich, New York, Paris, and Milan.

VPN Technologies

Let's take a look at some of the technologies that are integral to virtual private networks.

VPN Technology Building Blocks

Business-ready VPNs rely on both security and QoS technologies. Let's take a look at both of these in more detail.

Security

Deploying WANs on a shared network makes security issues paramount. Enterprises

need to be assured that their VPNs are secure from perpetrators observing or tampering with confidential data passing over the network and from unauthorized users gaining access to network resources and proprietary information. Encryption,

authentication, and access control guard against these security breaches.

Key components of VPN security are as follows:

- Tunnels and encryption
- Packet authentication
- Firewalls and intrusion detection
- User authentication

These mechanisms complement each other, providing security at different points throughout the network. VPN solutions must offer each of these security features to be considered viable for use over a public network infrastructure.

Let's start by looking at tunnels and encryption. We're going to look in detail at Layer 2 Tunneling Protocol (L2TP) and Generic Routing Encapsulation (GRE) for tunnel support, as well as the strongest standard encryption technologies available: IPSec, DES, and 3DES.

Page 176: Network Notes

Tunneling: L2F/L2TP

Layer 2 Forwarding (L2F) enables remote clients to gain access to corporate networks through existing public infrastructures, while retaining control of security and

manageability. Cisco has submitted this new technology to the IETF for approval as a standard. It supports scalability and reliability features as discussed in later sections

of this document. L2F achieves private network access through a public system by building a secure

"tunnel" across a public infrastructure to connect directly to a home gateway. The service requires only local dialup capability, reducing user costs and providing the same level of security found in private networks.

Using L2F tunneling, service providers can create a virtual tunnel to link customer remote sites or remote users with corporate home networks. In particular, a network

access server at the POP exchanges PPP messages with the remote users and communicates by L2F requests and responses with the customer's home gateway to

set up tunnels. L2F passes protocol-level packets through the virtual tunnel between endpoints of a point-to-point connection.

Frames from remote users are accepted by the service provider POP, stripped of any link framing or transparency bytes, encapsulated in L2F, and forwarded over the appropriate tunnel. The customer's home gateway accepts these L2F frames, strips the L2F encapsulation, and processes the incoming frames for the appropriate interface.

Layer 2 Tunneling Protocol (L2TP) is an extension to PPP. It is a draft IETF standard derived from Cisco L2F and Microsoft Point-to-Point Tunneling Protocol (PPTP). L2TP delivers a full range of security control and policy management features, including

end-user security policy control. Business customers have ultimate control over permitting and denying users, services, or applications.

Page 177: Network Notes

Tunneling: Generic Routing Encapsulation (GRE)

GRE, or Generic Routing Encapsulation, is the standard solution for Service Providers that have an established IP network and want to provide managed IP VPN

services.

One of the most significant advantages of this approach is that Service Providers can offer application-level QoS. This is possible because the routers still have visibility into the IP header information needed for fine-grained QoS (this is hidden in an IPSec packet). Traffic is restricted to a single provider's network, allowing end-to-end QoS control.

This restriction to "on-net only" traffic also allows the GRE tunnels to remain secure without using encryption. Customers who require greater levels of security can still use "on-demand" application-level encryption, such as secure connections in a web browser. The entire connection may be encrypted, but at the cost of QoS granularity.

In summary, GRE offers:

- Encryption-optional tunneling
- Fine-grained QoS service capabilities, including application-level QoS
- IP-level visibility, which makes this the platform of choice for building value-added services such as application-level bandwidth management
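Part of why GRE is so lightweight is that its base header (RFC 2784) is only four bytes: 16 bits of flags/version followed by a 16-bit protocol-type field. A minimal encapsulation sketch (the payload below is a stand-in, not a real IP datagram):

```python
import struct

# Minimal GRE (RFC 2784) encapsulation: flags/version are all zero when no
# checksum is present; the protocol type uses EtherType values.
def gre_encapsulate(payload: bytes, proto: int = 0x0800) -> bytes:
    header = struct.pack("!HH", 0x0000, proto)  # 0x0800 = IPv4 payload
    return header + payload

frame = gre_encapsulate(b"inner-ip-datagram")   # placeholder payload
```

The outer delivery IP header would then carry this GRE packet between the tunnel endpoints, leaving the inner header visible to the routers for QoS classification.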

What Is IPSec?

IPSec provides IP network-layer encryption.

IPSec is a standards-based technology that governs security management in IP

environments. Originally conceived to solve scalable security issues in the Internet, IPSec establishes a standard that lets hardware and software products from many

Page 178: Network Notes

vendors interoperate more smoothly to create end-to-end security. IPSec provides a standard way to exchange public cryptography keys, to specify an encryption method (e.g., the Data Encryption Standard (DES) or RC4), and to specify which parts of packet headers are encrypted.

What is Internet Key Exchange (IKE)?

IPSec assumes that a security association is in place, but it does not have a mechanism for creating that association. The IETF chose to break the process into two parts: IPSec provides the packet-level processing, while IKE negotiates security associations. IKE is the mechanism IPSec uses to set up SAs.

IKE can be used for more than just IPSec; IPSec is simply its first application. It can also be used with S/MIME, SSL, and other protocols.

IKE does several things:

- It negotiates its own policy. IKE has several methods it can use for authentication and encryption, so it is very flexible. Part of this is positively identifying the other side of the connection.
- Once it has negotiated an IKE policy, it performs an exchange of key material using authenticated Diffie-Hellman.
- After the IKE SA is established, it negotiates the IPSec SA. It can derive the IPSec key material with a new Diffie-Hellman exchange or by a permutation of existing key material.

In summary, IKE does these three things:

- Identification
- Negotiation of policy
- Exchange of key material

IPSec VPN Client Operation

Now that you understand both IPSec and IKE, let's look at what really happens from the client's perspective.

An IPSec client is a software component that allows a desktop user to create an IPSec tunnel to a remote site. IPSec provides privacy, integrity, and authenticity for VPN

Page 179: Network Notes

client operations. With IPSec, no one can see what data you are sending and no one can change it.

Data entered by a remote user dialing in via the public Internet is encrypted all the way to corporate headquarters, from the IPSec client to a router at the home gateway.

Here's how it works.

First, the remote user dials into the corporate network. The client uses either an X.509 certificate or a one-time password with a AAA server to negotiate an Internet Key Exchange. Only after it is authenticated is a secure tunnel created. Then all data is encrypted.

IPSec is transparent to the network infrastructure and is scalable from very small applications to very large networks. As you can see, this is an ideal way to connect remote users or telecommuters to corporate networks in a safe and secure environment.

L2TP and IPSec Are Complementary

Another thing people often get confused about is the relationship between L2TP and IPSec. Remember that L2TP is the Layer 2 Tunneling Protocol. Some people think the two technologies are exclusive of each other. In fact, they are complementary.

So you can use both of these together. IPSec can create remote tunnels. L2TP can

Page 180: Network Notes

provide tunnel and end-to-end authentication. So IPSec maintains the encryption, but oftentimes you want to tunnel non-IP traffic in addition to IP traffic. L2TP can be useful for that.

Encryption: DES and 3DES

DES stands for Data Encryption Standard. It is a widely adopted standard created to

protect unclassified computer data and communications. DES has been incorporated into numerous industry and international standards since its approval in the late 1970s.

DES and 3DES are strong forms of encryption that allow sensitive information to be transmitted over untrusted networks. They enable customers to utilize network-layer encryption.

The encryption algorithm specified by DES is a symmetric, secret-key algorithm: it uses one key to encrypt and decrypt messages, on which both the sending and receiving parties must agree before communicating. It uses a 56-bit key, which means an attacker must correctly guess all 56 binary digits, or bits, to reproduce the key and decode information encrypted with DES.

DES is extremely secure; however, it has been cracked on several occasions by chaining hundreds of computers together, and even then it took a very long time to break. This led to the development of Triple DES, which uses a 168-bit key.
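A side note on why the key is 56 bits even though DES keys are written as 8 bytes: the low-order bit of each byte is an odd-parity check, not key material. A small sketch of setting those parity bits:

```python
# A DES key occupies 64 bits on the wire, but the low bit of each of its
# 8 bytes is an odd-parity bit, leaving 56 bits of actual key material.
def set_odd_parity(key: bytes) -> bytes:
    out = bytearray()
    for byte in key:
        key_bits = byte & 0xFE                 # the 7 real key bits
        ones = bin(key_bits).count("1")
        out.append(key_bits | (ones + 1) % 2)  # force an odd total bit count
    return bytes(out)

key = set_odd_parity(bytes(range(8)))  # every byte now has odd parity
```

The parity bits let implementations detect a corrupted key before using it, but contribute nothing to the cipher's strength.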

Firewalls

A critical part of an overall security solution is a network firewall, which monitors traffic crossing network perimeters and imposes restrictions according to security

policy. In a VPN application, firewalls protect enterprise networks from unauthorized access to computing resources and network attacks, such as denial of service.

Furthermore, for authorized traffic, a VPN firewall verifies the source of the traffic and prescribes what access privileges users are permitted.

Page 181: Network Notes

User Authentication

A key component of VPN security is making sure authorized users gain access to the enterprise computing resources they need, while unauthorized users are shut out of the network entirely. AAA services (authentication, authorization, and accounting) provide the foundation to authenticate users, determine access levels, and archive all the necessary audit and accounting data. Such capabilities are paramount in the dial-access and extranet applications of VPNs.

VPNs and Quality of Service

So how does QoS play a role in VPNs? Well, the goal of QoS is to control the utilization of bandwidth so that you can support mission-critical applications. Here's how it works. The customer premises equipment, or CPE, assigns packet priority based on the network policy. Packets are marked and bandwidth is managed so that the VPN WAN links don't choke out the important traffic.

One example could be an employee watching television over the Internet on his PC, where the video traffic clogs a small 56K WAN line, making it impossible for mission-critical financial application data to pass.

With QoS, you can take advantage of the service provider's differentiated services to maximize network resources and minimize congestion at peak times. For example, e-mail traffic doesn't care about latency, but video and mission-critical

Page 182: Network Notes

applications do. Some components of bandwidth management/QoS that apply to VPNs are as follows:

- Packet classification---assigns packet priority based on enterprise network policy
- Committed access rate (CAR)---provides policing and manages bandwidth based on applications and/or users according to enterprise network policy
- Weighted Random Early Detection (WRED)---complements TCP in predicting and managing network congestion on the VPN backbone, ensuring predictable throughput rates

These QoS features complement each other, working together in different parts of the VPN to create a comprehensive bandwidth management solution. Bandwidth management solutions must be applied at multiple points on the VPN to be effective; single-point solutions cannot ensure predictable performance.
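CAR-style policing is, at heart, a token bucket: tokens refill at the committed rate up to a burst size, and a packet is forwarded only if enough tokens remain to "pay" for it. A minimal sketch (the rates and sizes are illustrative, not from any real configuration):

```python
# Token-bucket policer: conforming traffic is forwarded; excess traffic
# is dropped (a real CAR policy could instead re-mark its priority).
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def allow(self, now, size_bytes):
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True                  # conforming: forward
        return False                     # exceeding: drop or re-mark

tb = TokenBucket(rate_bps=8000, burst_bytes=1000)  # 1000 B/s, 1000-byte burst
```

Allowing a burst up to the bucket's capacity is what lets short traffic spikes through while still holding the long-term average to the committed rate.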

Access, Intranet, and Extranet VPNs

Let's look now at the three types of VPNs.

Three Types of VPNs

As previously stated, a VPN is defined as customer connectivity deployed on a shared infrastructure with the same policies as a private network. The shared infrastructure can leverage a service provider's IP, Frame Relay, or ATM backbone, or the Internet. Cisco defines three types of virtual private networks according to how businesses and organizations use VPNs:

Page 183: Network Notes

Access VPNs provide remote connectivity to telecommuters and mobile users. They're typically an alternative to dedicated dial or ISDN connections. They offer users a range of connectivity options as well as a much lower-cost solution.

Intranet VPNs link corporate headquarters, remote offices, and branch offices over a shared infrastructure using dedicated connections. The VPN is typically an alternative to a leased line. It provides the benefit of extended connectivity and lower cost.

Extranet VPNs link customers, suppliers, partners, or communities of interest to a corporate intranet over a shared infrastructure using dedicated connections. In this case, the VPN is often an alternative to fax, snail mail, or EDI. The extranet VPN facilitates e-commerce.

Let's look at the Access VPN in more detail.

Access VPNs

Remote access VPNs extend the corporate network to telecommuters, mobile workers,

and remote offices with minimal WAN traffic. They enable users to connect to their corporate intranets or extranets whenever, wherever, or however they require. Remote access VPNs provide connectivity to a corporate intranet or extranet over a

shared infrastructure with the same policies as a private network. Access methods are flexible: asynchronous dial, ISDN, DSL, mobile IP, and cable technologies are supported.

Migrating from privately managed dial networks to remote access

VPNs offers several advantages, most notably:

- Reduced capital costs associated with modem and terminal server equipment

Page 184: Network Notes

- Ability to utilize local dial-in numbers instead of long distance or 800 numbers,

thus significantly reducing long distance telecommunications costs

- Greater scalability and ease of deployment for new users added to the network
- Restored focus on core corporate business objectives instead of managing and

retaining staff to operate the dial network

Access VPN Operation Overview

In an Access VPN environment, the most important aspect of security revolves around identifying a user as a member of an approved customer company and

establishing a tunnel to its home gateway, which handles per-user authentication, authorization, and accounting (AAA).

User authentication is a critical characteristic of an Access VPN. Through a local point of presence (POP), a client establishes communication with the service provider network (1), and secondarily establishes a connection with the customer network (2).

The Access VPN tunnel end points authenticate each other (3). Next, the user connects to the customer premises equipment (CPE) home gateway

server (local network server) using PPP or SLIP (4) and is authenticated through a username/password handling protocol such as PAP, CHAP, or TACACS+. The home gateway maintains a relationship with an access control server (ACS), also

known as an AAA server, using TACACS+ or RADIUS protocols. At this point, authorization is set up using the policies stored in the ACS and communicated to the home gateway at the customer premises (5).

Often, the customer administers the ACS server, providing ultimate and centralized control of who can access its network as well as which servers can be accessed. User profiles define what the user can do on the network. Using authorization profiles, the

Page 185: Network Notes

network creates a "virtual interface" for each user. Access policies are enforced using Cisco IOS software specific to each interface.

Access VPN Basic Components

An access VPN has two basic components:

L2TP Network Server (LNS): A device such as a Cisco router located at the customer premises. Remote dial users access the home LAN as if they were dialed into the home gateway directly, although their physical dialup is via the ISP network access server. Home gateway is the Cisco term for LNS. An LNS operates on any platform capable of PPP termination, and it handles the server side of the L2TP protocol. Because L2TP relies only on the single medium over which L2TP tunnels arrive, the LNS may have only a single LAN or WAN interface, yet still be able to terminate calls arriving at any LAC's full range of PPP interfaces (async, synchronous, ISDN, V.120, and so on). The LNS is the initiator of outgoing calls and the receiver of incoming calls. The LNS is known as the HGW in L2F terminology.

L2TP Access Concentrator (LAC): A device such as a Cisco access server attached to the switched network fabric (for example, PSTN or ISDN) or colocated with a PPP end system capable of handling the L2TP protocol. A LAC needs to implement only the media over which L2TP is to operate in order to pass traffic to one or more LNSs. It may tunnel any protocol carried within PPP. The LAC is the initiator of incoming calls and the receiver of outgoing calls. The LAC is known as the NAS in L2F terminology.

Client-Initiated Access VPN

Page 186: Network Notes

There are two types of Access VPNs. Essentially they are dedicated or dial.

With a dedicated or client-initiated Access VPNs, users establish an encrypted IP tunnel from their clients across a service provider's shared network to their corporate

network. With a client-initiated architecture, businesses manage the client software tasked with initiating the tunnel. Client-initiated VPNs ensure end-to-end security from the

client to the host. This is ideal for banking applications and other sensitive business transactions over the Internet. With client-initiated VPN access, the end user has IPSec client software installed at

the remote site, which can terminate at a firewall at the edge of the corporate network. IPSec, IKE, and a certificate authority are used to generate the encryption,

authentication, and certificate keys needed to ensure a totally secure VPN solution.

Client-Initiated VPNs

An advantage of a client-initiated model is that the "last mile" service provider access network used for dialing to the point of presence (POP) is secured. An additional consideration in the client-initiated model is whether to utilize operating system

embedded security software or a more secure supplemental security software package. While supplemental security software installed on the client offers more

robust security, a drawback to this approach is that it entails installing and maintaining tunneling/encryption software on each client accessing the remote access VPN, potentially making it more difficult to scale.

NAS-Initiated Access VPN

In a NAS-initiated scenario, client software issues are eliminated. A remote user dials into a service provider's POP using a PPP/SLIP connection, is authenticated by the service provider, and, in turn, initiates a secure, encrypted tunnel to the corporate

network from the POP using L2TP or L2F. With a NAS-initiated architecture, all VPN intelligence resides in the service provider network; there is no end-user client software for the corporation to maintain, thus eliminating client management

burdens associated with remote access. The drawback, however, is lack of security on the local access dial network connecting the client to the service provider network.

In a remote access VPN implementation, these security/management trade-offs must be balanced.

Page 187: Network Notes

NAS-Initiated VPNs

Pros:

- NAS-initiated Access VPNs require no specialized client software, allowing greater flexibility for companies to choose the access software that best fits their requirements.
- NAS solutions use robust tunneling protocols such as Cisco L2F or L2TP. In this model, IPSec provides encryption only, in contrast with the client-initiated model where IPSec enables both tunneling and encryption.
- Premium service examples include reserved modem ports, guarantees of modem availability, and priority data transport.
- The NAS can simultaneously be used for Internet as well as VPN access.
- All traffic to a given destination travels over a single tunnel from a NAS, making larger deployments more scalable and manageable.

Con: NAS-initiated Access VPN connections are restricted to POPs that can support VPNs.

The Intranet VPN

Intranet VPNs: Link corporate headquarters, remote offices, and branch offices over a shared infrastructure using dedicated connections. Businesses enjoy the same

policies as a private network, including security, quality of service (QoS), manageability, and reliability.

The benefits of an intranet VPN are as follows:

- Reduced WAN bandwidth costs
- Easy connection of new sites

- Increased network uptime by enabling WAN link redundancy across service providers

Page 188: Network Notes

Building an intranet VPN using the Internet is the most cost-effective means of implementing VPN technology. Service levels, however, are generally not guaranteed

on the Internet. When implementing an intranet VPN, corporations need to assess which trade-offs they are willing to make between guaranteed service levels, network

ubiquity, and transport cost. Enterprises requiring guaranteed throughput levels should consider deploying their VPNs over a service provider's end-to-end IP network, or, potentially, Frame Relay or ATM.

The Extranet VPN

Extending connectivity to corporate partners and suppliers is expensive and burdensome in a private network environment. Expensive dedicated connections

must be extended to the partner, management and network access policies must be negotiated and maintained, and often compatible equipment must be installed on the partner's site. When dial access is employed, the situation is equally complicated

because separate dial domains must be established and managed. Due to the complexity, many corporations do not extend connectivity to their partners, resulting in complicated business procedures and reduced effectiveness of their business

relationships.

One of the primary benefits of a VPN WAN architecture is the ease of extranet deployment and management. Extranet connectivity is deployed using the same architecture and protocols utilized in implementing intranet and remote access VPNs.

The primary difference is the access permission extranet users are granted once connected to their partner's network.

Intranet and Extranet VPNs

Intranet and extranet VPN services based on IPSec, GRE, and mobile IP create secure

tunnels across an IP network. These technologies leverage industry standards to establish secure, point-to-point connections in a mesh topology that is overlaid on the service provider's IP network or the Internet. They also offer the option to

Page 189: Network Notes

prioritize applications. An IPSec architecture, however, includes the IETF proposed standard for IP-based encryption and enables encrypted tunnels from the access

point to and across the intranet or extranet.

An alternative approach to intranet and extranet VPNs is to establish virtual circuits across an ATM or Frame Relay backbone. With this architecture, privacy is accomplished with permanent virtual circuits (PVCs) instead of tunnels. Encryption

is available for additional security as an optional feature, but more commonly, it is applied as needed by individual applications. Virtual circuit architectures provide prioritization through quality of service for ATM and committed information rate for

Frame Relay.

Finally, in addition to IP tunnels and virtual circuits, intranet and extranet VPNs can be deployed with a Tag Switching/MPLS architecture. Tag Switching is a switching mechanism created by Cisco Systems and introduced to the IETF under the name

MPLS. MPLS has been adopted as an industry standard for converging IP and ATM technologies.

A VPN built with Tag Switching/MPLS affords broad scalability and flexibility across any backbone choice whether IP, ATM, or multivendor. With Tag Switching/MPLS,

packets are forwarded based on a VPN-based address that is analogous to mail forwarded with a postal office zip code. This VPN identifier in the packet header isolates traffic to a specific VPN. Tag Switching/MPLS solves peer adjacency

scalability issues that occur with large virtual circuit topologies. It also offers granularity to the application for priority and bandwidth management, and it

facilitates incremental multiservice offerings such as Internet telephony, Internet fax, and videoconferencing.

Comparing the Types

Access VPNs are differentiated from intranet and extranet VPNs primarily by the

connectivity method into the network. While an access VPN refers to dialup (or part-time) connectivity, an intranet or extranet VPN may contain both dialup and dedicated links.

The distinction between intranet and extranet VPNs is essentially in the users that will be connecting to the network and the security restrictions that each will be

subject to.

Page 190: Network Notes

VPN Examples

Let's look at some real examples of VPNs.

Health Care Company Intranet Deployment

Here we have a health care company that's deploying an intranet.

Well, why would they care so much about security? Your health records are something that you want to be secure. This is information that you don't want unauthorized personnel to have access to.

So you can see in the figure, the company has a number of remote centers.

In this case, these are "doc-in-the-box" clinics, those little new medical clinics that are springing up. Those are relayed back over a primary network to the primary hospital with which these different medical centers are associated.

So a lot of the more sophisticated databases, etc., can reside back at the hospital, and the clinics can share the Internet and, with confidence, share medical data that they don't want published to the outside world.

Branch Office or Telecommuters

Another example would be branch offices or perhaps telecommuters.

Page 191: Network Notes

So the challenge is getting a cost-effective means to connect those small offices that maybe can't afford a leased line, or for which a leased line wouldn't be appropriate. And so

with IPSec, you can encrypt the traffic from the remote sites to the enterprise.

It doesn't matter what applications the users are using. This isn't just encrypting mail or just encrypting the database or something like that.

You can encrypt all traffic if you want to, and you can configure which traffic to encrypt right in the router or in the client.

So using this, telecommuters can have full access safely to the corporation.

Traditional Dialup Versus Access VPN

To illustrate the savings an Access VPN can provide, compare the cost of

implementing one with that of supporting a dial-up remote access application. Suppose a small manufacturing firm must support 20 mobile users dialing into the corporate network to access the company database and e-mail for approximately 90

minutes per day (per user).

In the traditional dial-up model, the 20 mobile workers use a modem to dial long distance directly into their corporate remote access server. Most of the cost in this scenario comes from the monthly toll charges and the time and effort required to

manage modem pools (access servers), both of which accrue on an ongoing basis over the life of the application.

By using an access VPN, the manufacturing firm's monthly toll charges can be significantly reduced. The mobile users will dial into a service provider's local point of

presence (POP) and initiate a tunnel back to the corporate headquarters over the Internet. Instead of paying long distance/800 toll charges, users pay only the cost equivalent to making a local call to the ISP. The initial investment in equipment and

installation of an access VPN may be recaptured quickly by the savings in monthly toll charges.
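To make the comparison concrete, here is a minimal sketch of the payback arithmetic for the 20-user scenario above. All rates and the equipment cost are illustrative assumptions, not quoted figures.

```python
# Illustrative payback model for the 20-user scenario (all rates and the
# equipment cost below are assumptions for the sketch, not quoted figures).
USERS = 20
MINUTES_PER_DAY = 90
WORKDAYS_PER_MONTH = 22

LONG_DISTANCE_RATE = 0.10   # assumed $/minute for direct long-distance dial-in
LOCAL_ISP_FLAT_FEE = 20.00  # assumed $/user/month for local ISP access
VPN_EQUIPMENT_COST = 10000  # assumed one-time capital investment

# Monthly cost of the traditional dial-up model: every minute is a toll call.
monthly_toll = USERS * MINUTES_PER_DAY * WORKDAYS_PER_MONTH * LONG_DISTANCE_RATE

# Monthly cost of the access VPN: users make only local calls to the ISP POP.
monthly_vpn = USERS * LOCAL_ISP_FLAT_FEE

monthly_savings = monthly_toll - monthly_vpn
payback_months = VPN_EQUIPMENT_COST / monthly_savings

print(f"Dial-up tolls:  ${monthly_toll:,.2f}/month")
print(f"Access VPN:     ${monthly_vpn:,.2f}/month")
print(f"Payback period: {payback_months:.1f} months")
```

With these assumed rates, the one-time investment pays for itself in roughly three months, after which the toll savings recur every month.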

Page 192: Network Notes

How long will it take the manufacturing firm to recoup the initial capital

investment and begin realizing recurring monthly savings?

VPN Payback

This chart shows us the return on investment. You can see that the payback is right about three months.

So you can see that VPNs save money in the long run.

- Summary -

- VPNs reduce costs

- VPNs improve connectivity
- VPNs maintain security

- VPNs offer flexibility

- VPNs are reliable

Lower cost: VPNs save money because they use the Internet, not costly leased lines,

to transmit information to and from authorized users. Prior to VPNs, many

companies with remote offices communicated through wide area networks (WANs), or by having remote workers make long-distance calls to connect to the main-office server. Both can be expensive propositions. WANs require establishing dedicated and

inflexible leased lines between various business locations, which can be costly or impractical for smaller offices.

Page 193: Network Notes

Improved communications: A VPN provides a robust level of connectivity

comparable to a WAN. With increased geographic coverage, remote offices, mobile

employees, clients, vendors, telecommuters, and even international business partners can use a VPN to access information on a company's network. This level of

interconnectivity allows for a more effective flow of information between a large number of people. The VPN provides access to both extranets and wide-area intranets, which opens the door for improved client service, vendor support, and

company communications.

Security: VPNs maintain privacy through the use of tunneling protocols and

standard security procedures. A secure VPN encrypts data before it travels through the public network and decrypts it at the receiving end. The encrypted information

travels through a secure "tunnel" that connects to a company's gateway. The gateway then identifies the remote user and lets the user access only the information he or she is authorized to receive.

Increased flexibility: With a VPN, customers, suppliers and remote users can be

added to the network easily and quickly. Some VPN solutions simplify the process of administering the network by allowing the system's manager to implement changes from any desktop computer. Once the equipment is installed, the company simply

signs up with a service provider that activates the network by giving the company a slice of its bandwidth. This is much easier than establishing a WAN, which must be designed, built and managed by the company that creates it. VPNs also easily adapt

to a company's growth. These systems can connect 2,000 people as easily as 25.

Reliability: A secure VPN can be used for the authorization of orders from suppliers,

the forwarding of revised legal documents, and many other confidential business processes. Recent improvements in VPN technology have also increased the system's

reliability. Many service providers will guarantee 99% VPN uptime and will offer credits for unanticipated outages.

Lesson 13: Voice Technology Basics

Welcome to the Voice Technology Basics lesson. Combined voice and data networks

are definitely a hot topic these days. In this module, we'll start by discussing the convergence of voice and data. We'll present a bit of history as well so that you understand how this all came about.

We'll then move into discussing actual voice technology. There's a lot to cover here and a lot of vocabulary you'll need to be familiar with. We'll start with understanding

the traditional telephony equipment. We'll also discuss voice quality issues as well as enabling technologies such as compression that are making voice/data networks

possible. After we cover the technology, we'll discuss Voice over IP, Voice over Frame Relay,

and Voice over ATM. We'll then cover some of the new applications that are possible

Page 194: Network Notes

on combined voice/data networks. Finally, we'll look at how a company might migrate from traditional telephony to an

integrated voice/data network.

The Agenda

- Convergence of Voice and Data

- Voice Technology Basics
- Voice over Data Transports

- Applications

- Sample Migration

Convergence of Voice and Data

Today, voice and data typically exist in two different networks. Data networks use packet-switching technology, which sends packets across a network. All packets

share the available network bandwidth. At the same time, voice networks use circuit switching, which seizes a trunk or line for dedicated use. But this is all changing...

Data/Voice Convergence—Why?

There is a lot of talk today about merging voice and data networks. You may hear this

referred to as multiservice networking or data/voice/video integration or just voice/data integration. They all refer to the same thing. Merging multiple infrastructures into one that carries all data, regardless of type.

In this new world order, voice is just plain data. The trends driving this integration

are, initially, cost savings. Significant amounts of money can be saved by doing away with parallel infrastructures. In the long run, though, new business applications are what will drive the integration of data and voice. Applications such

as:

- Integrated messaging
- Voice-enabled desktop applications
- Internet telephony
- Desktop video (Intel ProShare, Microsoft NetMeeting, etc.)

So, how does a combined network save money?

Page 195: Network Notes

Data, Voice, and Video Integration Benefits

The place where you can realize the greatest savings is in the wide-area network (WAN), where the bandwidth and services are very expensive.

The concept here is that at some point, you want voice data "to ride for free." If you

look at the overall bandwidth requirements of voice compared to the rest of the network, it is minuscule. If you had to charge per packet or per kilobit, voice is basically "free."

Companies should experience several kinds of cost savings. Traditionally, the overall telecom budget includes three basic sections: capital equipment, support overhead

such as wages and salaries, and facilities. The majority of costs are incurred in the facilities. Facilities charges are recurring, such as leased-line charges which occur

every month, as opposed to capital equipment, which can be amortized over a couple of years.

Because facilities are the largest expense, this can also be the place where the most money can be saved. The largest part of the facilities charge is the telecom budget. If

the telecom budget can be reduced, money can be leveraged out of that to pay for network expansion.

People tell Cisco, "We have to leverage our budget to converge data, voice, and video. We have applications that demand exponential growth and we don't know how to finance that." Cisco advises customers to look at their established budgets and see if

there is any way to squeeze money out of them by putting in a more efficient infrastructure with features such as compression, and to move all traffic over a single

transport mechanism. On average, users can expect a 30 to 50 percent reduction in their IT budgets with convergence.

New applications that include voice are becoming increasingly important as they drive competitive advantage.

Before we get into the nuts and bolts of voice technology, let's take a look at just a couple of these applications that multiservice networks enable.

Voice Technology Basics

There are a lot of technologies and issues that are important to understand in

voice/data integration. There's also a lot of jargon and vocabulary. Pace yourself as we move through this section.

We'll start by looking at TDM versus packet-based networks. Then we'll cover the traditional telephony equipment. Voice quality issues are essential, and we'll discuss

these, along with the technologies that are making voice/data convergence a possibility.

Page 196: Network Notes

Traditional Separate Networks

So let's go back and look at where most companies are today.

Many organizations operate multiple separate networks, because when they were created that was the best way to provide various types of communication services

that were both affordable and at a level of quality acceptable to the user community. For example, many organizations currently operate at least three wide-area networks,

one for voice, one for SNA, and another for LAN-to-LAN data communications. This traffic can be very "bursty."

The traditional model for voice transport has been time-division multiplexing (TDM), which employs dedicated circuits.

Dedicated TDM circuits are inefficient for the transport of "bursty" traffic such as LAN-to-LAN data. Let's look at TDM in more detail so that you can understand why.

Traditional TDM Networking

TDM relies on the allocation of bandwidth on an end-to-end basis. For example, a pulse code modulated (PCM) voice channel requires 64 kbps to be allocated from end

to end. TDM wastes bandwidth, because bandwidth is allocated regardless of whether there is an actual phone conversation taking place.
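The 64 kbps figure falls out of standard PCM arithmetic: sample the roughly 4 kHz voice band at twice its highest frequency (the Nyquist rate) and quantize each sample to 8 bits. A quick sketch:

```python
# Where the 64 kbps PCM voice channel comes from (standard G.711 PCM):
VOICE_BANDWIDTH_HZ = 4000             # the telephone voice band is limited to ~4 kHz
SAMPLE_RATE = 2 * VOICE_BANDWIDTH_HZ  # Nyquist: sample at twice the highest frequency
BITS_PER_SAMPLE = 8                   # each sample is quantized to 8 bits

channel_rate = SAMPLE_RATE * BITS_PER_SAMPLE
print(channel_rate)  # 64000 bps = 64 kbps

# A T1 trunk time-division-multiplexes 24 such channels plus 8 kbps of framing:
t1_rate = 24 * channel_rate + 8000
print(t1_rate)  # 1544000 bps = 1.544 Mbps
```

The same arithmetic explains the waste described above: each 64 kbps time slot is reserved end to end whether or not anyone is speaking.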

Page 197: Network Notes

So again, dedicated TDM circuits are inefficient for the transport of "bursty" traffic

because:

- LAN traffic can typically be supported by TDM in the WAN only by allocating enough bandwidth to support the peak requirement of each connection or traffic type. The trade-off is between poor application response time and expensive

bandwidth. - Regardless of whether single or multiple networks are involved, bandwidth is

wasted. TDM traffic is transmitted across time slots. Varying traffic types, mainly voice and data, take dedicated bandwidth, regardless of whether the time slot is

idle or active. Bandwidth is not shared.

After: Integrated Multiservice Networks—Data/Voice/Video

With a multiservice network, all data is run over the same infrastructure. We no longer have three or four separate networks, some TDM, some packet. One packet-

based network carries all the data. How does this work? Let's look at packet-based networking.

Packet-Based Networking

As we have just seen, TDM networking allocates time slots through the network.

In contrast, packet-based networking is statistical, in that it relies on the laws of

probability for servicing inbound traffic. A common trait of this type of networking is that the sum of the inbound bandwidth often exceeds the capacity of the trunk.

Page 198: Network Notes

Data traffic by nature is very bursty. At any instant in time, the average amount of offered traffic may be well below the peak rate. Designing the network to more closely

match the average offered traffic ensures that the trunk is more efficiently utilized.

However, this efficiency is not without its cost. In our effort to increase efficiency, we run the risk of a surge in offered traffic that exceeds our trunk capacity.

In that case, there are two options: we can discard the traffic or buffer it. Buffering helps us reduce the potential of discarded data traffic, but increases the delay of the data. Large amounts of oversubscription and large amounts of buffering can result in

long variable delays.
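The trade-off between oversubscription, buffering, and delay can be illustrated with a toy simulation; the trunk capacity and traffic figures below are invented for the sketch.

```python
# Toy simulation of statistical multiplexing: average offered load is below
# trunk capacity, but bursts exceed it and must be buffered (adding delay).
# All figures here are invented for illustration.
import random

random.seed(7)

TRUNK_CAPACITY = 100   # traffic units the trunk can carry per tick
AVERAGE_OFFERED = 80   # mean offered load, safely below capacity on average

buffer_depth = 0
max_depth = 0
for tick in range(1000):
    # Bursty arrivals: uniform between 0 and twice the average, so individual
    # peaks can exceed the trunk even though the mean does not.
    arrivals = random.randint(0, 2 * AVERAGE_OFFERED)
    # Whatever the trunk cannot send this tick accumulates in the buffer.
    buffer_depth = max(0, buffer_depth + arrivals - TRUNK_CAPACITY)
    max_depth = max(max_depth, buffer_depth)

print("worst-case backlog:", max_depth)  # a deeper backlog means longer queueing delay
```

Even with the average load well under capacity, the backlog grows during bursts; in a real network that backlog is either buffered (delay) or discarded (loss), exactly the choice described above.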

Traditional Telephony

You can't really understand voice/data integration unless you understand telephony. This section covers that.

Voice Systems Rely on Public Switched Telephone Networks

In a typical voice/analog telephone network, users make an outside phone call from

the phone on their desk. The call then connects to the company's internal phone system or directly to the Public Switched Telephone Network (PSTN) over a basic

telephone service analog trunk or a T1/E1 digital trunk. From the PSTN, the call is routed to the recipient, such as an individual at home.

If a call connects to a company's internal phone system, the call may be routed internally to another phone on the corporate voice network without ever going

through the PSTN. The PSTN may contain a variety of transmission media, including copper cable, fiber-

optic cable, microwave communications, and satellite communications.

Page 199: Network Notes

Traditional Telephony Equipment

A telephone set is simply a telephone.

KTS: Key telephone systems, found commonly in small business environments, enhance the functionality of telephone sets. The telephones have multiple buttons

and require the user to select central-office phone and intercom lines.

EKTS: Electronic key telephone systems improve upon KTS systems. EKTSs often

provide switching capabilities and impressive functionality, crossing into the PBX world.

PBX: A private branch exchange system allows the sharing of pooled trunks (outside lines), to which the user typically gains access by dialing an access digit such as "9."

Software in the PBX manages contention for pooled lines. The PBX system has many features, including simultaneous voice call and data screen, automated dial-outs from computer databases, and transfers to experts based on responses to questions

rather than phone numbers.

The historical differences between a PBX and a key system have blurred, and both product lines offer comparable feature sets for station-to-station calling, voice mail, and so on. Either the customer owns the PBX or it can be owned and operated by a

third party as a service to the end customer. To blur things further, key systems are beginning to offer selected trunk interfaces.

The major differences between a PBX and a key system are the following:

- A PBX looks to the network like another switch—it connects via trunk (PBX-to-PBX) interfaces to the network.
- A key system looks like a phone set (station) and connects via lines (station to PBX).
- PBXs serve the high end of the market.
- Key systems serve the low end of the market.

CO: The central office is the phone company facility that houses the switches.

Switch: An electromechanical device, a switch performs the central switching function of a traditional telephony network. Today, it can include both analog and

digital hardware and software.

Toll switch: This switch is used to handle long-distance traffic.

Traditional Telephony Signaling, Addressing, and Routing

We will now consider how phone calls are created and sent through the traditional telephone network.

Signaling

Page 200: Network Notes

- Off-hook signaling: how a phone call gets started
- Signaling paths
- Signaling types

Addressing

- Very different from data network schemes
- These differences must be resolved in order to implement integrated data/voice/video (DVV)

Routing

- Dependent on the resolution of the addressing issue

Signaling in a Voice System Sets Up and Tears Down Calls

In any telephone system, some form of signaling mechanism is required to set up and tear down calls. When a caller from an office desk calls someone across the country

at another office desk, many forms of signaling are used, including the following:

- Between the telephone and the PBX
- Between the PBX and the CO
- Between two COs

All of these signaling forms may be different. Simple examples of signaling include ringing of a telephone, dial tone, and so on.

There are five basic categories of signals commonly used in a telecommunications network:

Supervisory—Used to indicate the various operating states of circuit combinations.

Page 201: Network Notes

Also used to initiate and terminate charging on a call.

Information—Informs the customer or operator about the progress of a call. These

are generally in the form of universally understood audible tones (for example, dial

tone, busy, ringing) or recorded announcements (for example, intercept, all circuits busy).

Address—Provides information about the desired destination of the call. This is

usually the dialed digits of the called telephone number or access codes. Typical types of address signals are Dial Pulse (DP), DTMF, and MF.

Control—Interface signals that are used to announce, start, stop, or modify a call.

Control signals are used in interoffice trunk signaling.

Alert—Ringing signal put on subscriber access lines to indicate an incoming call.

Signals such as ringing and receiver off-hook are transmitted over the loop to notify the customer of some activity on the line.

Signaling Between the Telephone and PBX

A telephone can be in one of two states: off-hook or on-hook. A line is seized when

the phone goes off-hook.

Off-hook—A telephone is off-hook when the telephone handset is lifted from its

cradle. When you lift the handset, the hook switch is moved by a spring and alerts the PBX that the user wants to receive an incoming call or dial an outgoing call. A

dial tone indicates "Give me an order." On-hook—A telephone is on-hook when its handset is resting in the cradle and the

phone is not connected to a line. Only the bell is active, that is, it will ring if a call comes in.

The phone company can provision a Private Line, Automatic Ringdown (PLAR) between two devices. A PLAR is a leased voice circuit that connects two single

instruments. When either handset is lifted, the other instrument automatically rings. Typical PLAR applications include a telephone at a bank ATM, phones at an airport that ring a selected hotel, and emergency phones.

Page 202: Network Notes

Signaling Between the PBX and CO

A telephone system "starts" (seizes) a trunk, or the CO seizes a trunk by giving it a supervisory signal. There are three ways to seize a trunk:

- Loop start—A signaling method in which a line is seized by bridging through a

resistance at the tip and ring (both wires) of a telephone line.

- Ground start—A signaling method in which one side of the two-wire line

(typically the "ring" conductor of the tip and ring) is momentarily grounded to get dial tone.

- Wink—A wink signal is sent between two telecommunications devices as part of a

handshaking protocol. It is a momentary interruption in the single frequency tone

indicating that one device is ready to receive the digits that have just been dialed. With a DID trunk, a wink signal from the CO indicates that additional digits will be

sent. After the PBX acknowledges the wink, the DID digits are sent by the CO.

PBXs work best on ground start trunks, though many will work on both loop start

and ground start. Normal single-line phones and key systems typically work on loop start trunks.

Signaling Between Switches

Common channel signaling (CCS) is a form of signaling where a group of circuits

share a signaling channel.

Page 203: Network Notes

Signaling System 7 (SS7) provides three basic functions:

- Supervisory signaling
- Alerting
- Addressing

SS7 is an ITU-T standard adopted in 1987. It is required by telecommunications

administrations worldwide for their networks. The major parts of SS7 are the Message Transfer Part (MTP) and the Signaling Connection Control Part (SCCP). SCCP works out-of-band, thereby providing a lower incidence of errors and fraud,

and faster call setup and take-down.

SS7 provides two major capabilities:

- Fast call setup via high-speed circuit-switched connections.

- Transaction capabilities that deal with remote database interactions. SS7 information can tell the called party who's calling and, more important, tell the

called party's computer. SS7 is an integral part of ISDN. It enables companies to extend full PBX and Centrex-

based services—such as call forwarding, call waiting, call screening, call transfer, and so on—outside the switch to the full international network.

Signaling in a Computer Telephony System

Foreign Exchange (FX) trunk signaling can be provided over analog or T1/E1 links.

Connecting basic telephone service telephones to a computer telephony system via T1 links requires a channel bank configured with FX-type connections.

To generate a call from the basic telephone service set to a computer telephony system, a foreign exchange office (FXO) connection must be configured. To generate a

call from the computer telephony system to the basic telephone service set, a foreign exchange station (FXS) connection must be configured.

Page 204: Network Notes

When two PBXs communicate over a tie trunk, they use E&M signaling (stands for

Earth and Magneto or Ear and Mouth). E&M is generally used for two-way (either side may initiate actions) switch-to-switch or switch-to-network connections. It is

also frequently used for the computer telephony system to switch connections.

Dialing Within a Phone System

Calls within a phone system are considered on-net or off-net, as follows:

- On-net calling refers to calls that stay on a customer's private network, traveling

by private line from beginning to end.

- A call to an off-premise extension connected by a tie trunk is considered an on-net call. The off-premise telephone is located in a different office or building from the main phone system, but acts as if it is in the same location as the main phone

system and can use its full capabilities.

- Off-net calling refers to phone calls that are carried in part on a network but are destined for a phone that is not on the network. That is, some part of the conversation's journey will be over the PSTN or someone else's network.

Voice Network Addressing

Page 205: Network Notes

Voice addressing is determined by a combination of international and national standards, local telephone company practices, and internal customer-specific codes. Voice addressing historically has had a geographical connotation, but the introduction of wireless and portable services will render that impossible to maintain. International and national numbering plans are described by the ITU's E.164 recommendation. It is expected that the local telephone company adheres to this recommendation.

E.164 is only the public network addressing system. There are also private dialing plans, which are nonstandardized and can be considered highly effective by their users.

This slide depicts a trunk group that bypasses the PSTN. Selection of this trunk has been predefined and mapped to the number 8. The access number could be part of the E.164 addressing scheme or part of a private dialing plan.

Alternate numbering schemes are employed by users and providers of PSTN service for specific reasons. An example of a non-E.164 plan is the carrier identification code (CIC), used for selecting different long-distance carriers, tie lines, trunk groups, WATS lines, and private numbering plans, such as seven-digit dialing.

For integrating voice and data networks, each of these areas must be considered.

Voice Routing

Routing is closely related to the numbering plan and signaling that we just described.

Page 206: Network Notes

At its most basic level, routing enables the establishment of a call from the source telephone to the destination telephone. However, most routing is much more

sophisticated and allows subscribers to select specific services. In terms of implementation, routing is a result of establishing a set of tables or rules within each switch. As a call comes in, the path to the desired destination and the

type of features available will be derived from these tables or rules. It is important to know how routing is done in the telephone network, because this function will be required in an integrated data/voice network.
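The table-or-rule lookup described above can be sketched as a longest-prefix match on the dialed digits. This is an invented illustration, not the behavior of any particular switch; the route names, digit prefixes, and the `route_call` helper are assumptions for the example.

```python
# Hypothetical dial-plan table: digit prefix -> trunk selection.
# The prefixes and trunk names below are invented for illustration.
ROUTES = {
    "8": "private tie-trunk group",      # on-net access code, as in the slide above
    "9011": "international PSTN trunk",
    "9": "local PSTN trunk",
    "4": "local extensions",
}

def route_call(dialed: str) -> str:
    """Select the trunk whose prefix is the longest match for the dialed digits."""
    best = ""
    for prefix in ROUTES:
        if dialed.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    if not best:
        raise ValueError(f"no route for {dialed!r}")
    return ROUTES[best]
```

Dialing a number starting with 9011 matches the four-digit prefix rather than the bare 9, just as a switch prefers its most specific rule.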

Voice over Data Networks

Now that you understand how today's voice networks work, let's take a look at how real-time voice over a data network works.

Voice over Packet Networks Allow Real-Time Voice on Data Networks

Voice over packet networks provide techniques for sending real-time voice over data networks, including IP, Frame Relay, and Asynchronous Transfer Mode (ATM)

networks.

Analog voice is converted into digital voice packets, sent over the data network as

data packets, and converted to analog voice on the other end.

Converting from Voice to Data

Page 207: Network Notes

Analog voice is converted to digital data packets with the following steps:

1. A person speaking into the telephone produces an analog voice signal.
2. Coder-decoder (CODEC) software converts the signal from analog to digital data packets suitable for transmission over a TCP/IP network.
3. A digital signal processor (DSP) chip compresses the packets for transmission over the data network.

The data network can be an IP LAN, or a leased-line, ATM, or Frame Relay network.
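As a rough sketch of what the CODEC step does, the G.711 mu-law companding curve can be written straight from its textbook formula. This models only the continuous compression curve, not the full 8-bit quantizer, framing, or packetization; the function names are my own.

```python
import math

MU = 255  # mu-law parameter used by G.711 in North America

def mulaw_compress(x: float) -> float:
    """Compand a linear sample in [-1, 1]; small signals get more resolution."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * math.log1p(MU * abs(x)) / math.log1p(MU)

def mulaw_expand(y: float) -> float:
    """Invert the companding curve back to a linear sample."""
    sign = 1.0 if y >= 0 else -1.0
    return sign * ((1.0 + MU) ** abs(y) - 1.0) / MU
```

The curve boosts quiet samples before quantization, which is why 8-bit mu-law PCM sounds far better than plain 8-bit linear coding at the same 64-Kbps rate.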

Converting from Data Back to Voice

Digital data packets are converted back to analog voice with the following steps:

4. The DSP chip uncompresses the packets.
5. CODEC software converts the signal from digital data packets back to analog voice.
6. The recipient listens to the voice on their telephone.

The "Enabling" Technologies

What's made this all possible is that in the last ten years, a lot of things have happened in voice technology:

Access price/performance: Access products and services have improved in price/performance.

Processing: Digital signal processors (DSPs) specialize in processing analog waveforms, which voice and video inherently are. Today, DSPs are cheaper and higher powered, enabling more advanced algorithms to compress, synthesize, and process voice and video signals. CPUs within the devices have increased in power as well.

Voice compression: Voice compression is used to save bandwidth. A variety of voice

compression schemes provide a variety of levels of bandwidth usage and voice quality. These compression methods often do not interoperate. Modem, fax, and dual tone multifrequency (DTMF) functionality are all impacted by voice-compression

methods.

Page 208: Network Notes

Standards: Advances have been made over the past few years that enable the

transmission of voice traffic over traditional public networks, such as Frame Relay

(Voice over Frame Relay). Standards, such as G.729 for voice compression, FRF.11 and FRF.12 for voice over

Frame Relay, and the long list of ATM standards enable different types of traffic to come together in a nonproprietary network. Additionally, the support of Asynchronous Transfer Mode (ATM) for different traffic

types, and the ATM Forum's recent completion of the Voice and Telephony over ATM specification, will speed up the availability of industry-standard solutions for voice over ATM.

Higher-speed infrastructure: In general, the infrastructures to support voice in

corporate environments and in the public network environments are much higher-speed now, so they can carry more voice traffic and effectively take on the voice tasks for the corporation.

Voice Technologies Compression

What makes voice compression possible is the power of Digital Signal Processors.

DSPs have continued to increase in performance and decrease in price over time, and as they have, it has made it possible to use new compression schemes that offer

better quality and use less bandwidth. The power of the DSP makes it possible to combine this traffic onto a line that formerly supported perhaps only a LAN connection, but now can support voice, data, and LAN integration.

Looking at this chart, quality and bandwidth tend to trade off. PCM is the standard 64-Kbps scheme for coding voice; it is the standard for toll quality. The other compression schemes (ADPCM at 32 Kbps, 24 Kbps, and 16 Kbps) offer more bandwidth efficiency at some cost in quality. The newer compression schemes (LDCELP at 16 Kbps and CS-ACELP at 8 Kbps) offer even higher efficiency while retaining very high quality, quite acceptable in a business environment.

ADPCM—Adaptive Differential Pulse Code Modulation: consumes only 32 Kbps

compared to the 64 Kbps of a traditional voice call; often used on long-distance connections.

Page 209: Network Notes

LPC—Linear predictive code: a second group of standards that provide better voice

compression and, at the same time, better quality. In these standards, the voice coding uses a special algorithm, called linear predictive code (LPC), that models the

way human speech actually works. Because LPC can take advantage of an understanding of the speech process, it can be much more efficient without sacrificing voice quality.

CELP—Code-Excited Linear Predictive voice compression: uses additional

knowledge of speech to improve quality.

CS-ACELP—Conjugate Structure Algebraic CELP: further improvements to CELP enable voice to be coded into 8-Kbps streams. There are two forms of this standard, both providing speech quality as good as that of 32-Kbps ADPCM.
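The bandwidth trade-off above is easy to quantify. The sketch below pairs each scheme from the chart with its commonly associated ITU-T recommendation (G.711 for PCM, G.726 for 32-Kbps ADPCM, G.728 for LDCELP, G.729 for CS-ACELP) and estimates how many calls fit on a link; the `calls_per_link` helper is invented, and real deployments must also budget for packet overhead.

```python
# Nominal codec bit rates from the chart above, in Kbps.
CODEC_KBPS = {
    "PCM (G.711)": 64,
    "ADPCM (G.726)": 32,
    "LDCELP (G.728)": 16,
    "CS-ACELP (G.729)": 8,
}

def calls_per_link(link_kbps: int, codec: str) -> int:
    """Payload-only estimate of simultaneous calls; ignores packet overhead."""
    return link_kbps // CODEC_KBPS[codec]
```

A T1 payload of 24 x 64 = 1536 Kbps carries 24 PCM calls, but 192 CS-ACELP calls by the same payload-only arithmetic.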

Voice Quality Guidelines

Silence Suppression by Voice Activity Detection

Voice activity detection (VAD) provides for additional savings beyond that achieved by voice compression.

Page 210: Network Notes

Telephone conversations are half duplex by nature, because we listen and pause

between sentences. Sixty percent of a 64-kbps voice channel typically contains silence. VAD enables traffic from other voice channels or data circuits to make use of

this silence. The benefits of VAD increase with the addition of more channels, because the statistical probability of silence increases with the number of voice conversations

being combined.
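Using the 60-percent silence figure above, the average bandwidth saving from VAD is simple arithmetic. The helper below is an invented illustration of that calculation, not a measured model.

```python
def avg_kbps_with_vad(codec_kbps: float, silence_fraction: float = 0.6) -> float:
    """Average bandwidth when VAD suppresses the silent fraction of a call."""
    return codec_kbps * (1.0 - silence_fraction)
```

A 64-Kbps PCM channel with 60 percent silence averages about 25.6 Kbps, and the freed capacity can carry other voice channels or data.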

QoS Also Plays a Role in Voice Quality

The advantages of reduced cost and bandwidth savings of carrying voice over packet networks are associated with some quality of service issues that are unique to packet

networks. In a circuit-switched or TDM environment, bandwidth is dedicated, making QoS—quality of service—implicit, whereas, in a packet-switched environment, all kinds of

traffic are mixed in a store-and-forward manner. So, in a packet-switched environment, there is the need to devise schemes to prioritize real-time traffic.

So, in an integrated voice/data network, QoS is essential to ensure the same high quality as voice transmissions in the traditional circuit-switched environment.

QoS and Voice Quality

Some of the quality of service issues customers face include the following:

Delay—Delay causes two problems: echo and talker overlap. Echo is caused by the signal reflections of the speaker's voice from the far-end telephone equipment back into the speaker's ear. Echo becomes a significant problem when the round-trip delay becomes greater than 50 milliseconds (ms). Talker overlap becomes significant if the one-way delay becomes greater than 250 ms.

Jitter—Jitter relates to variable inter-packet timing caused by the network that a packet traverses. Removing jitter requires collecting packets and holding them long enough to allow the slowest packets to arrive in time to be played in the correct sequence, which causes additional delay.

Lost packets—Depending on the type of packet network, lost packets can be a severe problem. Because IP networks do not guarantee service, they will usually exhibit a much higher incidence of lost voice packets than ATM networks.

Echo—Echo is present even in a conventional circuit-switched telephone network, but is acceptable because the round-trip delays through the network are smaller than 50 ms and the echo is masked by the normal side tone that every telephone generates. Echo is a problem in voice over packet networks because the round-trip

Page 211: Network Notes

delay through the network is almost always greater than 50 ms. For this reason, echo cancellation techniques must be used.

Solutions to Voice Quality Issues

Quality of service issues for voice may be handled by the H.323, VoIP, VoATM, or

VoFR standards, or by an internetworking device. Following are some solutions to quality of service issues:

Delay—Minimize the end-to-end delay budget, including the accumulation delay,

processing delay, and network delay.

Jitter—Adjust the jitter buffer size to minimize jitter. On an ATM network, the approach is to measure the variation of packet levels over a period of time and incrementally adapt the buffer size to match the calculated jitter. On an IP network, the approach is to count the number of packets successfully processed and adjust the jitter buffer to target a predetermined allowable late packet ratio.

Lost packets—While dropped packets are not a problem for data (due to retransmission), they cause a significant problem for voice applications. To compensate, voice over packet software can interpolate for lost speech packets by replaying the last packet, or can send redundant information at the expense of bandwidth utilization.

Echo—Echo cancellation techniques are used to compare voice data received from the packet network with voice data being transmitted to the packet network. The echo from the telephone network hybrid is removed by a digital filter on the transmit path into the packet network.
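Two of the remedies above (holding packets so late arrivals can be played in sequence, and replaying the last packet to conceal a loss) can be combined in a toy de-jitter buffer. This is an invented sketch under simplified assumptions, not any product's implementation; real buffers also adapt their depth, as the text describes.

```python
import heapq

class JitterBuffer:
    """Toy de-jitter buffer: release packets in sequence order at playout time,
    concealing a missing packet by replaying the previous payload."""

    def __init__(self) -> None:
        self._heap: list[tuple[int, str]] = []  # (sequence number, payload)
        self._next_seq = 0
        self._last_payload = ""

    def push(self, seq: int, payload: str) -> None:
        """Buffer an arriving packet; packets may arrive out of order."""
        heapq.heappush(self._heap, (seq, payload))

    def pop(self) -> str:
        """Play out the next expected packet, or replay the last one if it never arrived."""
        if self._heap and self._heap[0][0] == self._next_seq:
            _, self._last_payload = heapq.heappop(self._heap)
        self._next_seq += 1
        return self._last_payload
```

If packets 0 and 2 arrive but packet 1 is lost, playout yields packet 0, a concealed repeat of packet 0, then packet 2, so the listener hears a glitch rather than a gap.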

Effect of QoS on Voice Quality

With all of the "marketing hype" around QoS today, many customers have become skeptical of the claims some vendors are making.

Here's one way to look at the actual effect of Cisco QoS technologies on voice quality.

Page 212: Network Notes

The blue line represents the total network data load. The green line represents voice

quality without QoS. As you can see, the quality of a voice call rises and falls in response to varying levels of background traffic.

The red line represents voice quality with QoS enabled, showing that high voice quality remains constant as background traffic fluctuates.

Voice over Data Transports

We've covered the building blocks for voice/data integration. Now, let's take a look at the different transports customers can consider.

The most widely used is Voice over IP. Voice over Frame Relay and Voice over ATM are also important, so we'll cover these as well.

Standards— VoIP, VoFR, and VoATM

VoIP:

- International Telecommunications Union (ITU)—International standards body for telephony
- ITU-T H.323—International Telecommunications Union recommendation for multimedia (including voice) networking over IP
- International Multimedia Teleconferencing Consortium (IMTC)—International standards body providing recommendations for multimedia networking over IP, including VoIP
- Internet Engineering Task Force (IETF)—Internet standards body

VoFR:

- FRF.11—Implementation agreement, ratified in May 1997 by the Frame Relay Forum, that defines the transport of voice over Frame Relay
- FRF.12—Provides an industry-standard approach to implement small frame sizes (Frame Relay fragmentation) to help reduce delay and delay variation
- Other related FRF standards—FRF.6 (Customer Network Management), FRF.7 (Multicast), FRF.8 (FR/ATM Service Interworking), FRF.9 (Data Compression), FRF.10 (Frame Relay Network-to-Network)

VoATM:

- ATM Forum:
  - Traffic Management Specification Version 4.0—af-tm-0056.000
  - Circuit Emulation Service 2.0—af-vtoa-0078.000
  - ATM UNI Signaling, Version 4.0—af-sig-0061.000
  - PNNI V1.0—af-pnni-0055.000
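The reason FRF.12 fragmentation reduces delay is worth a quick calculation: the serialization delay of a frame is just its size divided by the link rate, so one large data frame can monopolize a slow access link long enough to ruin a voice call queued behind it. The helper name below is an invented illustration.

```python
def serialization_delay_ms(frame_bytes: int, link_kbps: float) -> float:
    """Milliseconds needed to clock one frame onto the wire at the given link rate."""
    return frame_bytes * 8 / link_kbps  # bits / (kbit/s) gives milliseconds
```

A 1500-byte data frame ties up a 64-Kbps link for 187.5 ms, far beyond a voice delay budget, while fragmenting to 80-byte pieces caps the wait behind any one fragment at 10 ms.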

Page 213: Network Notes

Voice over Data Transports

All types of packetized voice implementations lend themselves well to both corporate and service provider use.

The Voice over IP (VoIP) approach provides Internet service providers (ISPs) with a competitive weapon against telecommunications companies, while telecommunications companies prefer a virtual circuit environment using Voice over Frame Relay (VoFR) or Voice over ATM (VoATM).

VoIP, VoFR, and VoATM Quality

In terms of quality, voice over Frame Relay (VoFR), voice over ATM (VoATM), and voice over IP (VoIP) differ. They also differ in terms of cost and in terms of general usability.

Frame Relay's variance does have an impact on voice quality, but Frame Relay can maintain a business-quality level of communication at lower cost. Therefore, VoFR is slightly lower cost than VoATM, but VoFR exhibits some usually undetectable variations in quality.

VoIP can range anywhere from utility quality (if used over the Internet) to toll quality (if used over an intranet with QoS mechanisms enabled). Yet it will generally provide the lowest cost for connectivity. Thus, VoIP in intranets is highly viable for the business user today and provides the most attractive cost option of the three.

VoATM, meaning voice over real-time variable bit rate (rt-VBR) or constant bit rate (CBR), is fully deterministic in terms of QoS. Voice quality never varies. However, VoATM is generally more costly to implement than is, say, VoFR.

All three options offer significantly lower costs than the costs of building a private network or using the PSTN, and usually require a fraction of the bandwidth.

Page 214: Network Notes

Voice over IP Components

The Voice over IP standard incorporates other components, including:

- ITU-T G-series standards, which specify analog-to-digital conversion and compression (as described earlier in this chapter).
- The H.323 standard, which specifies call setup and interoperability between devices and applications.
- Real-time Transport Protocol (RTP), which manages end-to-end connections to minimize the effect of packets lost or delayed in transit on the network.
- Internet Protocol (IP), which is responsible for routing packets on the network.

ITU-T H.323 Standard

ITU-T H.323 is a standard approved by the ITU-T that defines how audiovisual

conferencing data is transmitted across networks. H.323 provides a foundation for audio, video, and data communications across IP networks, including the Internet.

H.323-compliant multimedia products and applications can interoperate, allowing users to communicate without concern for compatibility.

H.323 provides important building blocks for a broad new range of collaborative, LAN-based applications for multimedia communications.

H.323 sets multimedia standards for the existing infrastructure (for example, IP-based networks). Designed to compensate for the effect of highly variable LAN latency, H.323 allows customers to use multimedia applications without changing

their network infrastructure.

By providing device-to-device, application-to-application, and vendor-to-vendor interoperability, H.323 allows customers' products to interoperate with other H.323-compliant products. PCs are becoming more powerful multimedia platforms due to faster processors, enhanced instruction sets, and powerful multimedia accelerator chips.

Page 215: Network Notes

Applications enabled by the H.323 standard include the following:

- Internet phones
- Desktop conferencing
- Multimedia Web sites
- Internet commerce
- And many others

H.323 Infrastructure

The H.323 standard specifies four kinds of components which, when networked together, provide point-to-point and point-to-multipoint multimedia communication services: terminals, gateways, gatekeepers, and multipoint control units (MCUs).

H.323 terminals are used for real-time bidirectional multimedia communications. An H.323 terminal can be either a PC or a standalone device running an H.323 stack and multimedia applications. It supports audio communications and can optionally support video or data communications.

An H.323 gateway provides connectivity between an H.323 network and a non-H.323 network. For example, a gateway can connect and provide communication between

an H.323 terminal and the Public Switched Telephone Network (PSTN). This connectivity of dissimilar networks is achieved by translating protocols for call setup

and release, converting media formats between different networks, and transferring information between the networks connected by the gateway. A gateway is not required, however, for communication between two terminals on an H.323 network.

A gatekeeper can be considered the "brain" of the H.323 network. Although they are not required, gatekeepers provide important services such as addressing, authorization, and authentication of terminals and gateways; bandwidth management; and accounting, billing, and charging. Gatekeepers may also provide call-routing services.

Page 216: Network Notes

MCUs provide support for conferences of three or more H.323 terminals. All terminals

participating in the conference establish a connection with the MCU. The MCU manages conference resources, negotiates between terminals for the purpose of

determining the audio or video CODEC to use, and may handle the media stream. The gatekeepers, gateways, and MCUs are logically separate components of the H.323 standard, but can be implemented as a single physical device.

H.323 Gatekeeper Functionality

Gatekeepers provide call control services to network endpoints. A gatekeeper can provide the following services:

Address translation—Performs alias-address-to-transport-address translation. Gatekeepers typically use a translation table to perform the address translation.

Admissions control—Authorizes LAN access based on call authorization, bandwidth,

or other criteria.

Call control signaling—The gatekeeper can complete call signaling with the endpoints, or it may process the call signaling itself. Alternatively, the gatekeeper may instruct the endpoints to connect the call signaling channel directly to each other, bypassing the gatekeeper.

Call authorization—A gatekeeper may reject calls from a terminal upon

authorization failure.

Bandwidth management—Controls the number of terminals that are permitted simultaneous access to a LAN.

Call management—Maintains a list of active calls.

H.323 Interoperability

VoIP works with a company's existing telephony architecture, including its private branch exchanges (PBXs) and analog phones.

Page 217: Network Notes

VoIP and H.323 enable companies to complete office-to-office telephone and fax calls across data networks, significantly reducing tolls. New applications are available, including unified messaging, which integrates e-mail with voice mail and fax.

Choosing VoIP

Customers may choose VoIP as their voice transport medium when they need a

solution that is simple to implement, offers voice and fax capabilities, and handles phone-to-computer voice communications. IP networks are proliferating throughout the marketplace. Thus, many customers can use VoIP today.

Integrating Voice and Data on the WAN

The Voice over IP and H.323 standards define how analog voice is converted to data packets and back again. The next step is to use a company's existing wide-area network (WAN) to transport voice traffic with data traffic.

Page 218: Network Notes

Serial (Leased Line) Services

T1 is a private-line digital service, operating at 1.544 Mbps in a full-duplex, TDM mode. The 1.544-Mbps transmission rate provides the equivalent capacity of 24

channels running at 64 Kbps each.

The full-duplex feature of T1 allows the simultaneous operation of independent transmit and receive paths. Each data path operates at a transmission rate of 1.544 Mbps. Companies that need less bandwidth can deploy fractional T1 trunks, using

any number of channels needed. A fractional service is tariffed on a linear pricing schedule, depending on the number of T1 channels and the distance covered.

The TDM feature allows logical channels to be defined within the T1 serial bit stream. The T1 bit stream may be channelized in many different ways, as follows:

- A single 1.544-Mbps digital channel (non-channelized) between the user's premises and the central office (CO)
- 24 independent channels, each providing 64 Kbps of bandwidth
- Any variation of 64-Kbps channel combinations

Each logical channel may be independently transmitted and switched. A combination of voice, video, and data may be transmitted over a single T1 line.

Ideal Applications for T1 Services

T1 service is ideal for applications that require continuous high-speed transmission capabilities. Some common T1 applications include the following:

- High-volume LAN interconnection

- Integrated voice, data, video, and imaging transmission
- Compressed video transmission
- Bulk data transfer
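The 1.544-Mbps figure quoted above follows directly from the T1 frame structure: each frame carries one 8-bit sample for each of the 24 channels plus one framing bit, and 8000 frames are sent per second. The arithmetic can be checked in a few lines (variable names are my own):

```python
CHANNELS = 24
FRAMES_PER_SEC = 8000              # one 8-bit sample per channel every 125 microseconds
BITS_PER_FRAME = CHANNELS * 8 + 1  # 192 payload bits + 1 framing bit = 193

payload_kbps = CHANNELS * 64                              # 24 x 64 = 1536 Kbps usable
line_rate_kbps = BITS_PER_FRAME * FRAMES_PER_SEC // 1000  # 193 x 8000 bits/s = 1544 Kbps
```

The 8-Kbps difference between the line rate and the payload is the framing overhead.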

Page 219: Network Notes

Frame Relay Services

Frame Relay is a packet-switching WAN technology that has achieved widespread support among vendors, users, and communications carriers. Its development has been spurred by the need to internetwork LANs at high speeds while maintaining the

lower costs associated with packet-switching networks.

Frame Relay offers very high access speeds. In North America, initial Frame Relay access rates start at 56 Kbps and go up to 1.544 Mbps. In Europe, the initial Frame Relay access rates start at 64 Kbps and go up to 2.048 Mbps. Companies can

contract with their service provider for a committed information rate (CIR). The Frame Relay standard today uses permanent virtual circuits (PVCs). All traffic for

a PVC uses the same path through the Frame Relay network. The endpoints of the PVC are defined by a data-link connection identifier (DLCI). The CIR, DLCIs, and

PVCs are defined when the user initially subscribes to a Frame Relay service. Frame Relay allows remote host access for applications such as the following:

- Remote host connectivity
- Credit card authorization
- Online information services
- Remote order entry

Frame Relay supports multiple virtual connections over a single physical interface. This means that Frame Relay is often the ideal solution to provide many users with

simultaneous access to a remote location. In these cases, the Frame Relay connection helps optimize the return on investment of the host system.

Voice over Frame Relay

Voice over Frame Relay (VoFR) technology consolidates voice and voice-band data

(including fax and analog modems) with data services over a Frame Relay network. The VoFR standard is specified in FRF.11 by the Frame Relay Forum.

VoFR allows PBXs to be connected using Frame Relay PVCs. The goal is to replace leased lines and lower costs. With VoFR, customers can easily increase their link

Page 220: Network Notes

speeds to their Frame Relay service or their CIR to support additional voice, fax, and data traffic.

How VoFR Works

A voice-capable router connects both a PBX and a data network to a public Frame Relay network. A voice-capable router includes a voice Frame Relay access device (VFRAD) or a voice/fax module that supports voice traffic on the data network.

Choosing VoFR

Frame Relay provides another popular transport for multiservice networks since Frame Relay networks are common in many areas. Frame Relay is a cost-effective service that supports bursty traffic well.

Frame Relay enables customers to prioritize voice frames over data frames to

guarantee quality of service (QoS).

Page 221: Network Notes

Asynchronous Transfer Mode (ATM) Services

Asynchronous Transfer Mode (ATM) is a technology that can transmit voice, video, data, and graphics across LANs, metropolitan-area networks (MANs), and WANs. ATM is an international standard defined by ANSI and ITU-T that implements a high-speed, connection-oriented, cell-switching, and multiplexing technology that is designed to provide users with virtually unlimited bandwidth. Many in the

telecommunications industry believe that ATM will revolutionize the way networks are designed and managed.

Today's networks are running out of bandwidth. Network users are constantly demanding more bandwidth than their network can provide. In the mid-1980s,

researchers in the telecommunications industry began to investigate the technologies that would serve as the basis for the next generation of high-speed voice, video, and data networks. The researchers took an approach that would take advantage of the

anticipated advances in technology and enable support for services that might be required in the future. The result of this research was the development of the ATM standard.

How VoATM Works

Using a WAN switch for ATM, customers can connect their PBX network and data network to a public or private ATM network.

Page 222: Network Notes

One attractive aspect of ATM is its ability to support different QoS, as appropriate for various applications. The QoS spectrum ranges from circuit-style service, where

bandwidth, latency, and other parameters are guaranteed for each connection, to packet-style service, where best-effort delivery allocates bandwidth for each active

connection. The ATM Forum developed a set of terms for describing requirements placed on the

network by particular types of traffic. These five terms (AAL1 through AAL5) are referred to as adaptation layers, and are used as a common language for discussing what kinds of traffic requirements an application will present to the network.

- AAL1—Connection-oriented, constant bit rate, commonly used for emulating traditional circuit connections.
- AAL2—Connection-oriented, variable bit rate, used for packet video and audio services.
- AAL3/4—Connection-oriented, variable bit rate.
- AAL5—Connectionless, variable bit rate, commonly used for IP traffic as it provides packetization similar to that done with IP.

Choosing VoATM

VoATM is an ideal transport for multiservice networks, particularly for customers

who already have an ATM network installed. ATM handles voice, video, and data equally well.

One attractive aspect of ATM is its ability to support different QoS features as appropriate for various applications.

The ATM Forum has defined a number of QoS types, including:

Constant bit rate (CBR)—An ATM service type for nonvarying, continuous streams of

bits or cell payloads. Applications, such as voice circuits, generate CBR traffic patterns. The ATM network guarantees to meet the transmitter's bandwidth and

other QoS requirements. Many voice and circuit emulation applications can use CBR.

Variable bit rate (VBR)—An ATM service type for information flows with irregular but fully characterized traffic patterns. VBR is divided into real-time VBR and non-real-time VBR; in both, the ATM network guarantees to meet the bandwidth and other QoS requirements. Many applications, particularly compressed video, can use VBR service. In practice, it is fairly common that real network traffic never reaches the ceiling value.

Unspecified bit rate (UBR)—An ATM service type that provides "best effort" delivery of transmitted data. It is similar to the datagram service available from today's internetworks. Many data applications can use UBR service.

Available bit rate (ABR)—An ATM service type that provides "best effort" delivery of

Page 223: Network Notes

transmitted data. ABR differs from other "best effort" service types, such as UBR, because it employs feedback to notify users to reduce their transmission rate to alleviate congestion. Hence, ABR offers a qualitative guarantee to minimize undesirable cell loss. Many data applications can use ABR service.

How Packet Technologies Stack Up for Voice

Because Frame Relay technology was originally designed and optimized as a data solution, you could dedicate a public or private Frame Relay network to data and pay

separate dialup or Virtual Private Network (VPN) rates for intracompany phone calls. Provided you can afford the different types of equipment, services, and staff resources

required to manage both networks, this choice assures you of the highest quality for each type of traffic today. This option is most likely desirable for sites that are very data-heavy.

Another option is to achieve some level of integration by using one piece of circuit- switching equipment, such as a time-division multiplexer (TDM), to connect both the

PBX and LAN server to a wide-area network. Customers gain economies by running all WAN traffic over a single service (rather than receiving multiple WAN bills) and

avoiding paying phone company rates for intra-enterprise phone calls. The costly downside is that within the network, bandwidth is likely to be wasted,

because you are still reserving circuits for certain types of traffic, and those circuits sit idle when nothing travels across them.

Applications

Now let's put it all together. How does it actually work? Let's look at the voice applications on an integrated voice/data network that replace traditional telephony.

Applications for Integrated Voice and Data Networks

Page 224: Network Notes

Integrated voice and data networks support a variety of applications, all of which are designed to replace leased lines and lower costs. Each of the applications listed below is discussed on the following pages.

- Inter-office calling
- Toll bypass
- On-net to off-net call rerouting
- PLAR replacement
- Tie trunk replacement

On-Net Call, Intra-Office

A voice-capable router can function as a local phone system for intra-office calls. In the example, a user dials a phone extension, which is located in the same office. The voice-capable router routes the call to the appropriate destination.

Toll Bypass—On-Net Call, Inter-Office

A voice-capable router can function as a phone system for inter-office calls to route calls within an enterprise network.

In the example, a user dials a phone extension, which is located in another office location. Notice that the extension number begins with a different leading number

than the on-net, intra-office call. The voice-capable router routes the call to another voice-capable router over an ATM, Frame Relay, or HDLC network. The receiving router then routes the call to the PBX, which routes the call to the appropriate phone

Page 225: Network Notes

extension.

This solution eliminates the need for tie trunks between office locations, or eliminates long-distance toll charges between locations.

Toll Bypass—On-Net to Off-Net Dialing

A voice-capable router can provide off-net dialing to a location outside the local office,

through the PSTN. In the example, a user dials 9 to indicate an outbound call, then dials the remaining 7-digit number (this is a local phone call). The voice-capable router routes the call to

another voice-capable router over a Frame Relay or HDLC network. The receiving router recognizes that this is an outbound call and routes it to the company‘s PBX in

New York. Finally, the PBX routes the call to the PSTN and the call is routed to the appropriate destination. This solution places the call on-net as far as possible, allowing a local PBX to place a

local call. This saves significantly on toll charges.
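The routing decisions described above come down to a prefix lookup on the dialed digits. The following is a minimal sketch of such a dial-plan lookup, not an actual router configuration; the prefixes and destination names are hypothetical examples.

```python
# Minimal sketch of a voice-capable router's dial-plan lookup.
# Prefixes and destination names are hypothetical, not real syntax.

DIAL_PLAN = [
    ("9", "pstn-via-remote-pbx"),  # off-net: hand off to the PBX nearest the callee
    ("3", "router-toronto"),       # on-net, inter-office extensions 3xxx
    ("2", "local"),                # on-net, intra-office extensions 2xxx
]

def route_call(dialed: str) -> str:
    """Return the destination for the longest matching prefix."""
    best = None
    for prefix, dest in DIAL_PLAN:
        if dialed.startswith(prefix):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, dest)
    if best is None:
        raise ValueError(f"no route for {dialed}")
    return best[1]

print(route_call("2145"))      # intra-office call stays local
print(route_call("3270"))      # inter-office call crosses the WAN
print(route_call("95551234"))  # leading 9 goes off-net via the remote PBX
```

Note how the leading digit alone decides whether the call stays local, crosses the WAN to another voice-capable router, or breaks out to the PSTN at the far end.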

On-Net to Off-Net Call Rerouting

1. Call attempted on-net
2. Remote system rejects call
3. Call rerouted off-net

Page 226: Network Notes

At times, on-net resources within an enterprise may be busy. However, telephone calls must still be routed. Using a voice-capable router that deploys Ear and Mouth

(E&M) signaling, a router can route calls to a PBX, and ultimately to the PSTN over a Frame Relay or HDLC network.

Keep in mind that a PBX cannot reroute a call after a line is "seized." Therefore, a voice-capable router can seize an off-net trunk and route a call. This solution

guarantees that a phone call is placed, regardless of the load on the network.

PLAR—Automatically Dials Extension

A voice-capable router can replace a Private Line, Automatic Ringdown (PLAR) service from a telephone service provider.

In the example, a user takes the phone off-hook, causing another telephone extension to ring. The voice-capable router recognizes that the phone is off-hook, and

routes the call over an ATM, Frame Relay, or HDLC network to the remote router. The remote router then routes the call to the PBX, which rings the appropriate extension. This solution eliminates the need for dedicated PLAR lines.

Tie Trunk Replacement PBX to PBX

Page 227: Network Notes

Voice-capable routers on a WAN can replace tie trunks between remote locations, thereby saving the cost of tie trunks. In essence, the voice-capable router on either

side of the ATM, Frame Relay, or HDLC WAN connection is configured as a tie trunk. The router then routes incoming and outgoing calls through the PBX.

The next slides graphically illustrate the migration from traditional circuit-switched voice networking to the new packet-switched integrated data/voice/video networking. Here you see two offices… one in Vancouver and one in Toronto. Each has a PBX to

handle the office but all calls inter-office go through the PSTN.

By adding voice-capable routers to the existing data network, connecting them to the existing PBXs, the company can first do toll bypass. This represents bandwidth no

longer needed for voice traffic that is now going through the routers.

Page 228: Network Notes

The PBX tie line also goes away now that its function has been replaced by a path between the voice-capable routers.

You can see here the end result. A much simplified network and considerable cost

savings.

- Summary -

As we have seen today, companies are interested in data/voice/video integration for very basic business reasons:

Reduce costs: phone toll charges; cost of multiple management methods and multiple types of expertise required to support multiple types of networks; capital expenditures on multiple networks.

Enable the new applications needed for business growth: multimedia (data/voice/video) applications require technologies based on multimedia standards.

Simplify network design: through strategic convergence of data, voice, and video networks.

And decision-makers have come to the conclusion that recent technical advancements have brought the benefits of voice/data integration within reach, such

as: H.323 standards; gateways; voice-compression, silence-suppression, and quality-of-service technologies.

At the same time, customers face new management challenges on these evolving networks: performing ad hoc device management across new technologies, and struggling with the transition to proactive, business-oriented service-level management.

Page 229: Network Notes

Network Management Process

The following figure gives a clear view of how the management process should be conducted.

There are three stages that matter most when conducting network management:

Plan / Design:

- Build history
- Baseline
- Trend analysis
- Capacity planning
- Procurement
- Topology design

Implement / Deploy:

- Installation and configuration
- Address management
- Adds, moves, changes
- Security
- Accounting/billing
- Assets/inventory
- User management
- Data management

Page 230: Network Notes

Operate / Maintain:

- Define thresholds
- Monitor exceptions
- Notify
- Correlate
- Isolate problems
- Troubleshoot
- Bypass/resolve
- Validate and report

Network Management Basics

Let's take a closer look at the basics of network management.

Network Management Architecture

In a network management system, a management station uses a network management protocol to communicate with agents running on the managed devices, backed by a management database, as shown in the figure.

Network Management Building Blocks

Following are the building blocks of a network management system.

Page 231: Network Notes

Simple Network Management Protocol (SNMP)

SNMP is one of the management building blocks. It is used to carry status messages and problem reports across a network to the management system. SNMP uses the User Datagram Protocol (UDP) as its transport mechanism. It employs different terms from TCP/IP, working with managers and agents instead of clients and servers. An agent provides information about a device, while the manager communicates across the network with the agents.

There are two newer versions of SNMP:

SNMPv2
- Addressed performance issues

SNMPv3
- Multilingual implementations (coexistence of versions)
- Enhanced security

Page 232: Network Notes

SNMP Message Types

SNMP messages are the requests and responses exchanged between the manager and the agent. When the agent receives a request from the manager for a MIB variable, it returns a response carrying that variable. The agent can also send a trap, an unsolicited message reporting an alarm condition.
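The get/response/trap message flow can be sketched in plain Python. This simulates the exchange only; a real implementation would use an SNMP library over UDP, and the MIB variable names and threshold here are illustrative assumptions.

```python
# Toy simulation of the SNMP message flow: the manager sends a get (or set)
# request for a MIB variable, the agent answers with a response, and the
# agent can also emit an unsolicited trap. Names are illustrative only.

class Agent:
    def __init__(self, mib, trap_sink):
        self.mib = mib              # the agent's local MIB: name -> value
        self.trap_sink = trap_sink  # where unsolicited traps are delivered

    def get(self, var):             # getreq -> getresp
        return {"type": "get-response", "var": var, "value": self.mib.get(var)}

    def set(self, var, value):      # setreq -> getresp
        self.mib[var] = value
        return {"type": "get-response", "var": var, "value": value}

    def check_thresholds(self):     # unsolicited: trap on an alarm condition
        if self.mib.get("ifErrors", 0) > 100:   # assumed threshold
            self.trap_sink({"type": "trap", "var": "ifErrors",
                            "value": self.mib["ifErrors"]})

traps = []
agent = Agent({"sysUpTime": 42, "ifErrors": 150}, traps.append)

print(agent.get("sysUpTime"))  # manager polls a variable
agent.check_thresholds()       # agent reports an alarm on its own
print(traps)
```

The polled variables travel only when the manager asks; the trap travels whenever the agent decides the condition warrants it.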

Management Information Base (MIB)

A MIB is a database of objects, maintained by the agent, for a specific device on the network.

Types of MIBs:

MIB I
- 114 standard objects
- Objects included are considered essential for either fault or configuration management

MIB II
- Extends MIB I
- 185 objects defined

Other standard MIBs

Page 233: Network Notes

- RMON, host, router, ...

Proprietary MIBs
- Extensions to standard MIBs

Sample MIB Variables

Network Management System (NMS)

The NMS plays a central role in the management system: it polls agents on the network, receives traps, gathers and displays information about the status of the network, and serves as the platform for integration.

Example: HP OpenView

Campus Agent Technologies

Page 234: Network Notes

These are technologies used by the NMS to manage agents in the campus, giving customers industry standards such as:

SNMP: Device gets and sets
RMON, RMON2: Traffic monitoring
ILMI: ATM discovery

They work together with Cisco extensions such as:

CDP: Adjacent neighbor discovery
ISL: VLAN trunking
DISL: Error-free ISL enablement
VTP: Automated VLAN setup
VQP: Dynamic station ID

Management Traffic Overhead

Management traffic itself adds load to the network. To keep this overhead down, the NMS should set its polling intervals wisely; the impact matters most on lower-speed links, where management traffic competes directly with user traffic.

Example:

- 1 manager, multiple managed devices
- 64-Kb access link
- 1 request = 1-KB packet (avg.)
- 1 poll = getreq + getresp = 2 KB
- Assume 1 object polled per managed device
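With those figures, the polling overhead is easy to work out. The device count and polling interval below are assumptions added for the sake of the arithmetic; only the 64-kbps link and 2-KB poll come from the example.

```python
# Polling overhead on a 64-kbps access link, using the figures above:
# one poll = getreq + getresp = 2 KB per managed device per cycle.
# The device count and polling interval are assumed for illustration.

LINK_BPS     = 64_000      # 64-kbps access link
POLL_BYTES   = 2 * 1024    # 2 KB per poll (request + response)
DEVICES      = 50          # assumed number of managed devices
INTERVAL_SEC = 60          # assumed polling interval

bits_per_cycle = DEVICES * POLL_BYTES * 8
overhead_bps   = bits_per_cycle / INTERVAL_SEC
utilization    = overhead_bps / LINK_BPS

print(f"{overhead_bps:.0f} bps of management traffic")
print(f"{utilization:.1%} of the 64-kbps link")
```

With these assumed numbers, polling alone eats roughly a fifth of the link; halving the interval doubles the overhead, which is why the polling interval must be chosen wisely on lower-speed links.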

Remote MONitoring (RMON)

RMON, the Remote MONitoring MIB, was designed to manage the network itself. MIB I/II could be used to check each machine's network performance, but doing so would consume large amounts of bandwidth for management traffic. Using RMON you see the wire view of the network and not just a single host's view. RMON has the capability to set performance thresholds and only report if a threshold is breached, again helping to reduce management traffic (effectively distributing the network management smarts!).

Page 235: Network Notes

RMON agents can reside in routers, switches, and dedicated boxes. The agents will gather up to 19 groups of statistics. The agents then forward this information upon

request from a client.

Because RMON agents must look at every frame on the network, performance is a must. Early RMON agents' performance could be classified based on processing power and memory.
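The threshold behavior is the heart of how RMON cuts management traffic: the agent watches counters locally and speaks up only when a limit is crossed. Here is a minimal sketch of that idea; the counter values and threshold are made up for the example.

```python
# Sketch of RMON-style threshold monitoring: the agent tracks statistics
# locally and reports only the samples that breach a configured threshold,
# instead of shipping every sample to the manager. Data is made up.

def monitor(samples, threshold):
    """Yield (index, value) only for samples that breach the threshold."""
    for i, value in enumerate(samples):
        if value > threshold:
            yield (i, value)

# Per-second collision counts observed on the wire (illustrative data):
collisions = [2, 3, 1, 40, 2, 55, 3]

alerts = list(monitor(collisions, threshold=30))
print(alerts)  # only the breaches cross the network as management traffic
```

Seven samples were taken, but only two reports leave the agent; that difference is the bandwidth saved.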

Network Monitoring with RMON

Cisco Discovery Protocol (CDP)

CDP provides automatic network discovery. The following are the activities of CDP:

- CDP agent polls neighbor devices
- Physical interface, IP address, chassis type exchanged
- Each device maintains a "CDP" cache table
- Tables are read by management application

Page 236: Network Notes

- Applicable across frame networks
- ILMI for ATM networks

Inter-Switch Link (ISL)

ISL maintains switch-to-switch performance. The following are the activities of ISL:

- Establishes membership through ASICs
- Eliminates lookups and tables
- Labels each packet as received (i.e., "packet tagging")
- Transports multiple VLANs across links
- Maps effectively across mixed backbones
- Protocol, end-station independent

VLAN Trunking Protocol (VTP)

Activities of VTP:

- Assigns virtual interfaces across backbone
- Maintains and manages global mapping table
- Based on Layer 2 periodic advertisements

Page 237: Network Notes

- Reduces setup time and improves reliability
- VTP pruning enhances VLAN efficiencies

Management Intranet Basics

Traditional Management Model Can’t Keep Pace

Here are the reasons why the traditional management model cannot keep pace with the management intranet:

- Focused point products
- Hierarchical platforms
- Minimal integration
- Proprietary solutions and APIs
- Product conflicts: what works with what?

New Model of Integration— Management Intranet

Multiple Web-accessible management tools can be hyperlinked, and management information shared easily with the DMTF's Common Information Model (CIM)

standard. Cisco's approach to Web-based enterprise management goes beyond mere browser access to embrace the total rearchitecting and reengineering of its

management products as true network-based applications. It also includes leadership in creation and adoption of standards such as CIM for multivendor management data integration. Cisco is aggressively applying Internet technologies

Page 238: Network Notes

and standards to create comprehensive enterprise management that easily integrates with leading third-party tools and enterprise system and service management

frameworks through the Cisco Management Connection.

CIM Data Exchange

For the Web model to deliver substantial value for the management software

industry, however, the vendors must agree on content standards for sharing of management information. Such a set of Web-oriented standards for exchanging basic management information is being defined under the Web-Based Enterprise

Management (WBEM) initiative, spearheaded by vendors such as Cisco, HP, Intel, Compaq, BMC, Microsoft, IBM/Tivoli and others. The Desktop Management Task

Force (DMTF) is now leading the effort to standardize the technologies of WBEM. The first of these, CIM, provides an extensible data model of the enterprise computing environment. Recent work by the DMTF makes the CIM model the basis for Web-based integration using XML (see sidebar on Web-Based Enterprise Management Standards for details).

Under the emerging Web-based management architecture, separate tools and

management applications can be integrated via a common browser interface that supports hyperlinking and the exchange of management data via CIM. Leading vendors, including Microsoft, Computer Associates, IBM/Tivoli, and Cisco have

announced or released products that implement the early versions of CIM standards. Already, Cisco and IBM/Tivoli have demonstrated use of CIM for two-way device data exchange between their management software packages. In addition to CIM-based

data exchange, tools can be hyperlinked to provide easy shifting within the browser from tool to tool as an operator executes a task such as isolating and solving a

problem. In this way, the most basic launch-level integration, popular for many years in existing management platforms, becomes available with minimal effort for practically any tool. Cisco is exploiting this technique to link its growing body of

management tools and distributed management data collection infrastructure with third-party ISV packages. It already has available Web-linking to more than 30

Page 239: Network Notes

leading third-party applications and is making it easy for its customers to create a "management intranet."

Role of Directories

- Single-user identity

- User profiles, applications, and network services - Integrated policies

- Common information model

Directory Enabled Networks (DEN) Standards

The future of the Directory Enabled Network is to extend the directory throughout the elements of the network. We can then provide a unified view of all the network resources at our disposal. From

a user's perspective, you will not need to authenticate on half a dozen different devices just to get your job done.

Policy Management Basics

Need for Policy

Policy management is one of the most important functions within network management.

Page 240: Network Notes

Aligning Network Resources with Business Objectives

- Application-aware network

- Intelligent network services - Network-wide service policy - Control by application & user

What Is a Network Policy?

A network policy is a set of high-level business directives that control the deployment of network services (e.g., security and QoS). Policies are created on the basis of, and in terms of, established business practices.

Page 241: Network Notes

Example: Allow all members of the Engineering department access to corporate

resources using Telnet, FTP, HTTP, and e-mail, 24 x 7
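The example policy above can be written down as data and evaluated mechanically, which is the essence of policy-based management. The field names in this sketch are an assumption for illustration, not any particular policy server's schema.

```python
# Sketch of the example policy as data: allow all members of Engineering
# access to corporate resources over Telnet, FTP, HTTP, and e-mail, 24x7.
# Field names are illustrative, not a real policy server's schema.

POLICY = {
    "department": "Engineering",
    "services":   {"telnet", "ftp", "http", "smtp"},
    "hours":      range(0, 24),   # 24 x 7: every hour of every day
}

def allowed(department: str, service: str, hour: int) -> bool:
    """Check one access request against the policy."""
    return (department == POLICY["department"]
            and service in POLICY["services"]
            and hour in POLICY["hours"])

print(allowed("Engineering", "http", 3))  # covered by the policy
print(allowed("Marketing", "http", 3))    # wrong department, denied
```

The point is that the directive is stated once in business terms and then enforced uniformly, rather than configured device by device.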

Role of QoS

Quality of service should be used wherever applications share network resources.

There are two broad application areas where QoS technologies are needed:

- Mission-critical applications need QoS to ensure delivery and that their traffic is not impacted by misbehaving applications using the network.
- Real-time applications such as multimedia and voice need QoS to guarantee bandwidth and minimize jitter. This ensures the stability and reliability of existing applications when new applications are added.

Voice and data convergence is the first compelling application requiring delay-sensitive traffic handling on the data network. The move to save costs and add new features by converging the voice and data networks--using voice over IP, VoFR, or

VoATM--has a number of implications for network management:

- Users will expect the combined voice and data network to be as reliable as the voice network: 99.999% availability
- To even approach such a level of reliability requires a sophisticated management capability; policies come into play again

Cisco‘s unique service is the ability to offer products that let network managers prioritize applications in today‘s evolving networks. Let‘s take a look at QoS in more detail.

What Is Quality of Service (QoS)?

The ability of the network to provide better or "special" service to users/applications.

Page 242: Network Notes

Where Is QoS Important?

QoS is important wherever applications contend for resources, in both the LAN and the WAN.

QoS Building Blocks

The following are the important building blocks of QoS:

- Classification
- Policing
- Shaping
- Congestion avoidance
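Of these building blocks, policing is the easiest to illustrate: a token bucket admits traffic up to a configured rate and drops (or re-marks) the excess. The following is a minimal sketch with made-up parameters, not any vendor's implementation.

```python
# Minimal token-bucket policer: tokens accumulate at `rate` bytes/sec up to
# `burst` bytes; a packet conforms if enough tokens are available when it
# arrives. Rate and burst values are made up for illustration.

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0   # start with a full bucket

    def conforms(self, size: int, now: float) -> bool:
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True          # in-profile: forward the packet
        return False             # out-of-profile: drop or re-mark

tb = TokenBucket(rate=1000, burst=1500)  # 1000 B/s, 1500-byte burst
print(tb.conforms(1500, now=0.0))  # True: uses the burst allowance
print(tb.conforms(1500, now=0.5))  # False: only 500 bytes refilled
print(tb.conforms(500, now=0.5))   # True: exactly the refilled amount
```

Shaping uses the same bucket arithmetic but queues the excess instead of dropping it, smoothing the flow rather than cutting it off.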

QoS and Network/Policy Management

Here we look at QoS in the context of network and policy management.

Page 243: Network Notes

Role of Security

Enterprises are more aware of security issues than ever before, with business

globalization, growing numbers of remote users, and especially the press buzz about the Internet and VPNs forcing security to their attention. Security needs to be tied to policies, so that it can be applied consistently, without leaving hidden holes subject

to hacker penetration.

Following are the activities:

Authentication and authorization
- Employees, partners, customers

Firewalls
- Protect corporate resources
- Enable safe Internet use

Encryption
- Ensure data confidentiality
- Secure virtual private networks

- SUMMARY -

- SNMP, MIBs, RMON, and network management systems are the building blocks of network management tools
- The management intranet promises greater integration and easier-to-use tools
- Policy-based management will allow enterprises to align network resources with business objectives

Page 244: Network Notes

Lesson 15: The Internet

In this lesson, we‘re going to discuss the Internet. We‘ll cover how the Internet has created a new business model that‘s changing how companies do business today.

We‘ll look at intranets, extranets, and e-commerce. Finally, we‘ll look at the technology implications of the new Internet applications such as the need for higher

bandwidth technologies and security.

The Agenda

- What Is the Internet?

- The New Business Model

- Intranets

- Extranets
- E-Commerce

- Technology Implications of Internet Applications

The Internet: A Network of Networks

What is the Internet? The Internet is the following:

- A flock of independent networks flying in loose formation, owned by no one and connecting an unknown number of users

- A grass roots cultural phenomenon started 30 years ago by a group of graduate students in tie-dyed shirts and ponytails

- Ma Bell's good old telephone networks dressed up for the 1990s
- A new way to transmit information that is faster and cheaper than a phone call, fax, or the post office

Some Internet facts:

- The number of hosts (or computers) connected to the Internet has grown from a handful in 1989 to hundreds of millions today.

- The MIT Media Lab says that the size of the World Wide Web is doubling every 50 days, and that a new home page is created every 4 seconds.

Page 245: Network Notes

Internet Hierarchy

The Internet has three components: information, wires, and people.

- The ―wires‖ are arranged in a loose hierarchy, with the fastest wires located in the middle of the cloud on one of the Internet‘s many ―backbones.‖

- Regional networks connect to the Internet backbone at one of several Network Access Points (NAPs), including MAE-EAST, in Herndon, Virginia; and MAE-WEST, in Palo Alto, California.

- Internet service providers (ISPs) administer or connect to the regional networks, and serve customers from one or more points of presence (POPs).

- Dynamic adaptive routing allows Internet traffic to be automatically rerouted around circuit failures.
- Dataquest estimates that up to 88 percent of all traffic on the Internet touches a Cisco router at some point.

The New Business Model

The Internet Is Changing the Way Everyone Does Business

From simple electronic mail to extensive intranets that include online ordering and extranet services, the Internet is changing the way everyone does business. Small

and medium-sized companies seeking to remain competitive into the next century must leverage the Internet as a business asset.

The Internet is forcing companies to adopt technology faster. You'll discover several themes that are driving the new Internet economy, as follows.

Compression—Everything happens faster: business cycles are shorter, and time and

distances are less relevant to your customers.

Time—Some companies have reported a 92 percent reduction in processing time when an item is ordered via an online system.

Distance—Using networked commerce, BankAmerica has widened its customer base

Page 246: Network Notes

so that now 30 percent of customers are outside the traditional geographic reach.

Business cycles—Adaptec, a manufacturing firm in California, used networked commerce to reduce their manufacturing cycle from 12 to 8 weeks, slashing their inventory costs by $10 million a year.

Market turbulence—Customers suddenly have more choices. They can shop farther afield in search of good values. You have to compete even harder to retain customers.

Networked business—Many deem that networked commerce applications will "make or break" companies in the next century. The ability to solicit and sustain business relationships with customers, employees, partners, and suppliers using networked commerce applications is critical to success.

Rapid transformation—Building relationships, business processes, and operating models that can quickly adjust to accommodate shifting market forces is essential. This requires an infrastructure that provides the ability to change rapidly.

Forces Driving Change

Shorter product life cycles are required to stay competitive.

Industry and geographical borders are changing rapidly:

- Companies today must be able to swiftly ―go to market‖ in new and expanded locations.

- Moreover, the rigid border or boundaries of manufacturers are changing: manufacturers are becoming retailers and distributors.

The need to ―do more with less‖ is essential to accommodate narrowing margins, intensifying competition, and industry convergence. The network must raise the

productivity of the workforce.

Page 247: Network Notes

Traditional Business Model Versus New Business Model

The Internet is transforming the way companies can use information and information systems. Historically, businesses have ―protected‖ company information and allowed

limited sharing of systems.

Creating these ―silos‖ of information has meant that each ―link‖ of the ―extended‖ traditional business has lacked access to relevant information to make profit maximizing decisions. That means your employees, suppliers, customers, and

partners were kept from information, not always by intention, but because limited access created barriers to sharing it. The result was:

- Closely held knowledge base
- Limited access to relevant and timely information
- Costly duplication of effort
- Limited transaction hours to conduct business

The Internet and networked applications have changed all that. They allow all companies, no matter the size, to break the information barriers—to ―let loose the

power of information.‖ Now we are experiencing a transition to a new business paradigm. In order to

compete effectively in this rapidly expanding Internet economy, we must reshape our business practices.

Companies today are now:

- Sharing knowledge with suppliers and partners
- Ensuring that relevant and timely information is available to all employees
- Removing redundancies
- Conducting business 24 hours a day, 7 days a week (24x7)

Accelerating this shift is the explosive growth and rapid adoption of Internet usage.

Today’s Internet Business Solutions

Let‘s take a look at some of the Internet business solutions that companies are driven to implement in order to improve their productivity and stay competitive. These include:

- Intranets
- Extranets
- E-commerce

Page 248: Network Notes

Intranets

What Is an Intranet?

An intranet is an internal network based on Internet and World Wide Web technology that delivers immediate, up-to-date information and services to networked employees

anytime, anywhere.

Whether providing capabilities to download the latest sales presentation, arrange travel, or report a defective disk drive to the technical assistance center, an intranet

offers a common, platform-independent interface that is consistent, easy to implement, and easy to use.

Initially, organizations used intranets almost exclusively as publishing platforms for delivering up-to-the-minute information to employees worldwide. Increasingly,

however, organizations are broadening the scope of their intranets to encompass interactive services that streamline business processes and reduce the time employees spend on routine, paper-based tasks.

Intranet applications are platform-independent, so they are less costly to deploy than

traditional client/server applications, and they bear no installation and upgrade costs since employees access them from the network using a standard Web browser. Finally, and perhaps most important, intranets enhance employees‘ productivity by

equipping them with powerful, consistent tools.

Typical Intranet Applications

Most companies can benefit from an intranet. Here are some sample applications:

Page 249: Network Notes

Employee self-service—Employee self-service provides your employees with the

ability to access information at any time from anywhere they want. It enables

employees to independently access vital company information. Employee self-service allows companies to save on labor costs as well as increase employee productivity

and communication. We‘ll look at this in more detail. Distance learning—Employee training becomes more accessible through distance

learning over the data network, which can draw employees from many sites into a single virtual classroom, saving them travel time and keeping them more productive. Technical support—Companies with limited IS staff can deploy an intranet server to

answer frequently asked technical questions, house software that users can

download, and provide documentation on a variety of subjects. Users gain instant access to key technical assistance, while IS staff can concentrate on other matters. Videoconferencing—A proven way to bring team members together without calling

for travel, video conferencing is now possible over a data network, bypassing the need

for an expensive parallel network. Intranets can make videoconferences easier to set up and use.

Example: Employee Self-Service

These are some of the employee self-service applications.

Let‘s take a look at one in detail. By posting HR benefits information on an intranet, employees can look up routine information without taking up the time of a benefits

administrator, thus reducing total headcount requirements. By giving employees the ability to look this information up anytime they wish, they are not confined to making their inquiries during regular business hours. And, they don‘t have to wait on hold

while another employee is being assisted, resulting in saved time. In addition, by posting general benefits information on the internal Web site, HR is

able to spend their time in more productive, strategic ways that ultimately benefit the company, as well as reduce the costs of having an administrator available on the

phone all day. Another example is corporate travel. Many employees travel frequently. New intranet

applications that store an employee‘s travel preferences can make it easy for employees to request or even book travel arrangements at any time of the day or night, enabling companies to provide this vital service at a lower cost.

As you can see, intranet applications are a win/win for both employees and the company.

Page 250: Network Notes

Benefits of Intranets

Intranets are rapidly gaining wide acceptance because they make network applications much easier to access and use. Intranets enable self-service.

Intranets allow you to:

- Improve design productivity and compress time-to-market, for example, by providing engineers with immediate access to online parts information and requisitions.

- Increase productivity through greater employee collaboration.

- Share or access vital information at any time, from any location. For example, you can extend intranets around the world, for instance, to sales offices in London

and Tokyo. Now sales teams or manufacturing plants in Asia can quickly access information on servers at the central office in the United States—and it‘s easier to use.

- Minimize downtime and cut maintenance costs by providing work teams with

complete electronic work packages. - Lower administrative costs by automating common tasks, such as forms and

benefit paperwork.

Extranets

What Is an Extranet?

An extranet allows you to extend your company intranet to your supply chain.

Extranets are an extension of the company network—a collaborative Internet connection to customers and trading partners designed to provide access to specific company information, and facilitate closer working relationships.

The way you extend your company network to your extranet partners can vary. For instance, you can use a private network for real-time communication. Or you can

leverage virtual private networks (VPNs) over the Internet for cost savings. You can also use a combination of both. However, it‘s important to realize that each solution has different benefits and security solutions.

A typical extranet solution requires a router at each end, a firewall, authentication software, a server, and a dedicated WAN line or VPN over the Internet.

Typical Extranet Applications

- Supply-chain management

- Customer communications
- Distributor promotions

Page 251: Network Notes

- Online continuing education/training
- Customer service
- Order status inquiry
- Inventory inquiry
- Account status inquiry
- Warranty registration
- Claims
- Online discussion forums

Extranet applications are as varied as intranet applications. Some examples are listed above. Extranets are advantageous anywhere that day-to-day operations processes that are being done by hand can be automated. Companies can save time

and money in development, production, order processing, and distribution. Improving productivity increases customer satisfaction, which drives business

growth.

Example: Supply Chain Management

The traditional business fulfillment model is linear, with communication flowing from supplier to manufacturer in a step-by-step process. Communication does not flow down the supply chain, resulting in inefficiencies and time-consuming processes.

Effectively managing the supply chain is more critical now than ever. Customers today are looking for a total solution—they want ease of purchase and implementation, they want customized products, and they want them yesterday.

Today, in order to better service and retain customers, companies realize that they

need to improve their business processes in order to deliver products to customers in reduced time. One effective way to do this is to improve the system processes that make up the overall supply chain.

With an extranet, companies can:

- Enable suppliers to see real-time market demand and inventory levels, thus

Page 252: Network Notes

providing them with the necessary information to alter their production mix accordingly.

- Give suppliers access to customer order information, so they can fulfill those orders directly without having to route product through you.
- Using the network, demand forecasts can be updated in real time, and manufacturing line statuses and product fulfillment can be queried by any member of the supply chain.
- Use the network to hold online meetings where product design teams work together with suppliers to discuss prototype development, resulting in reduced cycle times.

Benefits of Extranets

What are the benefits of using extranets? You can decrease inventories and cycle times, while improving on-time delivery.

You can increase customer satisfaction and, at the same time, more effectively manage the supply chain.

You can improve sales channel performance by providing dealers and distributors with product and promotional information online, while it‘s hot. You can reduce costs by automating everyday processes.

You can improve customer satisfaction by streamlining processes and improving productivity.

E-Commerce

E-Commerce Market Growing Rapidly

When we think of e-commerce, most of us think of business-to-consumer e-commerce, for example, Amazon.com.

However, the revenues that business-to-consumer companies are realizing are just the tip of the iceberg. The bulk of business on the Internet is actually business-to-business e-commerce which, as you can see by this chart, is skyrocketing.

In the last two years alone, the amount of business conducted over the Internet has gone from $1 billion to $30 billion, with an 80 to 20 business-to-business and

business-to-consumer mix. The projections for the next two years and beyond are even more dramatic. Internet commerce will likely reach from $350 to $400 billion in 2002. Some estimates are even more aggressive and place the size of Internet

commerce by 2002 at almost a trillion dollars.

And, while most of us generally think that only big businesses are conducting e-commerce, in fact over 97 percent of businesses conducting electronic commerce are companies with 499 employees or less, and 71 percent of those companies have fewer than 49 employees. As you can see, e-business has become a critical component of many businesses.

Typical E-Commerce Applications

Now let‘s take a look at what you can do with e-commerce.

A few examples of e-commerce are:

- Online catalog
- Order entry
- Configuration
- Pricing
- Order verification
- Credit authorization
- Invoicing
- Payment and receivables

For example, by allowing customers to do their own online ordering, long-distance phone and fax service can be reduced. In addition, fewer people are required to take customer orders and do timely order entry. Finally, online electronic order forms eliminate data entry and shipment errors.

Benefits of E-Commerce

E-commerce can expand and improve business. When we think of e-commerce, we immediately think of selling online. We quickly realize the benefits of increasing revenue by supplying customers and prospects with valuable information at any time and providing them the opportunity to purchase online.

We also recognize how online ordering can cut costs significantly by reducing the staff needed to man an 800 number or physically write up orders. Additionally, we understand that the Internet allows companies to extend their reach and sell into new markets without incurring global headcount costs.

What most of us don't realize is that these are only a few of the benefits of e-commerce. Let's take a look at two more compelling benefits:

- You can manage your inventory levels better. For example, an automobile manufacturer has its suppliers linked via the Web for online ordering. A supplier can place an order directly and can see immediately if the part is in stock or will need to be back ordered.
- By putting valuable information on your Web site, customers can get answers quickly to most of their questions at any time of the day, from any location. Customer satisfaction soars when customers can get critical information at any time, from any place. It allows them to do business when they want to, not during the traditional 8 to 5 business day.

Technology Implications of Internet Applications

There are real technology implications to these new Internet applications.

First is the need for increased bandwidth. Internets, intranets, and extranets have totally reversed the 80/20 rule, so that now 80% of the traffic is going over the backbone and only 20% is local. Everyone is clamoring for Fast Ethernet and even Gigabit Ethernet connections.

Second, the need for security is obvious once a company is connected to the Internet. You cannot read the paper without hearing about the latest hacking job.

Third, the Internet makes VPNs possible. And finally, there is EDI to enable electronic commerce. We'll look at each of these briefly.

Applications Need Bandwidth

The type of connection necessary depends on the bandwidth required:

- Individual users connecting to the Internet for e-mail or casual Web browsing can usually get by using a simple modem.
- Power users or small offices should consider ISDN or Frame Relay.
- Larger offices or businesses that expect high levels of Internet traffic should look into Frame Relay or leased lines.
- New technologies like asymmetric digital subscriber line (ADSL) and high-data-rate digital subscriber line (HDSL) will make high-speed Internet access even more affordable in the future.
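To make the differences between these options concrete, transfer time scales inversely with line rate. The sketch below computes nominal transfer times for a 1 MB file over a few representative links; the link names and rates are illustrative of the period, and real-world throughput would be lower due to protocol overhead.

```python
# Back-of-the-envelope transfer times for a 1 MB file over common
# access links. Rates are nominal line rates, ignoring overhead.

LINK_RATES_BPS = {
    "56k modem": 56_000,
    "ISDN BRI":  128_000,    # two bonded 64 kbps B channels
    "T1 line":   1_544_000,
}

def transfer_seconds(size_bytes, rate_bps):
    # bytes -> bits, divided by the line rate in bits per second
    return size_bytes * 8 / rate_bps

one_mb = 1_000_000
for name, rate in LINK_RATES_BPS.items():
    print(f"{name:>9}: {transfer_seconds(one_mb, rate):7.1f} s")
```

Running this shows the gulf between a modem (over two minutes per megabyte) and a T1 leased line (a few seconds), which is why high-traffic offices justify the more expensive circuits.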

Internet Security Solutions

One of the most vulnerable points in a customer's network is its connection to the Internet. To secure the communication between a corporate headquarters and the Internet, a customer needs all the security tools at its disposal. These tools include firewalls, Network Address Translation (NAT), encryption, token cards, and others.


Virtual Private Network

Virtual Private Networks (VPNs) can bring the power of the Internet to the local enterprise network. Here is where the distinction between Internet and intranet starts to blur. By building a VPN, an enterprise can use the "public" Internet as its own "private" WAN.

Because it is generally much less expensive to connect to the Internet than it is to lease data circuits, a VPN may allow companies to connect remote offices or employees when they could not ordinarily justify the cost of a regular WAN connection. Some of the technologies that make VPNs possible are:

- Tunneling
- Encryption
- Resource Reservation Protocol (RSVP)
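Of these, tunneling is the core idea: an entire private packet, headers and all, is wrapped as the payload of an outer packet addressed between two gateways across the public Internet. The sketch below is a simplified illustration of that encapsulate/decapsulate step, not any specific protocol such as GRE or IPsec; the field layout and addresses are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # source address
    dst: str        # destination address
    payload: bytes  # data carried by the packet

def tunnel_encapsulate(inner: Packet, gw_src: str, gw_dst: str) -> Packet:
    # The near gateway wraps the whole private packet (headers included)
    # inside a new packet addressed between the gateways' public IPs.
    wire = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    return Packet(src=gw_src, dst=gw_dst, payload=wire)

def tunnel_decapsulate(outer: Packet) -> Packet:
    # The far gateway unwraps the payload and recovers the private packet.
    src, dst, data = outer.payload.split(b"|", 2)
    return Packet(src.decode(), dst.decode(), data)

private = Packet("10.0.1.5", "10.0.2.9", b"hello")
outer = tunnel_encapsulate(private, "198.51.100.1", "203.0.113.7")
assert tunnel_decapsulate(outer) == private
```

Note that the private 10.x.x.x addresses never appear in the outer headers; in a real VPN the inner packet would also be encrypted before transmission.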

Electronic Data Interchange (EDI)

Electronic commerce can streamline regular business activities in new ways. Have any of you used a fax machine to send purchase orders to vendors?


A fax machine turns your PO into bits, transmits them across a network, and then turns them back into atoms on the other end. The disadvantage is that the atoms on the other end can only be read by a human being, who probably has to retype the data into another computer. EDI provides a way for many companies to reduce their operating costs by eliminating the atoms and keeping the bits.

What advantages does EDI provide your customer?

- Ensures accurate data transmission
- Provides fast customer response
- Enables automatic data transfer, with no need to re-key

For example, RJR Nabisco reduced PO processing costs from $70 to 93 cents by replacing its paper-based system with EDI.
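The key to "keeping the bits" is that the order travels as a structured, machine-readable message rather than a page image. The hypothetical sketch below builds such a message, loosely modeled on delimited EDI formats like ANSI X12 (segments ended by "~", elements separated by "*"); the exact segment layout here is simplified and illustrative, and a real exchange also needs envelope segments, agreed code lists, and a translator at each trading partner.

```python
# Build a simplified EDI-style purchase order message.
# Segment and element layout is illustrative, not a compliant X12 850.

def build_po(po_number, lines):
    segments = [f"BEG*00*NE*{po_number}"]         # beginning-of-PO segment
    for qty, unit, price, part in lines:
        # one line-item segment per ordered part
        segments.append(f"PO1**{qty}*{unit}*{price}***VP*{part}")
    segments.append(f"CTT*{len(lines)}")          # line-item count, for checking
    return "~".join(segments) + "~"

msg = build_po("4501", [(10, "EA", "9.95", "WIDGET-1"),
                        (2, "CS", "120.00", "GADGET-7")])
print(msg)
```

Because the receiving system parses these segments directly, the order never has to be retyped, which is where the accuracy and cost savings come from.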

Public key/private key encryption is implemented by programs such as PGP (Pretty Good Privacy). The program generates a key pair: a public key and a private key. Anyone can encrypt a file with your public key, but only you can decrypt it with your private key. To ensure security, an enterprise may distribute its public key to its customers, but only the enterprise, holding the private key, will be able to decrypt the messages sent to it.
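The asymmetry comes from the mathematics of the key pair. The toy RSA sketch below shows the mechanism with deliberately tiny demonstration primes; it is not cryptographically secure, and real systems use vetted tools such as PGP rather than hand-rolled code.

```python
# Toy RSA sketch of public-key/private-key encryption.
# Key sizes here are far too small for real security.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keypair(p, q, e=17):
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)        # d is the modular inverse of e mod phi
    assert g == 1, "e must be coprime with phi"
    return (e, n), (d % phi, n)   # (public key, private key)

def encrypt(public_key, m):
    e, n = public_key
    return pow(m, e, n)           # anyone holding the public key can do this

def decrypt(private_key, c):
    d, n = private_key
    return pow(c, d, n)           # only the private-key holder can do this

public, private = make_keypair(61, 53)   # demonstration primes only
cipher = encrypt(public, 42)
assert decrypt(private, cipher) == 42
```

A customer who has only the public key can produce `cipher`, but cannot reverse it; recovering the message requires the private exponent that never leaves the enterprise.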

- SUMMARY -

The Internet has created the capability for almost ANY computer system to communicate with any other. With Internet business solutions, companies can redefine how they share relevant information with the key constituents in their business—not just their internal functional groups, but also customers, partners, and suppliers.

This "ubiquitous connectivity" created by Internet business solutions creates tighter relationships across the company's "extended enterprise," and can be as much of a competitive advantage for the company as its core products and services. For example, by allowing customers and employees access to self-service tools, businesses can cost-effectively scale their customer support operations without having to add huge numbers of support personnel. Collaborating with suppliers on new product design can improve a company's competitive agility, accelerate time-to-market for its products, and lower development costs. And perhaps most importantly, integrating customers so that they have access to on-time, relevant information can increase their levels of satisfaction significantly.

Recapping:

- Internet access can take a business into new markets, decrease costs, and increase revenue through e-commerce applications. It can attract retail customers by providing them with company information and the ability to order online.
- Intranets can provide your employees with access to information and help compress business cycles.
- Extranets enable effective management of your supply chain and transform relationships with key partners, suppliers, and customers.
- Voice/data integration can save companies significant amounts of money and, at the same time, enable new applications.

- All of these applications reduce costs and increase revenue.