
Providing QoS in IP Networks

Future: a next-generation Internet with QoS guarantees.

Differentiated Services: differential guarantees.

Integrated Services: firm guarantees.

(Figure: a simple model for sharing and congestion studies.)

Principles for QoS Guarantees

Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link. Bursts of FTP traffic can congest the router and cause audio packets to be excessively delayed or lost, so we want to give priority to audio over FTP.

Principle 1: packet marking is needed for the router to distinguish among packets belonging to different classes of traffic; a new router policy is needed to treat the packets accordingly.

Principles for QoS Guarantees

What if applications misbehave (e.g., the audio source sends at a rate higher than it declared)?

Policing: force the source to adhere to certain criteria (drop or delay packets, e.g., with a leaky bucket).

Packet classification/marking and policing are done at the network edge (in the host or at an edge router).

Principle 2: provide protection (isolation) for one class from the others.

Principles for QoS Guarantees

Allocating fixed (non-sharable) bandwidth to each flow leads to inefficient use of bandwidth if a flow does not use its allocation.

Principle 3: while providing isolation among flows, it is desirable to use resources as efficiently as possible.

Principles for QoS Guarantees

Basic fact of life: the network cannot support traffic demands beyond link capacity.

Principle 4 (call admission): a flow declares its QoS requirements, and the network either accepts the flow or blocks it (if it cannot provide the required QoS).

Scheduling Mechanisms

Scheduling: choose the next packet to send on the link.

FIFO (first-in, first-out) scheduling: send packets in order of arrival to the queue.

Drawbacks of FIFO scheduling:

No special treatment is given to packets from flows that are of higher priority or are more delay-sensitive.

Flows with larger packets get better service.

A greedy flow will adversely affect other flows.

Scheduling Mechanisms

Priority scheduling: multiple priority classes, each with its own queue.

A packet’s priority class may depend on an explicit marking or other header info, e.g., source/destination IP address, source/destination port number, etc.

Transmit a packet from the highest-priority class that has a nonempty queue.
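A minimal sketch of strict-priority scheduling as described above, assuming one FIFO queue per class kept in a dict keyed by priority (the class numbering and function names are illustrative, not from the slides):

```python
from collections import deque

# One FIFO queue per priority class; lower number = higher priority (assumed convention).
queues = {0: deque(), 1: deque(), 2: deque()}

def enqueue(packet, priority_class):
    queues[priority_class].append(packet)

def select_next_packet():
    """Transmit a packet from the highest-priority class with a nonempty queue."""
    for priority in sorted(queues):
        if queues[priority]:
            return queues[priority].popleft()
    return None  # the link idles only when every queue is empty
```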

Scheduling Mechanisms

Round-robin scheduling: multiple classes, each with its own queue; cyclically scan the class queues, serving one packet from each class (if available). There is no advantage in being greedy.

Work-conserving queuing discipline: never allow the link to remain idle whenever there are packets queued for transmission.
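A minimal sketch of a work-conserving round-robin scheduler over per-class queues (the class names are assumptions for illustration):

```python
from collections import deque
from itertools import cycle

classes = ["voice", "video", "data"]        # illustrative class names
queues = {c: deque() for c in classes}
scan_order = cycle(classes)                 # cyclically scan the class queues

def select_next_packet():
    """Serve one packet from the next class that has one queued. Empty classes are
    skipped, so the link never idles while any packet waits (work-conserving)."""
    for _ in range(len(classes)):
        cls = next(scan_order)
        if queues[cls]:
            return queues[cls].popleft()
    return None
```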

Scheduling Mechanisms

Weighted Fair Queuing (WFQ): approximate fluid fair queuing (FFQ)

FFQ: a separate FIFO queue for each connection sharing the same link. During any time interval when there are N nonempty queues, the server serves the N packets at the head of the queues simultaneously.

At any time t, the service rate for a nonempty queue i is

r_i(t) = ( w_i / Σ_{j ∈ B(t)} w_j ) · C

where w_i is the weight associated with queue i, B(t) is the set of nonempty queues at time t, and C is the link speed.

FFQ allows different connections to have different service shares.

WFQ

FFQ is impractical because, on a real link, only one connection can receive service at a time and an entire packet must be served before another packet can be served.

WFQ: When the server is ready to transmit the next packet at time t, it picks the first packet that would complete service in the corresponding FFQ system if no additional packets were to arrive after time t
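A simplified WFQ sketch: each arriving packet gets a virtual finish tag approximating when it would complete service in FFQ, and the scheduler always transmits the packet with the smallest tag. The virtual-clock handling here is deliberately crude (real WFQ tracks the FFQ virtual time exactly); the queue names and weights are assumptions:

```python
import heapq
from itertools import count

weights = {"A": 2.0, "B": 1.0}            # illustrative per-queue weights
last_finish = {q: 0.0 for q in weights}   # finish tag of the previous packet in each queue
virtual_time = 0.0                        # crude stand-in for the FFQ virtual time
pending = []                              # min-heap of (finish_tag, seq, packet)
seq = count()                             # tie-breaker so the heap never compares packets

def on_arrival(queue, packet, length_bits):
    # A packet starts no earlier than the previous packet of its queue (or "now"),
    # then receives service at the queue's weighted share.
    start = max(last_finish[queue], virtual_time)
    finish = start + length_bits / weights[queue]
    last_finish[queue] = finish
    heapq.heappush(pending, (finish, next(seq), packet))

def select_next_packet():
    """Pick the packet that would finish first in the idealized FFQ system."""
    global virtual_time
    if not pending:
        return None
    finish, _, packet = heapq.heappop(pending)
    virtual_time = finish                 # crude advance of the virtual clock
    return packet
```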

Policing Mechanisms

Goal: regulate the rate at which a flow is allowed to inject packets into the network

Three policing criteria:

(Long-term) average rate: how many packets can be sent per time interval. Crucial question: what is the interval length?

Peak Rate: max. number of packets that can be sent over a short period of time.

Burst Size: max. number of packets that can be sent consecutively (with no intervening idle)

Policing Mechanisms

Token Bucket: limit input to specified Burst Size and Average Rate.

The bucket can hold b tokens; tokens are generated at a rate of r tokens/sec.

A token is added to the bucket if the bucket is not full, and is discarded otherwise.

A packet must remove a token from the token bucket before it is transmitted into the network.
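A minimal token-bucket policer sketch, assuming one token per packet (the class name and `allow()` method are illustrative):

```python
import time

class TokenBucket:
    """Bucket of depth b tokens, refilled at r tokens/sec; a packet is admitted
    only if it can remove a token."""
    def __init__(self, rate_r, depth_b):
        self.rate = rate_r            # r tokens generated per second
        self.depth = depth_b          # bucket holds at most b tokens
        self.tokens = depth_b         # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Add tokens generated since the last check; overflow is discarded.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # the packet removes one token and is admitted
            return True
        return False                  # out of profile: drop (police) or delay (shape)
```

Over any interval of length t this admits at most r·t + b packets, which is exactly the bound stated on the next slide.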

Policing Mechanisms (more)

For a leaky-bucket-policed flow:

The maximum burst size is b packets.

Over an interval of length t, the maximum number of packets admitted is (r·t + b); r limits the long-term average rate.

Leaky bucket + WFQ = a guaranteed upper bound on end-to-end delay and delay jitter, i.e., a QoS guarantee! A connection’s guaranteed rate must be greater than or equal to the connection’s average rate.

IETF Integrated Services (RFC 1633)

Architecture for providing QoS guarantees to individual application sessions

Call setup: A session requiring QoS guarantees must first reserve sufficient resources at each router on its path before transmitting data

An arriving session must: declare its QoS requirement using an R-spec; characterize the traffic it will send into the network using a T-spec.

A signaling protocol is needed to carry the R-spec and T-spec to the routers: RSVP.

Router must determine whether or not it can admit the call

Router must maintain per-flow state (allocated resources, QoS requests)

Router components

Classifier: perform a Multi-Field (MF) classification and put the packet in a specific queue based on the classification result.

Packet scheduler: schedule the packet accordingly to meet its QoS requirements.

RSVP

RSVP: a signaling protocol for applications to reserve resources (link bandwidth and buffer space).

Provides reservations for bandwidth in multicast trees.

Receiver-oriented.

Can reserve resources for heterogeneous receivers.

The sender sends a PATH message to the receiver specifying the characteristics of the traffic and the QoS requirement.

The receiver responds with a RESV message to request resources for the flow.

An intermediate router can accept or reject the request in the RESV message.

A router may merge the reservation messages arriving from downstream.
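A minimal sketch of the downstream-merging step: on a shared multicast branch, a router can cover all of its downstream RESV requests by forwarding a single reservation for the largest requested bandwidth (the per-interface dictionary is an illustrative simplification, not an RSVP message format):

```python
# Reservation requests received from downstream interfaces for one session,
# reduced here to a single requested bandwidth per interface (bits/sec).
downstream_resv = {
    "if1": 1_000_000,   # receiver behind if1 asked for 1 Mbps
    "if2": 2_000_000,   # receiver behind if2 asked for 2 Mbps
}

def merged_reservation(requests):
    """Forward upstream one RESV that satisfies every downstream receiver:
    reserving the maximum requested amount on the shared branch suffices."""
    return max(requests.values(), default=0)

print(merged_reservation(downstream_resv))   # -> 2000000
```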

Intserv Service Models

Guaranteed service:

Provides firm bounds on end-to-end datagram queuing delays.

Provides a bandwidth guarantee.

Mechanism: leaky-bucket-policed source + WFQ.

(Figure: arriving traffic is policed by a token bucket with token rate r and bucket size b, then served by WFQ at a per-flow guaranteed rate R; the maximum queuing delay is D_max = b/R.)
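The worst case occurs when a full bucket of b bits arrives at once and drains through WFQ at the guaranteed rate R (treating b in bits and R in bits/sec). A small worked instance of the bound, with numbers assumed purely for illustration:

```latex
D_{\max} = \frac{b}{R}
\qquad\text{e.g.}\quad
b = 100\ \text{kbit},\; R = 1\ \text{Mbit/s}
\;\Longrightarrow\;
D_{\max} = \frac{100\ \text{kbit}}{1\ \text{Mbit/s}} = 0.1\ \text{s} = 100\ \text{ms}.
```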

Controlled-load service:

Provides a quality of service closely approximating the QoS that the same flow would receive from an unloaded network element.

A very high percentage of transmitted packets will be successfully delivered to the destination.

A very high percentage of transmitted packets will experience a queuing delay close to 0.

IETF Differentiated Services

Concerns with Intserv:

Scalability: routers need to process resource reservations and maintain state for each flow.

Flexible service models: Intserv has only two classes; we also want “qualitative” service classes.

Diffserv approach:

Goal: provide the ability to handle different classes of traffic in different ways.

Scalable: simple functions in the network core, relatively complex functions at the network edge.

Flexible: don’t define specific service classes; provide functional components to build service classes.

Diffserv Architecture

Edge router:
- Packets are marked.
- The mark of a packet identifies the class of traffic to which the packet belongs.

Core router:
- Packets are forwarded to the next hop according to the per-hop behavior (PHB) associated with that packet’s class.
- The PHB determines buffering and scheduling at the routers.
- Routers need not maintain state for individual flows.

Edge-Router Packet Marking

Class-based marking: packets classified based on packet header fields, packets of different classes marked differently

Intra-class marking: packet marking based on a per-flow profile; the conforming portion of the flow is marked differently than the non-conforming portion. Traffic profile: pre-negotiated rate A, bucket size B.

Out-of-profile packets might be shaped (i.e. delayed) or dropped

Packet Marking

A packet is marked in the Type of Service (TOS) octet in IPv4 and the Traffic Class octet in IPv6.

6 bits are used for the Differentiated Services Code Point (DSCP) and determine the PHB that the packet will receive.

2 bits are currently unused.

The DS field of a packet can be marked by end hosts or by the leaf router.
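A minimal sketch of how a 6-bit DSCP is placed in the DS field (the former TOS octet) and applied to outgoing traffic; the EF code point and the setsockopt call are shown only as an illustration and are platform-dependent:

```python
import socket

DSCP_EF = 0b101110            # Expedited Forwarding code point (decimal 46), used as an example

def dscp_to_ds_field(dscp):
    """The DSCP occupies the 6 high-order bits of the DS field; the remaining
    2 low-order bits are left clear here."""
    return (dscp & 0x3F) << 2

# Example: ask the kernel to mark packets sent on this UDP socket (works on Linux).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_ds_field(DSCP_EF))
```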

Forwarding (PHB)

A PHB is defined as “a description of externally observable forwarding behavior of a Diffserv node applied to a particular Diffserv behavior aggregate”.

A PHB can result in different classes of traffic receiving different performance.

A PHB does not specify what mechanisms to use to ensure the required performance behavior.

Differences in performance must be observable and hence measurable.

Examples:

Class A gets x% of outgoing link bandwidth over time intervals of a specified length

Class A packets leave first before packets from class B

Service Level Agreements

A customer must have a Service Level Agreement (SLA) with its ISP.

An SLA specifies the service classes supported and the amount of traffic allowed in each class.

Static SLA: negotiated on a regular basis (e.g., monthly or yearly).

Dynamic SLA: customers must use a signaling protocol (e.g., RSVP) to request services on demand.

The classification, policing and shaping rules at the ingress routers are derived from the SLAs.

The amount of buffering space needed for these operations is also derived from the SLAs.

When a packet enters one domain from another domain, its DS field may be re-marked, as determined by the SLA between the two domains.

Example Diffserv Services

Premium Service: for applications requiring low-delay and low-jitter service.

Assured Service: for applications requiring better reliability than Best Effort Service

Olympic Service: provide three tiers of services: Gold, Silver and Bronze, with decreasing quality.

An End-to-End Service Architecture

Provide assured service, premium service, and best effort service

Assured service: provide reliable service even in time of network congestion

The SLA specifies the amount of bandwidth allocated for the customer.

Customers decide how their applications share the bandwidth.

The SLA is usually static.

RFC 2638

Implementation of Assured Service: classification and policing are done at the ingress routers of the ISP networks.

The token bucket depth is set by the profile's burst size. When a token is present, a packet is considered in profile and has its A-bit in the DS field set to one; otherwise the packet is considered out of profile and has its A-bit set to 0.

If the traffic does not exceed the bit rate specified by the SLA, its packets all remain in profile. All packets are put into an Assured Queue (AQ).

For Premium service, the token bucket depth must be limited to the equivalent of only one or two packets. For a Premium-configured marker, arriving packets that see a token present have their P-bits set and are forwarded; when no token is present, Premium-flow packets are held until a token arrives.

The AQ is managed by a queue management scheme called RED with In and Out, or RIO.

We designate the forwarding-path objects that test flows against their usage profiles "Profile Meters".

Border routers will require Profile Meters at their input interfaces. The bilateral agreement between adjacent administrative domains must specify a peak rate on all P traffic and a rate and burst for A traffic (and possibly a start time and duration). A Profile Meter is required at the ingress of a trust region to ensure that differentiated service packet flows are in compliance with their agreed-upon rates. Non-compliant packets of Premium flows are discarded, while non-compliant packets of Assured flows have their A-bits reset. For example, in figure 1, if the ISP has agreed to supply Company A with r bytes/sec of Premium service, P-bit marked packets that enter the ISP through the link from Company A will be dropped if they exceed r. If instead the service in figure 1 were Assured service, the packets would simply be unmarked and forwarded as best effort.

The simplest border router input interface is a Profile Meter constructed from a token bucket configured with the contracted rate across that ingress link (see figure 5). Each type, Premium or Assured, and each interface must have its own profile meter corresponding to a particular class across a particular boundary.
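A minimal Profile Meter sketch for a border-router input interface, reusing the TokenBucket policer sketched earlier and following the behavior described above: non-compliant Premium packets are discarded, non-compliant Assured packets have their A-bit reset (representing a packet as a dict is purely an assumption):

```python
def profile_meter(packet, bucket, service):
    """Token-bucket Profile Meter at a trust-region ingress.
    `bucket` is a TokenBucket configured with the contracted rate and burst;
    `packet` is assumed to carry 'A_bit' / 'P_bit' flags; `service` is
    either "premium" or "assured"."""
    if bucket.allow():
        return packet            # in profile: forward unchanged
    if service == "premium":
        return None              # non-compliant Premium traffic is dropped
    packet["A_bit"] = 0          # non-compliant Assured traffic is unmarked...
    return packet                # ...and forwarded as best effort
```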

When an allocation is desired for a particular flow, a request is sent to the BB. Requests include a service type, a target rate, a maximum burst, and the time period when service is required. A BB verifies there exists unallocated bandwidth sufficient to meet the request. If a request passes these tests, the available bandwidth is reduced by the requested amount and the flow specification is recorded.
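A minimal sketch of the bandwidth-broker bookkeeping just described: check that unallocated bandwidth covers the requested rate, reduce it, and record the flow specification (field names are illustrative):

```python
class BandwidthBroker:
    """Tracks unallocated bandwidth for a domain and records granted flow specs."""
    def __init__(self, capacity_bps):
        self.available = capacity_bps          # bandwidth not yet allocated
        self.allocations = []                  # recorded flow specifications

    def request(self, service_type, rate_bps, max_burst, start, duration):
        # Admit only if sufficient unallocated bandwidth exists for the target rate.
        if rate_bps > self.available:
            return False                       # request rejected
        self.available -= rate_bps             # reduce the available bandwidth
        self.allocations.append({"service": service_type, "rate": rate_bps,
                                 "burst": max_burst, "start": start,
                                 "duration": duration})
        return True                            # request granted and recorded
```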

RED and RIO

RED (random early detection): discard packets before buffer space is exhausted.

The router maintains a running average of the queue length for each output link.

When the average queue length of an output link exceeds a threshold, a packet is picked at random from the queue and dropped.

The TCP congestion control mechanisms at different end hosts will then reduce their send rates at different times.

RIO: two thresholds for each queue.

When the queue size L is below the first threshold, no packets are dropped (better resource utilization).

When L is between the two thresholds, only out-of-profile (“out”) packets are randomly dropped.

When L exceeds the second threshold, both in- and out-of-profile packets are randomly dropped, but out packets are dropped more aggressively.
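A simplified sketch of the RIO drop decision described above; it uses the instantaneous queue length and fixed drop probabilities, whereas a real RIO implementation works on averaged queue lengths with separate parameter sets for in and out packets. The thresholds and probabilities are assumptions:

```python
import random

MIN_TH, MAX_TH = 50, 150     # the two queue-length thresholds (in packets), assumed values
P_OUT, P_IN = 0.10, 0.02     # drop probabilities; "out" packets are dropped more aggressively

def rio_drop(queue_len, in_profile):
    """Return True if an arriving packet should be dropped."""
    if queue_len < MIN_TH:
        return False                                          # below first threshold: never drop
    if queue_len < MAX_TH:
        return (not in_profile) and random.random() < P_OUT   # only "out" packets dropped
    # beyond the second threshold: drop both, "out" more aggressively
    return random.random() < (P_IN if in_profile else P_OUT)
```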

Premium Service

Provide low-delay and low-jitter service for customers that generate traffic at a fixed peak bit rate.

The SLA specifies a desired peak bit rate for a specific flow or an aggregation of flows.

The customer is responsible for not exceeding the peak rate; excess traffic will be dropped.

The ISP guarantees that the contracted bandwidth will be available when traffic is sent.

Premium Service is suitable for Internet telephony and video conferencing.

It is desirable for ISPs to support both static SLAs and dynamic SLAs. Admission control is needed for dynamic SLAs.

Implementation of Premium Service

At the customer side, some entity will decide which application flow can use Premium Service.

The leaf routers directly connected to the senders will do MF classifications and shape the traffic.

After the shaping, the P-bits in the DS field of all packets are set for the flow that is allowed to use Premium Service.

The burst parameter is expected to be small, in the one- or two-packet range. First-hop routers (or other edge devices) set the Premium bit of packets that match a Premium service specification, and perform traffic shaping on the flow that smooths all traffic bursts before they enter the network.

The exit routers of the customer domain may need to reshape the traffic to make sure that the traffic does not exceed the peak rate specified by the SLA.

The ingress routers at the provider will police the traffic (excess traffic is dropped)

All packets with the P-bit set enter a Premium Queue (PQ). Packets in the PQ will be sent before packets in the AQ.

By admission control, the amount of Premium traffic can be limited to a small percentage, say 10%, of the bandwidth of the input links.

Excess packets are dropped at the ingress routers of the networks. Non-conformant flows cannot impact the performance of conformant flows.

Because Premium packets are forwarded before packets of other classes, they can potentially use 100% of the bandwidth of the output links.

If Premium traffic is distributed evenly among the links, these three factors should guarantee that the service rate of the PQ is much higher than the arrival rate. Therefore, arriving Premium packets should find the PQ empty or very short most of the time. The delay or jitter experienced by Premium packets should be very low. However, Premium Service provides no quantified guarantee on the delay or jitter bound.

Uneven distribution of Premium traffic may cause a problem for Premium Service: aggregation of Premium traffic in the core may invalidate the assumption that the arrival rate of Premium traffic is far below the service rate. Traffic engineering / constraint-based routing must be used to avoid such congestion caused by uneven traffic distribution.

By limiting the total amount of bandwidth requested by Premium traffic, the network administrators can guarantee that Premium traffic will not starve the Assured and Best Effort traffic. Another scheme is to use Weighted Fair Queuing (WFQ) [22] between the PQ and the AQ.

Service Allocation in Customer Domains

Service allocation: decide how the hosts in a customer domain share the services specified by the SLA.

A bandwidth broker (BB) is used to allocate resources in a customer domain.

Before a host starts sending packets, it may decide the service class for the packets by itself or it may consult a BB for a service class.

The host may mark the packets by itself or may send the packets unmarked. If the host sends the packets unmarked, the BB must use some protocol (e.g., RSVP) to set the classification, marking and shaping rules at the leaf router directly connected to the sender, so that the leaf router knows how to mark the sender’s packets.

If the SLA between a customer and its ISP is dynamic, the BB in the customer domain must also use a signaling protocol to request resources on demand from its ISP.

Resource Allocations in ISP Domains

Given the SLAs, the ISP must decide how to configure the boundary routers so that they know how to handle the incoming traffic.

For static SLAs, boundary routers can be manually configured with the classification, policing and shaping rules. Resources are therefore statically allocated for each customer.

For a dynamic SLA, the BB in the customer domain uses RSVP to request resources from its ISP. At the ISP side, the admission control decisions can be made in a distributed manner by the boundary routers or by a Bandwidth Broker. If the boundary routers are directly involved in the signaling process, they are configured with the corresponding classification, policing and shaping rules when they grant a request. If a BB is involved rather than the boundary routers, then the BB must configure the boundary routers when it grants a request.

Examples of end-to-end service delivery


Intserv and Diffserv Retrospective

To provide end-to-end Intserv or Diffserv service, all the ISPs between the end systems must provide the service, cooperate, and make settlements.

It is complex and costly to police and shape traffic and to bill the service by volume.

There is no perceived difference between a best-effort service and an Intserv/Diffserv service if the network has only moderate load.

WFQ

A source conforms to the (σ, ρ) model if, during any interval of length u, the number of bits it sends in that interval is less than σ + ρu. In the (σ, ρ) model, σ and ρ can be viewed as the maximum burst size and the long-term bounding rate of the source, respectively.

WFQ

If a connection satisfies the traffic constraint and is allocated the amount of buffer space listed in the fifth column of the referenced table, it can be guaranteed the end-to-end delay bound and delay-jitter bound listed in the third and fourth columns, respectively. Here C_i is the link speed of the i-th switch on the path traversed by the connection, r_j is the guaranteed rate for the connection, L_max is the largest packet size, and n is the number of hops traversed by the connection.
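The table itself is not reproduced in this transcript. For orientation only, the standard Parekh–Gallager bound for WFQ (PGPS), quoted here from the literature rather than from the missing table, bounds the end-to-end queuing delay of a (σ, ρ)-constrained connection with guaranteed rate r_j ≥ ρ as:

```latex
D \;\le\; \frac{\sigma}{r_j} \;+\; \frac{(n-1)\,L_{\max}}{r_j} \;+\; \sum_{i=1}^{n} \frac{L_{\max}}{C_i}
```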