Load Sensitive Routing Protocol for Providing QoS in Best Effort Network
Improving QOS in IP Networks
Thus far: “making the best of best effort”. Future: next generation Internet with QoS guarantees
RSVP: signaling for resource reservations
Differentiated Services: differential guarantees
Integrated Services: firm guarantees
simple model for sharing and congestion studies:
Principles for QOS Guarantees
Example: a 1 Mbps IP phone and an FTP transfer share a 1.5 Mbps link. Bursts of FTP traffic can congest the router and cause audio loss; we want to give priority to audio over FTP
packet marking needed for router to distinguish between different classes; and new router policy to treat packets accordingly
Principle 1
Principles for QOS Guarantees (more)
What if applications misbehave (audio sends at a higher rate than declared)? Policing: force source adherence to bandwidth allocations
marking and policing at network edge: similar to ATM UNI (User Network Interface)
provide protection (isolation) for one class from others
Principle 2
Principles for QOS Guarantees (more)
Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn’t use its allocation
While providing isolation, it is desirable to use resources as efficiently as possible
Principle 3
Principles for QOS Guarantees (more)
Basic fact of life: cannot support traffic demands beyond link capacity
Call Admission: flow declares its needs, network may block call (e.g., busy signal) if it cannot meet needs
Principle 4
Summary of QoS Principles
Let’s next look at mechanisms for achieving this ….
Scheduling And Policing Mechanisms
scheduling: choose next packet to send on link; allocate link capacity and output queue buffers to each connection (or connections aggregated into classes)
FIFO (first in first out) scheduling: send in order of arrival to the queue. Discard policy: if a packet arrives to a full queue, which to discard?
• Tail drop: drop the arriving packet
• Priority: drop/remove on a priority basis
• Random: drop/remove randomly
Need for a Scheduling Discipline
Why do we need a non-trivial scheduling discipline?
Per-connection delay, bandwidth, and loss are determined by the scheduling discipline:
the network element can allocate different mean delays to different connections by its choice of service order
it can allocate different bandwidths to connections by serving at least a certain number of packets from a particular connection in a given time interval
finally, it can allocate different loss rates to connections by giving them more or fewer buffers
FIFO Scheduling
Disadvantage with strict FIFO scheduling is that the scheduler cannot differentiate among connections -- it cannot explicitly allocate some connections lower mean delays than others
A more sophisticated scheduling discipline can achieve this objective (but at a cost)
The conservation law: “the sum of the mean queueing delays received by the set of multiplexed connections, weighted by their fair share of the link’s load, is independent of the scheduling discipline”
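One common symbolic statement of this result (Kleinrock's conservation law, which holds for work-conserving schedulers under the usual queueing assumptions; the symbols below are standard notation, not taken from the slide):

```latex
% Kleinrock's conservation law for a work-conserving scheduler:
%   the load-weighted sum of mean queueing delays is invariant.
\sum_{i=1}^{N} \rho_i \, \bar{q}_i = \text{constant},
\qquad \rho_i = \lambda_i \, \bar{x}_i
```

where, for connection i, lambda_i is the mean packet arrival rate, x-bar_i the mean per-packet service time, rho_i its share of the link load, and q-bar_i its mean queueing delay. The practical consequence: any discipline that lowers one connection's mean delay must raise another's.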
Requirements
A scheduling discipline must satisfy four requirements:
Ease of implementation -- pick a packet every few microseconds; a scheduler that takes O(1) and not O(N) time
Fairness and protection (for best-effort connections) -- FIFO does not offer any protection because a misbehaving connection can increase the mean delay of all other connections. Round-robin scheduling?
Performance bounds -- deterministic or statistical; common performance parameters: bandwidth, delay (worst-case, average), delay-jitter, loss
Ease and efficiency of admission control -- to decide, given the current set of connections and the descriptor for a new connection, whether it is possible to meet the new connection’s performance bounds without jeopardizing the performance of existing connections
Schedulable Region
Designing a scheduling discipline
Four principal degrees of freedom:
the number of priority levels
whether each level is work-conserving or non-work-conserving
the degree of aggregation of connections within a level
service order within a level
Each feature comes at some cost:
for a small LAN switch, a single-priority FCFS scheduler or at most a 2-priority scheduler may be sufficient
for a heavily loaded wide-area public switch with possibly noncooperative users, a more sophisticated scheduling discipline may be required.
Work conserving and non-work conserving disciplines
A work-conserving scheduler is idle only when there is no packet awaiting service
A non-work-conserving scheduler may be idle even if it has packets to serve:
makes the traffic arriving at downstream switches more predictable
reduces the buffer size necessary at output queues and the delay jitter experienced by a connection
allows the switch to send a packet only when the packet is eligible
for example, if the (k+1)th packet on connection A becomes eligible for service only i seconds after the service of the kth packet, the downstream switch receives packets on A no faster than one every i seconds.
Eligibility times
By choosing eligibility times carefully, the output from a switch can be made more predictable (so that bursts won’t build up in the network)
Two approaches: rate-jitter and delay-jitter.
Rate-jitter: peak-rate guarantee for a connection
E(1) = A(1); E(k+1) = max(E(k) + Xmin, A(k+1)), where Xmin is the time taken to serve a fixed-size packet at the peak rate
Delay-jitter: at every switch, the input arrival pattern is fully reconstructed
E(0,k) = A(0,k); E(i+1,k) = E(i,k) + D + L, where D is the delay bound at the previous switch and L is the largest possible delay on the link between switch i and i+1
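The two eligibility-time rules can be sketched directly from the recurrences above (function names here are illustrative, not from any standard library):

```python
# Sketch of the rate-jitter and delay-jitter eligibility rules.

def rate_jitter_eligibility(arrivals, xmin):
    """E(1) = A(1); E(k+1) = max(E(k) + Xmin, A(k+1)).

    Guarantees the connection's packets leave no faster than one every
    `xmin` seconds (a peak-rate guarantee)."""
    eligible = [arrivals[0]]
    for a in arrivals[1:]:
        eligible.append(max(eligible[-1] + xmin, a))
    return eligible

def delay_jitter_eligibility(prev_eligible, d, l):
    """E(i+1, k) = E(i, k) + D + L.

    Packet k becomes eligible at switch i+1 a fixed D + L after it became
    eligible at switch i, so the switch-0 arrival pattern is reconstructed
    at every downstream switch."""
    return [e + d + l for e in prev_eligible]

# A burst of three back-to-back arrivals is smoothed to one per 2 s:
print(rate_jitter_eligibility([0.0, 0.1, 0.2], xmin=2.0))  # [0.0, 2.0, 4.0]
```

Note how the rate-jitter regulator absorbs the burst: each packet waits for the later of its own arrival and Xmin after its predecessor's eligibility time.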
Pros and Cons
Reduces delay jitter. Con: we can remove jitter at the endpoints with an elasticity buffer. Pro: reduces (expensive) buffers at the switches.
Increases mean delay -- a problem? Pro: for playback applications, which delay packets until the delay-jitter bound, increasing mean delay does not affect the perceived performance.
Wasted bandwidth -- a problem? Pro: the scheduler can serve best-effort packets when there are no eligible packets to serve.
Needs accurate source descriptors -- no rebuttal from the non-work conserving camp
Priority Scheduling
transmit the highest-priority queued packet
multiple classes, with different priorities
class may depend on marking or other header info, e.g. IP source/dest addresses, port numbers, etc.
Priority Scheduling
The scheduler serves a packet from priority level k only if there are no packets awaiting service in levels k+1, k+2, …, n
at least 3 levels of priority in an integrated services network?
Starvation? Appropriate admission control and policing to restrict service rates from all but the lowest priority level
Simple implementation
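A strict-priority scheduler really is simple to implement; the sketch below (a hypothetical structure, not a real router's code) uses a heap, with lower numbers meaning higher priority -- the reverse of the slide's k+1..n numbering -- and a sequence counter to keep FIFO order within a level:

```python
import heapq
import itertools

class PriorityScheduler:
    """Minimal strict-priority scheduler sketch.

    Lower number = higher priority; FIFO within a level is enforced by a
    monotonically increasing sequence counter used as a tie-breaker."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        # A packet at some level is served only if every higher-priority
        # level is empty -- the heap order gives us this for free.
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

s = PriorityScheduler()
s.enqueue("ftp-1", priority=2)
s.enqueue("voice-1", priority=0)
s.enqueue("voice-2", priority=0)
print(s.dequeue(), s.dequeue(), s.dequeue())  # voice-1 voice-2 ftp-1
```

The dequeue loop also makes the starvation problem concrete: "ftp-1" is served only once both voice packets are gone, which is why admission control must bound the higher levels' rates.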
Round Robin Scheduling
multiple classes
cyclically scan class queues, serving one packet from each class (if available)
provides protection against misbehaving sources (also guarantees a minimum bandwidth to every connection)
Max-Min Fair Share
Fair Resource allocation to best-effort connections?
Fair share allocates a user with a “small” demand what it wants, and evenly distributes unused resources among the “big” users.
Maximize the minimum share of a source whose demand is not fully satisfied:
resources are allocated in order of increasing demand
no source gets a resource share larger than its demand
sources with unsatisfied demands get an equal share of the remaining resource
A Generalized Processor Sharing (GPS) server will implement max-min fair share
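The allocation rules above translate into a short progressive-filling computation; this is a textbook sketch of the result a GPS server converges to, not router code:

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation by progressive filling.

    Sources are served in order of increasing demand; no source gets more
    than its demand, and sources whose demand cannot be met split what is
    left equally."""
    alloc = [0.0] * len(demands)
    remaining = float(capacity)
    # Visit sources in order of increasing demand.
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    for served, i in enumerate(order):
        # Equal split of what's left among sources not yet allocated.
        share = remaining / (len(demands) - served)
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
    return alloc

# Demands 2, 2, 4, 5 on a link of capacity 10: the two small demands are
# fully satisfied, and the two large ones split the remaining 6 equally.
print(max_min_fair(10, [2.0, 2.0, 4.0, 5.0]))  # [2.0, 2.0, 3.0, 3.0]
```

The minimum allocation (3.0) is as large as it can be: giving either big source more would push the other below 3.0, which is exactly the max-min property.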
Weighted Fair Queueing
generalized Round Robin (offers differential service to each connection/class)
each class gets weighted amount of service in each cycle
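For the special case where every flow is continuously backlogged, the WFQ service order can be sketched by stamping each packet with a finish number F = F_prev + length/weight and serving in increasing finish-number order (a simplification: real WFQ also tracks a virtual time to handle flows that go idle and become backlogged again):

```python
def wfq_order(flows, weights):
    """Toy WFQ sketch for the all-backlogged case.

    `flows` is a list of per-flow packet-length lists; each packet gets a
    finish number F = F_prev + length / weight, and packets are served in
    increasing finish-number order (ties broken by flow index)."""
    tagged = []
    for i, pkts in enumerate(flows):
        f = 0.0
        for length in pkts:
            f += length / weights[i]
            tagged.append((f, i, length))
    tagged.sort()
    return [i for _, i, _ in tagged]

# Flow 0 has weight 2, flow 1 has weight 1; equal-sized packets.
# Flow 0 receives roughly twice the service per cycle:
print(wfq_order([[1, 1, 1], [1, 1, 1]], [2, 1]))  # [0, 0, 1, 0, 1, 1]
```

The weights show up directly in the output: in any prefix of the schedule, flow 0 has been served about twice as often as flow 1.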
Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly used criteria:
(Long-term) Average Rate: how many packets can be sent per unit time (in the long run)
crucial question: what is the interval length? 100 packets per sec and 6000 packets per min have the same average!
Peak Rate: e.g., 6000 pkts per min (ppm) avg.; 1500 ppm peak rate
(Max.) Burst Size: max number of packets sent consecutively (with no intervening idle)
Traffic Regulators
Leaky bucket controllers Token bucket controllers
Policing Mechanisms
Token Bucket: limit input to a specified Burst Size and Average Rate.
bucket can hold b tokens
tokens generated at rate r tokens/sec unless bucket full
over an interval of length t: number of packets admitted is less than or equal to (r·t + b)
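A token-bucket policer is a few lines of code; this is an illustrative sketch (class and method names are made up for the example), with time passed in explicitly to keep it deterministic:

```python
class TokenBucket:
    """Token-bucket policer sketch: the bucket holds at most b tokens and
    refills at r tokens/sec; a packet is admitted only if a token is
    available. Over any interval t, at most r*t + b packets pass."""

    def __init__(self, rate, bucket_size):
        self.rate = rate              # r, tokens per second
        self.capacity = bucket_size   # b, maximum tokens in the bucket
        self.tokens = bucket_size     # start with a full bucket
        self.last = 0.0               # time of the last admission check

    def admit(self, now):
        # Accrue tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # non-conforming: drop, mark, or delay the packet

bucket = TokenBucket(rate=1.0, bucket_size=2)
# A burst of 3 packets at t=0: only b=2 are admitted immediately.
print([bucket.admit(0.0) for _ in range(3)])  # [True, True, False]
print(bucket.admit(1.0))                      # True: one token accrued by t=1
```

The bucket size b bounds the burst and the rate r bounds the long-term average, matching the (r·t + b) envelope above.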
Policing Mechanisms (more)
token bucket, WFQ combine to provide guaranteed upper bound on delay, i.e., QoS guarantee!
[Figure: arriving traffic shaped by a token bucket (token rate r, bucket size b) feeding a WFQ scheduler with per-flow rate R; worst-case delay D_max = b/R]
Example: with b = 8000 bits and R = 1 Mbps, the worst-case queueing delay is D_max = 8000/10^6 s = 8 ms.
IETF Integrated Services
architecture for providing QOS guarantees in IP networks for individual application sessions
resource reservation: routers maintain state info (a la VC) of allocated resources, QoS req’s
admit/deny new call setup requests:
Question: can newly arriving flow be admitted with performance guarantees while not violating QoS guarantees made to already admitted flows?
Intserv: QoS guarantee scenario
Resource reservation:
call setup, signaling (RSVP)
traffic, QoS declaration
per-element admission control (request/reply)
QoS-sensitive scheduling (e.g., WFQ)
RSVP
Multipoint: multicast connections
Soft state
Receiver-initiated reservations
identifies a session using a combination of dest. address, transport-layer protocol type, and dest. port number
each RSVP operation applies only to packets of a particular session
not a routing protocol; merely used to reserve resources along the existing route set up by whichever underlying routing protocol
RSVP Messages
Path message: originates from the traffic sender
to install reverse routing state in each router along the path
to provide receivers with information about the characteristics of the sender traffic and the end-to-end path, so that they can make appropriate reservation requests
Resv message: originates from the receivers
to carry reservation requests to the routers along the distribution tree between receivers and senders
PathTear, ResvTear, ResvErr, PathErr
PATH Message
Phop: the address of the last RSVP-capable node to forward this path message.
Sender template: filter specification identifying the sender
Sender Tspec: defines sender traffic characteristics
Optional Adspec: info about the end-to-end path used by the receivers to compute the level of resources that must be reserved
RESV Message
Rspec: reservation specification comprising the value R of bandwidth to be reserved in each router, and a slack term for end-to-end delay
reservation style: FF, WF, or SE
Filterspec to identify the senders
Flowspec comprising the Rspec and a traffic specification (set equal to the Sender Tspec)
optional ResvConf
Call Admission
An arriving session must:
declare its QOS requirement -- R-spec: defines the QOS being requested
characterize the traffic it will send into the network -- T-spec: defines traffic characteristics
A signaling protocol is needed to carry the R-spec and T-spec to the routers (where reservation is required): RSVP
Intserv QoS: Service models [rfc2211, rfc 2212]
Guaranteed service: worst-case traffic arrival: leaky-bucket-policed source; simple (mathematically provable) bound on delay [Parekh 1992, Cruz 1988]
Controlled load service: "a quality of service closely approximating the QoS that same flow would receive from an unloaded network element."
Chapter 6 outline
6.1 Multimedia Networking Applications
6.2 Streaming stored audio and video; RTSP
6.3 Real-time, Interactive Multimedia: Internet Phone Case Study
6.4 Protocols for Real-Time Interactive Applications: RTP, RTCP, SIP
6.5 Beyond Best Effort
6.6 Scheduling and Policing Mechanisms
6.7 Integrated Services
6.8 RSVP
6.9 Differentiated Services
IETF Differentiated Services
Concerns with Intserv:
Scalability: signaling and maintaining per-flow router state is difficult with a large number of flows
Flexible service models: Intserv has only two classes. We also want “qualitative” service classes (“behaves like a wire”; relative service distinctions: Platinum, Gold, Silver)
Diffserv approach: simple functions in the network core, relatively complex functions at edge routers (or hosts)
Don’t define service classes; provide functional components to build service classes