
Digital Video Networks & Quality of Service

White Paper

Copyright 2002 VisioWave SA. All Rights Reserved.


TABLE OF CONTENTS

Introduction
    What is quality of service?
    Digital video transport requirements
    Network technologies
Integrated services vs. dedicated network
    Voice traffic
    Data traffic
    Video traffic
Video quality in loaded networks and remedies
    Queuing
    Bandwidth capacity overflow
        Lower bandwidth requirements
        Increase link capacity
        Traffic engineering
        Bandwidth arbitration policies
    Transient buffer overflows
        Flow control
        Larger buffers
        Traffic shaping and policing
Bandwidth arbitration
    When is bandwidth arbitration needed?
    Default behavior of packet switching networks
    Default behavior of virtual circuit networks
    Arbitration policies
    Priority queuing
    Congestion avoidance
Conclusion


Introduction

This white paper addresses the issues of quality of service relevant to building digital video network applications. We chose to cover in depth the three most common network technologies deployed in metropolitan area networks: Ethernet, routed IP networks, and ATM networks. Particular attention is given to the key differences between data and video networks and how they translate into the specific QoS requirements of video networks.

What is quality of service?

In our context, quality of service, or QoS, covers all the techniques that can be deployed to arbitrate network resources when there are not enough of them for all network packets to be delivered to their destinations at a given point in time.

It is worth noting that the lowest delay, jitter, and packet loss are always achieved when there is no congestion, since no data is ever queued, and hence delayed or lost. The opposite is also true: whenever network capacity is insufficient, no matter how sophisticated the QoS technologies you deploy, packets will still be lost or delivered late.

In this definition, we have deliberately left aside the issues of guaranteed delay or jitter limits. These are mostly irrelevant because digital video has intrinsic delays in the tens of milliseconds (video is 50 or 60 samples per second), orders of magnitude above the maximum delays and jitter that can be observed in a MAN, so we can safely ignore them.

Classically, QoS techniques are divided into shaping, policing, tagging, queuing, and scheduling. We will introduce these concepts in context as we examine what happens in an overloaded video network.

Digital video transport requirements

The considerations on quality of service detailed below apply to all types of video compression algorithms.

Digital video is compressed so as to take advantage of the redundancies of the video signal to reduce the bandwidth requirements on the network. Spatial redundancies (inside one video frame) are always exploited. This is the basis of two of the most common compression techniques in video networks for surveillance or CCTV applications: MJPEG (DCT-based) and MJPEG2000 (wavelet-based).

Temporal redundancies (from one frame to the next) can also be exploited, which dramatically reduces the bandwidth requirements. To do so, consecutive frames are grouped into “group of pictures” or GOPs, which range in size from 4 (typical value for VisioWave 3D wavelets) to 12 (MPEG2) or more (MPEG4).

The downside of this added efficiency is longer delay, since one GOP has to be fully acquired before it can be compressed and sent over the network, and a higher sensitivity to packet loss, since one entire GOP will be lost for each packet dropped by the network. Table 1 presents the common requirements of different compression algorithms in terms of bandwidth and packet loss, as well as their processing delays.

Table 1: Requirements of different video compression algorithms

Compression algorithm   Intrinsic delay   Bandwidth    Packet loss
MJPEG                   40 ms             8 Mbit/s     < 50 packets/s
MJPEG2000               40 ms             4 Mbit/s     < 50 packets/s
3D Wavelets (v2)        100 ms            2 Mbit/s     < 12 packets/s
3D Wavelets (v3)        340 ms            1 Mbit/s     < 3 packets/s
MPEG1                   260 ms            1.5 Mbit/s   < 4 packets/s
MPEG4                   1 s               800 Kbit/s   < 1 packet/s

Temporal compression is the most useful tool when confronted with quality of service issues in digital video networks, since it can double or triple the number of video channels transported on the same network, but particular attention must be paid to the added packet loss sensitivity. VisioWave’s 3D wavelet form of temporal compression is especially relevant in this context since it is activated simply by reprogramming the FPGA compression engine and, unlike MPEG1/2/4, does not require extra costly compression hardware.

Network Technologies

The three most common network technologies available to metropolitan area network builders are switched / gigabit Ethernet, routed IP networks, and ATM networks.

IP networks with an ATM backbone are considered routed IP networks for the purpose of this discussion. The first two types often work together, with the routed IP network connecting together a series of Ethernet LANs, as in Figure 1.

Figure 1: Mixed Routed IP / switched Ethernet network


Native ATM networks require ATM adapters in the video equipment to connect directly to the ATM cloud, as depicted in Figure 2.

Figure 2: "Native" ATM network

Ring configurations are common in video networks for security applications. Depending on the technology employed, they can be treated as switched Ethernet or ATM networks. IP routing is seldom encountered in ring topologies, but switched packet rings are becoming a common router interconnect technology as part of an IP backbone, under such names as Cisco DPT or Resilient Packet Ring.


Integrated services vs. dedicated network

The computer network that transports the video streams can be either dedicated to this task or shared with other services, typically voice and data. The QoS requirements of voice and data services are drastically different from those of digital video. In an integrated services environment, QoS support from the network is required to accommodate those differences without damaging interference, e.g. loss of voice or video when transferring huge chunks of data.

Voice traffic

Voice is very sensitive to delay and jitter; it also requires very low, constant bandwidth. Voice is well suited to regulation by access control: it is acceptable to deny a call at setup time if a good quality cannot be guaranteed throughout the conversation. Calls have a limited duration and, as other calls finish, the user will be able to redial and retry the call setup. On the other hand, degrading the voice quality of all currently established calls to let another user in would not be acceptable.

This is why we recommend using resource reservation mechanisms to support voice over network applications, whether IP RSVP or ATM SVC. This will effectively give voice priority over video. If this is not desirable, we recommend setting a limit on the amount of bandwidth that can be reserved by RSVP VoIP calls or AAL0 SVC calls. The usual telephony rules of thumb apply when dimensioning this maximum voice bandwidth, as it is very analogous to the trunking of phone lines.

Data traffic

Data is very bursty and bandwidth intensive, but also very tolerant of losses and delays, all the more so as it almost invariably uses TCP/IP. TCP has a built-in congestion control algorithm that self-regulates the bandwidth used by data connections in a congested network.

We recommend prioritizing data as the lowest class of service on IP networks, or using LANE services with the UBR class of service on ATM networks. This will let data users consume the maximum bandwidth when it is available while letting video and audio flow freely.

Video Traffic

With those basic QoS setups, integrated services networks are equivalent to dedicated networks as far as digital video streams are concerned. From now on we will focus only on dedicated networks.


Video quality in loaded networks and remedies

Video quality degrades because packet loss occurs in an overloaded network. A closer look at the queuing mechanisms inside the network equipment will shed some light on the possible remedies.

Queuing

At the highest level, a video stream flows from a source (video input equipment) to a receiver (video output equipment) through a network, which can be modeled as a series of links and switches/routers that must be traversed from the source to the receiver (Figure 3).

Figure 3: Physical vs. logical view of links

Before transmission on each link, including the first link from the video input equipment to the access network switch, video packets are first put in a queue, then retrieved by the scheduler of the physical link for actual transmission to the next hop (Figure 4). Whenever more packets enter the queue at a given time than the physical link can schedule, the queue occupancy grows. If this condition lasts long enough, the queue overflows and additional packets are dropped.

Figure 4: Packet Queuing


How to prevent packets from being dropped depends on the nature of the packet excess. If there is a link where, on average, more bytes per second must travel than the link's capacity allows, we call this a "bandwidth capacity overflow" condition. If, on the contrary, all links have enough capacity for their average load, but bursts of data overflow the queue from time to time, packet loss is only occasional and we call this a "transient buffer overflow" condition.
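The difference between the two conditions can be illustrated with a toy drop-tail queue (a simple model of our own; the function name, tick-based accounting, and numbers are illustrative, not VisioWave measurements):

```python
# Toy drop-tail queue: each tick, some bytes arrive, the link drains up
# to 'capacity' bytes, and whatever exceeds 'buffer_size' is dropped.
def simulate(arrivals, capacity, buffer_size):
    queued, dropped = 0, 0
    for arriving in arrivals:
        queued = max(0, queued + arriving - capacity)  # link drains the queue
        if queued > buffer_size:                       # queue overflow: loss
            dropped += queued - buffer_size
            queued = buffer_size
    return dropped

# Transient overflow: average load (30/tick) fits the 40/tick link, but a
# small buffer still drops bytes on each burst...
print(simulate([60, 0] * 5, capacity=40, buffer_size=15))  # 25
# ...while a slightly larger buffer absorbs the bursts entirely.
print(simulate([60, 0] * 5, capacity=40, buffer_size=25))  # 0
# Capacity overflow: average load (50/tick) exceeds the link; the queue
# fills up and then drops steadily no matter how generously it is sized.
print(simulate([50] * 10, capacity=40, buffer_size=60))    # 40
```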

Bandwidth capacity overflow

In the first case, there are four possible remedies, presented here in order of increasing implementation complexity.

Lower bandwidth requirements

One option is to use temporal compression on VisioWave equipment. It dramatically reduces the amount of network resources required for each stream without sacrificing visual quality. Another option is to lower the number of frames per second displayed on the output equipment. In some security applications, 12 frames per second is enough, and better than lowering the overall quality of each frame.

Increase link capacity

For example, if the uplink from the input equipment to the access network is not large enough, you could add a network interface to it, or upgrade from 100 Mb Ethernet to 1 Gb Ethernet, or from OC-3 ATM to OC-12. The same is true for links inside the network, although it is not as straightforward since there may be more than one path from source to destination, which leads to the next option.

Traffic engineering

"Traffic engineering" refers to diverting excess traffic onto alternate network routes. For backbone links, rerouting might just be the solution.

In general, ATM networks running the PNNI protocol will always seek rerouting options at call setup time, as do MPLS networks running LDP. IP networks running internal routing protocols such as RIP, IGRP or OSPF will do the same (on a per-packet, rather than per-call, basis), provided you use link metrics proportional to the actual capacity of each link.

On the other hand, layer 2 Ethernet networks running the spanning tree protocol will only seek to avoid loops in the packet forwarding tables and make no attempt at optimizing link utilization. This is generally not a concern since these technologies are deployed on local access networks with plenty of extra capacity, but it should be kept in mind when considering pure Ethernet MANs.

Bandwidth Arbitration Policies

Deploy QoS strategies to arbitrate the bandwidth between the different users of the system. Unlike voice applications, resource reservation is not the best option: in most security applications, a lower number of frames per second is more acceptable than not being able to visualize a camera when a situation occurs. On the other hand, when a critical situation happens, it may be desirable to allocate resources to some security operators in priority.

This may mean a dynamic redistribution of bandwidth between streams after their establishment, which rules out the common resource reservation protocols, IP RSVP and ATM SVC. We will examine the bandwidth arbitration options, priority queuing and congestion avoidance, in the next section.

Transient buffer overflows

The latter case, transient overflow conditions, can definitely be cured. Again, the remedies are presented in order of increasing implementation complexity.

Flow control

This is mostly applicable to Ethernet networks. By keeping switching decisions and the associated hardware simple, switched Ethernet or gigabit Ethernet offers much larger link bandwidth than other network technologies. However, cost cutting is often done at the expense of the amount of buffer memory available for queues. More often than not, this translates into very poor handling of video streams because of their bursty characteristics.

However, the Ethernet model is actually valid for digital video network applications. Large buffers are not needed at every link between the sender and receiver just in case one link is transiently overloaded. One correctly sized buffer as far upstream as possible is enough, provided that back pressure can be applied from the overloaded link up to that upstream queue.

This is exactly what flow control (802.3x) does in Ethernet networks. In most cases, turning on flow control on all Ethernet elements, including the Ethernet adapter of the video input equipment, will eliminate occasional frame dropping by taking advantage of the internal queue of the video input equipment.

Flow control is enabled by default on VisioWave Ethernet video equipment, and the buffer size of the input equipment is automatically dimensioned according to the user's compression settings on a per-input basis.

Larger buffers

For networks that do not have a flow control mechanism, such as routed IP or ATM, care must be taken to correctly size the queues on each piece of equipment, which requires a better understanding of the characteristics of video streams.

A video input equipment invariably contains a digital video sampler (also called a video decoder) that converts the PAL or NTSC signal from the camera into digital video frames. At least one frame (one GOP for temporal compression) must be completely digitized before being passed on to the compression engine. The compression engine processes the video frames using mathematical transforms, such as the discrete cosine or wavelet transforms, to compact the energy of the signal into a few coefficients. Those coefficients are then passed to the entropy coder, which produces the bit stream that will be sent on the network.


The bottom line here is that each frame or GOP is processed independently of the others, and the bit stream is produced all at once at the end of the encoding pipeline, in one large burst of data. The burst size is one compressed video frame, or one compressed GOP for temporal compression.

For example, assuming an 8 Mbit/s video stream without temporal compression at 50 frames per second, each frame is 8 Mbit ÷ 50 = 160 Kbit or 20 Kbytes. Table 2 gives common frame sizes for different compression settings.

Table 2: Burst size for different compression settings

Compression              Bandwidth   Burst size
DCT at 50 fps            8 Mbit/s    20 Kbytes
Wavelets at 50 fps       6 Mbit/s    15 Kbytes
Wavelets at 25 fps       4 Mbit/s    20 Kbytes
3D Wavelets (GOP of 4)   4 Mbit/s    40 Kbytes
3D Wavelets at 25 fps    2 Mbit/s    40 Kbytes

The correct queue size for a link is then the burst size multiplied by the number of video streams that may pass through the link. Assuming the full bandwidth of the link is used for video, the queue size is simply the link bandwidth divided by 50 or 60, depending on whether PAL or NTSC is used. If temporal compression is applied, this number must be multiplied by the GOP size in frames. Table 3 gives common buffer sizes for different combinations of network and compression technologies.

Table 3: Buffer size for different compression and link types

Compression           Link type         Buffer size
Non temporal 50 fps   100 Mb Ethernet   250 Kbytes
Non temporal 50 fps   155 Mb ATM        336 Kbytes
Non temporal 25 fps   100 Mb Ethernet   500 Kbytes
Non temporal 25 fps   155 Mb ATM        672 Kbytes
Temporal 50 fps       100 Mb Ethernet   1 Mbyte
Temporal 50 fps       155 Mb ATM        1.3 Mbytes
Temporal 25 fps       100 Mb Ethernet   2 Mbytes
Temporal 25 fps       155 Mb ATM        2.6 Mbytes
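The sizing rules above reduce to one line of arithmetic. The sketch below (the function names are ours, for illustration) reproduces the burst sizes of Table 2 and the Ethernet buffer sizes of Table 3:

```python
# Burst size: one compressed frame, or one GOP with temporal compression,
# i.e. the stream bandwidth divided by the frame rate, in bytes.
def burst_size_bytes(stream_bps, fps, gop_frames=1):
    return stream_bps / fps / 8 * gop_frames

# Worst-case queue for a link fully loaded with video: link rate divided
# by the frame rate (50 for PAL, 60 for NTSC), times the GOP size.
def queue_size_bytes(link_bps, fps, gop_frames=1):
    return link_bps / fps / 8 * gop_frames

# Table 2: an 8 Mbit/s non-temporal stream at 50 fps bursts 20 Kbytes.
print(burst_size_bytes(8_000_000, 50))                  # 20000.0
# Table 3: 100 Mb Ethernet at 50 fps needs 250 Kbytes of buffer,
# or 1 Mbyte with temporal compression and a GOP of 4 frames.
print(queue_size_bytes(100_000_000, 50))                # 250000.0
print(queue_size_bytes(100_000_000, 50, gop_frames=4))  # 1000000.0
```

(The ATM figures in Table 3 are somewhat larger per Kbyte of video because of cell overhead, so this simple rule applies directly only to the Ethernet rows.)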

Buffer sizes usually cannot easily be extended on Ethernet switches. On most IP routers, there is a fixed amount of total buffer memory, which can be allocated to each link by configuration settings.


Finally, on ATM switches there is usually a fixed amount of total buffer memory, which is allocated according to the parameters of the nrtVBR SVC setup call messages. VisioWave equipment automatically uses the correct parameters for nrtVBR SVCs. ATM PVCs, on the other hand, must be programmed on the ATM switches using, for example, the data from Table 2.

If the queue size requirements cannot be met, they can be reduced by a technique called traffic shaping.

Traffic shaping and policing

Shaping and policing always operate together. Shaping refers to smoothing traffic bursts by sending smaller, evenly spaced packets. Assuming all video streams conform to the same shaping characteristics, the buffer size can be reduced to one shaping atom (largest Ethernet packet or one ATM cell) per video stream. Policing is the action of rejecting incoming streams that do not pass a conformance test, so that a stream which breaks the assumptions about buffer size is never admitted into the system.
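A common way to implement such a conformance test is a token bucket, sketched below (the class name and parameters are illustrative; real shapers and policers implement the same accounting in hardware):

```python
# Token-bucket conformance test: tokens refill at the contracted rate,
# the bucket depth is the allowed burst, and a packet conforms only if
# enough tokens are available to "pay" for it.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.burst = burst_bytes     # bucket depth = allowed burst
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, now, packet_bytes):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:  # conformant: may be sent
            self.tokens -= packet_bytes
            return True
        return False                     # policing would drop or delay it

# A 4 Mbit/s contract with a 1500-byte burst allowance: back-to-back
# packets beyond the burst are non-conformant until tokens refill.
tb = TokenBucket(4_000_000, 1500)
print(tb.conforms(0.0, 1500))   # True  (within the burst allowance)
print(tb.conforms(0.0, 1500))   # False (bucket empty)
print(tb.conforms(0.01, 1500))  # True  (10 ms refills 5000 B, capped at 1500)
```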

Shaping does not apply to Ethernet networks where flow control is a much lighter mechanism to achieve the same goal.

In routed IP networks, shaping requires dedicated hardware, which is never found on Ethernet adapters. Furthermore, the video streams usually first travel through a switched Ethernet access network before entering the routed backbone, so shaping at the video input equipment is not relevant. For this reason, shaping is typically performed by so-called "edge routers", which the access networks connect to. In this configuration, shaping in routed IP networks effectively shifts the buffer size requirements from the backbone routers up to the edge routers. This really makes a difference only when priority queuing, presented next, is used, since it keeps the sizes of the different queues of the backbone routers reasonable.

For ATM networks, shaping is pervasive when using CBR calls. Because of the very small size of ATM cells, dedicated hardware is required on the video input equipment to support CBR virtual circuits. The ATM option of VisioWave video equipment supports CBR SVC calls if so configured, and will automatically match the peak cell rate parameter to the current compression settings.

We do not recommend using CBR unless severely constrained by buffer size limits in the ATM switches, as in most cases it will introduce unnecessary delays because of the added maximum buffering in the video input equipment. The recommended nrtVBR class of service will optimize the delay / queue size ratio for most ATM networks.


Bandwidth Arbitration

When is bandwidth arbitration needed?

Bandwidth arbitration among video streams is required whenever the bandwidth capacity of one of the video network links is exceeded.

Such a condition can sometimes be ruled out, for instance because of massive over-provisioning of network capacity, or because the streaming patterns are well identified, for example when the network is only used to transport video between two points and calls are not set up on demand.

In all other cases the default behavior of the network under overload is probably not acceptable and one of the techniques presented later must be applied.

Default behavior of packet switching networks

In Ethernet, routed IP networks, or ATM networks using UBR SVC calls, random packets are discarded as the queues overflow. Even when very few packets are being dropped, the effect is spectacular because digital video is very sensitive to packet or cell loss, so quality degrades quickly. Table 4 below shows the frame loss ratio for different values of network over-utilization and video packet sizes for Ethernet and ATM networks.

On average, all users will see the same degradation of frame rate, or no video at all if packet loss is massive enough to affect all video frames. The video connections will then be detected as down by the receivers, which will retry them, thereby repeating the overload condition periodically.

Users will then experience an intermittent, unreliable service as video streams keep being reestablished.

Table 4: Packet loss vs. frame loss for Ethernet/IP and ATM networks

Compression type                  Overuse   Frame loss ratio         Frame loss ratio
                                            (IP 1472-byte packets)   (ATM 48-byte cells)
Non temporal 50 fps at 6 Mbit/s   1 %       10 %                     95 %
Non temporal 50 fps at 6 Mbit/s   10 %      65 %                     100 %
Non temporal 25 fps at 4 Mbit/s   1 %       15 %                     98 %
Non temporal 25 fps at 4 Mbit/s   10 %      75 %                     100 %
Temporal 50 fps at 4 Mbit/s       1 %       25 %                     99.9 %
Temporal 50 fps at 4 Mbit/s       10 %      95 %                     100 %


Default behavior of virtual circuit networks

In ATM networks using nrtVBR or CBR SVC calls, or IP networks using RSVP, overload conditions cannot appear because the access control check prevents the establishment of any new video call once the maximum capacity is reached.

This can be quite detrimental because bandwidth is allocated on a first come, first served basis and does not depend on individual user identification. Hence some video streams may reserve enough bandwidth on one of the backbone links to effectively cut off entire sections of the network that depend on this link.

For applications requiring the frequent establishment of video calls, such as cycling through a series of cameras, the result will again be an intermittent, unreliable service.

Arbitration Policies

The goals of arbitration policies are:

1. To provide some guarantees of service availability as required by the operating constraints of the security installation. For example, one central security center may be required to always be able to display any of the cameras regardless of the activity in the local security centers.

2. To provide graceful degradation of service (frame rate or image quality). For instance, if twice the number of streams normally supported at 50 fps is required at some point, they should all switch to 25 fps mode instead of going into an intermittent mode where only half of the streams can be viewed.

Priority Queuing

The first model of arbitration is network based. To provide differentiated service guarantees, packets are tagged with a priority level. Tagging is done in the video input equipment and may be based on where the video originates, where it is displayed, or who requested it. Policies can be entirely customized through the VisioWave Video Operating System API.

In this model, switches or routers use multiple queues on their output interfaces, one per priority level, and packets are placed in the queue that maps to their priority level (Figure 5). The output scheduler is configured to always service the non-empty queue with the highest priority level first, effectively guaranteeing that video streams running at a given priority level will never experience video quality degradation unless the network is overflowed by video streams running at the same or a higher priority level.
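The mechanism can be sketched in a few lines (a minimal model of our own; class and method names are illustrative, and real switches do this in hardware):

```python
from collections import deque

# Minimal strict-priority scheduler: one FIFO queue per priority level;
# the scheduler always drains the highest non-empty queue first.
class PriorityScheduler:
    def __init__(self, levels):
        self.queues = {lvl: deque() for lvl in range(levels)}

    def enqueue(self, packet, priority):
        # The classifier has already mapped the packet's tag (e.g. the
        # 802.1p or DiffServ field) to a local priority level.
        self.queues[priority].append(packet)

    def schedule(self):
        # Service the highest-priority queue that has packets waiting.
        for lvl in sorted(self.queues, reverse=True):
            if self.queues[lvl]:
                return self.queues[lvl].popleft()
        return None  # link is idle

sched = PriorityScheduler(levels=3)
sched.enqueue("low-1", 0)
sched.enqueue("high-1", 2)
sched.enqueue("low-2", 0)
print(sched.schedule())  # high-1 : the tagged stream always goes first
print(sched.schedule())  # low-1
```

A starved low-priority queue is exactly the intended behavior here: under overload, only the lowest-priority video degrades.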


Figure 5: Priority Queuing

Two implementations of this model are available for VisioWave video equipment: 802.1p Ethernet QoS and DiffServ IP QoS, suitable respectively for switched Ethernet networks and routed IP networks. They are usually applied together when video equipment is connected first to Ethernet local access networks, themselves interconnected by an IP backbone.

The graceful degradation objective is met by giving a higher priority level to every other video frame, thereby providing users with a consistent 25 fps service when 50 fps can no longer be sustained. This works because packet losses will be concentrated in the odd frames, leaving the even frames intact. The same idea can be extended recursively, so that video quality gracefully reverts to 12.5 fps, then 6.25 fps. For NTSC, the numbers are 60 fps, 30 fps, 15 fps and 7.5 fps respectively. Figure 6 below details the priority level of each individual frame inside the stream as applied by VisioWave video input equipment in graceful degradation mode.

Figure 6: Priority level settings for graceful degradation
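One way to reproduce the recursive pattern of Figure 6 in code (an illustrative sketch, not VisioWave's actual implementation) is to derive a frame's priority from how many times its index is divisible by two:

```python
# Illustrative only: a frame's priority level is the number of times its
# index divides by two, so odd frames sit at level 0 and are sacrificed
# first (leaving 25 fps), then every other remaining frame (12.5 fps),
# and so on -- the recursive pattern described in the text.
def frame_priority(index):
    level = 0
    while index % 2 == 0:
        index //= 2
        level += 1
    return level

print([frame_priority(i) for i in range(1, 9)])
# [0, 1, 0, 2, 0, 1, 0, 3] : discarding level 0 halves the frame rate
```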


Priority queuing has three major advantages over resource reservation schemes like ATM SVC or IP RSVP:

1. It is scalable and cost-efficient, since every router or switch only has to maintain a handful of queues and does not need to track the state of every individual connection flowing through it.

2. Arbitration can be done on a per-user or per-class basis rather than first-come, first-served, thereby providing a higher level of video availability in critical situations.

3. The level of service degradation self-adapts to network conditions. Network resource utilization is always optimized and capacity can be planned for the average case, while still providing hard guarantees on availability. Resource reservation, on the other hand, always provides worst-case service, because it books the maximum amount of bandwidth that each individual connection can use and so prevents statistical multiplexing of video streams.

Congestion Avoidance

An innovative approach to QoS, called "congestion avoidance", will be introduced with release 2.1 of the video operating system. This model operates purely at the application level and does not rely on any QoS mechanism in the network itself. It is therefore suited to ATM networks with the UBR class of service, or to IP/Ethernet networks without support for differentiated services. The only assumption made about the network is that congestion manifests itself as packet loss.

In this model, the receivers constantly measure the quality of the link in terms of delay and packet loss. This information is fed back to the video input equipment, where the number of frames per second is dynamically adjusted, using the binomial congestion control laws, so that congestion stays minimal. The algorithm is designed to detect and control congestion before frames are actually dropped because of excessive packet loss, making it a true "congestion avoidance" algorithm. Adaptation is not as fine-grained as with priority queuing, which works at the packet level, but for most applications adaptation times on the order of a few seconds are acceptable.

Arbitration policies are supported with the same priority level semantics as for priority queuing and are controlled by the same APIs. Arbitration is done by adjusting the back-off aggressiveness of the congestion avoidance algorithm according to the priority level of each stream: lower-priority video streams back off faster than higher-priority ones, effectively transferring their share of the bandwidth to the higher-priority streams. Within the same priority class, bandwidth is shared evenly among the different streams.
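A hedged sketch of how such priority-weighted binomial back-off could work follows; the exponents, constants, frame-rate bounds and class structure here are illustrative assumptions, not the shipped implementation:

```python
class RateController:
    """Binomial rate control with priority-weighted back-off (a sketch)."""

    K, L = 1.0, 0.5          # binomial exponents: gentle probe, sublinear back-off
    ALPHA = 1.0
    MAX_FPS, MIN_FPS = 50.0, 1.0

    def __init__(self, priority, beta=0.3):
        # Lower-priority streams (larger number) back off more aggressively,
        # transferring their bandwidth share to higher-priority streams.
        self.fps = self.MAX_FPS
        self.beta = beta * (1 + priority)  # priority 0 = highest

    def on_interval_ok(self):
        # Receiver reported a loss-free interval: probe for more bandwidth.
        self.fps = min(self.MAX_FPS, self.fps + self.ALPHA / self.fps ** self.K)

    def on_loss(self):
        # Receiver feedback signals congestion: reduce the frame rate
        # before the network starts dropping whole frames.
        self.fps = max(self.MIN_FPS, self.fps - self.beta * self.fps ** self.L)

high, low = RateController(priority=0), RateController(priority=2)
high.on_loss(); low.on_loss()
# After the same congestion signal, the low-priority stream runs slower,
# yielding its bandwidth share to the high-priority stream.
```

Because increase and decrease are both smooth functions of the current rate, streams within one class converge to an even share, while the priority-scaled `beta` reproduces the arbitration semantics described above.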

Graceful degradation is built into the algorithm, since it operates directly at the frame level rather than at the packet level like network-based techniques.

In a nutshell, the congestion avoidance feature brings the benefits of priority queuing to digital video networks without having to upgrade to expensive new DiffServ enabled routers or switches.


It is especially relevant in environments where different network technologies are mixed, for example ATM, IP over SONET and gigabit Ethernet, because it operates at the application layer and encompasses every element of the link between the video input equipment and the receivers, regardless of the underlying network technologies. It can therefore provide a realistic answer to availability concerns in environments where existing network investments must be preserved and priority queuing is not available on all elements.

Conclusion

The VisioWave Video Operating System provides comprehensive support for the quality of service requirements of security applications. On the one hand, cost-effective digital video networks can be built with low-cost Ethernet technologies without sacrificing service availability or custom arbitration policies. On the other hand, current service integration technologies such as DiffServ, IP RSVP and ATM SVC are extensively supported, enabling true multi-service network integration of video applications. In particular, the power of emerging priority queuing networking products can be leveraged to provide uninterrupted video, voice and data services in a multi-service IP network.