

Multimedia: the Impact on the Teletraffic

BRANIMIR RELJIN 1, IRINI RELJIN 2

1 Faculty of Electrical Engineering, University of Belgrade, Bul. Kralja Aleksandra 73, YUGOSLAVIA

2 PTT College, Belgrade, Zdravka Čelara 16, YUGOSLAVIA

Abstract – The Internet has experienced a fascinating evolution in the recent past. On the other hand, many different multimedia services, particularly high-definition interactive video, demanding quality-of-service guarantees, became the driving force behind the development of the asynchronous transfer mode (ATM) and different management strategies. These rather new techniques have already changed the traffic type. Owing to ubiquitous variable bit rates, the traffic has turned out to be fractal. As a consequence of the multiplexing of different traffic streams, as for instance in telemedical applications, a multifractal phenomenon arises. This paper explains the complex nature of modern teletraffic and indicates, by analyzing real-life data obtained in a telemedicine application, that multifractal analysis is more appropriate for describing such broadband traffic. Bearing in mind the high variability of network performance, neural networks seem to be a promising solution in some applications. The arguments for and against neural network control of modern teletraffic are briefly discussed. Keywords – Multimedia, teletraffic, communication networks, network management, fractal/multifractal traffic, neural networks

1 Introduction

The end of the 20th century is strongly characterized by advances in computer and communication technology. Modern communication networks have hundreds or even thousands of network nodes, with a number of different sources and various traffic types, to which thousands of users are connected. High-speed integrated networks are used to provide a new and diverse mixture of services and applications. From the present point of view, future advances in networking will concentrate on supporting multimedia communication. Multimedia assumes various services, such as the transmission of continuous signals (e.g., voice, audio, video, image sequences) as well as discrete data transfer, in the same application and in an interactive manner over digital networks [1],[2]. Nowadays multimedia comprises video-conferencing, video/movies on demand, distance learning, distributed games, telemedicine, and telemarketing (called interactive television, ITV), connected via local and global networks, as indicated in Fig. 1.

Fig. 1 The multimedia world.


When multimedia is transmitted over the network, different types of traffic have different requirements. The fundamental challenge in inventing a network for multimedia is that it must handle not only multiple types of signals, particularly images with various sizes, numbers of pixels, and pixel depths, but also multiple media: picture, voice, data, and text. Various media have different needs affecting efficient network operation. For instance, pictures can be transmitted in long data packets, whereas interactive video/voice packets must be delivered with a relatively short delay and a small delay variation; hence, these signals are better suited to short packets that minimize the delay between viewers/speakers. Furthermore, video traffic consumes most of the bandwidth (even after compression) and also requires a high level of quality of service (QoS); file data needs reliable transmission and a reasonable transfer time, and so on. Also, such important objectives as responsibility, authenticity, privacy, and security of multimedia information [3]-[5] should be stressed.

The development of multimedia is strongly connected with the global technological growth of the last decade. Nowadays micro- and nanoelectronics provide low-cost but very fast and extremely powerful personal computers, high-resolution displays, extremely-high-capacity storage devices, and high-speed local and metropolitan area networks (LANs, MANs), while new network protocols enable fast and reliable connection of different devices.

This paper deals with multimedia and is organized as follows. Section 2 discusses the main characteristics of multimedia traffic. In Section 3 the conditions under which teletraffic becomes fractal are discussed, explaining the failure of Poisson modeling. Simulation results and real-life measurements for a few typical traffic types are presented, confirming the fractal and multifractal nature of multimedia traffic. Furthermore, in Section 4 arguments for and against neural network applications in teletraffic control are considered. Finally, according to the theory as well as the simulation and measurement results, some conclusions are derived.

2 Basic Characteristics of Multimedia Traffic

All the services offered by the network can be classified into two extreme types, called guaranteed deterministic and best effort, respectively. The first one offers predetermined performance quality even in the worst case, as long as there are no failures in the network. On the contrary, the best-effort service does not offer any guarantees, but attempts to reach the best possible QoS. The Internet service model based on the IP (Internet Protocol) is the most frequently used service belonging to the best-effort type. Although the original Internet applications (e.g., e-mail and FTP) are data-oriented [6], in the new multimedia era this situation is rapidly changing. A few years ago a very interesting paper [7] with the rather provocative title "The convergence of telecommunications and computing…" seemed to be only an axiomatic approach to these problems. Since then, the boundary between the two types of communications has become blurred. There is a growing interest within the telecommunications industry in providing voice and video services over IP networks. This trend coincides with the explosive growth of the World Wide Web, with voice and video being further integrated into the design of Web pages. These requirements have resulted in several improvements and modifications of the initial IP; for instance, the Real-Time Protocol (RTP), Integrated Services (Intserv), the Resource Reservation Protocol (RSVP), and Differentiated Services (Diffserv). The last one is still under investigation, and a working group of the Internet Engineering Task Force has tried to use the Diffserv model to support video services over the Internet [8].

The asynchronous transfer mode (ATM) networks are better suited to real-time and guaranteed QoS requirements. To date, six network service categories have been specified by the ATM Forum: constant bit rate (CBR), variable bit rate (VBR), including real-time VBR (rt-VBR) and non-real-time VBR (nrt-VBR), available bit rate (ABR), unspecified bit rate (UBR), and, recently, guaranteed frame rate (GFR) [9]. Real-time multimedia transmission requires rate guarantees for an acceptable picture quality. Since UBR cannot guarantee the transmission bit rate, it is rarely used to transmit multimedia over an ATM network. On the contrary, the CBR service provides a constant bandwidth that can be used to support guaranteed QoS requirements: the cell loss rate (CLR), maximum cell transfer delay (maxCTD), and cell delay variation (CDV). However, due to the bursty nature of broadband traffic, particularly compressed video, the peak cell rate (PCR) may not be fully utilized at all times, resulting in low bandwidth utilization and high service costs. The rt-VBR service negotiates the PCR and the sustainable cell rate (SCR) during connection setup, both of which are guaranteed throughout the entire duration of the connection. In this way rt-VBR provides guarantees on the same QoS parameters, but since it optionally uses statistical multiplexing, the resulting guarantees are probabilistic in nature. Nrt-VBR provides loss guarantees. Finally, to improve the utilization of the network bandwidth the ABR service can be used, which requires a large number of parameters to be negotiated during connection setup. The ABR service provides minimum cell rate (MCR) guarantees, which can be used to provide an acceptable QoS for video applications. The sources adjust their allowed cell rate (ACR) based on network feedback. Feedback is indicated in resource management (RM) cells, which are periodically sent by the source and turned around by the destination.
The switches along the path indicate the maximum rate they can currently support in the RM cell [10]. The feedback information obtained from the network is used to calculate the control parameters by an appropriate algorithm. In a client/server architecture this task is done by the server to control the source rate. The control parameters could include the data rate of the multimedia source and the priority label of the cells belonging to different frames of the video traffic. The execution unit (e.g., an admission/connection controller, rate controller, or priority label assigner) then implements the control scheme using these parameters. With the ABR service an acceptable QoS is obtained, bandwidth utilization is very high and, consequently, the service cost is very low [11].
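The ABR feedback loop described above can be sketched in a few lines. This is only a toy model under simplifying assumptions, not ATM Forum pseudocode: the function name and the sample explicit-rate values are invented for illustration.

```python
# Toy model of ABR explicit-rate feedback: on each returned RM cell
# the source clamps its allowed cell rate (ACR) to the explicit rate
# (ER) reported by the most congested switch, never below the
# negotiated MCR and never above the PCR.  Illustrative only.

def update_acr(er: float, mcr: float, pcr: float) -> float:
    """Return the new allowed cell rate after an RM cell arrives."""
    return min(max(er, mcr), pcr)

# A hypothetical connection: PCR 10 Mbit/s, MCR 1 Mbit/s.
pcr, mcr = 10.0, 1.0

# Explicit rates reported by the network over three RM cells.
for er in [8.0, 0.5, 12.0]:
    acr = update_acr(er, mcr, pcr)
    print(f"ER={er:5.1f} Mbit/s -> ACR={acr:4.1f} Mbit/s")
```

The clamping shows why ABR can keep utilization high at low cost: the source follows the spare capacity reported by the network while the MCR guarantee bounds the quality degradation.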

The GFR service category is intended to support non-real-time applications. It is designed for applications that may require a minimum rate guarantee and can benefit from accessing additional bandwidth dynamically available in the network. The service guarantee is based on the AAL-5 (ATM adaptation layer) protocol data units (PDUs, which are frames in this case) and, under congestion conditions, the network attempts to discard complete PDUs instead of discarding cells irrespective of the frame boundaries [9].

ATM was designed from the ground up to enable cheap switches. Small virtual connection identifiers are rapidly looked up, and fixed-size cells are not only easy to switch using a fast parallel fabric, but also easy to schedule. In contrast, IP, with its variable-length packets and need for a longest-prefix match, has been much harder to route. This relative difficulty is reflected in the relative prices of ATM switches and IP routers: IP routers are about an order of magnitude more expensive per switched megabit (Mbit) per second of bandwidth. However, several improvements have dramatically reduced IP routing costs. For instance, by fragmenting IP packets into fixed-sized units at input ports and reassembling them at output ports, ATM-like switching fabrics can be used even in IP routers. Also, advances in route lookup technology and in scheduling will, in the near future, bring the cost of IP routing very close to the cost of ATM switching [12].

Nowadays the most attractive projects are based on satellite transmission, whether they use the ATM technique or Internet protocols. These networks play a very important role in the deployment of global networks [13], being complementary to future fixed or terrestrial mobile ones. The ATM Forum, probably the leading institution in developing the broadband ISDN, is working intensively on specifications for satellite transmission. Besides, much attention is paid to the ATM service categories available to TCP/IP traffic [14]. There are many different networking scenarios, as well as experimental verifications [13]-[15] of the assumption that the global network should be based on the interconnection of ATM and the Internet. We point out, in Fig. 2, a possible architecture [13] based on satellite networks connected to a fixed ATM network. The satellite system used may be geostationary (GEO) or, more probably, medium/low Earth orbit (MEO/LEO), controlled from the network control station. The network control station is responsible for routing and call management (usually one per satellite). Its interconnection to other parts of the terrestrial network is obtained through Signaling System No. 7 (SS7). Leaving the main architectural/technical problems aside from now on, we shall concentrate on the possible types of traffic.

In the environment described, traffic is bursty and its modeling is possible only under some constrained conditions, or for some specific applications. The main network characteristic is that almost every parameter varies (the number of users, the network topology, the rates, the required bandwidth). The basic assumption of the previous telephone network, the Poisson one, has failed to be acceptable in data networks. Namely, for each traffic type, such as Internet, LAN/WAN, ATM, and SS7 traffic, it was found that the Poisson model is no longer valid. Different experiments indicate the self-similar (i.e., fractal) nature of modern traffic [16]-[22], as will be shown in Section 3. Fractal parameters can be used as robust parameters describing the complex traffic of the broadband era. Moreover, in a multimedia environment even the fractal parameters are not well suited; the multifractal parameters [23]-[31] are better descriptors. In order to track and control traffic parameters in multimedia communications, new complex routines have to be established. These need enormous computational power, high computing speed, and real-time control. The use of neural networks (NNs) could probably be a good solution for these purposes, as will be explained in Section 4.
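A common fractal parameter of this kind is the Hurst parameter H: short-range-dependent (Poisson-like) traffic gives H near 0.5, while H approaching 1 indicates long-range dependence. The classic variance-time estimator can be sketched in a few lines; the synthetic uncorrelated series below is only a stand-in for a measured trace.

```python
import math
import random

def hurst_variance_time(x, levels=(1, 2, 4, 8, 16, 32)):
    """Estimate H from the slope of log Var(X^(m)) versus log m,
    where X^(m) is the series averaged in blocks of size m.
    For an exactly self-similar process the slope equals 2H - 2."""
    pts = []
    for m in levels:
        n = len(x) // m
        agg = [sum(x[i * m:(i + 1) * m]) / m for i in range(n)]
        mean = sum(agg) / n
        var = sum((a - mean) ** 2 for a in agg) / (n - 1)
        pts.append((math.log(m), math.log(var)))
    # least-squares slope of the log-log points
    k = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    slope = (k * sxy - sx * sy) / (k * sxx - sx ** 2)
    return 1 + slope / 2

random.seed(1)
# Uncorrelated (Poisson-like) counts: the variance of the block mean
# decays as 1/m, so the estimate should come out close to H = 0.5.
iid = [random.gauss(100, 10) for _ in range(4096)]
h = hurst_variance_time(iid)
print(round(h, 2))
```

On a genuinely self-similar trace the same estimator returns H well above 0.5, which is the quantitative signature of the burstiness discussed in Section 3.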

Among the different multimedia services, telemedicine probably has the most severe demands [32]-[33]. Although the telemedicine concept is very simple (we acquire medical data from appropriate devices and transfer them to other centers), its realization is very difficult due to very hard technical requirements; in particular, the need to transmit, store, and search an extremely large number of large files, such as medical images. Within the last ten years, various image processing, transmission, and archiving systems have been developed for medical applications. These have focused on the areas of Radiology and Pathology, yet they are now finding their way into such areas as Cardiology, Neurology, Orthopedics, and Surgery [32]. The medical images, acquired from different imaging devices1, come in different formats with different spatial and level resolution. A typical radiology session is composed of a large set of images, generally about 60-100 CT or MR images (512x512 pixels and 12 bits per pixel) or 4-6 X-ray images of size 2048x2048 pixels (even 4096x4096 pixels) and 12-16 bits per pixel. Consequently, for a single patient the amount of information to be stored and transmitted is about 30-50 Mbytes, assuming two bytes for each pixel representation. For a medium-sized hospital (300 beds) the amount of stored data exceeds 3 Gbytes per day [34], or 1-1.5 Tbytes per year! Storing five years of the hospital's archived picture data would require a stack of about 10000 standard optical disks (CDs). If the maximum transmission delay, required to avoid an annoying effect on the user, is limited to 10 sec for each set of images, the minimum capacity required of the network is over 25 Mbits per second. (Note that radiologists become impatient if the image writing time is any greater than 2 seconds!) Apart from transmission, the local data access delay becomes the bottleneck of the global system; fast access to data stored

1 Such as computed tomography (CT), magnetic-resonance (MR) imaging, nuclear medical (NM) imaging, ultrasound (US) imaging, scanned X-ray images or radiology images obtained from imaging plates (as in computed radiology – CR), images from digital microscopes, etc.


[Fig. 2 artwork: GEO or MEO/LEO satellites with onboard processing and intersatellite links, a distributed satellite user network interface, gateway stations, a network control center, public and private ATM networks, the SS7 network, and connections to N-ISDN, PSTN, Internet, frame relay, LAN, and fixed user terminal equipment.]
on optical jukeboxes or tapes, connected to PACS (Picture Archiving and Communications Systems), is mandatory [34]. The fundamental challenge in inventing a network for medical images is that it must handle not only multiple types of images with various sizes, numbers of pixels, and pixel depths, but also multiple media: picture, voice, data, and text. The large volume of image data generates the necessity of image compression.
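The storage and bandwidth figures quoted above are easy to verify with back-of-the-envelope arithmetic. The sketch below assumes one radiology session of 80 CT/MR images, 512 x 512 pixels, stored in two bytes per pixel (a value inside the 60-100 image range given in the text).

```python
# Back-of-the-envelope check of the teleradiology figures.
MB, GB, TB = 10**6, 10**9, 10**12

# One session: 80 images of 512 x 512 pixels, 2 bytes/pixel.
session_bytes = 80 * 512 * 512 * 2
print(session_bytes / MB)            # ~41.9 MB, i.e. in the 30-50 MB range

# Minimum link rate to move one session within the 10 s limit:
mbit_per_s = session_bytes * 8 / 10 / 10**6
print(round(mbit_per_s, 1))          # ~33.6 Mbit/s, above the 25 Mbit/s figure

# A 300-bed hospital storing 3 GB/day accumulates per year:
tb_per_year = 3 * GB * 365 / TB
print(round(tb_per_year, 2))         # ~1.1 TB/year

# Five years of archive on ~650 MB CDs:
cds = 5 * 3 * GB * 365 / (650 * MB)
print(round(cds))                    # ~8400 disks, on the order of 10 000
```

The numbers confirm the text's claims: tens of megabytes per patient, over 25 Mbit/s of network capacity, and a terabyte-scale yearly archive.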

Fig. 2 Global ATM network, according to [13].

A very useful technique for reducing the number of bits is the diffusion dithering method, which allows us to use an 8-bit palette while producing the same visual effect as a 24-bit palette [35]. A similar procedure, named halftoning [36], can be applied when displaying and/or printing monochrome gray-scale images (8 bits per pixel, or 256 levels) using only 2 levels (1 bit per pixel, i.e., a binary image). Several methods have been suggested to make the halftone image similar to the continuous-tone image, exploiting the low-pass characteristic of the human eye [37]. Multilevel halftoning [38]-[39] produces almost the same visual effect as the original (256-level) image while using only 3 bits per pixel (eight levels).
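The error-diffusion idea behind these methods can be illustrated with the classic Floyd-Steinberg kernel (one of several possible schemes, not necessarily the one used in [35]-[39]); the flat mid-gray test patch is a toy input.

```python
def floyd_steinberg(img):
    """Binarize a gray-scale image (values 0-255) to the two levels
    {0, 255}, diffusing the quantization error to the unprocessed
    neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    h, w = len(img), len(img[0])
    px = [[float(v) for v in row] for row in img]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 255.0 if old >= 128 else 0.0
            px[y][x] = new
            err = old - new
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return px

# Toy input: a flat mid-gray patch.  The halftoned output is purely
# binary, yet its mean stays close to the original 128 -- which is why
# the low-pass human eye perceives a similar tone.
gray = [[128] * 16 for _ in range(16)]
out = floyd_steinberg(gray)
mean = sum(sum(row) for row in out) / 256
print(sorted({v for row in out for v in row}), round(mean))
```

Because the quantization error is pushed onto pixels not yet processed, the local average intensity is preserved even though every output pixel is black or white.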

Different approaches have been derived for assuring the required level of QoS cost-effectively. Video streams are usually compressed before being transported over a network. Several compression techniques are available today, such as H.261 for video teleconferencing, H.263 for low-bit-rate communications over telephone lines, JPEG (Joint Photographic Experts Group) for still images, and MPEG-x (Moving Picture Experts Group) for audio and full-motion video [40]-[42]. There is no agreement on which quality of digital imagery is necessary. Recent studies show that the resolution required depends on the examination type and the specialist doing the reading. Through a large number of studies written in the last two years, researchers have tried to agree on a minimum quality standard for digitized images. In some of them it can be found that standard TV PAL resolution (512 x 476 pixels) can be used to display a diagnostically usable digitized image [43]. However, other publications claim that a resolution of 1024 x 768 pixels is the absolute minimum and recommend higher resolutions (1600 x 1200 pixels, or more). The same disagreement can be found about the pixel depth (the minimal number of bits per pixel, in other words, the number of contrast levels and/or the number of colors used to display the diagnostically usable digitized image). While some authors deem an 8-bit palette (the ability to display 256 different colors or gray-scale levels) sufficient, others, mostly from countries with developed communication resources, believe that only a 24-bit palette (the ability to display 16.7 million colors) is the correct choice. In a very carefully elaborated study [35], it was made clear that most diagnosticians perceive no difference between an 8-bit (256 colors) and a 24-bit (16.7 million colors) image. Experimental studies with lossy compression algorithms, such as JPEG or wavelet-based transforms, confirm that compression ratios of 10:1 or 20:1 produce no perceptible differences in the quality of the medical image. At the user level, perceived QoS is defined as the percentage of diagnoses producing the same results as those carried out with original images [44],[45]. Despite these important results and the work of ACR-NEMA on DICOM 3 (the official medical standard for data processing and transmission, which includes JPEG, too) [46], no compression standard has obtained global agreement from the teleradiology service manufacturers and from the medical community.

In Yugoslavia the challenge of this attractive field of telemedicine is also in the focus of several research centers. A segment of telemedicine, named Telepathology, was built in 1997 as the cooperative work of several institutions gathered around the Military Medical Academy (MMA) in Belgrade. Up to today the telepathology network connects three medical centers: Belgrade, Nish, and Sremska Kamenica [47],[48]. From the very beginning this network has been in active use, maintaining permanent contacts and exchanges of information/diagnoses between these centers; it is open for further expansion, too. Just recently, a new intranet telemedical optical network has been established, gathering the Teleradiology and Neurophysiology departments of the MMA and connecting them to its existing Telemedical Center (TMC).

After acquisition, medical images should be processed in an appropriate way. Different pre- and post-processing methods and algorithms must be applied to improve the raw medical data, extract objects of interest, and recognize and describe particular areas in the tissue. High-level processing methods, for instance different statistics, morphometric analysis, and expert systems, facilitate the primary diagnosis [49]-[51].

In digital image processing (DIP), different methods may be applied to produce the same desired result. To better compare the efficiency of different algorithms, authors usually test them on the same pictures, called 'test images' (although they are not really standardized). For instance, at the beginning of DIP the images 'girl', 'bridge', and 'tank' were used [52]; later on, several still images such as 'boats', 'clown', 'mandrill', etc., were included even in standard image toolboxes, as in the Matlab package. For moving-picture analysis the sequences 'Miss America', 'salesman', 'foreman', 'Claire', and 'football' were very often used. But in the last two decades the most frequently used has been the image named 'Lena'. The popularity of this image provoked the digital image processing community to find this person. Recently, at the home page of Prof. Sheila Hemami, from the Cornell Visual Communications Laboratories, interested researchers can find Lena as she looks today [53]. The part of this JPEG image representing a good-looking woman at the bottom right of Fig. 1 is today's Lena.

In modern multimedia traffic, statistical multiplexing (SM) is a useful mechanism for reducing the bandwidth requirement of bursty and VBR traffic sources. In essence, SM is a spatial aggregation mechanism by which several individual streams are asynchronously superposed and transported over the same channel. Consequently, the resulting aggregate traffic is expected to exhibit smoother bit-rate behavior than the original streams. Temporal smoothing (TS) can also reduce the variability in video traffic [8]. Its general idea is to introduce a buffer in the path of the stream, either at the sender or at the receiver. The smoothing buffer has an integration effect, i.e., it acts as a low-pass filter by averaging the bit rate over a time window whose length is determined by the buffer size and its drain rate. Finally, in broadcast-type video applications (e.g., HDTV, distance learning) the per-stream bandwidth requirement can be significantly reduced by means of multicasting. Multicasting is particularly efficient for video-on-demand (VoD) applications in which a large number of recipients request a small number of videos. To support such VoD services, the server must simultaneously multicast multiple copies of each movie, such that they have different playback times. A client that requests a movie has to wait until the start of one of these copies. Multicasting need not be limited to a single sender. In conferencing scenarios it is usual to have several senders who normally do not use the resources at the same time (usually only one person is speaking). Note also that in traffic management the delivery of audiovisual data to large receiver groups must take into account that the resource capabilities of the participants can vary widely: from high-speed network links and fast workstations to low-end personal computers connected via narrowband links [2],[8].
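The smoothing effect of statistical multiplexing is easy to demonstrate numerically: superposing N independent bursty streams reduces the relative variability (coefficient of variation) of the aggregate roughly by a factor of the square root of N. The on/off rates below are invented purely for illustration.

```python
import random

def coeff_var(xs):
    """Coefficient of variation: standard deviation / mean."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return var ** 0.5 / m

random.seed(7)

def bursty_stream(n):
    """A crude on/off source: 2 Mbit/s half the time, else 0.1 Mbit/s."""
    return [2.0 if random.random() < 0.5 else 0.1 for _ in range(n)]

# Superpose 16 independent streams and compare the variability of one
# stream with that of the aggregate carried on a shared channel.
streams = [bursty_stream(2000) for _ in range(16)]
single_cv = coeff_var(streams[0])
aggregate = [sum(s[t] for s in streams) for t in range(2000)]
agg_cv = coeff_var(aggregate)
print(round(single_cv, 2), round(agg_cv, 2))  # aggregate is much smoother
```

This is exactly why a multiplexed link can be dimensioned well below the sum of the sources' peak rates.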

The new form of digital consumer multimedia service can be referred to as interactive television (ITV) [2]. Creating a customized entertainment experience with audio/video quality comparable to state-of-the-art conventional TV programming is the challenge of ITV. Each viewer of an ITV application is totally independent. This requires that a different application can be executed for each viewer, although the stored media elements are shared. A special class of servers known as gateway servers provides the navigation and other functions required to support the consumer's interface for the selection of services. These servers offer "menus" of available services, accept the consumer's selection, and then pass control to the server providing the selected service. New technologies also offer virtual reality (VR) and/or augmented reality (AR) capabilities, permitting not only 3D graphics models but even the impression of real-world scenes. Virtual shopping, virtual traveling, virtual museums and libraries, even virtual surgery, are already reality [1]. The convergence of television and the Internet has led to a new service called WebTV [54], derived from the Iacta LLL Group. Launched in the fall of 1996, WebTV was the first commercial service that empowered mainstream, noncomputer users to access information and communicate via their TV sets. As inferred in Iacta's WebTV Report, located at the Web address http://iacta.com/studies.htm, this user-friendly and low-cost service was developed since "...a very large portion of the user base consists of people who would have never purchased a computer, feeling that they were too complicated to use, too much trouble to learn, or too expensive for the limited uses they imagined." In other words, this service is "Internet for the rest of us," targeted at non-PC users.


3 Teletraffic in the Broadband Era – The End of Poisson Modeling

The teletraffic theory originally represented mathematics applicable to the design, control, and management of the public switched telephone network (PSTN). It included statistical inference, modeling, optimization, queuing, and performance analysis. The theory of how humans initiate telephone calls, how long they talk, and what the bandwidth is, as well as similar statistical inferences, preceded the development of teletraffic modeling. Some general conclusions were derived [16]:

- Traffic fluctuations could be predicted very well within an hour, based on the Poisson (exponential) model,
- Variations from hour to hour were significant, but they followed a repeatable pattern ('busy hour'),
- Seasonal variations were predictable, too,
- Traffic variations from year to year were slow and steady, and forecasting based on a linear model was established,
- In the case of excessive traffic, new calls were denied access,
- The conventional public switched telephone network had a highly static nature.

3.1 The end of Poisson modeling

The static nature of traditional telephony contributed to the general belief that some universal laws could be derived from the statistical parameters describing the main traffic performances. Based on the assumption of mutually independent call arrivals and an exponential distribution of call interarrival times, the Poisson teletraffic model was established. Its main advantage is its parsimony, as it is described by only a few parameters. The acceptance of the model also stemmed from the fact that holding times in traditional telephony obeyed the exponential law, too. Among the facts important for the first fifty years of teletraffic theory were: predictable growth rates (allowing easy capacity planning), fully centralized network control, and strictly regulated and monitored offered services (though not many of them).

The highly static nature of the PSTN did not push researchers toward continuous traffic measurements. However, the analytical model derived from the Poisson assumption was mathematically tractable and simple enough for implementation. It allowed the prediction of many performance measures. This was the beginning of queuing theory and its application in conventional telephony. Most of the significant performance measures, such as blocking rate, average waiting time, and throughput, became easy to estimate. For almost fifty years the initial model remained unchanged.
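The blocking-rate estimate mentioned above is the textbook Erlang B formula; its standard numerically stable recursion is sketched below (a classical result of the queuing theory referenced here, not code from the paper).

```python
def erlang_b(offered_load: float, circuits: int) -> float:
    """Blocking probability for `offered_load` erlangs offered to
    `circuits` trunks, via the recursion
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# 2 erlangs offered to 2 trunks: 40 % of calls are blocked;
# adding a few trunks drives the blocking probability down quickly.
print(round(erlang_b(2.0, 2), 4))   # 0.4
print(round(erlang_b(2.0, 6), 4))
```

It is precisely this kind of closed-form, Poisson-based result that made the static PSTN so easy to dimension, and whose assumptions broke down with data traffic.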

With the appearance of fax and data transmission, problems were observed in the network, which was still populated mainly by telephone traffic. One of the first signs was 'not getting the dial tone' when trying to make a phone call. The rapid growth of the Internet made this more severe (we wait for access to the provider).

How can this be explained? First, the network had been planned according to the Poisson theory, but communications between machines/computers bear no similarity to conversations between humans. Accesses are not completely independent (night by night), and the duration of calls is not exponentially distributed. Old telephony needed a fixed amount of bandwidth. Data transfer means that traffic is highly varying, while the duration of connections, as well as the packet lengths, are almost unpredictable. Packet switching means that the header of the packet determines the destination, enabling transparent routing through the network. There is no centralized traffic control. In the case of congestion, packets (or cells in ATM) are dropped. In some cases backward messages are provided, thus enabling feedback control of the packet rates. Feedback can be successfully used to manage multimedia traffic over ATM-based networks, as we noted earlier.

Traffic measurement in the presence of data gave surprising results: bursts of packets and high variations in their rates were observed. One of the first papers on this problem [16] reported results for Ethernet traffic collected over a long period; the shortest trace contained two hours of traffic. Statistical evidence for the failure of Poisson modeling was established; such traffic behavior was unknown before that moment. The authors gave an 'intuitive interpretation' that 'traffic spikes (causing actual losses) ride on longer-term ripples, that in turn ride on still longer term swells'. That was a very descriptive explanation of fractal-like traffic.

Is there any mathematics behind this? To derive it, we need to find some invariants [19]. However, it is not simple to derive such quantities: the number of users in the network varies, the network topology changes, and new services keep appearing, often demanding enormous bandwidth. Rutkowski said [55] that the 'Internet is its own revolution'. This is indeed true: between 1985 and 1995 the number of bytes-per-day increased from 10^5 to 10^9, and in the last two years the number of connections-per-day was of the same order of magnitude. Since 1993, the number of HTTP connections at the Lawrence Berkeley National Laboratory [17] has tended to double every 7-8 weeks. With the Internet, most of the parameters are growing. On the other hand, the median, although a highly robust statistic for describing many processes, is not well suited to describing modern traffic: the median data transfer does not correspond to the explosive growth of the Internet; in 1992 it decreased by a factor of two within five months (due to the numerous compression techniques introduced) [56].

It was noticed that traffic bursts occur on many different time scales and that this does not fit the Poisson model [55],[57]. There is a rather simple method for distinguishing a strictly Poisson model from real traffic, illustrated by the traffic traces in Fig. 3. First, we generated a Poisson model sequence via computer simulation, presented in the first row, Figs. 3(a)-(c). The second row, Figs. 3(d)-(f), shows traces taken from the motion JPEG (MJPEG) version of the Star Wars movie,


digitized and compressed by Garrett [18]. Both sequences are given in terms of ATM cells per slot (first column, Figs. 3(a) and 3(d)); the slot is a unit interval, equal to the transfer time of one ATM cell. The second column, Figs. 3(b) and 3(e), shows the same traces at a coarser time scale: the time unit is now the duration of a slice, one thirtieth of a frame. The third column, Figs. 3(c) and 3(f), is based on the frame representation. As we can see, the Poisson traffic smooths out as the time scale increases. On the contrary, the movie sequence retains its bursty character just as at the lower time scales; the basic shape remains unchanged. The same phenomenon has been observed in different types of modern real-life traffic (Ethernet, for instance) [20].
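This smoothing of Poisson traffic under aggregation is easy to reproduce (a self-contained sketch with illustrative parameters; `poisson_counts`, `aggregate`, and `cov` are our own helper names, not code from the paper):

```python
import random

random.seed(1)

def poisson_counts(rate, n_slots):
    """Counts of Poisson arrivals per unit slot, built from exponential
    interarrival times (the Poisson model's defining assumption)."""
    counts, t = [0] * n_slots, 0.0
    while True:
        t += random.expovariate(rate)
        if t >= n_slots:
            return counts
        counts[int(t)] += 1

def aggregate(x, m):
    """Sum consecutive blocks of m values (i.e., move to a coarser time scale)."""
    return [sum(x[i:i + m]) for i in range(0, len(x) - len(x) % m, m)]

def cov(x):
    """Coefficient of variation: a simple measure of relative burstiness."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return var ** 0.5 / mean

x = poisson_counts(10.0, 20000)
# For Poisson traffic the relative fluctuations shrink like 1/sqrt(m);
# a self-similar trace would keep a comparable cov at every aggregation level.
print(round(cov(x), 3), round(cov(aggregate(x, 100)), 3))
```

Running the same aggregation on a self-similar trace, such as the Star Wars sequence, would show the coefficient of variation barely decreasing.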


Fig. 3 The ATM traffic traces of: (a)-(c) strictly Poisson modeled input; (d)-(f) the Star Wars MJPEG version as input.

3.2 The fractal nature of teletraffic

Mathematics contributed to the success of teletraffic theory in telephone networks, and researchers hoped it could do the same with data traffic. However, the mathematics relevant to old telephony relied on limited variability in time (the traffic processes are independent, or their temporal correlations decay exponentially fast) and in space (owing to independence between users, the traffic-related quantities decay exponentially fast). There is much evidence of spatial variability in data traffic, causing heavy-tailed distributions with infinite variance. Besides, the high temporal variability in traffic patterns points to long-range dependence in the data. It was shown [58] that SS7 traffic exhibits the LRD (Long Range Dependent) property, while holding times should be modeled by heavy-tailed distributions (in disagreement with the exponential decay of the Poisson model). Further, there exists a very important problem of network-wide traffic synchronization as a consequence of periodicity in machine-generated IP traffic (e.g., in routing messages) [59]. We can conclude that the statistical descriptors of data traffic lead to fractal modeling of the teletraffic. In 1993, a group of authors [20]

reported the results of a massive study of Ethernet traffic and demonstrated that it had a self-similar (or fractal) character. This means that the traffic had similar statistical properties at different time scales: milliseconds, seconds, minutes, hours, days, weeks, etc. Another consequence is that merging traffic streams, as in a statistical multiplexer or an asynchronous ATM switch, does not smooth the traffic. (Note that using neural network (NN) control in an ATM node can produce a smoother aggregate stream [60].) Rather, multiplexed bursty data streams tend to produce a bursty aggregate stream. Experimental and real-life measurements show self-similarity in ATM traffic, compressed digital video streams, and Web traffic between browsers and servers. Although a number of researchers had observed over the years that network traffic did not always obey the Poisson assumption used in queuing analysis, network theorists split into two camps: one advocated that the entire network theory had to be rewritten, the other disagreed [61]. Numerous methods enable the description of the fractal parameters. The most important is the Hurst exponent H, an indicator of long-range dependence in the data. This effect, indicating non-periodic cycles, is also known as the


Joseph effect2 [62]. The exponent α indicates another feature, the heavy-tailed behavior evident in the distributions of many quantities describing the traffic. This phenomenon, indicating the effect of catastrophe, is referred to as the Noah effect.

Determining the fractal behavior, as well as estimating the main indicators (H, α, etc.), may be done through many different procedures. Among them, the index of dispersion for counts (IDC), obtained at different aggregation levels, is a simple indicator of fractal behavior [20]. Simulation results for three different traffic traces are given in Fig. 4. The first was obtained for the artificially generated Poisson traffic (the same sequence as in Figs. 3(a)-(c)). Its IDC is nearly constant, in agreement with theory; the slope estimated by linear regression equals –0.11955, corresponding to H=0.44. Such a process is said to be short-range dependent (SRD). Note that in the case of H≈0.5, the process is of random-walk type [20],[21].

The IDC for the Star Wars movie, Fig. 4, was calculated for the whole movie (120 minutes). It exhibits a slope of 0.47232, corresponding to H=0.73616. The third curve corresponds to two hours of real-life Ethernet traffic available on the Web [63]; its IDC slope equals 0.39555, corresponding to H=0.69778. The results obtained (0.5<H≤1) undoubtedly indicate that both real traces are LRD processes. The diagrams in Figs. 3(d)-(f) provide an illustrative indication of the fractal behavior of the movie sequence: the fact that the curves look the same at different time scales explains why they are called self-similar.
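A minimal version of the IDC test can be sketched as follows (our re-implementation, not the code behind Fig. 4; the synthetic input is short-range dependent, so the estimate should land near H = 0.5):

```python
import math
import random

random.seed(7)

def idc(x, m):
    """Index of dispersion for counts at aggregation level m:
    variance over mean of the m-aggregated series."""
    blocks = [sum(x[i:i + m]) for i in range(0, len(x) - len(x) % m, m)]
    mean = sum(blocks) / len(blocks)
    var = sum((b - mean) ** 2 for b in blocks) / len(blocks)
    return var / mean

def hurst_from_idc(x, levels):
    """Fit log IDC(m) against log m; the slope s gives H = (s + 1) / 2."""
    pts = [(math.log(m), math.log(idc(x, m))) for m in levels]
    n = len(pts)
    sx = sum(u for u, _ in pts); sy = sum(v for _, v in pts)
    sxx = sum(u * u for u, _ in pts); sxy = sum(u * v for u, v in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (slope + 1) / 2

# Independent (SRD) counts: the IDC curve stays flat, so H comes out near 0.5.
x = [sum(random.random() < 0.3 for _ in range(10)) for _ in range(50000)]
print(round(hurst_from_idc(x, [1, 4, 16, 64, 256]), 2))
```

Fed an LRD trace instead, the same fit would return the growing IDC slope and an H between 0.5 and 1, as reported for the movie and Ethernet traces.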


Fig. 4 Index of dispersion (IDC) for different traffic types.

Notice that we have analyzed two real-life traces of different natures. The Star Wars movie is an example of isolated fractal behavior: it was produced without any interaction with a network or any other source [22]. Such a sequence (Garrett's was the first) was the strongest confirmation of the LRD feature and pushed researchers into a new field of exploration: traffic may be fractal regardless of the type of network used. On the other hand, the Ethernet sequence belongs to the group of interaction-induced fractal behavior, resulting from the interplay of network actions (for instance, packet dropping), protocol actions (changing the transmission rate,

2 The terms Joseph and Noah effects were suggested after the two Bible legends.

or re-transmitting the lost information), and user actions (generating connections with diverse workloads).

Many analytical studies have shown that self-similar network traffic influences network performance, causing increased queuing delay and packet loss rate [64],[65]. On the other hand, it was found in [66] that long-range dependence is unimportant for buffer occupancy when there is strong short-range dependence and the Hurst parameter is not very large (H<0.7).

One practical effect of self-similarity is that the buffers needed at switches and multiplexers must be bigger than predicted by traditional queuing analysis and simulations. These larger buffers create greater delays in individual streams than originally anticipated. Moreover, scale-invariant burstiness implies concentrated periods of high activity over a wide range of time scales, which affects congestion control: a dynamic congestion control strategy is difficult to implement in the 'classic' way. Note that by implementing a neural network strategy it is possible to improve the congestion control.

As the traffic self-similarity (described by the α parameter of the Pareto file-size distribution, related to the Hurst parameter by α = 3 – 2H) and the network resources (buffer capacity, bottleneck bandwidth) vary, a gradual change in the packet loss rate is observed: as α approaches 1, along with a decrease in buffer capacity, packet loss increases [61].

In the context of facilitating multimedia traffic such as video and voice in a best-effort manner while satisfying their diverse QoS requirements, low packet loss, on average, can only be achieved at a significant increase in queuing delay and vice versa.

One important discovery is that the higher the load on the Ethernet, the higher the degree of self-similarity. When the network load is in the range of 30-70 percent, the traffic waveforms display self-similarity with H approximately equal to unity. Furthermore, a load between 80 and 99 percent produces waveforms with a strong periodic component, and the calculation of H becomes unreliable [61].

3.3 The multifractal nature of teletraffic

Consider the curve representing the Star Wars traffic, depicted in Fig. 3(f). The total number of cells in a unit interval T=1 (for instance, the 120 minutes of the movie) can be used as a measure µ describing the traffic. In the next step we divide the unit T into sets of equal size (say, into frames, as in Fig. 3(f)). Every frame i has its own measure µ(Ti)=µ·µi, obtained in a multiplicative process [23] according to the movie dynamics. As the initial measure has to be preserved, obviously Σi µi = 1. If we proceed with partitioning3 of the new sets (for instance, down to the slice scale), we get new measures µ(Tij)=µ·µi·µj, with Σj µj = 1. The unit interval T (equal to the whole duration of

3 The usual description of the multifractal processes is based on the binomial measure and dyadic intervals, for convenience. However, according to [25], any type of fragmentation may be used.


the movie) fragments into smaller and smaller components according to a fixed rule, while at the same time the measure, i.e., the amount of cells, fragments according to another rule. Such a process is called a multiplicative process, or a cascade. The measure of any fragment is obtained by multiplying the initial measure, and a normalization is performed at every step. Finally, it is interesting to characterize the irregularity in the distribution of the measures so obtained.
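The cascade just described can be sketched with the binomial measure mentioned in footnote 3 (a textbook construction, not the movie data; the multiplier m0 = 0.6 is an arbitrary illustrative choice):

```python
def binomial_cascade(depth, m0=0.6):
    """Conservative binomial cascade: every interval splits in two, one half
    receiving the fraction m0 of its measure and the other 1 - m0, so the
    total mass is preserved at each stage (the mass conservation law)."""
    measure = [1.0]
    for _ in range(depth):
        measure = [mu * m for mu in measure for m in (m0, 1.0 - m0)]
    return measure

mu = binomial_cascade(12)
print(len(mu), round(sum(mu), 9))  # 2^12 fragments, total mass still 1
```

After twelve stages the 4096 fragment measures are already highly irregular, spanning several orders of magnitude, which is exactly the irregularity the multifractal spectrum characterizes.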

Let us define the density µ(T)/ε, where ε is the size of the smallest set in the procedure of fragmentation of T. As fractal quantities are inherently nonlinear, logarithms are taken, yielding the coarse Hölder exponent α = log(µ)/log(ε) [24],[25]. Notice that in investigating the LRD property by second-order analysis (periodogram, variance, IDC, etc.), we lose the evidence of differences between traffic types. Some applications require tracking short-term variations, and especially the maximum amplitudes of some traffic descriptors. Local information about the traffic is obtained from the Hölder exponent at each point: it describes the singularity there. The basic idea of multifractal analysis is to classify the singularities by strength, i.e., according to the singularity exponent α. The global information is then obtained by characterizing the statistical distribution of the exponents α (the multifractal spectrum).

Notice that computing the multifractal spectrum f(α) is useful because it enables us (if the scaling holds over many orders of magnitude) to estimate the spectrum at a coarse resolution and infer the statistical behavior at high resolution, i.e., over short time intervals [24]. These spectra have been calculated for two cases: the Star Wars movie, Fig. 5(a), and the measured Ethernet traffic, Fig. 5(b). The procedure applied was based on the histogram method [23]. How can these spectra be explained? Note that one of the assumptions in the fragmentation process is that the measure is split among the intervals of fragmentation (Σi µi = 1); this is called the mass conservation law [25]. Suppose that the multipliers are mutually independent and that we use a very large number of fragments. Taking the logarithm of a measure (for instance log(µ·µi·µj)) turns the product into a sum; then, by the central limit theorem (in the case of finite second moments for log(µi)), the sum is Gaussian. As a consequence, a lognormal distribution of the measure is obtained.

The calculated spectrum should be concave ('∩' shaped). Failure to be concave is a sign that the traffic is not multiplicative. For instance, if there is an additive component in the signal, an extra '∩'-shaped part would appear in the spectrum.

The Star Wars movie gives an almost concave spectrum (the scattered points in Fig. 5(a)); a polynomial approximation is represented by the smooth curve. The main shape tells us that the process is multiplicative. Admittedly, dispersed points at both sides of the spectrum are evident, accentuated by the small dotted parabolas. This is a sign of the existence of additive components in the observed (Star Wars) traffic. We believe that some special effects in the

movie could produce these components, but this is a suggestion for possible further investigation. The second multifractal spectrum, obtained for two hours of Ethernet traffic [63], is shown in Fig. 5(b). The conclusion is that this traffic is composed of a few types of traffic, as the dotted parabolas point out.
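The histogram method can be sketched on a synthetic binomial cascade (our illustrative re-implementation, not the procedure used to produce Fig. 5; the multiplier m0 = 0.6 and the bin count are arbitrary choices):

```python
import math

def binomial_cascade(depth, m0=0.6):
    # conservative cascade: each interval splits into fractions m0 and 1 - m0
    measure = [1.0]
    for _ in range(depth):
        measure = [mu * m for mu in measure for m in (m0, 1.0 - m0)]
    return measure

def multifractal_spectrum(measure, bins=15):
    """Histogram method: coarse Hoelder exponents alpha = log(mu)/log(eps),
    then f(alpha) ~ log N(alpha) / log(1/eps), where N(alpha) counts the
    intervals whose exponent falls in each alpha-bin."""
    eps = 1.0 / len(measure)
    alphas = [math.log(mu) / math.log(eps) for mu in measure]
    lo, hi = min(alphas), max(alphas)
    width = (hi - lo) / bins
    counts = [0] * bins
    for a in alphas:
        counts[min(int((a - lo) / width), bins - 1)] += 1
    return [(lo + (i + 0.5) * width, math.log(n) / math.log(1.0 / eps))
            for i, n in enumerate(counts) if n > 0]

spec = multifractal_spectrum(binomial_cascade(14))
peak = max(f for _, f in spec)
print(round(peak, 2))  # the concave spectrum peaks below 1 at finite depth
```

For this cascade the spectrum is the expected '∩' shape; on measured traffic the same histogram would reveal the extra side lobes discussed above.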


Fig. 5 Multifractal spectra for: (a) the Star Wars movie; (b) the measured Ethernet traffic.

Note that, besides characterizing modern teletraffic, multifractal parameters can be used for image content description. By exploiting the local and global image descriptors given by the exponent α and the spectrum f(α), respectively, different image processing tasks can be addressed, for instance edge detection, image segmentation, image denoising, and object classification [67]-[71].

3.4 Teletraffic modeling

Throughout this paper we have argued that actual network traffic is fractal in nature, i.e., bursty over a wide range of time scales. Observed over small time scales, the traffic shows an additive and multiplicative structure, connected with network-specific mechanisms that operate on such time scales. It was shown [24] that conservative cascades, explained above, are inherent to wide-area network traffic and are a traffic invariant for such networks. Further, it was demonstrated [24]-[27] that the described multiplicative structure can co-exist with self-similarity. On the other hand, the self-similar scaling property over large time scales is due to the fact that session sizes exhibit infinite variance.

All these facts are in sharp contrast with the common traffic models used in engineering theory and practice, where exponential assumptions still dominate [26],[27]. Such models are able to reproduce bursty behavior in a limited range of time scales, but none of them captures the real traffic


properties [24]-[31]. According to [55], the switch from Poisson to fractal modeling (and thinking) had a major impact on our understanding of actual network traffic. We will add that it forces us to abandon best-fitting models (the traffic keeps changing over time) and the exact, mathematically tractable (but too clumsy) queuing theory. Instead, it is necessary to gain a qualitative understanding of the implications of the dominant features of the measured traffic. It was suggested that structural models [31] be created, taking into account the way the data had been created and the hierarchies of networking functions. The authors argue that the frequently used 'black-box modeling' is not well suited to this purpose.

Intensive experiments on live networks confirm the fractal behavior of multimedia traffic. The research group at the Department of Information Engineering, University of Pisa (Italy), carried out various measurements over broadband LAN/MAN networks in middle-north Italy, observing real-life multimedia traffic between Pisa, Florence and Siena [44],[72]-[74]. Their analysis concentrated on Hurst parameter estimation using the IDC method, with special attention devoted to the field of telemedicine. The Hurst indices obtained indicate that isolated video sources and traditional data traffic are well described by self-similar (fractal) processes; on the contrary, isolated voice sources are well described by ON-OFF models [72]. From the traffic analysis they concluded that the IP network protocol is a very promising candidate for providing global multimedia services over broadband islands interconnected by an ATM network. They also inferred that the long-memory property has to be taken carefully into account in dimensioning the future global telecommunication network. Some of their measured data were made publicly available on the Web [75].

We analyzed their measurement data, and here we present some of the conclusions we derived. We concentrated on the traffic measured on the Ethernet network in Pisa over the same time interval of about 9 hours of usual working time, from 9:40:12 to 18:45:24, but on two different days: Friday, 16-Jan-98, and Monday, 19-Jan-98 [75]. Intuitively, we expected the traffic to differ between these two days, since the former is the last and the latter the first working day of a week. Fig. 6 presents the box diagrams of the measured traffic.


Fig. 6 The box diagrams of telemedicine traffic.

The global forms of the two box diagrams in Fig. 6 are very similar, but slight differences do exist. The large deviation at the 95th percentile points to an LRD process (the Noah effect), while the median and mean values show no significant difference between the two days, indicating an SRD component. Both processes exhibit sudden oscillations, i.e., high-frequency components, but their magnitudes are not very large, as can be seen from the corresponding periodograms, Fig. 7. The Hurst indices obtained from the periodograms have very close values: H≅0.872 (Friday) and H≅0.866 (Monday). However, comparing the box diagrams with the H-indices, some disagreement arises. Note that the box diagram belongs to linear statistics, since the averaged values (percentiles) are derived without assuming any nonlinearity; a similar observation applies to a single fractal measure such as the H-index, obtained here from the periodogram curves. The highly changeable (even explosive) nature of multimedia traffic must instead be described by more adequate parameters.
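The periodogram estimate of H used in Fig. 7 fits the low-frequency slope −γ of the power spectrum and sets H = (1 + γ)/2. A sketch of the procedure (our code, checked here only on uncorrelated noise, for which the estimate should land near 0.5):

```python
import cmath
import math
import random

random.seed(3)

def periodogram_hurst(x, n_freqs=100):
    """Fit the low-frequency log-log slope of the periodogram; for an LRD
    series S(f) ~ f**-(2H-1), so H = (1 - slope) / 2."""
    n = len(x)
    mean = sum(x) / n
    pts = []
    for k in range(1, n_freqs + 1):
        f = k / n
        s = sum((x[t] - mean) * cmath.exp(-2j * math.pi * f * t)
                for t in range(n))
        pts.append((math.log(f), math.log(abs(s) ** 2 / n)))
    m = len(pts)
    sx = sum(u for u, _ in pts); sy = sum(v for _, v in pts)
    sxx = sum(u * u for u, _ in pts); sxy = sum(u * v for u, v in pts)
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    return (1 - slope) / 2

white = [random.gauss(0.0, 1.0) for _ in range(1024)]
print(round(periodogram_hurst(white), 2))
```

Only the lowest frequencies are used in the fit, since the f^-(2H-1) law holds asymptotically near the origin; applied to the Pisa traces, this fit yields the H≅0.87 values quoted above.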

[Fig. 7 panels: log-log periodograms of the Ethernet traffic in Pisa, 9:40:12 - 18:45:24; (a) 16/01/98, H = (1 + 0.74397)/2 = 0.87198; (b) 19/01/98, H = (1 + 0.73163)/2 = 0.86582.]

Fig. 7 Periodograms for the Ethernet traffic in Pisa, measured on: (a) Friday, (b) Monday.

However, applying multifractal analysis makes a more accurate description possible, as indicated in Fig. 8. Although the box diagrams, periodograms, and H-indices of the two processes are similar, their multifractal spectra f(α) are quite different. As noted earlier, multifractal analysis permits describing both the local and the global behavior of the process [25],[68]. Recall that points in the f(α) diagram



lying to the left of the maximum belong to the region of statistical moments q>0 (i.e., high singularity for q>1), while points on the right side correspond to q<0 (low singularity). Both f(α) spectra in Fig. 8 have most of their points in the latter range (q<0). The steep slope of f(α) for q>0 indicates the LRD behavior of the process and a constant H. Both diagrams can be approximated as a superposition of several parabolas, which points to additive components in the measured traffic. The broader f(α) diagram obtained for Monday, Fig. 8(b), may indicate a greater variety of traffic, for instance a large number of short messages/files, while the Friday traffic, Fig. 8(a), probably consists of large files. Also, the Friday spectrum exhibits a clear edge at the maximum: over a rather wide range of q it is almost linear (constant slope), pointing to the self-similar property of the traffic. For q<1 we have a zooming effect on small singularities (produced by different protocols), and it is precisely in that domain that the two diagrams differ significantly. We attribute this to the different kinds of computer jobs on the two days.

Further investigation should concentrate on deriving robust parameters of this kind, describing the type of traffic, in order to better plan network resources and traffic management. In any case, multifractal analysis is definitely the better tool for describing multimedia traffic.


Fig. 8 Multifractal spectra for the Ethernet traffic in Pisa, measured on: (a) Friday, (b) Monday.

4 Is There any Place for Neural Networks in Multimedia Traffic Control?

As noted earlier, modern communication networks comprise different services, each with its own characteristics and QoS requirements. Multimedia teleconferencing, video-on-demand, telemedicine, telemarketing, and distance learning are some examples of these emerging services. Achieving high utilization while maintaining the QoS is the objective of an efficient ATM traffic management strategy. Designing such a strategy using programming techniques is a difficult task: in such large and complex telecommunication networks the traffic changes in an unpredictable fashion, with short-term and long-term variations, and the number of nodes and links is so large that traditional network control may not be effective due to the high degree of complexity. Modern multimedia networks need adaptive and intelligent systems in order to provide high network reliability, accurate traffic prediction, efficient use of channel bandwidth, and optimized network management in various, dynamically changing environments.

Artificial neural networks, commonly referred to simply as 'neural networks' (NNs), are multiple-input multiple-output nonlinear mapping structures that can learn an unknown nonlinear input-output relation from a set of examples [76]-[79]. The idea of NNs was inspired by the structure of the human brain and by our knowledge of its functionality [80]. NNs exhibit the very useful ability to solve a given problem without a priori knowledge or experience, and without a precise algorithm. A neural network consists of many simple processing units, called 'neurons', connected to each other; each neuron is a multiple-input single-output nonlinear circuit. By defining a proper processing function for each neuron, and associated weights for each interconnection (synapse), we can solve difficult tasks that cannot be performed sufficiently well by conventional algorithms. The massive parallelism of the NN structure makes neural networks very suitable for high-speed processing tasks despite their relatively slow individual neurons. Neural networks also exhibit adaptability and flexibility: an NN can track changes in input parameters and find a solution from incompletely known data [76]-[79]. Thus, adding a new (unknown) service or changing the network topology will have almost no effect on the neural, we should say 'near-optimal', network control. These are the reasons why neural networks appear to be a promising solution for the control of modern communication networks. In addition, modern VLSI technology permits various hardware realizations of NNs [81]-[86].

Neural networks can contribute to the emerging new telecommunication infrastructure by providing fast, flexible, adaptive, and intelligent control. No explicit model of the


[Fig. 9 schematic: n input buffers, x1, x2, …, xn, feed a controlled switch leading to a server/output; a Kohonen-like neural network scheduler reads the buffer status and issues the control signal to the switch.]

traffic is needed, as in traditional methods; only a good representation of the problem is required. Neural networks accompanied by fuzzy logic (based on the 'soft computing' mechanism and expert knowledge) are able to approximate complicated input-output relations by autonomously selecting significant inputs and deriving feature parameters. In this way it is possible to control an ATM network by adapting to its changing environment via learning. After the first work in this area [87], many other publications have proposed NNs for applications such as connection admission control and call routing, traffic characterization and prediction, congestion control and policing, and switch control [60],[88]-[98].

Traffic characterization and prediction are natural applications of NNs. NNs used in admission control classify traffic types as acceptable or unacceptable, while NNs used in congestion control must first predict the arrival rate so that they can suggest optimal control actions. For these purposes, feed-forward NNs with the backpropagation learning algorithm are commonly used.
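A toy version of such a feed-forward predictor (entirely our sketch: a hypothetical periodic traffic trace, a 4-input/4-hidden/1-output sigmoid perceptron, plain on-line backpropagation; it does not reproduce any of the cited experiments):

```python
import math
import random

random.seed(42)

# Hypothetical traffic trace: periodic load plus noise, scaled into (0, 1).
series = [0.5 + 0.4 * math.sin(2 * math.pi * t / 20) + random.gauss(0, 0.02)
          for t in range(300)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Weights of a 4-4-1 perceptron (last entry of each row is the bias weight).
w1 = [[random.uniform(-0.5, 0.5) for _ in range(5)] for _ in range(4)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(5)]
LR = 0.5

def forward(x):
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w1]
    y = sigmoid(sum(w * v for w, v in zip(w2, h + [1.0])))
    return h, y

# On-line backpropagation: predict series[t] from the four previous samples.
for epoch in range(150):
    for t in range(4, len(series)):
        x, target = series[t - 4:t], series[t]
        h, y = forward(x)
        d_out = (y - target) * y * (1 - y)          # squared-error output delta
        d_hid = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(4)]
        for j in range(4):
            w2[j] -= LR * d_out * h[j]
        w2[4] -= LR * d_out
        for j in range(4):
            for i in range(4):
                w1[j][i] -= LR * d_hid[j] * x[i]
            w1[j][4] -= LR * d_hid[j]

errors = [abs(forward(series[t - 4:t])[1] - series[t])
          for t in range(4, len(series))]
print(round(sum(errors) / len(errors), 3))  # mean absolute prediction error
```

After training, the mean absolute error drops well below the amplitude of the load swings, which is the behavior an admission or congestion controller would exploit.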

Admission control implies that each new call makes a request for a connection. The request includes the QoS required by the call: if this QoS can be provided without degrading that of the existing calls, the new call is admitted; otherwise it is rejected. Thus an estimate of the QoS is required, based on monitoring traffic patterns and buffer status (i.e., the number of cells waiting in the buffers for service). The latter parameter is important in determining the cell loss probability (CLP), cell delay, and delay variations. Training an NN to learn a QoS parameter such as the CLP is a difficult task, because this parameter depends on the traffic generation rate even if the number of connections is kept constant [88]. Hence, the NN should be trained to learn the average of those values. However, due to the exponentially wide range of the CLP (from 10^0 to 10^-12), it is difficult to estimate the average value accurately. In [89], a method called the relative target was proposed to solve this problem: for a given buffer status and monitored cell loss probability, the CLP is estimated by the neural network so as to reach a relative target determined as a function of the neural network output at that time.

In some cases, it is desirable for the NN to have on-line or real-time training capabilities. In [89] the virtual output method is proposed to estimate very small CLP values from virtual cell loss data. The virtual output buffer circuit is a set of counters that simulates hypothetical buffers. The CLP of the virtual buffers is much larger than the actual one, and hence can be measured with high accuracy. The CLP values from these virtual buffers are then stored and used to train a three-layer backpropagation NN with 20 hidden neurons. Using this method, an accurate estimate of the cell loss rate (CLR) was obtained for a real buffer size of 100 cells and numbers of connections ranging from 20 to 40 voice sources.

Another approach, using an adaptive self-organizing neural network [60], was successfully applied to control input buffers (sized 100 cells) in an ATM multiple-input-single-output node, Fig. 9. The method takes into account the buffer status, xi, i=1,2,…,n, as well as the changes of the input signal (i.e., the burstiness), ∆xi(t)/∆t, thus avoiding

the buffer overflow. The flexibility and adaptability of the network are improved by applying the 'winner-take-most' algorithm instead of the 'winner-take-all' method used in the original Kohonen network. The winning input is connected to the output in an asynchronous manner through a controlled switch. Simulated ON/OFF sources, as well as the real signal of the digitized Star Wars movie, Fig. 3, were used to demonstrate the capabilities of the new scheduling method.
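The flavor of scheduling on buffer status plus burstiness can be conveyed by a toy selection rule (our deliberate simplification; the actual method of [60] uses a Kohonen-like network with winner-take-most learning, not this hand-written score):

```python
def pick_input(occupancy, prev_occupancy, capacity=100, beta=0.5):
    """Score each input buffer by its fill level plus a burstiness term
    (the recent change in occupancy) and return the buffer to serve next,
    so that fast-filling buffers are drained before they overflow."""
    scores = [occ / capacity + beta * (occ - prev) / capacity
              for occ, prev in zip(occupancy, prev_occupancy)]
    return max(range(len(scores)), key=scores.__getitem__)

# Buffer 0 is fullest, but buffer 1 is filling much faster and wins:
print(pick_input([60, 55, 20], [58, 30, 20]))  # → 1
```

A pure occupancy-based rule would have served buffer 0 here; including the rate-of-change term is what lets the scheduler react to bursts before overflow occurs.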

In [90], key network performance parameters such as delay, loss, and jitter are observed while various combinations of calls are carried, and their relationship is learned by an NN. The approach estimates the traffic's entire congestive behavior from its impact on the output queue, via measurements of quantities such as mean cell delay, cell loss, and jitter.

Fig. 9 Neural network ATM scheduler [60].

Traffic policing mechanisms based on NNs have proven more effective than algorithmic ones, due to the nonlinear and time-varying nature of the traffic. A policing algorithm should be capable of the following: 1) detecting any non-regular traffic situation; 2) selecting the range of checked parameters (i.e., determining whether the user's behavior is within an acceptable region); and 3) reacting rapidly to any violations of the traffic parameters. Most existing algorithms attempt to police the peak and mean bit-rates of the traffic, but violations can pass undetected. Furthermore, there will be errors in estimating the mean bit-rate due to the extremely wide range of the CLP, as noted earlier; consequently, the algorithm ends up trying to detect violations in an already inaccurately measured parameter. A policing mechanism using NNs, called the neural network traffic enforcement mechanism, was proposed in [91], Fig. 10. This ATM traffic policing mechanism is based on estimating the probability density function of the multimedia traffic via its count process. Two neural

Page 13: Multimedia: the Impact on the Teletraffic

13

networks are used in this method. The first NN (NN1) is presented with various traffic patterns that an 'ideal nonviolating' traffic source may generate (e.g., 32 Kbps VBR voice sources). Accordingly, and after many training examples that are performed using off-line training, it learns the 'probability' that a certain number of cells arrives during a sampling period for a nonviolating traffic pattern. The second NN (NN2) is trained to learn different 'violating' patterns of the traffic from on-line measurements. Actually, both NNs are predicting the future values of the traffic count process from the past history. Their inputs are samples of the traffic count process over some monitoring period, whereas their outputs are the predicted values of those samples in the future. Hence, by comparing the outputs of both NNs, an error signal is generated which is a predictive indication that a violation may occur. The produced error signal, arising if the NNs detect any violations, is then used to drop violating cells via a 'dropping switch'.

Fig. 10. ATM traffic policing using neural network reinforcement learning controller [91].
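As a toy sketch of this mechanism, the two NNs below are replaced by trivial predictors: NN1 by the declared nominal cell rate (the off-line model of the nonviolating source) and NN2 by an on-line moving average of the observed count process; the window and threshold values are invented for illustration.

```python
import numpy as np

def police(counts, nominal_rate, window=8, threshold=2):
    """Return the indices at which the 'dropping switch' acts."""
    dropped, history = [], []
    for t, c in enumerate(counts):
        nn1 = nominal_rate                      # NN1 stand-in: off-line model
        nn2 = np.mean(history[-window:]) if history else nominal_rate
        error = nn2 - nn1                       # error signal of Fig. 10
        if error > threshold:                   # violation predicted:
            dropped.append(t)                   # the dropping switch acts
        history.append(c)
    return dropped

compliant = [5, 4, 5, 4, 5, 4, 5, 4]
violating = [5] * 10 + [12] * 10
print(police(compliant, nominal_rate=5))   # no cells dropped
print(police(violating, nominal_rate=5))   # drops once the rate ramps up
```

The structure mirrors Fig. 10: the decision is driven by the difference between the two predictions, not by a direct measurement of a single parameter.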

The number of input and hidden neurons in the NNs depends on the type of traffic variations that the NNs are expected to learn and, consequently, predict. For an ON/OFF voice source model, it was found that a backpropagation perceptron with four input neurons, four hidden neurons, and a single output produced sufficiently accurate results. Applications in real-life situations may show that more hidden neurons are needed for nonstationary traffic scenarios. Furthermore, backpropagation may not be the best choice since it may not converge fast enough, especially under the highly dynamic behavior of the traffic, as noted in [60].
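For concreteness, the 4-4-1 topology can be reproduced as a small backpropagation perceptron trained to predict the next count sample; the ON/OFF source statistics, the scaling, and the learning schedule below are invented for illustration, not taken from [91]:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ON/OFF voice source: cell counts are high during
# talk-spurts and zero during silences (parameters invented).
state, seq = 0, []
for _ in range(600):
    if rng.random() < 0.1:          # occasionally toggle the ON/OFF state
        state = 1 - state
    seq.append(rng.poisson(8) if state else 0)
seq = np.array(seq, dtype=float) / 20.0   # scale roughly into [0, 1]

# Sliding windows: 4 past count samples in, the next sample out
X = np.array([seq[i:i + 4] for i in range(len(seq) - 4)])
y = seq[4:]

# 4-4-1 perceptron: sigmoid hidden layer, linear output,
# trained with plain batch backpropagation
W1 = rng.normal(0, 0.5, (4, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, 4); b2 = 0.0
lr, losses = 0.05, []
for _ in range(200):
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # forward pass
    out = h @ W2 + b2
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    dh = np.outer(err, W2) * h * (1.0 - h)     # backpropagated error
    W2 -= lr * (h.T @ err) / len(y); b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dh) / len(y); b1 -= lr * dh.mean(axis=0)

print(f"training MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Even this tiny network exploits the strong short-range correlation of the ON/OFF process; its slow batch convergence also illustrates why backpropagation may struggle with highly dynamic traffic.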

An NN control of the congestion episodes arising in the user-network interface multiplexer was presented in [92]. The controller uses the number of cells in the multiplexer buffer as a measure of potential congestion. Reinforcement learning, similar to that in [91], is applied to generate a control signal that is fed back to the inputs in order to adapt their rates. Another approach to congestion control using an NN is presented in [93]. The results show that the new mechanism provides a greater CLR improvement than feedback congestion control with static threshold values, while the transmission delay introduced by the NN controller is also smaller than that of the static approach in most cases.
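The static-threshold baseline against which such NN controllers are compared can be sketched as follows; buffer sizes, rates, and the halving response of the sources are invented for the example:

```python
def simulate(arrivals, service=5, capacity=30, threshold=20):
    """Feedback congestion control with a static buffer threshold:
    when the queue exceeds `threshold`, sources are told to halve
    their rate in the next slot (all numbers are illustrative)."""
    q, lost, throttled = 0, 0, False
    for a in arrivals:
        if throttled:
            a //= 2                  # sources obey the feedback signal
        q += a
        if q > capacity:
            lost += q - capacity     # buffer overflow -> cell loss
            q = capacity
        q = max(0, q - service)      # serve cells in this slot
        throttled = q > threshold    # feedback for the next slot
    return lost

bursty = [10] * 20
print(simulate(bursty, threshold=30))  # threshold = capacity: control never fires
print(simulate(bursty))                # static-threshold feedback control
```

An NN controller replaces the fixed threshold and fixed response with decisions learned from the observed buffer dynamics, which is where the CLR and delay gains reported in [93] originate.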

All the examples described above indicate that neural networks can be applied to solving specific problems or as part of the overall traffic control. All of them require an understanding of the traffic behavior, and they take into account specific features of the underlying network structure.

5 Conclusion

This paper describes the main characteristics of modern teletraffic, particularly in the multimedia environment. Multimedia encompasses various services, such as the transmission of continuous signals (e.g., voice, audio, video, image sequences) as well as discrete data transfer, within the same application and in an interactive manner over digital networks [1],[2]. Nowadays multimedia comprises video-conferencing, video/movies on demand, distance learning, distributed games, telemedicine, and telemarketing (so-called interactive television, ITV), connected via local and global networks, including wireless and satellite networks. The fundamental characteristic of a multimedia network is that it must handle not only multiple types of signals, particularly images with various sizes, numbers of pixels, and pixel depths, but also multiple media: picture, voice, data, and text. Also, in traffic management, the delivery of audiovisual data to large receiver groups must take into account that the resource capabilities of the participants can vary widely, from high-speed network links and fast workstations to low-end personal computers connected via narrowband links [2],[8].

The main feature of multimedia networks is their high variability: almost every parameter varies (the number of users, the network topology, the rates, the required bandwidth). Consequently, the basic assumption of the classical telephone network, Poisson-modeled traffic, is no longer valid. Different experiments indicate the self-similar (i.e., fractal) nature of modern traffic [16]-[22]. Fractal parameters can serve as robust descriptors of the complex traffic in the broadband era. Moreover, in the multimedia environment even the fractal parameters are not well suited; the multifractal parameters [23]-[31] prove to be better descriptors.
The f(α) spectra derived from the measured traffic data (obtained as bytes per frame of the Ethernet data) indicate that the type of multimedia has a significant impact on the teletraffic. Thus, for multimedia traffic analysis, useful invariants may be based on multifractals. Further investigations should take into account different protocols and the performance of particular traffic streams.
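For readers wishing to reproduce such spectra, the standard method-of-moments estimate (partition function plus Legendre transform) can be sketched as follows; the trace here is a synthetic binomial cascade standing in for measured bytes-per-frame data:

```python
import numpy as np

def legendre_spectrum(data, q_values, scales):
    """Estimate tau(q) from the partition function of a positive
    measure (e.g., bytes per frame), then f(alpha) via the Legendre
    transform: alpha = d tau / dq, f = q * alpha - tau."""
    data = np.asarray(data, dtype=float)
    mu = data / data.sum()                        # normalized measure
    tau = []
    for qv in q_values:
        logZ, logs = [], []
        for s in scales:
            n = len(mu) // s
            boxes = mu[:n * s].reshape(n, s).sum(axis=1)
            boxes = boxes[boxes > 0]
            logZ.append(np.log(np.sum(boxes ** qv)))
            logs.append(np.log(s / len(mu)))
        tau.append(np.polyfit(logs, logZ, 1)[0])  # slope gives tau(q)
    tau = np.array(tau)
    alpha = np.gradient(tau, q_values)            # alpha(q) = dtau/dq
    f = q_values * alpha - tau                    # Legendre transform
    return alpha, f

# Hypothetical "bytes per frame" trace: a multiplicative cascade
rng = np.random.default_rng(2)
trace = np.ones(1)
for _ in range(12):                               # 2**12 = 4096 samples
    w = rng.choice([0.3, 0.7], size=trace.size)
    trace = np.column_stack([trace * w, trace * (1 - w)]).ravel()

q = np.linspace(-3, 3, 25)
alpha, f = legendre_spectrum(trace, q, scales=[2, 4, 8, 16, 32])
```

A monofractal trace collapses this estimate toward a single point, whereas a multiplexed multimedia trace produces a broad f(α) curve, which is precisely the kind of invariant suggested above.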

In order to track and control traffic parameters in multimedia communications, new complex routines have to be established. These require enormous computational power, high computing speed, and real-time control. The use of neural networks (NNs) could be a good solution for these purposes. Although this field is still in its infancy, significant progress is evident. Performance results show that the neural network approach achieves better results, more simply and faster, than algorithmic approaches implemented in digital processors. Despite the existence of powerful computers, note that even software-emulated neural networks may exhibit better performance in problem solving than standard digital computer routines. Certainly,



hardware realization of the NNs will be a more powerful choice. The fundamental problem with neural networks is how to determine the connection topology and the connection weights between neurons, whereas with computers, including parallel ones, the problem is how to program many processors.

Different types of neural networks and training techniques can be utilized to customize the neural network to a specific application. Although backpropagation networks are the most commonly used, this type of network may not be suitable for highly dynamic situations such as the status of link traffic. Some research results indicate that a bank of different NNs, each dedicated to a different type of traffic, may be a viable solution. Such a bank of NNs could react to different parts of a multifractal spectrum. To summarize, NNs can be the answer to many challenges of global multimedia network management and to the integration of multiple levels of control.

Acknowledgements

The authors are grateful to M. Garrett from Columbia University for making his video traces containing the MJPEG version of the Star Wars movie publicly available, to the NetGroup for their Ethernet statistics data, and to the research group at the Department of Information Engineering, University of Pisa (Italy), for their experimental results offered via the Web.

References

[1] D. McLeod, U. Neumann, C. Nikias, A. Sawchuk, “Integrated media systems”, IEEE Signal Processing Mag., Vol. 16, No. 1, pp. 33-43, 76, Jan. 1999.

[2] K. Rao, Z. Bojković, Packet Video Communications over ATM Networks, Prentice-Hall, N.J., 1999.

[3] I. Pitas, “A method for watermark casting on digital images”, IEEE Trans. Circ. Syst. for Video Technology, Vol. 8, No. 6, Oct. 1998.

[4] C. T. Hsu, J. L. Wu, “Hidden digital watermarks in images”, IEEE Trans. Image Processing, Vol. 8, No. 1, Jan. 1999.

[5] I. Rakočević, B. Reljin, I. Reljin, “The image authenticity confirmation by modifying spectral components”, to appear in Proc. 10th IEEE Conf. MELECON-2000, Cyprus, May 29-31, 2000.

[6] B. Leiner, V. Cerf, D. Clark, R. Kahn, L. Kleinrock, D. Lynch, J. Postel, L. Roberts, S. Wolff, “A brief history of the Internet”, http://www.isoc.org/internet-history/brief.htm

[7] D. Messerschmitt, “The convergence of telecommunications and computing: what are the implications today?”, Proc. IEEE, Vol. 84, No. 8, pp. 1167-1186, Aug. 1996.

[8] M. Krunz, “Bandwidth allocation strategies for transporting variable-bit-rate video traffic”, IEEE Comm. Magazine, Vol. 37, No. 1, pp. 40-46, Jan. 1999.

[9] The ATM Forum Technical Committee, Traffic Management Specification Version 4.1, AF-TM-0121.000, March 1999.

[10] B. Vandalore, S. Fahmy, R. Jain, R. Goyal, M. Goyal, “QoS and multipoint support for multimedia applications over the ATM ABR service”, IEEE Comm. Magazine, Vol. 37, No. 1, pp. 53-57, Jan. 1999.

[11] B. Zheng, M. Atiquzzaman, “Traffic management of multimedia over ATM networks”, IEEE Comm. Magazine, Vol. 37, No. 1, pp. 33-38, Jan. 1999.

[12] S. Keshav, R. Sharma, “Issues and trends in router design”, IEEE Comm. Magazine, Vol. 36, No. 5, pp. 144-151, May 1998.

[13] I. Mertzanis, G. Sfikas, R. Tafazolli, B. Evans, “Protocol architectures for satellite ATM broadband networks”, IEEE Comm. Magazine, Vol. 37, No. 3, pp. 46-54, March 1999.

[14] R. Goyal, R. Jain, M. Goyal, S. Fahmy, B. Vandalore, “Traffic management for TCP/IP over satellite networks”, IEEE Comm. Magazine, Vol. 37, No. 3, pp. 56-61, March 1999.

[15] TEN-155: Trans-European Network with access ports of 155 Mbps, http://www.dante.net/ten-155

[16] H. Fowler, W. Leland, “Local area network traffic characteristics with implications for broadband network congestion management”, IEEE J. on Selected Areas in Comm., Vol. 9, No. 7, pp. 1139-1149, Sept. 1991.

[17] V. Paxson, “Growth trends in wide-area TCP connections”, IEEE Network, Vol. 8, pp. 8-17, 1994.

[18] M. Garrett, “Contributions toward real-time services on packet switched networks”, PhD. Thesis, Columbia Univ., NY, 1993.

[19] V. Paxson, S. Floyd, “Wide area traffic: the failure of Poisson modeling”, IEEE/ACM Trans. on Networking, Vol. 3, No. 3, June 1995.

[20] W. Leland, M. Taqqu, W. Willinger, D. Wilson, “On the self-similar nature of Ethernet traffic”, Extended version - Proc. ACM SIGComm ’93, San Francisco, CA (USA), 1993.

[21] S. Klivansky, A. Mukherjee, C. Song, “On long-range dependence in NSFNET traffic”, TR GIT-CC-94-61, Georgia Techn, 1994.

[22] B. Ryu, “Fractal network traffic modeling: past, present and future”, http://www.hrl.com/people/ryu

[23] C. Evertsz, B. Mandelbrot, “Multifractal measures”, Appendix B in Chaos and Fractals: New Frontiers of Science, by H.O. Peitgen, H. Jürgens, D. Saupe, Springer, N.Y., 1992.

[24] R. Riedi, J. Levy Vehel, “TCP traffic is multifractal: a numerical study”, IEEE Trans. on Networking, Oct. 1997.

[25] R. Riedi, W. Willinger, “Toward an improved understanding of network traffic dynamics”, in Self-similar Network Traffic and Performance Evaluation, J. Wiley, June 1999.

[26] M. Taqqu, V. Teverovsky, W. Willinger, “Is the Ethernet data self-similar or multifractal?”, Fractals, No 5, pp. 63-73, 1997.

[27] J. Levy Vehel, R. Riedi, “Fractional Brownian motion and data traffic modeling: The other end of the spectrum”, in Fractals in Engineering, J.L.Vehel et al. Eds., Springer, 1997.

[28] A. Adas, “Traffic models in broadband networks”, IEEE Comm. Magazine, Vol. 35, No. 7, pp. 82-89, July 1997.

[29] H. Michiel, K. Laevens, “Teletraffic engineering in a broadband era”, Proc. IEEE, Vol. 85, No. 12, pp. 2007-2034, Dec. 1997.

[30] S. I. Resnick, “Heavy tail modeling and teletraffic data”, The Annals of Statistics, 25(5), pp. 1805-1869, 1997.

[31] W. Willinger, V. Paxson, “Discussion of Heavy tail modeling and teletraffic data”, http://www.aciri.org/vern/papers

[32] Medical Communications, Spec. Issue, IEEE J. Selected Areas in Communications, Vol. 10, No. 7, Sept. 1992.

[33] K. Kayser, “Telemedicine”, Wien. Klin. Wochenschr., Vol. 108, No. 17, pp. 532-540, 1996.

[34] J. R. Cox, et al., “Study of a distributed picture archiving and communications systems for radiology”, SPIE, Picture Archiving Commun. Syst. (PACS) Med. Imaging Syst., Vol. 318, pp. 133-142, 1982.

[35] M. H. Doolittle, K. W. Doolittle, Z. Winkelman, D. S. Weinberg, “Color images in telepathology: how many colors do we need?”, Human Pathology, Vol. 28, No. 1, pp. 36-41, Jan. 1997.


[36] R. A. Ulichney, Digital Halftoning, Cambridge, MA: MIT Press, 1990.

[37] K. R. Crounse, T. Roska, L. O. Chua, “Image halftoning with cellular neural networks”, IEEE Trans. Circuits Syst.-II, Vol. 40, No. 4, pp. 267-283, Apr. 1993.

[38] P. Bakić, B. Reljin, N. Vujović, D. Brzaković, P. Kostić, “Multilevel transient-mode CNN for solving optimization problems”, in Proc. 4th IEEE Int. Workshop CNNA-96, pp. 25-30, Seville, Spain, June 1996.

[39] P. Bakić, N. Vujović, D. Brzaković, P. Kostić, B. Reljin, “CNN paradigm based multilevel halftoning of digital images”, IEEE Trans. Circuits Syst.-II, Vol. 44, No. 1, pp. 50-53, Jan. 1997.

[40] Z. Bojković, C. Toma, V. Gui, R. Vasiu, Advanced Topics in Digital Image Compression, Editura Politehnica, Timisoara, Romania, 1997.

[41] M. Strintzis, S. Malassiotis, “Object-based coding of stereoscopic and 3D image sequences”, IEEE Signal Proc. Mag., Vol. 16, No. 3, pp. 14-28, May 1999.

[42] J. Whitaker, DTV: The Revolution in Digital Video (Second Ed.), McGraw-Hill, N.Y., 1999.

[43] F. A. Allaert, D. Weinberg, P. Dusserre, P. J. Yvon, L. Dusserre, P. Cotran, “Evaluation of a telepathology system between Boston (USA) and Dijon (France): glass slides versus telediagnostic TV-monitor”, in Proc. Annu. Symp. Comput. Appl. Med. Care, pp. 596-600, 1995.

[44] M. Cinotti, R. Garroppo, S. Giordano, F. Russo, “Analysis and efficient modeling of the traffic generated by broadband teleradiology applications”, in Proc. Globecom’97, Phoenix, AZ, Nov. 1997.

[45] I. Milosavljević, Determining the Minimal Quality of Digital Images of Microscopic Samples for Telemicroscopy Purposes (in Serbian), M.Sc. Thesis, Military Medical Academy, Belgrade (Yugoslavia), 1999.

[46] Digital Imaging and Communications in Medicine (DICOM): Version 3.0, Draft Standard, American College of Radiology, National Electrical Manufacturers Association, ACR-NEMA Committee, Working Group VI, Washington, D.C., 1993.

[47] I. Milosavljević, P. Spasić, D. Mihajlović, M. Kostov, S. Mihajlović, D. Mijatović, R. Marković, S. Ristić, “Telepathology – Second opinion network in Yugoslavia”, Advances in Clinical Pathology, Vol. 2, No. 2, p. 156, April 1998.

[48] B. Reljin, P. Spasić, I. Milosavljević, P. Kostić, S. Mijusković, “Telemicroscopy – the first step in telemedicine foundation in Yugoslavia”, in Recent Advances in Signal Processing and Communications, N. Mastorakis (Ed.), World Scientific Press, Danvers, MA, pp. 209-216, 1999.

[49] J. C. Russ, Image Processing Handbook (Third Ed.), CRC Press, Boca Raton, FL, 1998.

[50] B. Reljin, P. Bakić, P. Kostić, D. Brzaković, N. Vujović, “Local enhancement of images using cellular neural networks”, in Proc. Int. Symp. ISTET’95, Thessaloniki (Greece), Sept. 22-23, 1995.

[51] P. Kostić, B. Reljin, I. Milosavljević, S. Ćirković, I. Reljin, I. Rakočević, “CAMIA – Computer-aided medical image analyzer”, to appear in Proc. 10th IEEE Conf. MELECON-2000, Cyprus, May 29-31, 2000.

[52] W. Pratt, Digital Image Processing, John Wiley, N.Y., 1978.

[53] http://foulard.ee.cornell.edu/hemami/lena.jpg, “Lena as she looks today”.

[54] R. Amin, J. Miller, G. Sadik, “Television meets the Internet”, Communications Technology, pp. 48-58, Sept. 1999.

[55] W. Willinger, V. Paxson, “Where mathematics meets the Internet”, http://www.aciri.org/vern/papers

[56] S. Floyd, V. Paxson, “Why we don’t know how to simulate the Internet”, AT&T Center for Internet Research, Berkeley, CA, http://www.aciri.org, Oct. 11, 1999.

[57] V. Paxson, S. Floyd, “Why we don't know how to simulate the Internet”, in Proc. Winter Simulation Conf., Atlanta (USA), 1997.

[58] D. Duffy, A. McIntosh, M. Rosenstein, W. Willinger, “Statistical analysis of CCSN/SS7 traffic data from working CCS subnetworks”, IEEE J. on Selected Areas in Comm., Vol. 12, No. 3, pp. 544-551, April 1994.

[59] S. Floyd, V. Jacobson, “The synchronization of periodic routing messages”, IEEE/ACM Trans. on Networking, Vol. 2, pp. 122-136, 1994.

[60] I. Reljin, “Neural network based cell scheduling in ATM node”, IEEE Communications Letters, Vol. 2, No. 3, pp.78-81, March 1998.

[61] Z. Sahinoglu, S. Tekinay, “On multimedia networks: Self-similar traffic and network performances”, IEEE Comm. Mag., Vol. 37, No. 1, pp. 48-52, Jan. 1999.

[62] H. E. Hurst, “Long-term storage capacity of reservoirs”, Trans. Amer. Soc. Civil Engineers, Vol. 116, pp. 770-799, 1951.

[63] Ethernet Traffic Data, http://www.acm.org/sigcomm/ITA

[64] K. Park, G. Kim, M. Crovella, “On the relation between file sizes, transport protocols, and self-similar network traffic”, in Proc. IEEE Int. Conf. Network Protocols, pp. 171-180, Oct. 1996.

[65] K. Park, G. Kim, M. Crovella, “On the effect of traffic self-similarity on network performance”, in Proc. SPIE Int. Conf. Perf. and Control of Network Syst., pp. 296-310, 1997.

[66] D. Heyman, T. Lakshman, “What are the implications of long-range dependence for VBR-video traffic engineering?”, IEEE/ACM Trans. on Networking, Vol. 4, No. 3, pp. 301-317, June 1996.

[67] J. L. Vehel, “Fractal approaches in signal processing”, http://www-rocq.inria.fr/fractales, 1995.

[68] J. L. Vehel, “Introduction to the multifractal analysis of images”, http://www-rocq.inria.fr/fractales, 1996.

[69] J-P. Berroir, J. L. Vehel, “Medical image segmentation with multifractals”, in Proc. Int. Conf. IFCS’98, 1998.

[70] I. Reljin, B. Reljin, I. Rakočević, N. Mastorakis, “Image content described by fractal parameters”, in Recent Advances in Signal Processing and Communications, N. Mastorakis (Ed.), World Scientific Press, Danvers, MA, pp. 31-34, 1999.

[71] I. Reljin, B. Reljin, I. Pavlović, I. Rakočević, “Multifractal analysis of gray-scale images”, to appear in Proc. 10th IEEE Conf. MELECON-2000, Cyprus, May 29-31, 2000.

[72] S. Giordano, G. Perazzini, F. Russo, “Multimedia experiments at the University of Pisa: from videoconference to random fractals”, in Proc. INET ’95, pp. 543-550, Honolulu (Hawaii), June 26-28, 1995.

[73] R. Garroppo, S. Giordano, S. Miduri, M. Pagano, F. Russo, “Statistical multiplexing of self-similar VBR videoconferencing traffic”, in Proc. IEEE GLOBECOM ’97, Phoenix, AZ, Nov. 1997.

[74] R. Garroppo, S. Giordano, M. Pagano, “Queuing impact of heavy tailed OFF periods distribution in cell switching networks”, in Proc. XI ITC Specialist Seminar, Yokohama, Japan, Oct. 1998.

[75] http://131.114.9.85/projects/TandM/traces.html, NetGroup, Ethernet Statistics.

[76] P. Wasserman, Neural Computing, Van Nostrand, NY, 1989.


[77] R. Hecht-Nielsen, Neurocomputing, Addison-Wesley, 1990.

[78] S. Haykin, Neural Networks, IEEE Press, Macmillan Publishing Co., 1994.

[79] L. Tsoukalas, R. Uhrig, Fuzzy and Neural Approaches in Engineering, John Wiley, NY, 1997.

[80] A. C. Guyton, J. E. Hall, Textbook of Medical Physiology, ninth ed., W. B. Saunders Co., Philadelphia, PA, 1996.

[81] J. M. Cruz, L. O. Chua, “A CNN chip for connected component detection”, IEEE Trans. Circuits Systems, Vol. 38, pp. 812-817, 1991.

[82] T. Dominguez-Castro, S. Espejo, A. Rodriguez-Vazquez, R. Carmona, “A CNN universal chip in CMOS technology”, in Proc. 3rd IEEE Workshop on Cellular Neural Networks and their Applications (CNNA ‘94), Roma (Italy), 1994.

[83] B. Reljin, P. Kostić, T. Serdar, A. Pavasović, “Single amplifier CNN cell suitable for VLSI CMOS implementation”, Int. J. Circ. Th. & Appl., Vol. 24, No. 6, pp. 649-655, 1996.

[84] S. Bang, B. Sheu, E. Chou, “A hardware annealing method for optimal solutions on cellular neural networks”, IEEE Trans. Circuits Systems - II, vol. 43, no. 6, pp. 409-421, 1996.

[85] B. Reljin, P. Kostiæ, D. Novakoviæ, “Cellular Neural Network using switched-capacitor single-amplifier cells”, in Proc. European Conf. on Circuit Theory and Design (ECCTD’97), Vol. 2, pp. 645-649, Budapest (Hungary), Sept. 1997.

[86] R. Carmona-Galan, A. Rodriguez-Vazquez, S. Espejo-Meana, R. Dominguez-Castro, T. Roska, T. Kozek, and L. O. Chua, “A 0.5 µm CMOS analog random access memory chip for TeraOPS speed multimedia video processing”, IEEE Trans. on Multimedia, Vol. 1, No. 2, pp. 121-135, June 1999.

[87] A. Hiramatsu, “ATM communications network control by neural networks”, IEEE Trans. Neural Networks, Vol. 1, 1990.

[88] A. Hiramatsu, “Training techniques for neural network applications in ATM”, IEEE Comm. Mag., Vol. 33, No. 10, pp. 58-67, Oct. 1995.

[89] A. Hiramatsu, “ATM call admission control using a neural network trained with output buffer method”, in Proc. IEEE Int. Conf. on Neural Networks '94, 1994.

[90] R. Morris, B. Samadi, “Neural network control of communications systems”, IEEE J. Selec. Areas Comm., July 1994.

[91] A. Tarraf, I. Habib, T. Saadawi, “A novel neural network controller using reinforcement learning method for ATM traffic policing”, IEEE J. Select. Areas Comm., Aug. 1994.

[92] A. Tarraf, I. Habib, T. Saadawi, “A neural network controller using reinforcement learning for congestion control in ATM networks”, in Proc. IEEE ICC'95, 1995.

[93] Y. Liu, C. Douligeris, “Static vs. adaptive feedback congestion controller for ATM networks”, in Proc. IEEE Globecom '95, 1995.

[94] E. Nordstrom, J. Carlstrom, O. Gallmo, L. Asplund, “Neural networks for adaptive traffic control in ATM networks”, IEEE Comm. Mag., Vol. 33, No. 10, pp. 43-49, Oct. 1995.

[95] J. Neves, M. Leitao, L. Almeida, “Neural networks in B-ISDN flow control: ATM traffic prediction or network modeling”, IEEE Comm. Mag., Vol. 33, No. 10, pp. 50-56, Oct. 1995.

[96] Y.-K. Park, G. Lee, “Applications of neural networks in high-speed communication networks”, IEEE Comm. Magazine, Vol. 33, No. 10, pp. 68-74, Oct. 1995.

[97] I. Habib, “Applications of neurocomputing in traffic management of ATM networks”, Proc. IEEE, Vol. 84, No. 10, pp. 1430-1441, Oct. 1996.

[98] C. Douligeris, G. Develekos, “Neuro-fuzzy control in ATM networks”, IEEE Comm. Mag., Vol. 35, No. 5, pp. 154-162, May 1997.