
Calculating Call Blocking and Utilization for Communication Satellites that Use Dynamic Resource Allocation

Leah Rosenbaum, Mohit Agrawal, Leah Birch, Yacoub Kureh, Nam Lee
UCLA Institute for Pure and Applied Mathematics (IPAM)
460 Portola Plaza, Box 957121
Los Angeles, CA 90095-7121
[email protected]

James Hant, Brian Wood, Eric Campbell, James Gidney
The Aerospace Corporation
2310 E. El Segundo Blvd.
El Segundo, CA 90245
310-336-1388
[email protected]

Abstract— The performance of most satellite communication (SATCOM) systems is characterized by loading analyses that assess the percentage of users or total throughput a particular system can satisfy. These analyses usually assume a static allocation of resources in which users request communication resources 100% of the time and higher priority users often block lower priority users from getting service. However, the loading of more dynamic circuit networks such as the public switched telephone network (PSTN) is typically analyzed on a statistical basis where the probability of a blocked call is computed. These types of systems can potentially satisfy more users than those that use static resource allocation because they take advantage of statistical multiplexing. As SATCOM moves toward a more dynamic concept of operations (CONOPS) to take advantage of potential statistical multiplexing gains, it is crucial to develop analysis capabilities to evaluate performance.

In this paper, a method is developed to calculate call blocking, preemption, and resource utilization for dynamically allocated SATCOM systems in which users have different priorities and bandwidth requirements. The first part of the study augments the M/M/m queuing model to account for users with different priorities and bandwidth requirements. In the second part of the study, the model is used to predict the performance for two competing traffic classes with different bandwidths or priorities and to highlight important trends. Finally, the third part of the study directly compares the performance of static and dynamic resource allocation approaches.

This work was performed by The Aerospace Corporation in collaboration with a team of students representing the Research in Industrial Projects for Students (RIPS) Program. Administered by the UCLA Institute for Pure & Applied Mathematics (IPAM), RIPS provides opportunities for high-achieving undergraduate students to work in teams on real-world research projects proposed by a sponsor from industry.

TABLE OF CONTENTS

1. INTRODUCTION
2. THEORETICAL MODEL
3. PERFORMANCE OF DYNAMIC RESOURCE ALLOCATION
4. COMPARISON OF STATIC AND DYNAMIC RESOURCE ALLOCATION
5. CONCLUSIONS AND FUTURE WORK
REFERENCES

1. INTRODUCTION

Satellite communication (SATCOM) systems often have

limited resources to satisfy communication circuits which

need to be managed among competing users who have

different priorities and bandwidth needs. Most of these

systems allocate resources on a static basis in which users

are given access to communication circuits for long periods

of time in priority order. A pictorial view of this type of

allocation approach is shown in Figure 1. This example

assumes a total system capacity of 100 Mbps and 18

requested circuits with different bandwidths and priorities

(high, medium, and low).


Figure 1: Static Resource Allocation Approach

With this type of scheme, high-priority users are given their own reserved channels, regardless of their usage patterns, which causes lower priority users to be blocked and the system to be underutilized. For this example, all low priority users are blocked even though the total server utilization is less than 100 Mbps most of the time.

A dynamic resource allocation approach is shown in Figure 2, in which users are allocated resources (in priority order) only when those resources are specifically needed. For this case, the system utilization is increased and more of the lower priority users are satisfied, even though some of these users may be preempted by higher priority users.

Figure 2: Dynamic Resource Allocation Approach

The potential benefit of implementing a dynamic resource allocation scheme depends on the time-varying bandwidth needs of the different priority users. As the duty cycles of the users decrease, a dynamic allocation approach can more easily take advantage of multiplexing. In this paper, classical queuing theory is expanded to highlight some of the basic trade-offs that determine the performance of static vs. dynamic resource allocation schemes.

For this study, a SATCOM system with dynamic resource allocation is modeled as the M/M/m/0 queuing model [1, 2] shown in Figure 3, with m available circuits, no queuing buffer, and circuit arrivals and departures described by exponential distributions. This allows us to expand on classical queuing theory to estimate performance and determine trends.

Figure 3: M/M/m/0 Queuing Model

The following three types of traffic conditions are considered for the dynamic resource allocation system:

1. Single traffic type: all requested circuits have the same priority and bandwidth requirements
2. Two competing traffic classes with different priorities
3. Two competing traffic classes with different bandwidth requirements

Theoretical models for user satisfaction (or blocking/preemption probability) and system utilization are determined for these different traffic conditions, and a direct comparison is made between static and dynamic allocation approaches.

The organization of this paper is as follows. In Section 2, a theoretical model for dynamic resource allocation is developed assuming a single traffic type or two competing traffic classes with different priorities or bandwidths. In Section 3, the theoretical model is used to generate results that demonstrate some of the basic performance trends. In Section 4, the performance of static and dynamic allocation schemes is compared for two competing traffic classes with different priorities or different bandwidths. Finally, conclusions and suggestions for future work are presented in Section 5.

2. THEORETICAL MODEL

In this section, theoretical models for dynamic resource allocation are developed for a single traffic type, two competing priorities, and two competing bandwidths. To evaluate system performance, we consider the following two performance measures: call blocking/preemption probability and server (or bandwidth) utilization. A call is blocked when there are not enough servers available in the system to handle the job. Preemption occurs when a low-priority user is forced off a server by a high-priority user who requests to use the system. Server utilization describes how much of the system's bandwidth is occupied on an average basis. These measures are tracked as a function of the traffic intensity, ρ, which is the ratio of the overall arrival and departure rates of the system. A discrete-event simulation model [4] was also generated in MATLAB [3] to verify all theoretical results.

Single Traffic Type

For one job type, we can consider a stochastic processing network with m servers and a queue of length 0. As such, there can be at most m jobs in the system at any time. If a job seeks to enter the system but no free servers are available, the job is blocked. Because blocked jobs are lost forever, this system is known as the Erlang loss system [2]. A state transition diagram for this type of system is shown in Figure 4.

Figure 4: State Transition Diagram for an M/M/m/0 Queuing System

System state is defined as the number of servers that are currently occupied, and it changes whenever a new job arrives at or leaves the system. The inter-arrival times of jobs entering the system are described by an exponential distribution with an average arrival rate of λ. The time required to service each job is also described by an exponential distribution with an average service rate of μ. Once the system is in a given state, the probability of entering another state is fixed and independent of the system's past states, and thus the system can be modeled by a finite-state, continuous-time Markov process [1]. When the underlying operating policy is first-in-first-out (FIFO), the M/M/m/0 system can be described by the following infinitesimal transition rate matrix, Λ:

\Lambda = \begin{bmatrix}
-\lambda & \lambda & 0 & \cdots & 0 \\
\mu & -(\lambda+\mu) & \lambda & \cdots & 0 \\
0 & 2\mu & -(\lambda+2\mu) & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & \lambda \\
0 & 0 & \cdots & m\mu & -m\mu
\end{bmatrix}    (1)

Assuming the process is stationary and irreducible, the probability that the system is in a particular state, π, is calculated by finding the unique solution to the following two equations:

\pi \Lambda = 0    (2)

\sum_{i} \pi_i = 1    (3)

Blocking probability can then be calculated as the probability that the system is in state m (that is, that the server is fully utilized), and the mean system occupancy (or server utilization) can be calculated as a weighted sum of the probabilities of being in each state.
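As a concrete illustration, the sketch below builds the birth-death generator of equation (1) numerically, solves equations (2) and (3) for π, and reads off the blocking probability and mean utilization. It is a minimal example rather than the MATLAB simulation used for verification in this paper, and the values of m, lam, and mu are illustrative.

```python
# Minimal sketch: stationary distribution, blocking probability, and utilization
# of an M/M/m/0 loss system, computed directly from the generator matrix.
# The parameter values (m, lam, mu) are illustrative, not taken from the paper.
import numpy as np

def mmm0_performance(m, lam, mu):
    # State k = number of occupied servers; arrivals at rate lam, departures at k*mu.
    Q = np.zeros((m + 1, m + 1))
    for k in range(m + 1):
        if k < m:
            Q[k, k + 1] = lam        # an arriving job is admitted
        if k > 0:
            Q[k, k - 1] = k * mu     # one of the k busy servers completes
        Q[k, k] = -Q[k].sum()        # generator rows sum to zero
    # Solve pi @ Q = 0 together with sum(pi) = 1 (equations (2) and (3)).
    A = np.vstack([Q.T, np.ones(m + 1)])
    b = np.zeros(m + 2)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    blocking = pi[m]                                        # probability all m servers are busy
    utilization = sum(k * pi[k] for k in range(m + 1)) / m  # mean occupancy fraction
    return blocking, utilization

if __name__ == "__main__":
    print(mmm0_performance(m=10, lam=8.0, mu=1.0))
```

For the single traffic type, the blocking probability computed this way reduces to the classical Erlang-B formula with offered load λ/μ.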

A similar analysis can be done for a dynamic allocation system with two competing priorities; however, the state transition diagram and infinitesimal transition rate matrix will now be different. To gain some insight into how to develop a state transition diagram for competing priorities, consider the m = 1 case (shown in Figure 5).

Figure 5: Continuous-Time Markov Chain for Competing Priorities

An m = 1 system with two competing priorities can be in one of three states: unoccupied ('0'), servicing a low-priority circuit ('low'), or servicing a high-priority circuit ('high'). State transitions are determined by the arrival rates of high (λH) and low (λL) priority circuits and the service rate for those circuits (μ). The infinitesimal transition rate matrix, Λ, for this system (with the states ordered 0, low, high) is given by equation (4).


\Lambda = \begin{bmatrix}
-(\lambda_H + \lambda_L) & \lambda_L & \lambda_H \\
\mu & -(\mu + \lambda_H) & \lambda_H \\
\mu & 0 & -\mu
\end{bmatrix}    (4)

This model can be extended to any value of m, and the blocking probability and server utilization can be calculated based on equations (2) and (3) for different values of λH, λL, and μ. These values can be used to determine the corresponding traffic intensities for the high and low priority traffic, defined as ρH = λH/(mμ) and ρL = λL/(mμ), respectively.
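The m = 1 chain of Figure 5 and equation (4) is small enough to work through directly. The sketch below, with illustrative rates, computes the loss probability of each class; for the low-priority class it lumps blocking and preemption together by comparing completed calls to offered calls, an assumption chosen to match the blocking/preemption measure used elsewhere in the paper.

```python
# Sketch of the m = 1 preemptive-priority loss system of Figure 5.
# States: 0 = idle, 1 = serving a low-priority call, 2 = serving a high-priority call.
# The rates lam_h, lam_l, and mu are illustrative values, not taken from the paper.
import numpy as np

def priority_m1(lam_h, lam_l, mu):
    Q = np.array([
        [-(lam_h + lam_l), lam_l,          lam_h],  # idle: low or high arrival
        [mu,              -(mu + lam_h),   lam_h],  # low call completes, or is preempted by a high arrival
        [mu,               0.0,            -mu  ],  # high call completes (low calls cannot preempt it)
    ])
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    high_loss = pi[2]                    # a high arrival is lost only if a high call is already in service
    low_loss = 1.0 - mu * pi[1] / lam_l  # low calls lost to blocking or preemption: 1 - completed/offered
    utilization = pi[1] + pi[2]
    return high_loss, low_loss, utilization

if __name__ == "__main__":
    print(priority_m1(lam_h=0.4, lam_l=0.4, mu=1.0))
```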

A continuous-time Markov chain can also be defined for a dynamic allocation system with two competing bandwidths. To better understand how this is done, consider a system with a server capacity of 4 bandwidth units and two job classes: jobs requiring 1 bandwidth unit (with an arrival rate of λ1) and jobs requiring 2 bandwidth units (with an arrival rate of λ2). Assuming both traffic classes have the same service rate, μ, the Markov chain shown in Figure 6 can be used to describe the system.

Figure 6: Continuous-Time Markov Chain for Competing Bandwidths

Note that this chain is two-dimensional, and the ordered pair (n1, n2) indicates the state of the system having n1 jobs of the first class (those requesting 1 bandwidth unit) and n2 jobs of the second class (those requesting 2 bandwidth units). An infinitesimal transition rate matrix can be generated for this Markov chain, or for any two competing bandwidths with an arbitrary number of servers. The blocking probability and server utilization can then be calculated as a function of the total traffic intensity and the ratio of arrival rates for the two classes. For the case of competing bandwidth classes, the total traffic intensity is given by equation (5), where λi is the arrival rate for bandwidth class i, Bi is the number of servers (or bandwidth units) requested by class i, m is the total number of servers (or bandwidth units), and μ is the service rate for each user.

\rho = \frac{\sum_i \lambda_i B_i}{m \mu}    (5)
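A possible numerical sketch of this calculation is shown below. It enumerates the feasible states (n1, n2), builds the corresponding transition rate matrix, solves for the stationary distribution, and reports the per-class blocking probabilities, the bandwidth utilization, and the traffic intensity of equation (5). The capacity and bandwidths match the Figure 6 example, but the arrival rates are illustrative.

```python
# Sketch of the two-bandwidth loss model: class i jobs need B[i] of the m bandwidth
# units, arrive at rate lam[i], and share a common service rate mu.
# States are pairs (n1, n2) with n1*B[0] + n2*B[1] <= m. Arrival rates are illustrative.
import numpy as np
from itertools import product

def two_bandwidth_performance(m, B, lam, mu):
    states = [(n1, n2)
              for n1, n2 in product(range(m // B[0] + 1), range(m // B[1] + 1))
              if n1 * B[0] + n2 * B[1] <= m]
    index = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (n1, n2), i in index.items():
        used = n1 * B[0] + n2 * B[1]
        if used + B[0] <= m:
            Q[i, index[(n1 + 1, n2)]] = lam[0]    # class-1 arrival admitted
        if used + B[1] <= m:
            Q[i, index[(n1, n2 + 1)]] = lam[1]    # class-2 arrival admitted
        if n1 > 0:
            Q[i, index[(n1 - 1, n2)]] = n1 * mu   # class-1 departure
        if n2 > 0:
            Q[i, index[(n1, n2 - 1)]] = n2 * mu   # class-2 departure
        Q[i, i] = -Q[i].sum()
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    blocking = [sum(pi[i] for (n1, n2), i in index.items()
                    if n1 * B[0] + n2 * B[1] + B[k] > m) for k in range(2)]
    utilization = sum(pi[i] * (n1 * B[0] + n2 * B[1]) for (n1, n2), i in index.items()) / m
    rho = (lam[0] * B[0] + lam[1] * B[1]) / (m * mu)   # total traffic intensity, equation (5)
    return blocking, utilization, rho

if __name__ == "__main__":
    # Figure 6 setup: capacity of 4 bandwidth units, classes needing 1 and 2 units.
    print(two_bandwidth_performance(m=4, B=(1, 2), lam=(1.0, 0.5), mu=1.0))
```

The same construction scales to the 100-server, bandwidth 1 and 10 case analyzed in Section 3, since the state space remains modest.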

3. PERFORMANCE OF DYNAMIC RESOURCE ALLOCATION

In this section, the theoretical models developed in Section 2 are used to highlight some of the basic trends of a dynamic resource allocation system with a single traffic type, two competing priorities, and two competing bandwidth classes.

Single Traffic Type

Figure 7 shows the performance of a dynamic resource allocation system with a single traffic type. Blocking probability and mean server utilization are plotted in (a) and (b), respectively, for different numbers of servers, m. As expected, both blocking probability and server utilization increase with increasing traffic intensity; however, performance is much better for larger m. This implies that systems with a larger number of servers (greater bandwidth resolution) have less blocking and can use resources more efficiently.

Figure 7: Performance for a Single Traffic Type
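This scaling trend can also be illustrated with the classical Erlang-B recursion, which gives the blocking probability of an M/M/m/0 system. The short sketch below holds a normalized load fixed (an illustrative value of 0.8) and shows blocking falling and utilization rising as m grows, consistent with the multiplexing argument above.

```python
# Erlang-B recursion: blocking probability of an M/M/m/0 system with offered load a = lam/mu.
def erlang_b(m, a):
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

if __name__ == "__main__":
    rho = 0.8                      # illustrative normalized traffic intensity
    for m in (1, 10, 100):
        a = rho * m                # offered load that keeps the normalized intensity fixed
        blocking = erlang_b(m, a)
        utilization = a * (1 - blocking) / m
        print(m, blocking, utilization)
```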


Two Competing Priorities

Assuming a queuing system with 100 servers (m = 100), the performance of the high and low priority classes is plotted in Figure 8 and Figure 9, respectively. Figure 8 (a) and (b) plot the blocking probability and server utilization for high priority traffic as a function of the high priority traffic intensity, ρH, for different arrival ratios of high and low priority traffic (λH/λL). The performance of an equivalent single-traffic-type system (M/M/100) is superimposed on the plots to assess the effect of prioritization. Results show that high priority traffic only competes with itself and its performance is completely determined by the high priority traffic intensity. Regardless of the ratio of arrivals of high and low priority traffic, high priority traffic performs identically to an M/M/100 system at the high priority traffic intensity.

Figure 8: Performance for Two Competing Priorities (High Priority Users)

Figure 9 (a) and (b) plot the blocking probability and server utilization for low priority traffic as a function of the total traffic intensity for different arrival ratios of high and low priority traffic (λH/λL). The performance of an equivalent single-traffic-type system (M/M/100) is again superimposed on the plots to assess the effect of prioritization. The performance of low priority traffic, on the other hand, is highly dependent on the ratio of high and low priority traffic. When most of the traffic arrivals are low priority (λH/λL is small), the system behaves similarly to an M/M/100 system. However, when most of the traffic arrivals are high priority (λH/λL is large), the low priority traffic is preempted and its performance degrades considerably from M/M/100.

Figure 9: Performance for Two Competing Priorities (Low Priority Users)
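These low-priority trends can be reproduced with a simple conservation argument, assuming (as in this model) that both classes request a single server and share the same service rate: under preemption, the high-priority jobs by themselves and the aggregate occupancy each evolve as M/M/m/0 processes, so the low-priority carried load is the difference of the two carried loads. The sketch below uses this shortcut with illustrative loads; it is consistent with the chain of Section 2 but is not the computation method stated in the paper.

```python
# Low-priority blocking/preemption probability for a preemptive two-priority
# M/M/m/0 system with equal service rates, via carried-load conservation.
def erlang_b(m, a):
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

def low_priority_loss(m, lam_h, lam_l, mu):
    a_tot, a_h = (lam_h + lam_l) / mu, lam_h / mu
    carried_total = a_tot * (1 - erlang_b(m, a_tot))   # aggregate occupancy is M/M/m/0
    carried_high = a_h * (1 - erlang_b(m, a_h))        # high priority sees M/M/m/0 by itself
    carried_low = carried_total - carried_high
    return 1.0 - carried_low / (lam_l / mu)            # fraction of low calls blocked or preempted

if __name__ == "__main__":
    m, mu = 100, 1.0
    for ratio in (0.1, 1.0, 10.0):          # illustrative lam_h / lam_l ratios
        lam_l = 80.0 / (1.0 + ratio)        # illustrative total offered load of 80 Erlangs
        print(ratio, low_priority_loss(m, ratio * lam_l, lam_l, mu))
```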

Two Competing Bandwidth Classes

The results for a system with 100 total servers and two competing traffic classes with bandwidths of 1 and 10 servers (or bandwidth units) each are shown in Figure 10 below. Blocking probability and mean server utilization are plotted as a function of traffic intensity with the ratio of arrivals for the two traffic classes (λ1/10λ10) as a parameter. Note that the ratio of arrivals is weighted by the bandwidth required for each traffic class. To gain greater insight, results are also plotted for cases in which each traffic class is by itself. This corresponds to an M/M/100 model for the bandwidth-1 class and an M/M/10 model for the bandwidth-10 class.

Figure 10: Total Performance for Two Competing Bandwidths

Results show that the total blocking probability is lower-bounded and the server utilization is upper-bounded by the smallest bandwidth class (M/M/100). Interestingly, the blocking probability is not upper-bounded (and the server utilization is not lower-bounded) by the largest bandwidth class (M/M/10). When the ratio of small and large bandwidth traffic is near 1, the system actually has worse performance than if the large bandwidth traffic were by itself. This is because the small bandwidth traffic can more easily block the large bandwidth traffic from getting service.

The blocking probability of the small and large bandwidth users is shown in Figure 11 (a) and (b), respectively. Blocking probability is plotted as a function of the ratio of arrivals for the small and large bandwidth traffic classes (λ1/10λ10) with total traffic intensity as a parameter. To gain greater insight into how the two traffic classes compete for resources, the performance of equivalent single-traffic-type systems (M/M/100 for the bandwidth-1 class and M/M/10 for the bandwidth-10 class) is superimposed onto the results.

Figure 11: Performance of Small and Large Bandwidth Users for Two Competing Bandwidths

Figure 11 (a) shows that when a majority of the traffic is large bandwidth users (λ1/10λ10 is small), the blocking probability for small bandwidth users is similar to the case where only the large bandwidth traffic is competing with itself (M/M/10). Conversely, when most of the traffic consists of small bandwidth users (λ1/10λ10 is large), the blocking probability is similar to the case where only the small bandwidth traffic is competing with itself (M/M/100). Interestingly, small bandwidth users have the best performance when there is nearly an equal ratio of large and small bandwidth users. This is because small bandwidth users can more easily take advantage of available servers and block the large bandwidth users from getting service.

Figure 11 (b) shows a more consistent trend for the large bandwidth users. When a majority of the traffic is large bandwidth users (λ1/10λ10 is small), the performance of large bandwidth users is dominated by these users competing with themselves (M/M/10). As the amount of small bandwidth traffic increases, more of the large bandwidth users begin to get blocked by the small bandwidth users. At the point where the ratio of small-to-large bandwidth users is nearly equal (λ1/10λ10 = 1), almost no large bandwidth users are able to get through at the higher traffic intensities (ρ = 1.5 or 2).

4. COMPARISON OF STATIC AND DYNAMIC RESOURCE ALLOCATION

With an analysis method developed to compute the blocking probability and server utilization for dynamic SATCOM systems with competing priorities and bandwidths, the performance of static and dynamic allocation can be compared. Figure 12 compares the total performance of static and dynamic resource allocation for a SATCOM system with two competing priorities (high and low) and 100 total servers. The arrival rates of the low and high priority traffic are equal. A static system must pre-allocate resources for each priority, and the following combinations of high:low priority servers were tested: 99:1, 95:5, 90:10, 80:20, 70:30, 60:40, and 50:50. The dynamic system was able to allocate resources dynamically, with low priority circuits being preempted by high priority circuits when all servers were occupied. Total satisfaction (defined as 1 minus the blocking/preemption probability) was plotted vs. server utilization for all the cases tested.

Figure 12: Comparison of Static and Dynamic Resource Allocation for Two Competing Priorities (Total Performance)

Results show that dynamic resource allocation outperforms static resource allocation for all possible configurations. Regardless of the number of servers pre-allocated to high or low priority users by a static allocation scheme, dynamic resource allocation always achieves better user satisfaction at a given server utilization.
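A simplified sketch of this comparison is given below, under the same assumptions as above (equal arrival rates, equal service rates, and single-server circuits). Each static partition is treated as an independent Erlang loss system, while the dynamic system shares all 100 servers with preemption, so the high-priority class sees an M/M/100/0 system and the low-priority carried load follows from throughput conservation. The offered load and helper names are illustrative, not from the paper.

```python
# Static partitions vs. dynamic sharing with preemption for two priority classes
# with equal offered loads (in Erlangs). Loads and server splits are illustrative.
def erlang_b(m, a):
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

def static_case(m_high, m_low, a_high, a_low):
    sat_h = 1 - erlang_b(m_high, a_high)               # each partition is its own loss system
    sat_l = 1 - erlang_b(m_low, a_low)
    satisfaction = (a_high * sat_h + a_low * sat_l) / (a_high + a_low)
    utilization = (a_high * sat_h + a_low * sat_l) / (m_high + m_low)
    return satisfaction, utilization

def dynamic_case(m, a_high, a_low):
    a_tot = a_high + a_low
    carried_tot = a_tot * (1 - erlang_b(m, a_tot))     # aggregate occupancy is M/M/m/0
    carried_high = a_high * (1 - erlang_b(m, a_high))  # high priority competes only with itself
    sat_h = carried_high / a_high
    sat_l = (carried_tot - carried_high) / a_low       # excludes blocked and preempted low calls
    satisfaction = (a_high * sat_h + a_low * sat_l) / a_tot
    return satisfaction, carried_tot / m

if __name__ == "__main__":
    m, a = 100, 40.0     # illustrative: 40 Erlangs offered by each priority class
    for split in ((99, 1), (95, 5), (90, 10), (80, 20), (70, 30), (60, 40), (50, 50)):
        print("static", split, static_case(*split, a, a))
    print("dynamic", dynamic_case(m, a, a))
```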

The performance of the high and low priority traffic classes under static and dynamic resource allocation is shown in Figure 13. User satisfaction for high and low priority users is plotted as a function of traffic intensity in Figure 13 (a) and (b), respectively, for all system configurations considered. Results show that for the high priority users, dynamic resource allocation always outperforms static resource allocation. However, that is not always true for the low priority users. At lower traffic intensities, dynamic allocation provides improved performance for low priority traffic. At higher traffic intensities, however, dynamic allocation allows high priority requests to outcompete low priority requests, leading to lower satisfaction than static allocation cases where resources have been set aside for low priority users.

Figure 13: Comparison of Static and Dynamic Resource Allocation for Two Competing Priorities (Performance for Each Priority)

A similar comparison of static and dynamic allocation schemes was generated for two competing bandwidth classes. Figure 14 compares the performance of static and dynamic resource allocation for a SATCOM system with 100 total servers and two competing traffic classes with different bandwidth requirements: one that requests 1 server and the other that requests 10 servers. The arrival rates of the small and large bandwidth classes were assumed to be equal when weighted by bandwidth (λ1/10λ10 = 1). The static system pre-allocated 50 servers for each bandwidth class, while the dynamic system was able to allocate resources for both classes dynamically. Total satisfaction (defined as 1 minus the blocking probability) and server utilization were plotted vs. traffic intensity in Figure 14 (a) and (b), respectively.

Figure 14: Comparison of Static and Dynamic Resource Allocation for Two Competing Bandwidths (Total Performance)

Results show that the dynamic resource allocation system has slightly better total performance than the static system, with higher user satisfaction and server utilization at all traffic intensities. The performance for small and large bandwidth users is shown in Figure 15 and Figure 16, respectively.

Figure 15 plots user satisfaction and server utilization for small bandwidth users as a function of traffic intensity for both the dynamic allocation system (shown in green) and the static allocation system (shown in blue).

Figure 15: Comparison of Static and Dynamic Resource Allocation for Two Competing Bandwidths (Small Bandwidth Users)

Results show that dynamic resource allocation has much better performance at the higher traffic intensities. As the traffic intensity approaches 1, dynamic resource allocation allows small bandwidth users to utilize more than 50% of the bandwidth resources, resulting in greater user satisfaction and server utilization. The static resource allocation system only allocates 50 servers to these users, which greatly reduces their satisfaction and utilization at the higher traffic intensities.

This improvement in performance for small bandwidth users at high traffic intensities comes at the expense of the large bandwidth users (as shown in Figure 16). Figure 16 plots user satisfaction and server utilization for large bandwidth users as a function of traffic intensity for both the dynamic allocation system (shown in green) and the static allocation system (shown in blue).


Figure 16: Comparison of Static and Dynamic Resource Allocation for Two Competing Bandwidths (Large Bandwidth Users)

Results show that at the higher traffic intensities, the performance of the large bandwidth users is degraded considerably by the dynamic resource allocation scheme. This is because at the higher traffic intensities, dynamic resource allocation allows the small bandwidth users to take resources away from the large bandwidth users, reducing their satisfaction and utilization. The static allocation system, on the other hand, pre-allocates 50 servers for just the large bandwidth users, resulting in much better performance. Since large bandwidth users are so disadvantaged, it may be necessary to fence off resources for them if dynamic allocation is eventually implemented in a SATCOM system.

5. CONCLUSIONS AND FUTURE WORK

An analytical model has been generated to measure user satisfaction (or blocking/preemption probability) and resource utilization for dynamically allocated SATCOM systems that have users with competing priorities and bandwidths. Results show that users who request a smaller fraction of the total bandwidth resources have better performance (less blocking and higher utilization) than higher bandwidth users. For competing priorities, high priority traffic only competes with itself and its performance is determined by the high priority traffic intensity. Lower priority users are highly dependent on the amount of high priority traffic; as the ratio of high priority jobs increases, more low priority jobs are preempted and their bandwidth utilization is degraded. For competing bandwidths, total performance is upper-bounded by the smaller bandwidth traffic class, and small bandwidth jobs can more easily block large bandwidth jobs from getting service. Systems with large and small bandwidth jobs arriving at similar rates perform marginally worse than if the large bandwidth jobs were only competing with themselves. Dynamic allocation schemes provide better overall performance than comparable static allocation schemes; however, at the higher traffic intensities they provide worse performance for low priority and large bandwidth users.

Future work will consider more sophisticated models of satellite resources and traffic load, including the effects of frequency channels, time slots, antenna coverage, beam pointing, and requested circuits with duty cycles. Additional model enhancements may also include allowing for more than two competing priority and bandwidth classes, reentry procedures for preempted jobs, and improvements to the dynamic allocation algorithm. It will also be important to model some of the real-world consequences of implementing a Demand Assigned Multiple Access (DAMA) scheme for SATCOM, including the effects of protocol delays and network management.

REFERENCES

[1] S. M. Ross, Introduction to Probability Models, Sixth Edition. Academic Press, 1997.
[2] E. Çınlar, Introduction to Stochastic Processes. Prentice-Hall, 1997.
[3] A. Gilat, MATLAB: An Introduction with Applications. John Wiley & Sons, 2005.
[4] S. M. Ross, Simulation. Academic Press, 2002.
