Technical Report 04-EMIS-12
An equal-opportunity-loss MPLS-based network design model
by
Richard S. Barr1
Richard V. Helgason1
Maya G. Petkova1
and
Saib Jarrar2
1 EMIS Department, School of Engineering, Southern Methodist University, Dallas, TX 75275; {helgason, barr, maya}@engr.smu.edu
2 MCI Data Network Engineering, Richardson, TX 75081; [email protected]
September 2004
Presented at the CORS/INFORMS Joint International Meeting, May 2004, Banff, Alberta, Canada
Abstract
Multi-Protocol Label Switching (MPLS) is an evolving switching technology that is being integrated into
Internet Protocol (IP) networks to overcome IP-routing deficiencies. MPLS facilitates traffic engineering
(TE) by providing the mechanisms needed to control traffic flows in IP networks. Combined with
differentiated services (Diffserv) capabilities, MPLS enables the implementation and support of multiple
class-of-service (CoS) types, each with specific quality-of-service (QoS) guarantees. Thus, MPLS
facilitates network optimization to maximize resource utilization and enables the convergence of data,
voice, and video applications over a common network infrastructure.
A previous study by Barr and Jarrar addressed an MPLS-based TE problem utilizing constraint-
based routing to perform admission control with a single CoS type. Their problem was formulated as an
integer multi-commodity network-flow model focusing on revenue maximization, one of the primary goals
of MPLS deployment by service providers.
This report presents a two-stage equal-opportunity-loss model for MPLS-based IP networks,
which builds on the Barr and Jarrar model. Our model addresses all customer demands equally (fairly),
while maximizing the revenue and meeting certain customer QoS requirements. A computational study on
a realistic network is given.
Acknowledgment
This work was supported by the Texas Advanced Research Program under Grant No. 003613-0028-2001.
Keywords: MPLS Networks, Traffic Engineering, Fairness, Revenue Management, QoS
1. Introduction.
Over the last decade the public Internet has evolved from a limited U.S. government-sponsored
network serving the education and research communities to a gigantic global, robust, and ubiquitous
commercial network. The Internet has evolved into a critical communications network at the heart of the
new information-oriented economy serving both consumers and businesses. Internet growth—in terms of
number of users and traffic volume—has been phenomenal and is expected to continue.
The growth and popularity of the public Internet has accelerated the adoption of the Internet
Protocol (IP) as a dominant communications technology [8]. The Transmission Control Protocol/Internet
Protocol (TCP/IP) [7] suite of protocols has been adopted as the protocol of choice by enterprise networks
for both internetworking and applications. Carriers are now aggressively pursuing new Virtual Private
Network (VPN) offerings that are based on IP technology [2,3]. It is expected that these new services will
replace current private-line and virtual data services such as Frame Relay (FR) and Asynchronous Transfer
Mode (ATM).
Another important industry trend is the convergence of data communications and
telecommunications. This convergence is driven by economic pressure to achieve cost savings and
increase operational efficiencies. Enterprise customers are striving to embrace one common
communications infrastructure to service their data, voice, and video applications. Technology
advancements in packet voice and Voice over IP (VoIP), in particular, are accelerating and promoting that
convergence over IP networks.
Handling the explosive traffic growth and achieving convergence present serious challenges to the
IP technology and service providers. Both Internet and IP-based VPN services are competitive businesses
that require continual investment to keep pace with the increase in traffic.
The convergence of voice, video, and data traffic imposes new requirements on IP networks. IP
networks will need to support multiple traffic types with dissimilar characteristics and requirements. Voice
traffic requires the same predictability and dependability as the existing Public Switched Telephone
Network (PSTN). Voice and video traffic characteristics differ from those of data traffic. The current IP
paradigm does not provide performance guarantees or service differentiation. All traffic receives best-
effort service; i.e., all packets are treated equally with no regard to the needs of applications for some level
of resource assurance or performance guarantees. IP networks are required to offer different grades or
classes of service (CoS) with quality-of-service (QoS) guarantees. QoS traffic performance parameters
include guaranteed bandwidth availability and upper bounds on packet delay, packet delay variation, and
probability of packet loss.
The current IP routing-and-forwarding paradigm has other important deficiencies. IP routing may
result in sub-optimal use of network resources and an imbalance of traffic loads on different links because
it considers topology information only in its shortest-path calculations. It does not consider traffic load and
resource-utilization information. Moreover, IP routing provides few controls to influence traffic flow
across the network and exploit unused capacity.
Due to IP deficiencies the conventional answer to dealing with traffic growth has been the over-
provisioning of costly resources. In the current and future business environments this answer is not
adequate. Carriers are under pressure to contain capital expenditures and are looking for other solutions to
maximize the use of existing network resources. The Internet Engineering Task Force (IETF) introduced
the architecture of Multi-Protocol Label Switching (MPLS) [9] and defined requirements for
Traffic Engineering (TE) [1] over MPLS. MPLS is a new switching technology that is integrated into IP
networks and overcomes IP’s deficiencies. TE is the process of controlling traffic flow through a network
so that network performance and resource utilization are optimized. Thus TE seeks to maximize the
benefits of an installed network infrastructure. MPLS facilitates traffic engineering by providing the
mechanisms needed to control traffic flows in IP networks. It enables the implementation of QoS and
enhances restoration in IP networks. MPLS overcomes the limitations of shortest-path-only routing and
allows the creation of traffic-engineered paths that may not be simply shortest paths. Thus, the user can
exploit otherwise underutilized resources.
This study presents a two-stage equal-opportunity-loss (EOL) model for MPLS-based IP
networks. Our model designs traffic-engineered paths for packet delivery by guaranteeing equal minimal
percent demand fulfillment for each customer, while meeting the desired QoS types and maximizing the
traffic generated revenue.
2. Survey of Related Literature.
Barr and Jarrar [4] focused on the formulation and evaluation of optimization models for MPLS
traffic engineering with QoS requirements. They developed a new formulation of the basic traffic-
engineering problem in MPLS-based packet networks as a multi-commodity network flow model with side
constraints. Their optimization model maximizes revenue and determines which demands are admitted,
hence also solving the admission control problem in case network congestion is being experienced.
Computational experiments were conducted to evaluate the benefits of the optimization model in
comparison with an online FCFS strategy. The impact of multiple factors on the performance of both
strategies and on the performance improvement of optimization over the FCFS strategy was examined.
The factors included the number of OD pairs, average and range of demand per OD pair, network topology,
and average node degree.
Matula and Shahrokhi [6] utilized a maximal concurrent traffic flow lower bound with all node
pairs serving as origin-destination pairs having unit demand in studying the structure of graphs and the
determination of critical bottlenecks. The use of a uniform bound appearing in that context was the
motivating factor in our use of such a bound in defining a new fairness concept.
Kennington [5] presented an AMPL formulation for the single-path MCNF model. Some of those
modeling ideas were used in the formulation of the basic EOL model presented in this manuscript.
2.1. Contributions.
We formulated a two-stage EOL model for solving a fundamental TE problem for MPLS-based
networks. Our assumption is that the network will be sufficiently congested that it is not possible to satisfy
all demand, but not overly congested as will be clarified shortly. The first stage of the model finds the
guaranteed equal level (percentage) of traffic that can be delivered for all commodities by determining the
maximal concurrent traffic flow lower bound. Thus the model treats all demand pairs fairly and guarantees
that there is a bandwidth allocation for each commodity which will allow this lower bound percentage of
given demand to be delivered for all commodities. We further assume that the network is not so congested
that the maximal concurrent traffic flow lower bound falls below 50%. Practically speaking, a network
which cannot deliver at least 50% of all demand will need more bandwidth enhancement than our model is
capable of suggesting. The second stage maximizes revenue by routing as much demand as possible so
that each commodity’s delivered demand is at or above the guaranteed lower bound. This may be viewed
as constituting fair treatment since no commodity will receive less than the maximum percentage
determined in the first stage. In a congested network it is of interest to determine on which links additional
bandwidth should be furnished to produce additional revenue. A parametric study was conducted to
assess the outcomes from individually doubling the capacities of the congested links in an attempt to find
the link (or links) that will give the largest revenue increase. Additionally, an optimization model was built
as an enhancement to the second stage model which seeks to determine which individual link or pair of
links will give the largest revenue increase if their capacities are doubled. The results of the optimization
model can be compared to the results from the parametric study and used for making cost-effective decisions.
Network managers can benefit from utilizing the parametric study along with the optimization model for
capacity planning, revenue management, and optimal resource allocation.
3. Basic MPLS TE Problem Description.
The basic problem of MPLS traffic engineering can be stated as follows:
Given the physical topology of an MPLS network, its link attributes, a traffic matrix, and resource
and traffic performance constraints, maximize revenues by admitting and routing via a single path as much
traffic as possible while observing the resource and the traffic performance constraints.
The link attributes include capacity and an assigned administrative “cost” that reflects delay on the
link. The traffic matrix represents aggregate traffic demand between each OD pair. The resource constraints
are the link capacities. The traffic performance constraints are typically expressed as the maximum number
of hops and delay allowed for traffic between any OD pair. This is a logical design problem that involves
constructing a set of paths to route the traffic, with each OD pair’s traffic routed along a single path. The
solution presented by Barr and Jarrar [4] also includes traffic admission control: when not all traffic can
be routed, their solution identifies the set of OD pairs whose traffic can be routed. The solutions we present
in this report address the congestion problem by guaranteeing an equal (fair) percentage of traffic demand
is delivered for each commodity.
4. The Basic EOL Model (Stage I).
The Stage I equal-opportunity-loss model was designed to determine the maximal concurrent
traffic flow lower bound. That bound defines the minimum equal percent traffic loss that each OD pair will
experience in the event that the MPLS-based transmission network becomes congested.
4.1 Notations, Variables, and Parameters.
The physical topology of an MPLS network is represented by an undirected graph G = (N, E),
where N is the set of nodes and E is the set of edges or links. Let n = |N | and m = |E|. A node represents an
MPLS LSR (Label Switch Router). The terms node, LSR, and router will be used interchangeably. A link
will be undirected and denoted by an unordered pair of nodes l = (p, q). Each link is assigned a number
from the set {1, 2, …,m}. Associated with link l = (p, q) are two directed arcs: <p, q> and <q, p>. The
flow on arc <p, q> will be referred to as the flow in the normal direction for link l. The flow on arc <q, p>
will be referred to as the flow in the reverse direction. The capacity of a link represents the bandwidth or
the transmission speed of that link measured in units of bandwidth such as Mbps. The cost of a link
represents a traffic performance metric associated with that link, such as delay, and not a monetary cost. A
commodity represents distinct packet traffic to be routed from a specified source node to a specified
destination node. The demand associated with a commodity is the data rate or bandwidth (measured in units
of Mbps) consumed by that traffic. The terms source and origin are used interchangeably. The terms
demand and traffic demand will be used interchangeably.
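The undirected-link/two-arc convention above can be sketched in code. The following is a minimal illustration of our own (the paper's implementation is in AMPL; the class and function names here are hypothetical), showing how the sets In(q) and Out(q) fall out of storing each undirected link as an ordered endpoint pair:

```python
from dataclasses import dataclass

@dataclass
class Link:
    p: int            # one endpoint; the "normal" direction is the arc <p, q>
    q: int            # other endpoint; the "reverse" direction is the arc <q, p>
    capacity: float   # bandwidth b_e, e.g. in Mbps
    cost: float       # administrative cost c_e (a delay metric, not money)

def links_in(node, links):
    """Indices of links whose normal direction enters `node` (the set In(q))."""
    return [i for i, l in enumerate(links) if l.q == node]

def links_out(node, links):
    """Indices of links whose normal direction leaves `node` (the set Out(q))."""
    return [i for i, l in enumerate(links) if l.p == node]

# Toy 3-node chain: link 0 = (1, 2), link 1 = (2, 3)
links = [Link(1, 2, 2488.0, 1.0), Link(2, 3, 622.0, 1.0)]
print(links_in(2, links))   # [0]: link 0 enters node 2 in its normal direction
print(links_out(2, links))  # [1]: link 1 leaves node 2 in its normal direction
```

Flow in the reverse direction of a link is simply tracked as a separate variable on the same link record, matching the fx/fy split used below.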
4.1.2. Parameters.

N = {1, 2, ..., n}   the set of nodes in the network, where |N| = n
E = {1, 2, ..., m}   the set of links in the network, where |E| = m
In(q)    the set of links into node q, q \in N; In(q) = { l \in E : l = (p, q) }
Out(q)   the set of links out of node q, q \in N; Out(q) = { l \in E : l = (q, p) }
b_e      the capacity of link e in units of bandwidth (b_e > 0)
c_e      the administrative cost associated with link e; typically the administrative cost of a link represents a measure of delay on that link (c_e > 0)
Traff_{on}   the demand for the commodity with origin node o at node n, where Traff_{on} < 0 implies a demand node, and Traff_{on} = 0 implies a transit or transshipment node
Traff_{oo}   the requirement (the total supply) for all commodities with origin node o, Traff_{oo} > 0 (Traff_{oo} = -\sum_{n \ne o} Traff_{on})
h        the maximum allowed number of hops that any commodity may traverse from source to destination (h > 0, integer)
\mu      the revenue generated from delivering a unit of demand of any commodity (\mu \ge 0)
\omega_1 a scaling factor (weight) used in the objective function (\omega_1 > 0)
\omega_2 a scaling factor (weight) used in the objective function (\omega_2 > 0)
\epsilon a small deviation factor used in guaranteeing a single traffic delivery path (\epsilon > 0)
D^*      the guaranteed fraction of delivered traffic demand, a uniform lower bound for all commodities (used in the second stage); D^* is the optimal objective value from the first stage, reported as a percentage in the computational results
4.1.3. Decision Variables.

fx_{ode}   the flow of the commodity with origin node o and destination node d on link e in the normal direction
fy_{ode}   the flow of the commodity with origin node o and destination node d on link e in the reverse direction
sfx_{ode}  indicator for positive flow of the commodity with origin node o and destination node d on link e in the normal direction; sfx_{ode} = 1 if fx_{ode} > 0, and 0 otherwise
sfy_{ode}  indicator for positive flow of the commodity with origin node o and destination node d on link e in the reverse direction; sfy_{ode} = 1 if fy_{ode} > 0, and 0 otherwise
D          the guaranteed fraction of delivered traffic demand, a uniform lower bound for all commodities (used in the first stage)
Deliv_{on} the fraction delivered (fulfillment) of the traffic requirement for the commodity with origin node o at node n \ne o (Deliv_{on} = 0 if n is a transshipment node); the computational results report this fraction as a percentage
4.1.4. Derived Variables. The following variables are defined in terms of the above decision variables and are used in the model formulation.

\sigma x_{ce}   the flow of all commodities with origin node c on link e in the normal direction:
    \sigma x_{ce} = \sum_{n \in N : Traff_{cn} < 0} fx_{cne}   \forall c \in N, \forall e \in E

\sigma y_{ce}   the flow of all commodities with origin node c on link e in the reverse direction:
    \sigma y_{ce} = \sum_{n \in N : Traff_{cn} < 0} fy_{cne}   \forall c \in N, \forall e \in E

\sigma X_e   the total flow on link e in the normal direction:
    \sigma X_e = \sum_{c \in N} \sigma x_{ce}   \forall e \in E

\sigma Y_e   the total flow on link e in the reverse direction:
    \sigma Y_e = \sum_{c \in N} \sigma y_{ce}   \forall e \in E

Hops_{od}   the number of hops that a commodity will traverse from its origin node o to its destination node d:
    Hops_{od} = \sum_{e \in E} (sfx_{ode} + sfy_{ode})   \forall o \in N, \forall d \in N : Traff_{od} < 0
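The derived quantities are plain sums over the decision variables. A small sketch (our own dict-based flow storage, not the paper's AMPL code) makes the bookkeeping concrete:

```python
# fx[(o, d, e)] / fy[(o, d, e)] hold the normal/reverse flow of commodity
# (o, d) on link e; absent keys mean zero flow.

def sigma_X(e, fx):
    """Total normal-direction flow on link e (the quantity sigma-X_e)."""
    return sum(v for (o, d, ee), v in fx.items() if ee == e)

def hops(o, d, fx, fy, tol=1e-9):
    """Hops_od: number of distinct links carrying positive flow for commodity (o, d)."""
    used = {e for (oo, dd, e), v in fx.items() if (oo, dd) == (o, d) and v > tol}
    used |= {e for (oo, dd, e), v in fy.items() if (oo, dd) == (o, d) and v > tol}
    return len(used)

# Commodity (1, 3) routed over links 0 and 1; commodity (2, 3) over link 1.
fx = {(1, 3, 0): 100.0, (1, 3, 1): 100.0, (2, 3, 1): 50.0}
fy = {}
print(sigma_X(1, fx))      # 150.0: both commodities share link 1
print(hops(1, 3, fx, fy))  # 2: commodity (1, 3) traverses two links
```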
4.2. Mathematical Model: Stage I.

Maximize  D

Subject to:

\sum_{e \in Out(c)} (\sigma x_{ce} - \sigma y_{ce}) - \sum_{e \in In(c)} (\sigma x_{ce} - \sigma y_{ce}) = \sum_{n \in N : Traff_{cn} < 0} Deliv_{cn} (-Traff_{cn})   \forall c \in N   (1)

\sum_{e \in Out(m)} (fx_{cne} - fy_{cne}) - \sum_{e \in In(m)} (fx_{cne} - fy_{cne}) = 0   \forall c \in N, \forall n \in N : Traff_{cn} < 0, \forall m \in N, m \ne c, n   (2)

\sum_{e \in Out(c)} fx_{cne} + \sum_{e \in In(c)} fy_{cne} = Deliv_{cn} (-Traff_{cn})   \forall c \in N, \forall n \in N : Traff_{cn} < 0   (3)

\sum_{e \in In(n)} fx_{cne} + \sum_{e \in Out(n)} fy_{cne} = Deliv_{cn} (-Traff_{cn})   \forall c \in N, \forall n \in N : Traff_{cn} < 0   (4)

sfx_{cne} + sfy_{cne} \le 1   \forall c \in N, \forall n \in N : Traff_{cn} < 0, \forall e \in E   (5)

Hops_{cn} \le h   \forall c \in N, \forall n \in N : Traff_{cn} < 0   (6)

fx_{cne} \le (-Traff_{cn}) sfx_{cne}   \forall c \in N, \forall n \in N : Traff_{cn} < 0, \forall e \in E   (7)

fx_{cne} \ge (1/2 + \epsilon)(-Traff_{cn}) sfx_{cne}   \forall c \in N, \forall n \in N : Traff_{cn} < 0, \forall e \in E   (8)

fy_{cne} \le (-Traff_{cn}) sfy_{cne}   \forall c \in N, \forall n \in N : Traff_{cn} < 0, \forall e \in E   (9)

fy_{cne} \ge (1/2 + \epsilon)(-Traff_{cn}) sfy_{cne}   \forall c \in N, \forall n \in N : Traff_{cn} < 0, \forall e \in E   (10)

\sigma X_e \le b_e   \forall e \in E   (11)

\sigma Y_e \le b_e   \forall e \in E   (12)

Deliv_{cn} \le 1   \forall c \in N, \forall n \in N : Traff_{cn} < 0   (13)

Deliv_{cn} \ge D   \forall c \in N, \forall n \in N : Traff_{cn} < 0   (14)

All variables are nonnegative   (15)
The objective function simply maximizes the guaranteed fraction of delivered traffic demand
for all commodities. Constraints (1), (2), (3), and (4) are flow conservation equations, which ensure a
connected path for each routed commodity. Constraints (5) guarantee that a link can be used no more than
once in the path designed to route each commodity. Constraints (6) are traffic performance constraints,
which ensure that the number of hops along any path cannot exceed the predetermined upper hop limit h.
Constraints (7), (8), (9), and (10) ensure a single traffic delivery path for each commodity. (We consider
only networks with enough bandwidth capacity to deliver at least 51% of the requested demand for each
OD pair.) Constraint sets (11) and (12) enforce the link capacity resource constraints. Constraints (13)
impose the natural upper bound on the fraction of traffic demand fulfilled for each commodity.
Constraints (14) impose the uniform lower bound on the fraction of traffic demand delivered for each
commodity. Constraints (15) are nonnegativity constraints.
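To build intuition for the Stage I bound, consider a deliberately simplified setting in which each commodity's single path is already fixed. Then the model collapses to a one-line computation: the uniform fraction D is the worst capacity-to-load ratio over the links, capped at 100%. This sketch (our own simplification; the real model also chooses the paths via the binary variables and requires a MIP solver) illustrates that reduction:

```python
def stage1_bound(paths, demands, capacity):
    """Max uniform delivery fraction D when each commodity's path is fixed.
       paths[k]    : list of link ids used by commodity k
       demands[k]  : requested bandwidth of commodity k
       capacity[e] : bandwidth b_e of link e
       Returns D in [0, 1]."""
    load = {}
    for k, path in paths.items():
        for e in path:
            load[e] = load.get(e, 0.0) + demands[k]
    # Scaling all demands by D scales all loads by D, so the binding link
    # determines D = min_e b_e / load_e, capped at full delivery.
    return min(1.0, min(capacity[e] / load[e] for e in load))

# Two commodities, 400 Mbps each, sharing link 1 (capacity 622 Mbps):
paths = {"A": [0, 1], "B": [1]}
demands = {"A": 400.0, "B": 400.0}
capacity = {0: 2488.0, 1: 622.0}
print(stage1_bound(paths, demands, capacity))  # 622/800 = 0.7775
```

In the full model the paths are decision variables, so the bound can only improve on any fixed-path value such as this one.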
5. The Basic EOL Model (Stage II).
The Stage II equal-opportunity-loss model uses the optimal uniform traffic flow lower bound
determined by the Stage I model to design traffic-engineered paths by routing as much demand as
possible (thus optimizing the revenue), so that each commodity's delivered demand is at or above the
guaranteed lower bound.
5.1. Mathematical Model: Stage II.
Let D^* be the optimal objective value obtained from the Stage I model. The second-stage model is:
Maximize

\mu \sum_{c \in N, n \in N : Traff_{cn} < 0} Deliv_{cn} (-Traff_{cn}) - \omega_1 \sum_{e \in E} c_e (\sigma X_e + \sigma Y_e) - \omega_2 \sum_{c \in N, n \in N : Traff_{cn} < 0} Hops_{cn}

Subject to:

Constraint sets (1) to (13) and (15) from Stage I, and a new constraint set (14*) as follows:

Deliv_{cn} \ge D^*   \forall c \in N, \forall n \in N : Traff_{cn} < 0   (14*)
In the Stage II formulation, the objective function consists of three terms, with the first being
the primary objective and the dominant term. The first term represents the total revenue generated from the
routed commodities (i.e., the delivered traffic); the total demand delivered (the revenue) is maximized.
The second term represents the total delay incurred by all the delivered traffic. Its purpose is to select,
among the multiple alternate optimal solutions (yielding the same revenue) that may exist, the one with the
lowest delay. The total delay is multiplied by the scaling factor \omega_1, 0 < \omega_1 < 1. Typically, \omega_1 is
set so that the second term will be small relative to the first (dominant) term. The third term represents the
total number of links used (the total number of hops). Its purpose is to minimize the total number of hops in
order to avoid cycles, which could be generated in attempting to maximize the revenue (delivered traffic).
It is also multiplied by a scaling factor \omega_2, 0 < \omega_2 < 1, with \omega_2 usually greater than the second
term's scaling factor \omega_1. The new constraint set (14*) ensures that each commodity's delivered demand is at or above the
guaranteed lower bound.
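The three-term weighting can be illustrated by evaluating the objective for a candidate routing. The following sketch is ours (the weight values mu, w1, w2 are illustrative, not taken from the paper):

```python
def stage2_objective(deliv, demand, link_cost, flow_total, hops,
                     mu=1.0, w1=1e-3, w2=1e-2):
    """Three-term Stage II objective for a candidate routing.
       deliv[k]      : fraction delivered for commodity k (>= D* in the model)
       demand[k]     : requested bandwidth of commodity k
       link_cost[e]  : administrative cost c_e of link e
       flow_total[e] : total flow (both directions) on link e
       hops[k]       : number of links used by commodity k"""
    revenue = mu * sum(deliv[k] * demand[k] for k in demand)             # dominant term
    delay = w1 * sum(link_cost[e] * flow_total[e] for e in link_cost)    # tie-breaker 1
    nhops = w2 * sum(hops[k] for k in hops)                              # tie-breaker 2
    return revenue - delay - nhops

obj = stage2_objective(
    deliv={"A": 1.0, "B": 0.78},
    demand={"A": 400.0, "B": 400.0},
    link_cost={0: 1.0, 1: 2.0},
    flow_total={0: 400.0, 1: 712.0},
    hops={"A": 2, "B": 1},
)
print(round(obj, 3))  # revenue 712 minus small delay and hop penalties
```

Because the weights are small, routings are compared first by revenue; delay and hop count only break ties among equal-revenue alternatives, as the text describes.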
6. Parametric Study.
If the total requested packet traffic for all OD pairs cannot be delivered, a set of congested links
based on the flows from Stage II is constructed. A link is considered congested if at least 98% of its
capacity has been used. A parametric study was conducted to assess the outcomes from individually
doubling the capacities of the congested links in an attempt to find the link which will give the largest
revenue increase.
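The parametric procedure is a simple loop: flag links at or above the 98% utilization cutoff, then re-solve Stage II with each flagged link's capacity doubled in turn. A sketch under our own naming (`solve_stage2` is a hypothetical placeholder for the MIP solve, stubbed here for illustration):

```python
def congested_links(capacity, flow_total, threshold=0.98):
    """Links whose Stage II utilization meets the 98% congestion cutoff."""
    return [e for e in capacity
            if flow_total.get(e, 0.0) / capacity[e] >= threshold]

def parametric_study(capacity, flow_total, solve_stage2):
    """Double each congested link's capacity individually; report revenue per trial."""
    results = {}
    for e in congested_links(capacity, flow_total):
        trial = dict(capacity)
        trial[e] = 2 * capacity[e]          # double only this one link
        results[e] = solve_stage2(trial)    # revenue of the re-solved model
    return results

capacity = {0: 2488.0, 1: 622.0, 2: 622.0}
flow_total = {0: 1200.0, 1: 620.0, 2: 605.0}
print(congested_links(capacity, flow_total))  # [1]: 620/622 is above the cutoff
```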
7. Optimization Model.
To supplement the parametric study, an ancillary optimization model was built as an enhancement
of the Stage II model. The goal was to determine by optimization the link (or links) that contribute the
largest revenue increase if their capacities are doubled.
Two new parameters are introduced for the optimization model:

Satur_e   = 1 if link e is saturated, and 0 otherwise
nl        the number of links whose capacity is to be doubled (0 < nl \le 2)

Further, a new binary decision variable is introduced:

Dcap_e    = 1 if the capacity of link e is to be doubled, and 0 otherwise

The optimization model for finding the optimal link (or links) whose capacities are to be doubled is
described below.
Maximize

\mu \sum_{c \in N, n \in N : Traff_{cn} < 0} Deliv_{cn} (-Traff_{cn}) - \omega_1 \sum_{e \in E} c_e (\sigma X_e + \sigma Y_e) - \omega_2 \sum_{c \in N, n \in N : Traff_{cn} < 0} Hops_{cn}

Subject to:

Constraint sets (1) to (10), constraint sets (13), (14*), and (15).
The capacity constraint sets (11) and (12) are replaced by

\sigma X_e \le b_e + b_e Dcap_e   \forall e \in E   (11')

\sigma Y_e \le b_e + b_e Dcap_e   \forall e \in E   (12')

The objective function is the same as in the second-stage model. The new constraint sets (11') and (12')
account for the new, increased link capacity.
Two new constraint sets are added to this optimization model:

Dcap_e \le Satur_e   \forall e \in E   (16)

\sum_{e \in E} Dcap_e \le nl   (17)

Constraint set (16) ensures that only the capacities of the saturated (congested) links can be doubled.
Constraint set (17) limits the number of links whose capacities are doubled concurrently.
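For tiny instances, the feasible choices of Dcap under constraints (16) and (17) can be enumerated outright: every subset of at most nl saturated links. The sketch below is our own brute-force stand-in, not the paper's MIP, where the solver selects Dcap_e directly (`solve_stage2` is again a hypothetical stub):

```python
from itertools import combinations

def best_doubling(capacity, saturated, nl, solve_stage2):
    """Try every subset of saturated links of size 1..nl (constraints (16)-(17));
       return the (subset, revenue) pair with the highest revenue."""
    best = ((), solve_stage2(capacity))          # baseline: double nothing
    for r in range(1, nl + 1):
        for subset in combinations(saturated, r):
            trial = dict(capacity)
            for e in subset:
                trial[e] = 2 * capacity[e]       # double the chosen links
            rev = solve_stage2(trial)
            if rev > best[1]:
                best = (subset, rev)
    return best

capacity = {0: 2488.0, 1: 622.0, 2: 622.0}
# Toy stub: revenue grows with capacity on links 1 and 2, capped at 1000.
stub = lambda cap: min(1000.0, 0.3 * cap[1] + 0.3 * cap[2])
subset, revenue = best_doubling(capacity, saturated=[1, 2], nl=2, solve_stage2=stub)
print(subset, round(revenue, 1))  # doubling both saturated links wins here
```

The enumeration grows combinatorially, which is exactly why the paper folds the choice into the MIP via the binary Dcap variables rather than solving one Stage II instance per subset.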
8. Computational Experiments.
8.1. The Test Network.
The EOL model was tested on a realistic network, which has the typical topology of a nationwide
data communications network. The example network is shown in Fig. 1 and its description is given below.
• 20 nodes, 31 links
• Average node degree ~3
• Link capacities:
  - 2488 Mbps (15 links): OC-48 transmission line
  - 622 Mbps (16 links): OC-12 transmission line
• Trunks connecting the nodes are bi-directional and full duplex
Figure 1. Network topology of realistic test network
8.2. Computing Environment.
Tests were performed on a Compaq AlphaServer DS20E with dual 667 MHz processors and 4096
MB RAM. The machine is configured as a Network Queuing System executing batch jobs. Each job has access
to approximately 20 MB RAM. Models were implemented using AMPL 8.0. Integer programming
solutions were generated using the CPLEX Linear Optimizer 8.0. Default settings for CPLEX were used
except that the MIP time limit was set to 1500 seconds.
8.3. Data Sets.
A traffic generator was used to generate multiple sets of commodities and traffic demands. OD
pairs were selected randomly and uniformly from the set of nodes (no duplicates allowed). The demands
associated with the OD pairs were selected randomly using a uniform distribution over the range specified
by the min and max demands. Table 1 summarizes the characteristics of the data sets used and the
solutions obtained by the EOL Stage I model.
The experimental results for each individual data set are given in the Appendix. The tables show
three performance metrics for each of the presented solution strategies: percentage of revenue missed,
bandwidth utilization, and bandwidth efficiency (see [4]). The percentage of revenue missed is defined as
the ratio of the total demand not delivered to the total demand. Bandwidth utilization is defined as the ratio
of total flow on all arcs to the total bandwidth of all arcs. Bandwidth efficiency is defined as the ratio of
total demand delivered to the total bandwidth of all arcs. These metrics are helpful for making general
observations about the behavior of the different solution strategies and can be further used for hypothesis
testing.
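The three metrics are direct ratios, transcribed here from the definitions above (the numeric inputs are illustrative, not taken from the paper's data sets):

```python
def revenue_missed_pct(demand_total, delivered_total):
    """Percentage of total demand that was not delivered."""
    return 100.0 * (demand_total - delivered_total) / demand_total

def bandwidth_utilization_pct(flow_on_arcs, bandwidth_on_arcs):
    """Total flow on all arcs as a percentage of total arc bandwidth."""
    return 100.0 * flow_on_arcs / bandwidth_on_arcs

def bandwidth_efficiency_pct(delivered_total, bandwidth_on_arcs):
    """Delivered demand as a percentage of total arc bandwidth."""
    return 100.0 * delivered_total / bandwidth_on_arcs

print(revenue_missed_pct(1000.0, 955.0))           # 4.5
print(bandwidth_utilization_pct(5921.0, 10000.0))  # 59.21
print(bandwidth_efficiency_pct(1813.0, 10000.0))   # 18.13
```

Note that utilization counts all flow carried (including traffic traversing several hops), while efficiency counts only delivered end-to-end demand, so efficiency is always the smaller of the two for multi-hop routings.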
SET    # OD Pairs   Demand Range   Mean Demand   Guaranteed % Delivered Demand
DS1        80           320            240            74.67
DS2        80           320            240            69.57
DS3       160           320            240           < 51.00
DS4       320           320            240           < 51.00
DS5        80           160            120           100.00
DS6       160           160            120            82.49
DS7       160           160            120            90.67
DS8       320           160            120            51.41
DS9        80            80             60           100.00
DS10      160            80             60           100.00
DS11      320            80             60            95.55
DS12      320            80             60            89.20
DS13       80           160            480           < 51.00
DS14       80            80            480           < 51.00

TABLE 1. DATA SETS
9. Summary and Conclusions.
A two-stage equal-opportunity-loss model for solving a fundamental TE problem for MPLS-based
networks is formulated. The first stage of the model finds the guaranteed equal level (percentage) of traffic
that can be delivered for all commodities by determining the maximal concurrent traffic flow lower bound.
The concept of EOL fairness in traffic delivery was introduced. The model treats all demand pairs fairly
and guarantees that there is a bandwidth allocation for each commodity which will allow
this lower-bound percentage of given demand to be delivered for all commodities. The second stage
designs the paths by routing as much demand as possible so that each commodity's delivered demand is at
or above the guaranteed lower bound. A parametric study was conducted to assess the outcomes from
individually doubling the capacities of the congested links in an attempt to find the link (or links) which will give
the largest revenue increase. An optimization model was built as an enhancement of the Stage II model to
determine which link (or links) will produce the largest revenue increase if their capacities are doubled. The
optimization model results can be compared to the results from the parametric study and used for making
cost-effective decisions. Network managers can benefit from utilizing the parametric study along with the
optimization model for capacity planning, revenue management, and optimal resource allocation.
REFERENCES
[1] AWDUCHE, D. O. MPLS and traffic engineering in IP networks. IEEE Communications Magazine 37:12 (1999), 42-47.
[2] GUICHARD, J., AND PEPELNJAK, I. MPLS and VPN Architectures. Cisco Press, Indianapolis, IN, 2001.
[3] GUICHARD, J., PEPELNJAK, I., AND APCAR, J. MPLS and VPN Architectures, Volume II. Cisco Press, Indianapolis, IN, 2003.
[4] JARRAR, S. Formulation and evaluation of optimization models for MPLS traffic engineering with QoS requirements. D.Eng. Praxis, Southern Methodist University, Dallas, TX, 2004.
[5] KENNINGTON, J. L. EMIS 8392 Class Notes: Prospects for Operations Research in the Design and Analysis of Telecommunications Networks, Summer 2002.
[6] MATULA, D. W., AND SHAHROKHI, F. The maximum concurrent flow problem. JACM 37 (1990), 318-334.
[7] POSTEL, J. DoD standard transmission control protocol. RFC 761, Internet Engineering Task Force, http://www.ietf.org, 1980.
[8] POSTEL, J. Internet protocol. RFC 791, Internet Engineering Task Force, http://www.ietf.org, 1981.
[9] ROSEN, E., VISWANATHAN, A., AND CALLON, R. Multiprotocol label switching architecture. RFC 3031, Internet Engineering Task Force, http://www.ietf.org, 2001.
APPENDIX
DS1 (80 OD pairs, demand range 320, mean demand 240)

(a) Stage 1 / Stage 2 summary
    Guaranteed % Delivered Demand: 74.67
    Revenue Missed: 4.48%   Bandwidth Utilization: 59.21%   Bandwidth Efficiency: 18.13%

(b) Analyst: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    18       2488                4.48              56.26          17.23          0.00
    3         622                4.22              57.46          17.95          0.27
    19        622                2.65              58.89          18.25          1.91
    11       2488                2.61              58.40          18.25          1.96
    10        622                2.59              58.42          18.26          1.98
    14        622                2.23              58.92          18.32          2.36

(c) Analyst: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    10, 14    622, 622            2.26              58.18          18.00          2.33

(d) Optimization: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    10        622                2.55              58.87          18.27          2.02

(e) Optimization: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    10, 11    622, 2488           0.64              59.69          18.38          4.02

TABLE 2. DS1 SUMMARY
DS2 (80 OD pairs, demand range 320, mean demand 240)

(a) Stage 1 / Stage 2 summary
    Guaranteed % Delivered Demand: 69.57
    Revenue Missed: 10.74%   Bandwidth Utilization: 60.02%   Bandwidth Efficiency: 18.11%

(b) Analyst: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    15       2488                10.40             57.90          17.26          0.35
    23        622                10.11             59.37          18.00          0.70
    25        622                10.05             59.14          18.01          0.77
    8        2488                10.02             57.36          17.34          0.80
    4         622                10.02             60.17          18.01          0.80
    22        622                10.00             59.96          18.04          0.93
    31        622                 9.90             59.39          18.04          0.94
    13       2488                 9.87             58.59          17.37          0.97
    7         622                 9.11             60.95          18.20          1.82
    19        622                 8.80             59.51          18.25          2.13
    11        622                 8.80             60.08          18.25          2.13
    18       2488                 8.79             59.12          17.58          2.19
    20        622                 8.66             60.42          18.29          2.32
    9         622                 8.50             61.31          18.32          2.50
    16       2488                 7.73             59.62          17.78          3.37

(c) Analyst: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    9, 16     622, 2488           6.86              59.61          17.73          4.35

(d) Optimization: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    20        622                8.66              60.34          18.29          2.32

(e) Optimization: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    19, 20    622, 622            5.73              61.78          18.63          5.61

TABLE 3. DS2 SUMMARY
DS6 (160 OD pairs, demand range 160, mean demand 120)

(a) Stage 1 / Stage 2 summary
    Guaranteed % Delivered Demand: 82.49
    Revenue Missed: 1.50%   Bandwidth Utilization: 58.87%   Bandwidth Efficiency: 20.16%

(b) Analyst: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    13       2488                1.50              55.93          19.15          0.00
    14        622                1.50              58.10          19.90          0.00
    19        622                1.50              58.11          19.90          0.00
    25        622                1.50              58.11          19.90          0.00
    31        622                1.50              58.10          19.90          0.00
    17        622                1.40              58.03          19.92          0.08
    26        622                0.20              58.80          20.16          1.29
    23        622                0.15              58.23          20.17          1.33

(c) Analyst: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    23, 26    622, 622            0.27              57.40          19.89          1.22

(d) Optimization: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    23        622                0.12              58.47          20.17          1.37

(e) Optimization: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    17, 23    622, 622            0.20              57.36          19.90          1.29

TABLE 4. DS6 SUMMARY
DS7 (160 OD pairs, demand range 160, mean demand 120)

(a) Stage 1 / Stage 2 summary
    Guaranteed % Delivered Demand: 90.67
    Revenue Missed: 1.65%   Bandwidth Utilization: 65.73%   Bandwidth Efficiency: 20.15%

(b) Analyst: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    16       2488                2.16              61.49          19.04         -0.52
    20        622                2.06              63.95          19.80         -0.42
    15       2488                2.01              61.12          19.07         -0.37
    3         622                1.60              64.90          19.89          0.01
    13       2488                1.13              63.26          19.24          0.53
    26        622                0.60              64.40          20.10          1.06
    23        622                0.55              63.72          20.11          1.11

(c) Analyst: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    23, 26    622, 622            0.25              63.24          19.91          1.42

(d) Optimization: Single Link Capacity Doubled
    LINK #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    23        622                0.35              63.87          20.15          1.32

(e) Optimization: Two Link Capacities Doubled
    LINKS #   Orig. Cap. (Mbps)   Rev. Missed (%)   BW Util. (%)   BW Eff. (%)   Rev. Increase (%)
    13, 23    2488, 622           0.20              61.28          19.18          1.48

TABLE 5. DS7 SUMMARY
(a) DS8: OD pairs = 320, demand range = 160, mean demand = 120
    Stage 1: Guaranteed % Delivered Demand = 51.41
    Stage 2: Revenue Missed = 29.90%, Bandwidth Utilization = 78.11%, Bandwidth Efficiency = 28.85%

Columns for (b)-(e): Link(s) #; Original Capacity (Mbps); Revenue Missed (%); Bandwidth Utilization (%); Bandwidth Efficiency (%); Revenue Increase (%).

(b) Analyst: Single Link, Link Capacity Doubled
    26      622          30.10   76.71   28.37   -0.37
    22      622          30.05   76.79   28.38   -0.33
    10      622          30.00   76.50   28.40   -0.24
    6       622          30.00   76.65   28.43   -0.15
    14      622          30.00   76.51   28.43   -0.14
    5       2488         29.80   74.13   27.40    0.00
    31      622          29.80   76.58   28.48    0.01
    4       622          29.80   76.93   28.49    0.05
    3       622          29.80   77.56   28.49    0.06
    23      622          29.60   76.91   28.55    0.28
    8       2488         29.50   74.58   27.55    0.51
    15      2488         29.00   74.66   27.59    0.67
    19      622          29.14   78.11   28.75    0.97
    11      622          29.00   77.40   28.80    1.16
    9       622          29.00   78.92   28.81    1.19
    7       622          28.87   79.00   28.86    1.36
    13      2488         28.80   74.86   27.79    1.41
    16      2488         26.40   78.03   28.73    4.82

(c) Analyst: Two Links, Link Capacities Doubled
    13, 16  2488, 2488   27.36   73.00   27.02    3.51

(d) Optimization: Single Link, Link Capacity Doubled
    Not found (time limit reached):   29.90   78.11   28.85   0.00

(e) Optimization: Two Links, Link Capacities Doubled
    Not found (time limit reached):   29.90   78.11   28.85   0.00

TABLE 6. DS8 SUMMARY
(a) DS11: OD pairs = 320, demand range = 80, mean demand = 60
    Stage 1: Guaranteed % Delivered Demand = 95.55
    Stage 2: Revenue Missed = 0.17%, Bandwidth Utilization = 60.70%, Bandwidth Efficiency = 20.52%

Columns for (b)-(e): Link(s) #; Original Capacity (Mbps); Revenue Missed (%); Bandwidth Utilization (%); Bandwidth Efficiency (%); Revenue Increase (%).

(b) Analyst: Single Link, Link Capacity Doubled
    25      622        0.35   59.51   20.21   -0.18
    13      2488       0.28   58.14   19.47   -0.11
    15      2488       0.19   58.27   19.49   -0.02
    9       622        0.18   60.03   20.25    0.00
    23      622        0.14   59.63   20.26    0.03
    11      622        0.09   60.12   20.27    0.08
    20      622        0.09   59.77   20.27    0.08

(c) Analyst: Two Links, Link Capacities Doubled
    11, 20  622, 622   0.07   59.19   20.01    0.10

(d) Optimization: Single Link, Link Capacity Doubled
    23      622        0.15   59.43   20.26    0.02

(e) Optimization: Two Links, Link Capacities Doubled
    9, 23   622, 622   0.30   58.22   19.97   -0.13

TABLE 7. DS11 SUMMARY
(a) DS12: OD pairs = 320, demand range = 80, mean demand = 60
    Stage 1: Guaranteed % Delivered Demand = 89.20
    Stage 2: Revenue Missed = 0.23%, Bandwidth Utilization = 59.65%, Bandwidth Efficiency = 19.64%

Columns for (b)-(e): Link(s) #; Original Capacity (Mbps); Revenue Missed (%); Bandwidth Utilization (%); Bandwidth Efficiency (%); Revenue Increase (%).

(b) Analyst: Single Link, Link Capacity Doubled
    11      622         0.30   59.02   19.37   -0.04
    19      622         0.30   50.05   19.38   -0.02
    17      622         0.24   58.85   19.38   -0.01
    13      2488        0.20   57.13   18.67    0.05
    25      622         0.16   58.84   19.40    0.07
    20      622         0.12   58.58   19.41    0.11
    9       622         0.10   58.93   19.40    0.11
    15      2488        0.00   57.43   18.70    0.23

(c) Analyst: Two Links, Link Capacities Doubled
    9, 15   622, 2488   0.02   56.13   18.47    0.22

(d) Optimization: Single Link, Link Capacity Doubled
    9       622         0.30   58.72   19.37   -0.07

(e) Optimization: Two Links, Link Capacities Doubled
    11, 19  622, 622    0.16   58.72   19.37    0.07

TABLE 8. DS12 SUMMARY
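The single-link "Analyst" experiments summarized in Tables 3-8 amount to a simple ranking procedure: double one candidate link's capacity, re-solve, record the revenue increase, and pick the best candidate. The sketch below illustrates only that bookkeeping step, using the DS12 single-link rows from Table 8 as input; in practice each tuple would come from re-solving the two-stage model, and the function name is ours, not the report's.

```python
# Rank candidate single-link capacity upgrades by observed revenue increase,
# mirroring the "Analyst: Single Link" experiments in Tables 3-8.
# The data below are the DS12 rows from Table 8:
# (link #, original capacity in Mbps, revenue missed %, revenue increase %).
ds12_single_link = [
    (11, 622, 0.30, -0.04), (19, 622, 0.30, -0.02), (17, 622, 0.24, -0.01),
    (13, 2488, 0.20, 0.05), (25, 622, 0.16, 0.07), (20, 622, 0.12, 0.11),
    (9, 622, 0.10, 0.11), (15, 2488, 0.00, 0.23),
]

def rank_upgrades(results):
    """Sort candidate upgrades by revenue increase, best candidate first."""
    return sorted(results, key=lambda r: r[3], reverse=True)

best = rank_upgrades(ds12_single_link)[0]
print(f"Best single-link upgrade: link {best[0]} "
      f"({best[1]} Mbps, revenue increase {best[3]}%)")
```

For DS12 this recovers the ranking visible in Table 8(b): doubling link 15 yields the largest revenue increase of the single-link candidates.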