Deriving Traffic Demands for Operational IP Networks: Methodology and Experience
Transcript of Deriving Traffic Demands for Operational IP Networks: Methodology and Experience
1
Deriving Traffic Demands for Operational IP Networks: Methodology and Experience
Anja Feldmann*, Albert Greenberg, Carsten Lund, Nick Reingold, Jennifer Rexford, and Fred True
Internet and Networking Systems Research Lab, AT&T Labs-Research; Florham Park, NJ
*University of Saarbruecken
PowerPoint: view slide show for animation; view notes page for notes
2
Traffic Engineering For Operational IP Networks
Improve user performance and network efficiency by tuning router configuration to the prevailing traffic demands.
– Why?
– Time Scale?
[Figure: the AT&T backbone (AS 7018) carrying synthetic loads between the backbone and access links to "some customers or peers."]
3
Traffic Engineering Stack
Topology of the ISP backbone
– Connectivity and capacity of routers and links
Traffic demands
– Expected/offered load between points in the network
Routing configuration
– Tunable rules for selecting a path for each flow
Performance objective
– Balanced load, low latency, service level agreements, …
Optimization procedure
– Given the topology and the traffic demands in an IP network, tune routes to optimize a particular performance objective
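The stack above can be condensed into a tiny optimization loop: evaluate the performance objective for each tunable routing configuration and keep the best. Everything below (link names, capacities, the candidate routings, and the max-utilization objective) is an illustrative assumption, not the paper's actual optimization procedure.

```python
# Hypothetical sketch of the traffic-engineering stack: topology (link
# capacities), traffic demands, tunable routings, and an objective
# (minimize the worst-case link utilization). All numbers are invented.

def max_utilization(routing, demands, capacity):
    """Load each demand onto the links of its chosen path, then return
    the highest link utilization (load / capacity)."""
    load = {link: 0.0 for link in capacity}
    for flow, volume in demands.items():
        for link in routing[flow]:
            load[link] += volume
    return max(load[l] / capacity[l] for l in capacity)

def optimize(candidate_routings, demands, capacity):
    """Optimization procedure: evaluate the objective for each routing
    configuration and keep the best one."""
    return min(candidate_routings,
               key=lambda r: max_utilization(r, demands, capacity))

capacity = {"A-B": 100.0, "A-C": 100.0, "C-B": 100.0}
demands = {("A", "B"): 90.0, ("A", "C"): 30.0}
routings = [
    {("A", "B"): ["A-B"], ("A", "C"): ["A-C"]},          # direct
    {("A", "B"): ["A-C", "C-B"], ("A", "C"): ["A-C"]},   # detour via C
]
best = optimize(routings, demands, capacity)
print(max_utilization(best, demands, capacity))  # 0.9 (direct routing wins)
```

The detour routing would push 120 units onto the 100-unit A-C link (utilization 1.2), so the direct routing's 0.9 is preferred.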
4
Traffic Demands
How to model the traffic demands?
– Know where the traffic is coming from and going to
– Support what-if questions about topology and routing changes
– Handle the large fraction of traffic crossing multiple domains
How to populate the demand model?
– Typical measurements show only the impact of traffic demands
» Active probing of delay, loss, and throughput between hosts
» Passive monitoring of link utilization and packet loss
– Need network-wide direct measurements of traffic demands
How to characterize the traffic dynamics?
– User behavior, time-of-day effects, and new applications
– Topology and routing changes within or outside your network
5
Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
Results
Conclusions
6
Outline
Sound traffic model for traffic engineering of operational IP networks
– Point-to-Multipoint Model
Methodology for populating the model
Results
Conclusions
7
Traffic Demands
[Figure: a Web site and a user site exchanging traffic across the "Big Internet."]
8
Traffic Demands: Interdomain Traffic
[Figure: traffic from a Web site to a user site U crosses AS 1 through AS 4; BGP advertisements ("AS 3, U" and "AS 4, AS 3, U") determine the interdomain path.]
• What path will be taken between AS's to get to the user site?
• Next: What path will be taken within an AS to get to the user site?
9
Traffic Demands
Zoom in on one AS
[Figure: inside the AS, traffic entering at link IN can exit at OUT1, OUT2, or OUT3; internal link weights (e.g., 10, 25, 50, 75, 110, 200, 300) determine which exit the flow takes.]
Change in internal routing configuration changes flow exit point!
10
Point-to-Multipoint Demand Model
Definition: V(in, {out}, t)
– Entry link (in)
– Set of possible exit links ({out})
– Time period (t)
– Volume of traffic (V(in, {out}, t))
Avoids the "coupling" problem with traditional point-to-point (input-link to output-link) models:
[Figure: a feedback loop, drawn twice: point-to-point demand model → traffic engineering → improved routing. Improved routing shifts flow exit points, which changes the point-to-point demands that drove the optimization.]
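A minimal sketch of how V(in, {out}, t) might be represented in code: the demand key is the entry link, the frozen set of possible exit links, and a time bin. Link names, time bins, and byte counts below are hypothetical.

```python
# Sketch of the point-to-multipoint demand model V(in, {out}, t).
# A demand is keyed by its entry link, the *set* of possible exit links
# (frozen so it can serve as a dict key), and a time period; the value
# is a traffic volume in bytes. All names and numbers are illustrative.
from collections import defaultdict

demands = defaultdict(float)  # (in_link, frozenset(out_links), t) -> bytes

def add_measurement(in_link, egress_set, t, nbytes):
    """Fold one flow measurement into the demand it belongs to."""
    demands[(in_link, frozenset(egress_set), t)] += nbytes

# Two flows entering at the same link toward a destination reachable
# via three exit links aggregate into a single demand.
add_measurement("in1", {"out1", "out2", "out3"}, t=0, nbytes=500)
add_measurement("in1", {"out1", "out2", "out3"}, t=0, nbytes=700)

print(demands[("in1", frozenset({"out1", "out2", "out3"}), 0)])  # 1200.0
```

Keeping the whole egress set in the key is what avoids the coupling problem: the demand stays valid even when rerouting moves traffic to a different exit within the set.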
11
Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
– Ideal
– Adapted to focus on interdomain traffic and to meet practical constraints in an operational, commercial IP network
Results
Conclusions
12
Ideal Measurement Methodology
Measure traffic where it enters the network
– Input link, destination address, # bytes, and time
– Flow-level measurement (Cisco NetFlow)
Determine where traffic can leave the network
– Set of egress links associated with each network address (forwarding tables)
Compute traffic demands
– Associate each measurement with a set of egress links
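The three steps above could be sketched as follows, assuming a toy reachability table; the prefixes, link names, and byte counts are invented, and a longest-prefix match stands in for a real forwarding lookup.

```python
# Sketch of the ideal methodology: take (input link, destination
# address, bytes) measurements, look up the destination's longest
# matching prefix, and attribute the bytes to the demand keyed by
# (ingress link, egress-link set). All data below is illustrative.
import ipaddress

# prefix -> set of egress links where that prefix can leave the network
reach = {
    ipaddress.ip_network("12.34.156.0/24"): {"out1", "out2"},
    ipaddress.ip_network("12.0.0.0/8"): {"out3"},
}

def egress_set(dst):
    """Longest-prefix match of the destination against reachability data."""
    addr = ipaddress.ip_address(dst)
    matches = [p for p in reach if addr in p]
    return reach[max(matches, key=lambda p: p.prefixlen)] if matches else set()

demand = {}
for in_link, dst, nbytes in [("in1", "12.34.156.5", 400),
                             ("in1", "12.9.9.9", 100)]:
    key = (in_link, frozenset(egress_set(dst)))
    demand[key] = demand.get(key, 0) + nbytes

print(demand[("in1", frozenset({"out1", "out2"}))])  # 400
```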
13
Adapted Measurement Methodology: Interdomain Focus
A large fraction of the traffic is interdomain
Interdomain traffic is easiest to capture
– Large number of diverse access links to customers
– Small number of high-speed links to peers
Practical solution
– Flow-level measurements at peering links (both directions!)
– Reachability information from all routers
14
Inbound and Outbound Flows on Peering Links
[Figure: inbound flows enter at peering links from peers and head toward customers; outbound flows enter from customers and leave at peering links.]
Note: Ideal methodology applies for inbound flows.
15
Most Challenging Part: Inferring Ingress Links for Outbound Flows
[Figure: an outbound traffic flow measured at a peering link, with its destination among the customers; the ingress link ("? input") is unknown.]
Use routing simulation to trace back to the ingress links!
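One way to picture the trace-back, as a hedged sketch: assume a routing simulator has already produced the exit link for each (ingress, destination prefix) pair; an ingress is then a candidate when its simulated exit matches the peering link where the flow was actually observed. The table and all link names below are hypothetical.

```python
# Sketch of ingress inference for outbound flows. The flow is measured
# only where it leaves the network, so we check, for each customer
# ingress where the source prefix attaches, whether traffic entering
# there toward the flow's destination would exit at the observed
# peering link. The "simulated_exit" table stands in for the output of
# a real intradomain routing simulation; everything here is invented.

simulated_exit = {
    ("cust_a", "peer_net"): "peer1",
    ("cust_b", "peer_net"): "peer2",
    ("cust_c", "peer_net"): "peer1",
}

def candidate_ingresses(src_ingresses, dst_prefix, observed_exit):
    """Keep only the ingress links from which the routing simulation
    sends dst_prefix out the peering link where the flow was seen."""
    return {i for i in src_ingresses
            if simulated_exit.get((i, dst_prefix)) == observed_exit}

# The source prefix attaches at three customer links; the flow was
# measured leaving at peer1, so two ingresses remain plausible.
print(sorted(candidate_ingresses({"cust_a", "cust_b", "cust_c"},
                                 "peer_net", "peer1")))  # ['cust_a', 'cust_c']
```

When several ingresses survive the check, the flow's volume has to be split across them, which is exactly the disambiguation discussed in the results.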
16
Computing the Demands
Data
– Large, diverse, lossy
– Collected at slightly different, overlapping time intervals, across the network
– Subject to network and operational dynamics; anomalies explained and fixed via understanding of these dynamics
Algorithms, details, and anecdotes in the paper!
[Figure: forwarding tables, configuration files, NetFlow, and SNMP data flowing from the network to a researcher "in data mining gear."]
17
Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
Results
– Effectiveness of measurement methodology
– Traffic characteristics
Conclusions
18
Experience with Populating the Model
Largely successful
– 98% of all traffic (bytes) associated with a set of egress links
– 95–99% of traffic consistent with an OSPF simulator
Disambiguating outbound traffic
– 67% of traffic associated with a single ingress link
– 33% of traffic split across multiple ingress links (typically in the same city!)
Inbound and transit traffic (uses input measurement)
– Results are good
Outbound traffic (uses input disambiguation)
– Results are pretty good for traffic engineering applications, but there are limitations
– To improve results, may want to measure at selected or sampled customer links, e.g., links to email, hosting, or data centers
19
Proportion of Traffic in Top Demands (Log Scale)
Zipf-like distribution: a relatively small number of heavy demands dominates.
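A quick sanity check of the Zipf-like claim, with an assumed exponent of 1 and an invented demand count; these are illustrative parameters, not the measured values from the slide.

```python
# If demand volumes fall off like rank^(-1), the few heaviest demands
# carry a disproportionate share of the bytes. Exponent and demand
# count below are assumptions for illustration only.

def top_k_share(volumes, k):
    """Fraction of total traffic carried by the k largest demands."""
    v = sorted(volumes, reverse=True)
    return sum(v[:k]) / sum(v)

volumes = [1.0 / rank for rank in range(1, 10001)]  # Zipf, exponent 1
share = top_k_share(volumes, 100)
print(share)  # the top 1% of demands carry roughly half the traffic
```

This is why the talk suggests optimizing routing around just the heavy demands: sampling a small fraction of flows still captures most of the load.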
20
Time-of-Day Effects (San Francisco)
Heavy demands at the same site may show different time-of-day behavior
[Figure: traffic volume over two days, with midnight EST marked on the time axis.]
21
Discussion
Distribution of traffic volume across demands
– Small number of heavy demands (Zipf's Law!)
– Optimize routing based on the heavy demands
– Measure a small fraction of the traffic (sample)
– Watch out for changes in load and egress links
Time-of-day fluctuations in traffic volumes
– U.S. business, U.S. residential, & international traffic
– Depends on the time of day for the human end-point(s)
– Reoptimize the routes a few times a day (three?)
Stability?
– No and yes
22
Outline
Sound traffic model for traffic engineering of operational IP networks
Methodology for populating the model
Results
Conclusions
– Related work
– Future work
23
Related Work
Bigger picture
– Topology/configuration (technical report)
» "IP network configuration for traffic engineering"
– Routing model (IEEE Network, March/April 2000)
» "Traffic engineering for IP networks"
– Route optimization (INFOCOM'00)
» "Internet traffic engineering by optimizing OSPF weights"
Populating point-to-point demand models
– Direct observation of MPLS MIBs (GlobalCenter)
– Inference from per-link statistics (Berkeley/Bell-Labs)
– Direct observation via trajectory sampling (next talk!)
24
Future Work
Analysis of stability of the measured demands
Online collection of topology, reachability, & traffic data
Modeling the selection of the ingress link (e.g., use of multi-exit discriminators in BGP)
Tuning BGP policies to the prevailing traffic demands
Interactions of traffic engineering with other resource allocation schemes (TCP, overlay networks for content delivery, BGP traffic engineering "games" among ISPs)
25
Backup
26
Identifying Where the Traffic Can Leave
Traffic flows
– Each flow has a destination IP address (e.g., 12.34.156.5)
– Each address belongs to a prefix (e.g., 12.34.156.0/24)
Forwarding tables
– Each router has a table to forward a packet to a "next hop"
– The forwarding table maps a prefix to a "next hop" link
Process
– Dump the forwarding table from each edge router
– Identify entries where the "next hop" is an egress link
– Identify the set of all egress links associated with each prefix
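The three-step process above might look like this in code, assuming toy forwarding tables and an invented set of egress link names:

```python
# Sketch of building the prefix -> {egress links} map: scan each edge
# router's forwarding table, keep entries whose next hop is an egress
# link, and union the matches per prefix. All tables, prefixes, and
# link names are illustrative.

forwarding_tables = {
    "router1": {"12.34.156.0/24": "egress1", "10.0.0.0/8": "internal7"},
    "router2": {"12.34.156.0/24": "egress2", "12.0.0.0/8": "egress3"},
}
egress_links = {"egress1", "egress2", "egress3"}

prefix_egress = {}
for table in forwarding_tables.values():
    for prefix, next_hop in table.items():
        if next_hop in egress_links:  # drop purely internal next hops
            prefix_egress.setdefault(prefix, set()).add(next_hop)

print(prefix_egress["12.34.156.0/24"])  # egress1 and egress2, in set order
```

A prefix reachable from several edge routers naturally ends up with a multi-element egress set, which is exactly what the point-to-multipoint demand model consumes.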
27
Measuring Only at Peering Links
Why measure only at peering links?
– Measurement support directly in the interface cards
– Small number of routers (lower management overhead)
– Less frequent changes/additions to the network
– Smaller amount of measurement data
Why is this enough?
– Large majority of traffic is interdomain
– Measurement enabled in both directions (in and out)
– Inference of ingress links for traffic from customers
28
Full Classification of Traffic Types at Peering Links
[Figure: four traffic classes relative to the peering links: internal (customer to customer), inbound (peer to customer), outbound (customer to peer), and transit (peer to peer).]
29
Flows Leaving at Peer Links
Single-hop transit
– Flow enters and leaves the network at the same router
– Keep the single flow record measured at the ingress point
Multi-hop transit
– Flow measured twice, as it enters and leaves the network
– Avoid double counting by omitting the second flow record
– Discard the flow record if the source does not match a customer
Outbound
– Flow measured only as it leaves the network
– Keep the flow record if the source address matches a customer
– Identify the ingress link(s) that could have sent the traffic
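The bookkeeping above can be condensed into a single keep/discard predicate; the record field names and the simplified customer check are assumptions made for illustration.

```python
# Sketch of deciding which flow records measured at egress peering
# links to keep when computing demands: single-hop transit is kept,
# the duplicate record of multi-hop transit is dropped (it was already
# counted at its ingress), and a remaining record is outbound only if
# its source matches a customer. Field names here are invented.

def keep_record(rec):
    """Return True if this egress-side flow record should be kept."""
    if rec["same_router"]:          # single-hop transit: enters and
        return True                 # leaves at the same router
    if rec["seen_at_ingress"]:      # multi-hop transit: already counted
        return False                # at the ingress peering link
    return rec["customer_src"]      # outbound iff source is a customer

records = [
    {"id": 1, "same_router": True,  "customer_src": False, "seen_at_ingress": False},
    {"id": 2, "same_router": False, "customer_src": False, "seen_at_ingress": True},
    {"id": 3, "same_router": False, "customer_src": True,  "seen_at_ingress": False},
    {"id": 4, "same_router": False, "customer_src": False, "seen_at_ingress": False},
]
kept = [r["id"] for r in records if keep_record(r)]
print(kept)  # [1, 3]
```

Record 4, whose source matches no customer and which was never seen entering, is discarded as an unmatched transit leftover.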
30
Results: Populating the Model

                              Data Used
Traffic type | Ingress        | Egress                  | Effectiveness
Inbound      | NetFlow        | Reachability            | Good
Transit      | NetFlow        | NetFlow & Reachability  | Good
Outbound     | Packet filters | NetFlow & Reachability  | Pretty good
Internal     | X              | Reachability            | X