Demystifying and Controlling the Performance of Data Center Networks.


Why are Data Centers Important?

• Internal users
  – Line-of-Business apps
  – Production test beds

• External users
  – Web portals
  – Web services
  – Multimedia applications
  – Chat/IM

Why are Data Centers Important?

• Poor performance → loss of revenue
• Understanding traffic is crucial
• Traffic engineering is crucial

Road Map

• Understanding Data center traffic

• Improving network level performance

• Ongoing work

Canonical Data Center Architecture

[Figure: canonical three-tier data center topology – Core (L3) switches at the top, Aggregation (L2) switches in the middle, Edge (L2) Top-of-Rack switches at the bottom, with application servers attached to the edge]

Dataset: Data Centers Studied

DC Role              DC Name   Location     Number of Devices
Universities         EDU1      US-Mid        22
                     EDU2      US-Mid        36
                     EDU3      US-Mid        11
Private Enterprise   PRV1      US-Mid        97
                     PRV2      US-West      100
Commercial Clouds    CLD1      US-West      562
                     CLD2      US-West      763
                     CLD3      US-East      612
                     CLD4      S. America   427
                     CLD5      S. America   427

• 10 data centers, 3 classes: universities, private enterprise, and commercial clouds
• Universities/private enterprise: internal users, small, local to campus
• Clouds: external users, large, globally diverse

Dataset: Collection

• SNMP
  – Poll SNMP MIBs
  – Bytes-in/bytes-out/discards
  – > 10 days of data
  – Averaged over 5 mins (see the utilization sketch after this list)

• Packet traces
  – Cisco port span
  – 12 hours

• Topology
  – Cisco Discovery Protocol
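As a hedged illustration of how the raw SNMP byte counters above turn into the 5-minute utilization averages used later in the talk (the counter values and link speed below are invented examples, not values from the study):

```python
def avg_utilization(octets_start, octets_end, interval_s, link_bps):
    """Average link utilization over one SNMP polling interval.
    octets_*: ifInOctets/ifOutOctets samples at the start and end of the interval;
    interval_s: polling interval in seconds (300 s = 5 minutes);
    link_bps: link speed in bits per second."""
    bits_sent = (octets_end - octets_start) * 8
    return bits_sent / (interval_s * link_bps)

# Hypothetical 1 Gbps link that carried 15 GB over a 5-minute window
util = avg_utilization(octets_start=0, octets_end=15_000_000_000,
                       interval_s=300, link_bps=1_000_000_000)
print(f"{util:.0%} average utilization")   # 40%
```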

DC Name   SNMP   Packet Traces   Topology
EDU1      Yes    Yes             Yes
EDU2      Yes    Yes             Yes
EDU3      Yes    Yes             Yes
PRV1      Yes    Yes             Yes
PRV2      Yes    Yes             Yes
CLD1      Yes    No              No
CLD2      Yes    No              No
CLD3      Yes    No              No
CLD4      Yes    No              No
CLD5      Yes    No              No

Canonical Data Center Architecture

[Figure: the same three-tier topology – Core (L3), Aggregation (L2), Edge (L2) Top-of-Rack switches, and application servers – with packet sniffers attached at the edge]

Analyzing Packet Traces

• Transmission patterns of the applications
• Properties of packets are crucial for
  – Understanding the effectiveness of techniques

• ON-OFF traffic at the edges
  – Binned at 15 and 100 milliseconds
  – We observe that the ON-OFF pattern persists


Routing must react quickly to overcome bursts

Data-Center Traffic is Bursty

• Understanding the arrival process
  – Range of acceptable models

• What is the arrival process?
  – Heavy-tailed for the 3 distributions (ON times, OFF times, inter-arrival times)
  – Lognormal across all data centers

• Different from the Pareto distributions seen in the WAN
  – Need new models


Data Center   OFF Period Dist.   ON Period Dist.   Inter-arrival Dist.
Prv2_1        Lognormal          Lognormal         Lognormal
Prv2_2        Lognormal          Lognormal         Lognormal
Prv2_3        Lognormal          Lognormal         Lognormal
Prv2_4        Lognormal          Lognormal         Lognormal
EDU1          Lognormal          Weibull           Weibull
EDU2          Lognormal          Weibull           Weibull
EDU3          Lognormal          Weibull           Weibull

Need new models to generate traffic
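The table above only names the distribution families. As a minimal sketch in the spirit of the PRV2 rows (all three distributions lognormal), a generator could draw ON/OFF durations and per-packet inter-arrival times as below; the parameter values are invented for illustration, not fitted to the traces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only -- the talk reports distribution families
# (lognormal/Weibull), not these specific mu/sigma values.
ON_MU, ON_SIGMA = -4.0, 1.0      # ON period length (seconds), lognormal
OFF_MU, OFF_SIGMA = -3.0, 1.2    # OFF period length (seconds), lognormal
IAT_MU, IAT_SIGMA = -9.0, 1.0    # packet inter-arrival within an ON period

def generate_on_off_trace(duration_s=1.0):
    """Return packet timestamps for one edge link, alternating ON/OFF periods."""
    t, packets = 0.0, []
    while t < duration_s:
        on_len = rng.lognormal(ON_MU, ON_SIGMA)
        end_on = min(t + on_len, duration_s)
        while t < end_on:                               # packets arrive only during ON periods
            packets.append(t)
            t += rng.lognormal(IAT_MU, IAT_SIGMA)
        t = end_on + rng.lognormal(OFF_MU, OFF_SIGMA)   # silent OFF period
    return packets

print(len(generate_on_off_trace(1.0)), "packets in 1 second")
```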

Canonical Data Center Architecture

[Figure: canonical three-tier topology again – Core (L3), Aggregation (L2), Edge (L2) Top-of-Rack switches, and application servers]

Intra-Rack Versus Extra-Rack

• Quantify the amount of traffic using the interconnect
  – Perspective for the interconnect analysis

[Figure: one edge (Top-of-Rack) switch and its application servers – Intra-Rack traffic stays below the ToR switch, Extra-Rack traffic crosses its uplinks]

Extra-Rack = Sum of Uplinks

Intra-Rack = Sum of Server Links – Extra-Rack
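A minimal sketch of these two definitions applied to per-port byte counts; the Rack structure, field names, and numbers are illustrative assumptions, not the study's data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rack:
    uplink_bytes: List[int]        # bytes observed on each ToR uplink
    server_link_bytes: List[int]   # bytes observed on each server-facing port

def extra_rack(rack: Rack) -> int:
    # Extra-Rack = sum of uplinks: everything that left the rack
    return sum(rack.uplink_bytes)

def intra_rack(rack: Rack) -> int:
    # Intra-Rack = sum of server links - Extra-Rack: traffic that never
    # crossed the Top-of-Rack switch's uplinks
    return sum(rack.server_link_bytes) - extra_rack(rack)

rack = Rack(uplink_bytes=[10_000, 12_000], server_link_bytes=[40_000, 30_000, 20_000])
total = extra_rack(rack) + intra_rack(rack)
print(f"extra-rack share: {extra_rack(rack) / total:.0%}")
```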

Intra-Rack Versus Extra-Rack Results

• Clouds: most traffic stays within the rack (75%)
  – Colocation of applications and their dependent components

• Other DCs: > 50% leaves the rack
  – Un-optimized placement

[Chart: Intra-Rack vs Extra-Rack share of traffic (0-100%) for EDU1-EDU3, PRV1, PRV2, and CLD1-CLD5]

Extra-Rack Traffic on DC Interconnect

• Utilization: core > aggregation > edge
  – Aggregation of traffic from many links onto few

• Tail of core utilization differs
  – Hot-spots: links with > 70% utilization
  – Prevalence of hot-spots differs across data centers

Persistence of Core Hot-Spots

• Low persistence: PRV2, EDU1, EDU2, EDU3, CLD1, CLD3

• High persistence / low prevalence: PRV1, CLD2
  – 2-8% of links are hotspots > 50% of the time

• High persistence / high prevalence: CLD4, CLD5
  – 15% of links are hotspots > 50% of the time

Prevalence of Core Hot-Spots

• Low persistence: very few concurrent hotspots
• High persistence: few concurrent hotspots
• High prevalence: < 25% of links are hotspots at any time

[Chart: fraction of core links that are concurrent hotspots over a ~50-hour period; per-data-center values range from 0.0% up to 24.0%]

Smart routing can better utilize the core and avoid hotspots (a sketch of computing hot-spot persistence and prevalence follows)
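As a hedged illustration of how persistence and prevalence could be computed from the 5-minute SNMP utilization samples: the 70% hot-spot threshold comes from the slides, while the array layout and the synthetic data are assumptions.

```python
import numpy as np

HOTSPOT_UTIL = 0.70   # a link is a hot-spot when its utilization exceeds 70%

def hotspot_stats(util, threshold=HOTSPOT_UTIL):
    """util: array of shape (num_links, num_samples) with per-link utilization
    in [0, 1], e.g. 5-minute SNMP averages."""
    hot = util > threshold             # boolean hot-spot matrix
    persistence = hot.mean(axis=1)     # fraction of time each link is hot
    prevalence = hot.mean(axis=0)      # fraction of links hot at each sample
    return persistence, prevalence

# Toy example: 100 core links, 576 five-minute samples (~2 days)
rng = np.random.default_rng(1)
util = rng.beta(2, 5, size=(100, 576))
persistence, prevalence = hotspot_stats(util)
print(f"{(persistence > 0.5).mean():.0%} of links are hot-spots more than half the time")
print(f"at most {prevalence.max():.0%} of links are hot-spots at any one time")
```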

Insights Gained

• 75% of traffic stays within a rack (clouds)
  – Applications are not uniformly placed

• Traffic is bursty at the edge

• At most 25% of core links are highly utilized
  – An effective routing algorithm can reduce utilization
  – Load balance across paths and migrate VMs

Road Map

• Understanding Data center traffic

• Improving network level performance

• Ongoing work

Options for TE in Data Centers?

• Currently supported techniques
  – Equal Cost MultiPath (ECMP)
  – Spanning Tree Protocol (STP)

• Proposed
  – Fat-Tree, VL2

• Other existing WAN techniques
  – COPE, …, OSPF link tuning

How do we evaluate TE?

• Simulator
  – Input: traffic matrix, topology, traffic engineering scheme
  – Output: link utilization

• Optimal TE
  – Routes traffic using knowledge of the future TM

• Data center traces
  – Cloud data center (CLD)
    • Map-reduce app
    • ~1500 servers
  – University data center (UNV)
    • 3-Tier Web apps
    • ~500 servers

Drawbacks of Existing TE

• STP does not use multiple paths
  – 40% worse than optimal

• ECMP does not adapt to burstiness
  – 15% worse than optimal

Design Goals for Ideal TE

Design Requirements for TE

• Calculate paths & reconfigure the network
  – Use all network paths
  – Use a global view
    • Avoid local optima
  – Must react quickly
    • React to burstiness

• How predictable is traffic?

….

Is Data Center Traffic Predictable?

• YES! 27% or more of traffic matrix is predictable

• Manage predictable traffic more intelligently

[Chart: fraction of the traffic matrix that is predictable, annotated at 27% and 99%; a sketch of one way such a fraction could be computed follows]
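As a rough illustration of what counting "predictable" traffic could look like: the sketch below treats a ToR-to-ToR matrix entry as predictable if its demand in the next interval stays within a relative threshold of its demand in the current interval. The 20% threshold and the toy data are assumptions, not values from the slides.

```python
import numpy as np

def predictable_fraction(tm_t, tm_next, rel_threshold=0.20):
    """tm_t, tm_next: ToR-to-ToR traffic matrices (bytes) for two consecutive
    intervals.  An entry counts as 'predictable' if its demand changes by at
    most rel_threshold relative to the current interval (threshold illustrative)."""
    current = tm_t.astype(float)
    change = np.abs(tm_next - current)
    with np.errstate(divide="ignore", invalid="ignore"):
        rel_change = np.where(current > 0, change / current, np.inf)
    return (rel_change <= rel_threshold).mean()

# Toy example with 50 ToRs
rng = np.random.default_rng(2)
tm_t = rng.integers(0, 10_000, size=(50, 50))
tm_next = tm_t * rng.normal(1.0, 0.3, size=(50, 50))
print(f"{predictable_fraction(tm_t, tm_next):.0%} of the TM entries are predictable")
```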

How Long is Traffic Predictable?

• Different patterns of predictability
• 1 second of historical data is able to predict the future

[Chart: how long traffic stays predictable, with ranges of roughly 1.5 – 5.0 and 1.6 – 2.5 seconds]

MicroTE

MicroTE: Architecture

• Global view:
  – Created by the network controller

• React to predictable traffic:
  – Routing component tracks demand history

• All network paths:
  – Routing component creates routes using all paths

[Figure: MicroTE network controller composed of a monitoring component and a routing component]

Architectural Questions

• Efficiently gather network state?

• Determine predictable traffic?

• Generate and calculate new routes?

• Install network state?


Monitoring Component

• Efficiently gather the TM
  – Only one server per ToR monitors traffic
  – Transfer only the changed portion of the TM
  – Compress the data

• Tracking predictability
  – Calculate an EWMA over the TM (every second)
    • Empirically derived alpha of 0.2
    • Use time-bins of 0.1 seconds
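A minimal sketch of the EWMA tracking described above: alpha = 0.2 and the 0.1-second bins come from the slide, while the matrix size and traffic values are invented for illustration.

```python
import numpy as np

ALPHA = 0.2          # empirically derived smoothing factor from the slides
BIN_SECONDS = 0.1    # traffic is binned into 0.1-second samples

def update_ewma(ewma_tm, new_tm, alpha=ALPHA):
    """One EWMA update of the ToR-to-ToR traffic matrix.
    ewma_tm: smoothed TM so far; new_tm: bytes observed in the latest bin."""
    return alpha * new_tm + (1.0 - alpha) * ewma_tm

# Toy example: fold the ten 0.1-second bins of one second into the estimate
rng = np.random.default_rng(3)
ewma_tm = np.zeros((50, 50))
for _ in range(int(1.0 / BIN_SECONDS)):
    sample = rng.integers(0, 1_000, size=(50, 50))
    ewma_tm = update_ewma(ewma_tm, sample)
```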

Routing Component

[Figure: routing component pipeline – new global view → determine predictable ToRs → calculate network routes for predictable traffic → set ECMP for unpredictable traffic → install routes]

Routing Predictable Traffic

• LP formulation
  – Constraints
    • Flow conservation
    • Capacity constraint
    • Use K equal-length paths
  – Objective
    • Minimize the maximum link utilization (MLU)
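A hedged sketch of this LP in our own notation (the slide only names the constraints; the exact formulation in the paper may differ): F is the set of predictable ToR-pair flows, d_f the demand of flow f, P_f its K equal-length candidate paths, c_e the capacity of link e, x_{f,p} the fraction of d_f sent on path p, and U the maximum link utilization.

```latex
\begin{align*}
\min \quad & U \\
\text{s.t.} \quad
  & \sum_{p \in P_f} x_{f,p} = 1
      && \forall f \in F \quad \text{(flow conservation: demand fully routed)} \\
  & \sum_{f \in F} \; \sum_{p \in P_f : e \in p} d_f \, x_{f,p} \le U \, c_e
      && \forall e \in E \quad \text{(capacity: utilization of link } e \text{ at most } U\text{)} \\
  & x_{f,p} \ge 0
      && \forall f \in F, \; p \in P_f \quad (P_f = K \text{ equal-length paths for } f)
\end{align*}
```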

• Bin-packing heuristic
  – Sort flows in decreasing order
  – Place each flow on the link with the greatest remaining capacity
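A minimal sketch of this greedy heuristic, interpreting "link with the greatest remaining capacity" as the candidate path whose bottleneck link has the most spare capacity; the data structures are illustrative assumptions, not MicroTE's actual ones.

```python
def bin_pack_routes(flows, paths, link_capacity):
    """flows: list of (flow_id, demand); paths: flow_id -> list of candidate
    paths, each a list of links; link_capacity: link -> capacity."""
    remaining = dict(link_capacity)
    assignment = {}
    # Sort flows in decreasing order of demand
    for flow_id, demand in sorted(flows, key=lambda f: f[1], reverse=True):
        # Pick the candidate path with the greatest remaining (bottleneck) capacity
        best = max(paths[flow_id], key=lambda p: min(remaining[l] for l in p))
        assignment[flow_id] = best
        for link in best:
            remaining[link] -= demand
    return assignment

# Toy example: two flows, two single-link candidate paths each
flows = [("f1", 60), ("f2", 40)]
paths = {"f1": [["a"], ["b"]], "f2": [["a"], ["b"]]}
print(bin_pack_routes(flows, paths, {"a": 100, "b": 100}))
# -> {'f1': ['a'], 'f2': ['b']}
```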

Implementation

• Changes to the data center
  – Switches
    • Install OpenFlow firmware
  – End hosts
    • Add a kernel module

• New component
  – Network controller
    • C++ NOX modules

Evaluation

Evaluation: Motivating Questions

• How does MicroTE Compare to Optimal?

• How does MicroTE perform under varying levels of predictability?

• How does MicroTE scale to large DCNs?

• What overhead does MicroTE impose?


How do we evaluate TE?

• Simulator
  – Input: traffic matrix, topology, traffic engineering scheme
  – Output: link utilization

• Optimal TE
  – Routes traffic using knowledge of the future TM

• Data center traces
  – Cloud data center (CLD)
    • Map-reduce app
    • ~1500 servers
  – University data center (UNV)
    • 3-Tier Web apps
    • ~400 servers

Performing Under Realistic Traffic

• Significantly outperforms ECMP
• Slightly worse than optimal (1%-5%)
• Bin-packing and LP have comparable performance

Performance Versus Predictability

• Under low predictability, performance is similar to ECMP

[Chart: maximum link utilization (MLU) over time (in seconds) for ECMP, MicroTE, and Optimal]

Performance Versus Predictability

• Under low predictability, performance is similar to ECMP
• Under high predictability, performance is comparable to Optimal
• MicroTE adjusts according to predictability

[Chart: maximum link utilization (MLU) over time (in seconds) for ECMP, MicroTE, and Optimal]

Conclusion

• Studied existing TE
  – Found it lacking (15-40% worse than optimal)

• Studied data center traffic
  – Discovered traffic predictability (27% for 2 secs)

• Developed guidelines for ideal TE

• Designed and implemented MicroTE
  – Brings the state of the art within 1-5% of ideal
  – Efficiently scales to large DCs (16K servers)

Road Map

• Understanding Data center traffic

• Improving network level performance

• Ongoing work

Looking forward

• Stop treating the network as a carrier of bits

• Bits in the network have meaning
  – Applications know this meaning

• Can applications control networks?
  – E.g., Map-reduce
    • Scheduler performs network-aware task placement and flow placement