Hosting Virtual Networks on Commodity Hardware
VINI Summer Camp
Decouple Service from Infrastructure
• Service: “slices” of physical infrastructure
  – Applications and networks that benefit from:
    • Flexible, custom topologies
    • Application-specific routing
• Infrastructure: needed to build networks
Fixed Physical Infrastructure
Shared By Many Parties
Network Virtualization: 3 Aspects
• Host: Divide the resources of one physical node into the appearance of multiple distinct hosts
• Network stack: Give each process its own interfaces, routing table, etc.
• Links: Connect two nodes by composing underlying links
Why Virtual Networks
• Sharing amortizes costs
  – Enterprise network or small ISP does not have to buy separate routers, switches, etc.
  – Large ISP can easily expand to a new data center without buying separate equipment
• Programmability and customizability
• Testing in realistic environments
Why Commodity Hardware
• Lower barrier to entry
  – Servers are inexpensive
  – Routing (e.g., Quagga) and forwarding (e.g., Click) software is open source (free)
• No need for specialized hardware
  – Open-source routing software: Quagga, etc.
  – Network processors can be hard to program
• Easy adaptation of physical infrastructure
  – Expansion is easy: buy more servers
Commercial Motivation: Logical Routers
• Consolidation
  – PoP and Core
  – Simpler physical topology
  – Fewer physical interconnections
• Application-Specific Routing
• Wholesale Router Market
• Proof-of-Concept Deployment
Other Beneficiaries
• Interactive applications: require application-specific routing protocols
  – Gaming
  – VoIP
• Critical services: benefit from a custom data plane
  – Applications that need more debugging info
  – Applications with stronger security requirements
Requirements
• Speed: Packet forwarding rates that approach those of native, in-kernel forwarding
• Flexibility: Support for custom routing protocols and topologies
• Isolation: Separation of resource utilization and namespaces
Host Virtualization
• Full virtualization: VMware Server, KVM
  – Advantage: No changes to guest OS, good isolation
  – Disadvantage: Slow
• Paravirtualization: Xen, Viridian
• OS-level virtualization: OpenVZ, VServers, Jail
  – Advantage: Fast
  – Disadvantage: Requires special kernel, less isolation
Network Stack Virtualization
• Allows each container to have its own:
  – Interfaces
  – View of IP address space
  – Routing and ARP tables
• VServer does not provide this function
  – Solution 1: Patch VServer with NetNS (sketched below)
  – Solution 2: OpenVZ
• VServer is already used for PlanetLab
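For concreteness, here is a minimal Python sketch of the per-container network stack idea, driving iproute2 via subprocess. It assumes a modern kernel where the NetNS mechanism is exposed through the standalone `ip netns` command (rather than the VServer+NetNS patch the slides describe); the slice name is a placeholder.

import subprocess

def sh(cmd):
    # Run one shell command, raising if it fails.
    subprocess.run(cmd, shell=True, check=True)

NS = "slice1"  # hypothetical slice/container name

sh(f"ip netns add {NS}")                     # new namespace: its own interfaces,
                                             # routing table, and ARP table
sh(f"ip netns exec {NS} ip link set lo up")  # bring up its private loopback

# The namespace starts with an empty routing table, independent of the host's:
sh(f"ip netns exec {NS} ip route show")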
Link Virtualization
• Containers need Ethernet connectivity
  – Routers expect direct Ethernet connections to neighbors
• Linux GRE tunnels support only IP-in-IP
• Solution: Ethernet GRE (EGRE) tunnel
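A rough sketch of the EGRE idea, assuming a current kernel where Ethernet-over-GRE is available as the `gretap` device (the slides used a custom EGRE tunnel module); endpoint addresses and the key are placeholders.

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

LOCAL, REMOTE, KEY = "10.1.1.1", "10.1.1.2", 42  # hypothetical tunnel endpoints

# gretap encapsulates Ethernet frames in GRE, so the virtual link looks like
# a direct Ethernet connection to the routing software on each end.
sh(f"ip link add egre0 type gretap local {LOCAL} remote {REMOTE} key {KEY}")
sh("ip link set egre0 up")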
Synthesis
• Tunnel interface outside of container
  – Permits traffic shaping outside of container
  – Easier to create point-to-multipoint topology
• Need to connect tunnel interface to virtual interface
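One way to realize this split, sketched under the same assumptions as above: the tunnel endpoint stays in the host context, and the container gets a virtual interface via a veth pair whose other end remains outside (where traffic shaping can be applied). Connecting that outside end to the tunnel is what the bridge and ShortBridge below do. Names and addresses are placeholders.

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

NS = "slice1"  # namespace/container from the earlier sketch (assumed to exist)

# veth pair: veth-host stays outside the container, veth-c becomes the
# container's virtual interface.
sh("ip link add veth-host type veth peer name veth-c")
sh(f"ip link set veth-c netns {NS}")
sh(f"ip netns exec {NS} ip addr add 192.168.10.2/24 dev veth-c")
sh(f"ip netns exec {NS} ip link set veth-c up")
sh("ip link set veth-host up")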
Connecting Interfaces: Bridge
• Linux bridge module: connects the virtual interface with the tunnel interface
  – Speed suffers due to bridge table lookup
  – Allows point-to-multipoint topologies
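A minimal sketch of the bridged configuration, assuming the veth pair and EGRE tunnel from the sketches above and using the bridge support in iproute2.

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

sh("ip link add br0 type bridge")       # software bridge with a MAC learning table
sh("ip link set veth-host master br0")  # host side of the container's veth pair
sh("ip link set egre0 master br0")      # Ethernet-GRE tunnel toward the neighbor
sh("ip link set br0 up")

# Every forwarded frame now pays a bridge-table lookup (the performance cost
# noted above), but further tunnels can be attached to the same bridge to
# build point-to-multipoint topologies.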
Optimization: ShortBridge
• Kernel module used to join virtual interface inside the container with the tunnel interface
• Achieves high packet forwarding rate
Evaluation
• Forwarding performance
  – Packets per second (PPS), measured as sketched below
  – Source -> Node under test -> Sink
• Isolation
  – Jitter/loss measurements with bursty cross traffic
• Scalability
  – Forwarding performance as the number of containers grows
• All tests were conducted on Emulab
  – 3 GHz CPU, 1 MB L2 cache, 800 MHz FSB, 2 GB 400 MHz DDR2 RAM
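One simple way to obtain a packets-per-second number at the sink is to sample the kernel's interface counters, as in the Python sketch below; the interface name and sample window are placeholders, and the actual measurement tooling used in the evaluation may differ.

import time

def rx_packets(iface):
    # Read the receive packet counter for iface from /proc/net/dev.
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[1])  # rx columns: bytes, packets, errs, ...
    raise ValueError(f"interface {iface} not found")

IFACE, INTERVAL = "eth1", 1.0  # hypothetical sink interface and sample window

before = rx_packets(IFACE)
time.sleep(INTERVAL)
after = rx_packets(IFACE)
print(f"{(after - before) / INTERVAL:.0f} packets/s received on {IFACE}")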
Forwarding Performance - Click
• Minimal Click configuration
  – Raw UDP receive -> send
• Higher jitter
• ~80,000 PPS
Forwarding Performance - Bridged
• Allows more flexibility through bridging
• ~250,000 PPS
Forwarding Performance – Bridged w/o Tunneling
• Xen: often crashes, ~70,000 PPS
• OpenVZ: ~300,000 PPS
• NetNS: ~300,000 PPS
Forwarding Performance – Spliced
• Avoids bridging overhead
• Point-to-point topologies only
• ~500,000 PPS
Forwarding Performance - Direct
• No resource control
• ~580,000 PPS
Overall Forwarding Performance
Forwarding for Different Packet Sizes
Isolation
• Setup:
  – 5 nodes; 2 pairs of source + sink
  – 2 NetNS containers in spliced mode
  – pktgen used to generate the cross flow (sketched below)
  – iperf measures jitter on another flow
• Step function:
  – CPU utilization < 99%: no loss, 0.5 ms jitter
  – CPU utilization at or above ~100%: loss; 0.5 ms jitter for delivered packets
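For reference, a sketch of driving the in-kernel pktgen tool to generate the bursty cross flow while iperf measures jitter on the other flow. This assumes pktgen is loaded (modprobe pktgen) and the script runs as root; the interface, addresses, MAC, and packet counts are placeholders.

def pg(path, cmd):
    # Write one pktgen command to a /proc/net/pktgen control file.
    with open(path, "w") as f:
        f.write(cmd + "\n")

DEV = "eth2"  # hypothetical NIC used for the cross-traffic flow

pg("/proc/net/pktgen/kpktgend_0", "rem_device_all")
pg("/proc/net/pktgen/kpktgend_0", f"add_device {DEV}")

pg(f"/proc/net/pktgen/{DEV}", "count 1000000")              # packets per burst
pg(f"/proc/net/pktgen/{DEV}", "pkt_size 64")                # minimum-size packets
pg(f"/proc/net/pktgen/{DEV}", "dst 10.0.2.2")               # sink's IP address
pg(f"/proc/net/pktgen/{DEV}", "dst_mac 00:11:22:33:44:55")  # next hop's MAC

pg("/proc/net/pktgen/pgctrl", "start")                      # blocks until the burst completes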
Scalability Test Setup
Scalability Results
Tradeoffs
• Bridge vs. ShortBridge
  – Bridge enables point-to-multipoint
  – ShortBridge is faster
• Data-plane flexibility vs. performance
  – Non-IP forwarding requires user-space processing (Click)
Future Work
• Resource allocation and scheduling
  – CPU
  – Interrupts/packet processing
• Long-running deployment on VINI testbed
• Develop applications for the platform
Questions
• Other motivations/applications?
• Other aspects to test?
• Design alternatives?