Software Defined Data Centers - June 2012

Software Defined Data Centers Brent Salisbury Network Architect University of Kentucky [email protected]

description

Software Defined Data Centers slide deck from June' 2012. All thoughts are irresponsibly magical. http://networkstatic.net

Transcript of Software Defined Data Centers - June 2012

Page 1: Software Defined Data Centers - June 2012

Software Defined Data Centers

Brent Salisbury, Network Architect, University of Kentucky, [email protected]

Page 2: Software Defined Data Centers - June 2012

Change is Bad

•  We are operating far too close to the hardware.
   o  Do systems administrators configure their services in the x86 BIOS? Guess what? We do.

•  Generic components decomposed into resources to consume anywhere, anytime.

•  Abstraction of Forwarding, State and Management.
   o  Forwarding: networking gear with flow tables and firmware.
   o  State: "bag of protocols" destruction.
   o  Management: orchestration, CMDB, etc. Join the rest of the data center (and world).

My Obligatory Rationalizing

Page 3: Software Defined Data Centers - June 2012

> Doh!

Jumbled protocol picture. Source: Nick McKeown, Stanford.

A Quick Recap

Page 4: Software Defined Data Centers - June 2012

•  Security Policy at the Edge.
•  Multi-Tenancy at the Edge.
•  QoS Policy at the Edge.
•  Management at the Edge.
•  Cost at the Edge.
•  Complexity at the Edge.

The Problem Has Always Been the Edge  

Page 5: Software Defined Data Centers - June 2012

Commoditization: A Collage of Disruption. Google's Pluto.

Page 6: Software Defined Data Centers - June 2012

1.  Commodity Hardware. Off-the-shelf "merchant silicon." If all vendors are using the same pieces and parts, where is the value?

•  "We want to create a dynamic where we have a very good base set of vendor-agnostic instructions. On the other hand, we need to give room for switch/chip vendors to differentiate." -Nick McKeown

•  "You don't have to have an MBA to realize there is a problem. We are still OK but not for very long." -Stuart Selby, Verizon

•  "When you run a large data center it is cheaper per unit to run a large thing rather than a small thing; unfortunately in networking that's not really true." -Urs Hölzle, Google

•  “Work with existing silicon today; tomorrow may bring dedicated OpenFlow silicon.” -David Erickson

•  "The path to OpenFlow is not a four lane highway of joy and freedom with a six pack and a girl in the seat next to you; it's a bit more complex and a little hard to say how it will work out, but I'd be backing OpenFlow in my view." -Greg Ferro, etherealmind.com

   

What Changed? Why Now? #1 HW Commoditization  

Page 7: Software Defined Data Centers - June 2012

Multi-Tenancy: FlowVisor, OpenFlow Controller

Physical Network Infrastructure: Routers, Switches, RIB, LIB, TCAM, Memory, CPU, ASIC.

Virtualization

Hypervisors: VMware, Hyper-V, KVM, Xen. x86 Instruction Set.

Physical Server Infrastructure: Servers, CPU, Memory, Disk, NIC, Bus. Physical HW.

Slices: SDN Network / VM Farms Today. Windows Slices, General Purpose Slice, Research Slices, Secure Network Slice.

Not New Ideas  

Page 8: Software Defined Data Centers - June 2012

Flow table examples (Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action):

Flow Switching:  port3 | 00:20.. | 00:1f.. | 0800 | vlan1 | 1.2.3.4 | 5.6.7.8 | 4 | 17264 | 80 | port6
Firewall:        *     | *       | *       | *    | *     | *       | *       | * | *     | 22 | drop
Routing:         *     | *       | *       | *    | *     | *       | 5.6.7.8 | * | *     | *  | port6
VLAN Switching:  *     | *       | 00:1f.. | *    | vlan1 | *       | *       | * | *     | *  | port6, port7, port9

Conceptually Simple, Yet Powerful
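To make the table concrete, here is a minimal sketch (not from the deck) of pushing the firewall and routing rows above into an OpenFlow-speaking switch with the standard ovs-ofctl CLI, driven from Python; the bridge name br0 and output port number are assumptions.

import subprocess

def add_flow(bridge, flow):
    # Shell out to the stock Open vSwitch CLI to program one OpenFlow rule.
    subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

# Firewall row: match any TCP traffic to port 22, action = drop.
add_flow("br0", "tcp,tp_dst=22,actions=drop")

# Routing row: match IP destination 5.6.7.8, action = output on port 6.
add_flow("br0", "ip,nw_dst=5.6.7.8,actions=output:6")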

Page 9: Software Defined Data Centers - June 2012

Southbound API (x86 'like', or a HAL)

Northbound API (POSIX, REST, JSON)

SDN Stack: Applications/Policy; Controllers/Slicing; Hardware/Firmware/vSwitch.

Operating System Abstraction Layers: Applications; Kernel/OS/Hypervisor; Memory, Device, CPU; Firmware.

Abstraction: Abacus, Mainframe, x86 Win32.
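As an illustration of what a northbound REST call can look like, the sketch below asks a Floodlight-style controller which switches it knows about; the host, port and endpoint path are assumptions based on Floodlight's REST API of that era, not something the deck defines.

import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"   # assumed controller address and REST port

# Assumed Floodlight-era endpoint listing connected OpenFlow switches.
with urllib.request.urlopen(CONTROLLER + "/wm/core/controller/switches/json") as resp:
    print(json.dumps(json.load(resp), indent=2))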

Page 10: Software Defined Data Centers - June 2012

Campus Core with Distribution layers. Wireless Controllers Apply Policy Centrally. Access Points.

Enterprise Wireless Today

Page 11: Software Defined Data Centers - June 2012

Campus Core with Distribution layers.

Enterprise Wireless at Larger Scale Today

Distributed Controllers in the same Administrative Domain

Page 12: Software Defined Data Centers - June 2012

Campus Core with Distribution layers. SDN/OF x86 Controllers Apply Policy Centrally. Edge Switches.

Decoupled Control Plane (NOS)

Page 13: Software Defined Data Centers - June 2012

Policy Application in Wired Networks

•  Decoupling the control plane != distributed systems in networks go away.

•  The problem is a distributed systems theory problem, managed in software independent of hardware.

•  We centralize the control plane in traditional hierarchical campus architectures today, in big expensive chassis.

Distributed SDN/OF Controllers

Page 14: Software Defined Data Centers - June 2012

Campus Core with Distribution layers. Edge Switches.

•  The alternative for applying policy is business as usual: un-scalable and cost-prohibitive bumps in the wire.

•  NAC at scale is even more mythical than BYOD and SDN.

The Alternative is More of the Same

Page 15: Software Defined Data Centers - June 2012

Ingress Port | Ether Src | Ether Dst | Ether Type | VLAN ID | IP Dst         | IP Src | TCP Dst | TCP Src | IP Proto
Port 0/1     | *         | *         | *          | *       | *              | *      | 80      | *       | *
*            | *         | *         | *          | *       | 192.168.1.0/20 | *      | *       | *       | *
*            | *         | *         | *          | *       | 192.168.1.0/24 | *      | 25      | *       | *
Port 0/3     | *         | *         | *          | *       | 192.168.1.1/32 | *      | *       | *       | *
Port 0/5     | *         | *         | *          | *       | 172.24.16.5/32 | *      | 80      | *       | *

TCAM Lookup and Action Buckets:
•  Packet-in with a match in TCAM: action is forward to port 0/2 (send packet to port 0/2, held in (n)RAM).
•  Packet-in with NO match in TCAM: action is punt to the controller (send packet to controller).

Open vSwitch – Scale HW vs. SW

•  VM rack density East-West traffic could be problematic for a general purpose top of rack.

•  100K+ entries in a rack is unrealistic in HW today.

Page 16: Software Defined Data Centers - June 2012

Where Do You Start? (Physical Devices)

Ships in the night:
•  Whatever your method of virtualization or overlay network, work it into the hardware lifecycle. The idea is to create a test bed and migrate as warranted.
•  Virtualization today is done in segments or layers rather than as a complete network solution.
•  Most vendors' early OpenFlow-enabled code supports hybrid adoption.

Lambdas / Physical Overlay / VLANs or VRFs

Page 17: Software Defined Data Centers - June 2012

Ez Deployment Scenario  

Diagram: a traditional network with a physical switch running an OF agent (ports 1, 10, 11, 24), an OF controller (POX, Floodlight, etc.) reached via an access port, a traditional Layer 3 gateway, Host A on VLAN 10 (10.100.1.10/24), Host B on VLAN 20 (10.200.1.10/24), and an 802.1Q trunk or (M)LAG group.

New Flow Processing -- struct ofp_packet_in (POX L2 learning algorithm); a condensed sketch follows below.

   1. Update the source address in the (T)CAM or software tables.
   2. Is the destination address an LLDP Ethertype or a bridge-filtered MAC? Drop, forward to the controller, or even hand off to STP. LLDP may be used to build a topology (important for the future).
   3. Is it multicast? Yes: flood.
   4. Is the destination address in the port MAC address table? If not, flood.
   5. Is the output port the same as the input port? Drop, to prevent loops.
   6. Install a flow and forward the buffered and subsequent packets.
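The numbered steps map fairly directly onto a PacketIn handler. Below is a condensed, hypothetical sketch loosely modeled on POX's l2_learning component; the class layout and calls mirror the POX 0.x API as I recall it and should be treated as assumptions, not the deck's own code.

from pox.core import core
import pox.openflow.libopenflow_01 as of

class LearningSwitch(object):
    def __init__(self, connection):
        self.connection = connection
        self.mac_to_port = {}            # step 1: software analogue of the (T)CAM table
        connection.addListeners(self)

    def _handle_PacketIn(self, event):
        packet = event.parsed
        # 1. Learn/update the source address against the ingress port.
        self.mac_to_port[packet.src] = event.port

        # 2. LLDP frames would be handed to a topology module here (omitted).
        if packet.type == packet.LLDP_TYPE:
            return

        # 3./4. Flood multicast frames and unknown destinations.
        if packet.dst.is_multicast or packet.dst not in self.mac_to_port:
            msg = of.ofp_packet_out()
            msg.data = event.ofp
            msg.in_port = event.port
            msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
            self.connection.send(msg)
            return

        out_port = self.mac_to_port[packet.dst]
        # 5. Same in/out port would loop; drop.
        if out_port == event.port:
            return

        # 6. Install a flow and forward the buffered packet plus subsequent ones.
        msg = of.ofp_flow_mod()
        msg.match = of.ofp_match.from_packet(packet, event.port)
        msg.actions.append(of.ofp_action_output(port=out_port))
        msg.data = event.ofp
        self.connection.send(msg)

def launch():
    def start_switch(event):
        LearningSwitch(event.connection)
    core.openflow.addListenerByName("ConnectionUp", start_switch)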

Page 18: Software Defined Data Centers - June 2012

What Changed? Why Now? #2 The Data Center

•  "The network is in my way." -James Hamilton, Amazon

•  "Networking is complex because the appropriate abstractions have not yet been defined." -A Case for Expanding OpenFlow/SDN Deployments On University Campuses

•  "If you look at the way things are done today, it makes it impossible to build an efficient cloud. If you think about the physical network, because of things like VLAN placements, you are limited on where you can place workloads. So even without thinking about the application at all, there are limits on where you can place a VM because of capacity issues or because of VLAN placement issues." -Martin Casado

•  The tools we have today for automation: snmp, netconf, subprocess.Popen (Python), Net::Telnet (Perl), #!/bin/bash, autoexpect, etc. (A sketch of that style of automation follows.)
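The last bullet is worth making concrete: most of today's automation amounts to screen-scraping a device CLI. A minimal, hypothetical sketch; the hostname, user and command are placeholders.

import subprocess

# Drive a switch CLI over SSH and scrape the text that comes back.
proc = subprocess.Popen(
    ["ssh", "admin@switch1.example.net", "show running-config"],
    stdout=subprocess.PIPE,
)
config, _ = proc.communicate()
print(config.decode())   # parse/grep the raw text from here on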

Page 19: Software Defined Data Centers - June 2012

What Changed? #2 The Data Center

•  Public Cloud Scale.
•  VID limitations: ~4094 tags (see the arithmetic below this list).
•  ¼ of servers are virtualized.
•  Customers want flat networks, but they do not scale.
•  Complexity in the network substrate to support bad application design.
•  Required: flexible & open APIs to consume network resources.
•  East-West policy application.
•  East-West BW consumption.
•  L2 multi-tenancy.
•  Hypervisor agnostic.
•  VM port characteristic mobility.
•  Traffic trombone for policy.
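The VID limit comes straight from the 12-bit 802.1Q VLAN ID field; the comparison with VXLAN's 24-bit VNI is my addition, not the deck's.

# Why ~4094 tags: the 802.1Q VLAN ID is 12 bits, minus the two reserved values.
vlan_ids = 2**12 - 2     # 4094 usable VLAN tags
vxlan_vnis = 2**24       # VXLAN's 24-bit VNI: ~16.7 million segments
print(vlan_ids, vxlan_vnis)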

Diagram: VM1-VM4 on ports 1-4, an Open vSwitch & hypervisor on physical x86 hardware, attached to the physical network where policy is applied.

The Edge Needs to Be Smarter but also manageable: Below is neither

Page 20: Software Defined Data Centers - June 2012

Virtual Switching (Open vSwitch)

Diagram: VM1-VM4 on ports 1-4 attached to the Open vSwitch.

% ovs-appctl fdb/show br0
 port  VLAN  MAC                Age
    0     0  00:0f:cc:e3:0d:d8    6
    1     0  00:50:56:25:21:68    1
    2     0  10:40:f3:94:e0:82    1
    3    10  10:40:f3:94:e0:82    1
    4    10  00:0f:cc:e3:0d:d8    1

Physical    x86  Hardware  

Open  vSwitch  &  Hypervisor  

•  Security: VLAN Layer 2 isolation, traffic filtering.
•  QoS: traffic queuing and shaping.
•  Monitoring: NetFlow, sFlow, SPAN, RSPAN.
•  Control: OpenFlow or next-gen (a minimal sketch follows this list).
•  Features: bonding, GRE, VXLAN, CAPWAP, STT, IPsec.
•  Hypervisor support: KVM, Xen, XenServer, VirtualBox.
•  Orchestrator support: OpenStack, CloudStack.
•  License: Apache 2 and GPL (upstream kernel module).
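The "Control" bullet is a one-liner in practice. A minimal sketch using the standard ovs-vsctl CLI; the bridge name and controller address are assumptions.

import subprocess

# Hand the bridge's forwarding decisions to an external OpenFlow controller.
subprocess.check_call(
    ["ovs-vsctl", "set-controller", "br0", "tcp:10.0.0.5:6633"]
)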

Physical  Network  

Page 21: Software Defined Data Centers - June 2012

Open vSwitch Forwarding

•  The first packet in a flow goes to the OVS controller (slowpath) and subsequent packets are forwarded by the OVS datapath (fastpath); a way to observe both paths is sketched at the end of this slide.

•  Underlying Open vSwitch is a flow-table forwarding model similar to that used by OpenFlow.

•  When attached to a controller, datapath decisions are determined by the OpenFlow controller.

•  Multiple tenants can share the same tunnel.
•  Actions: forward, drop, encapsulate and send to controller.

Diagram: VM 1 and VM 2 on the physical hardware/hypervisor; the first packet in a flow goes to the Open vSwitch controller (or controller x), subsequent packets take the Open vSwitch datapath.
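A small sketch of how this split can be observed on a host running Open vSwitch; both commands are standard OVS tools, and the bridge name br0 is an assumption.

import subprocess

def show(cmd):
    print("$ " + " ".join(cmd))
    subprocess.call(cmd)

show(["ovs-ofctl", "dump-flows", "br0"])   # slowpath: controller-programmed OpenFlow rules
show(["ovs-dpctl", "dump-flows"])          # fastpath: exact-match flows cached in the kernel datapath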

Page 22: Software Defined Data Centers - June 2012

 Refresh of Encapsulation

Page 23: Software Defined Data Centers - June 2012

Encapsulated in new headers, the original packet plus its headers is now payload.

New Encapsulation to Traverse Physical Net

Insert  Keys/VID  etc  here  
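Purely as an illustration (Scapy is my addition, not the deck's), the sketch below builds an outer GRE packet whose payload is the original VM frame, headers and all; every address is a made-up example.

from scapy.all import Ether, IP, GRE

# Original VM-to-VM frame (inner headers + payload).
inner = Ether(src="00:50:56:25:21:68", dst="00:0f:cc:e3:0d:d8") / \
        IP(src="192.168.1.10", dst="192.168.1.20")

# New outer headers between the physical hosts; a key/VID would be carried in the
# encapsulation header. proto=0x6558 marks transparent Ethernet bridging.
outer = Ether() / IP(src="10.100.0.1", dst="10.100.0.2") / GRE(proto=0x6558) / inner

outer.show()   # the inner frame now shows up as the GRE payload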

Page 24: Software Defined Data Centers - June 2012

x86 Box Orchestrating Tunnels

•  The Br2 interface on both hosts cannot reach the other side until the GRE tunnel is brought up.
•  Br2 = the island needing connectivity. Br1 with a GRE port will tunnel the two together.
•  The tunnel bridges the two VMs together on the same network.

Tap0 / VM-A / Br0/Eth0 on one host; Tap0 / VM-B / Eth0/Br0 on the other, joined by GRE/VXLAN/etc. (A sketch of bringing such a tunnel up appears at the end of this slide.)

 Tunneling/Overlays

Flat Layer 2 Network 10.100.0.0/16

VM-A 192.168.1.10/24 taps into a virtual bridge on Physical Host A (with virtual switch & hypervisor); VM-B 192.168.1.10/24 sits on the other physical host (with virtual switch & hypervisor).

The network substrate has no view of the VM hosts inside the tunnels traversing the network. Legacy network; controller establishing the tunnels in some fashion.
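A minimal sketch of what bringing such a tunnel up could look like with standard Open vSwitch tooling; the peer address is a made-up example, and for brevity the GRE port is attached directly to the bridge the VMs tap into rather than split across Br1/Br2 as in the slide.

import subprocess

PEER = "10.100.0.2"   # assumed address of the other physical host

def ovs(*args):
    subprocess.check_call(["ovs-vsctl"] + list(args))

# Run on one host; the other host runs the same with remote_ip pointing back.
ovs("--may-exist", "add-br", "br2")
ovs("--may-exist", "add-port", "br2", "gre0",
    "--", "set", "interface", "gre0",
    "type=gre", "options:remote_ip=%s" % PEER)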

Page 25: Software Defined Data Centers - June 2012

Physical L3 Network

Resource consumption (storage, network, compute), either local or hybrid private/public cloud. Visibility, OAM, dynamic provisioning, brokerage and analytics.

Tenancy X / Tenancy Y / Tenancy Z

SDN Overlays (GRE, STT, VXLAN) over traditional and SDN network substrates, creating dynamic network resource pools.

Data Center Overlays
•  Early SDN adoptions are happening today in data centers, discretely decoupling the virtual from the physical.

Where do we terminate tunnel endpoints for de-encapsulation: HW or SW?

Page 26: Software Defined Data Centers - June 2012

Evolution or Re-Invention?  

3-Tier, North-South 90/10 -> 2-Tier Flat (TRILL/SPB/MPLS), North-South 75/25 -> Software Defined?

Page 27: Software Defined Data Centers - June 2012

Leveraging overlays with VXLAN/(NV)GRE/CAPWAP to create one flat network across a Layer 3 network (e.g. carrier MPLS/VPN, Internet) and an L3-segmented data center: Data Center West segment, Data Center East segment, disaster recovery warm/hot site, cloud-provided elastic compute. Tenancy X / Tenancy Y / Tenancy Z.

Does This Make Sense?

Page 28: Software Defined Data Centers - June 2012

OpenStack Vlan Networking

Page 29: Software Defined Data Centers - June 2012

Hybrid Cloud Look

How Public Cloud Feels: Internets -> VM Networks.

How it really is: public and private IP addresses on one NIC; a public cloud spoke; a controller doing dnsmasq and iptables masquerading (aka router, switch and firewall) in front of the VM instances; the Internets on the outside.

Page 30: Software Defined Data Centers - June 2012

 Hybrid Cloud - IMO Not as Bad as It Looks, this exists today in most DCs

Public  and  Private  On  one  Nic  

Spoke   Spoke  

Private  Cloud  On  Your  Network  

Hub  Gateway  

Spoke   Spoke  

Internet  

Page 31: Software Defined Data Centers - June 2012

Creates  One  Network  and  Hybrid  Cloud  

Internets  

Private  Cloud  On  Your  Network  

Public  Cloud  Spoke  

Encapsulated tunnels the network is unaware of.

An x86 node can aggregate the tunnel endpoints, hub and spoke; the alternative would be a full mesh. Policy could be applied centrally there.

Hub  Gateway  

Spoke   Spoke  

Tunneling & Hybrid Cloud

Page 32: Software Defined Data Centers - June 2012

Internets  

Private  Cloud  On  Your  Network  

Public  Cloud  Spoke  

Hub  Gateway  

Spoke   Spoke  

De-Dup Policy is the best Reason for Tunnels

•  Leverage existing centralized policy application and orchestration.

•  That all said, sending the client directly to a cloud provider outside of a tunnel via the Internet is by far the easiest and most scalable solution.

Crypto,  IDS/IPS,  Firewall  etc.  

Page 33: Software Defined Data Centers - June 2012

Leverage regional & super-regional statewide networks and open peerings to cloud providers. xAAS driven as a commodities market through emerging open API standards. Programmability should enable efficiency in usage and allow for time sharing via orchestration.

OpenStack resources either local, with the ability to leverage hybrid private/public cloud offerings based on the best market price that year, month, maybe even day, depending on the elasticity and flexibility to move workloads. Also balancing workloads amongst each other through scheduling, predictive analysis and magic. Tenants would be any community anchor, state, city, education non-profit, etc.

Option 1: General Internet1 best-effort connectivity through commodity I1 drains, like a Cogent for example, at roughly 25-50 cents per Mb. Capture that service level as a lower-tier SLA, priced significantly lower. Primary option.

Option 2: Dedicated peerings from any tenant node, to a colo, into the super-regional; anyone selling resource pools with open APIs: Rackspace, HP, Dell, Piston Cloud. Companies whose end game is not 100% cloud.

Option 3: Internet2; ideally begin leveraging their peerings and colos globally for a broader net to capture competitively priced resources.

Public Cloud: The Internet will be the new LAN

Page 34: Software Defined Data Centers - June 2012

Quantum: The OpenStack Network Plugin

Page 35: Software Defined Data Centers - June 2012

Cisco Nexus Plugin Snippet

Page 36: Software Defined Data Centers - June 2012

Nicira NVP Plugin

Page 37: Software Defined Data Centers - June 2012

FloodLight Rest Proxy

Page 38: Software Defined Data Centers - June 2012

Challenges

•  The idea that all networks should be built like the Internet is getting in the way. I have to build campus and healthcare networks in the same manner as Tier 1 service providers build their networks, and we build our regional SP network in the state, to find reliability and scale. Time and space should be taken into account.

•  The Internet will be the LAN in the next decade. The carriers, LECs and cable companies are not investing the capital necessary to scale.

•  Distribution of state and mapping of elements in a decoupled control plane. This has been solved in systems today, e.g. Hadoop / HDFS. Tracking millions of elements over some distributed controllers and maintaining state is well within the realm of reality.

•  "Big old tech companies are often incapable of investing in new ideas because they're addicted to the revenue streams from their current businesses and don't want to disrupt those businesses." -Marc Andreessen, Silicon Valley VC firm

•  If providing IaaS, understand self-provisioning and customer experience.

Page 39: Software Defined Data Centers - June 2012

•  Get involved with test beds with the community and vendors.
•  Thought leadership and knowledge transfer.
•  Dip your toes in the public and private cloud.
•  Installers for local OpenStack instances @ http://www.rackspace.com/ & https://airframe.pistoncloud.com/
•  Participate in the Internet2 working group. Co-chairs Dan Schmeidt [email protected] & Deniz Gurkan [email protected]
•  http://incntre.iu.edu/openflow IU OpenFlow in a Day class.
•  Networking vendors need to recognize the role reversal in networking that has occurred (thanks in large part to R&E and DIYers). Just as in the x86 market, it is consumer driven. The legacy echo chambers product managers live in are no longer acceptable.
•  Mimic what the software industry perfected: openness and communities of practice.

 How To Get Involved

Page 40: Software Defined Data Centers - June 2012

Brent’s Bookmarks – Comments, Questions, Nerd Rage?

•  http://ioshints.info
•  http://etherealmind.com
•  http://nerdtwilight.wordpress.com/
•  http://networkheresy.com/
•  http://www.openflowhub.org/ (Floodlight)
•  http://www.noxrepo.org/ (POX)
•  The first 10 minutes of McKeown's presentation are for anyone with "manager" in their title, not to mention it brings tears to my eyes.
•  http://www.youtube.com/watch?v=W734gLC9-dw (McKeown)
•  http://packetpushers.net
•  http://www.codybum.com
•  http://www.rackspace.com (Rackspace OpenStack Private Cloud build)
•  http://www.networkworld.com/community/fewell
•  http://networkstatic.net/ My Ramblings
•  irc.freenode.net #openflow #openvswitch #openstack

Page 41: Software Defined Data Centers - June 2012

 

Closing - Comments, Questions, Nerd Rage?