Open stack networking_101_part-2_tech_deep_dive

Transcript
Page 1: Open stack networking_101_part-2_tech_deep_dive

© 2011 VMware Inc. All rights reserved

OpenStack Networking Technical Deep Dive

Yves Fauser, System Engineer VMware NSBU

10/16/2013

Page 2: Open stack networking_101_part-2_tech_deep_dive


Agenda

§  Networking before Neutron

§  Nova-Networking modes (flat / flat-dhcp / vlan-dhcp)

§  OpenStack Networking with Neutron

§  Neutron Refresher

§  OVS Overview

§  Open Source OVS Plugin Overview

§  OVS-Plugin vs. VMware NSX Plugin

§  Nova-Metadata – Neutron Implementation

Page 3: Open stack networking_101_part-2_tech_deep_dive


Networking before Neutron

Nova-Networking modes (flat / flat-dhcp / vlan-dhcp)

Drawbacks of Nova-Networking that led to Neutron

Page 4: Open stack networking_101_part-2_tech_deep_dive


OpenStack Networking before Neutron - Refresher

[Architecture diagram: nova-api (OS, EC2, Admin), nova-console (vnc/vmrc), nova-scheduler, nova-consoleauth, nova-cert and nova-metadata communicate over the message queue and the Nova DB; nova-compute drives the hypervisor (KVM, Xen, etc.) via libvirt, XenAPI, etc.]

§  Nova has its own networking service – nova-network. It was used before Neutron

§  Nova-network is still present today, and can be used instead of Neutron

[Diagram, continued: nova-network and nova-volume, backed by network providers (Linux Bridge or OVS with brcompat, dnsmasq, iptables) and volume providers (iSCSI, LVM, etc.)]

§  Nova-network does:

§  basic L2 network provisioning through the Linux Bridge (brctl)

§  IP address management for tenants (in the SQL DB)

§  configuration of DHCP and DNS entries in dnsmasq

§  configuration of firewall policies and NAT in iptables (on nova-compute)

§  Calls to network services are done through the nova API

§  Nova-network only knows 3 basic network models (see the configuration sketch below):

§  Flat & Flat DHCP – direct bridging of instances to an external Ethernet interface, with and without DHCP

§  VLAN based – every tenant gets its own VLAN, with DHCP enabled
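As a rough illustration, the mode is selected via nova.conf flags on the nova-network host(s); the flag names below are from the Folsom/Grizzly-era nova-network and the values are only examples, so verify them against the release in use:

# /etc/nova/nova.conf (nova-network, not Neutron) – illustrative values
network_manager = nova.network.manager.FlatDHCPManager  # or FlatManager / VlanManager
flat_network_bridge = br100      # bridge the instances are patched into
flat_interface = eth1            # physical NIC bridged onto the fixed network
public_interface = eth0          # NIC used for floating-IP NAT
fixed_range = 10.0.0.0/24        # tenant ('fixed') address range
multi_host = True                # run networking services on every compute node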


Page 5: Open stack networking_101_part-2_tech_deep_dive


Nova-Networking deployment modes - Flat

§  In flat mode all VMs are patched into the same bridge (normally a Linux bridge); the sketch below shows the equivalent manual setup

§  All VM traffic is directly bridged onto the physical transport network (or a single VLAN), also known as the ‘fixed network’

§  DHCP and the default gateway are provided externally, not by OpenStack components

§  All VMs in a project are bridged to the same network; there is no multi-tenancy besides security groups (iptables rules between the VM interfaces and the bridge)
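Conceptually, what nova-network sets up in flat mode is equivalent to the following manual Linux bridge configuration (illustrative only; br100 and eth1 are example names):

# create the shared bridge and attach the physical transport NIC
brctl addbr br100
brctl addif br100 eth1
ip link set br100 up
# each instance's tap interface is then attached to the same bridge
brctl addif br100 tapXXXX   # tapXXXX: the VM's virtual interface (placeholder name)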

[Diagram: three compute nodes, each running nova-compute and a hypervisor whose VMs attach to Bridge 100 and the local IP stack; all nodes sit on a management network and a shared transport network (or VLAN); DHCP is served by an external DHCP server, and the transport network uplinks to WAN/Internet]

Page 6: Open stack networking_101_part-2_tech_deep_dive


Nova-Networking deployment modes – Flat / DHCP

§  As in flat mode, all VMs are patched into the same bridge and all VM traffic is directly bridged onto the physical transport network (or a single VLAN), also known as the ‘fixed network’

§  DHCP and the default gateway are provided by OpenStack networking itself, through dnsmasq (DHCP) and the iptables/routing stack plus NAT / floating IPs (see the NAT sketch below)

§  All VMs in a project are bridged to the same network; there is no multi-tenancy besides security groups (iptables rules between the VM interfaces and the bridge)
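For floating IPs, nova-network essentially installs 1:1 NAT rules of the following shape on the networking host (addresses are illustrative; the real rules live in nova-managed iptables chains):

# map floating IP 203.0.113.10 to the instance's fixed IP 10.0.0.3
iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 10.0.0.3
iptables -t nat -A POSTROUTING -s 10.0.0.3 -j SNAT --to-source 203.0.113.10
# instances without a floating IP are masqueraded behind the host when going out
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE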

[Diagram: three compute nodes as before; the first additionally acts as networking node, running nova-network with dnsmasq, the iptables/routing stack and NAT / floating IPs, and connects the internal network (or VLAN) to the external network (or VLAN) and on to WAN/Internet]

* With ‘multi-host’, each compute node will also be a networking node

Page 7: Open stack networking_101_part-2_tech_deep_dive


Nova-Networking deployment modes – VLAN

§  Unlike the flat modes, each project has its own network, which maps to a VLAN and a bridge; the VLAN needs to be pre-configured on the physical network

§  VM traffic is bridged onto the physical network through one bridge and one VLAN per project (see the sketch below)

§  DHCP and the default gateway are provided by OpenStack networking itself, through dnsmasq (DHCP) and the iptables/routing stack plus NAT / floating IPs
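Per project, VLAN mode effectively boils down to a VLAN subinterface plus a dedicated bridge; a hypothetical manual equivalent for one project on VLAN 30 (names are examples) would be:

# VLAN subinterface on the trunked physical NIC, one per project
ip link add link eth1 name vlan30 type vlan id 30
ip link set vlan30 up
# per-project bridge carrying that project's VMs
brctl addbr br30
brctl addif br30 vlan30
ip link set br30 up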

[Diagram: three compute nodes, each with per-project bridges (Br 30, Br 40) on top of VLAN 30 / VLAN 40 subinterfaces carried over a VLAN trunk; the first node also acts as networking node, running one dnsmasq per project plus the iptables/routing stack with NAT / floating IPs (nova-network), and uplinks via the external network (or VLAN) to WAN/Internet]

* With ‘multi-host’, each compute node will also be a networking node

Page 8: Open stack networking_101_part-2_tech_deep_dive


OpenStack Networking with Neutron

OVS Overview

OVS-Plugin vs. VMware NSX Plugin

Page 9: Open stack networking_101_part-2_tech_deep_dive


OpenVSwitch (OVS)

[Diagram: OVS on a hypervisor; in user space, ovsdb-server holds the config/state DB and ovs-vswitchd programs the kernel datapath; the VM ports (WEB/APP VMs) attach to br-int, a flow-table-driven bridge; br-tun holds the tunnel ports handed to the Linux IP stack and routing table (e.g. 192.168.10.1), reaching the transport network via eth1 while eth0 carries management; OVS exposes a configuration data interface (OVSDB, CLI, ...) and a flow data interface (OpenFlow, CLI, ...)]
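Both interfaces can be exercised directly from the command line; a few standard OVS commands illustrate the split between configuration state and flow state (bridge names follow the diagram):

# configuration data (OVSDB): bridges, ports, interfaces
ovs-vsctl show
ovs-vsctl list-ports br-int
# flow data (OpenFlow): the flow tables programmed into the bridges
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-tun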

Page 10: Open stack networking_101_part-2_tech_deep_dive


Neutron – Open Source OVS Plugin Architecture

§  The following components play a role in the open source OVS plugin architecture:

§  Neutron-OVS-Agent: receives tunnel & flow setup information from the OVS plugin and programs OVS to build the tunnels and to steer traffic into them

§  Neutron-DHCP-Agent: sets up a dnsmasq instance in a namespace per configured network/subnet, and enters the MAC/IP combinations in the dnsmasq DHCP lease file

§  Neutron-L3-Agent: sets up iptables/routing/NAT tables (routers) as directed by the OVS plugin

§  In most cases GRE overlay tunnels are used, but flat and VLAN modes are also possible (see the tunnel sketch below)
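As a hypothetical illustration of what the OVS agent programs on each node (bridge names follow the plugin's br-int/br-tun convention; the remote IP stands for a peer hypervisor's transport address):

# integration bridge (VM ports) and tunnel bridge
ovs-vsctl add-br br-int
ovs-vsctl add-br br-tun
# one GRE port per remote hypervisor; the tunnel key is set per flow
ovs-vsctl add-port br-tun gre-1 -- set interface gre-1 \
    type=gre options:remote_ip=192.168.10.2 options:key=flow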

[Diagram: the Neutron server with the OVS plugin manages a Neutron network node (N.-L3-Agent, N.-DHCP-Agent, N.-OVS-Agent, dnsmasq, iptables/routing with NAT & floating IPs, br-int/br-tun/br-ex, ovsdb-server/ovs-vswitchd) and compute nodes (nova-compute, hypervisor with VMs, N.-OVS-Agent, br-int/br-tun, ovsdb-server/ovs-vswitchd); the nodes are connected by L2-in-L3 (GRE) tunnels over a Layer 3 transport network, and the network node attaches to the external network (or VLAN) towards WAN/Internet]

Page 11: Open stack networking_101_part-2_tech_deep_dive


Open Source OVS Plugin / VMware NSX Plugin differences

§  With the VMware NSX plugin (aka NVP plugin) the following services are replaced by VMware NSX components:

§  OVS-Plugin: the OVS plugin is exchanged for the NVP plugin

§  Neutron-OVS-Agent: instead of the OVS agent, a centralized NVP controller cluster is used

§  Neutron-L3-Agent: instead of the L3 agent, a scale-out cluster of NVP Layer 3 gateways is used

§  IPTables/Ebtables: security is provided by native Open vSwitch mechanisms, controlled by the NVP controller cluster

§  GRE tunneling is exchanged for the higher-performing STT encapsulation; the tenant-facing Neutron API stays the same (see the CLI example below)
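Whichever plugin backs Neutron, the tenant workflow through the API/CLI is identical; a typical Grizzly-era sequence (network, subnet and router names such as web-net and ext-net are placeholders) might look like:

# tenant network, subnet and router – how they are realized is up to the plugin
neutron net-create web-net
neutron subnet-create web-net 10.0.1.0/24 --name web-subnet
neutron router-create router1
neutron router-interface-add router1 web-subnet
neutron router-gateway-set router1 ext-net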

[Diagram: the same topology as on the previous slide (network node with L3/DHCP/OVS agents, two compute nodes, GRE tunnels over the Layer 3 transport network), but with the Neutron server loading the NVP plugin in place of the OVS plugin]

Page 12: Open stack networking_101_part-2_tech_deep_dive


OpenVSwitch with VMware NSX

[Diagram: the same host-level OVS picture, with br-0 in place of br-tun; the flows & tunnel ports are now programmed by an external NSX controller cluster, which talks to ovs-vswitchd via OpenFlow on TCP 6633 and to ovsdb-server via OVSDB on TCP 6632; eth0 carries management traffic, eth1 the transport network]
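Attaching an OVS instance to such a controller cluster uses standard OVS commands; a sketch with a placeholder controller address (NSX normally performs this step during hypervisor registration):

# OVSDB management connection (configuration data), port 6632
ovs-vsctl set-manager ssl:192.0.2.10:6632
# OpenFlow connection for the integration bridge (flow data), port 6633
ovs-vsctl set-controller br-int ssl:192.0.2.10:6633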

Page 13: Open stack networking_101_part-2_tech_deep_dive


Open Source OVS Plugin / VMware NSX Plugin differences

§  A centralized, scale-out controller cluster controls all Open vSwitches on all compute and network nodes; it configures the tunnel interfaces and programs the OVS flow tables

§  The NSX L3 Gateway Service (scale-out) takes over the L3 routing and NAT functions

§  The NSX Service-Node relieves the compute nodes of replicating broadcast, unknown-unicast and multicast traffic sourced by VMs

§  Security groups are implemented natively in OVS instead of with iptables/ebtables; the tenant-facing API is unchanged (see the example below)
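From the tenant's point of view, security groups are still defined through the regular Neutron API; for example (group name and rule are illustrative), the plugin decides whether the policy is enforced in OVS or in iptables:

# create a security group and allow inbound HTTP to its members
neutron security-group-create web-sg
neutron security-group-rule-create --direction ingress --protocol tcp \
    --port-range-min 80 --port-range-max 80 web-sg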

[Diagram: the Neutron server now runs the NVP plugin; the network node only keeps the DHCP agent (dnsmasq) next to OVS, and the compute nodes run OVS (br-int, br-0) without per-node Neutron agents; L2-in-L3 (STT) tunnels over the Layer 3 transport network connect the nodes to each other, to the NSX L3 gateway (L3GW + NAT) towards WAN/Internet, and to the NSX Service-Node, while the NSX controller cluster manages all OVS instances over the management network]

Page 14: Open stack networking_101_part-2_tech_deep_dive


Management & Operations

§  Tunnel status

§  Port-to-port troubleshooting tool

§  Traceflow packet injection

Page 15: Open stack networking_101_part-2_tech_deep_dive


VMware NSX Port Connection Tool Demo

DEMO TIME

Page 16: Open stack networking_101_part-2_tech_deep_dive


Management & Operations – Software Upgrades

§  Automated deployment of new versions

§  Built-in compatibility verification

§  Rollback

§  Online upgrade (i.e. data-plane & control-plane services stay up)

Page 17: Open stack networking_101_part-2_tech_deep_dive


Nova Metadata Service in Folsom

§  Nova-metadata is used to enable the use of cloud-init-enabled images (https://help.ubuntu.com/community/CloudInit)

§  After getting an IP address, the instance contacts the well-known IP 169.254.169.254 via HTTP and requests the metadata it needs (see the example requests below)

•  Some of the things cloud-init configures are:

•  setting the default locale, hostname, etc.

•  setting up ephemeral mount points

•  generating SSH keys, and adding SSH keys to the user's .ssh/authorized_keys so they can log in
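An instance's view of this service, using the EC2-compatible metadata paths (run from inside the instance once it has an address):

# query the metadata service from within the instance
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/local-ipv4
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key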

§  With Neutron in Folsom, the quantum-dhcp-agent does the following:

§  provides DHCP option 121, “classless static routes”, adding a static route to 169.254.169.254 that points to the dhcp-agent host itself

[Diagram: the instance sends an HTTP request to 169.254.169.254 with next hop = the quantum-dhcp-agent IP in the tenant network; on the dhcp-agent host the request is either NATed to a local nova-metadata service or forwarded to a remote nova-metadata service]

§  iptables on the dhcp-agent host NATs the request either to the local metadata server on the dhcp-agent host, or to a remote metadata service (see the DNAT sketch below)

§  !! Caveat: in Folsom there is no support for overlapping IPs, and no namespace support when nova-metadata is used; this changes in Grizzly (see the next slide)
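The NAT step amounts to a rule of roughly this shape on the dhcp-agent host (8775 is nova-metadata's usual listening port; the destination address is a placeholder for the local or remote metadata host):

# redirect metadata traffic to the nova-metadata API
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
    -j DNAT --to-destination 192.0.2.20:8775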

Page 18: Open stack networking_101_part-2_tech_deep_dive


Nova Metadata Service in Grizzly

§  To address the limitations of nova-metadata in Folsom, the Grizzly release introduces two new services on the network node: quantum-ns-metadata-proxy and quantum-metadata-proxy (see http://tinyurl.com/a3n4ypl for details)

[Diagram: on the network node, quantum-ns-metadata-proxy runs inside the tenant router's network namespace and forwards to quantum-metadata-proxy via a UNIX domain socket; quantum-metadata-proxy sits on the management network and can reach nova-metadata and quantum-server]

§  In Grizzly, DHCP option 121 is no longer used; the L3 gateway routes the request to 169.254.169.254 to the ns-metadata-proxy

§  The ns-metadata-proxy parses the request and forwards it internally to the metadata-proxy with two new headers, ‘X-Forwarded-For’ and ‘X-Quantum-Router-ID’. These headers provide the context needed to identify the instance that made the original request; only the metadata-proxy can reach hosts on the management network

§  The metadata-proxy uses the two headers to retrieve, from the quantum-server, the device-id of the port that sent the request

§  The metadata-proxy then uses that device-id to construct the ‘X-Instance-ID’ header, and sends the request on to nova-metadata with this information included

§  Nova-metadata finally uses the ‘X-Instance-ID’ header to identify the tenant and to properly service the request (a sketch of the hops follows below)
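A hypothetical reconstruction of the first two hops, written as equivalent curl requests (the tenant IP, router UUID and socket path are purely illustrative):

# 1) what the instance sends (the L3 gateway routes this into the router namespace)
curl http://169.254.169.254/latest/meta-data/instance-id
# 2) what the ns-metadata-proxy hands to the metadata-proxy over the UNIX socket,
#    with the two context headers added (placeholder values)
curl --unix-socket /var/lib/quantum/metadata_proxy \
     -H 'X-Forwarded-For: 10.0.1.5' \
     -H 'X-Quantum-Router-ID: <router-uuid>' \
     http://169.254.169.254/latest/meta-data/instance-id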

Page 19: Open stack networking_101_part-2_tech_deep_dive