OpenStack Networking: Overview of the networking challenges and solutions in OpenStack
Yves Fauser
Network Virtualization Platform System Engineer @ VMware
OSDC 2014, Berlin, 08-10.04
The perfect storm
http://en.wikipedia.org/wiki/File:Hurricane_Isabel_from_ISS.jpg
The perfect storm
• Very feature-rich vSwitch (tunneling, QoS, monitoring & management, automated control through OpenFlow and OVSDB)
• Part of the Linux kernel since 3.3
• OpenFlow and OVSDB (RFC 7047) are used between Open vSwitch and external controllers
• Numerous open-source and commercial controllers have emerged in the last years
• Examples: NOX, Beacon, Floodlight, OpenDaylight, VMware NSX, Big Switch, NEC, etc.
• OpenStack drives the need for flexible and fast network deployment models
• The OpenStack Neutron project offers a network abstraction that enables open-source projects and commercial implementations to innovate with and for OpenStack
Open vSwitch
Open vSwitch Features vs. Linux Bridge

Feature                                                  Open vSwitch         Linux Bridge
MAC learning bridge                                      X                    X
VLAN support (802.1Q)                                    X (native in OVS)    using 'vlan'
Static link aggregation (LAG)                            X (native in OVS)    using 'ifenslave'
Dynamic link aggregation (LACP)                          X (native in OVS)    using 'ifenslave'
MAC-in-IP encapsulation (GRE, VXLAN, …)                  X (native in OVS)    VXLAN support in 3.7 kernel + iproute2
Traffic capturing / SPAN (RSPAN with encap. into GRE)    X (native in OVS)    using advanced traffic control
Flow monitoring (NetFlow, sFlow, IPFIX, …)               X (native in OVS)    e.g. using ipt_netflow
External management interfaces (OpenFlow & OVSDB)        X                    –
Multiple-table forwarding pipeline with flow caching     X                    –
Performance improvements (e.g. RSS support)              X                    –

http://openvswitch.org/features/
https://github.com/homework/openvswitch/blob/master/WHY-OVS
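Several of the OVS-native features in the table above can be exercised with one-line ovs-vsctl commands. The following is only an illustrative configuration sketch (bridge names, interface names, and IPs are placeholders) and requires a host with Open vSwitch installed:

```shell
# Create a bridge and a VLAN access port (802.1Q, native in OVS)
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1 tag=100

# Dynamic link aggregation (LACP) on a two-NIC bond
ovs-vsctl add-bond br0 bond0 eth2 eth3 lacp=active

# MAC-in-IP encapsulation: a GRE tunnel port to a remote host
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=192.168.0.2

# Flow monitoring: export NetFlow records to a collector
ovs-vsctl -- set bridge br0 netflow=@nf \
    -- --id=@nf create NetFlow targets=\"192.168.0.10:5566\" active-timeout=30
```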
[Diagram: Open vSwitch (OVS) – ovsdb-server (with the config/state DB) and ovs-vswitchd run in user space on top of the kernel datapath; br-int and br-tun hold the flow tables; tunnel ports (to the Linux IP stack + routing table, 192.168.10.1) carry traffic between the WEB/APP VMs and the transport network via eth1, while eth0 serves management; OVS is driven through a configuration data interface (OVSDB, CLI, …) and a flow data interface (OpenFlow, CLI, …)]
[Diagram: Open vSwitch with a controller cluster – the same node layout as before (br-int and br-0 flow tables, ovsdb-server, ovs-vswitchd, Linux IP stack + routing table), but flows and tunnel ports are now programmed by an external controller cluster, which connects to each OVS instance via OpenFlow (TCP 6633) and OVSDB (TCP 6632) across the transport network]
Common misconceptions with regards to controllers

§ Misconception 1) Traffic will flow through the controller cluster until a specific flow is installed in the switch through OpenFlow
§ It depends!
§ Most architectures don't send any traffic to the controller (e.g. VMware NSX doesn't do it)
§ In some architectures, where address space is limited (e.g. CAM/TCAM in low-end ToR switches), the controller gets the first few data packets and then installs a flow in the hardware. This is usually not the case when controlling OVS, as OVS holds the tables in the hypervisor's memory (and there is plenty!)
§ Misconception 2) The controller is a single point of failure
§ Controllers are usually deployed as scale-out clusters
§ Depending on the chosen architecture, even a complete controller cluster outage doesn't affect traffic forwarding
OpenFlow and Controller-based Networks

Multiple incarnations of SDN
So what is SDN? It depends on where you stand!
http://upload.wikimedia.org/wikipedia/commons/f/f8/Blind_men_and_elephant3.jpg
SDN defined – Control / Data plane separation

[Diagram: traditional switch – the data plane is hardware-specific and bound by ASIC/TCAM limits in physical devices; the control plane runs distributed protocols (OSPF, STP, etc.) and populates the data plane with forwarding entries over an internal API]

§ The core concept of OpenFlow is control and data plane separation
§ There are heated debates whether "hybrid" approaches qualify as "real SDN"
§ The purist point of view is: without the clear separation of control and data plane, one should not call a solution an "SDN solution"
SDN defined – Control / Data plane separation (cont.)

[Diagram: with OpenFlow, the control plane moves into a central controller – the data plane stays hardware-specific (bound by ASIC/TCAM limits in physical devices), while the controller centrally manages the forwarding tables and populates the data plane with forwarding entries, using OpenFlow as an external "southbound" interface]
SDN Controllers "Landscape" (incomplete list)

Open-source controllers:
• NOX (http://www.noxrepo.org) – C++ and Python controllers open-sourced by Nicira; NOX was the first controller in the 'market'
• Beacon (https://openflow.stanford.edu/display/Beacon/Home) – first Java-based controller; basis of Floodlight
• Floodlight (http://www.projectfloodlight.org) – Java-based controller, backed by Big Switch Networks engineers; focused on enabling 'apps' to evolve independently of the control plane function
• OpenDaylight (http://www.opendaylight.org) – Java-based controller; "community-led, open, industry-supported framework"

Commercial controllers:
• VMware NSX – commercial continuation of NOX with a focus on "network virtualization" using overlays
• Big Switch Networks – commercial version of the Floodlight controller, with a focus on OpenFlow-controlled switch fabrics
• NEC, etc.

And a lot more @: http://yuba.stanford.edu/~casado/of-sw.html
Network Virtualization, an "SDN Application"
What are the key components of network virtualization?
Network Virtualization – A technical definition

Network virtualization is:
§ A reproduction of physical networks:
  § Q: Do you have L2 broadcast / multicast, so apps do not need to be modified?
  § Q: Do you have the same visibility and control over network behavior?
§ A fully isolated environment:
  § Q: Could two tenants decide to use the same RFC 1918 private IP space?
  § Q: Could you clone a network (IPs, MACs, and all) and deploy a second copy?
§ Physical network location independent:
  § Q: Can two VMs be on the same logical L2 network while in different physical L2 networks?
  § Q: Can a VM migrate without disrupting its security policies, packet counters, or flow state?
§ Physical network state independent:
  § Q: Do physical devices need to be updated when a new network/workload is provisioned?
  § Q: Does the application depend on a vendor-specific feature in the physical switch?
  § Q: If a physical device died and was replaced, would application details need to be known?
§ Network virtualization is NOT: running network functionality in a VM (e.g., a router or load-balancer VM)
OpenStack Projects & Networking

Some of the integrated (aka 'core') projects:
• Compute (nova)
• Network (neutron) – provides network connectivity
• Image repo (glance) – provides images
• Object Storage (swift) – stores images as objects
• Block Storage (cinder) – provides volumes
• Identity (keystone) – provides authentication and service catalog for the other projects
• Dashboard (horizon) – provides the UI for the other projects
OpenStack Networking before Neutron

[Diagram: nova architecture – nova-api (OS, EC2, Admin), nova-console (vnc/vmrc), nova-scheduler, nova-cert, nova-consoleauth and nova-metadata communicate via the queue and the Nova DB; nova-compute drives the hypervisor (KVM, Xen, etc.) through libvirt, XenAPI, etc.; nova-network talks to the network providers (Linux bridge or OVS with brcompat, dnsmasq, iptables) and nova-volume to the volume provider (iSCSI, LVM, etc.)]

§ Nova has its own networking service, nova-network. It was used before Neutron
§ Nova-network is still present today, and can be used instead of Neutron
§ Nova-network does:
  § base L2 network provisioning through Linux bridge (brctl)
  § IP address management for tenants (in the SQL DB)
  § configure DHCP and DNS entries in dnsmasq
  § configure firewall policies and NAT in iptables (nova-compute)
§ Nova-network only knows 3 basic network models:
  § Flat & Flat DHCP – direct bridging of the instance to the external ethernet interface, with and without DHCP
  § VLAN-based – every tenant gets a VLAN, DHCP enabled
Nova-Networking – Drawbacks that led to developing Neutron

§ Nova-networking is missing a well-defined API for consuming networking services (a tenant API for defining topologies and addresses)
§ Nova-networking only allows the 3 simple models Flat, Flat/DHCP and VLAN/DHCP, all of which are limited in scale and flexibility – e.g. the max. 4094 VLAN ID limit
§ Closed solution: no ability to use network services from 3rd parties and/or to integrate with network vendors or overcome the limitations of nova-network
§ No support for:
  § advanced Open vSwitch features like network virtualization (IP tunnels instead of VLANs)
  § multiple user-configurable networks per project
  § user-configurable routers (L3 devices)
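The scale limit called out above follows directly from the header fields: an 802.1Q VLAN ID is 12 bits wide (with two reserved values), while the VXLAN VNI used by overlay plugins is 24 bits (and the GRE key even 32 bits). A quick back-of-the-envelope check:

```shell
# 802.1Q carries a 12-bit VLAN ID; IDs 0 and 4095 are reserved
vlan_segments=$(( (1 << 12) - 2 ))

# A VXLAN VNI is 24 bits wide
vxlan_segments=$(( 1 << 24 ))

echo "VLAN segments:  ${vlan_segments}"    # 4094
echo "VXLAN segments: ${vxlan_segments}"   # 16777216
```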
OpenStack Neutron – Plugin Concept

Neutron Service (Server) – exposes the Neutron Core API:
• L2 network abstraction definition and management, IP address management
• Device and service attachment framework
• Does NOT do any actual implementation of the abstraction

Vendor/User Plugin – sits behind the Plugin API:
• Maps the abstraction to an implementation on the network (overlay, e.g. NSX, or physical network)
• Makes all decisions about *how* a network is to be implemented
• Can provide additional features through Neutron API extensions
• Extensions can either be generic (e.g. L3 router / NAT) or vendor-specific; extension API implementation is optional

Core and service plugins
§ The core plugin implements the "core" Neutron API functions (L2 networking, IPAM, …)
§ Service plugins implement additional network services (L3 routing, load balancing, firewall, VPN)
§ Implementations might choose to implement relevant extensions in the core plugin itself
[Diagram: possible plugin combinations for the Core, L3 and FW functions of the Neutron Core API – a single core plugin can serve all three; or a core plugin covers Core + L3 with a separate FW plugin; or Core, L3 and FW are each handled by their own plugin]
OpenStack Neutron – Plugin locations

# cat /etc/neutron/neutron.conf | grep "core_plugin"
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# cat /etc/neutron/neutron.conf | grep "service_plugins"
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

# ls /usr/share/pyshared/neutron/plugins/
bigswitch  cisco   embrane  __init__.py  metaplugin  ml2   nec     openvswitch  ryu
brocade    common  hyperv   linuxbridge  midonet     mlnx  nicira  plumgrid

# ls /usr/share/pyshared/neutron/services/
firewall  __init__.py  l3_router  loadbalancer  metering  provider_configuration.py  service_base.py  vpn
OpenStack Neutron – Modular Plugins
§ Before the modular plugin (ML2), every team or vendor had to implement a complete plugin, including IPAM, DB access, etc.
§ The ML2 plugin separates core functions like IPAM, virtual network ID management, etc. from vendor/implementation-specific functions, and therefore makes it easier for vendors not to reinvent the wheel with regards to ID management, DB access, …
§ Existing and future non-modular plugins are called "monolithic" plugins
§ ML2 calls the management of network types "type drivers", and the implementation-specific part "mechanism drivers"
[Diagram: ML2 plugin & API extensions – a type manager loads the type drivers (GRE, VLAN, VXLAN, etc.) and a mechanism manager loads the mechanism drivers (Arista, Cisco, Linux bridge, OVS, etc.)]
OpenStack Neutron ML2 – locations

# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep type_drivers
# the neutron.ml2.type_drivers namespace.
# Example: type_drivers = flat,vlan,gre,vxlan
type_drivers = gre

# cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep mechanism_drivers
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch,linuxbridge

# ls /usr/share/pyshared/neutron/plugins/ml2/drivers/
cisco        l2pop          mechanism_ncs.py  mech_hyperv.py       mech_openvswitch.py  type_gre.py    type_tunnel.py  type_vxlan.py
__init__.py  mech_agent.py  mech_arista       mech_linuxbridge.py  type_flat.py         type_local.py  type_vlan.py
Some of the plugins available in the market (1/2)
§ ML2 modular plugin
  § with support for the type drivers: local, flat, VLAN, GRE, VXLAN
  § and the following mechanism drivers: Arista, Cisco Nexus, Hyper-V agent, L2 population, Linux bridge, Open vSwitch agent, Tail-f NCS
§ Open vSwitch plugin – the most used (open-source) plugin today
  § supports GRE-based overlays, NAT/security groups, etc.
  § deprecation planned for the Icehouse release in favor of ML2
§ Linux bridge plugin
  § limited to L2 functionality, L3, floating IPs and provider networks; no support for overlays
  § deprecation planned for the Icehouse release in favor of ML2
Some of the plugins available in the market (2/2)
§ VMware NSX (aka Nicira NVP) plugin
  § network virtualization solution with a centralized controller + Open vSwitch
§ Cisco UCS / Nexus 5000 plugin
  § provisions VLANs on Nexus 5000 switches, on the UCS Fabric Interconnect, and on the UCS B-Series servers' network card (Palo adapter)
  § can use GRE and only configure OVS, but then there's no VLAN provisioning
§ NEC and Ryu plugins
  § OpenFlow hop-by-hop implementations with the NEC or Ryu controller
§ Other plugins include Midokura, Juniper (Contrail), Big Switch, Brocade, PLUMgrid, Embrane, Mellanox
§ LBaaS service plugins from A10 and Citrix
§ This list can only be incomplete; please check the latest information on:
  § https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
  § http://www.sdncentral.com/openstack-neutron-quantum-plug-ins-comprehensive-list/
New Plugins / ML2 Drivers in the Icehouse Release

§ New ML2 mechanism drivers:
  § mechanism driver for the OpenDaylight controller
  § Brocade ML2 mechanism driver for VDX switch clusters
§ New Neutron plugins:
  § IBM SDN-VE controller plugin
  § Nuage Networks controller plugin
§ Service plugins:
  § Embrane and Radware LBaaS drivers
  § Cisco VPNaaS driver
§ Various:
  § VMware NSX – DHCP and metadata service
§ This list is incomplete; please see here for more details: https://blueprints.launchpad.net/neutron/icehouse
Neutron – OVS Agent Architecture
§ The following components play a role in the OVS agent architecture:
  § Neutron-OVS-Agent: receives tunnel & flow setup information from the OVS plugin and programs OVS to build tunnels and to steer traffic into those tunnels
  § Neutron-DHCP-Agent: sets up dnsmasq in a namespace per configured network/subnet, and enters the MAC/IP combinations in the dnsmasq DHCP lease file
  § Neutron-L3-Agent: sets up iptables/routing/NAT tables (routers) as directed by the OVS plugin or the ML2 OVS mechanism driver
§ In most cases GRE or VXLAN overlay tunnels are used, but flat and VLAN modes are also possible
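The tunnel setup done by the Neutron-OVS-Agent boils down to one GRE port per remote hypervisor on br-tun, named after the peer's tunnel endpoint IP. A sketch of that naming scheme (the IPs are the example transport addresses used in the outputs later in this deck; the ovs-vsctl command is only printed here, not executed):

```shell
local_ip="172.16.0.10"                 # this node's tunnel endpoint
for remote_ip in 172.16.0.11 172.16.0.12; do
  port="gre-${remote_ip}"              # the agent names the port after the peer's IP
  echo "ovs-vsctl add-port br-tun ${port} -- set interface ${port} type=gre" \
       "options:local_ip=${local_ip},remote_ip=${remote_ip},in_key=flow,out_key=flow"
done
```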
[Diagram: OVS plugin deployment – the Neutron server with the OVS plugin drives the agents on each node; the Neutron network node runs the N.-L3-Agent (iptables/routing, NAT & floating IPs on br-ex towards the external network (or VLAN) and the WAN/Internet), the N.-DHCP-Agent (dnsmasq) and the N.-OVS-Agent (ovsdb/ovs-vswitchd with br-int and br-tun); each compute node runs nova-compute, the hypervisor with its VMs, and the N.-OVS-Agent; br-tun on all nodes is interconnected with L2-in-L3 (GRE) tunnels across the layer 3 transport network]
Using "SDN controllers" – VMware NSX plugin example

§ A centralized scale-out controller cluster controls all Open vSwitches on all compute and network nodes. It configures the tunnel interfaces and programs the flow tables of OVS
§ The NSX L3 gateway service (scale-out) takes over the L3 routing and NAT functions
§ The NSX service node relieves the compute nodes from the task of replicating broadcast, unknown unicast and multicast traffic sourced by VMs
§ Security groups are implemented natively in OVS, instead of iptables/ebtables

[Diagram: NSX plugin deployment – the Neutron server with the NVP plugin and the NSX controller cluster sit on the management network; the Neutron network node runs the N.-DHCP-Agent (dnsmasq) and ovsdb/ovs-vswitchd; each compute node runs nova-compute, the hypervisor with its VMs, and ovsdb/ovs-vswitchd with br-int and br-0; br-0 on all nodes is interconnected with L2-in-L3 (STT) tunnels across the layer 3 transport network; the NSX L3 gateway (+ NAT) and the NSX service node connect the transport network to the WAN/Internet]

https://www.flickr.com/photos/17258892@N05/2588347668/lightbox/
Thank You! And have a great conference
OSDC 2014, Berlin, 08-10.04
Backup Slides
Neutron – Agent Status § This output shows the Neutron agents' status after a base installation
# neutron agent-list
+--------------------------------------+--------------------+---------------+-------+----------------+
| id                                   | agent_type         | host          | alive | admin_state_up |
+--------------------------------------+--------------------+---------------+-------+----------------+
| 1a58601c-ff41-4dc5-914f-d37ec5761b06 | L3 agent           | os-controller | :-)   | True           |
| 416c854b-611b-42f9-b7b1-3bbe0bd840f2 | DHCP agent         | os-controller | :-)   | True           |
| 57bed0b7-55da-455a-8351-fd28e05cf1dc | Open vSwitch agent | os-controller | :-)   | True           |
| 7b1ae4e8-7bc2-480e-82a7-0eb6a02b119f | Open vSwitch agent | os-compute-1  | :-)   | True           |
| d5d27e99-ba76-4e5f-bdfe-ef7d0638a52e | Open vSwitch agent | os-compute-2  | :-)   | True           |
+--------------------------------------+--------------------+---------------+-------+----------------+
Neutron – OVS – Tunnel Structure § This output shows the OVS config on the OpenStack network node before any logical network has been configured

# ovs-vsctl show
09d5b89a-600d-4da3-b761-11206456385a
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-172.16.0.11"
            Interface "gre-172.16.0.11"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.11"}
        Port "gre-172.16.0.12"
            Interface "gre-172.16.0.12"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.12"}
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"
# Interface to first compute node
    Port "gre-172.16.0.11"
        Interface "gre-172.16.0.11"
            type: gre
            options: {in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.11"}

# Interface to second compute node
    Port "gre-172.16.0.12"
        Interface "gre-172.16.0.12"
            type: gre
            options: {in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.12"}
# Patch from the br-tun table to the br-int table
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}

# Patch from the br-int table to the br-tun table
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
Neutron – Internal Network Creation § Now we will create a logical L2 network, without any subnet assigned to it

# neutron net-create Internal-Network

Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 56a76117-8910-4d85-b91d-8e6842e0a510 |
| name                      | Internal-Network                     |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b1178a03969b4f638937f5a632fb547a     |
+---------------------------+--------------------------------------+

# neutron net-list
+--------------------------------------+------------------+---------+
| id                                   | name             | subnets |
+--------------------------------------+------------------+---------+
| 56a76117-8910-4d85-b91d-8e6842e0a510 | Internal-Network |         |
+--------------------------------------+------------------+---------+
Neutron – Internal Subnet Creation § Now we will create and attach a new subnet to the L2 network we just created

# neutron subnet-create Internal-Network --name Internal-Subnet 10.12.13.0/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field            | Value                                          |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "10.12.13.2", "end": "10.12.13.254"} |
| cidr             | 10.12.13.0/24                                  |
| dns_nameservers  |                                                |
| enable_dhcp      | True                                           |
| gateway_ip       | 10.12.13.1                                     |
| host_routes      |                                                |
| id               | b4c95b8b-65a4-402e-8359-69b55d6c9bf1           |
| ip_version       | 4                                              |
| name             | Internal-Subnet                                |
| network_id       | 56a76117-8910-4d85-b91d-8e6842e0a510           |
| tenant_id        | b1178a03969b4f638937f5a632fb547a               |
+------------------+------------------------------------------------+

# neutron subnet-list -c id -c cidr -c name
+--------------------------------------+----------------+-----------------+
| id                                   | cidr           | name            |
+--------------------------------------+----------------+-----------------+
| b4c95b8b-65a4-402e-8359-69b55d6c9bf1 | 10.12.13.0/24  | Internal-Subnet |
+--------------------------------------+----------------+-----------------+

# ip netns show
#

§ Note: The DHCP namespace will be created when the first instance boots
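The allocation pool shown above can be sanity-checked: a /24 leaves 254 usable host addresses, and Neutron keeps the gateway (10.12.13.1) out of the pool, so 10.12.13.2 through 10.12.13.254 gives 253 assignable addresses:

```shell
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))   # usable host addresses in a /24
pool=$(( hosts - 1 ))                   # minus the gateway IP
echo "usable hosts: ${hosts}"           # 254
echo "pool size:    ${pool}"            # 253
```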
Neutron – External Network Creation 1/2 § Now we will create an external network definition, and add an IP subnet and pool to it

# neutron net-create External-Net --router:external=True

Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 8998c547-ff7c-45f8-884a-a6d4bcaa5de7 |
| name                      | External-Net                         |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 2                                    |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b1178a03969b4f638937f5a632fb547a     |
+---------------------------+--------------------------------------+
Neutron – External Network Creation 2/2 § Now we will add an IP subnet and allocation pool to the external network

# neutron subnet-create External-Net 172.16.65.0/24 \
--allocation-pool start=172.16.65.100,end=172.16.65.150

Created a new subnet:
+------------------+----------------------------------------------------+
| Field            | Value                                              |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "172.16.65.100", "end": "172.16.65.150"} |
| cidr             | 172.16.65.0/24                                     |
| dns_nameservers  |                                                    |
| enable_dhcp      | True                                               |
| gateway_ip       | 172.16.65.1                                        |
| host_routes      |                                                    |
| id               | 16eb9d34-819f-4525-99ab-ec9358ea132f               |
| ip_version       | 4                                                  |
| name             |                                                    |
| network_id       | 8998c547-ff7c-45f8-884a-a6d4bcaa5de7               |
| tenant_id        | b1178a03969b4f638937f5a632fb547a                   |
+------------------+----------------------------------------------------+
Neutron – Router Creation 1/4 § Now we will create a router, and connect it to the "uplink" (external network) we created earlier

# neutron router-create MyRouter

Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | bda86e19-4831-4bfb-b3f4-bb79113ceab1 |
| name                  | MyRouter                             |
| status                | ACTIVE                               |
| tenant_id             | b1178a03969b4f638937f5a632fb547a     |
+-----------------------+--------------------------------------+

# neutron router-gateway-set MyRouter External-Net
Set gateway for router MyRouter

# neutron router-interface-add MyRouter Internal-Subnet
Added interface a86dfa2b-9ceb-43ba-90ea-fb67ef5c5d17 to router MyRouter.
Neutron – Router Creation 2/4 § The router is now connected to the external network and the internal subnet

# neutron router-show MyRouter
+-----------------------+-----------------------------------------------------------------------------+
| Field                 | Value                                                                       |
+-----------------------+-----------------------------------------------------------------------------+
| admin_state_up        | True                                                                        |
| external_gateway_info | {"network_id": "8998c547-ff7c-45f8-884a-a6d4bcaa5de7", "enable_snat": true} |
| id                    | bda86e19-4831-4bfb-b3f4-bb79113ceab1                                        |
| name                  | MyRouter                                                                    |
| routes                |                                                                             |
| status                | ACTIVE                                                                      |
| tenant_id             | b1178a03969b4f638937f5a632fb547a                                            |
+-----------------------+-----------------------------------------------------------------------------+

# neutron router-port-list MyRouter -c fixed_ips
+--------------------------------------------------------------------------------------+
| fixed_ips                                                                            |
+--------------------------------------------------------------------------------------+
| {"subnet_id": "b4c95b8b-65a4-402e-8359-69b55d6c9bf1", "ip_address": "10.12.13.1"}    |
| {"subnet_id": "16eb9d34-819f-4525-99ab-ec9358ea132f", "ip_address": "172.16.65.100"} |
+--------------------------------------------------------------------------------------+
Neutron – Router Creation 3/4 § Now that the router is created and interfaces are assigned to it, we will see a new namespace

# ip netns show
qrouter-bda86e19-4831-4bfb-b3f4-bb79113ceab1

# ip netns exec qrouter-bda86e19-4831-4bfb-b3f4-bb79113ceab1 /bin/bash

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: qg-f9d1f494-7f: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:02:9a:1c brd ff:ff:ff:ff:ff:ff
    inet 172.16.65.100/24 brd 172.16.65.255 scope global qg-f9d1f494-7f
    inet6 fe80::f816:3eff:fe02:9a1c/64 scope link
       valid_lft forever preferred_lft forever
11: qr-a86dfa2b-9c: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:7b:1a:92 brd ff:ff:ff:ff:ff:ff
    inet 10.12.13.1/24 brd 10.12.13.255 scope global qr-a86dfa2b-9c
    inet6 fe80::f816:3eff:fe7b:1a92/64 scope link
       valid_lft forever preferred_lft forever

# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.65.1     0.0.0.0         UG        0 0          0 qg-f9d1f494-7f
10.12.13.0      0.0.0.0         255.255.255.0   U         0 0          0 qr-a86dfa2b-9c
172.16.65.0     0.0.0.0         255.255.255.0   U         0 0          0 qg-f9d1f494-7f
Neutron – Router Creation 4/4 – OVS View § ovs-vsctl show will now show the tap interfaces to the router namespace and to the external interface

root@os-controller:/home/localadmin# ovs-vsctl show
09d5b89a-600d-4da3-b761-11206456385a
    Bridge br-ex
        Port "qg-f9d1f494-7f"
            Interface "qg-f9d1f494-7f"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"

    .... SNIP ....

    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-a86dfa2b-9c"
            tag: 1
            Interface "qr-a86dfa2b-9c"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.10.2"

# The external router interface is patched to br-ex, and therefore bridged out to interface eth2
# The internal router interface is patched to br-int, and therefore connected to the 'br-int' flow table
Neutron – Horizon Dashboard View
Nova – Boot two Instances § Now we will boot two 'cirros' instances, and connect them to the virtual network we created earlier

# nova boot --flavor 1 --image 'CirrOS 0.3.1' \
--nic net-id=56a76117-8910-4d85-b91d-8e6842e0a510 Instance1

+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | CirrOS 0.3.1                         |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000b                    |

... SNIP ...

# nova boot --flavor 1 --image 'CirrOS 0.3.1' \
--nic net-id=56a76117-8910-4d85-b91d-8e6842e0a510 Instance2

+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-EXT-STS:task_state                | scheduling                           |
| image                                | CirrOS 0.3.1                         |
| OS-EXT-STS:vm_state                  | building                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000c                    |

... SNIP ...
Neutron – Horizon Dashboard View
Neutron – DHCP Namespace / dnsmasq process § After the first instance was started, Neutron created the DHCP namespace and started a dnsmasq process in it

# ip netns show
qdhcp-56a76117-8910-4d85-b91d-8e6842e0a510
qrouter-bda86e19-4831-4bfb-b3f4-bb79113ceab1

# ip netns exec qdhcp-56a76117-8910-4d85-b91d-8e6842e0a510 /bin/bash

# ip addr
... SNIP ...
12: tap383cd579-5e: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:de:5f:bf brd ff:ff:ff:ff:ff:ff
    inet 10.12.13.3/24 brd 10.12.13.255 scope global tap383cd579-5e
    inet 169.254.169.254/16 brd 169.254.255.255 scope global tap383cd579-5e
    inet6 fe80::f816:3eff:fede:5fbf/64 scope link
       valid_lft forever preferred_lft forever

# ps -ef | grep dnsmasq
nobody   16209     1  0 22:29 ?        00:00:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap383cd579-5e --except-interface=lo --pid-file=/var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/host --dhcp-optsfile=/var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/opts --leasefile-ro --dhcp-range=set:tag0,10.12.13.0,static,86400s --dhcp-lease-max=256 --conf-file= --domain=openstacklocal
root     22102 15608  0 22:58 pts/0    00:00:00 grep --color=auto dnsmasq

# cat /var/lib/neutron/dhcp/56a76117-8910-4d85-b91d-8e6842e0a510/host
fa:16:3e:ee:1e:2f,host-10-12-13-2.openstacklocal,10.12.13.2
fa:16:3e:7b:1a:92,host-10-12-13-1.openstacklocal,10.12.13.1
fa:16:3e:17:75:f6,host-10-12-13-4.openstacklocal,10.12.13.4
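The --dhcp-hostsfile entries above are simple 'MAC,hostname,IP' triples. A small sketch (working on a copy of the file contents shown above) extracts the MAC-to-IP bindings that dnsmasq will hand out:

```shell
# Sample taken from the hosts file shown above
cat <<'EOF' > /tmp/dnsmasq-hosts-example
fa:16:3e:ee:1e:2f,host-10-12-13-2.openstacklocal,10.12.13.2
fa:16:3e:7b:1a:92,host-10-12-13-1.openstacklocal,10.12.13.1
fa:16:3e:17:75:f6,host-10-12-13-4.openstacklocal,10.12.13.4
EOF

# Fields are comma-separated: MAC, DHCP hostname, fixed IP
awk -F, '{ print $1 " -> " $3 }' /tmp/dnsmasq-hosts-example
```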
Neutron – Instance config file § Here's what the network part of the instance configuration for KVM looks like

-- COMPUTE NODE 1 ---

# virsh list
 Id    Name                           State
----------------------------------------------------
 6     instance-0000000b              running

# virsh dumpxml 6
<domain type='kvm' id='6'>
  <name>instance-0000000b</name>

  ... SNIP ...

  <interface type='bridge'>
    <mac address='fa:16:3e:64:20:31'/>
    <source bridge='br-int'/>
    <virtualport type='openvswitch'>
      <parameters interfaceid='32141443-073a-4be9-993b-51f3e131b037'/>
    </virtualport>
    <target dev='tap32141443-07'/>
    <model type='virtio'/>
    <alias name='net0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>

  ... SNIP ...

# Instance port id 'tap32141443-07'
Neutron – OVS view aber Instances are connected § Now let’s examine what the patches and flow tables look like on OVS axer the Instances were started
-- COMPUTE NODE 1 ---

root@os-compute-1:/home/localadmin# ovs-vsctl show
    Bridge br-int
... SNIP ...
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap32141443-07"
            tag: 6
            Interface "tap32141443-07"
    Bridge br-tun
... SNIP ...
        Port "gre-172.16.0.12"
            Interface "gre-172.16.0.12"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.11", out_key=flow, remote_ip="172.16.0.12"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-172.16.0.10"
            Interface "gre-172.16.0.10"
                type: gre
                options: {in_key=flow, local_ip="172.16.0.11", out_key=flow, remote_ip="172.16.0.10"}
    ovs_version: "1.10.2"
# Instance port id 'tap32141443-07'
# Instance port mapping into the br-int flow table
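The OVS Agent names GRE tunnel ports after the remote VTEP IP ("gre-&lt;remote_ip&gt;"), as seen in the ovs-vsctl output above. A sketch of that naming, assuming a full mesh between all hypervisor VTEPs (IP addresses copied from the output above):

```shell
# Sketch: compute the expected GRE tunnel port names for a full mesh,
# skipping the local VTEP IP (addresses copied from the output above).
local_ip="172.16.0.11"
all_vteps="172.16.0.10 172.16.0.11 172.16.0.12"
tunnels=$(for ip in $all_vteps; do
  [ "$ip" = "$local_ip" ] && continue
  echo "gre-$ip"
done)
echo "$tunnels"
```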
Neutron – OVS flows created through rootwrap by the OVS Agent § OVS flows and interfaces are created through rootwrap by the OVS Agent
-- COMPUTE NODE 1 ---

# tail -f /var/log/syslog

Apr 6 23:51:34 os-compute-1 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --may-exist add-port br-int tap60b3782b-80 -- set Interface tap60b3782b-80 "external-ids:attached-mac=\"fa:16:3e:64:20:31\"" -- set Interface tap60b3782b-80 "external-ids:iface-id=\"60b3782b-8096-497d-96a4-f3a8dc187eb6\"" -- set Interface tap60b3782b-80 "external-ids:vm-id=\"17f0fdee-3ecd-440f-8e77-c43d2fcda9de\"" -- set Interface tap60b3782b-80 external-ids:iface-status=active

Apr 6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-ofctl', 'mod-flows', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=1,table=21,dl_vlan=6,actions=strip_vlan,set_tunnel:1,output:3,output:2'] (filter match = ovs-ofctl)

Apr 6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-ofctl', 'add-flow', 'br-tun', 'hard_timeout=0,idle_timeout=0,priority=1,table=2,tun_id=1,actions=mod_vlan_vid:6,resubmit(,10)'] (filter match = ovs-ofctl)

Apr 6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-vsctl', '--timeout=2', 'set', 'Port', 'tap60b3782b-80', 'tag=6'] (filter match = ovs-vsctl)

Apr 6 23:51:37 os-compute-1 ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port tap60b3782b-80 tag=6

Apr 6 23:51:37 os-compute-1 neutron-rootwrap: (root > root) Executing ['/usr/bin/ovs-ofctl', 'del-flows', 'br-int', 'in_port=7'] (filter match = ovs-ofctl)
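The flow specs in the log above are plain strings handed to ovs-ofctl. A sketch reconstructing the two flow types the agent programs; the VLAN tag, tunnel key, and output port numbers are the illustrative values taken from the syslog output:

```shell
# Sketch: build the flood flow (local VLAN -> tunnel key, table 21)
# and the ingress flow (tunnel key -> local VLAN, table 2) as strings,
# using the illustrative values from the syslog output above.
lvid=6; tun_id=1
flood_flow="priority=1,table=21,dl_vlan=${lvid},actions=strip_vlan,set_tunnel:${tun_id},output:3,output:2"
ingress_flow="priority=1,table=2,tun_id=${tun_id},actions=mod_vlan_vid:${lvid},resubmit(,10)"
echo "$flood_flow"
echo "$ingress_flow"
```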
Neutron – OVS – MAC learning § OVS with the OVS Agent still uses classic MAC learning to track where each MAC address resides in the network
-- COMPUTE NODE 1 ---

# ovs-appctl fdb/show br-int
 port  VLAN  MAC                Age
    4     6  fa:16:3e:64:20:31    4
   -1     6  fa:16:3e:de:5f:bf    4

# ovs-appctl dpif/show br-int
br-int (system@ovs-system):
	lookups: hit:1461 missed:343
	flows: cur: 0, avg: 8.634, max: 39, life span: 7746(ms)
	hourly avg: add rate: 0.654/min, del rate: 0.658/min
	overall avg: add rate: 0.775/min, del rate: 0.775/min
	br-int 65534/1: (internal)
	patch-tun 1/none: (patch: peer=patch-int)
	tap60b3782b-80 7/4:

# ovs-appctl dpif/show br-tun
br-tun (system@ovs-system):
	lookups: hit:568 missed:364
	flows: cur: 0, avg: 9.707, max: 39, life span: 5976(ms)
	hourly avg: add rate: 0.730/min, del rate: 0.731/min
	overall avg: add rate: 0.817/min, del rate: 0.817/min
	br-tun 65534/2: (internal)
	gre-172.16.0.10 2/3: (gre: key=flow, local_ip=172.16.0.11, remote_ip=172.16.0.10)
	gre-172.16.0.12 3/3: (gre: key=flow, local_ip=172.16.0.11, remote_ip=172.16.0.12)
	patch-int 1/none: (patch: peer=patch-tun)
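The fdb/show table above can be parsed to locate a MAC address. A sketch using the sample table copied from the output:

```shell
# Sketch: look up which OVS port a MAC address was learned on,
# using the fdb/show sample from the output above.
fdb=' port VLAN MAC Age
 4 6 fa:16:3e:64:20:31 4
 -1 6 fa:16:3e:de:5f:bf 4'
port=$(echo "$fdb" | awk '$3 == "fa:16:3e:64:20:31" { print $1 }')
echo "$port"   # 4
```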
Neutron – OVS – Table structure § The OVS Agent programs a complex table structure into OVS
https://wiki.openstack.org/wiki/Ovs-flow-logic
Neutron – IPTables Rules – Compute Nodes – Security Groups § The following output shows what Neutron configures in iptables on the compute node to implement security groups
-- COMPUTE NODE 1 ---

# iptables -L
... SNIP ...
Chain neutron-openvswi-i7fff0812-9 (1 references)
target     prot opt source       destination
DROP       all  --  anywhere     anywhere     state INVALID
RETURN     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
RETURN     tcp  --  anywhere     anywhere     tcp multiport dports tcpmux:65535
RETURN     icmp --  anywhere     anywhere
RETURN     udp  --  anywhere     anywhere     udp multiport dports 1:65535
RETURN     udp  --  10.12.13.3   anywhere     udp spt:bootps dpt:bootpc
... SNIP ...
Chain neutron-openvswi-o7fff0812-9 (2 references)
target     prot opt source       destination
RETURN     udp  --  anywhere     anywhere     udp spt:bootpc dpt:bootps
neutron-openvswi-s7fff0812-9  all  --  anywhere  anywhere
DROP       udp  --  anywhere     anywhere     udp spt:bootps dpt:bootpc
DROP       all  --  anywhere     anywhere     state INVALID
RETURN     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
RETURN     all  --  anywhere     anywhere

Chain neutron-openvswi-s7fff0812-9 (1 references)
target     prot opt source       destination
RETURN     all  --  10.12.13.2   anywhere     MAC FA:16:3E:43:C6:20
DROP       all  --  anywhere     anywhere
# Inbound rules to instances
# Default outbound: allow DHCP
# 'Port Security' rule: only allow the instance's MAC outbound
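The per-port chain names above follow a fixed pattern: `neutron-openvswi-` plus a direction letter (i/o/s) plus the first 10 characters of the port UUID. A sketch of that derivation (the full UUID below is hypothetical; only its `7fff0812-9` prefix appears in the output):

```shell
# Sketch: derive Neutron's per-port iptables chain names from the
# port UUID. The full UUID here is hypothetical; only the 10-character
# prefix '7fff0812-9' is visible in the output above.
port_id="7fff0812-9abc-def0-1234-56789abcdef0"
prefix=$(echo "$port_id" | cut -c1-10)
chains=$(for d in i o s; do echo "neutron-openvswi-${d}${prefix}"; done)
echo "$chains"
```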
Neutron – add floating-ip to instance § We will now add a floating-ip to an instance
# neutron floatingip-create External-Net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 172.16.65.101                        |
| floating_network_id | 8998c547-ff7c-45f8-884a-a6d4bcaa5de7 |
| id                  | 5d3a71e6-f94e-4c9f-9389-474abc559900 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 94fa9a0f01f24ba2983d06575add8764     |
+---------------------+--------------------------------------+

# nova list
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                    |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| af2d9b9f-3e25-4242-82f9-b059778cf217 | Instance1 | ACTIVE | None       | Running     | Internal-Network=10.12.13.2 |
| 2206f513-9313-4c87-be09-3cfacbc6d2a2 | Instance2 | ACTIVE | None       | Running     | Internal-Network=10.12.13.4 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+

# nova add-floating-ip Instance1 172.16.65.101
#
Neutron – add floating-ip to instance § We will now add a floating-ip to an instance
# nova show Instance1
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| status                               | ACTIVE                                              |
| updated                              | 2014-04-08T00:08:23Z                                |
| OS-EXT-STS:task_state                | None                                                |
| OS-EXT-SRV-ATTR:host                 | os-compute-1                                        |
| key_name                             | None                                                |
| image                                | CirrOS 0.3.1 (55438187-bc0e-4245-b4a7-edb338cf47bd) |
... SNIP ...
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| Internal-Network network             | 10.12.13.2, 172.16.65.101                           |
| progress                             | 0                                                   |
| OS-EXT-STS:power_state               | 1                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                |
| config_drive                         |                                                     |
+--------------------------------------+-----------------------------------------------------+
Neutron – floating-ip, router namespace § This is what a floating IP looks like in the router namespace and in IPTables
# ip netns exec qrouter-c6687e7c-ab1c-4336-ab1e-8021f9c59925 /bin/bash

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
13: qg-92d91e4c-2d: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:58:f6:2c brd ff:ff:ff:ff:ff:ff
    inet 172.16.65.100/24 brd 172.16.65.255 scope global qg-92d91e4c-2d
    inet 172.16.65.101/32 brd 172.16.65.101 scope global qg-92d91e4c-2d
    inet6 fe80::f816:3eff:fe58:f62c/64 scope link
       valid_lft forever preferred_lft forever
14: qr-8abeb2b0-a6: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:18:6e:93 brd ff:ff:ff:ff:ff:ff
    inet 10.12.13.1/24 brd 10.12.13.255 scope global qr-8abeb2b0-a6
    inet6 fe80::f816:3eff:fe18:6e93/64 scope link
       valid_lft forever preferred_lft forever
# Router IP
# Configured floating-ip
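Every floating IP shows up as an extra /32 address on the router's external (qg-) interface. A sketch that pulls the floating IPs out of an `ip addr` listing, using the sample lines copied from above:

```shell
# Sketch: extract floating IPs (the /32 secondary addresses) from an
# 'ip addr' listing of the qg- interface; sample copied from above.
ipaddr='    inet 172.16.65.100/24 brd 172.16.65.255 scope global qg-92d91e4c-2d
    inet 172.16.65.101/32 brd 172.16.65.101 scope global qg-92d91e4c-2d'
floating=$(echo "$ipaddr" | awk '$1 == "inet" && $2 ~ /\/32$/ { sub(/\/32$/, "", $2); print $2 }')
echo "$floating"   # 172.16.65.101
```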
Neutron – floating-ip, IPTables NAT § This is what a floating IP looks like in the router namespace and in IPTables
# iptables -t nat -L

... SNIP ...

Chain neutron-l3-agent-OUTPUT (1 references)
target     prot opt source        destination
DNAT       all  --  anywhere      172.16.65.101    to:10.12.13.2

Chain neutron-l3-agent-POSTROUTING (1 references)
target     prot opt source        destination
ACCEPT     all  --  anywhere      anywhere         ! ctstate DNAT

Chain neutron-l3-agent-PREROUTING (1 references)
target     prot opt source        destination
REDIRECT   tcp  --  anywhere      169.254.169.254  tcp dpt:http redir ports 9697
DNAT       all  --  anywhere      172.16.65.101    to:10.12.13.2

Chain neutron-l3-agent-float-snat (1 references)
target     prot opt source        destination
SNAT       all  --  10.12.13.2    anywhere         to:172.16.65.101

Chain neutron-l3-agent-snat (1 references)
target     prot opt source        destination
neutron-l3-agent-float-snat  all  --  anywhere  anywhere
SNAT       all  --  10.12.13.0/24 anywhere         to:172.16.65.100

Chain neutron-postrouting-bottom (1 references)
target     prot opt source        destination
neutron-l3-agent-snat  all  --  anywhere  anywhere
# floating-ip DNAT
# floating-ip SNAT
# Default SNAT for all instances
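Per floating IP, the l3-agent installs a matching DNAT/SNAT pair like the one above. A sketch building that rule pair as iptables-style strings (rule syntax simplified for illustration):

```shell
# Sketch: build the 1:1 NAT rule pair the l3-agent installs per
# floating IP, as iptables-style strings (simplified illustration).
float_ip="172.16.65.101"
fixed_ip="10.12.13.2"
dnat="-A neutron-l3-agent-PREROUTING -d ${float_ip} -j DNAT --to ${fixed_ip}"
snat="-A neutron-l3-agent-float-snat -s ${fixed_ip} -j SNAT --to ${float_ip}"
echo "$dnat"
echo "$snat"
```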