Reference Design for VMware NSX (NET4282)
Nimish Desai, VMware
Agenda
CONFIDENTIAL 2
1 Software Defined Data Center
2 Network Virtualization - NSX
3 NSX for vSphere Design and Deployment Considerations
4 Reference Designs
5 Summary and Q&A
What Is a Software Defined Data Center (SDDC)?
Abstract, pool, and automate across compute, networking and storage.
[Diagram: the data center virtualization layer separates software from hardware; intelligence moves into software; compute, network and storage become pooled, vendor-independent capacity; configuration & management shift from manual to simplified and automated; infrastructure is selected on best price/performance]
VMware NSX Momentum: Over 400 Customers
Top investment banks, enterprises & service providers
NSX Introduction
Traditional Networking Configuration Tasks

Initial configuration:
• Multi-chassis LAG
• Routing configuration
• SVIs/RVIs
• VRRP/HSRP
• STP: instances/mappings, priorities, safeguards
• LACP
• VLANs: infra networks on uplinks and downlinks

Recurring configuration:
• SVIs/RVIs and VRRP/HSRP
• Advertise new subnets
• Access lists (ACLs)
• Adjust VLANs on trunks
• VLAN STP/MST mapping
• Add VLANs on uplinks and on server ports

Configuration consistency!
How Does NSX Solve Next-Generation DC Challenges?

Security & Services: distributed FW, micro-segmentation, multifunctional Edge (stateful FW, NAT, load balancer, IPSEC/SSL), third-party integration

Flexibility & Availability: time to deploy, mobility, topology independent (L2 vs L3, services), distributed forwarding, highly available

Simplicity & Device Agnostic: IP fabric, configure once, horizontal scale, any vendor

Cloud-Centric Services: API-driven automation, CMP-integrated, self-service

NSX Platform: IP fabric, topology independent (L2 or L3)
Provides a Faithful Reproduction of Network & Security Services in Software
Management APIs, UI
Switching Routing
Firewalling
Load Balancing
VPN
Connectivity to Physical Networks
Policies, Groups, Tags
Data Security Activity Monitoring
NSX Architecture and Components

Cloud Consumption
• Self-service portal
• vCloud Automation Center, OpenStack, custom

Management Plane – NSX Manager
• Single configuration portal
• REST API entry point

Control Plane – NSX Controller
• Manages logical networks
• Control-plane protocol
• Separation of control and data plane

Data Plane
• NSX Edge and ESXi hypervisor kernel modules: logical switch, distributed logical router (with its control VM), distributed firewall
• High-performance data plane
• Scale-out distributed forwarding model

[Diagram: vCenter Server alongside the planes above; logical networks overlaid on the physical network]
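The REST API entry point above can be exercised with any HTTP client. A minimal, hedged sketch in Python follows: the manager hostname and credentials are placeholders, and the `/api/2.0/vdn/scopes` path (listing transport zones in NSX for vSphere) is shown as a typical read-only first call; verify paths against your NSX API guide.

```python
# Minimal NSX-v REST client sketch. The manager address and credentials are
# placeholders; NSX for vSphere uses HTTP basic auth and returns XML.
from base64 import b64encode

def nsx_request_headers(user: str, password: str) -> dict:
    """Build the basic-auth headers the NSX Manager API expects."""
    token = b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}", "Accept": "application/xml"}

def nsx_url(manager: str, path: str) -> str:
    """Join the manager address and an API path into a request URL."""
    return f"https://{manager}{path}"

if __name__ == "__main__":
    # Hypothetical values: replace with your NSX Manager and credentials.
    url = nsx_url("nsxmgr.corp.local", "/api/2.0/vdn/scopes")  # list transport zones
    headers = nsx_request_headers("admin", "secret")
    print(url)
    # import requests
    # resp = requests.get(url, headers=headers, verify=False)  # lab only: skips cert check
    # print(resp.status_code)
```

In production you would validate the manager's certificate instead of disabling verification.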
NSX for vSphere Design and Deployment Considerations
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
NSX is AGNOSTIC to Underlay Network Topology
L2 or L3 or Any Combination
Only TWO requirements:
• IP connectivity
• MTU of 1600 bytes
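The 1600-byte figure follows from VXLAN encapsulation overhead: the original frame gains outer Ethernet, optional 802.1Q, outer IP, UDP and VXLAN headers. A quick sanity check (header sizes per standard VXLAN framing; the code layout is only illustrative):

```python
# Why 1600 bytes: VXLAN wraps the original frame in new headers. This sketch
# sums the per-packet overhead on top of a standard 1500-byte guest MTU.
VXLAN_OVERHEAD = {
    "outer_ethernet": 14,  # new outer MAC header
    "outer_dot1q": 4,      # optional 802.1Q tag on the outer frame
    "outer_ipv4": 20,      # outer IP header between VTEPs
    "outer_udp": 8,        # UDP header
    "vxlan": 8,            # VXLAN header carrying the 24-bit VNI
}

def required_transport_mtu(guest_mtu: int = 1500) -> int:
    """Smallest transport MTU that carries an untouched guest frame."""
    return guest_mtu + sum(VXLAN_OVERHEAD.values())

print(required_transport_mtu())  # 1554
```

1500 + 54 = 1554 bytes, so the rounded-up 1600-byte recommendation leaves comfortable headroom.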
Classical Access/Aggregation/Core Network
• L2 application scope is limited to a single POD, which is also the failure domain
• Multiple aggregation modules limit the Layer 2 domain size
• VLANs are carried throughout the POD
• Unique VLAN-to-subnet mapping
• Default gateway: HSRP at the aggregation layer
[Diagram: WAN/Internet above PODs A and B; L3 above the aggregation layer, L2 below; VLAN X and VLAN Y stretched within each POD]
L3 Topologies & Design Considerations
• L3 ToR designs run a dynamic routing protocol between leaf and spine; BGP, OSPF or IS-IS can be used
• Each rack advertises a small set of prefixes (unique VLAN/subnet per rack), with equal-cost paths to the other racks' prefixes
• The ToR switch provides the default gateway service for each VLAN subnet
• 802.1Q trunks carry a small set of VLANs for VMkernel traffic
• The rest of the session assumes an L3 topology
[Diagram: WAN/Internet above a leaf-spine fabric with L3 uplinks; the VLAN boundary sits at the ToR, with 802.1Q trunks down to hypervisors 1..n]
MTU Considerations
• Arista
– L2 interfaces: by default, IP packets as large as 9214 bytes can be sent and received; no configuration is required
– L3 interfaces: by default, IP packets as large as 1500 bytes can be sent and received; configuration step: change the MTU to 9214 ("mtu 9214" interface command)
• Cisco Nexus 9000
– L2 and L3 interfaces: by default, IP packets as large as 1500 bytes can be sent and received
– Configuration steps for L2 interfaces:
• Change the system jumbo MTU to 9214 ("system jumbomtu 9214" global command); the interface MTU can only be set to the default (1500 bytes) or to the system-wide configured value
• Change the MTU to 9214 on each L2 interface ("mtu 9214" interface command)
– Configuration steps for L3 interfaces:
• Change the MTU to 9214 on each L3 interface ("mtu 9214" interface command)
• Cisco Nexus 3000 and 5000/6000
– The MTU for L2 interfaces can ONLY be changed with a "system QoS" policy
– Configuration step for L3 interfaces: change the MTU to 9214 ("mtu 9214" interface command)
Cluster Design Considerations
Organizing Compute, Management & Edge
[Diagram: WAN/Internet above a leaf-spine fabric; compute clusters and infrastructure clusters (Edge, storage, vCenter and cloud management system); the edge leaf runs L3 to the DC fabric and L2 VLANs for bridging to external networks]

Separation of compute, management and Edge functions has the following design advantages:
• Managing the life cycle of resources for compute and Edge functions
• Ability to isolate and define the span of control
• Capacity planning: CPU, memory & NIC
• Upgrade & migration flexibility
• High availability based on functional need
• Workload-specific SLAs (DRS & FT)
• Network-centric connectivity: physical/virtual, ECMP
• vMotion boundary
• Automation control over areas or functions that require frequent changes (app tiers, micro-segmentation & load balancers)

Three areas of technology require consideration:
• Interaction with the physical network
• Overlay (VXLAN) impact
• Integration with vSphere clustering
vSphere Cluster Design – Collapsed Edge/Infra Racks
[Diagram: compute racks and infrastructure racks (Edge, storage, vCenter and cloud management system) on a leaf-spine fabric; vCenter 1 and vCenter 2 each manage up to the max supported number of VMs; edge clusters and the management cluster are placed by connectivity requirements; the edge leaf runs L3 to the DC fabric and L2 VLANs for bridging to external networks; WAN/Internet]
vSphere Cluster Design – Separated Edge/Infra Racks
[Diagram: compute racks, infrastructure racks (storage, vCenter and cloud management) and dedicated edge racks (Logical Router Control VMs and NSX Edges) on a leaf-spine fabric; vCenter 1 and vCenter 2 each manage up to the max supported number of VMs; cluster location is determined by connectivity requirements; the edge leaf runs L3 to the DC fabric and L2 VLANs to external networks; WAN/Internet]
Single vCenter Design
[Diagram: management cluster (vCenter Server, NSX Manager, NSX Controllers), edge cluster (Edge and control VMs) and compute clusters A/B running web/app VMs; registration/mapping between the vCenter Server and the NSX components]
• A single vCenter Server manages all management, edge and compute clusters
• NSX Manager is deployed in the management cluster and paired with the vCenter Server
• NSX Controllers can also be deployed into the management cluster
• Reduces vCenter Server licensing requirements
• Most common in POCs or small environments
Multiple vCenters Design – Multiple NSX Domains
[Diagram: a dedicated Management VC plus one vCenter Server per NSX domain (A and B), each paired with its own NSX Manager and NSX Controller cluster and managing its own edge and compute clusters]
• This option follows the VMware best practice of having the management cluster managed by a dedicated vCenter Server (Mgmt VC)
• A separate vCenter Server in the management cluster manages the edge and compute clusters; NSX Manager is also deployed into the management cluster and paired with this second vCenter Server
• Multiple NSX Manager/vCenter Server pairs (separate NSX domains) can be deployed
• NSX Controllers must be deployed into the same vCenter Server the NSX Manager is attached to; therefore the controllers are usually also deployed into the edge cluster
[Diagram: single-rack connectivity: a routed DC fabric with one leaf pair; 802.1Q trunks from the hosts carry the VMkernel VLANs and the VLANs for management VMs. Dual-rack connectivity: the same, with each ToR in a separate rack]

Management Cluster
Deployment considerations:
• The management cluster is typically provisioned in a single rack
• The single-rack design still requires redundant uplinks from host to ToR carrying the management VLANs
• A dual-rack design increases resiliency (handling single-rack failure scenarios), which may be required for a highly available design: each ToR can be deployed in a separate rack, with host uplinks extended across the racks
• In a small design the management and edge clusters are typically collapsed
• Exclude the management cluster from VXLAN preparation
• NSX Manager and the NSX Controllers are automatically excluded from DFW functions
• Put the vCenter Server in the DFW exclusion list!
Edge cluster availability and capacity planning requires:
• A minimum of three hosts per cluster; more if ECMP-based north-south traffic bandwidth demands it
• The edge cluster can also contain the NSX Controllers and the DLR control VMs for Distributed Logical Routing (DLR)

[Diagram: single-rack and dual-rack edge connectivity: a routed DC fabric toward WAN/Internet; leaf switches carry the VMkernel VLANs plus the VLANs for L2 and L3 NSX services]

Deployment considerations: benefits of a dedicated edge rack
• Reduced need for stretching VLANs; L2 is required only for the external 802.1Q VLANs & the Edge default gateway
• L2 connectivity between active and standby in a stateful Edge design; GARP is used to announce the new MAC in the event of a failover
• Localized routing configuration for N-S traffic, reducing the need to configure and manage the rest of the spine
• Span of control for network-centric operational management, BW monitoring & features
Edge Cluster
NSX Manager – Deployment Considerations
• NSX Manager is deployed as a virtual appliance: 4 vCPU, 12 GB of RAM per node
• Consider reserving memory for VC to ensure good Web Client performance
• Modifying the appliance configuration is not supported
• Resiliency of NSX Manager is provided by vSphere HA
• Catastrophic failure of NSX Manager is rare; however, periodic backup is recommended so the last known configuration can be restored
• During a failure, all existing data-plane connectivity continues to work, since the data and management planes are separated
ToR # 1 ToR #2
Controller 2
Controller 3
NSX Mgr
Controller 1
vCenter Server
NSX Manager
NSX Controllers – Deployment Considerations
Provide control plane to distribute network information to ESXi hosts
NSX Controllers are clustered for scale out and high availability
Network information is distributed across nodes in a Controller Cluster (slicing)
Remove the VXLAN dependency on multicast routing/PIM in the physical network
Provide suppression of ARP broadcast traffic in VXLAN networks
Logical Router 1
VXLAN 5000
Logical Router 2
VXLAN 5001
Logical Router 3
VXLAN - 5002
Controller VXLAN Directory Service
MAC table
ARP table
VTEP table
NSX Controllers Functions
• Controller nodes are deployed as virtual appliances: 4 vCPU and 4 GB of RAM per node, with a CPU reservation of 2048 MHz; no memory reservation is required; modifying these settings is not supported
• Can be deployed in the Mgmt or Edge clusters
• A cluster of 3 controller nodes is the only supported configuration
• Controller majority is required for a functional controller cluster
• Data-plane activity is maintained even under complete controller cluster failure
• By default, DRS and anti-affinity rules are not enforced for controller deployment; the recommendation is to manually enable DRS and anti-affinity rules
• A minimum of 3 hosts is required to enforce the anti-affinity rule
ToR # 1 ToR #2
Controller 2
Controller 3
NSX Mgr
Controller 1
vCenter Server
NSX Controllers
VDS, Transport Zone, VTEPs,VXLAN Switching
Transport Zone, VTEP, Logical Networks and VDS
Transport Zone: a collection of VXLAN-prepared ESXi clusters
Normally a TZ defines the span of Logical Switches (Layer 2 communication domains)
A VTEP (VXLAN Tunnel EndPoint) is a logical interface (VMkernel) that connects to the TZ to encapsulate/decapsulate VXLAN traffic
The VTEP VMkernel interface belongs to a specific VLAN-backed port group, dynamically created during cluster VXLAN preparation
One or more VDS can be part of the same TZ
A given Logical Switch can span multiple VDS
[Diagram: two hosts on the VXLAN transport network, one vSphere Distributed Switch per cluster; Host 1 with VTEP1 (10.20.10.10) and VTEP2 (10.20.10.11), Host 2 with VTEP3 (10.20.10.12) and VTEP4 (10.20.10.13); VMs MAC1-MAC4 on VXLAN 5002]
vSphere Host (ESXi) VMkernel Networking
[Diagram: an L3 ToR switch with routed uplinks (ECMP) and an 802.1Q VLAN trunk to the host; the span of each VLAN is limited to the ToR:
VLAN 66 Mgmt 10.66.1.25/26, DGW 10.66.1.1
VLAN 77 vMotion 10.77.1.25/26, GW 10.77.1.1
VLAN 88 VXLAN 10.88.1.25/26, DGW 10.88.1.1
VLAN 99 Storage 10.99.1.25/26, GW 10.99.1.1
ToR SVIs: 66: 10.66.1.1/26, 77: 10.77.1.1/26, 88: 10.88.1.1/26, 99: 10.99.1.1/26]
VMkernel Networking – Multi-Instance TCP/IP Stack
• Introduced with vSphere 5.5 and leveraged by VXLAN (the NSX vSwitch transport network)
• Separate routing table, ARP table and default gateway per stack instance
• Provides increased isolation and reservation of networking resources
• Enables VXLAN VTEPs to use a gateway independent of the default TCP/IP stack
• Management, vMotion, FT, NFS and iSCSI leverage the default TCP/IP stack in 5.5
• VMkernel VLANs do not extend beyond the rack in an L3 fabric design (or beyond the cluster with an L2 fabric); static routes are therefore required for management, storage and vMotion traffic
• Host Profiles reduce the overhead of managing static routes and ensure persistence
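As a sketch of what those static routes look like in practice, the snippet below emits esxcli route commands from the example R_id addressing scheme used later in this section. The subnet scheme and the chosen traffic types are the deck's example values, not a prescription; confirm the esxcli syntax against your ESXi release before use.

```python
# Sketch: generate per-rack static routes for VMkernel traffic in an L3 fabric.
# Addressing follows this deck's example plan (10.<vlan>.<rack>.0/26 per rack,
# with the ToR SVI at .1). Purely illustrative.
def vmk_static_routes(rack_id: int, remote_racks: range) -> list:
    cmds = []
    for vlan, name in ((77, "vMotion"), (99, "Storage")):
        gateway = f"10.{vlan}.{rack_id}.1"  # local ToR SVI for this traffic type
        for r in remote_racks:
            if r == rack_id:
                continue  # no route needed for the local rack's subnet
            network = f"10.{vlan}.{r}.0/26"  # same traffic type, remote rack
            cmds.append(
                f"esxcli network ip route ipv4 add --gateway {gateway} "
                f"--network {network}  # {name} to rack {r}"
            )
    return cmds

for cmd in vmk_static_routes(rack_id=1, remote_racks=range(1, 4)):
    print(cmd)
```

Feeding the output into a Host Profile (or a host-provisioning script) keeps the routes consistent and persistent across hosts.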
L2 Fabric – Network Addressing and VLAN Definition Considerations
• For an L2 fabric, Y denotes the same subnet used across the entire cluster
• VXLAN, when deployed, creates an automatic port group whose VLAN ID must be the same per VDS
• Because the fabric is L2, the same IP subnets are usually also used across racks for a given type of traffic
• For a given host, only one VDS is responsible for VXLAN traffic; a single VDS can span multiple clusters
[Diagram: the VXLAN transport zone scope extends across ALL PODs/clusters; compute clusters A and B (32 hosts each) in PODs A and B, sharing the same VMkernel VLAN/subnet scope]
Compute Rack – IP Address Allocations and VLANs

Function    VLAN ID    IP Subnet
Management  66         10.66.Y.x/24
vMotion     77         10.77.Y.x/24
VXLAN       88         10.88.Y.x/24
Storage     99         10.99.Y.x/24
L3 Fabric – Network Addressing and VLAN Definition Considerations
• The VXLAN transport zone scope extends across ALL racks/clusters
• For an L3 fabric, the VLAN, IP address and mask values below are provided as an example; R_id is the rack number
• VXLAN, when deployed, creates an automatic port group whose VLAN ID must be the same per VDS
• Because the fabric is L3, separate IP subnets are associated with the same VLAN IDs across racks
• In an L3 fabric the VTEP IP addressing requires consideration: traditional "IP Pools" may not work well, so DHCP is recommended
[Diagram: compute clusters A and B (32 hosts each); each rack uses the same VMkernel VLANs but a unique subnet scope]
Compute Rack – IP Address Allocations and VLANs

Function    VLAN ID    IP Subnet
Management  66         10.66.R_id.x/26
vMotion     77         10.77.R_id.x/26
VXLAN       88         10.88.R_id.x/26
Storage     99         10.99.R_id.x/26
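The table above is mechanical enough to generate per rack. A small sketch using Python's ipaddress module (the first-host-as-gateway convention mirrors the SVI addressing shown earlier; purely illustrative):

```python
# Sketch: derive each rack's VMkernel subnets from the example plan above
# (VLANs 66/77/88/99 mapped to 10.<vlan>.<R_id>.0/26).
import ipaddress

FUNCTIONS = {"Management": 66, "vMotion": 77, "VXLAN": 88, "Storage": 99}

def rack_plan(rack_id: int) -> dict:
    plan = {}
    for name, vlan in FUNCTIONS.items():
        net = ipaddress.ip_network(f"10.{vlan}.{rack_id}.0/26")
        hosts = list(net.hosts())
        plan[name] = {
            "vlan": vlan,
            "subnet": str(net),
            "gateway": str(hosts[0]),        # ToR SVI takes the first host address
            "usable_hosts": len(hosts) - 1,  # remaining addresses for VMkernel ports
        }
    return plan

for name, row in rack_plan(3).items():
    print(f"{name:<11} VLAN {row['vlan']}  {row['subnet']}  gw {row['gateway']}")
```

A /26 yields 62 host addresses per rack: one for the SVI, 61 for ESXi VMkernel interfaces.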
VDS Uplink Design
• The VDS uses special port groups (called dvUplinks) for uplink connectivity
• The choice of configuration can be simplified based on the following requirements:
– Simplicity of the teaming configuration
– Bandwidth required for each type of traffic
– Convergence requirements
– Cluster usage: compute, Edge or management
– Uplink utilization factors: flow-based vs. VM-based
• LACP teaming forces all traffic types to use the same teaming mode
• For VXLAN traffic, the choice of teaming mode depends on simplicity and bandwidth requirements; LBT mode is not supported
• Separate VDS for compute and Edge allow more flexibility in uplink teaming configuration
Teaming and Failover Mode                     NSX Support  Multi-VTEP Support  Uplink Behavior (2 x 10G)
Route based on Originating Port               ✓            ✓                   Both active
Route based on Source MAC Hash                ✓            ✓                   Both active
LACP                                          ✓            ×                   Flow-based, both active
Route based on IP Hash (Static EtherChannel)  ✓            ×                   Flow-based, both active
Explicit Failover Order                       ✓            ×                   Only one link active
Route based on Physical NIC Load (LBT)        ×            ×                   ×
[Diagram: two hosts on the VXLAN transport network, one VDS per cluster; Host 1 with VTEP1 (10.20.10.10) and VTEP2 (10.20.10.11), Host 2 with VTEP3 (10.20.10.12) and VTEP4 (10.20.10.13); VMs MAC1-MAC4 on VXLAN 5002]

VTEP Design
• The number of VTEPs deployed depends on the teaming mode:
– A single VTEP for LACP and Explicit Failover
– Multiple VTEPs (based on the number of host uplinks) for the Src-ID teaming options
• A single VTEP is sufficient when:
– Workloads do not drive more than 10G of throughput
– A simple operational model is desired: all VXLAN traffic is associated with the same VTEP address
– Deterministic traffic mapping to an uplink is desired (Explicit Failover only)
• Multiple VTEPs (typically two) are required when workloads drive more than 10G of throughput
– This also allows flexibility in choosing the teaming mode for other traffic types
• IP addressing for VTEPs:
– A common VTEP subnet for an L2 fabric
– Multiple VTEP subnets (one per rack) for L3 fabrics
– IP Pools or DHCP can be used for IP address assignment
Design Considerations – VDS and Transport Zone
[Diagram: a single VXLAN transport zone spanning the management, edge and compute clusters; a Compute VDS for compute clusters 1..N (VTEPs 192.168.230.100/.101 and 192.168.240.100/.101) and an Edge VDS for the edge cluster (VTEPs 192.168.220.100/.101); the management cluster hosts the vCenter Server, NSX Manager and controller cluster; the edge cluster hosts the NSX Edges]
Recap: vCenter – Scale Boundaries
[Diagram: one vCenter Server with a DC object containing clusters (max. 32 hosts each) and VDS 1/VDS 2 (max. 500 hosts per VDS); 10,000 powered-on VMs, 1,000 ESXi hosts and 128 VDS per vCenter; DRS-based vMotion within a cluster, manual vMotion across clusters]
NSX for vSphere – Scale & Mobility Boundaries
[Diagram: a Cloud Management System spanning two NSX domains; 1:1 mapping of vCenter Server (NSX API/Manager) to controller cluster; the logical network span is defined by the transport zone; DRS-based vMotion within a cluster, manual vMotion across clusters; clusters of max. 32 hosts, max. 500 hosts per VDS]
NSX for vSphere VXLAN Replication Modes
NSX for vSphere provides flexibility for VXLAN transport and does not require complex multicast configuration on the physical network.
• Unicast Mode: all replication occurs using unicast; applicable to small deployments
• Multicast Mode: the entire replication load is off-loaded to the physical network; requires IGMP snooping/querier and multicast routing (PIM) for L3 *
• Hybrid Mode: local replication is offloaded to the physical network, while remote replication occurs via unicast; the most practical mode, without the complexity of multicast mode; only requires IGMP snooping/querier, no L3 PIM *
• All modes require an MTU of 1600 bytes
* The host provides the necessary querier function; however, an external querier is recommended for manageability/admin scope
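The mode choice reduces to what the physical fabric supports. A sketch of that decision logic (function name, inputs and the "large deployment" threshold are illustrative):

```python
# Sketch: the slide's replication-mode decision as code. Given what the
# physical fabric supports, pick a workable VXLAN replication mode.
# Mode names match NSX for vSphere; the inputs are illustrative.
def pick_replication_mode(igmp_snooping: bool, l3_pim: bool,
                          large_deployment: bool) -> str:
    if igmp_snooping and l3_pim:
        return "multicast"  # entire replication offloaded to the physical network
    if igmp_snooping:
        return "hybrid"     # local replication via L2 multicast, remote via unicast
    if not large_deployment:
        return "unicast"    # all replication in software; fine for small deployments
    raise ValueError("large deployment without IGMP snooping: enable snooping/querier")

print(pick_replication_mode(igmp_snooping=True, l3_pim=False, large_deployment=True))
```

In practice hybrid mode is the common landing spot: it offloads the heavy local replication without requiring PIM in the fabric.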
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
NSX Edge Gateway: Integrated network services
Routing/NAT
Firewall
Load Balancing
L2/L3 VPN
DHCP/DNS relay
DDI
VM VM VM VM VM
• Multi-functional & multi-use VM model; deployment varies based on use, place in the topology, performance, etc.
• Functional use: P/V routing only, LB only, perimeter FW, etc.
• Form factor: X-Large to Compact (one license)
• Stateful switchover of services (FW/NAT, LB, DHCP & IPSEC/SSL)
• Multi-interface routing support: OSPF & BGP
• Can be deployed in high-availability or standalone mode
• Per-tenant Edge services: scaling by interface and instance
• Scaling of north-south bandwidth with ECMP support in 6.1
• Requires design consideration for the following:
– Edge placement for north-south traffic
– Edge cluster design
– Bandwidth scaling: 10G to 80G
– Edge services with multi-tenancy
NSX Edge Services Gateway Sizing
• The Edge services gateway can be deployed in several sizes depending on the services used
• Multiple Edge nodes can be deployed at once, e.g. ECMP, LB and active-standby for NAT
• When needed, the Edge size can be increased or decreased
• In most deployments Quad-Large is sufficient for services such as ECMP & LB
• X-Large is required for high-performance L7 load-balancer configurations

Form        vCPU  Memory (MB)  Specific Usage
X-Large     6     8192         Suitable for L7 high-performance LB
Quad-Large  4     1024         Suitable for most deployments
Large       2     1024         Small DC
Compact     1     512          PoC
Active-Standby Edge Design
[Diagram: an active-standby stateful Edge pair (FW/NAT/LB) on two vSphere hosts, with routing adjacencies to the L3 ToRs over the VXLAN 5020 transit link]
• An active-standby Edge Services Gateway enables stateful services: perimeter FW, NAT, LB, SSL-VPN and north-south routing
• Deployed as a pair, with heartbeat and synchronization of services state; heartbeat and sync use the same internal vNic
• L2 connectivity is required between active and standby
• Form factor: X-Large to Compact (one license)
• Multi-interface routing support: OSPF & BGP; protocol timers must be tuned to 40/120 (hello/hold)
• An anti-affinity rule is automatically created so the active and standby Edges are placed on different hosts; a minimum of three hosts is recommended
• Multiple Edge instances can be deployed, e.g. an LB Edge near the application tier
• Multiple tenants can have separate Edge services
ECMP-Based Edge Design
[Diagram: non-stateful ECMP Edges E1, E2 .. E7, E8 on the transit VXLAN, peering with customer routers R1/R2 over VLANs 10 and 20 toward the external network]
• ECMP Edges enable scalable north-south traffic forwarding
• Up to 8 Edge instances, for up to 80G of bandwidth
• Stateful services are not supported, due to asymmetric traffic behavior
• No heartbeat and sync between Edge nodes
• L2 connectivity is required for peering
• Form factor: X-Large to Compact (one license)
• Multi-interface routing support: OSPF & BGP; aggressive timer tuning is supported, 3/4 (hello/hold)
• Anti-affinity configuration is required; a minimum of three hosts is recommended
• Multiple tenants can have separate Edge services
Edge Interaction with Physical Topology
• The Edge forms peering adjacencies with physical devices
• The uplink teaming configuration affects routing peering:
– Failover or Src-ID: a single uplink is used to establish routing adjacencies
– LACP: both uplinks can be used, with dependencies on the physical switch vendor
• The design choices also differ depending on whether the Edge peers with a ToR configured as L3 or L2
• The VDS uplink configuration, together with the ToR connectivity, creates design choices with vendor-specific technology dependencies (vPC or MLAG)
• The recommendation for a typical design is to use explicit failover mode for teaming: it does not depend on vendor-specific configuration and provides simple route peering
[Diagram: non-LACP vs. LACP uplink teaming modes; routing adjacencies from the Edges on vSphere hosts to the L3 ToRs over the VXLAN 5020 transit link]
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
Distributed Logical Routing Components – Control Plane
• The Distributed Logical Router control plane is provided by a per-instance DLR Control VM and the NSX Controller
• Dynamic routing protocols supported with the DLR: OSPF and BGP
• The Control VM forms the adjacencies with the Edge node
• Communicates with NSX Manager and the Controller cluster:
– NSX Manager sends LIF information to the Control VM and the Controller cluster
– The Control VM sends routing updates to the Controller cluster
• The DLR Control VM and the NSX Controller are not in the data path
• High availability is supported through an active-standby configuration
• Can exist in the edge cluster or in a compute cluster
Distributed Logical Routing Components – Data Plane
• Logical Interfaces (LIFs) exist on a Distributed Logical Router instance; there are internal LIFs and uplink LIFs
• VM default-gateway traffic is handled by the LIF on the appropriate network
• LIFs are distributed across every hypervisor prepared for NSX
• Up to 1000 LIFs can be configured per DLR instance: 8 uplink and 992 internal
• An ARP table is maintained per LIF
• vMAC is the MAC address of an internal LIF; it is the same across all hypervisors and is never seen by the physical network (only by VMs)
• The routing table on each ESXi host is programmed via the controller
[Diagram: the DLR kernel module in the vSphere host, with LIF1/LIF2 and an uplink to the transit VXLAN]
ECMP with DLR and Edge
[Diagram: web/app/DB segments behind the DLR (VXLAN); Edges E1, E2, E3 .. E8 between the DLR and the physical routers (VLAN) toward the core]
• ECMP is supported on both the DLR and the NSX Edge: each can install up to 8 equal-cost routes toward a given destination in its forwarding table
• 8 NSX Edges can be deployed simultaneously for a given tenant, increasing the available bandwidth for north-south communication (up to 80 Gbps*) and reducing the traffic outage in an ESG failure scenario (only 1/Nth of the flows are affected)
• Load-balancing algorithm on the NSX Edge: based on the Linux kernel's flow-based random round-robin next-hop selection, where a flow is a (source IP, destination IP) pair
• Load-balancing algorithm on the DLR: a hash of source IP and destination IP selects the next hop
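The DLR's hash-based next-hop selection can be sketched in a few lines; the actual in-kernel hash differs, so this is only an illustration of the per-flow determinism:

```python
# Sketch: DLR-style ECMP next-hop selection. Hash the (source IP, destination
# IP) pair and index into the list of equal-cost next hops. The hash function
# here (CRC32) is illustrative; the real kernel algorithm differs.
import zlib

def pick_next_hop(src_ip: str, dst_ip: str, next_hops: list) -> str:
    flow = f"{src_ip}->{dst_ip}".encode()
    return next_hops[zlib.crc32(flow) % len(next_hops)]

edges = ["E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8"]  # up to 8 ECMP Edges

# Every packet of a given flow hashes to the same Edge, keeping the path
# deterministic per (src, dst) pair:
a = pick_next_hop("10.1.1.5", "192.0.2.9", edges)
b = pick_next_hop("10.1.1.5", "192.0.2.9", edges)
print(a == b)  # True
```

This per-flow determinism is exactly why stateful services cannot sit on ECMP Edges: return traffic may hash through a different Edge than the outbound path.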
Distributed Router & ECMP Edge Routing
• 2 VLANs are used for peering with the customer routers
• Map each of these VLANs (port groups) to a different dvUplink on the Edge VDS to ensure distribution of N/S traffic across dvUplinks: uplink = VLAN = adjacency
• Avoid using LACP to the ToR for route peering, due to vendor dependencies
• Use a minimum of 3 hosts per rack; with only two hosts, the active Edges placed by anti-affinity could share a host with the control VM, risking a dual failure
• Use a third host for the active control VM, with the standby on any remaining host under an anti-affinity rule
[Diagram: web/app/DB logical switches behind the distributed router (active-standby DLR control VM); ECMP Edges E1..E4 on the transit VXLAN peering with customer routers R1/R2 over VLANs 10 and 20 toward the external network]
Edge HA Models Comparison – BW, Services & Convergence
[Diagram: left, an active/standby pair (E1 active, E2 standby) with a single routing adjacency to the physical router; right, ECMP Edges E1, E2, E3 .. E8 with multiple routing adjacencies; web/app/DB segments behind a DLR with an active-standby control VM in both cases]

Active/Standby HA model:
• Bandwidth: single path (~10 Gbps/tenant)
• Stateful services: supported (NAT, SLB, FW)
• Availability: slower convergence with stateful services enabled

ECMP model:
• Bandwidth: up to 8 paths (~80 Gbps/tenant)
• Stateful services: not supported
• Availability: high, ~3-4 sec with (1,3 sec) timer tuning
3-Tier App Logical to Physical Mapping
[Diagram: a compute cluster (hosts 1-5) running the web/app/DB VMs; an edge cluster (hosts 6-7) running the Edge VMs and the Logical Router Control VMs; a management cluster running NSX Manager, the NSX Controller cluster, vCAC and vCenter]
Edge cluster availability and capacity planning requires:
• A minimum of three hosts per cluster; more if ECMP-based north-south traffic bandwidth demands it
• The edge cluster can also contain the NSX Controllers and the DLR control VMs for Distributed Logical Routing (DLR)

[Diagram: single-rack and dual-rack edge connectivity: a routed DC fabric toward WAN/Internet; leaf switches carry the VMkernel VLANs plus the VLANs for L2 and L3 NSX services]

Deployment considerations: benefits of a dedicated edge rack
• Reduced need for stretching VLANs; L2 is required only for the external 802.1Q VLANs & the Edge default gateway
• L2 connectivity between active and standby in a stateful Edge design; GARP is used to announce the new MAC in the event of a failover
• Localized routing configuration for N-S traffic, reducing the need to configure and manage the rest of the ToRs in the spine
• Span of control for network-centric operational management, BW monitoring & features
Edge Cluster
Agenda
• NSX for vSphere Design and Deployment Considerations
– Physical & Logical Infrastructure Requirements
– NSX Edge Design
– Logical Routing Topologies
– NSX Topologies for Enterprise and Multi-tenant Networks
– Micro-segmentation with Distributed FW Design
Enterprise Topology – Two-Tier Design (with/without 6.1 onward)
• A typical enterprise topology consists of app-tier logical segments
• Routing and distributed forwarding are enabled for each logical segment and available on every host via the distributed logical router (DLR)
– Workloads can move without VLAN dependencies, since local forwarding exists on each host via the DLR LIF
– North-south traffic is handled by the next-hop Edge, which provides virtual-to-physical (VXLAN to VLAN) forwarding
• The DLR-to-Edge routing is provisioned once initially; the topology can then be reused for additional logical segments (additional LIFs) across multiple app-tier deployments
• Scaling:
– Edge scaling, two ways:
• Per-tenant scaling: each workload/tenant gets its own Edge and DLR
• ECMP-based scaling in 10G bandwidth increments per additional Edge, up to a maximum of 80G (8 Edges); available from the NSX 6.1 release onward
– DLR scaling: up to 1000 LIFs, i.e. 998 logical networks per DLR instance
[Diagram: logical segments Web1..Webn, App1..Appn, DB1..DBn behind distributed routing; non-stateful ECMP Edges E1, E2, E3 .. E8 peering over the VXLAN 5020 transit link with the physical routers (VLAN 20) toward the external network and core]
Multi Tenant (DLRs) Routing Topology
[Diagram: multi-tenant topology – a single NSX Edge connects to the external network and peers with per-tenant DLR instances (DLR Instance 1 through 9) over dedicated transit links (VXLAN 5020…5029); each tenant's DLR attaches its own Web, App, and DB logical switches]
Can be deployed by enterprises, SPs and hosting companies
No support for overlapping IP addresses between tenants connected to the same NSX Edge
If true isolation of tenant routing and overlapping IP addressing is required, a dedicated Edge in HA mode is the right approach
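The no-overlap constraint can be validated up front before attaching tenant DLRs to a shared Edge. A minimal sketch using Python's ipaddress module; the function name and data shape are invented for illustration:

```python
import ipaddress
from itertools import combinations

def overlapping_tenant_subnets(tenant_subnets: dict) -> list:
    """Return pairs of tenants whose subnets overlap.

    tenant_subnets maps a tenant name to a list of CIDR strings.
    Tenants with overlapping subnets cannot share one NSX Edge,
    since its single routing table cannot keep them isolated.
    """
    conflicts = []
    for (t1, nets1), (t2, nets2) in combinations(tenant_subnets.items(), 2):
        for n1 in nets1:
            for n2 in nets2:
                if ipaddress.ip_network(n1).overlaps(ipaddress.ip_network(n2)):
                    conflicts.append((t1, t2, n1, n2))
    return conflicts
```

Run this against all tenants planned for one shared Edge: an empty result means the shared-Edge topology is safe; any conflict means those tenants need dedicated Edges.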
Multi Tenant Routing Topology (Post-6.1 NSX Release)
[Diagram: a single NSX Edge uplinks to the external network; one trunk vNic carries VXLAN sub-interfaces, each peering with a per-tenant DLR instance (Tenant 1 through Tenant n) that serves Web, App, and DB logical switches]
From NSX SW release 6.1, a new type of interface is supported on the NSX Edge (in addition to Internal and Uplink): the "Trunk" interface
This allows creating many sub-interfaces on a single NSX Edge vNic and establishing peering with a separate DLR instance on each sub-interface
Scales up the number of tenants supported with a single ESG (assuming no overlapping IP addresses across tenants)
An aggregate of 200 sub-interfaces per NSX Edge is supported in 6.1
Only static routing & BGP are supported on sub-interfaces in 6.1
OSPF support will be introduced in the 6.1.3 maintenance release
Scale numbers for dynamic routing (max peers/adjacencies) are under review
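Provisioning against the 6.1 aggregate limit can be modeled as a simple allocator. This is a sketch with invented class and method names; only the 200-sub-interface limit comes from the slide:

```python
MAX_SUBINTERFACES = 200  # aggregate per NSX Edge in 6.1

class TrunkVnic:
    """Tracks tenant sub-interfaces carved out of one Edge trunk vNic."""

    def __init__(self):
        self.tenants = {}  # tenant name -> sub-interface index

    def add_tenant(self, name: str) -> int:
        """Allocate the next sub-interface, enforcing the 6.1 limit."""
        if len(self.tenants) >= MAX_SUBINTERFACES:
            raise RuntimeError("6.1 supports at most 200 sub-interfaces per Edge")
        sub_if = len(self.tenants) + 1
        self.tenants[name] = sub_if
        return sub_if
```

Automation that onboards tenants onto a shared ESG would hit this guard at tenant 201 and spill onto a second Edge.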
High Scale Multi Tenant Topology
• High-scale multi-tenancy is enabled with multiple tiers of Edges interconnected via a VXLAN transit uplink
• Two tiers of Edges allow scaling with administrative control
  – The top-tier Edge acts as a provider Edge, managed by the cloud (central) admin
  – Second-tier Edges are provisioned and managed by the tenant
• The provider Edge can scale up to 8 ECMP Edges for scalable routing
• Based on tenant requirements, a tenant Edge can be ECMP or stateful
• Used to scale up the number of tenants (the only option before the VXLAN trunk introduction)
• Support for overlapping IP addresses between tenants connected to different first-tier NSX Edges
[Diagram: high-scale multi-tenant topology – up to eight ECMP provider Edges (E1…E8, NSX Edge X-Large, acting as a route aggregation layer) face the external network; tenant Edges attach below (either ECMP, or HA with NAT/LB features and a single adjacency to the ECMP tier) over VXLAN uplinks or a VXLAN trunk (trunk supported from NSX 6.1 onward), with a VXLAN 5100 transit segment and per-tenant Web/App/DB logical switches]
Multi Tenant Topology - NSX (Today)
[Diagram: MPLS integration – each tenant NSX ESG connects over its own VLAN (VLAN 10, VLAN 20) to a per-tenant VRF (Tenant 1 VRF, Tenant 2 VRF) on the physical router (PE or multi-VRF CE) facing the MPLS network; tenant Web/App/DB logical switches attach below over VXLAN uplinks or a VXLAN trunk (trunk supported from NSX 6.1 onward)]
The NSX Edge is currently not VRF aware – its single routing table does not allow keeping tenants logically isolated
Each dedicated tenant Edge can connect to a separate VRF in the upstream physical router
This is the current deployment option for integrating with an MPLS network
Agenda
• NSX for vSphere Design and Deployment Considerations
  – Physical & Logical Infrastructure Requirements
  – NSX Edge Design
  – Logical Routing Topologies
  – NSX Topologies for Enterprise and Multi-tenant Networks
  – Micro-segmentation with Distributed FW Design
[Diagram: NSX security architecture – a physical perimeter firewall fronts the Internet and intranet/extranet; the NSX Edge Service Gateway provides stateful perimeter protection at the SDDC boundary, while the distributed firewall (DFW) on each compute-cluster host provides inter/intra-VM protection]
NSX Security Architecture Overview
• Stateful Edge security
• DFW per-vNIC characteristics
  – Distributed & fully programmable (REST API)
  – vMotion with rules and connection state intact
  – Flexible rules and topology independence
  – Third-party ecosystem integration – PAN
  – Foundation for the micro-segmentation design
• Tools and methods to protect virtual resources
  – Traffic redirection rules with Service Composer or a partner security services UI
  – Filtering module within the security policy definition
  – Diverse policy objects & Policy Enforcement Points (PEP)
    • Identity – AD groups
    • VC container objects – DC, cluster, port-groups, logical SW
    • VM characteristics – VM names, security tags, attributes, OS names
    • Protocols, ports, services
• Security groups leverage objects and PEPs to achieve micro-segmentation
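Since the DFW is programmable over the NSX Manager REST API, a small automation sketch can illustrate the idea. The endpoint path below follows the NSX for vSphere API convention and should be verified against your NSX version; the hostname, credentials, and helper names are placeholders:

```python
import base64

NSX_MANAGER = "nsxmgr.example.com"  # placeholder hostname

def dfw_config_url(manager: str) -> str:
    """URL of the distributed firewall ruleset on NSX Manager.

    Path per the NSX for vSphere REST API convention; confirm it
    against the API guide for your release before relying on it.
    """
    return f"https://{manager}/api/4.0/firewall/globalroot-0/config"

def basic_auth_header(user: str, password: str) -> dict:
    """HTTP Basic auth header as used by the NSX Manager API."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# An HTTP GET with these pieces (e.g. via urllib.request) returns the
# XML firewall configuration, which automation can diff or version.
```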
Micro-segmentation Design
• Collapsing application tiers into like services, with each app tier on its own logical switch
  – Better for managing domain-specific (WEB, DB) security requirements
  – Easier to develop segmented isolation between app-tier domains – Web-to-DB Deny_All vs. Web-to-App granularity
  – May require complex security between app tiers, since specific web-to-app or app-to-db isolation is needed within a logical switch as well as between segments
• Collapsing all app tiers into a single logical switch
  – Better for managing group/application-owner specific expertise
  – Apps-container model; may suit an app-as-tenant model well
  – Simpler security-group construct per app tier
  – Isolation between different app containers is required
• DMZ model
  – Zero-trust security
  – Multiple DMZ logical networks, default Deny_All within DMZ segments
  – External-to-internal protection by multiple groups
[Diagram: two micro-segmentation layouts behind a logical distributed router – per-tier logical switches (Web-Tier-01 1.1.1.0/24 with web-01/web-02, App-Tier-01 2.2.2.0/24 with app-01/app-02, DB-Tier-01 3.3.3.0/24 with db-01/db-02) vs. a single All-Tier-01 1.1.1.0/24 segment where security groups SG-WEB, SG-APP and SG-DB separate the same VMs; allowed flows from the external network are client-to-web HTTPS and web-to-app TCP/8443, all else blocked]
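The allow-list in the diagram (client-to-web HTTPS, web-to-app TCP/8443, default deny) can be expressed as ordered rules over security groups. A minimal first-match evaluation sketch; the data shapes and helper names are invented, only the policy itself comes from the slide:

```python
# First-match rule evaluation over security groups, mirroring the
# slide's policy: any -> SG-WEB on 443, SG-WEB -> SG-APP on 8443,
# everything else denied (zero-trust default).

GROUPS = {
    "SG-WEB": {"web-01", "web-02"},
    "SG-APP": {"app-01", "app-02"},
    "SG-DB":  {"db-01", "db-02"},
}

RULES = [
    {"src": "any",    "dst": "SG-WEB", "port": 443,  "action": "allow"},
    {"src": "SG-WEB", "dst": "SG-APP", "port": 8443, "action": "allow"},
]

def member(vm: str, group: str) -> bool:
    """True if the VM belongs to the group ('any' matches everything)."""
    return group == "any" or vm in GROUPS.get(group, set())

def evaluate(src_vm: str, dst_vm: str, port: int) -> str:
    """Return the action of the first matching rule, else the default deny."""
    for rule in RULES:
        if member(src_vm, rule["src"]) and member(dst_vm, rule["dst"]) \
                and port == rule["port"]:
            return rule["action"]
    return "deny"
```

Because rules key on group membership rather than IPs, the same policy holds in both layouts above, whether the tiers sit on separate logical switches or share one segment.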
Feature Overview – vCloud Automation Center & NSX
• Connectivity
  – vCAC network profiles for on-demand network creation
    • Define routed, NAT, private, and external profiles for a variety of app topologies
    • Option to connect an app to pre-created networks (logical or physical)
  – NSX Distributed Logical Router (DLR)
    • Optimize for east-west traffic & resources by connecting to a pre-created DLR
• Security
  – On-demand micro-segmentation
    • Automatic creation of a security group per app with default-deny firewall rules
  – Apply firewall and advanced security policies with ease
    • Select pre-defined NSX security policies to apply to an app/tier
    • Antivirus, DLP, intrusion prevention, vulnerability mgmt… more to come
  – Connect business logic to security policy with ease
    • Select a pre-defined NSX security tag (e.g. 'Finance') which is applied to the workload and interpreted by NSX to place it in a pre-defined security group
• Availability
  – On-demand load balancer in 'one-armed' mode
    • Plus the option of using a pre-created, in-line load balancer (logical or physical)
Range of features from pre-created to on-demand network and security services.
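The tag-driven placement described above can be sketched as a simple mapping: the CMP applies a pre-defined tag, and NSX places the workload into the corresponding security group. All names here are illustrative:

```python
# Sketch of tag-driven security-group placement: vCAC/vRA applies a
# pre-defined NSX security tag to a workload, and NSX interprets the
# tag to place the VM in a pre-defined security group.

TAG_TO_GROUP = {
    "Finance": "SG-Finance",   # example tag from the slide
    "PCI":     "SG-PCI",       # hypothetical additional tag
}

def place_workload(vm: str, tags: list, groups: dict) -> dict:
    """Add the VM to every security group its tags map to.

    Unknown tags are ignored; groups maps group name -> set of VMs.
    """
    for tag in tags:
        group = TAG_TO_GROUP.get(tag)
        if group:
            groups.setdefault(group, set()).add(vm)
    return groups
```

The business logic lives entirely in the tag catalog, so blueprint authors never touch firewall rules directly; the group membership drives the DFW policy.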
Reference Designs
VMware NSX Collateral Landscape
• NSX Reference Designs
• NSX Platform Hardening
• NSX Getting Started Guides
• SDDC Validated Solutions
• NSX Partner White Papers
• NSX and Fabric Vendors
Reference Designs & Technical Papers on VMware Communities: https://communities.vmware.com/docs
Reference Designs and Technical Papers on the NSX Portal: http://www.vmware.com/products/nsx/resources.html
VMware NSX Network Virtualization Design Guides: https://communities.vmware.com/docs/DOC-27683
NSX Reference Design Guides – The Architecture
[Diagram: reference architecture – ESXi compute clusters, an infrastructure/edge cluster (Edge, storage, vCenter and cloud management system), edge clusters facing WAN/Internet, a storage cluster, and a management/cloud-management cluster]
What’s Next…
• VMware NSX Hands-on Labs – labs.hol.vmware.com
• VMware Booth #12293 NSX Demo Stations
• Explore, Engage, Evolve – virtualizeyournetwork.com
• Network Virtualization Blog – blogs.vmware.com/networkvirtualization
• NSX Product Page – vmware.com/go/nsx
• NSX Training & Certification – www.vmware.com/go/NVtraining
• NSX Technical Resources & Reference Designs – vmware.com/products/nsx/resources
• VMware NSX YouTube Channel – youtube.com/user/vmwarensx
Play Learn Deploy
Please submit your feedback via our mobile app.
Thank you!