Virtual Networking PAVMUG: July 24, 2008
Transcript of Virtual Networking PAVMUG: July 24, 2008
Virtual Networking
PAVMUG: July 24, 2008

Jonathan Butz, Services Manager
Arraya Solutions, [email protected]

Halim Chtourou, Senior Solutions Engineer
Arraya Solutions, Inc., hchtourou@
Slide | 2
Virtual Networking Outline
• Arraya Introduction
• Virtual Networking
• Design Essentials
• Design Examples
• Advanced Concepts
Arraya Solutions, Inc.
• IT Infrastructure Consultants since 1999
• Consulting Services in Industry Leading Technologies
• Custom Solutions and Services
The Arraya Team
• Experienced and Knowledgeable
  – Certified Professionals
  – Responsive Sales Professionals
  – Consultative Approach with a Proven Track Record
• Flexible
  – Local Presence and Premier Service
  – In-house Demo Center, New Data Center
• Successful
  – Consistent Double-Digit Growth Since Inception
  – Portfolio of Satisfied Reference Customers
Satisfied Customers
Custom Solutions
• Exchange 2007 CCR Design and Migration
• Storage architecture, deployment, migration
• DR architecture and implementation
• VMware architecture and deployment
• Health Checks, Reports and Recommendations
  – SAN, VMware, Active Directory, Exchange, TSM
VMware Solutions
• VMware Virtual Infrastructure Partner since 2003
• VMware Authorized Consulting Partner
• VMware Premier Partner, VAC Gold Partner
• 9 VMware Certified Professionals on Staff
• Close Relationships with VMware Team
• Planning & Design Accreditation
Physical to Virtual
• Increased scale on similar physical footprint
• ESX host servicing multiple endpoints
• Networking concepts remain the same
• Virtual Networking enables additional flexibility
Physical to Virtual
(Diagram: two physical switches uplinked to a Virtual Switch inside the ESX host)
Increased Flexibility
• Add vSwitches as required
• Assign guest OS and physical NICs (vmnics) as required
• Guest OS traffic switched internally

(Diagram: three Virtual Switches with guest and vmnic assignments)
Design Essentials
• Virtual network topology: same as physical
• Conventional access, distribution, core design
• Virtual Switches are Access Switches
• Isolate certain traffic types where possible
Traffic Types
• Virtual Machine Traffic
  – Traffic sourced and received from virtual machines
  – Traffic between VMs on the same vSwitch stays internal
• VMotion Traffic
  – Traffic sent when moving a virtual machine from one ESX host to another
  – Should be isolated from VM traffic
• Management Traffic
  – Should be isolated from VM traffic
  – Includes heartbeats if VMware HA is enabled
• iSCSI Traffic
  – Should be isolated from all other traffic
Virtual Switch Capabilities
• L2 Ethernet Switching
• VLAN Trunking and Segmentation (802.1Q)
• Rate limiting: restrict traffic generated by a VM
• VMware NIC Teaming
  – Load balancing for better use of the physical network
  – Redundancy for enhanced availability
• Layer 2 functionality only (no routing)
• MAC addresses known by registration rather than learned
  – No MAC learning required
  – Prevents MAC spoofing
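The registration point can be sketched in a few lines. This is an illustrative model, not VMware code: a conventional switch learns MAC-to-port mappings from the source addresses of incoming frames, which a forged frame can poison, while a vSwitch populates its table from VM configuration and never updates it from traffic.

```python
# Illustrative contrast (not VMware code): learned vs. registered MAC tables.

class LearningSwitch:
    def __init__(self):
        self.table = {}  # MAC -> port, learned from observed traffic

    def receive(self, src_mac, port):
        # A forged source MAC overwrites the real entry (MAC spoofing).
        self.table[src_mac] = port

class RegistrationSwitch:
    def __init__(self, assignments):
        # MAC -> port fixed from VM configuration; traffic never updates it.
        self.table = dict(assignments)

    def receive(self, src_mac, port):
        pass  # source MACs on frames do not change forwarding state

learning = LearningSwitch()
learning.receive("00:50:56:aa:bb:01", 1)   # legitimate VM on port 1
learning.receive("00:50:56:aa:bb:01", 7)   # attacker spoofing the same MAC
spoofed_port = learning.table["00:50:56:aa:bb:01"]   # poisoned: now 7

vswitch = RegistrationSwitch({"00:50:56:aa:bb:01": 1})
vswitch.receive("00:50:56:aa:bb:01", 7)    # spoof attempt has no effect
pinned_port = vswitch.table["00:50:56:aa:bb:01"]     # still 1
```

The fixed table is why no MAC learning is required and why spoofed source addresses cannot redirect another VM's traffic.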
VLAN Trunking in ESX
• Enables logical network partitioning
• Virtual machines connect to virtual switch port groups
• Virtual switch port groups are associated with a particular VLAN
• The virtual switch tags packets exiting the virtual machine just as physical switches do for physical servers
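The tag itself is the standard 4-byte 802.1Q header, inserted after the destination and source MACs. A minimal sketch of that insertion (illustrative only; real tagging happens inside the vmkernel):

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 12-byte dst/src MAC header,
    as Virtual Switch Tagging (VST) does on frames leaving a VM."""
    assert 0 < vlan_id < 4095          # 4095 is reserved (VGT passthrough)
    tci = (priority << 13) | vlan_id   # PCP(3 bits) | DEI(1)=0 | VID(12 bits)
    tag = struct.pack("!HH", 0x8100, tci)  # TPID 0x8100 + TCI, network order
    return frame[:12] + tag + frame[12:]

# Dummy frame: zeroed dst+src MACs, IPv4 EtherType, toy payload.
untagged = bytes(12) + b"\x08\x00" + b"payload"
tagged = tag_frame(untagged, vlan_id=105)   # Finance VLAN from the examples
```

The VLAN number 105 here just reuses the Finance VLAN from the design examples later in the deck.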
VLAN Tagging Options
• EST – External Switch Tagging: the external physical switch applies VLAN tags
• VGT – Virtual Guest Tagging: VLAN tags are applied in the guest; the port group is set to VLAN "4095"
• VST – Virtual Switch Tagging (preferred): VLAN tags are applied in the vSwitch; port groups are assigned to a VLAN
Redundant Paths: Uplinks and Switches

(Diagram: NIC team uplinked to redundant physical switches A1 and A2)
Teaming Options for ESX Uplinks
• "Originating Virtual Port ID" or "Source MAC" based Teaming
  – NIC chosen based on originating virtual switch port ID or source MAC
  – Traffic from the same vNIC sent via the same physical NIC (vmnic) until failover
  – Simple: no link aggregation required
• "IP Hash" Teaming
  – NIC chosen based on SRC-DST IP
  – Link aggregation (EtherChannel) required on the physical switch
  – Teaming limited to a single switch except where explicitly supported (Cisco Catalyst 6500 VSS, Nortel Split MLT and some stacked switches)
  – Better balancing if a guest has a large number of IP peers
• Recommendation: choose Originating Virtual Port ID based teaming (the default) for simplicity and multi-switch redundancy
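The difference between the two policies can be sketched with toy hash functions (assumed, simplified selection logic, not the actual vmkernel algorithm): port-ID teaming pins each vNIC to one uplink, while IP hash picks an uplink per source/destination IP pair.

```python
# Simplified models of the two teaming policies (illustrative only).
import ipaddress

def uplink_by_port_id(port_id: int, n_uplinks: int) -> int:
    # Same virtual port always maps to the same uplink until failover.
    return port_id % n_uplinks

def uplink_by_ip_hash(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    # Uplink chosen per SRC-DST IP pair (toy XOR hash).
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_uplinks

# A VM on virtual port 7 always uses the same one of two uplinks...
pinned = {uplink_by_port_id(7, 2) for _ in range(10)}

# ...while IP hash can spread that same VM's flows across both uplinks
# when it talks to many IP peers.
spread = {uplink_by_ip_hash("10.0.0.5", dst, 2)
          for dst in ("10.0.0.9", "10.0.0.10")}
```

This is why IP hash only pays off for guests with many IP peers, and why it needs EtherChannel: the physical switch must treat the teamed ports as one link for the per-flow distribution to be valid.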
Multiport NICs

(Diagram: ESX host with multiport NICs)
Design and Network Ports
Question
• How do I best design the virtual network given VM traffic, VMotion and Management for security and isolation?
Answer
• Depends on the number of physical ports
• 4 NIC ports per server recommended, +2 for iSCSI
• VLAN trunking highly recommended

Design Examples
• ESX flexibility allows for multiple variations of valid configurations
• Understand your requirements and resultant traffic types and design accordingly
Example Infrastructure
• 4 ESX Servers (ESX Host 1–4), each trunking VLANs 10, 20, 105, 106
• 2 logical groups of virtual machines
• VLANs
  – VLAN 10: Management (VirtualCenter server)
  – VLAN 20: VMotion
  – VLAN 105: Finance
  – VLAN 106: Engineering
ESX with 2 NICs
• Create one virtual switch
• Connect both physical NICs
• Port groups
  – Port group 10 for Service Console
  – Port group 20 for VMotion
  – Port group 105 for Finance VMs
  – Port group 106 for Engineering VMs
• On-board NIC0 (vSwitch1 Uplink)
  – PG10 (preferred) and PG20 (preferred)
• On-board NIC1 (vSwitch1 Uplink)
  – PG105 (preferred) and PG106 (preferred)
• Both uplinks carry VLANs 10, 20, 105, 106
ESX with 4 NICs: Option 1
• Create two virtual switches
• Connect two physical NICs to each vSwitch
• Port groups
  – Virtual Switch0
    • Port group 10 for Service Console
    • Port group 20 for VMotion
  – Virtual Switch1
    • Port group 105 for Finance VMs
    • Port group 106 for Engineering VMs
ESX with 4 NICs: Option 1 (continued)
• On-board NIC0 (vSwitch0 uplink)
  – PG10 (preferred) and PG20
• On-board NIC1 (vSwitch1 uplink)
  – PG105 and PG106
• PCI-based NIC0 (vSwitch0 uplink)
  – PG10 and PG20 (preferred)
• PCI-based NIC1 (vSwitch1 uplink)
  – PG105 and PG106
• Each vSwitch teams its on-board and PCI-based uplinks
ESX with 4 NICs: Option 2
• Create one virtual switch
• Connect all 4 NICs to the vSwitch
• Port groups
  – Port group 10 for Service Console
  – Port group 20 for VMotion
  – Port group 105 for Finance VMs
  – Port group 106 for Engineering VMs
• Configure preferred physical NICs
• More effective use of available bandwidth
• Simplest physical switch configuration: all ports are VLAN trunks carrying VLANs 10, 20, 105 and 106
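The preferred/standby behavior in Option 2 can be modeled in a few lines. The specific port-group-to-vmnic assignments below are assumptions for illustration (the slide leaves the mapping to the administrator); the point is that each port group uses its preferred uplink and falls back only when that link drops.

```python
# Toy model of per-port-group active/standby NIC order on one vSwitch.
# The PG -> (preferred, standby) assignments are illustrative assumptions.

NIC_ORDER = {
    "PG10":  ("vmnic0", "vmnic2"),   # Service Console
    "PG20":  ("vmnic2", "vmnic0"),   # VMotion
    "PG105": ("vmnic1", "vmnic3"),   # Finance VMs
    "PG106": ("vmnic3", "vmnic1"),   # Engineering VMs
}

def active_uplink(port_group: str, links_up: set) -> str:
    """Return the uplink a port group uses given the set of live links."""
    preferred, standby = NIC_ORDER[port_group]
    if preferred in links_up:
        return preferred
    if standby in links_up:
        return standby
    raise RuntimeError("no uplink available for " + port_group)

all_up = {"vmnic0", "vmnic1", "vmnic2", "vmnic3"}
normal = active_uplink("PG105", all_up)               # preferred uplink
failed = active_uplink("PG105", all_up - {"vmnic1"})  # failover to standby
```

Crossing the preferred/standby pairs (vmnic0/vmnic2, vmnic1/vmnic3) keeps VM traffic and management traffic on separate physical links during normal operation while every port group still survives a single NIC failure.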
(Diagram: one vSwitch with uplinks vmnic0–3; Service Console on PG10, VMkernel on PG20, VMs on PG105, each with preferred and standby uplinks)
ESX with More than 4 NICs
• With Trunks
  – Use the previous approach and scale up to meet additional bandwidth and redundancy requirements
  – Dedicate a NIC pair for iSCSI (if using VM software initiator)
• Without Trunks
  – Dedicate a NIC pair for VMotion
  – Dedicate a NIC pair for the Service Console
  – Separate NIC pairs for each network
  – Dedicate a NIC pair for iSCSI (if using VM software initiator)
DMZ Architecture
• Regulations may require DMZ traffic separation
• SOX and HIPAA requirements for isolation are open to interpretation
• Many customers dedicate NICs to DMZ traffic
• Allows internal and DMZ traffic in the same cluster
• Compliance may vary by auditor
iSCSI Design
• Provides SCSI block storage access over an IP network
• Relevant for VMs using the iSCSI software-based initiator
• Design depends on NIC ports available
• General Design Guidance
  – Keep iSCSI traffic on its own dedicated VLAN and subnet
  – Dedicate NIC pairs to iSCSI traffic
  – Use teaming as appropriate
    • "Originating Virtual Port ID" setting if all your iSCSI targets share the same IP address
    • "IP Hash" setting for other scenarios, including multiple targets
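Why the target count decides the teaming policy falls out of the hash: with a single target IP, every packet from one initiator hashes to the same uplink, so IP hash buys nothing over the simpler port-ID policy. A toy model (the XOR hash is a simplified stand-in, not the exact ESX algorithm):

```python
# Toy IP-hash model showing uplink choice per iSCSI target (illustrative).
import ipaddress

def ip_hash_uplink(src: str, dst: str, n_uplinks: int = 2) -> int:
    return (int(ipaddress.ip_address(src)) ^
            int(ipaddress.ip_address(dst))) % n_uplinks

initiator = "192.168.50.10"   # assumed iSCSI VLAN addressing

# One target IP: every packet takes the same uplink regardless of policy.
one_target = {ip_hash_uplink(initiator, "192.168.50.20")}

# Two target IPs: IP hash can use both uplinks at once.
two_targets = {ip_hash_uplink(initiator, t)
               for t in ("192.168.50.20", "192.168.50.21")}
```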
iSCSI Examples
• Two NIC ports
  – Buy additional NICs if possible
  – Otherwise follow the two-port example
    • For high VM traffic: set SC + VMotion + iSCSI to prefer NIC0, VM traffic to prefer NIC1
    • For low VM traffic: set SC + VMotion to prefer NIC0, VM traffic + iSCSI to prefer NIC1
• Four NIC ports
  – Buy additional NICs if possible
  – Otherwise follow the two-port example, create an additional vSwitch, and connect the remaining NICs for iSCSI
• Six NIC ports
  – Follow the four-port example and dedicate the additional NICs to iSCSI
Spanning Tree: Not Used by ESX
• ESX does not alter STP on the physical network
  – ESX does not participate (does not generate/consume BPDUs)
  – Use "portfast" or "trunkfast" on the physical switch so ports progress immediately to the "forwarding" state
• Interconnections between virtual switches are not possible
• Loops are not possible within a single virtual switch
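The loop-free property follows from one forwarding rule, sketched below (assumed, simplified behavior): a frame arriving on an uplink is delivered only to registered VM MACs and is never flooded back out another uplink, so there is no path a loop could take and no need for STP.

```python
# Simplified vSwitch forwarding rule for frames arriving on an uplink.
# Illustrative model only -- not the vmkernel's actual data path.

VM_MACS = {"00:50:56:aa:bb:01": "vm_port1"}   # registered VM MAC -> port
UPLINKS = {"vmnic0", "vmnic1"}

def forward_from_uplink(dst_mac: str) -> set:
    """Ports a frame received on an uplink is delivered to."""
    if dst_mac in VM_MACS:
        return {VM_MACS[dst_mac]}
    # Unknown unicast is dropped: never flooded toward UPLINKS,
    # so a frame can never re-enter the physical network and loop.
    return set()

known = forward_from_uplink("00:50:56:aa:bb:01")
unknown = forward_from_uplink("00:50:56:ff:ff:ff")
```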
Link-state Tracking: Faster Failover
• "Link State Tracking" associates upstream and downstream links

(Diagram: ESX host virtual switch uplinked through a physical switch with link-state tracking toward the core)
VMotion: Step by Step

(Diagram: VMs A (MACA/IPA) and B (MACB/IPB) run on ESX Host 1; VM C (MACC/IPC) is moved to ESX Host 2 over the VMotion network. After the move, a RARP for the MAC move is sent as an L2 broadcast so the physical switches learn MACC's new location.)
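The RARP announcement in the last step can be sketched as a raw frame. The field layout follows RFC 903 (EtherType 0x8035, opcode 3 for a request); this is an illustrative construction, not the vmkernel's code, and it only builds the bytes rather than sending them.

```python
import struct

def rarp_mac_move(vm_mac: bytes) -> bytes:
    """Build the broadcast RARP frame ESX uses to announce a moved MAC,
    so physical switches re-learn which port the MAC now lives behind."""
    # Ethernet header: broadcast dst, the VM's MAC as src, EtherType 0x8035.
    eth = b"\xff" * 6 + vm_mac + struct.pack("!H", 0x8035)
    # RARP payload: htype=1 (Ethernet), ptype=0x0800 (IPv4),
    # hlen=6, plen=4, op=3 (RARP request).
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 3)
    arp += vm_mac + b"\x00" * 4      # sender MAC; sender IP unknown
    arp += vm_mac + b"\x00" * 4      # target MAC; target IP unknown
    return eth + arp

frame = rarp_mac_move(b"\x00\x50\x56\xaa\xbb\xcc")  # example VMware OUI MAC
```

The payload content barely matters; what updates the switches' MAC tables is simply seeing the VM's source MAC arrive on the new port, which is why a single L2 broadcast is enough.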
Questions?

Arraya Solutions, Inc.
521 Plymouth Road, Suite 113J
Plymouth Meeting, PA 19462
http://www.arrayasolutions.com
866.229.6234

Jonathan Butz, Services Manager
Hidden Bonus Slides
Customer: TCO

(Chart: 3-Year TCO Comparison for VMware VI3, "Current (As Is)" vs. "With VI (Projected)", scale $0–$600,000. Cost categories: VI Training; VI Design, Plan and Deployment Services; Additional Software Licensing Costs (if any); VI Software Licensing and SnS; Data Center Server Unplanned Downtime (Indirect); Disaster Recovery (Indirect); Administration; Provisioning; Space; Power and Cooling Consumption; Networking; Storage; Hardware.)