HP BladeSystem Networking Reference Architecture

HP Virtual Connect FlexFabric Module and VMware vSphere 5

Technical white paper

Table of contents

Executive Summary
Virtual Connect FlexFabric Module Hardware Overview
Designing an HP FlexFabric Architecture for VMware vSphere
Designing a vSphere Network Architecture with the Virtual Connect FlexFabric module
    vNetwork Distributed Switch Design
    Network I/O Control
    Hypervisor Load Balancing Algorithms
    Modifying Server Profiles with Online Servers
    HP NC551 and NC553 Minimum Supported Firmware and Drivers
Appendix A: Virtual Connect Bill of Materials
Appendix B: Terminology cross-reference
Appendix C: Glossary of Terms
For more information


Executive Summary

HP has revolutionized the way IT thinks about networking and server management. With the release of the HP ProLiant BladeSystem Generation 6 servers, along with Virtual Connect Flex-10 Ethernet modules, HP provided a great platform for VMware vSphere. Virtual Connect Flex-10 is the world's first technology to divide and fine-tune 10Gb Ethernet network bandwidth at the server edge. When combined with Virtual Connect, the BladeSystem architecture streamlines the typical change processes for provisioning in the datacenter.

HP has since evolved Virtual Connect Flex-10 to the next level: Virtual Connect FlexFabric modules. By combining the power of ProLiant BladeSystem Generation 7 servers and Virtual Connect, the Virtual Connect FlexFabric module allows customers to consolidate network connections and storage fabrics into a single module. This further reduces infrastructure cost and complexity by eliminating HBA adapters and Fibre Channel modules at the edge.

These servers include virtualization-friendly features such as larger memory capacity, dense population, room for additional mezzanine cards, and 4 to 64 processing cores (the upper end with Intel Hyper-Threading technology enabled or with AMD Opteron 6200-series processors). The following ProLiant BL servers ship standard with the NC551i FlexFabric Adapter:

BL465 G7
BL685 G7

The following ProLiant BL servers ship with the NC553i FlexFabric Adapter:

BL460 G7
BL490 G7
BL620/680 G7

Additionally, the NC551m and NC553m provide FlexFabric Adapter support for ProLiant BladeSystem G6 servers.

NOTE: Please check the latest NC551m [1] and NC553m [2] QuickSpecs for the official server support matrix.

The ProLiant Generation 8 servers do not ship with a built-in adapter. Instead, HP offers a FlexLOM option that lets customers choose which FlexFabric Adapter vendor they want. Please review the server model's QuickSpecs for supported FlexLOM adapters.

The FlexFabric Adapter introduces a new Physical Function called the FlexHBA. The FlexHBA, along with the FlexNIC, adds the unique ability to fine-tune each connection to adapt to your virtual server channels and workloads on the fly. The effect of using the Virtual Connect FlexFabric module is a reduction in the number of interconnect modules required to uplink outside of the enclosure, while still maintaining full redundancy across the service console, VMkernel, virtual machine (VM) networks and storage fabrics. This translates to a lower-cost infrastructure with fewer management points, switch modules, adapter cards and cables.

[1] http://h18004.www1.hp.com/products/blades/components/emulex/nc551m/index.html
[2] http://h18004.www1.hp.com/products/blades/components/emulex/nc553m/index.html
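To make the fine-tuning idea concrete, the Python sketch below models carving one 10Gb FlexFabric Adapter port into FlexNICs and a FlexHBA. The function names and speed values are hypothetical, and the 100Mb-10Gb tuning range and 10Gb aggregate cap are stated here as assumptions for the example; consult the FlexFabric Adapter QuickSpecs for the authoritative rules.

# Sketch: partition one 10Gb FlexFabric Adapter port into FlexNICs and a FlexHBA.
# Assumption for illustration: each physical function is tunable from 100 Mb to
# 10 Gb and the combined allocation cannot exceed the 10 Gb physical port speed.

PORT_SPEED_MB = 10_000   # 10Gb physical port
MIN_MB, MAX_MB = 100, 10_000

def validate_partition(functions):
    """functions: dict of physical-function name -> allocated speed in Mb."""
    for name, speed in functions.items():
        if not MIN_MB <= speed <= MAX_MB:
            raise ValueError(f"{name}: {speed} Mb is outside the {MIN_MB}-{MAX_MB} Mb range")
    total = sum(functions.values())
    if total > PORT_SPEED_MB:
        raise ValueError(f"allocations total {total} Mb, exceeding the {PORT_SPEED_MB} Mb port")
    return total

# Hypothetical layout: management, vMotion and VM FlexNICs plus an FCoE FlexHBA.
port_a = {
    "FlexNIC a (management)": 500,
    "FlexNIC b (vMotion)":    2_500,
    "FlexNIC c (VM traffic)": 3_000,
    "FlexHBA d (FCoE)":       4_000,
}

print(f"Port allocation OK: {validate_partition(port_a)} Mb of {PORT_SPEED_MB} Mb")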

This whitepaper focuses on designing a highly available network and vSphere cluster architecture by providing redundant uplinks not only across physical modules, but also across enclosures. This design also maximizes the management features of Virtual Connect, while giving customers the flexibility to provide “any networking to any host” within the Virtual Connect domain. Simply put, this design will not over-provision servers, while keeping the number of uplinks used to a minimum. This helps reduce infrastructure cost and complexity by trunking only the necessary VLANs (IP subnets) to the Virtual Connect domain, and by minimizing potentially expensive 10Gb uplink ports.

Finally, this document provides key design best practices for vSphere 5 network architecture with HP FlexFabric, including:

vDS design for hypervisor networking
vSwitch and dvPortGroup load balancing algorithms

Virtual Connect FlexFabric Module Hardware Overview

The Virtual Connect FlexFabric module is the first Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) solution introduced into the HP BladeSystem portfolio. It provides 24 line-rate ports and 240Gbps of full-duplex bridging in a single-DCB-hop fabric. As shown in Image 1, there are 8 faceplate ports. Ports X1-X4 are SFP+ transceiver slots only, and can accept a 10Gb or 8Gb SFP+ transceiver. Ports X5-X8 are SFP and SFP+ capable, and do not support 8Gb SFP+ transceivers.

NOTE: The CX-4 port provided by the Virtual Connect Flex-10 and legacy modules has been deprecated.

Image 1: HP Virtual Connect FlexFabric Module

Ports X7 and X8 are shared with internal Stacking Link ports. If the external port is populated with a transceiver, the internal Stacking Link is disabled. At least one Stacking Link is required between modules. Please refer to the Virtual Connect User Guide or the Multi-Enclosure Stacking Reference Guide [3] for more information.

[3] http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02102153/c02102153.pdf


Important: Even though the Virtual Connect FlexFabric module supports Stacking, stacking only applies to Ethernet traffic. FC uplinks cannot be consolidated, as it is not possible to stack the FC ports, nor to provide a multi-hop DCB bridging fabric today.

Designing an HP FlexFabric Architecture for VMware vSphere

In this design, two HP ProLiant c-Class 7000 Enclosures with Virtual Connect FlexFabric modules are stacked to form a single Virtual Connect management domain [4]. By stacking Virtual Connect FlexFabric modules, customers can realize the following benefits:

A consolidated management control plane
More efficient use of WWID, MAC and Serial Number pools
Greater uplink port flexibility and bandwidth
Profile management across stacked enclosures

Shared Uplink Sets give administrators the ability to distribute VLANs into discrete, defined Ethernet Networks (vNets). These vNets can then be mapped logically to a Server Profile Network Connection, allowing only the required VLANs to be associated with the specific server NIC port. This also gives customers the flexibility to present different network connections to different physical operating system instances (for example, a VMware ESX host and a physical Windows host).

As of the Virtual Connect 3.30 firmware release [5], the following Shared Uplink Set rules apply per domain:

1000 unique VLANs (Ethernet Networks) per Virtual Connect domain
162 unique server-mapped VLANs per Server Profile Network Connection

Important: When using FlexNICs, the 162 server-mapped VLAN maximum is an aggregate across all FlexNICs that belong to the same 10Gb physical interface. A FlexHBA also counts as a single vNet against the 162 total.
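A minimal sketch of that rule follows, assuming a hypothetical server profile layout: it totals the server-mapped VLANs across every FlexNIC and FlexHBA sharing a 10Gb physical interface and flags any interface whose aggregate exceeds 162.

# Sketch: check the 162 server-mapped-VLAN aggregate per 10Gb physical interface.
# The interface and connection names, and the VLAN counts, are hypothetical.

MAX_MAPPED_VLANS = 162

# physical interface -> list of (connection, mapped VLAN count).
# A FlexHBA counts as a single vNet against the aggregate.
profile = {
    "LOM:1 (a-d)": [("FlexNIC mgmt", 2), ("FlexNIC vmotion", 1),
                    ("FlexNIC vmnet", 120), ("FlexHBA fcoe", 1)],
    "LOM:2 (a-d)": [("FlexNIC mgmt", 2), ("FlexNIC vmnet", 160),
                    ("FlexHBA fcoe", 1)],
}

for interface, connections in profile.items():
    total = sum(count for _, count in connections)
    status = "OK" if total <= MAX_MAPPED_VLANS else "EXCEEDS LIMIT"
    print(f"{interface}: {total} mapped VLANs ({status}, limit {MAX_MAPPED_VLANS})")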

Stacking two enclosures allows the design to tolerate not only a Virtual Connect FlexFabric module failure, but also an enclosure failure. The uplink ports assigned to each Shared Uplink Set (SUS) are vertically offset across modules to provide horizontal redundancy, as shown in Figure 1-2.

IP-based storage (NFS and/or iSCSI) can be dedicated and segregated with a separate vNet and assigned uplink port. This design approach allows administrators to dedicate a network (physically switched, directly connected, or logical within a Shared Uplink Set) to provide access to IP-based storage arrays.

[4] Only available with Virtual Connect Manager firmware 2.10 or greater. Please review the Virtual Connect Manager Release Notes for more information regarding domain stacking requirements: http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html
[5] Only when the Expanded VLAN Capacity feature is enabled in the domain (Ethernet -> Advanced Settings).


Directly connecting an IP-based storage array has certain limitations:

Each storage array front-end port requires a unique vNet
Each defined vNet requires separate server network connections
The number of IP-based arrays is limited by the number of unassigned uplink ports

Note: Please review the Virtual Connect with iSCSI Cookbook, Third Edition or newer [6], for more information on how to directly attach an IP-based storage device to Virtual Connect.

Virtual Connect can create an internal, private network without uplink ports, using the low-latency mid-plane connections to facilitate communication. This vNet can be used for cluster heartbeat networks, or in this case VMotion and/or Fault Tolerance traffic. Traffic will not pass to the upstream switch infrastructure, which eliminates the bandwidth that would otherwise be consumed.

Figure 1-1: Physical VMware vSphere Cluster Design

Figure 1-2 shows the physical cabling. The X5 and X6 Ethernet ports of the FlexFabric modules connect to a redundant pair of Top of Rack (ToR) switches, using LACP (802.3ad) for link redundancy. The ToR switches can be placed End of Row to save on infrastructure cost. Ports X7 are used for vertical external Stacking Links, while ports X8 are used for internal Stacking Links.

[6] http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02533991/c02533991.pdf


As noted in the previous section, Virtual Connect FlexFabric Stacking Links carry only Ethernet traffic and do not provide any Fibre Channel stacking options. Thus, ports X1 and X2 of each module are populated with 8Gb SFP+ transceivers, providing 16Gb net FC bandwidth for storage access. Ports X3 and X4 are available to provide additional bandwidth if more FC storage traffic is necessary.

If additional Ethernet bandwidth is necessary, ports Enc0:Bay2:X5, Enc0:Bay2:X6, Enc1:Bay1:X5, and Enc1:Bay1:X6 can be used for additional Ethernet Networks or Shared Uplink Sets.
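The short sketch below expresses that cabling intent in code: given a hypothetical mapping of Shared Uplink Sets to enclosure/bay/port uplinks (the assignments shown are illustrative, not prescriptive), it confirms each SUS keeps its uplinks on vertically offset modules so a module or enclosure failure does not isolate the network.

# Sketch: confirm each Shared Uplink Set keeps its uplinks on vertically offset
# modules (different bay, different enclosure), matching the cabling in Figure 1-2.
# The specific port assignments are a hypothetical example of that pattern.

shared_uplink_sets = {
    "SUS-A": [("Enc0", "Bay1", "X5"), ("Enc1", "Bay2", "X5")],
    "SUS-B": [("Enc0", "Bay1", "X6"), ("Enc1", "Bay2", "X6")],
}

for sus, uplinks in shared_uplink_sets.items():
    enclosures = {enc for enc, _, _ in uplinks}
    bays = {bay for _, bay, _ in uplinks}
    redundant = len(enclosures) > 1 and len(bays) > 1
    print(f"{sus}: uplinks {uplinks} -> "
          f"{'module- and enclosure-redundant' if redundant else 'NOT redundant'}")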

Figure 1-2: Physical cabling design


Figure 1-3: Logical design

Designing a vSphere Network Architecture with the Virtual Connect FlexFabric module

The vNetwork Distributed Switch [7] (vDS) allows an administrator to create a centralized distributed vSwitch that shares a common networking configuration across hosts. Port Groups are still utilized in this model, but have a different association to host uplink ports. Host uplink ports are added to Uplink Groups (dvUplinkGroup), where a logical association between the dvUplinkGroup and a Port Group (dvPortGroup) is formed. A vDS can service any of the VMkernel functions: Service Console, VMotion, IP storage, and virtual machine traffic.

In this section, we will outline the overall vDS design.

vNetwork Distributed Switch Design

Each pair of redundant pNICs is assigned to a dvUplinkGroup, which is then assigned to a specific vDS. This simplifies host network configuration while providing all of the benefits of a vDS. Table 2-1 and Figure 2-1 show how each hypervisor networking function is assigned in the vDS configuration.

Table 2-1 VMware vDS Configuration Example

VMkernel Function       vDS Name        dvPortGroup Name
ESXi Host Management    dvs1_mgmt       dvPortGroup1_Mgmt
VMotion                 dvs2_vmkernel   dvPortGroup2_vmkernel
VM Networking 1         dvs3_vmnet      dvPortGroup3_vmnet100
VM Networking N         dvs3_vmnet      dvPortGroupN_vmnetNNN

[7] Requires vSphere 5.0 Enterprise Plus licensing. Please refer to the vSphere 5 Configuration Maximums document for supported configuration maximums: http://www.vmware.com/pdf/vsphere5/r50/vsphere-50-configuration-maximums.pdf

Figure 2-1: Hypervisor Networking Design

Important: For HP CloudSystem Matrix installations, the dvPortGroup for the Service Console must be named “Management Network.”
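As an illustration of automating this dvPortGroup layout, the sketch below uses VMware's pyvmomi Python bindings for the vSphere API (a library not covered by this paper, shown purely as an example) to create the hypothetical dvPortGroup3_vmnet100 port group on the dvs3_vmnet switch from Table 2-1. The vCenter host name, credentials and VLAN ID are placeholders, and task monitoring and error handling are omitted.

# Sketch (pyvmomi): create a VM-networking dvPortGroup like dvPortGroup3_vmnet100
# on the dvs3_vmnet switch from Table 2-1. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the existing vDS by name (assumes dvs3_vmnet was already created).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvs3_vmnet")

# Build the dvPortGroup spec: static binding, VLAN 100 tagged at the port group.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.name = "dvPortGroup3_vmnet100"
spec.type = "earlyBinding"
spec.numPorts = 128
port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec()
vlan.vlanId = 100
vlan.inherited = False
port_config.vlan = vlan
spec.defaultPortConfig = port_config

task = dvs.AddDVPortgroup_Task([spec])   # returns a vCenter task to monitor
Disconnect(si)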

Mixing vSphere Standard Switches and vSphere Distributed Switches is possible, and the current scalability limits of the vDS may require some designs to incorporate both. HP recommends assigning VMkernel functions (vMotion, Management, IP Storage, FT) to Standard vSwitches and using the vDS for virtual machine networking.

VMware Fault Tolerance (FT) could introduce more complexity into the overall design. VMware states that a single 1Gb NIC should be dedicated to FT logging, which has the potential to starve a shared pNIC carrying another VMkernel function (for example, VMotion traffic). FT has not been taken into consideration within this document; even though FT could share a pNIC with another VMkernel function, if FT is a design requirement, the overall impact of its inclusion should be examined.

With the design example given, there are three options one could choose to incorporate FT Logging:

Table 2-2 VMware Fault Tolerance Options

Share with the VMotion network (rating: ***)
The design choice to keep VMotion traffic internal to the domain allows the use of low-latency links for communication within the domain. NetIOC can be used to give VMotion or FT traffic higher shares, or priority, over the other traffic.

Non-redundant VMotion and FT networks (rating: **)
Dedicate one pNIC to VMotion traffic and the other to FT logging traffic. Neither network provides pNIC redundancy.

Add additional FlexFabric Adapters and modules (rating: *)
This option increases the overall CapEx of the solution, but provides more bandwidth options.

Network I/O Control

Network I/O Control [8] (NetIOC) is a feature of the vDS that provides QoS-like capabilities. NetIOC can identify the following types of traffic:

Virtual Machine

FT Logging

iSCSI (Software Initiator only; HW iSCSI Offload not supported)

NFS

Service Console Management

vMotion

NetIOC can be used to control identified traffic when multiple types of traffic share the same pNIC. In the design example above, FT Logging could share the same vDS as the VMkernel traffic, and NetIOC would be used to control the two types of traffic. NetIOC can also be used to apply an 802.1p tag for data center QoS, which is not supported by Virtual Connect at this time.
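To make the shares mechanism concrete, here is a small sketch, assuming hypothetical share values, of how NetIOC shares translate to bandwidth on a saturated 10Gb dvUplink. Shares only take effect under contention, and only the active traffic types participate in the split.

# Sketch: how NetIOC shares translate to bandwidth on a congested dvUplink.
# Share values below are hypothetical; inactive traffic types free their share
# of bandwidth for the remaining active types.

UPLINK_GBPS = 10.0

shares = {           # traffic type -> configured share value
    "Virtual Machine": 100,
    "vMotion":          50,
    "FT Logging":       50,
    "Management":       20,
}

def allocation(shares, active):
    """Split the uplink among the currently active traffic types by share."""
    total = sum(shares[t] for t in active)
    return {t: UPLINK_GBPS * shares[t] / total for t in active}

# All four types contending for the same pNIC:
for traffic, gbps in allocation(shares, shares.keys()).items():
    print(f"{traffic:16s} {gbps:4.1f} Gbps")

# If FT logging is the only other active type alongside VM traffic:
print(allocation(shares, ["Virtual Machine", "FT Logging"]))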

Hypervisor Load Balancing Algorithms

VMware provides a number of different NIC teaming algorithms, which are outlined in Table 2-3. As the table shows, any of the available algorithms can be used except IP Hash. IP Hash requires switch-assisted load balancing (802.3ad), and Virtual Connect does not support 802.3ad on server downlink ports. HP and VMware recommend using Originating Virtual Port ID with a Standard vSwitch, and Physical NIC Load when using a vDS with NetIOC, as shown in Table 2-3.

[8] http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf (vSphere 4.1), or the Managing Network Resources chapter in http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-501-networking-guide.pdf


Table 2-3 VMware Load Balancing Algorithms

Originating Virtual Port ID (works with VC: Yes)
Chooses an uplink based on the virtual port where the traffic entered the virtual switch.

Source MAC Address (works with VC: Yes)
Chooses an uplink based on the MAC address seen on the vNIC port.

IP Hash (works with VC: No)
Hash of the source and destination IPs. Requires switch-assisted load balancing (802.3ad). Virtual Connect does not support 802.3ad on server downlink ports, as 802.3ad is a point-to-point bonding protocol.

Physical NIC Load (works with VC: Yes)
Introduced in vSphere 4.1 and only available with a vDS. The Load-Based Teaming policy moves flows when the mean send or receive utilization on a dvUplink exceeds 75% of capacity over a 30-second interval.

Explicit Failover (works with VC: Yes)
Uses the highest-order uplink from the list of Active pNICs.
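The sketch below models the uplink-selection logic behind three of these policies for a two-pNIC team. The hash formulas are simplified approximations for illustration only (not VMware's published implementation); the point is that only IP Hash spreads a single VM across both uplinks, and therefore assumes the upstream switch bundles them with 802.3ad, which Virtual Connect server downlinks do not provide.

# Sketch: simplified uplink selection for three teaming policies on a 2-pNIC team.
# These hashes are illustrative approximations, not VMware's exact implementation.
import ipaddress

UPLINKS = ["vmnic0 (FlexNIC on LOM:1)", "vmnic1 (FlexNIC on LOM:2)"]

def by_virtual_port_id(port_id):
    # Originating Virtual Port ID: each vNIC's switch port pins it to one uplink.
    return UPLINKS[port_id % len(UPLINKS)]

def by_source_mac(mac):
    # Source MAC hash: derived from the MAC address seen on the vNIC port.
    last_octet = int(mac.split(":")[-1], 16)
    return UPLINKS[last_octet % len(UPLINKS)]

def by_ip_hash(src_ip, dst_ip):
    # IP Hash: one VM spreads flows across uplinks per destination, so the
    # upstream switch must see the uplinks as a single 802.3ad bundle.
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return UPLINKS[(src ^ dst) % len(UPLINKS)]

print(by_virtual_port_id(7))
print(by_source_mac("00:17:a4:77:00:3e"))
print(by_ip_hash("10.1.1.21", "10.1.2.15"))
print(by_ip_hash("10.1.1.21", "10.1.2.16"))   # same VM, different uplink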

Modifying Server Profiles with Online Servers

HP fully supports modifying Server Profiles while hosts are still online. For older-generation servers, there are specific minimum NIC driver and firmware revisions required. Please review the Virtual Connect Release Notes and the Service Pack for ProLiant Release Notes.

HP NC551 and NC553 Minimum Supported Firmware and Drivers

The following table lists the minimum supported firmware and driver revisions for the Emulex OneConnect-based adapters from HP:

Table 2-4 Supported minimum versions

Component           Version
Firmware            4.0.360.15
Driver (Ethernet)   4.0.355.1
Driver (FC/FCoE)    8.2.2.105.36

When multiple NC55x devices are used in a single host, you may experience an issue where not all of the FlexNICs appear within the ESXi kernel when using the 4.0.355.1 Ethernet driver. To resolve this, you must adjust the be2net driver heap stack allocation. Please refer to HP Customer Advisory c03220140 [9] for the procedure and any future updates.

[9] http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c03220140
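As a quick way to audit hosts against Table 2-4, the sketch below compares installed component versions (placeholders here; pull the real values from your host inventory tooling) with the documented minimums.

# Sketch: compare installed NC551/NC553 component versions against the
# Table 2-4 minimums. The 'installed' values are placeholders.

MINIMUMS = {
    "firmware":          "4.0.360.15",
    "driver (ethernet)": "4.0.355.1",
    "driver (fc/fcoe)":  "8.2.2.105.36",
}

installed = {
    "firmware":          "4.0.360.15",
    "driver (ethernet)": "4.0.355.1",
    "driver (fc/fcoe)":  "8.2.0.96.12",   # hypothetical out-of-date example
}

def as_tuple(version):
    """Turn a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in version.split("."))

for component, minimum in MINIMUMS.items():
    current = installed[component]
    ok = as_tuple(current) >= as_tuple(minimum)
    print(f"{component:18s} installed {current:13s} minimum {minimum:13s} "
          f"{'OK' if ok else 'UPGRADE REQUIRED'}")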


Appendix A: Virtual Connect Bill of Materials

Table A-1 Virtual Connect FlexFabric module Mapped VLAN BoM

Part number   Description                              Qty
571956-B21    HP Virtual Connect FlexFabric Module     4
AJ716A        HP StorageWorks 8Gb B-series SW SFP+     8
487649-B21    0.5m 10Gb SFP+ DAC Stacking Cable        2
455883-B21    10Gb SR SFP+ transceiver                 4
 or
487655-B21    3m SFP+ 10Gb Copper DAC                  4

Appendix B: Terminology cross-reference

Table B-1 Terminology cross-reference

Customer term: Port Bonding or Virtual Port
Industry term: Port Aggregation or Port-trunking
IEEE term: LACP 802.3ad
Cisco term: LACP, EtherChannel or channeling (PAgP)
Nortel term: MultiLink Trunking (MLT)
HP Virtual Connect term: 802.3ad LACP

Customer term: VLAN Tagging
Industry term: VLAN Trunking
IEEE term: 802.1Q
Cisco term: Trunking
Nortel term: 802.1Q
HP Virtual Connect term: Shared Uplink Set


Appendix C: Glossary of Terms

Table C-1 Glossary

vNet/Virtual Connect Ethernet Network
A standard Ethernet Network consists of a single broadcast domain. However, when “VLAN Tunnelling” is enabled within the Ethernet Network, VC treats it as an 802.1Q trunk port, and all frames are forwarded to the destined host untouched.

Shared Uplink Set (SUS)
An uplink port or a group of uplink ports, where the upstream switch port(s) is configured as an 802.1Q trunk. Each associated Virtual Connect Network within the SUS is mapped to a specific VLAN on the external connection, where VLAN tags are removed or added as Ethernet frames enter or leave the Virtual Connect domain.

Auto Port Speed**
Lets VC automatically determine the best FlexNIC speed.

Custom Port Speed**
Manually set the FlexNIC speed (up to the Maximum value defined).

DCC**
Device Control Channel: a method for VC to change FlexNIC or FlexHBA Adapter port settings on the fly (without a power off/on).

EtherChannel*
A Cisco proprietary technology that combines multiple NIC or switch ports for greater bandwidth, load balancing, and redundancy. The technology allows for bi-directional aggregated network traffic flow.

FlexNIC**
One of four virtual NIC partitions available per FlexFabric Adapter port. Each is capable of being tuned from 100Mb to 10Gb.

FlexHBA***
The second Physical Function, providing an HBA for either Fibre Channel or iSCSI functions.

IEEE 802.1Q
An industry-standard protocol that enables multiple virtual networks to run on a single link/port in a secure fashion through the use of VLAN tagging.

IEEE 802.3ad
An industry-standard protocol that allows multiple links/ports to run in parallel, providing a virtual single link/port. The protocol provides greater bandwidth, load balancing, and redundancy.

LACP
Link Aggregation Control Protocol (see IEEE 802.3ad).

LOM
LAN-on-Motherboard. Embedded network adapter on the system board.

Maximum Link Connection Speed**
Maximum FlexNIC speed value assigned to a vNet by the network administrator. It cannot be manually overridden in the server profile.

Multiple Networks Link Speed Settings**
Global Preferred and Maximum FlexNIC speed values that override defined vNet values when multiple vNets are assigned to the same FlexNIC.

MZ1 or MEZZ1; LOM
Mezzanine Slot 1; LAN-on-Motherboard/system-board NIC.

Network Teaming Software
Software that runs on a host, allowing multiple network interface ports to be combined to act as a single virtual port. The software provides greater bandwidth, load balancing, and redundancy.

pNIC
Physical NIC port. A FlexNIC is seen by VMware as a pNIC.

Port Aggregation
Combining ports to provide one or more of the following benefits: greater bandwidth, load balancing, and redundancy.

Port Aggregation Protocol (PAgP)*
A Cisco proprietary protocol that aids in the automatic creation of Fast EtherChannel links. PAgP packets are sent between Fast EtherChannel-capable ports to negotiate the forming of a channel.

Port Bonding
A term typically used in the Unix/Linux world that is synonymous with NIC teaming in the Windows world.

Preferred Link Connection Speed**
Preferred FlexNIC speed value assigned to a vNet by the network administrator.

Trunking (Cisco)
802.1Q VLAN tagging.

Trunking (Industry)
Combining ports to provide one or more of the following benefits: greater bandwidth, load balancing, and redundancy. See also Port Aggregation.

VLAN
A virtual network within a physical network.

VLAN Tagging
Tagging/marking an Ethernet frame with an identity number representing a virtual network.

VLAN Trunking Protocol (VTP)*
A Cisco proprietary protocol used for configuring and administering VLANs on Cisco network devices.

vNIC
Virtual NIC port. A software-based NIC used by VMs.

* The feature is not supported by Virtual Connect. ** The feature was added for Virtual Connect Flex-10. *** The feature was added for Virtual Connect FlexFabric modules.


For more information

To read more about the Virtual Connect FlexFabric module, go to www.hp.com/go/virtualconnect

To learn more about HP BladeSystem, go to www.hp.com/go/bladesystem

For additional HP BladeSystem technical documents, go to www.hp.com/go/bladesystem/documentation

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.


c03278211, March 2012