
Deploying a VMware vSphere HA Cluster

with HP Virtual Connect FlexFabric

Technical white paper

Table of contents

Executive summary
HP Converged Infrastructure
    HP Virtual Connect FlexFabric
Deploying VMware vSphere ESX 4.0 with HP Virtual Connect FlexFabric
    Network Connectivity
Configuring HP Virtual Connect FlexFabric infrastructure
    HP SFP+ Transceivers
Defining HP Virtual Connect FlexFabric networks
Define HP Virtual Connect FlexFabric FCoE
Defining HP Virtual Connect server profiles
Implementing a proof-of-concept
Summary
Appendix A – FlexFabric terminology
Appendix B – Bill of materials
For more information


Executive summary

Virtualization has rapidly changed from an emerging technology to a datacenter stalwart capable of hosting mission-critical applications and providing levels of redundancy and recovery that were once impossible or cost-prohibitive on industry-standard platforms. This document presents an enterprise reference configuration for HP BladeSystem c-Class infrastructure with HP FlexFabric configured for VMware vSphere 4.0 Update 1 in a redundant, rapidly recoverable, and highly available configuration. Solutions utilizing HP BladeSystem c-Class infrastructure and HP FlexFabric can be sized to match most environments.

This white paper illustrates how to install and build a VMware ESX 4.0 HA cluster with HP ProLiant server blades and HP Virtual Connect (VC) FlexFabric. The focus of the document is on the steps necessary to configure the VC FlexFabric modules based on a proof-of-concept (POC) environment. The white paper also covers more complex details such as VLAN configuration, NIC teaming, and Fibre Channel (FC) configuration for ESX 4.0.

The purpose of the proof-of-concept is to test and validate Fibre Channel over Ethernet (FCoE) connectivity and VMware VMotion live migration, and to verify that virtual machine (VM) functions operate correctly. Using the steps detailed in this white paper, a VM was created and both planned and unplanned failover tests were executed successfully for live migration and Fibre Channel failover.

Target audience: The white paper is intended for datacenter administrators, infrastructure planners, and system/solution architects. A working understanding of virtualization is assumed. For an excellent overview of implementing VMware vSphere 4 in a datacenter environment, read the guides on the HP VMware alliance solutions site at http://www.hp.com/go/vmware.

HP Converged Infrastructure

Overview

HP Converged Infrastructure is a framework for building a dynamic datacenter that eliminates costly

and rigid IT silos, and unlocks resources for IT innovation rather than IT management. Along with

virtualization, the converged infrastructure has four other overarching requirements: being resilient,

orchestrated, optimized and modular. HP Converged Infrastructure matches the supply of IT resources

with the demand for business applications. By transitioning away from a product-centric approach to

a shared-service management model, organizations can accelerate standardization, reduce

operational cost and accelerate business results.


Figure 1. The HP approach to Converged Infrastructure encompasses four key areas

HP Virtual Connect FlexFabric simplifies cabling and physical server consolidation efforts. HP FlexFabric utilizes two technologies, Converged Enhanced Ethernet (CEE) and Fibre Channel over Ethernet (FCoE), which come built in on every ProLiant G7 server blade. HP FlexFabric is a high-performance, virtualized, low-latency network that consolidates both Ethernet and Fibre Channel traffic into a single Virtual Connect module, lowering networking complexity and total cost.


HP Virtual Connect FlexFabric

Virtual Connect FlexFabric represents the third generation of HP's award-winning Virtual Connect technology, with over 2.6 million ports deployed in datacenters today. The technology simplifies the server edge by replacing traditional switches and modules with a converged, virtualized way to connect servers to different networks with one device, over one wire, using industry standards. HP Virtual Connect FlexFabric allows you to eliminate separate interconnect modules and mezzanine cards for disparate protocols such as Ethernet and FC, consolidating all of the traffic onto a single interconnect architecture and simplifying the design and cabling complexity of your environment.

Beyond these benefits, VC FlexFabric modules also provide significant flexibility in designing the network and storage connectivity for your server blades. All of the new ProLiant G7 blades include VC FlexFabric capability built in with integrated VC FlexFabric NICs, and any ProLiant G6 blade can be easily upgraded to support VC FlexFabric with a Virtual Connect FlexFabric Adapter mezzanine card. These adapters and mezzanine cards have two 10 Gb FlexFabric ports, each of which can be carved up into as many as four physical functions. A physical function can be a FlexNIC, FlexHBA-FCoE, or FlexHBA-iSCSI, supporting Ethernet, FCoE, or iSCSI traffic respectively. Each adapter port can have up to four FlexNIC physical functions if there are no storage requirements, or three FlexNICs and one FlexHBA physical function for FCoE or iSCSI connectivity. The bandwidth of these ports can be configured to satisfy the requirements of your environment: each port is capable of up to 10 Gb, which can be allocated as necessary among the physical functions of that port.

This gives you tremendous flexibility when designing and configuring the network and storage connectivity for your environment. The flexibility is especially valuable for virtualization environments, which typically require many different networks with varying bandwidth and segmentation requirements. In a traditional networking design, this would require a number of additional network cards, cables, and uplink ports, which quickly drives up the total cost of the solution. With VC FlexFabric, you can allocate and fine-tune network and storage bandwidth for each connection and define each physical function with the specific bandwidth requirements for that network, without having to overprovision bandwidth based on static network speeds.

Deploying VMware vSphere ESX 4.0 with HP Virtual Connect FlexFabric

This white paper discusses deployment of the HP FlexFabric converged module with HP ProLiant c-Class G6 server blades in an architecture that follows both HP and VMware best practices for maximum data and network redundancy. HP and VMware collaborate at an engineering level to ensure that customers benefit from software, hardware, and service solutions that are jointly tested, certified, and optimized to deliver optimal server and storage performance. HP and VMware have developed a variety of recommended configurations and services for VMware applications appropriate for particular business and technical situations. For more information on these recommended configurations, refer to the HP BladeSystem cookbooks at http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html.

The configuration in this guide is an HP and VMware recommended, tested configuration. It is meant as guidance to assist you in building an architecture for your specific needs; however, it is provided as a reference only, as specific configurations will vary. For example, the processor speed, memory amount, I/O, storage, and service recommendations should be seen as minimums. HP strongly recommends that you work with your local HP Reseller or HP Sales Representative to determine the best solution for you.


HP has tested and documented the recommended firmware and software driver versions for a supported HP Virtual Connect FlexFabric environment, referred to as a solution recipe. HP strongly recommends that you download and review the HP Virtual Connect FlexFabric Solution Recipe technical white paper at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02610285/c02610285.pdf to enable a successful solution deployment.

For the proof-of-concept, we used a pair of Intel®-based ProLiant c-Class server blades in a single HP BladeSystem c7000 enclosure; the two HP ProLiant BL490c G6 server blades were configured in a VMware HA cluster. The storage consisted of an HP StorageWorks 4400 Enterprise Virtual Array (EVA4400). For Ethernet and Fibre Channel connectivity from the c7000 enclosure, we used two HP Virtual Connect FlexFabric modules installed in Bays 3 and 4. Because we used HP ProLiant BL490c G6 server blades, we installed HP NC551m FlexFabric mezzanine adapters in slot 1; this enabled the G6 server blades to connect to the HP Virtual Connect FlexFabric modules. With two HP FlexFabric modules in the enclosure, we have redundant paths for both network and Fibre Channel traffic, as well as additional bandwidth for both protocols.

Network Connectivity

Table 1 lists the networking connectivity requirements. Always document these requirements during planning so that they are clear before configuring the Virtual Connect FlexFabric modules and defining the server profiles.

Table 1. Networking details used in the proof-of-concept

Network Name      VLAN ID     Host or VM   Bandwidth   NIC Teaming   IP Range
Service Console   N/A         Host         500 Mbps    Yes           10.11.x.x
VM_A / VM_B       101 & 201   VM           4.5 Gbps    Yes           192.168.1.x
VMotion (2x)      N/A         Host         1 Gbps      Yes           10.0.0.x

To satisfy the storage requirement, each host requires redundant ports to the SAN fabric with 4 Gbps of bandwidth. For each 10 Gb port on the server, 6 Gb is allocated for networking and 4 Gb for FCoE, bringing the total bandwidth for each host to 20 Gbps once redundancy is added to the configuration.
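Under the physical-function layout used in this POC (three FlexNICs and one FlexHBA per port, shown later in Figure 4), the per-port arithmetic from Table 1 works out as follows:

    Per 10 Gb FlexFabric port (two ports per host, for redundancy):
      FlexNIC 1: Service console          0.5 Gb
      FlexNIC 2: VM networks (VM_A/VM_B)  4.5 Gb
      FlexNIC 3: VMotion                  1.0 Gb
      FlexHBA:   FCoE (SAN fabric)        4.0 Gb
      Total                              10.0 Gb  (x2 ports = 20 Gb per host)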

Configuring HP Virtual Connect FlexFabric infrastructure

To set up the proof-of-concept for HP Virtual Connect FlexFabric, we used two modules in Bays 3 and 4. With the latest Onboard Administrator firmware (3.11) used in our testing, the modules can be placed in different bays and used as primary and secondary modules. The VC FlexFabric module compatibility rule states that a module must be adjacent to another FlexFabric module. In addition, unlike the BladeSystem G7 server blades, the ProLiant BL490c G6 servers do not have a FlexFabric adapter on the motherboard (LOM). The c-Class G6 server blades instead have an HP NC551m Dual Port FlexFabric 10 Gb Converged Network Adapter (CNA) in mezzanine slot 1, which maps to interconnect Bays 3 and 4. Figure 2 shows the table view of the port mapping for enclosure device bay 1, which holds the first BL490c G6 server.


Figure 2. Port mapping for device bay 1

Once both modules are placed in the appropriate c7000 enclosure interconnect bays, the external LAN and SAN networks are connected. Figure 3 shows a rear view of the enclosure and a front view of the network and Fibre Channel switches used in the proof-of-concept. It is important to understand HP ProCurve switches and VLAN networking: when configuring VLANs, you will need to configure them in the ProCurve network switches, in Virtual Connect Manager (VCM), and in VMware vCenter Server for the VM network traffic.


Figure 3. Rear view of c7000 enclosure demonstrating wiring connections to the HP VC FlexFabric modules

Figure 3 shows two HP Virtual Connect FlexFabric modules with redundant paths to the HP ProCurve networking switches, with two uplinks per module. This design gives us a redundant network path for the Ethernet connections from the enclosure to the switches. In addition, there are two Fibre Channel connections, one per module, to two FC fabric switches. There are two paths to the EVA4400 array using the standard FC protocol from the modules to the HP StorageWorks FC switches, not the FCoE protocol; FCoE is used for communication from the modules downstream to the server blades. This path eliminates the need for a dedicated FC HBA mezzanine card in the blade servers.

HP SFP+ Transceivers

To handle the different protocols, such as Ethernet and FC, we required protocol-specific transceivers. HP Virtual Connect FlexFabric modules are designed to provide flexibility in the type of traffic that each port can support, as different transceivers can be used in various ports to support different protocols. In the POC, 4 Gb FC SFP transceivers were installed in ports 1 and 2 on both modules, and 10 GbE SFP+ transceivers were installed in ports 4 and 5. 10 GbE uplinks were utilized in this testing to support the networking equipment in the lab. Wherever possible, use 10 GbE SFP+ transceivers in production environments to provide 10 GbE uplink speed to supported 10 Gb switches. The protocol personality of HP Virtual Connect FlexFabric module uplink ports X1-X4 is determined by the type of transceiver plugged in, i.e., 4/8 Gb FC SFP or 10 GbE SFP+. The remaining ports X5-X8 are fixed 1/10 GbE protocol ports and will not accept FC transceivers.


Defining HP Virtual Connect FlexFabric networks

With the HP Virtual Connect FlexFabric modules inserted into their proper bays and the uplinks connected, we then needed to define the domain, the Ethernet networks, the SAN fabrics, and the server profiles for the VMware ESX hosts. These steps can be done using the web-based Virtual Connect Manager (VCM) or the VC command line interface (CLI). In this white paper, we demonstrate using VCM to define the network and SAN connections. We do not cover how to create a domain, as it is a straightforward process, and we expect you have experience creating and configuring domains with VCM. The point we want to reinforce here is that domain import or recovery may only be done from an odd-numbered I/O bay, and once a primary/secondary bay pair is established, it may not be changed without proper domain deletion.

Figure 4. HP Virtual Connect Network and SAN connectivity mapping for the FlexFabric mezzanine card

The POC was kept to the simplest configuration that connects our service console, VM network traffic, and VMware VMotion traffic through the FlexFabric network connection. As indicated in Table 1, we have a requirement for six Ethernet connections and two SAN connections, so in our POC implementation we defined two SAN fabrics and six Ethernet networks to meet the LAN and SAN connectivity requirements. Figure 4 shows the layout of the FlexFabric adapter and the specific physical function configuration for our POC environment. The diagram shows that three of the physical functions on each port are configured as FlexNICs to support the networking requirements. The remaining physical function is the only one that supports storage connectivity (FC or iSCSI), although it can also be used for networking. In our example we define an FCoE connection in the server profile, and it is always mapped to the second physical function of the FlexFabric port.

After the domain is created, we define our shared uplink sets, followed by the Ethernet connections. Defining the Ethernet connections is a bit more complex due to the requirements for standard VC Ethernet networks and VC shared uplink sets. Shared uplink sets support multiple tagged VLANs to a single server NIC or multiple NICs and minimize the number of uplinks required. In the POC there are two shared uplink sets to support the VM_A and VM_B Ethernet networks defined in Table 1. While setting up the networking through VCM, turn on the network setup wizard's Map VLAN tags feature under the Ethernet Settings/Advanced Settings tab, as shown in Figure 5.

Figure 5. HP Virtual Connect Manager Network Setup Wizard to configure and select multiple networks

The Map VLAN tags feature provides the ability to use a shared uplink set to present multiple networks to a single network interface card. You can configure and select multiple networks when assigning a network to the server profile, which allows multiple VLANs to be configured on the server network interface cards.

Shared uplink sets were defined during the network setup wizard process in VCM. Two shared uplink sets were created, called VLAN_Uplink_1 and VLAN_Uplink_2; the names of the shared uplink sets can be customized as you choose.


Figure 6. VLAN_Uplink_1, one of the two shared uplink set properties pages

Figure 6 shows two network names, VM-A1 and VM-B1, both assigned to VLAN_Uplink_1 on Port 4 of the module in Bay 3. The second shared uplink set, VLAN_Uplink_2, is assigned to Port 5 on the module in Bay 4 and has the VM-A2 and VM-B2 networks associated, so both shared uplink sets carry traffic for VLANs 101 and 201. We then added a standby link, since we had two additional ports connected for redundancy.
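The same shared uplink sets and networks can also be created from the VC CLI. The following is a minimal sketch based on VC 3.x CLI syntax; the enclosure and port identifiers (enc0:3:X4 and so on) are illustrative, and the exact parameter names should be verified against the CLI reference for your firmware version:

    add uplinkset VLAN_Uplink_1
    add uplinkport enc0:3:X4 UplinkSet=VLAN_Uplink_1
    add uplinkport enc0:3:X5 UplinkSet=VLAN_Uplink_1
    add network VM-A1 UplinkSet=VLAN_Uplink_1 VLanID=101
    add network VM-B1 UplinkSet=VLAN_Uplink_1 VLanID=201

    add uplinkset VLAN_Uplink_2
    add uplinkport enc0:4:X4 UplinkSet=VLAN_Uplink_2
    add uplinkport enc0:4:X5 UplinkSet=VLAN_Uplink_2
    add network VM-A2 UplinkSet=VLAN_Uplink_2 VLanID=101
    add network VM-B2 UplinkSet=VLAN_Uplink_2 VLanID=201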

HP Virtual Connect can create an internal network without uplink ports, using the low-latency midplane connections to facilitate communication. An internal network can be used for VMware VMotion and also for VMware Fault Tolerance network traffic. This traffic never passes to the upstream switch infrastructure, eliminating upstream latency and any bandwidth it would otherwise consume. In our POC we configured two VMware VMotion networks for redundancy and to demonstrate the functionality. See Figure 7.
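As a sketch, an internal network is simply a VC Ethernet network created with no uplink ports assigned (the network names below are ours for illustration, not necessarily the POC's actual names):

    # With no "add uplinkport" lines, traffic stays on the enclosure midplane
    add network VMotion-1
    add network VMotion-2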

For the service console we used a single 1 Gb connection per FlexFabric module, as defined in each module's settings, with an SFP-RJ45 connector. Figure 7 shows two service console networks for redundancy; the NIC teaming itself is configured at the host level within vCenter Server. Port X6 on the modules in Bays 3 and 4 was used to connect the service console NICs to the network switch. For our testing purposes, the service console did not have a VLAN ID assigned.

The last four networks are associated with the virtual machine network traffic: VM-A1, VM-A2, VM-B1, and VM-B2. To see the specific details of one of the networks, right-click the network name and select Edit. Figure 8 shows the details and network properties on the Edit Network settings page for VM-A1.


Figure 7. Ethernet networks

Figure 8. Network properties for VM-A1

Figure 8 shows the network name VM-A1 with External VLAN ID 101; the network is part of the VLAN_Uplink_1 shared uplink set. VM-A2 is assigned the same VLAN ID (101) but belongs to VLAN_Uplink_2.


Note

At the time of testing, the available network drivers did not support Smart Link. The image in Figure 8 shows a check mark on Smart Link, but the feature proved non-functional in our testing. HP and VMware are working on a driver to support Smart Link; refer to HP or VMware support for updates on support for Device Control Channel (DCC) and Smart Link.

When we created the uplink ports for all four of these networks, we placed them on ports 4 and 5 on both modules. The shared uplink sets were defined so that two networks can traverse the same uplink port with the appropriate VLAN mappings. For additional details on a shared uplink set, right-click the shared uplink set name and select Edit. See Figure 9 for an example of the VLAN_Uplink_1 settings.

Figure 9. Example of VLAN_Uplink_1 settings

Both the VM-A1 and VM-B1 Ethernet networks are assigned to the shared uplink set called VLAN_Uplink_1, whose external uplink is set to port 4 on the module in Bay 3 with a standby on port 5 of the same module. The other shared uplink set, VLAN_Uplink_2, is assigned to ports 4 and 5 on the module in Bay 4 and has the VM-A2 and VM-B2 networks associated, so both VLANs 101 and 201 are carried on that shared uplink set.

The HP ProCurve 6600-24G switches, or whichever upstream switches are used, need to be configured appropriately to handle the VLAN traffic. Consult the switch manufacturer or its documentation for specific details on configuring the upstream switches to handle the VLAN-tagged networks.
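As an illustrative sketch only (the uplink port numbers depend entirely on your cabling), tagging VLANs 101 and 201 on two ProCurve switch ports would look like this from the switch CLI:

    configure
    vlan 101
       name "VM_A"
       tagged 1-2
       exit
    vlan 201
       name "VM_B"
       tagged 1-2
       exit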


Define HP Virtual Connect FlexFabric FCoE

Now that we have configured the networks and VLANs for the ESX hosts and the virtual machine network traffic, we need to configure the SAN fabric. The configuration is straightforward: in VCM, select Define SAN Fabric. We selected Bay 3 Port X1 for the SAN-3 fabric and Bay 4 Port X1 for SAN-4, as shown in Figure 10. After you select the ports, VCM logs in to the fabric and reports back the port speed and status.

Figure 10. Selection of SAN modules
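The equivalent VC CLI sketch for the two fabrics might look like the following (parameter syntax assumed from the VC 3.x CLI; verify against your firmware's CLI reference):

    add fabric SAN-3 Bay=3 Ports=1
    add fabric SAN-4 Bay=4 Ports=1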

Defining HP Virtual Connect server profiles

One of the last steps in the process is to create the server profiles used by the two ESX hosts and the FlexFabric modules in the c7000 enclosure. The uplinks have already been connected and configured for Ethernet and FCoE; now it is time to define the server profiles. The configuration can be done using the web-based Virtual Connect Manager (VCM) or through the VC command line interface (CLI). This white paper focuses primarily on using VCM for these steps.

In the POC we had a requirement for six Ethernet networks and two SAN fabrics to satisfy the LAN and SAN connections. Figure 4 shows the FlexFabric adapter and the physical function configuration used in the POC environment to satisfy the network and SAN connections.

For VMware ESX 4.0 to install correctly, download the latest qualified NIC and FC drivers and make sure the server profile is correctly defined before beginning the installation process. Select the Define Server Profile option from the drop-down menu, type a Profile Name, and then proceed to select your network connections. Remember there are several network connections needed for the service console, VMs, and VMotion, so we add six Ethernet network connections and two FCoE connections to each server profile. This provisions all eight available physical functions on the blade FlexFabric adapter.


Figure 11 shows a common configuration profile used by both hosts.

Figure 11. VMware ESX host server profile details

Figure 11 shows several items, such as unassigned networks and multiple networks. Since we needed six networks, there are a total of 12 network connections overall. As a best practice, create as many network ports as possible and leave the unused ports unassigned; this way you can later add or assign ports without having to power down the server. In our testing we disabled the Flex-10 LOMs on the BL490c G6 servers in order to use the FlexFabric adapter in mezzanine slot 1. The port mappings shown in the Ethernet Network Connections alternate between the LOM and Mezz1; the Mapping column in Figure 11 provides an example of the alternating network connections.

When creating unassigned network names, you start with two unassigned network names by default. Simply right-click and select Add, and continue until you have added 12 unassigned ports. Beginning with ports 3 and 4, select the network names by clicking on the name and choosing your service console ports from the drop-down menu. Then skip two ports, select ports 7 and 8, and perform the same process, but this time select Multiple Networks. Once you select Multiple Networks, an edit screen appears so more detailed information can be provided to complete the Server VLAN Tag to vNet Mappings; see Figure 12 for details. Place a check mark on Force same VLAN mappings as shared uplink sets. The first multiple networks assignment was set to VLAN_Uplink_1; for the last multiple networks assignment we used VLAN_Uplink_2 as the shared uplink set. For these two network connections we configured a port speed of 4.5 Gb.


Figure 12. Set multiple network assignment

Once the profile was created, we assigned it to the two blades in the enclosure. In our POC we are using two BL490c G6 servers, one in device bay 1 and the second in device bay 3. Note that because of the VC version being used, the server must be powered off to assign a profile to the server bay.

Figure 13. Assign server profile (power must be off)
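For reference, a profile like the one in Figure 11 can also be sketched from the VC CLI. The profile and network names below are illustrative, the per-connection bandwidth and multiple-network mappings were set through VCM in our POC, and the exact parameters should be checked against your firmware's CLI reference:

    add profile ESX_Host1
    # One Ethernet connection per FlexNIC; repeat for all six networks
    add enet-connection ESX_Host1 Network=Console-1
    add enet-connection ESX_Host1 Network=Console-2
    # FCoE connections always map to the second physical function of each port
    add fcoe-connection ESX_Host1 Fabric=SAN-3
    add fcoe-connection ESX_Host1 Fabric=SAN-4
    # Assign the profile to device bay 1 (the server must be powered off)
    assign profile ESX_Host1 enc0:1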

Deploy and configure the VMware ESX host

Once all of the HP Virtual Connect FlexFabric configuration is complete, the ESX installation follows. The main software components to have available are the FCoE and Ethernet software drivers. During the installation of ESX 4.0 U2, we used the Emulex NIC and FCoE software drivers for the NC551m FlexFabric adapter. Since the LOMs on the HP ProLiant BL490c G6 servers were disabled, interconnect Bays 1 and 2 are not used. There are a number of ways to install and configure VMware ESX, ranging from a manual step-by-step process to fully automated. This white paper provides some detail on each of the major steps in the process, but the in-depth details and implementation steps for each task are outside its scope.


In our POC, we attached the VMware ESX 4.0 Update 2 ISO DVD image to each server using the HP Integrated Lights-Out (iLO) virtual media capability, and VMware ESX 4.0 Update 2 was installed manually. We used iLO Remote Management web administration to access the iLO web user interface.

Download the qualified HP NC551m Dual Port FlexFabric 10Gb Network Adapter driver from the HP website before starting the installation; the drivers can be placed on a shared network drive. Since the Ethernet network and FCoE drivers for VMware ESX 4.0 are not provided in the distribution, use the custom driver load option.

Figure 14. Select custom drivers to install for ESX

We had already placed our software drivers on a network share. At this point, return to the iLO web user interface and select the Virtual Media tab (or open the Virtual Media applet) to show the connection to the ESX ISO image. See Figure 15.


Figure 15. Virtual Media tab, still connected to the ESX ISO image

To select a software driver, click Browse in the Virtual Media window and locate the driver on the network. In our testing, we selected the Ethernet network driver first and then the FCoE driver. For the process used to install the software drivers, refer to VMware knowledge base article 1017401: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017401
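The installer-based driver load described here is the method used in the POC. As a hypothetical alternative for adding or updating a driver after installation, ESX 4.x can apply a driver offline bundle from the service console with esxupdate (the bundle path below is a placeholder):

    # Apply a driver offline bundle post-install (path is a placeholder)
    esxupdate --bundle=/tmp/NC551m-offline-bundle.zip update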


Figure 16. Selecting custom drivers in ESX 4.0

Once the Ethernet network driver is selected and added to the custom driver list, select the Add… button once more, return to the Virtual Media window, and select the Browse button for the Virtual CD/DVD-ROM. Select the FCoE software driver ISO image and return to the iLO remote console window for the server. Select OK in the warning pop-up window to insert the ESX driver CD. Select the module to import, which is the vmware-esx-drivers-scsi-lpfc820 SCSI driver for VMware ESX; select the driver and then OK to continue. You have now selected and added the two drivers needed for the HP FlexFabric mezzanine adapter in slot 1. Select Next, accept the terms, and then select Yes to load the system drivers. Refer to Figure 16 for selecting the network driver and Figure 17 for the FC driver.


Figure 17. Navigate back to virtual media window for VMware ESX 4.0 ISO to continue installation

You will need to go back to the Virtual Media window, select Browse again, and locate the VMware ESX 4.0 ISO file to continue with the installation of ESX on the server. Note that there may have been driver updates since the time of this testing; as a best practice, always contact VMware or HP technical support to obtain the latest qualified drivers, and check the VMware Hardware Compatibility List.

After the installation completes, the system reboots into ESX on the server blade. Follow your best practices for adding the ESX host to your VMware vCenter Server for management.

Configuring NIC teaming and VLANs with VMware

Once ESX 4.0 is installed and the host has been added to VMware vCenter Server for management, select the ESX host and then the Configuration tab. In the Hardware pane, select Networking. The only vmnic configured at this point was vmnic0 for the service console, with vswif0. Recall that we need a total of six vmnics for the ESX host, so the remaining five vmnics must be added. Figure 18 shows all six vmnics on the ESX host assigned to the three vSwitches: a redundant service console, the virtual machine network with VLANs 101 and 201, and the VMotion kernel port, also with two redundant vmnics.


Figure 18. Networking for ESX host
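To verify the result from the service console, the standard esxcfg-nics command lists the FlexNICs that ESX enumerated, including their configured speeds:

    # List physical NICs; the speeds should match the server profile
    # (for example, the 500 Mbps console and 4.5 Gbps VM FlexNICs)
    esxcfg-nics -l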

Once the networking is added, you will notice the adapter speeds are identical to what was configured in the server profile. Next, to team the adapters, select vSwitch0 under Networking (for this example) and select Properties. Select Network Adapters and then the Add button, then select vmnic1, as demonstrated in Figure 19.

Figure 19. Selecting network adapters to perform NIC teaming


Select Next to see the failover order. In most cases, use the configuration shown and select Next to continue, then select Finish to complete the vSwitch0 properties. There are now two vmnic adapters, as shown in Figure 20.

Figure 20. Completion of vSwitch0 properties

Now select the Ports tab, select the vSwitch in the Configuration field, and select the Edit button. Select the NIC Teaming tab and make sure both vmnic adapters show up in the team. For the load balancing policy, select Route based on the originating virtual port ID; this is usually the VMware default policy setting. (VMware load balancing algorithms are discussed in the next section of this white paper.) Select OK and close. Two vmnics teamed for the service console are now available. Do the same for the virtual machine network and VMotion to create those NIC teams.

Figure 21. vmnics teamed for the service console
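The POC used the vSphere Client for these steps, but the same vSwitch, uplink, and VLAN configuration can be sketched with the standard esxcfg-vswitch commands from the service console (the vSwitch and port group names here are ours for illustration):

    # Add the second uplink to the service console vSwitch
    esxcfg-vswitch -L vmnic1 vSwitch0
    # Create the VM traffic vSwitch with two teamed uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    # Port groups for the two tagged VLANs
    esxcfg-vswitch -A VM_A vSwitch1
    esxcfg-vswitch -v 101 -p VM_A vSwitch1
    esxcfg-vswitch -A VM_B vSwitch1
    esxcfg-vswitch -v 201 -p VM_B vSwitch1
    # The default teaming policy (originating virtual port ID) applies to both uplinks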

VMware load balancing algorithms

VMware provides several NIC teaming algorithms; Figure 22 shows the VMware-supported algorithms. HP supports any of these algorithms except IP Hash, which requires switch-assisted load balancing (802.3ad), a feature HP Virtual Connect does not support on server downlink ports. HP and VMware recommend using the default, Route based on the originating virtual port ID.


Figure 22. VMware-supported algorithms

Configuring HP StorageWorks EVA array

In our POC, the HP StorageWorks 4400 Enterprise Virtual Array was used and managed with HP Command View EVA software. The configuration consisted of creating a single 500 GB Vdisk (LUN) and presenting it to the two ESX hosts. The size of the LUN will vary depending on the type and characteristics of the VMs being deployed; a shared LUN is a VMware requirement for features such as VMware VMotion. The process is simple, with a few steps to create and present the LUN and then rescan with the vSphere Client. The new LUN needs to be formatted for VMFS, or it can be used for raw device mappings (RDMs) for the VMs. The main item to verify is that you use the correct host mode setting in Command View. We followed the basic steps in configuring the EVA array, keeping best practices in mind.
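For reference, the rescan and VMFS formatting steps can also be done from the service console. This is a sketch only: the vmhba numbers and the naa device path are placeholders, and the vSphere Client normally handles partitioning and formatting for you:

    # Rescan the FCoE HBAs after presenting the 500 GB Vdisk
    esxcfg-rescan vmhba1
    esxcfg-rescan vmhba2
    # Format the new LUN as VMFS-3 (device partition path is a placeholder)
    vmkfstools -C vmfs3 -S EVA_Shared /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1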

For more information on best practices for the HP StorageWorks Enterprise Virtual Array (EVA) with vSphere 4, refer to the white paper "Configuration best practices for HP StorageWorks Enterprise Virtual Array (EVA) family and VMware vSphere 4" at: http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA1-2185ENW&cc=us&lc=en

Implementing VMware HA and VMware VMotion

HP Virtual Connect FlexFabric combined with VMware VMotion and VMware HA provides the same redundancy as having two VC-FC and two VC-Eth modules, except that only two modules are needed instead of four. HP Virtual Connect FlexFabric provides redundant paths to the shared LUN as well as redundant NIC teaming for networking. Nothing changes in setting up HP FlexFabric for VMware VMotion or VMware HA; in the POC, we followed VMware's suggested best practices after configuring HP FlexFabric with the FCoE and Ethernet networks. For more information, refer to the VMware best practices paper at www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf.

In validation testing, VMware VMotion functioned well with HP FlexFabric with no modifications, and the same was true of VMware HA. In our testing, we manually migrated at least two virtual machines at a time. VMware best practice is not to exceed eight concurrent VMotion migrations; the testing stayed well below that number.


Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept

using a test environment that matches as closely as possible the planned production environment. In

this way, appropriate performance and scalability characterizations can be obtained. For help with a

proof-of-concept, contact an HP Services representative at

(http://www.hp.com/large/contact/enterprise/index.html) or contact your HP partner.

Summary

As more and more customers deploy virtualization in their datacenters and rely on VMware ESX HA clustering to provide high availability for their VMs, it is critical to understand how to set up and configure the underlying infrastructure appropriately. This white paper provides example implementation details on how to build a VMware HA cluster (with VMware VMotion) using the HP Virtual Connect FlexFabric architecture.

HP Virtual Connect FlexFabric provides clients an open, standards-based approach to network convergence in an industry that previously had limited options. HP Virtual Connect FlexFabric converges Ethernet, Fibre Channel, and iSCSI traffic onto a single common fabric module, helping not only to simplify and reduce the costs of the infrastructure fabric but also providing a key technology necessary to implement the FlexFabric layer of the HP Converged Infrastructure. This includes the ability to allocate and fine-tune network bandwidth for each connection and to make changes to those connections dynamically, on the fly.


Appendix A – FlexFabric terminology

Virtual Connect FlexFabric Terminology

FlexFabric Port – A physical 10 Gb port capable of being partitioned into four physical functions.

FlexNIC – Ethernet personality for physical functions 1-4 on a FlexFabric port; capable of being tuned from 100 Mbps to 10 Gbps.

FlexHBA-iSCSI – iSCSI personality for physical function 2 on a FlexFabric port; capable of being tuned from 100 Mbps to 10 Gbps.

FlexHBA-FCoE – FCoE personality for physical function 2 on a FlexFabric port; capable of being tuned from 100 Mbps to 10 Gbps.

CLP strings – FlexFabric adapter settings written to server hardware by VC/OA when the server is powered off, then read by the adapter Option ROM upon system power-on.

vNet / Virtual Connect Ethernet Network – A standard Ethernet network consisting of a single broadcast domain. When "VLAN Tunneling" is enabled within the Ethernet network, VC treats it as an 802.1Q trunk port and all frames are forwarded to the destined host untouched.

LOM – LAN-on-Motherboard; the embedded network adapter on the system board.

Shared Uplink Set (SUS) – An uplink port or a group of uplink ports where the upstream switch port(s) is configured as an 802.1Q trunk. Each associated Virtual Connect network within the SUS is mapped to a specific VLAN on the external connection, where VLAN tags are removed or added as Ethernet frames enter or leave the Virtual Connect domain.

MEZZ1 – Mezzanine slot 1.

vNIC – Virtual NIC port; a software-based NIC used by VMs.


Appendix B – Bill of materials

Table 2. Bill of materials

Quantity   Part number   Description
1          507017-B21    HP BLc7000 Enclosure, three-phase North American, with 6 power supplies, 10 fans, and 16 Insight Control licenses
1          456204-B21    HP c7000 Onboard Administrator with KVM option
2          455880-B21    HP Virtual Connect FlexFabric 10Gb/24-port Module for c-Class BladeSystem
1          AF001A        HP Rack 10000 G2 Series
2          509314-B21    HP ProLiant BL490c G6 X5570 2.93GHz
2          509319-L21    Intel Xeon® Processor X5570
2          580151-B21    HP NC551m Dual Port FlexFabric 10Gb CNA
4          461201-B21    HP 32GB 1.5G SATA SFF SSD
2          J9264A        HP ProCurve 6600-24G-4XG Switch
1          AG805C        HP StorageWorks EVA4400 Dual Controller Enterprise Virtual Array w/ Embedded Switch
3          AG638B        M6412A FC Drive Enclosure
36         AG556B        HP EVA M6412A 146GB 15K 4Gb Fibre Channel Dual Port HDD
1          AF002A        HP RACK 10642 G2 (42U)


For more information

The following links provide more information on VMware ESX 4.0 and HP Virtual Connect:

HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios,

http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html

HP Virtual Connect FlexFabric Solution Recipe,

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02610285/c02610285.pdf

HP BladeSystem Reference Architecture: Virtual Connect Flex-10 and VMware vSphere 4.0,

http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA2-9642ENW&cc=us&lc=en

Configuration best practices for HP StorageWorks Enterprise Virtual Array (EVA) family and VMware vSphere 4, http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA1-2185ENW&cc=us&lc=en

HP ActiveAnswers for VMware Solutions with HP: www.hp.com/go/vmware

For more information on HP Converged Infrastructure: www.hp.com/go/convergedinfrastructure

For more information on HP Virtual Connect: www.hp.com/go/virtualconnect

To help us improve our documents, please provide feedback at

http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.

© Copyright 2010, 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

4AA1-1907ENW, Created September 2010; Updated January 2011, Rev. 1