Setting up & Configuring EMC VNXe3150 iSCSI SAN Storage with High Availability

Finally, the EMC VNXe3150 is installed and configured and almost ready to start the transition from the old EMC AX4 to the new VNXe3150. At the initial stage I found it a bit difficult to get it configured correctly, because every piece of EMC documentation says something different, especially when it comes to iSCSI High Availability: they mix up NFS HA and iSCSI HA. In reality, storage HA for NFS and for iSCSI differ from each other. Simply put, NFS uses link aggregation with LAG/LACP and iSCSI does not.

Equipment

EMC VNXe3150 with dual Storage Processors, software version 2.4.0.20932

Two Dell 6224 stackable 24-port Gigabit switches, configured with Jumbo Frames for iSCSI and Flow Control enabled

One VMware ESXi 5.1.0 build-799733 host with six Gigabit pNICs; two pNICs are used for iSCSI connectivity only

Configuration on the VNXe

The configuration part is a bit of a dilemma when it comes to iSCSI connectivity. On the VNXe I set up two iSCSI Servers, one for Storage Processor A and one for Storage Processor B. Each SP has two IP addresses configured, one for each of its Ethernet interfaces, eth2 & eth3.


All the Ethernet interfaces are configured with an MTU size of 9000 for Jumbo Frames.


Storage Resources Configuration

Storage Element   iSCSI Server    Port   IP Address    MAC Address       pSwitch Port   VMkernel iSCSI PortGroup   VMkernel
iSCSI-A           iSCSI_ServerA   eth2   10.90.8.1     8:0:1b:57:71:3e   2/g1           iSCSI-01 (10.90.8.78)      vmk1
iSCSI-A           iSCSI_ServerA   eth3   10.100.8.1    8:0:1b:82:78:dd   1/g1           iSCSI-02 (10.100.8.78)     vmk2
iSCSI-B           iSCSI_ServerB   eth2   10.90.8.2     8:0:1b:58:59:0f   2/g2           iSCSI-01 (10.90.8.78)      vmk1
iSCSI-B           iSCSI_ServerB   eth3   10.100.8.2    8:0:1b:cd:f3:26   1/g2           iSCSI-02 (10.100.8.78)     vmk2

As you can see in the above screenshot and configuration table, each Storage Processor has two Ethernet ports, and each Ethernet port is connected to an iSCSI pSwitch; eth2 on SPA is matched with eth2 on SPB. Since both of these interfaces are connected to the same pSwitch and the same IP subnet is configured on both, a single iSCSI VMkernel PortGroup on that subnet can reach both Storage Processors through a single physical adapter “vmnic”.

VNXe Connectivity Diagram


iSCSI Switches Configuration

Both network switches are configured as master and slave stackable switches. Basically, for this type of configuration you are not required to stack the switches, as each pair of SP Ethernet ports is connected to the same switch, i.e. SPA-Eth2 in pSwitch1 & SPB-Eth2 in pSwitch1. With an NFS configuration, however, you will be required to stack the switches, because you will need to configure LAG/LACP for true High Availability.

Set up Jumbo Frames on the two iSCSI Gigabit switches so that all the ports support Jumbo Frames. The commands below will let you configure all the ports with an MTU size of 9000:

console(config)#interface range ethernet all
console(config-if)#mtu 9000
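To keep the Jumbo Frame setting across a switch reload, the running configuration should also be saved; a minimal sketch, assuming the standard Dell 62xx-series CLI:

console(config-if)#exit
console(config)#exit
console#copy running-config startup-config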

ESXi Host Configuration

Each of the Gigabit ports in the ESXi host is connected to the physical iSCSI switches. Two VMkernel PortGroups are created in vSwitch1, and each of the iSCSI VMkernel interfaces is mapped to a single physical interface in ESXi.

Switch Name   Num Ports   Used Ports   Configured Ports   MTU    Uplinks
vSwitch1      128         8            128                9000   vmnic2,vmnic6

PortGroup Name   VLAN ID   Used Ports   Uplinks
iSCSI-02         0         1            vmnic6
iSCSI-01         0         1            vmnic2

vmk1   iSCSI-01   IPv4   10.90.8.78    255.255.255.0   10.90.8.255    00:50:56:6e00   65535   true   STATIC
vmk2   iSCSI-02   IPv4   10.100.8.78   255.255.255.0   10.100.8.255   00:50:56:6400   65535
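The same layout can also be built from the ESXi shell; a minimal sketch with esxcli on ESXi 5.1, using the vSwitch, PortGroup, uplink and IP values from the tables above (this is not taken from the original host, adjust it to your own environment):

esxcli network vswitch standard add --vswitch-name=vSwitch1 --ports=128
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic6
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-01
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-02
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-01 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-02 --active-uplinks=vmnic6
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-01 --mtu=9000
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-02 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.90.8.78 --netmask=255.255.255.0
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=10.100.8.78 --netmask=255.255.255.0

Restricting each iSCSI PortGroup to a single active uplink (with no standby uplink) is what makes the vmk1/vmnic2 and vmk2/vmnic6 mapping one-to-one, which is required for iSCSI port binding.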

To check that the connectivity is mapped correctly, i.e. that each iSCSI PortGroup reaches the correct Ethernet interface on the Storage Processor, use vmkping with -I, which lets you specify the source VMkernel interface used to reach the destination iSCSI target. This tests the whole path end-to-end, from the VNXe storage through the physical iSCSI switches to the ESXi host, and makes sure connectivity can flow with Jumbo Frames.
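Note that a 9000-byte ICMP payload leaves no room for the 28 bytes of ICMP/IP headers inside a 9000-byte frame, so a stricter jumbo-frame check is to send an 8972-byte payload with fragmentation disallowed (the -d flag on the ESXi 5.x vmkping), for example:

vmkping -I vmk1 -d -s 8972 10.90.8.1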


 

iSCSI Adapter Port Binding

Both iSCSI VMkernel PortGroups have to be enabled for port binding on the iSCSI initiator adapter of the ESXi host.
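This can be done in the vSphere Client (iSCSI initiator Properties > Network Configuration) or from the ESXi shell; a minimal sketch, where vmhba33 is a placeholder for the software iSCSI adapter name reported by esxcli iscsi adapter list:

esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal list --adapter=vmhba33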

Connectivity Results

vmkping -I vmk1 10.90.8.1 -c 50 -s 9000 

PING 10.90.8.1 (10.90.8.1): 9000 data bytes
9008 bytes from 10.90.8.1: icmp_seq=0 ttl=255 time=0.596 ms
9008 bytes from 10.90.8.1: icmp_seq=1 ttl=255 time=0.575 ms
9008 bytes from 10.90.8.1: icmp_seq=2 ttl=255 time=0.548 ms


--- 10.90.8.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.548/0.573/0.596 ms

vmkping -I vmk1 10.90.8.2 -c 50 -s 9000 

PING 10.90.8.2 (10.90.8.2): 9000 data bytes
9008 bytes from 10.90.8.2: icmp_seq=0 ttl=255 time=0.591 ms
9008 bytes from 10.90.8.2: icmp_seq=1 ttl=255 time=0.617 ms
9008 bytes from 10.90.8.2: icmp_seq=2 ttl=255 time=0.603 ms

vmkping -I vmk2 10.100.8.1 -c 50 -s 9000 

PING 10.100.8.1 (10.100.8.1): 9000 data bytes
9008 bytes from 10.100.8.1: icmp_seq=0 ttl=255 time=0.634 ms
9008 bytes from 10.100.8.1: icmp_seq=1 ttl=255 time=0.661 ms
9008 bytes from 10.100.8.1: icmp_seq=2 ttl=255 time=0.642 ms

--- 10.100.8.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.634/0.661/0.708 ms

vmkping -I vmk2 10.100.8.2 -c 50 -s 9000 

PING 10.100.8.2 (10.100.8.2): 9000 data bytes
9008 bytes from 10.100.8.2: icmp_seq=0 ttl=255 time=0.694 ms
9008 bytes from 10.100.8.2: icmp_seq=1 ttl=255 time=0.658 ms
9008 bytes from 10.100.8.2: icmp_seq=2 ttl=255 time=0.690 ms

Add ESXi Hosts to VNXe

Set up the ESXi hosts to access the VNXe iSCSI SAN storage. This can be done by browsing to VNXe > Hosts > VMware, which allows you to add ESX hosts either by typing in the IP address of the vCenter or the management network address of the ESXi host itself. Then create the VMFS datastore on the VNXe and make sure you assign permission to the ESXi host to access the newly created LUN.
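Once the host has been granted access to the LUN, a rescan of the storage adapters makes it visible; a quick way from the ESXi shell:

esxcli storage core adapter rescan --all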

After the LUN is presented to the ESXi host and formatted with VMFS, it's time to change the path selection policy from the default Fixed to Round Robin, and to change the Round Robin default IOPS limit in ESXi from 1000 to 1, which will allow you to utilize all the iSCSI paths.

esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops 1 --device=naa.6006048c2fb691695617fc52e06065a2
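The switch from Fixed to Round Robin itself can also be scripted per device, using the same NAA identifier (VMW_PSP_RR is the built-in Round Robin path selection plugin); a minimal sketch, with a second command to verify the result:

esxcli storage nmp device set --device=naa.6006048c2fb691695617fc52e06065a2 --psp=VMW_PSP_RR
esxcli storage nmp device list --device=naa.6006048c2fb691695617fc52e06065a2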

Once it's changed, you will see all the paths showing Active (I/O) for each LUN that was changed from Fixed to Round Robin.


Failover – Failback Testing Scenarios

For the failover testing I presented a 500 GB LUN and created two Virtual Machines. The first guest machine runs Windows 2008 R2 Enterprise Edition with the following roles installed:

Microsoft Active Directory Role Services
Microsoft DNS Server Services
Exchange Server 2010 with all the roles

The second Virtual Machine is a Windows 7 Professional client with Microsoft Outlook 2010 connected to an Exchange 2010 MAPI profile; sending and receiving email internally is operational in normal mode.

Testing Networking

I tested failover under network failure scenarios by disconnecting one pNIC, “vmnic2”, from vSwitch1, the uplink mapped to the iSCSI-01 PortGroup, while vmkping -I vmk1 was running against both VNXe iSCSI target IPs, SPA-Eth2 “10.90.8.1” & SPB-Eth2 “10.90.8.2”. The guest Virtual Machines continued to work well. Likewise, if a Storage Processor (SPA) fails or is rebooted on the VNXe, the working Storage Processor (SPB) picks up the workload that was handled by SPA.


As you can see in the above screenshots, the Windows 7 Client VM continued pinging the Exchange Server, and the Exchange Server continued to ping the Windows 7 Client. vmk1 (iSCSI-01), which is mapped to vmnic2, stopped pinging SPA-Eth2 & SPB-Eth2.
LUN paths: both links mapped to vmnic2 on subnet 10.90.8.x are dead, while the 10.100.8.x paths mapped to vmnic6 (vmk2, iSCSI-02) are live and Active (I/O).
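The path states can also be confirmed from the ESXi shell; a quick check against the same LUN used earlier:

esxcli storage core path list --device=naa.6006048c2fb691695617fc52e06065a2

Paths on the failed 10.90.8.x side show up as dead, while the 10.100.8.x paths stay active.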


Pinging via vmk2 “vmnic6” to 10.100.8.1 & 10.100.8.2

vmkping -I vmk2 10.100.8.1

PING 10.100.8.1 (10.100.8.1): 56 data bytes
64 bytes from 10.100.8.1: icmp_seq=0 ttl=255 time=0.229 ms
64 bytes from 10.100.8.1: icmp_seq=1 ttl=255 time=0.192 ms
64 bytes from 10.100.8.1: icmp_seq=2 ttl=255 time=0.238 ms

--- 10.100.8.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.192/0.220/0.238 ms

vmkping -I vmk2 10.100.8.2

PING 10.100.8.2 (10.100.8.2): 56 data bytes


64 bytes from 10.100.8.2: icmp_seq=0 ttl=255 time=0.235 ms
64 bytes from 10.100.8.2: icmp_seq=1 ttl=255 time=0.245 ms

--- 10.100.8.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.235/0.240/0.245 ms

Relinking vmnic2 to vSwitch1, the ping resumed back to SPA-Eth2 & SPB-Eth2.

LUN paths resumed.


Testing Power Failure of VNXe Storage Processor-A & Storage Processor-B

The second test was done by removing the physical power from Storage Processor-B and initiating vmkping to both Ethernet interfaces of SPB through both VMkernel interfaces, vmk1 & vmk2. As a result, vmkping continued because the traffic was routed through the peer SP's ports.


Result of the ping afterwards:


The result below shows that the Exchange VM continued to ping the Client VM during the Storage Processor-B shutdown.


I did the same with Storage Processor-A and initiated ping to both Ethernet interfaces of SPA. Ping continued to both Ethernet interfaces as well. Pings inside each VM, Exchange Server to Client and vice versa, also continued, and the Exchange Server VM didn't log any freeze or error in the event viewer.

Conclusion

The VNXe3150 high availability features ensure data protection against any single component failure, at both the storage level and the networking level.