DATA CENTER SOLUTION DEPLOYMENT GUIDE

BROCADE VCS FABRIC AND EMC ISILON SCALE-OUT NAS

Brocade VCS Fabric and EMC Isilon Scale-out NAS Version: 2.0 Revision Date: December 17, 2013 Abstract

This document describes EMC Isilon scale-out NAS validated with Brocade networking solutions, and provides deployment guidelines for Brocade VCS Fabric technology.

Strategic Solutions Lab Guide 1


© 2013 Brocade Communications Systems, Inc. All Rights Reserved.

ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

Copyright © 2012-2013 EMC Corporation. All rights reserved. Published in the USA.

Published October 2012

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC online support website.


Contents

Chapter 1 Preface Summary .................................................................................................................................... 6

Introduction ............................................................................................................................................................... 6

Purpose of Deployment Guide ............................................................................................................................ 6

Target Audience ....................................................................................................................................................... 7

Key Contributors ...................................................................................................................................................... 7

Chapter 2 Deployment Architecture Overview ................................................................................................ 8

Architecture Overview .......................................................................................................................................... 8

Summary of deployment components ............................................................................................................ 8

Compute .................................................................................................................................................................. 8

Isilon Storage ........................................................................................................................................................ 8

Brocade Network ................................................................................................................................................ 9

Software Configuration Overview .................................................................................................................... 9

EMC Isilon OneFS ................................................................................................................................................ 9

Brocade Network OS (NOS) ......................................................................................................................... 10

Benefits of Brocade VCS Fabrics with EMC Isilon Scale-out NAS...................................................... 10

Deployment Topology ........................................................................................................................................ 11

Chapter 3 Deployment Configuration Guidelines ........................................................................................ 13

Deployment Overview ........................................................................................................................................ 13

Deployment Topology ........................................................................................................................................ 13

IP Addresses ........................................................................................................................................................... 15

Pre-requisites for configuration ..................................................................................................................... 15

VDX Deployment Configuration Steps ......................................................................................................... 16

Configure vLAGs on RB3 and RB4 connecting to hosts .................................................................... 16

Configure vLAGs on RB5 and RB6 connecting to Isilon Cluster ................................................... 16

Step 1: Creating and configuring a Logical Chassis ............................................................................ 16


Step 2: Verify Fabric ISL and Trunk Configurations between VDX switches .......................... 18

Step 3: VLAN creation and configuration ............................................................................................... 19

Step 4: Configure vLAGs on RB3 and RB4 connecting to hosts .................................................... 19

Step 5: Configure vLAGs on RB5 and RB6 connecting to Isilon Cluster .................................... 20

Step 6: Enable Flow Control Support ....................................................................................................... 21

Step 7: Configure MTU and Jumbo Frames ............................................................................................... 21

Step 8: Validate Port Channel configuration ......................................................................................... 21

Isilon Storage Configuration ............................................................................................................................ 22

Isilon Deployment Configuration Steps ....................................................................................................... 22

Step 1: Setup Node 1 ....................................................................................................................................... 23

Step 2: Add remaining nodes to the cluster .......................................................................................... 24

Step 3: Verify Node & Cluster Status ........................................................................................................ 24

Step 4: Configure Isilon Network from the Web GUI ....................................................................... 25

Step 5: Setup Isilon NAS Shares ................................................................................................................. 29

a) Configure Volume ................................................................................................................................... 29

b) Configure SMB Shares ...................................................................................................................... 30

c) Configure NFS Shares ........................................................................................................................... 31

Server Configuration and Cabling .................................................................................................................. 31

Step 1: Configure MTU and Jumbo Frames .......................................................................................... 32

Step 2: Configure interface bonding on Network Interface Cards (NICs) ................................ 32

A) Create the initial bonding configuration .................................................................................... 33

D) Restart the network stack ................................................................................................................ 34

E) Check bonding stack for errors ....................................................................................................... 34

Step 3: Enable Flow Control ........................................................................................................................ 35

Chapter 4 VDX in a Virtualized Environment ................................................................................................ 36

Topology ................................................................................................................................................................... 36

IP Addresses ........................................................................................................................................................... 37

Description ......................................................................................................................................................... 38

Assumptions....................................................................................................................................................... 38

Configuring Advanced Settings for Isilon Best Practices................................................................. 39

Configure additional Options for VMware Clusters and VMs ........................................................ 39

Step 1: Connect 10Gb interfaces to RB1 & configure ports for VLAN access ......................... 39


Step 2: Enable Flow Control Support ....................................................................................................... 40

Step 3: Setup VMware host Network Connections ............................................................................ 40

A) Configure VCS ports with connected uplink interfaces for ESXi storage path............ 41

B) Create Distributed vSwitch in vCenter ........................................................................................ 41

C) Configure Host networking in vCenter ........................................................................................ 46

D) Register vCenter in VCS ..................................................................................................................... 47

Confirm network connections between ESXi hosts and Isilon storage ................................ 48

Step 4: VMware Storage Configuration ................................................................................................... 48

A) Add Isilon Datastores to ESXi Hosts ............................................................................................. 48

Step 5: Configuring Advanced Settings for Isilon Best Practices ................................................. 51

A) Enable advanced parameters for Isilon storage in a VMware environment ............... 51

Step 6: Configure additional Options for VMware Clusters and VMs ......................................... 52

A) vSphere Optimizations ....................................................................................................................... 52

B) Windows VM Optimizations ............................................................................................................ 53

C) Windows 8 Optimizations ................................................................................................................ 54

Appendix A .................................................................................................................................................................... 55

Bill of Materials ...................................................................................................................................................... 55

Appendix B .................................................................................................................................................................... 56

Management Network ........................................................................................................................................ 56

Pre-requisites .................................................................................................................................................... 56

Configure ICX Switch ...................................................................................................................................... 56

Appendix C .................................................................................................................................................................... 57

References ............................................................................................................................................................... 57

Appendix D ................................................................................................................................................................... 58

About Brocade........................................................................................................................................................ 58


Chapter 1 Preface Summary

Introduction

Enterprise data centers are constantly faced with requirements to expand their infrastructure to accommodate new data, retain old data longer, and meet the performance needs of new applications, while also improving utilization and efficiency, lowering costs, and gaining the flexibility to deliver applications and compute resources on demand. Traditional scalable, high-capacity, high-performance storage systems were built on SANs: separate networks designed to accommodate storage-specific data flows. However, new developments in distributed applications and server virtualization are driving increasing adoption of Network Attached Storage (NAS) on Ethernet, bringing to the Ethernet networks that support storage the same requirements traditionally found in SANs: scalability, capacity, predictable latency, and reliability.

Brocade VCS Fabric technology delivers high-performance, reliable networks for NAS solutions that can scale without disruption to meet the new requirements of NAS storage infrastructure such as EMC Isilon scale-out NAS. And while virtualization has enhanced the efficiency of servers in the data center, it has also magnified the challenges of deploying a storage infrastructure that delivers the anticipated end-to-end cost savings and management advantages of virtualization. A VCS Fabric with NAS is ideal here, providing predictable performance and reliability with simplified change management. VCS Fabric technology is built on TRILL/FSPF and provides unique capabilities including distributed intelligence, Automated Migration of Port Profiles (AMPP), virtual link aggregation groups (vLAGs), and lossless Ethernet transport, removing previous limitations of Ethernet for storage traffic.

Purpose of Deployment Guide

This deployment guide covers configuration of EMC Isilon NAS cluster storage with a Brocade VCS Fabric of VDX switches. The example configuration uses the EMC Isilon X200 with Brocade VDX 8770 and VDX 6740 switches. Where appropriate, best practice recommendations are provided. The guide is valuable beyond the specific EMC Isilon and Brocade VDX products used: the example configuration can serve as a building block for larger scale-out NAS deployments combining an EMC Isilon node cluster and Brocade VDX Ethernet fabrics with virtual or physical bare-metal servers. This document can be used as a reference deployment guide with servers running Red Hat Enterprise Linux (RHEL) for demonstration purposes. In addition, an example deployment


with VMware is also detailed in Chapter 4. In this example we provide the configuration for the management network using the Brocade ICX 6610 series switch (optional); the configuration details are documented in Appendix B. This deployment guide does not include configuration of disaster recovery or data protection mechanisms, such as replication or backup procedures, beyond the basic redundancies included within the VCS Fabric and Isilon storage cluster.

Target Audience

This content targets cloud, storage, and network architects and engineers who are evaluating and deploying Isilon NAS solutions in their networks and want guidance on how to deploy Isilon with Brocade VCS Fabric technology. Readers of this document are expected to have enough network expertise and training to install and configure Brocade VDX series switches, EMC Isilon series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and it is recommended that readers be familiar with these documents.

Key Contributors

The content in this guide was provided by the following key contributors.

Lead Architect: Marcus Thordal, Strategic Solutions Lab Lead Engineer: Anika Suri, EMC OEM Systems Engineer


Chapter 2 Deployment Architecture Overview

Architecture Overview

This deployment guide uses an EMC Isilon array and Brocade VDX series switches to provide the storage, networking, and x86 compute resources, running Red Hat Enterprise Linux (RHEL) 6 in our example deployment. The core components demonstrated in this deployment guide are compute, storage, and networking. The example configuration draws on Brocade's VDX portfolio to demonstrate that any combination of VDX switches works together in a VCS Fabric, enabling switch selection to match each solution's requirements for port density and interface speeds (1/10/40 Gbps) as required. The Isilon storage subsystem uses aggregated interfaces (LAGs) across switches for redundancy and increased bandwidth. Within the VCS Fabric, a LAG can span multiple switches (vLAG), providing redundancy and flexibility: should one of the links fail, the storage remains available through a redundant Brocade VDX switch.

Summary of deployment components

This section briefly describes the key components of this deployment guide.

Compute

x86 physical servers running an operating system such as Linux or Windows are common compute deployments in scale-out NAS environments. For demonstration purposes, Red Hat Enterprise Linux (RHEL) 6 is used in our example deployment. This deployment guide provides the flexibility to design and implement the customer's choice of server components. The server infrastructure must conform to the following attributes:

• A sufficient number of servers with the required cores and memory to support customer applications.

• Sufficient network connections to enable redundant connectivity to the network shared with the Isilon cluster.

• Excess capacity to withstand a server failure and allow failover in the environment.

Isilon Storage

EMC Isilon scale-out storage solutions are designed for the enterprise and are powerful yet simple to install, manage, and scale to virtually any size. With EMC Isilon scale-out network-attached storage (NAS), you can have massive room for growth, with over 20 petabytes (PB) of capacity per cluster. The Isilon array provides the following essentials:

• Simple Network Management designed for ease of use


• Massive scalability with easy, grow-as-you-go flexibility

• Automated tiered storage to optimize resources

• Multi-protocol support to maximize operational flexibility

• Resilient data protection for a highly available environment

• Robust security and data encryption options

The Isilon X200 series array provides a flexible and comprehensive storage product line that strikes the right balance between large capacity and presenting high-performance NFS datastores to hosts.

Brocade Network

All network traffic is carried by a Brocade Ethernet fabric network with redundant cabling and switching of NFS storage traffic. IP management traffic is carried over separate networks, as explained in Appendix B. Using Brocade Ethernet fabric technology, this deployment implements the high-performance, efficient, and resilient network illustrated in this guide. The Brocade VDX Ethernet fabric networking solution provides the following attributes:

• Redundant network links for the hosts, storage and between switches.

• Architecture for Traffic isolation based on industry-accepted best practices.

• Support for link aggregation across switches.

• High utilization and high availability networking

• Virtualization automation
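Link aggregation across switches (vLAG) is configured in detail later in this guide. As a brief, hedged sketch of what the member-port side looks like in Brocade Network OS (the interface name, RBridge ID, and port-channel number below are placeholders, not the values used in this deployment):

```
RB5# configure terminal
RB5(config)# interface TenGigabitEthernet 5/0/1
RB5(conf-if-te-5/0/1)# channel-group 10 mode active type standard
RB5(conf-if-te-5/0/1)# no shutdown
```

Repeating the same port-channel number on member interfaces of a second RBridge (for example, RB6) forms the cross-switch vLAG toward the attached host or storage node.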

Software Configuration Overview

This section briefly describes the key software versions of the components used in this deployment guide.

EMC Isilon OneFS

The OneFS operating system is the intelligence behind EMC Isilon scale-out storage systems. OneFS combines the three layers of traditional storage architectures (file system, volume manager, and data protection) into one unified software layer, creating a single intelligent file system that spans all nodes within a cluster.

We recommend using the latest OneFS code from EMC. In this example configuration we are using OneFS 6.5.5.


Brocade Network OS (NOS)

NOS is a scalable network operating system available for the Brocade VDX switch portfolio. Purpose-built for mission-critical, next-generation data centers, NOS supports the following capabilities:

• Simplified network management

• Automatic network formation

• High resiliency

• Self-healing and rapid network convergence

• Improved network utilization

• Server virtualization integration

With the VCS Logical Chassis feature introduced in NOS 4.0, all VDX switches in an Ethernet fabric are managed as a single logical chassis and appear as a single switch to any attached network or components.
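As a hedged sketch (the VCS ID and RBridge ID below are placeholders, not the values used in this deployment), enabling logical chassis mode on an individual switch looks roughly like:

```
switch# vcs vcsid 10 rbridge-id 1 logical-chassis enable
```

The switch then reboots and joins the logical chassis identified by that VCS ID; the exact values and full procedure for this deployment are covered in Step 1 of the VDX configuration.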

We recommend using the latest Network OS code from Brocade. In this example configuration we are using NOS 4.0.

Benefits of Brocade VCS Fabrics with EMC Isilon Scale-out NAS

EMC Isilon scale-out NAS implemented with Brocade VDX switches and VCS Fabric technology brings innovation to the IP storage network. VCS Fabric technology delivers high performance, scalability, and resiliency at every layer of the data center network. Advantages of deploying Brocade VCS Fabric technology at the IP storage network layer include:

• Elastic, non-disruptive network scaling. Brocade VCS fabrics are elastic, self-forming, and self-healing, allowing administrators to concentrate on service delivery rather than fabric administration.

• All-active connections and load balancing throughout Layers 1–3 provide resiliency at any network layer with optimal load balancing.

• Brocade VCS fabrics are isotropic, offering uniform, multidimensional scalability that enables the broadest diversity of deployment scenarios and operational flexibility. Large or small, Brocade VCS fabrics work and act the same, offering operational efficiencies that span a very wide range of deployed configurations and requirements.

• Brocade VCS fabrics are easily deployed, with a shared control plane and unified management plane that allow the fabric nodes to function and to be managed as a


single entity. Multiple Layer 3 gateways help bring fabric benefits to Layer 3 traffic, providing maximum utilization.

• Wire-speed performance: high-density 10 Gigabit Ethernet and 40 Gigabit Ethernet, with approximately 4 μs latency within the VCS fabric.

Brocade VCS fabrics can be designed to meet virtually any application requirements, in enterprise and service provider data center environments alike. Organizations can start small at the access layer with pilot projects deploying fixed form-factor VDX series switches, building out the Ethernet fabric as network needs grow and adjusting capabilities as required. Existing VCS fabrics can be elastically scaled with Brocade VDX 6740 and VDX 8770 switches. The ability to deploy these switches in existing environments, either at the access or aggregation layer or both, preserves existing investments while future-proofing the network for 40 Gigabit Ethernet and 100 Gigabit Ethernet (VDX 8770 only) technologies to come.

Deployment Topology

The Brocade VCS Fabric and EMC Isilon scale-out NAS configurations in this deployment guide are validated with proven best-of-breed technologies to create a complete solution, enabling you to make an informed decision when deploying Brocade VDX switches with EMC Isilon scale-out storage. These defined configurations form the basis for creating a custom deployment design. Brocade VDX switches with VCS Fabric technology enable designs with fewer tiers (e.g., a 1-tier/spine-only or 2-tier spine-and-leaf design rather than 3-tier), decreasing cost, complexity, cabling, and power/heat for operational efficiency. The network design should be based on the maximum number of ports required and the desired oversubscription ratio for the traffic between compute and storage devices. A single-tier ("spline") design that meets short-term needs can then grow to a two-tier spine-and-leaf design as the next logical step.

In this deployment guide, a two-tier design has spine switches at the top tier and leaf switches at the bottom tier. Servers/compute and storage always attach to leaf switches at the top of every rack (or, for higher-density leaf switches, the top of every N racks), and leaf switches uplink to two or more spine switches. For this deployment we recommend that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections. Ensure that the uplinks are connected to the existing customer network, if required. The configuration of the management switch for the management ports of all deployment components is outlined in Appendix B of this document. The deployment topology used in this design guide (illustrated in Figure 1) shows the layout of the major components.
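The oversubscription ratio mentioned above is simply the downstream (host- and storage-facing) bandwidth of a leaf switch divided by its uplink bandwidth toward the spine. A quick sketch of the arithmetic (the port counts below are illustrative, not taken from this guide):

```python
# Illustrative leaf-switch port counts; substitute your own design figures.
LEAF_HOST_PORTS = 48   # 10 GbE host/storage-facing ports per leaf
HOST_PORT_GBPS = 10
LEAF_UPLINKS = 4       # 40 GbE uplinks per leaf toward the spine
UPLINK_GBPS = 40

downstream = LEAF_HOST_PORTS * HOST_PORT_GBPS  # total edge bandwidth (Gbps)
upstream = LEAF_UPLINKS * UPLINK_GBPS          # total uplink bandwidth (Gbps)
ratio = downstream / upstream                  # oversubscription ratio

print(f"Oversubscription ratio: {ratio:.0f}:1")  # prints: Oversubscription ratio: 3:1
```

A lower ratio means less contention between compute and storage traffic; adding uplinks or reducing attached ports per leaf brings the ratio down.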


Figure 1 - Deployment Topology

Note: The Brocade VDX 6740/8770 Hardware Reference Manual and the Brocade VDX 8770 Switch Installation Guide provide instructions on racking, cabling, and powering the VDX 6740s and the VDX 8770s respectively.


Chapter 3 Deployment Configuration Guidelines

Deployment Overview

The deployment process is divided into stages for network, storage, and compute. Upon completion, the deployment is ready for integration with the existing customer network infrastructure.

Deployment Topology

Below is a network diagram of the deployed topology, showing a spine-leaf architecture with Brocade VDX 8770s (RB1, RB2) as the spine and VDX 6740s (RB3, RB4, RB5, RB6) as the Top of Rack (ToR) leaf switches, with the Isilon cluster nodes and the physical servers running RHEL 6 attached. This provides uniform and redundant access for all servers to all storage and simplifies scale-out when adding more servers and NAS nodes. Low latency, high bandwidth, high availability, and simple management are maintained as physical resources are added. Isilon intra-cluster communication is handled by a dedicated Isilon InfiniBand network. All internal Isilon cluster traffic uses these paths and must be assigned an internal IP address range during cluster setup. For details, please refer to Isilon Deployment Configuration Steps.


Figure 2: Deployment Topology

When connecting the RHEL servers SVR1 and SVR2, it is recommended to use separate dedicated network interfaces for management and for storage access. For high availability, the best practice is to use redundant interfaces for each of these networks. It is very common to use on-board 1 GbE NICs for management and 10 GbE interfaces for storage; some network adapters, such as the Brocade Fabric Adapter 1860, provide traffic separation. This guide shows a fully redundant deployment example, using 10 GbE interfaces on the server for storage traffic and 1 GbE for management in this example topology.
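Interface bonding on the RHEL servers is configured step by step in the Server Configuration section. As a hedged preview, a minimal RHEL 6 bonding configuration might look like the following (the device names, IP address, MTU, and bonding mode are illustrative; 802.3ad/LACP is assumed here to match an active vLAG on the fabric side):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative values)
DEVICE=bond0
IPADDR=192.168.50.105
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
MTU=9000

# /etc/sysconfig/network-scripts/ifcfg-eth2  (repeat for each 10GbE member NIC)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

After restarting the network stack, the bond state can be checked in /proc/net/bonding/bond0.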


The VCS Fabric configuration is covered in this section; the configuration of the management switch for all components is outlined in Appendix B of this document.

IP Addresses

When deploying a NAS infrastructure, the logical network infrastructure and IP topology must be planned in advance. In our test environment, we use a separate management network with all IP addresses in the default VLAN 1. For the VCS network, VLAN separation is used for storage (VLAN 50), as shown in the table below.

Table 1. IP Addresses

Device          Type   Management_IP    Store_IP         x200_IP
                       (VLAN1)          (VLAN50)         (InfiniBand)
BR-VDX8770-4    VDX    192.168.90.93
BR-VDX8770-4    VDX    192.168.90.94
BR-VDX6740-48   VDX    192.168.90.95
BR-VDX6740-48   VDX    192.168.90.96
BR-VDX6740-48   VDX    192.168.90.97
BR-VDX6740-48   VDX    192.168.90.98
IS-x200-1       NAS    192.168.90.101   192.168.50.101   172.16.1.101
IS-x200-2       NAS    192.168.90.102   192.168.50.102   172.16.1.102
IS-x200-3       NAS    192.168.90.103   192.168.50.103   172.16.1.103
IS-x200-4       NAS    192.168.90.104   192.168.50.104   172.16.1.104
SVR1            HOST   192.168.90.105   192.168.50.105
SVR2            HOST   192.168.90.106   192.168.50.106
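Before configuring the fabric, an addressing plan like the one above can be sanity-checked programmatically. A minimal sketch using Python's standard `ipaddress` module; the /24 netmasks are assumptions inferred from the addresses used, not stated in the table:

```python
import ipaddress

# Storage-VLAN (VLAN 50) addresses from Table 1; the /24 netmask is an assumption.
storage_net = ipaddress.ip_network("192.168.50.0/24")
storage_ips = ["192.168.50.101", "192.168.50.102", "192.168.50.103",
               "192.168.50.104", "192.168.50.105", "192.168.50.106"]

# Every storage endpoint must sit in the same subnet, or hosts and Isilon
# nodes will not reach each other without routing.
assert all(ipaddress.ip_address(ip) in storage_net for ip in storage_ips)

# The InfiniBand range must not overlap the Ethernet networks.
ib_net = ipaddress.ip_network("172.16.1.0/24")
assert not ib_net.overlaps(storage_net)
print("addressing plan OK")
```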

Note: Details on the configuration of the management network are documented in Appendix B.

Pre-requisites for configuration

Before we detail the configuration steps, the user should note the following pre-requisites/assumptions:

1. All physical connections have been made, and all management interfaces for the VDX switches and RHEL servers have IP addresses assigned and are accessible via SSH. On Brocade VDXs, all ISL ports connected to the same neighbor VDX switch attempt to form a trunk.


This example configuration assumes that trunks have been formed between respective VDX switches, based on the deployment topology in Figure 2.

2. All VDX switches have the Ports on Demand (POD) licenses already installed, if required.

3. The user has knowledge of user access levels on Brocade VDXs and is familiar with VCS terminology. For details on setting IP addresses, licenses, trunk port groups, RBridge IDs, and VCS IDs, please refer to the Network OS Administrator's Guide, v4.0.0.

VDX Deployment Configuration Steps

The VDX deployment process is divided into the stages shown in Table 2. Upon completion of the deployment, the VCS Fabric is ready for integration with the existing customer management network and server infrastructure.

Table 2. VDX Configuration Steps

Steps VDX Deployment Step Description

1 Creating and configuring a Logical Chassis

2 Verify Fabric ISL and Trunk Configurations between VDX switches

3 VLAN creation and configuration

4 Configure vLAGs on RB3 and RB4 connecting to hosts

5 Configure vLAGs on RB5 and RB6 connecting to Isilon Cluster

6 Enable Flow Control Support

7 Configure MTU and Jumbo Frames

8 Validate Port Channel configuration

Step 1: Creating and configuring a Logical Chassis


When VCS is deployed as a Logical Chassis, it can be managed from a single Virtual IP, and configuration changes are automatically saved across all switches in the fabric. In the following example we show the configuration of a Logical Chassis with RB5 as the principal node. The RBridge ID is a unique identifier for an RBridge (a physical switch in a VCS fabric), and the VCS ID is a unique identifier for a VCS fabric. The factory default VCS ID is 1, and all switches in a VCS fabric must have the same VCS ID. In this example configuration we set all VDXs to VCS ID 1, and RBridge IDs are assigned as per the Deployment Topology in Figure 2.

i) In Privileged EXEC mode, enter the vcs command with options to set the VCS ID and the RBridge ID, and to enable logical chassis mode for the switch. Please note that the VCS ID is set to the same value on each node that belongs to the cluster; in this example we set it to 1.

Sw0# vcs vcsid 1 rbridge-id 5 logical-chassis enable

ii) The switch reboots after this, and you are asked whether you want to apply the default configuration; answer yes.

iii) The switch-attributes command is used to set a meaningful host name on every VDX, as shown in the example below-

Sw0(config)# switch-attributes 5 host-name BRCD6740-RB5

NOTE: To create a Logical Chassis cluster, the user needs to perform the above steps on every VDX in the VCS fabric, changing only the RBridge ID each time based on Figure 2, provided all physical connectivity requirements have been met.

iv) When you have enabled the logical chassis mode on each node in the cluster, run the show vcs command to determine which node has been assigned as the cluster principal node, which can be used to configure the entire VCS fabric. The arrow (>) denotes the cluster principal node. The asterisk (*) denotes the current logged-in node.

BRCD6740-RB5# show vcs
Config Mode   : Distributed
VCS ID        : 1
VCS GUID      : 86024da1-b2c2-4b35-955d-41c27598aaa0
Total Number of Nodes : 6
Rbridge-Id     WWN                        Management IP    Status    HostName
-----------------------------------------------------------------------------------------
5             >10:00:00:05:33:51:63:42*   192.168.90.97    Online    switch
........
6              10:00:00:05:33:B7:F0:00    192.168.90.98    Online
<truncated output>

NOTE: Any global and local configuration changes made now are distributed automatically to all nodes in the logical chassis cluster. You can enter the RBridge ID configuration mode for any RBridge in the cluster from the cluster principal node, by logging into any of the VDXs in the fabric or by assigning an optional Virtual IP to the entire fabric, as shown below-


BRCD6740-RB5 (config)# vcs virtual ip address 192.168.90.97

In the above example, the entire fabric can now be managed with one Virtual IP, 192.168.90.97.

Note: For details on Logical Chassis, please refer to the Network OS Administration Guide, v4.0.0.

Step 2: Verify Fabric ISL and Trunk Configurations between VDX switches

It is recommended that the VDXs in this deployment have redundant Fabric ISLs between them. Between two VDXs this is achieved by connecting a minimum of two cables between any pair of 10 Gbps ports on the two switches. The ISLs are self-forming, as the VDX platform comes preconfigured with a default port configuration that enables ISLs for easy and automatic VCS fabric formation. In this deployment there are two ISLs between each spine and leaf VDX; since we connect ports in the same port group on the two switches, the ISLs automatically form a Brocade trunk of 20 Gbps each, which guarantees frame-based load balancing across the ISLs. With NOS v4.0.0 the number of ISLs in a trunk can vary from 1 to 16, depending on customer traffic and the oversubscription ratio. Configuration for the trunk needs to be done on the trunk master. For details on port groups and trunks, please refer to the Network OS Administrator's Guide, v4.0.0. The fabric isl enable, fabric trunk enable, no fabric isl enable, and no fabric trunk enable commands can be used to toggle the ports that are part of a trunked ISL. The following example shows the running configuration of an ISL port on RB5-

BRCD6740-RB5# show running-config interface TenGigabitEthernet 5/0/20
interface TenGigabitEthernet 5/0/20
 fabric isl enable
 fabric trunk enable
 no shutdown
!
.....

One can verify ISL configurations using the show fabric isl or show fabric trunk commands on RB5, as shown below-

BRCD6740-RB5# show fabric isl
Rbridge-id: 5    #ISLs: 2
Src    Src          Nbr    Nbr
Index  Interface    Index  Interface   Nbr-WWN                  BW   Trunk  Nbr-Name
------------------------------------------------------------------
20     Te 5/0/20    20     Te 1/0/20   10:00:00:05:33:40:31:93  10G  Yes    "BRCD6740-RB1"
21     Te 5/0/21    21     Te 2/0/20   10:00:00:05:33:40:31:94  10G  Yes    "BRCD6740-RB2"
.......

BRCD6740-RB5# show fabric trunk


Rbridge-id: 5
Trunk  Src    Source       Nbr    Nbr
Group  Index  Interface    Index  Interface   Nbr-WWN
-----------------------------------------------------------------
1      19     Te 5/0/19    19     Te 1/0/19   10:00:00:05:33:40:31:92
1      20     Te 5/0/20    20     Te 1/0/20   10:00:00:05:33:40:31:93
....

Step 3: VLAN creation and configuration

Create VLAN 50 for storage traffic from the RHEL servers on RB5. All management traffic will use VLAN 1, which is the default. When VCS is deployed in distributed mode, it works as a Logical Chassis, and therefore VLANs only need to be configured once to be available across the complete VCS Fabric.

VDX6740_RB5# conf t
VDX6740_RB5(config)# interface Vlan 50
VDX6740_RB5(config-Vlan-50)# description IsilonTest1_Storage

Step 4: Configure vLAGs on RB3 and RB4 connecting to hosts

vLAGs provide an additional degree of device-level redundancy while providing active-active link-level load balancing. In this configuration, we will be configuring Link Aggregation Control Protocol (LACP) 802.3ad port channels. Only ports with the same speed can be aggregated into LACP port channels. Each RHEL server uses vLAGs configured on the connected ToR switches in the respective racks (see Figure 2) to connect to the Isilon storage. In the following section we go through the creation and configuration of vLAGs for SVR1, which is connected on ports 3/0/37 and 4/0/37. This is defined as port-channel 105. For details on vLAGs, please refer to the Network OS Administrator's Guide, v4.0.0.

i) Create and configure Port Channel 105 for SVR1 -

VDX6740_RB5# conf t
VDX6740_RB5(config)# interface Port-channel 105
VDX6740_RB5(config-Port-channel-105)# description vLAG_SVR1_Storage
VDX6740_RB5(config-Port-channel-105)# switchport
VDX6740_RB5(config-Port-channel-105)# switchport mode access
VDX6740_RB5(config-Port-channel-105)# switchport access vlan 50
VDX6740_RB5(config-Port-channel-105)# no shutdown

ii) Add the physical ports on RB3 & RB4 to the vLAG -


VDX6740_RB5(config)# int ten 3/0/37
VDX6740_RB5(conf-if-te-3/0/37)# channel-group 105 mode active type standard
VDX6740_RB5(conf-if-te-3/0/37)# no shutdown
VDX6740_RB5(conf-if-te-3/0/37)# int ten 4/0/37
VDX6740_RB5(conf-if-te-4/0/37)# channel-group 105 mode active type standard
VDX6740_RB5(conf-if-te-4/0/37)# no shutdown
VDX6740_RB5(conf-if-te-4/0/37)# end

iii) Repeat the configuration for SVR2, which is connected to RB3 and RB4 on 3/0/38 and 4/0/38 respectively, through Port Channel 106.

Step 5: Configure vLAGs on RB5 and RB6 connecting to Isilon Cluster

The Isilon storage system uses bonded interfaces for client connections to increase performance and availability should one or more 10 Gb connections fail. Each Isilon node uses LACP port channel groups configured on the two leaf VDX 6740 switches, RB5 and RB6. Each node in the cluster uses vLAGs configured on the connected ToR switches in the respective racks (see Figure 2). In the following section we go through the creation and configuration of vLAGs for Node 1, which is connected on ports 5/0/41 and 6/0/42. This is defined as port-channel 101.

i) Create and configure Port Channel 101 for Isilon Node 1 -

VDX6740_RB5(config)# interface Port-channel 101
VDX6740_RB5(config-Port-channel-101)# description vLAG_Isilon_Node1
VDX6740_RB5(config-Port-channel-101)# switchport
VDX6740_RB5(config-Port-channel-101)# switchport mode access
VDX6740_RB5(config-Port-channel-101)# switchport access vlan 50
VDX6740_RB5(config-Port-channel-101)# no shutdown

ii) Add the physical ports on RB5 & RB6 (where Isilon node 1 is connected) to the vLAG-

VDX6740_RB5(config)# int ten 5/0/41
VDX6740_RB5(conf-if-te-5/0/41)# channel-group 101 mode active type standard
VDX6740_RB5(conf-if-te-5/0/41)# no shutdown
VDX6740_RB5(conf-if-te-5/0/41)# int ten 6/0/42
VDX6740_RB5(conf-if-te-6/0/42)# channel-group 101 mode active type standard
VDX6740_RB5(conf-if-te-6/0/42)# no shutdown
VDX6740_RB5(conf-if-te-6/0/42)# end

iii) Repeat steps i) and ii) to enable vLAGs for Isilon nodes 2-4, based on Figure 2.


Step 6: Enable Flow Control Support It is recommended to enable Flow Control on vLAG-facing interfaces connected to the Isilon nodes, as shown below.

i) Enable QOS Flow Control for both tx and rx on RB5 and RB6 connected to Isilon cluster-

VDX6740_RB5# conf t
VDX6740_RB5(config)# interface Port-channel 101
VDX6740_RB5(config-Port-channel-101)# qos flowcontrol tx on rx on

ii) Repeat step i) to enable flow control for VDX interfaces connected to Isilon nodes 2-4 as well.

Step 7: Configure MTU and Jumbo Frames

Brocade VDX Series switches support the transport of jumbo frames. This Scale-Out NAS solution recommends an MTU of 9216 (jumbo frames) for efficient storage and migration traffic. Jumbo frames are enabled by default on the Brocade ISL trunks. However, to accommodate end-to-end jumbo frame support on the network for the edge systems, this feature can be enabled under the vLAG interface. Please note that for end-to-end flow control and jumbo frames, they need to be enabled on the host servers and the storage as well, with the same MTU size of 9216.
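The benefit of jumbo frames can be seen with a rough payload-efficiency calculation: larger frames amortize the fixed per-frame header cost. A back-of-the-envelope sketch (not from this guide; header sizes assume untagged Ethernet and IPv4/TCP without options):

```python
# Per-frame overhead assumptions (illustrative):
ETH_OVERHEAD = 18     # Ethernet header (14) + FCS (4)
IP_TCP_OVERHEAD = 40  # IPv4 (20) + TCP (20), no options

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that carry application payload."""
    payload = mtu - IP_TCP_OVERHEAD
    return payload / (mtu + ETH_OVERHEAD)

print(f"MTU 1500: {payload_efficiency(1500):.3f}")  # ~0.962
print(f"MTU 9000: {payload_efficiency(9000):.3f}")  # ~0.994
```

Beyond the small efficiency gain, the larger win in practice is fewer frames per second to process, which reduces interrupt and protocol-processing load on hosts and storage.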

i) Configuring Jumbo Frames-

BRCD6740-RB5# configure terminal
BRCD6740-RB5(config)# interface Port-channel 101
BRCD6740-RB5(config-Port-channel-101)# mtu 9216

(The valid MTU range on the interface is 1522-9216.)

ii) Repeat step i) on all Port Channel interfaces connecting to the Isilon (101-104) and the RHEL servers.

Step 8: Validate Port Channel configuration


After performing Steps 1-7, we recommend that the user validate each vLAG Port-channel interface. In the example below, we validate Port Channel 101.

VDX6740_RB5# show running-config interface Port-channel 101
interface Port-channel 101
 vlag ignore-split
 mtu 9216
 switchport
 switchport mode access
 switchport access vlan 50
 qos flowcontrol tx on rx on
 no shutdown

Isilon Storage Configuration The OneFS Admin Guide describes the Isilon cluster as follows:

“A cluster includes two networks: an internal network to exchange data between nodes and an external network to handle client connections. Nodes exchange data through the internal network with a proprietary, unicast protocol over InfiniBand. Each node includes redundant InfiniBand ports so you can add a second internal network in case the first one fails. Clients reach the cluster with 1 GigE or 10 GigE Ethernet. Since every node includes Ethernet ports, the cluster's bandwidth scales with performance and capacity as you add nodes.”

Deploying the EMC Isilon is one of the simpler and more direct processes among NAS appliances. In this section, we describe how to configure the Isilon cluster and join all nodes to it. To build the cluster, the Isilon X200 requires setting up just one node with the addressing information. OneFS automatically expands the cluster when additional nodes are added with a few keystrokes. Therefore, most of the work involves setting up node 1 and then telling subsequent nodes to join its cluster. Once you connect to the serial console, follow the onscreen prompts.

Assumptions

1. The test environment uses four Isilon X200 nodes, and limits the IP address ranges to 10 addresses. Production environments would allow for more nodes for future expansion.

2. Since the main purpose of this test was to validate the data path, the test did not use advanced domain services such as directory authentication or DNS.

Isilon Deployment Configuration Steps


The Isilon cluster deployment process is divided into the stages shown in Table 3. Upon completion of the deployment, the Isilon cluster is ready for integration with customer network and server infrastructure. Table 3. Configure Isilon Cluster

Steps Isilon Deployment Step Description

1 Setup Node 1

2 Add remaining nodes to the cluster

3 Verify Node & Cluster Status

4 Configure Isilon Network from the Web GUI

5 Setup Isilon NAS Shares

Step 1: Setup Node 1

1. Connect to the serial console on node 1 with a null modem connector
   a. 115200/8/N/1/Hardware
   b. Press Enter to start the setup wizard

2. Create a new cluster
3. Change the root password from the default: Password!
4. Change the admin password from the default: Password!
5. Enable SupportIQ

   a. Enter company name: Brocade
   b. Enter contact name: TestAdmin@brocade

6. Enter a new name for the cluster: EMCworld
7. Use the default current encoding: utf-8
8. Configure cluster internal IB interface (int-a):

   a. Configure netmask: 255.255.255.0
   b. Configure IP range: 172.16.1.101-172.16.1.110

9. Configure external management interface ext-1:
   a. Configure netmask: 255.255.255.0
   b. Configure MTU: 1500
   c. Configure IP range: 192.168.90.101-192.168.90.105


10. Enter default gateway: 192.168.90.1
11. Configure SmartConnect settings (optional)

Note: SmartConnect (VIP for failover) is configured after the cluster is online, during the bonded network interface configuration

12. Configure DNS settings (optional)
13. Configure cluster date and time
    a. Configure time zone: Pacific Time Zone
14. Configure cluster join mode: Manual

NOTE: The default option is Manual, and as this is an initial cluster setup, it is fine for the additional nodes to initiate the join. After the system moves into production, it may be prudent to change the join mode to Secure.
Manual — joins can be initiated by either the node or the cluster.
Secure — joins can be initiated only by the cluster.

15. Commit these changes and initialize node1
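The address ranges entered in the wizard above can be sanity-checked before committing. A small illustrative sketch (the node count and ranges are taken from this test setup):

```python
import ipaddress

def pool_size(first: str, last: str) -> int:
    """Number of addresses in an inclusive first-last range."""
    return int(ipaddress.ip_address(last)) - int(ipaddress.ip_address(first)) + 1

NODES = 4  # X200 nodes in this test cluster

# Internal int-a (InfiniBand) pool: 10 addresses for 4 nodes leaves headroom.
assert pool_size("172.16.1.101", "172.16.1.110") >= NODES

# External ext-1 management pool: 5 addresses for 4 nodes plus one spare.
assert pool_size("192.168.90.101", "192.168.90.105") >= NODES
print("wizard address ranges cover all nodes")
```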

Step 2: Add remaining nodes to the cluster

1. Connect to the serial console on the additional nodes with a null-modem connector
   a. 115200/8/N/1/Hardware
   b. Press Enter to start the setup wizard

2. Join an existing cluster: EMCworld

Step 3: Verify Node & Cluster Status

1. SSH to each node via its ext-1 IP address, or connect via the serial console
2. Run isi status to view cluster status

EMCworld-1# isi status
Cluster Name: EMCworld
Cluster Health:     [  OK ]
Cluster Storage:  HDD             SSD
Size:             41T (43T Raw)   0 (0 Raw)
VHS Size:         2.0T
Used:             23G (< 1%)      0 (n/a)
Avail:            41T (> 99%)     0 (n/a)

                   Health  Throughput (bps)   HDD Storage        SSD Storage
ID |IP Address     |DASR|   In   Out  Total|  Used / Size       |Used / Size
---+---------------+----+-----+-----+-----+------------------+----------------
  1|192.168.90.101 | OK |  74K| 264K| 338K|  5.8G/ 10T(< 1%)|  (No SSDs)
  2|192.168.90.102 | OK |    0|    0|    0|  5.7G/ 10T(< 1%)|  (No SSDs)
  3|192.168.90.103 | OK |  171|    0|  171|  5.7G/ 10T(< 1%)|  (No SSDs)
  4|192.168.90.104 | OK |  98K|    0|  98K|  5.7G/ 10T(< 1%)|  (No SSDs)
------------------------+-----+-----+-----+------------------+----------------
Cluster Totals:         | 173K| 264K| 437K|   23G/ 41T(< 1%)|  (No SSDs)


Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only

Critical Events:

Cluster Job Status:
No running jobs.
No paused or waiting jobs.
No failed jobs.

Recent job results:
Time            Job                        Event
--------------- -------------------------- ------------------------------
04/02 17:44:17  MultiScan[1]               Succeeded (LOW)
04/02 17:59:10  MultiScan[2]               Succeeded (LOW)
04/02 18:01:50  MultiScan[3]               Succeeded (LOW)

Step 4: Configure Isilon Network from the Web GUI

i) Log in to the Isilon Administration Console Web GUI using the node 1 management IP address of the cluster just configured.

ii) From the Cluster tab, select Networking

iii) Click "Add Subnet" and follow the onscreen instructions

iv) Set the subnet Name, Description, and Netmask
   a. Name: S-Datastore


   b. Description: Datastore
   c. Netmask: 255.255.255.0
   d. Gateway: none
   e. SmartConnect: 192.168.50.111

v) Add an IP address pool for the cluster nodes

   a. Name: Datastore
   b. IP range: 192.168.50.101-192.168.50.110


vi) Define the SmartConnect Settings

Note: SmartConnect has two modes, Basic and Advanced; Advanced requires an additional license from EMC. The unlicensed Basic mode balances client connections by using a round-robin policy, selecting the next available node on a rotating basis. For more information on the Advanced policies, see the OneFS Admin Guide.

   a. Zone name: zone1
   b. Connect Policy: Round Robin
   c. Service Subnet: S-Datastore
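The Basic round-robin policy configured above can be pictured as a simple rotation over the pool's node IPs: each new client DNS lookup is answered with the next address in the cycle. A purely illustrative sketch (OneFS implements this inside its DNS responses, not in client code):

```python
from itertools import cycle

# Storage-pool addresses from the Datastore pool configured above.
node_ips = ["192.168.50.101", "192.168.50.102",
            "192.168.50.103", "192.168.50.104"]
rotation = cycle(node_ips)

# Six successive client connections: the 5th and 6th wrap back to nodes 1 and 2.
first_six = [next(rotation) for _ in range(6)]
assert first_six[4:] == ["192.168.50.101", "192.168.50.102"]
```

Round robin spreads connections evenly by count; it does not account for per-node load, which is what the licensed Advanced policies add.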


vii) Add the available interfaces to the subnet, choosing aggregated links from each node

viii) Use LACP for the Aggregation Mode (since we configured the vLAG as LACP)


ix) When the switch port channels are configured properly, the Isilon will show green indicators for all 10 Gb interfaces in the cluster.

We have now completed the network connectivity setup for the Isilon.

Step 5: Setup Isilon NAS Shares

You will need to configure NFS exports on the Isilon for access by clients. We also enable SMB sharing on the datastore to allow a Windows management station to upload and manipulate files on the storage. The SMB share also provides another path to show customers the NAS system's capabilities.

a) Configure Volume

1. Log in to the Web GUI using the node 1 management IP address
2. Navigate to File System -> SmartPools -> Disk Pools
3. Click Manually Add Disk Pool

   a. Pool Name: X200_43TB_6GB-RAM
   b. Protection Level: +2:1
   c. Add all node resources to the pool


4. Click Submit

b) Configure SMB Shares

1. Navigate to File Sharing -> SMB -> Add Share
   a. Share name: ifs
   b. Description: Isilon OneFS
   c. Users and Groups: <default>

2. Click Submit


c) Configure NFS Shares

1. Navigate to File Sharing -> NFS -> Add Export
   a. Description: MountPoint for Servers
   b. Directories: /ifs
   c. Enable mount access to subdirectories
   d. Access Control: <defaults>

2. Click Submit
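Once the export exists, clients can mount it. A hypothetical mount from one of the RHEL servers is shown below; it assumes the SmartConnect service IP 192.168.50.111 configured earlier (a DNS-delegated zone name would be used instead in a production deployment, so round-robin balancing can take effect), and the rsize/wsize values are illustrative, not prescribed by this guide:

```
# Mount the /ifs export over NFSv3 from a RHEL server on the storage VLAN
mkdir -p /mnt/isilon
mount -t nfs -o vers=3,rsize=131072,wsize=131072 192.168.50.111:/ifs /mnt/isilon
```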

Server Configuration and Cabling

The server deployment process is divided into the stages shown in Table 4. Upon completion of the deployment, the servers are ready to communicate with the VCS Fabric and the Isilon cluster. In this example configuration we are using two RHEL servers, but depending on the customer infrastructure, the number of servers and the operating system used can vary.

Table 4. Configure RHEL Server

Steps Server Deployment Step Description

1 Configure MTU and Jumbo Frames



2 Configure Interface bonding on Network Interface Cards (NICs)

3 Enable flow control

While the choice of servers to implement in the compute layer is flexible, it is recommended to use enterprise-class servers designed for the data center. This type of server has redundant power supplies and works well with Scale-Out architectures. In this deployment we used Red Hat Enterprise Linux 6 as the operating system, but other operating systems such as Microsoft Windows or VMware ESXi can be used as well.

Step 1: Configure MTU and Jumbo Frames

As explained in the VDX configuration section, enabling jumbo frames end-to-end provides better network performance. On a RHEL server, jumbo frames can be enabled persistently by editing the interface configuration file, as shown below-

i) Edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

ii) Add the MTU setting-

MTU="9000"

iii) Save and close the file. Restart networking:

# service network restart

NOTE: The above commands may vary depending on the Linux distribution used. For the latest commands, please refer to the Red Hat website (links in the References section).
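A quick way to confirm jumbo frames work end-to-end is a non-fragmenting ping toward the storage; the command below is illustrative, using Isilon node 1's storage address from Table 1. 8972 bytes is the 9000-byte MTU minus 28 bytes of IP and ICMP headers; with fragmentation forbidden (-M do), the ping succeeds only if every hop forwards the full frame:

```
# From the RHEL server, after setting MTU 9000 on the storage interface
ping -M do -s 8972 -c 3 192.168.50.101
```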

Step 2: Configure interface bonding on Network Interface Cards (NICs)


RHEL allows administrators to bind NICs together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. Details can be found on Red Hat's website (refer to the References section of this document). We are using 10 Gbps interfaces on the host for storage traffic and 1 Gbps interfaces for the management network. In this example we configure a bonding device with two slave devices (two separate interfaces), eth0 and eth1, for LACP aggregation using the vim editor, as shown below. Please note that to be successfully aggregated, both interfaces must have the same speed.
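How the bond spreads traffic across its slaves is worth understanding: with the default layer2 transmit hash policy (which mode 4 uses unless xmit_hash_policy is changed), the kernel XORs the source and destination MAC addresses and takes the result modulo the slave count. A sketch of that selection, using the server and partner MACs that appear in the bonding status output later in this step:

```python
def l2_slave_index(src_mac: str, dst_mac: str, n_slaves: int = 2) -> int:
    """Approximation of the bonding driver's layer2 hash: XOR of the
    last byte of src/dst MAC, modulo the number of slave interfaces."""
    src = int(src_mac.replace(":", ""), 16) & 0xFF
    dst = int(dst_mac.replace(":", ""), 16) & 0xFF
    return (src ^ dst) % n_slaves

# All frames between one server/switch MAC pair hash to the same slave,
# preserving per-flow packet ordering.
idx = l2_slave_index("00:19:99:b9:a9:2c", "38:22:d6:ea:2f:fd")
assert idx in (0, 1)
```

A consequence of this design is that a single MAC pair never exceeds one slave's bandwidth; aggregate throughput only appears across many concurrent flows or peers.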

A) Create the initial bonding configuration

i) Configure your SLAVE devices

[root@linux-server ~] vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
SLAVE=yes
MASTER=bond0

[root@linux-server ~] vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
SLAVE=yes
MASTER=bond0

ii) Configure your MASTER (bond) device

[root@linux-server ~] vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
NETWORK=10.80.114.0
NETMASK=255.255.255.0
GATEWAY=10.80.114.1
IPADDR=10.80.114.163

B) Configure the bonding modules

i) Activate the bonding settings (modprobe.conf)


[root@linux-server ~] vim /etc/modprobe.conf
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1

ii) Load the bonding module into the running kernel

[root@linux-server ~] modprobe bond0
[root@linux-server ~] lsmod | grep bond
bonding               126649  0
ipv6                  432161  49 bonding
.......
<truncated output>

C) Configure your network settings

[root@linux-server ~] vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=linux-server

D) Restart the network stack

[root@linux-server ~] service network restart
....
<truncated output>

E) Check the bonding stack for errors

[root@linux-server ~] cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (October 7, 2012)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Active Aggregator Info:
        Aggregator ID: 3
        Number of ports: 2
        Actor Key: 9
        Partner Key: 4
        Partner Mac Address: 38:22:d6:ea:2f:fd

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:19:99:b9:a9:2c
Aggregator ID: 3

Slave Interface: eth1
MII Status: up


Link Failure Count: 0
Permanent HW addr: 00:19:99:b9:a9:2d
Aggregator ID: 3

Please make sure that you can see "LACP rate: fast" in the output. If for some reason the LACP rate shows up as slow, reboot the server.
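This check can also be automated as part of post-deployment validation. A hypothetical health-check sketch that parses the bonding status text and fails fast if the mode or LACP rate differ from what was configured above (shown against a trimmed sample; in production you would read the real /proc/net/bonding/bond0):

```python
# Trimmed sample of /proc/net/bonding/bond0 output from the step above.
SAMPLE = """\
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
802.3ad info
LACP rate: fast
"""

def bond_field(text: str, key: str) -> str:
    """Return the value of the first 'key: value' line in bonding output."""
    for line in text.splitlines():
        if line.startswith(key + ":"):
            return line.split(":", 1)[1].strip()
    raise KeyError(key)

# In production: text = open("/proc/net/bonding/bond0").read()
assert bond_field(SAMPLE, "LACP rate") == "fast"
assert "802.3ad" in bond_field(SAMPLE, "Bonding Mode")
print("bond0 health OK")
```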

Step 3: Enable Flow Control

For better performance, it is recommended to enable flow control on the hosts. Depending on the operating system running, this can be enabled via Device Manager (in Windows), ethtool (in Linux), or vSphere (in a VMware environment). Due to the complexity and the various parameter options available, we will not cover this step in detail in this deployment guide. For details, please refer to the Red Hat website (links in the References section).


Chapter 4 VDX in a Virtualized Environment

Topology

Below is a diagram of the network topology showing a Spine-Leaf architecture with the Isilon cluster nodes attached to the spine and the ESXi servers attached to leaf switches at the Top of Rack (ToR). This provides uniform and redundant access from all servers to all storage and simplifies scale-out when adding more servers and NAS nodes. Low latency, high bandwidth, high availability, and simple management are maintained as physical resources are added.

Figure 3. Deployment Topology with ESXi servers


Isilon intra-cluster communication is handled by a dedicated Isilon Infiniband network. All internal Isilon cluster traffic uses these paths, and must be assigned an internal IP address range during cluster setup.


IP Addresses

When deploying a NAS infrastructure, the logical network infrastructure and IP topology must be planned in advance; in the test bed we use a separate management network with all IP addresses in the default VLAN 1. For the VCS network, VLAN separation is used for storage (VLAN 50), VM application (VLAN 60), and vMotion (VLAN 70), as shown in the table below.

Device          Type    Management_IP   Store_IP        VM_IP           VMotion_IP      x200_IP
                        VLAN 1          VLAN 50         VLAN 60         VLAN 70         InfiniBand
BR-VDX8770-4    VDX     192.168.90.93
BR-VDX8770-4    VDX     192.168.90.94
BR-VDX6740-48   VDX     192.168.90.95
BR-VDX6740-48   VDX     192.168.90.96
BR-VDX6740-48   VDX     192.168.90.97
BR-VDX6740-48   VDX     192.168.90.98
IS-x200-1       NAS     192.168.90.101  192.168.50.101                                  172.16.1.101
IS-x200-2       NAS     192.168.90.102  192.168.50.102                                  172.16.1.102
IS-x200-3       NAS     192.168.90.103  192.168.50.103                                  172.16.1.103
IS-x200-4       NAS     192.168.90.104  192.168.50.104                                  172.16.1.104
esx231          ESXi    192.168.90.231  192.168.50.231  192.168.60.231  192.168.70.231
esx219          ESXi    192.168.90.219  192.168.50.219  192.168.60.219  192.168.70.219
esx221          ESXi    192.168.90.221  192.168.50.221  192.168.60.221  192.168.70.221
esx225          ESXi    192.168.90.225  192.168.50.225  192.168.60.225  192.168.70.225
vCenter         VMware  192.168.90.100
vmio-analyzer1  VMware                                  192.168.60.240
vmio-analyzer2  VMware                                  192.168.60.241
RH5.5           VM                                      192.168.60.242
w2k8-VM1        VM                                      192.168.60.243
w2k8-VM2        VM                                      192.168.60.244
Web Server      VM                                      192.168.60.245
VMS (Security)  VMware                                  192.168.60.246

Table 5: IP addresses in an ESXi environment
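A convention visible in Table 5 is that each device keeps the same host octet across every VLAN it participates in (for example, esx231 is .231 on the management, storage, VM, and vMotion subnets). A minimal sketch of that scheme — the helper names are ours, not from the guide:

```python
# Addressing convention from Table 5: a device keeps the same host octet
# across every VLAN it participates in (e.g. esx231 -> .231 everywhere).
SUBNETS = {"mgmt": "192.168.90.", "storage": "192.168.50.",
           "vm": "192.168.60.", "vmotion": "192.168.70."}

def plan(device_octets, networks):
    """Return {device: {network: ip}} for the given host octets."""
    return {dev: {net: SUBNETS[net] + str(octet) for net in networks}
            for dev, octet in device_octets.items()}

esxi = plan({"esx219": 219, "esx221": 221, "esx225": 225, "esx231": 231},
            ["mgmt", "storage", "vm", "vmotion"])
isilon = plan({"IS-x200-%d" % i: 100 + i for i in range(1, 5)},
              ["mgmt", "storage"])
print(esxi["esx231"]["storage"])  # 192.168.50.231
```

Keeping one octet per device makes ping checks and troubleshooting much easier later, since the host identity is obvious from any of its addresses.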

Description

The Isilon storage system uses bonded interfaces for client connections to increase performance and to maintain availability should one or more 10Gb connections fail. Each node uses a port channel group configured on the two spine VDX 8770-4 switches (RB1 and RB2).

Node1: ports 1/2/41 & 2/2/42, port-channel 101
Node2: ports 1/2/43 & 2/2/44, port-channel 102
Node3: ports 1/2/45 & 2/2/46, port-channel 103
Node4: ports 1/2/47 & 2/2/48, port-channel 104

Assumptions
1. The fabric should already be configured, with RBridge and VCS IDs assigned to the switches.

NOTE: When the VCS is deployed in distributed mode, the fabric is configured as a Logical Chassis from a single entry point using the VCS Virtual IP, and configuration changes are automatically saved across all switches in the fabric. The following examples show configuration for distributed mode (Logical Chassis).

Table 6: VDX Configuration Steps

Step  VDX Deployment Step Description
1     Connect 10Gb interfaces to RB1 & configure ports for VLAN access
2     Enable Flow Control Support
3     Setup VMware host Network Connections
4     VMware Storage Configuration
5     Configuring Advanced Settings for Isilon Best Practices
6     Configure additional Options for VMware Clusters and VMs

Step 1: Connect 10Gb interfaces to RB1 & configure ports for VLAN access

1. SSH to the VDX switch or connect to the serial console
2. Configure VLANs

-----------
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Vlan 50
VDX8770_RB1(config-Vlan-50)# description IsilonTest1_Storage
VDX8770_RB1(config)# interface Vlan 60
VDX8770_RB1(config-Vlan-60)# description IsilonTest1_VM_Application
VDX8770_RB1(config)# interface Vlan 70
VDX8770_RB1(config-Vlan-70)# description IsilonTest1_vMotion
-----------

NOTE: When the VCS is deployed in distributed mode, the fabric operates as a Logical Chassis, so VLANs only need to be configured once to be available across the complete VCS.

3. Configure vLAG (LACP Port Channel) for Isilon Node1 connected to RB1 & RB2

-----------
VDX8770_RB1(config)# interface Port-channel 101
VDX8770_RB1(config-Port-channel-101)# description vLAG_Isilon_Node1
VDX8770_RB1(config-Port-channel-101)# switchport
VDX8770_RB1(config-Port-channel-101)# switchport mode access
VDX8770_RB1(config-Port-channel-101)# switchport access vlan 50
VDX8770_RB1(config-Port-channel-101)# no shutdown
-----------

4. Add the physical ports on RB1 & RB2 (where Isilon node 1 is connected) to the vLAG

-----------
VDX8770_RB1(config)# int ten 1/2/41
VDX8770_RB1(conf-if-te-1/2/41)# channel-group 101 mode active type standard
VDX8770_RB1(conf-if-te-1/2/41)# no shutdown
VDX8770_RB1(conf-if-te-1/2/41)# int ten 2/2/42
VDX8770_RB1(conf-if-te-2/2/42)# channel-group 101 mode active type standard
VDX8770_RB1(conf-if-te-2/2/42)# no shutdown
VDX8770_RB1(conf-if-te-2/2/42)# end
-----------

5. Repeat steps 3-4 to enable vLAGs for Isilon nodes 2-4.
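Because nodes 2-4 differ only in the port numbers and port-channel ID, the repeated configuration lends itself to generation. A sketch, using Python purely as a text generator; the emitted CLI mirrors steps 3-4 above, and the node-to-port map is taken from the Description section:

```python
# Node-to-port map from the Description section: node N attaches to RB1 and
# RB2 on the listed TenGig ports, bundled into port-channel 100+N (VLAN 50).
NODES = {1: ("1/2/41", "2/2/42"), 2: ("1/2/43", "2/2/44"),
         3: ("1/2/45", "2/2/46"), 4: ("1/2/47", "2/2/48")}

def vlag_config(node, ports, vlan=50):
    """Emit the vLAG + member-port CLI for one Isilon node."""
    po = 100 + node
    lines = [f"interface Port-channel {po}",
             f" description vLAG_Isilon_Node{node}",
             " switchport",
             " switchport mode access",
             f" switchport access vlan {vlan}",
             " no shutdown"]
    for port in ports:
        lines += [f"interface TenGigabitEthernet {port}",
                  f" channel-group {po} mode active type standard",
                  " no shutdown"]
    return "\n".join(lines)

config = "\n".join(vlag_config(n, p) for n, p in sorted(NODES.items()))
print(config)
```

The generated text can be pasted into the single Logical Chassis entry point; review it against the steps above before applying.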

Step 2: Enable Flow Control Support

1. Enable QoS flow control for both tx and rx on RB1 and RB2

-----------
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Port-channel 101
VDX8770_RB1(config-Port-channel-101)# qos flowcontrol tx on rx on
-----------

2. Validate the QoS configuration on vLAG Port-channel 101

-----------
VDX8770_RB1# show running-config interface Port-channel 101
interface Port-channel 101
 vlag ignore-split
 switchport
 switchport mode access
 switchport access vlan 50
 qos flowcontrol tx on rx on
 no shutdown
-----------

3. Repeat steps 1-2 to enable flow control for Isilon nodes 2-4

Step 3: Setup VMware host Network Connections

The virtual distributed switch provides network access to storage via the physical uplinks on each ESXi server. Best practice is to use separate virtual switches for VM application access and vMotion, with dedicated uplinks for each distributed or standard vSwitch. In this document we only show the setup for the distributed vSwitch used to connect the ESXi servers to the NFS namespace provided by the Isilon cluster for use as a datastore.

NOTE: In currently available versions of ESXi, all traffic to a single NFS mount point always uses a single uplink interface, even if multiple interfaces are defined in a LAG. The LAG provides redundancy, not load balancing. Some level of load balancing can be achieved by configuring Load Based Teaming (LBT), which kicks in when a single vmnic reaches 75% utilization. For more information on LBT see vmware.com.

After completing the VMware ESXi configuration, we will configure AMPP so that the VCS automatically integrates with vCenter and places VMs in VLANs based on Port Group membership.


Each ESXi server uses vLAGs configured on the ToR switches in its rack (see Figure 1). In the following we go through the configuration for ESXi_231. Server ESXi_231 is connected on ports 3/0/37 and 4/0/37, defined as port-channel 231.

Prerequisites
1. Two uplinks per server for the virtual switch used for storage traffic.
2. For each ESXi server, a VMkernel interface is defined for NFS traffic, with an IP address assigned according to the IP topology.
3. vCenter Server (or vCenter Appliance) is already deployed and the servers are already added/managed by the vCenter.

A) Configure VCS ports with connected uplink interfaces for ESXi storage path

1. Configure vLAG (Static Port Channel) for ESXi_231 connected to RB3 & RB4

----------
VDX6740_RB1(config)# interface Port-channel 231
VDX6740_RB1(config-Port-channel-231)# description vLAG_ESXi231_Storage
VDX6740_RB1(config-Port-channel-231)# port-profile-port
VDX6740_RB1(config-Port-channel-231)# no shutdown
----------

2. Add the physical ports on RB3 & RB4 (where ESXi231 is connected) to the vLAG

----------
VDX6740_RB1(config)# int ten 3/0/37
VDX6740_RB1(conf-if-te-3/0/37)# channel-group 231 mode on type standard
VDX6740_RB1(conf-if-te-3/0/37)# no shutdown
VDX6740_RB1(config)# int ten 4/0/37
VDX6740_RB1(conf-if-te-4/0/37)# channel-group 231 mode on type standard
VDX6740_RB1(conf-if-te-4/0/37)# no shutdown
----------

3. Repeat steps 1-2 for the interfaces on the remaining ESXi nodes.

B) Create Distributed vSwitch in vCenter

1. Login to vSphere Client and press Ctrl-Shift-N to open the Network inventory


2. Click Add a vSphere Distributed Switch

3. Set the switch name: dvS-Storage

4. Add the host and both 10G physical interfaces


5. Click Finish to complete the creation of the distributed vSwitch


6. Edit Settings for dvS-Storage
7. Under Advanced, enable CDP Operation = Both (this is necessary for the VCS integration to work)
8. Click OK
9. Edit Settings for dvPortGroup
10. Change name to dvPG-50_Storage

NOTE: It is useful to include the VLAN ID in the port group name for easy identification.

11. Set the VLAN Policy to 50


12. Verify the NIC Teaming option is "Route based on IP hash", since the uplinks connect to a vLAG


13. Click OK to complete the Port group configuration

C) Configure Host networking in vCenter

1. Navigate to Hosts and Clusters in vSphere Client
2. Select the first ESXi node and open Networking in the Configuration tab
3. Select the Distributed Switch dvS-Storage
4. Click Manage virtual adapters and select Add
5. Select New virtual adapter
6. Select VMkernel type
7. Select port group dvPG-50_Storage
8. Enter the IP address and netmask for the ESXi host
   a. IP address: 192.168.50.xx
   b. Netmask: 255.255.255.0


9. Review settings and click Finish
10. Repeat configuration steps for all ESXi hosts

D) Register vCenter in VCS

1. SSH to the VDX switch or connect to the serial console

-------------
VDX6740_RB1# conf t
VDX6740_RB1(config)# vcenter IsilonTest1 url https://192.168.90.100 username root password "Password!"
VDX6740_RB1(config)# vcenter IsilonTest1 activate
VDX6740_RB1(config)# vcenter IsilonTest1 interval 10
-------------

2. Verify status of vCenter networks

-------------
VDX6740_RB1# show vnetwork vcenter status
vCenter              Start                Elapsed (sec)  Status
==================== ==================== ============== ========
IsilonTest1          2013-04-09 20:26:07  11             Success

VDX6740_RB1# show vnetwork dvs vcenter IsilonTest1
dvSwitch             Host       Uplink Name  Switch Interface
==================== ========== ============ ================
dvS-Storage          ESXi_221   vmnic4       -
                                vmnic5       -
                     ESXi_231   vmnic4       -
                                vmnic5       -
Total Number of Entries: 4

VDX6740_RB1# show vnetwork dvpgs
dvPortGroup                dvSwitch       Vlan
========================== ============== =========
dvPG-50_Storage            dvS-Storage    50-50,
dvPG-60_VMs                dvS-VMs        60-60,
dvPG-70_vMotion            dvS-vMotion    70-70,
dvS-Storage-DVUplinks-17   dvS-Storage    0-4094,
dvS-VMs-DVUplinks-20       dvS-VMs        0-4094,
dvS-vMotions-DVUplinks-23  dvS-vMotion    0-4094,
Total Number of Entries: 6

VDX6740_RB1# sh vnet vms
Virtual Machine      Associated MAC     IP Addr  Host
==================== ================== ======== ==========
vCenter Server       00:0c:29:56:8a:00  -        ESXi_221
vmware-io-analyzer   00:50:56:bb:60:24  -        ESXi_231
w2k8-VM1             00:50:56:99:00:01  -        ESXi_211
Total Number of Entries: 3
-------------

Confirm network connections between ESXi hosts and Isilon storage
1. SSH to each ESXi host
2. Run ping checks between the host, its counterparts, and the X200 nodes for dvPG-50_Storage
   a. vmkping 192.168.50.101
   b. vmkping 192.168.50.102
   c. vmkping 192.168.50.103
   d. vmkping 192.168.50.104
   e. vmkping 192.168.50.111
3. Run ping checks between the host and its counterparts for dvPG-70_vMotion
   a. vmkping 192.168.70.231
   b. vmkping 192.168.70.219
   c. vmkping 192.168.70.227
   d. vmkping 192.168.70.229
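The ping matrix grows with the cluster, so it can help to generate the command list per host rather than type it. A sketch using the host octets from Table 5 (an assumption for this example; the SmartConnect VIP .111 is included, and each host's peer list simply excludes itself):

```python
# Test-bed octets from Table 5; .111 is the SmartConnect VIP.
ESXI_OCTETS = [219, 221, 225, 231]
ISILON_OCTETS = [101, 102, 103, 104, 111]

def targets(host_octet):
    """vmkping commands one ESXi host should run: all storage targets on
    VLAN 50, plus every *other* host's vMotion interface on VLAN 70."""
    storage = [f"vmkping 192.168.50.{o}" for o in ISILON_OCTETS]
    vmotion = [f"vmkping 192.168.70.{o}" for o in ESXI_OCTETS
               if o != host_octet]
    return storage + vmotion

for cmd in targets(231):
    print(cmd)
```

The printed commands can be pasted into the ESXi host shell after SSHing in.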

Step 4: VMware Storage Configuration

NFS allows multiple connections from a single host, meaning an ESXi host can mount the same NFS export multiple times as separate datastores to distribute sessions. For demo purposes, set up at least one datastore using the Isilon SmartConnect IP address for storage failover. Add multiple datastores using the same IP if desired.

A) Add Isilon Datastores to ESXi Hosts

1. Login to vSphere Client and press Ctrl-Shift-H to open Hosts and Clusters


2. Select the first ESXi node and open Storage in the Configuration tab
3. Click Add Storage…
   a. Type: Network File System
4. Enter NFS access information using the Isilon SmartConnect IP
   a. Server: 192.168.50.111
   b. Folder: /ifs
   c. Datastore Name: IsilonVIP-50

NOTE: Change the datastore name to support additional mounts
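On ESXi 5.x the same mount can also be added from the host shell with `esxcli storage nfs add`, which makes repeating the mount for several datastore names easy to script. A sketch that emits one command per additional datastore; the numbered datastore names are illustrative, not from the guide:

```python
# The VIP and export path are the ones used in this guide; the
# "IsilonVIP-50-N" datastore names are our own illustrative convention.
VIP, SHARE = "192.168.50.111", "/ifs"

def nfs_mount_cmds(count):
    """One esxcli command per datastore backed by the same NFS export."""
    return [f"esxcli storage nfs add -H {VIP} -s {SHARE} -v IsilonVIP-50-{i}"
            for i in range(1, count + 1)]

for cmd in nfs_mount_cmds(2):
    print(cmd)
```

Each extra datastore opens its own NFS session, which is the distribution mechanism the paragraph above describes.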


5. Review your settings and click Finish
6. Repeat these steps for remaining ESXi hosts


Step 5: Configuring Advanced Settings for Isilon Best Practices

At this point, the system is ready to create virtual machines on the NFS datastore(s). EMC has additional recommended best practice options that may improve performance and manageability in larger demos or production environments.

A) Enable advanced parameters for Isilon storage in a VMware environment

These recommendations come from the document "EMC Isilon Scale-out Storage with VMware vSphere 5" provided by EMC.

1. Enable SDRS in the vSphere Web Client:
   a. Browse to the datastore cluster in the vSphere Web Client navigator
   b. Click the Manage tab and click Settings
   c. Under Services, select Storage DRS and click Edit
   d. Select Turn ON vSphere DRS and click OK
   e. Optionally, disable only I/O-related functions of SDRS
      i. Under Storage DRS, click Edit
      ii. Deselect the Enable I/O metric for Storage DRS option and click OK

2. Enable SIOC
   a. Select a datastore in the vSphere Client inventory and click the Configuration tab
   b. Click Properties
   c. Enable Storage I/O Control
   d. Leave Congestion Threshold at 30ms
   e. Note: This setting is specific to the datastore and not to the host.

3. Download the VAAI-NAS plugin from EMC and install with VMware Update Manager to offload certain cloning, snapshot, and vMotion operations from ESXi to the Isilon cluster.
   a. Full File Clone – moves cloning jobs to the storage backend, reducing ESXi load
   b. Extended Statistics – improves utilization accuracy of VMs
   c. Reserve Space – enables thin provisioning for Eager/Lazy Zeroed virtual disks

4. Enable vSphere API for Storage Awareness (VASA)
   a. SSH to any node in the cluster and log in as root
   b. Enable VASA by running the following commands:

      isi services apache2 enable
      isi services isi_vasa_d enable

   c. Download the vendor provider certificate to your desktop via http://<ip_addr>
   d. In vSphere, navigate to Home -> Administration -> Storage Providers and click Add
   e. Fill out the following fields in the Add Vendor Provider window:


      Name: a name for this VASA provider, e.g. EMC Isilon Systems
      URL: http://<ip_addr>:8081/vasaprovider
      Login: root
      Password: root password

f. Enable Use Vendor Provider Certificate checkbox g. Browse to the Certificate location for the certificate on your desktop h. Click OK.

Note: to disable VASA later, run the following commands from SSH:

   isi services apache2 disable
   isi services isi_vasa_d disable

5. Define custom attributes for VASA
   a. 10g
   b. Clustername
   c. Diskpool
   d. iSCSI
   e. NplusX
   f. Replica
6. Create VM storage profiles
7. Assign multiple dynamic IPs to each Isilon node in a dynamic IP pool
8. Mount the /ifs datastore to each ESXi host in a mesh topology
9. Enable jumbo frames on 10G storage links
10. Configure MCT paths between switches for cluster nodes
11. Enable X200 protection at N+2:1 using SmartPool policy
12. Set SmartCache optimization to Random using SmartPool policy
13. Use a single dedicated datastore to hold the hypervisor swap files (.vswp) for all ESXi hosts.

Step 6: Configure additional Options for VMware Clusters and VMs

The following items were useful specifically for building the Isilon setup in a closed environment, but may apply in other environments as well, so we document them here.

A) vSphere Optimizations

1. Disable Shell Warnings for SSH/remote access in vSphere

NOTE: The default settings for ESXi will show a security warning when SSH is enabled, and since most production activities do not require SSH, VMware recommends that administrators only enable SSH when they need it. For proof of concept and demo labs, or full-time SSH access, it’s useful to disable the SSH warning for a clean interface.


a. Select the ESXi host from the Inventory
b. Select Configuration tab, Advanced Settings from the Software menu
c. Set UserVars > UserVars.SuppressShellWarning = 1
d. You can also do this via the command line:

   vim-cmd hostsvc/advopt/update UserVars.SuppressShellWarning long 1

2. For IO-intensive VMs, use the PVSCSI (Paravirtual) adapter, which increases throughput and reduces CPU overhead
3. Align VMDK files at 8K boundaries for OneFS & create VM templates

Note: Since Windows Vista and Windows Server 2008, all Windows versions align automatically during OS installation. Previous versions and upgraded systems are not aligned. Note: RedHat & CentOS Linux version 6 systems align automatically during OS installation. Previous versions and upgraded systems are not aligned.

   a. Format legacy Windows disks with 8K blocks with diskpart (see http://support.microsoft.com/kb/923076)
      i. create partition primary align=8
4. Use an 8192-byte allocation unit (block size) when formatting virtual disks
   a. Windows: DISKPART> format fs=NTFS label=<"label"> unit=8192 quick
   b. Linux: mke2fs -b 8192 -L <"label"> /dev/<dev#>

5. Advanced NFS settings for vSphere are available from VMware in KB 1007909. Heed all cautions and recommendations from VMware and Isilon; your mileage may vary. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007909#NFSHEap
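The 8K alignment guidance in item 3 comes down to a divisibility check: a partition or filesystem offset is OneFS-friendly when it is an exact multiple of 8192 bytes. A minimal helper illustrating the check; the two example offsets are the common Windows defaults mentioned in the notes above:

```python
# OneFS works in 8 KiB blocks; an offset is 8K-aligned when it is an exact
# multiple of 8192 bytes.
BLOCK = 8192

def is_aligned(offset_bytes, block=BLOCK):
    return offset_bytes % block == 0

print(is_aligned(1024 * 1024))  # True  (1 MiB offset, Vista/2008+ default)
print(is_aligned(63 * 512))     # False (legacy 63-sector offset)
```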

B) Windows VM Optimizations

1. Create a Windows Server 2008 R2 Template
2. Enable copy/paste through vSphere
   a. Edit Settings -> Options -> Advanced -> General
   b. Click Configuration Parameters
   c. Add isolation.tools.copy.disable = false
   d. Add isolation.tools.paste.disable = false
3. Disable Password Expiration in local Group Policy
   a. Run gpedit
   b. Navigate to Computer Config/Windows Settings/Security Settings/Account Policies/Password Policy
   c. Set Maximum password age = 0
4. Disable Require Ctrl-Alt-Del to login
   a. Run gpedit
   b. Navigate to Computer Config/Windows Settings/Security Settings/Local Policies/Security Options
   c. Enable Interactive logon: Do not require Ctrl+Alt+Del

5. Run gpupdate to apply new policies


6. Extract Sysinternals to C:\Program Files\SysinternalsSuite and add it to the Path
   a. System Properties/Advanced/Environment Variables
7. Install PuTTY utility (installs & adds path for C:\Program Files\PuTTY)
   a. Run putty-0.62_x64-installer.exe
   b. Import putty-sess.reg to preload putty sessions for the demo
8. Create a hosts file for internal name resolution in C:\Windows\System32\drivers\etc\hosts

   192.168.60.240  VMIO1
   192.168.60.241  VMIO2
   192.168.60.242  RHEL242
   192.168.60.243  Win2K8-VM1
   192.168.60.244  Win2K8-VM2
   192.168.60.245  WEBSRV1
   192.168.60.246  VMS
   192.168.50.101  STORE1
   192.168.50.102  STORE2
   192.168.50.103  STORE3
   192.168.50.104  STORE4
   192.168.50.111  ISILONVIP-50

9. Configure w32time server on W2K8-VM1
   a. Import w32time-server.reg to the registry
   b. Run sc triggerinfo w32time start/networkon stop/networkoff
   c. Run net start w32time
10. Configure w32time client on other hosts
   a. Import w32time-client.reg to the registry
   b. Run w32tm /config /manualpeerlist:"192.168.60.243,0x01" /syncfromflags:manual /update
   c. Run sc triggerinfo w32time start/networkon stop/networkoff
   d. Run net start w32time

C) Windows 8 Optimizations

1. Enable Built-in Administrator account
   a. Open compmgmt.msc and navigate to Local Users
   b. Right-click Administrator and select Set Password
      i. Password: Password!
   c. Right-click Administrator and select Properties
      i. Uncheck Account Disabled


Appendix A

Bill of Materials

The following products are used in this deployment:

Identifier          Vendor      Model         Notes
Spine Switch        Brocade     VDX 8770-4    Modular switch with 10Gb and 40Gb interfaces
Spine Switch        Brocade     VDX 8770-4    Modular switch with 10Gb and 40Gb interfaces
ToR                 Brocade     VDX 6740-48   48 ports of 10Gb
ToR                 Brocade     VDX 6740-48   48 ports of 10Gb
ToR                 Brocade     VDX 6740-48   48 ports of 10Gb
ToR                 Brocade     VDX 6740-48   48 ports of 10Gb
Management Network  Brocade     ICX 6610-48P
RH Server           x86         X3630 M3      Red Hat Enterprise Linux (RHEL) 6
RH Server           x86         X3630 M3      Red Hat Enterprise Linux (RHEL) 6
Isilon Node         EMC Isilon  X200          4 total in cluster


Appendix B

Management Network

For completeness, we briefly describe the setup of the switch used for the management network in the test bed. All switches, servers, and storage cluster nodes have management network interfaces separate from the production or dataflow network. These connect to the Top of Rack management switch, supplied in this configuration example by a Brocade ICX 6610-48P. We apply basic switch authentication with SSH logins so that the ICX login process is similar to the VDX, for a consistent management experience.

Pre-requisites
1. No directory authentication exists in this setup, so we will use internal accounts and passwords.

Configure ICX Switch

1. Connect to the serial console of the ICX switch
2. Enter Enable mode and then Config mode
   a. enable
   b. conf t
3. Configure switch addressing
   a. ip address 192.168.90.90 255.255.255.0
   b. no ip dhcp-client enable
   c. ip default-gateway 192.168.90.1
4. Configure authentication
   a. username admin priv 0 create <password>
   b. aaa authentication web-server default local
   c. aaa authentication enable default local
   d. aaa authentication login default local
   e. aaa authentication login privilege-mode
   f. console timeout 15
   g. enable telnet authentication
   h. telnet timeout 15
5. Configure SSH access
   a. crypto key generate
   b. ip ssh key-auth no
   c. ip ssh scp enable
6. Optionally, disable telnet access
   a. no telnet server


Appendix C

References

• Data Center Infrastructure: Base Reference Architecture
  o VCS Fabric Blocks
  o Data Center Template, Server Virtualization
  o Data Center Template, VCS Fabric Leaf-Spine
• Brocade Network OS Administrator's Guide, v4.0.0
• Brocade Network OS Command Reference, v4.0.0
• Brocade 6740/6740T Hardware Reference Manual
• Red Hat Portal: Using Interface Channel Bonding

Note: The following require login to the EMC PowerLink site
• Isilon OneFS Administration Guide, v7.0
  https://support.emc.com/docu44506_OneFS-7.0-Administration-Guide.pdf?language=en_US
• Isilon OneFS Command Reference, v7.0
  https://support.emc.com/docu44507_OneFS-7.0-Command-Reference.pdf?language=en_US
• Best Practice Guide, EMC Isilon Scale-out Storage with VMware vSphere 5
  https://support.emc.com/docu39424_Best-Practice-Guide:-EMC-Isilon-Scale-Out-Storage-with-VMware-vSphere-5.pdf?language=en_US


Appendix D

About Brocade Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)
