Proven Infrastructure
EMC VSPEX
Abstract
This document describes the EMC VSPEX Proven Infrastructure with
Brocade VDX networking for private cloud deployments with
Microsoft Hyper-V and EMC VNXe for up to 100 virtual machines using
iSCSI storage.
October 2013
EMC® VSPEX™ with Brocade Networking
Solutions for Private Cloud Microsoft® Windows® Server 2012 with Hyper-V™ for up to
100 Virtual Machines
Enabled by Brocade VDX with VCS Fabrics, EMC VNXe™ and EMC Next-
Generation Backup
Copyright © 2013 EMC Corporation. All rights reserved. Published in the
USA.
Published October 2013
EMC believes the information in this publication is accurate as of its
publication date. The information is subject to change without notice.
The information in this publication is provided "as is." EMC Corporation
makes no representations or warranties of any kind with respect to the
information in this publication, and specifically disclaims implied warranties
of merchantability or fitness for a particular purpose. Use, copying, and
distribution of any EMC software described in this publication requires an
applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of
EMC Corporation in the United States and other countries. All other
trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to
the technical documentation and advisories section on the EMC online
support website.
© 2013 Brocade Communications Systems, Inc. All Rights Reserved.
ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric
OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are
registered trademarks, and HyperEdge, The Effortless Network, and The
On-Demand Data Center are trademarks of Brocade Communications
Systems, Inc., in the United States and/or in other countries. Other brands,
products, or service names mentioned may be trademarks of their
respective owners.
Notice: This document is for informational purposes only and does not set
forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered, or to be offered, by Brocade.
Brocade reserves the right to make changes to this document at any time,
without notice, and assumes no responsibility for its use. This informational
document describes features that may not be currently available.
Contact a Brocade sales office for information on feature and product
availability. Export of technical data contained in this document may
require an export license from the United States government.
Part Number H10939.1
Contents
Chapter 1 Executive Summary 13
Introduction ........................................................................................................................... 14
Target audience ................................................................................................................ 14
Document purpose .......................................................................................................... 14
Business needs ..................................................................................................................... 15
Chapter 2 Solution Overview 17
Introduction ........................................................................................................................... 18
Virtualization ......................................................................................................................... 18
Compute ................................................................................................................................ 18
Network ................................................................................................................................... 19
Storage .................................................................................................................................... 19
Chapter 3 Solution Technology Overview 21
Overview ................................................................................................................................. 22
Summary of key components ................................................................................... 23
Virtualization ......................................................................................................................... 24
Overview .............................................................................................................................................. 24
Microsoft Hyper-V ......................................................................................................................... 24
Microsoft System Center Virtual Machine Manager (SCVMM) ....................... 24
High Availability with Hyper-V Failover Clustering .................................................... 24
EMC Storage Integrator ............................................................................................................. 25
Compute ................................................................................................................................ 25
Network ................................................................................................................................... 27
Overview .............................................................................................................................................. 27
Brocade VDX Ethernet Fabric switch series .................................................................. 27
Server and Storage Virtualization Automation Support ....................................... 28
Storage .................................................................................................................................... 29
Overview .............................................................................................................................................. 29
EMC VNXe series ............................................................................................................................. 29
Backup and recovery .................................................................................................... 30
EMC Avamar ..................................................................................................................................... 30
Other technologies .......................................................................................................... 30
EMC XtremSW Cache (Optional) ......................................................................................... 30
Chapter 4 Solution Architecture Overview 33
Solution Overview ............................................................................................................. 34
Solution architecture ....................................................................................................... 34
Overview .............................................................................................................................................. 34
Architecture for up to 50 virtual machines.................................................................... 35
Architecture for up to 100 virtual machines ................................................................. 36
Key components ............................................................................................................................ 36
Hardware resources ..................................................................................................................... 38
Software resources ........................................................................................................................ 40
Server configuration guidelines ................................................................................ 40
Overview .............................................................................................................................................. 40
Hyper-V memory virtualization .............................................................................................. 40
Memory configuration guidelines ....................................................................................... 42
Brocade network configuration guidelines ...................................................... 43
Overview .............................................................................................................................................. 43
VLAN ....................................................................................................................................................... 43
Enable jumbo frames .................................................................................................................. 45
MC/S ....................................................................................................................................................... 45
Link Aggregation ............................................................................................................................ 45
Brocade Virtual Link Aggregation Group (vLAG)..................................................... 45
Brocade Inter-Switch Link (ISL) Trunks ................................................................................ 45
Equal-Cost Multipath (ECMP) ................................................................................................ 46
Pause Flow Control ....................................................................................................................... 46
Storage configuration guidelines ............................................................................ 47
Overview .............................................................................................................................................. 47
Hyper-V storage virtualization for VSPEX ......................................................................... 48
Storage layout for 50 virtual machines ............................................................................ 49
Storage layout for 100 virtual machines ......................................................................... 50
High availability and failover ..................................................................................... 51
Overview .............................................................................................................................................. 51
Virtualization layer ......................................................................................................................... 51
Compute layer ................................................................................................................................ 51
Brocade VDX Network layer ................................................................................................... 52
Storage layer ..................................................................................................................................... 53
Backup and recovery configuration guidelines ............................................ 54
Overview .............................................................................................................................................. 54
Backup characteristics ............................................................................................................... 54
Backup layout for up to 100 virtual machines.............................................................. 55
Sizing guidelines .................................................................................................................. 55
Reference workload........................................................................................................ 56
Overview .............................................................................................................................................. 56
Defining the reference workload ........................................................................................ 56
Applying the reference workload .......................................................................... 57
Overview .............................................................................................................................................. 57
Example 1: Custom-built application ............................................................................... 57
Example 2: Point of sale system ............................................................................................ 57
Example 3: Web server ............................................................................................................... 58
Example 4: Decision-support database .......................................................................... 58
Summary of examples ................................................................................................................ 58
Implementing the reference architectures ...................................................... 59
Overview .............................................................................................................................................. 59
Resource types ................................................................................................................................ 59
CPU resources .................................................................................................................................. 59
Memory resources ......................................................................................................................... 60
Brocade network resources .................................................................................................... 60
Storage resources .......................................................................................................................... 61
Implementation summary ........................................................................................................ 61
Quick assessment .............................................................................................................. 62
Overview .............................................................................................................................................. 62
CPU requirements .......................................................................................................................... 62
Memory requirements ................................................................................................................. 63
Storage performance requirements ................................................................................. 63
I/O operations per second (IOPS) ....................................................................................... 63
I/O size ................................................................................................................................................... 63
I/O latency ......................................................................................................................................... 64
Storage capacity requirements ........................................................................................... 64
Determining equivalent Reference virtual machines ............................................ 64
Fine tuning hardware resources ........................................................................................... 67
Chapter 5 VSPEX Configuration Guidelines 71
Overview ................................................................................................................................. 72
Pre-deployment tasks ..................................................................................................... 73
Overview .............................................................................................................................................. 73
Deployment prerequisites ........................................................................................................ 74
Customer configuration data ................................................................................... 75
Prepare and Configure Brocade VDX switches ............................................ 75
Overview .............................................................................................................................................. 75
Brocade VDX Switch Platform Considerations ........................................................... 75
Prepare Brocade Network Infrastructure ....................................................................... 76
Complete Network Cabling ................................................................................................... 77
Brocade VDX 6710 and 6720 Switch Configuration Summary ............. 78
Brocade VDX 6710 Configuration........................................................................... 78
Step 1: Verify VDX NOS Licenses ......................................................................................... 79
Step 2: Assign and Verify VCS ID and RBridge ID..................................................... 79
Step 3: Assign Switch Name ................................................................................................... 80
Step 4: VCS Fabric ISL Port Configuration ..................................................................... 80
Step 5: Create required VLANs ............................................................................................ 83
Step 6: Create vLAG for Microsoft Server ...................................................................... 84
Step 7: Configure Switch Interfaces for VNXe ........................................................... 87
Step 8: Connecting the VCS Fabric to an existing Infrastructure through
Uplinks .................................................................................................................................................... 90
Step 9 - Configure MTU and Jumbo Frames ................................................................ 92
Step 10 - AMPP configuration for live migrations ...................................................... 92
Brocade VDX 6720 Configuration........................................................................... 93
Step 1: Verify VDX NOS Licenses ......................................................................................... 93
Step 2: Assign and Verify VCS ID and RBridge ID..................................................... 94
Step 3: Assign Switch Name ................................................................................................... 95
Step 4: VCS Fabric ISL Port Configuration ..................................................................... 95
Step 5: Create required VLANs ............................................................................................ 98
Step 6: Create vLAG for Microsoft Server ...................................................................... 99
Step 7: Configure Switch Interfaces for VNXe ......................................................... 104
Step 8: Connecting the VCS Fabric to an existing Infrastructure through
Uplinks .................................................................................................................................................. 107
Step 9 - Configure MTU and Jumbo Frames .............................................................. 109
Step 10 - AMPP configuration for live migrations .................................................... 109
Prepare and configure storage array ................................................................ 110
Overview ............................................................................................................................................ 110
VNXe configuration .................................................................................................................... 110
Provision storage for iSCSI datastores ............................................................................. 111
Install and configure Hyper-V hosts ..................................................................... 112
Overview ............................................................................................................................................ 112
Install Hyper-V and configure failover clustering .................................................... 113
Configure Windows host networking .............................................................................. 113
Publish VNXe datastores to Hyper-V ............................................................................... 113
Connect Hyper-V datastores ............................................................................................... 113
Plan virtual machine memory allocations ................................................................... 114
Install and configure SQL server database ..................................................... 115
Overview ............................................................................................................................................ 115
Create a virtual machine for Microsoft SQL server ................................................ 115
Install Microsoft Windows on the virtual machine .................................................. 115
Install SQL Server ........................................................................................................................... 116
Configure SQL Server for SCVMM ..................................................................................... 116
System Center Virtual Machine Manager server deployment .......... 117
Overview ............................................................................................................................................ 117
Create a SCVMM host virtual machine ........................................................................ 118
Install the SCVMM guest OS .................................................................................................. 118
Install the SCVMM server ......................................................................................................... 118
Install the SCVMM Management Console .................................................................. 118
Install the SCVMM agent locally on a host ................................................................. 118
Add a Hyper-V cluster into SCVMM ................................................................................. 118
Create a virtual machine in SCVMM .............................................................................. 118
Create a template virtual machine ................................................................................ 119
Deploy virtual machines from the template virtual machine ........................ 119
Summary ............................................................................................................................... 119
Chapter 6 Validating the Solution 121
Overview ............................................................................................................................... 122
Post-install checklist ........................................................................................................ 123
Deploy and test a single virtual server ............................................................... 123
Verify the redundancy of the solution components ................................ 123
Appendix A Bill of Materials 125
Bill of materials ................................................................................................................... 126
Appendix B Customer Configuration Data Sheet 129
Customer configuration data sheet ................................................................... 130
Appendix C References 133
References .......................................................................................................................... 134
EMC documentation ................................................................................................................. 134
Other documentation .............................................................................................................. 134
Appendix D About VSPEX 135
About VSPEX ....................................................................................................................... 136
Appendix E Validation with Microsoft Hyper-V Fast Track v3 137
Overview ............................................................................................................................... 138
Business case for validation ...................................................................................... 138
Process requirements .................................................................................................... 139
Step one: Core prerequisites ................................................................................................ 139
Step two: Select the VSPEX Proven Infrastructure platform ............................. 139
Step three: Define additional Microsoft Hyper-V Fast Track Program
components .................................................................................................................................... 140
Step four: Build a detailed Bill of Materials .................................................................. 141
Step five: Test the environment........................................................................................... 141
Step six: Document and publish the solution ............................................................. 141
Additional resources ...................................................................................................... 142
Figures
Figure 1. VSPEX private cloud components ............................................................... 22
Figure 2. Compute layer flexibility ..................................................................................... 26
Figure 3. Example of a highly available network design.................................... 28
Figure 4. Logical architecture for 50 virtual machines ......................................... 35
Figure 5. Logical architecture for 100 virtual machines ...................................... 36
Figure 6. Hypervisor memory consumption ................................................................. 41
Figure 7. Required networks .................................................................................................. 44
Figure 8. Hyper-V virtual disk types .................................................................................... 48
Figure 9. Storage layout for 50 virtual machines ...................................................... 49
Figure 10. Storage layout for 100 virtual machines ................................................... 50
Figure 11. High Availability at the virtualization layer .............................................. 51
Figure 12. Redundant power supplies ............................................................................... 51
Figure 13. Network layer High Availability ....................................................................... 52
Figure 14. VNXe series High Availability ............................................................................ 53
Figure 15. Resource pool flexibility ....................................................................................... 59
Figure 16. Required resource from the Reference virtual machine pool .. 65
Figure 17. Aggregate resource requirements from the Reference virtual machine pool ............................................................................................................. 66
Figure 18. Customizing server resources ........................................................................... 67
Figure 19. Sample Ethernet network architecture ..................................................... 77
Figure 20. VCS Fabric port types ........................................................................................... 81
Figure 21. VDX 6710-54 ................................................................................................................ 81
Figure 22. Creating VLANs ......................................................................................................... 84
Figure 23. Example VCS/VDX network topology with Infrastructure connectivity ................................................................................................................ 90
Figure 24. Port types ...................................................................................................................... 96
Figure 25. VDX 6720-24 ................................................................................................................ 96
Figure 26. VDX 6720-60 ................................................................................................................ 97
Figure 27. Creating VLANs ......................................................................................................... 99
Figure 28. Example VCS/VDX network topology with Infrastructure connectivity ............................................................................................................. 107
Tables

Table 1. VNXe customer benefits 29
Table 2. Solution hardware 38
Table 3. Solution software 40
Table 4. Network hardware 43
Table 5. Storage hardware 47
Table 6. Backup profile characteristics 54
Table 7. Virtual machine characteristics 56
Table 8. Blank worksheet row 62
Table 9. Reference virtual machine resources 64
Table 10. Example worksheet row 65
Table 11. Example applications 66
Table 12. Server resource component totals 68
Table 13. Blank customer worksheet 69
Table 14. Deployment process overview 72
Table 15. Tasks for pre-deployment 73
Table 16. Deployment prerequisites checklist 74
Table 17. Brocade VDX 6710 and VDX 6720 Configuration Steps 78
Table 18. Tasks for storage configuration 110
Table 19. Tasks for server installation 112
Table 20. Tasks for SQL Server database setup 115
Table 21. Tasks for SCVMM configuration 117
Table 22. Tasks for testing the installation 122
Table 23. List of components used in the VSPEX solution for 50 virtual machines 126
Table 24. List of components used in the VSPEX solution for 100 virtual machines 127
Table 25. Common server information 130
Table 26. Hyper-V server information 130
Table 27. Array information 131
Table 28. Network infrastructure information 131
Table 29. VLAN information 131
Table 30. Service accounts 131
Table 31. Hyper-V Fast Track component classification 140
Chapter 1 Executive Summary
This chapter presents the following topics:
Introduction 14
Target audience 14
Document purpose 14
Business needs 15
Introduction
EMC VSPEX with Brocade networking solutions are validated and modular
architectures built with proven best-of-breed technologies to create
complete virtualization solutions on compute, networking, and storage
layers. VSPEX helps to reduce virtualization planning and configuration
burdens. When embarking on server virtualization, virtual desktop
deployment, or IT consolidation, VSPEX accelerates your IT transformation
by enabling faster deployments, more choice, greater efficiency, and lower risk.
This document is a comprehensive guide to the technical aspects of this
solution. Server capacity is provided in generic terms for required
minimums of CPU, memory, and network interfaces; the customer can
select server hardware that meets or exceeds the stated minimums.
Target audience
The reader of this document is expected to have the necessary training
and background to install and configure Microsoft Hyper-V, Brocade VDX
series switches, EMC VNXe series storage systems, and associated
infrastructure as required by this implementation. The document provides
external references where applicable. The reader should be familiar with
these documents.
Readers should also be familiar with the infrastructure and database
security policies of the customer installation.
Users focusing on selling and sizing a Microsoft Hyper-V private cloud
infrastructure should pay particular attention to the first four chapters of this
document. After purchase, implementers of the solution can focus on the
configuration guidelines in Chapter 5, the solution validation in Chapter 6,
and the appropriate references and appendices.
Document purpose
This document serves as an initial introduction to the VSPEX architecture,
an explanation of how to modify the architecture for specific
engagements, and instructions on how to deploy the system effectively.
The VSPEX with Brocade VDX private cloud architecture provides the
customer with a modern system capable of hosting a large number of
virtual machines at a consistent performance level. This solution runs on
the Microsoft Hyper-V virtualization layer backed by the highly available
EMC VNXe™ series storage. The compute and network components are
customer-definable, and should be redundant and sufficiently powerful to
handle the processing and data needs of the virtual machine
environment.
The 50 and 100 virtual machine environments are based on a defined
reference workload. Because not every virtual machine has the same
requirements, this document contains methods and guidance to adjust
your system to be cost-effective when deployed.
A private cloud architecture is a complex system offering. This document
facilitates the setup by providing upfront software and hardware material
lists, step-by-step sizing guidance and worksheets, and verified
deployment steps. When the last component is installed, there are
validation tests to ensure that your system is up and running properly.
Following the procedures defined in this document ensures an efficient
and painless journey to the cloud.
Business needs
Customers require a scalable, tiered, and highly available infrastructure on
which to deploy their business and mission-critical applications. Several
new technologies are available to assist customers in consolidating and
virtualizing their server infrastructure, but customers need to know how to
use these technologies to maximize the investment, support service-level
agreements, and reduce the total cost of ownership (TCO).
This solution addresses the following challenges:
Availability: Stand-alone servers incur downtime for maintenance or
unexpected failures. Clusters of redundant stand-alone nodes are
inefficient in the use of CPU, disk, and memory resources.
Server management and maintenance: Individually maintained servers
require significant repetitive activities for monitoring, problem resolution,
patching, and other common activities. Therefore, the maintenance is
labor intensive, costly, error-prone, and inefficient. Security, downtime,
and outage risks are elevated.
Ease of solution deployment: While small and medium businesses (SMB)
must address the same IT challenges as larger enterprises, the staffing
levels, experience, and training are generally more limited. IT generalists
are often responsible for managing the entire IT infrastructure, and reliance
is placed on third-party sources for maintenance or other tasks. The
perceived complexity of the IT function raises fear of risk and may block
the adoption of new technology. Therefore, the simplicity of deployment
and management are highly valued.
Network performance and resiliency: Networking is added locally to
provide connectivity between physical servers, storage, and the existing
infrastructure. The network is sized for 1 GbE and 10 GbE performance
requirements and is deployed as a highly available dual fabric for resiliency.
Storage efficiency: Storage that is added locally to physical servers or
provisioned directly from a shared resource or array often leads to
over-provisioning and waste.
Backup: Traditional backup approaches are slow and frequently
unreliable. There tends to be inflection points (or plateaus) in the
virtualization adoption curve when the number of virtual machines
increases from a few to 100 or more. With a few virtual machines, the
situation can be manageable and most organizations can get by with
existing tools and processes. However, when the virtual environment
grows, the backup and recovery processes often become the limiting
factors in the deployment.
Chapter 2 Solution Overview
This chapter presents the following topics:
Introduction 18
Virtualization 18
Compute 18
Network 19
Storage 19
Introduction
The EMC VSPEX private cloud for Microsoft Hyper-V with Brocade VDX
solution provides a complete system architecture capable of supporting
up to 100 virtual machines with a redundant server/network topology and
highly available storage. The core components that make up this
solution are virtualization, compute, networking, storage, and backup
and recovery.
Virtualization
Microsoft Hyper-V is a leading virtualization platform in the industry. For
years, Hyper-V has provided flexibility and cost savings to end users by
consolidating large, inefficient server farms into nimble, reliable cloud
infrastructures.
Features like Live Migration, which enables a virtual machine to move
between different servers with no disruption to the guest operating system,
and Dynamic Optimization, which performs Live Migration automatically to
balance loads, make Hyper-V a solid business choice.
With the release of Windows Server 2012, a Microsoft virtualized
environment can host virtual machines with up to 64 virtual CPUs and 1 TB
of virtual RAM.
Compute
VSPEX provides the flexibility to design and implement your choice of
server components. The infrastructure must conform to the following
attributes:
Sufficient processor cores and memory to support the required
number and types of virtual machines
Sufficient network connections to enable redundant connectivity to
the system switches
Excess capacity to withstand a server failure and failover in the
environment
Network
Brocade VDX switches with VCS Fabric technology enable the
implementation of a high-performance, efficient, and resilient network for
this VSPEX solution. The Brocade VDX switching infrastructure provides the
following attributes:
Redundant network links for the hosts, switches, and storage
Traffic isolation based on industry-accepted best practices
Support for link aggregation
High-utilization, high-availability networking
Virtualization automation
Storage
The EMC VNX storage family is the leading shared storage platform in the
industry. VNX provides both file and block access with a broad feature set
which makes it an ideal choice for any private cloud implementation.
The following VNXe storage components are sized for the stated reference
architecture workload:
Host adapter ports – Provide host connectivity via fabric into the
array.
Storage Processors – The compute components of the storage
array, which are used for all aspects of data moving into, out of,
and between arrays along with protocol support.
Disk drives – Disk spindles that contain the host/application data
and their enclosures.
The 50 and 100 virtual machine Hyper-V private cloud solutions discussed in
this document are based on the VNXe3150™ and VNXe3300™ storage
arrays respectively. VNXe3150 can support a maximum of 100 drives and
VNXe3300 can host up to 150 drives.
The EMC VNXe series supports a wide range of business class features ideal
for the private cloud environment, including:
Thin Provisioning
Replication
Snapshots
File Deduplication and Compression
Quota Management
Chapter 3 Solution Technology Overview
This chapter presents the following topics:
Overview 22
Summary of key components 23
Virtualization 24
Compute 25
Network 27
Storage 29
Backup and recovery 30
Other technologies 30
Overview
This solution uses the EMC VNXe series, Brocade VDX switches with VCS
Fabric technology, and Microsoft Hyper-V to provide storage, network,
and server hardware consolidation in a private cloud. The new virtualized
infrastructure is centrally managed to provide efficient deployment and
management of a scalable number of virtual machines and associated
shared storage.
Figure 1 depicts the general solution components.
Figure 1. VSPEX private cloud components
These components are described in more detail in the following sections.
Summary of key components
This section briefly describes the key components of this solution.
Virtualization
The virtualization layer enables the physical implementation of
resources to be decoupled from the applications that use them. In
other words, the application view of the available resources is no
longer directly tied to the hardware. This enables many key
features in the private cloud concept.
Compute
The compute layer provides memory and processing resources for
the virtualization layer software, and for the needs of the
applications running within the private cloud. The VSPEX program
defines the minimum amount of compute layer resources required,
and enables the customer to implement the requirements using any
server hardware that meets these requirements.
Network
Brocade VDX switches with VCS Fabric technology connect the
users of the private cloud to the resources in the cloud, and the
storage layer to the compute layer. EMC VSPEX solutions with
Brocade VDX switches provide the required connectivity for the
solution and general guidance on network architecture, and enable
the customer to implement a cost-effective, resilient, and
operationally efficient virtualization platform.
Storage
The storage layer is critical to the implementation of the private
cloud. With multiple hosts accessing shared data, many of the use
cases defined in the private cloud concept can be implemented.
The EMC VNXe storage family used in this solution provides high-
performance data storage while maintaining high availability.
Backup and recovery
The optional backup and recovery components of the solution
provide data protection when the data in the primary system is
deleted, damaged, or otherwise unusable.
The Solution architecture section provides details on all the components
that make up the reference architecture.
Virtualization
Virtualization enables greater flexibility in the application layer by
potentially eliminating hardware downtime for maintenance, and
enabling the physical capability of the system to change without affecting
the hosted applications. In a server virtualization or private cloud use
case, it enables multiple independent virtual machines to share the same
physical hardware, rather than being directly implemented on dedicated
hardware.
Microsoft Hyper-V, a Windows Server role that was introduced in Windows
Server 2008, transforms or virtualizes computer hardware resources,
including CPU, memory, storage, and network. This transformation creates
fully functional virtual machines that run their own operating systems and
applications just like physical computers.
Hyper-V and Failover Clustering provide a high-availability virtualized
infrastructure along with Cluster Shared Volumes (CSVs). Live Migration
and Live Storage Migration enable seamless migration of virtual machines
from one Hyper-V server to another, and of stored files from one storage
system to another, with minimal performance impact.
SCVMM is a centralized management platform for the virtualized
datacenter. With SCVMM, administrators can configure and manage the
virtualization host, networking, and storage resources in order to create
and deploy virtual machines and services to private clouds. When
deployed, SCVMM greatly simplifies provisioning, management and
monitoring of the Hyper-V environment.
Hyper-V achieves high availability by using the Windows Server 2012
Failover Clustering feature. High availability is impacted by both planned
and unplanned downtime, and Failover Clustering can significantly
increase the availability of virtual machines in both situations. Windows
Server 2012 Failover Clustering is configured on the Hyper-V host so that
virtual machines can be monitored for health and moved between nodes
of the cluster. This configuration has the following key advantages:
If the physical host server that Hyper-V and the virtual machines are
running on must be updated, changed, or rebooted, the virtual
machines can be moved to other nodes of the cluster. You can
move the virtual machines back after the original physical host
server is back to service.
If the physical host server that Hyper-V and the virtual machines are
running on fails or is significantly degraded, the other members of
the Windows Failover Cluster take over the ownership of the virtual
machines and bring them online automatically.
If the virtual machine fails, it can be restarted on the same host
server or moved to another host server. Windows Server 2012
Failover Clustering detects the failure and automatically takes
recovery steps based on the settings in the resource properties of
the virtual machine. Downtime is minimized because detection and
recovery are automated.
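The takeover behavior described above can be illustrated with a short sketch. This is a conceptual model only, not Microsoft's failover implementation; the node and VM names are made up:

```python
# Toy model of cluster failover: when a node fails, its virtual machines
# are reassigned to the surviving nodes and brought back online.

def fail_over(ownership, failed_node, surviving_nodes):
    """Reassign VMs owned by failed_node to surviving nodes, round-robin."""
    orphaned = [vm for vm, node in ownership.items() if node == failed_node]
    for i, vm in enumerate(orphaned):
        ownership[vm] = surviving_nodes[i % len(surviving_nodes)]
    return ownership

ownership = {"vm1": "node1", "vm2": "node1", "vm3": "node2"}
fail_over(ownership, "node1", ["node2", "node3"])
print(ownership)  # vm1 and vm2 are now owned by the surviving nodes
```

In the real cluster, health monitoring, quorum, and restart policies govern when and where a virtual machine is brought online; this sketch only shows the ownership transfer that the text describes.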
EMC Storage Integrator (ESI) is an agent-less, no-charge plug-in that
enables application-aware storage provisioning for Microsoft Windows
server applications in Hyper-V, VMware, and XenServer environments.
Administrators can easily provision block and file storage for Microsoft
Windows or for Microsoft SharePoint sites by using wizards in ESI. ESI
supports the following functions:
Provisioning, formatting, and presenting drives to Windows servers
Provisioning new cluster disks and adding them to the cluster
automatically
Provisioning shared CIFS storage and mounting it to Windows servers
Provisioning SharePoint storage, sites, and databases in a single
wizard
Compute
The choice of a server platform for an EMC VSPEX infrastructure is not only
based on the technical requirements of the environment, but on the
supportability of the platform, existing relationships with the server provider,
advanced performance and management features, and many other
factors. For this reason, EMC VSPEX solutions are designed to run on a wide
variety of server platforms. Instead of requiring a given number of servers
with a specific set of requirements, VSPEX documents a number of
processor cores and an amount of RAM that must be achieved. This can
be implemented with 2 or 20 servers and still be considered the same
VSPEX solution.
In the example shown in Figure 2, assume that the compute layer
requirements for a given implementation are 25 processor cores, and 200
GB of RAM. One customer might want to implement this solution using
white-box servers containing 16 processor cores and 64 GB of RAM, while a
second customer chooses a higher-end server with 20 processor cores and
144 GB of RAM.
The first customer needs four of the servers they chose, while the second
customer needs two.
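The server-count arithmetic above can be sketched in a few lines. The function name and parameters below are illustrative, not part of any VSPEX sizing tool:

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb,
                   ha_spare=True):
    """Servers needed to meet a VSPEX compute requirement.

    The count is driven by whichever resource (cores or RAM) is the
    bottleneck; one spare server is added for N+1 high availability,
    as recommended in the note that follows.
    """
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    count = max(by_cores, by_ram)
    return count + 1 if ha_spare else count

# Figure 2 example: 25 processor cores and 200 GB of RAM required.
print(servers_needed(25, 200, 16, 64, ha_spare=False))   # white-box servers -> 4
print(servers_needed(25, 200, 20, 144, ha_spare=False))  # higher-end servers -> 2
```

With the HA spare included, the same two configurations need five and three servers respectively, matching the high-availability note below.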
Figure 2. Compute layer flexibility
Note To enable high availability at the compute layer, each customer
needs one additional server to ensure that the system can maintain
business operations if a server fails.
The following best practices apply to the compute layer:
Use a number of identical or at least compatible servers. VSPEX
implements hypervisor level high-availability technologies that may
require similar instruction sets on the underlying physical hardware.
By implementing VSPEX on identical server units, you can minimize
compatibility problems in this area.
When implementing high availability on the hypervisor layer, the
largest virtual machine you can create is constrained by the
smallest physical server in the environment.
Implement the available high availability features in the
virtualization layer, and ensure that the compute layer has sufficient
resources to accommodate at least single-server failures. This
enables the implementation of minimal-downtime upgrades and
tolerance for single-unit failures.
Within the boundaries of these recommendations and best practices, the
compute layer for EMC VSPEX can be flexible to meet your specific needs.
The key constraint is that you provide sufficient processor cores and RAM
per core to meet the needs of the target environment.
Network
The VSPEX with Brocade VDX networking validated solution uses virtual
local area networks (VLANs) to segregate the network traffic of the VSPEX
reference architecture, including iSCSI storage traffic, to improve
throughput, manageability, application separation, high availability, and
security. The Brocade VDX networking solution provides redundant network
links for each Microsoft Hyper-V host, the VNXe storage array, the switch
interconnect ports, and the customer infrastructure uplink ports. If a link
to any of the Brocade VDX network infrastructure ports is lost, traffic
fails over to another port. All network traffic is distributed across
the active links.
Brocade® VDX with VCS Fabric technology helps simplify networking
infrastructures through innovative technologies and the VSPEX
infrastructure topology design. Brocade VDX 6710/6720 switches support
this strategy by simplifying network architecture and deployment while
increasing network performance and resiliency with Ethernet fabrics.
Brocade VDX with VCS Fabric technology supports active-active links for
all traffic from the virtualized compute servers to the EMC VNXe storage
arrays, and provides a highly available, redundant network by using link
aggregation to the EMC VNXe storage array.
The Brocade network switch infrastructure provides redundant network
links for each Hyper-V host, the storage array, the switch interconnect
ports, and the switch uplink ports. This configuration provides both
redundancy and additional network bandwidth. Failover is automatic and
transparent, whether the Brocade VDX networking infrastructure is
deployed on its own or alongside the other components of the solution.
Figure 3 shows an example of the highly available network topology.
Figure 3. Example of a highly available network design
Brocade VDX with VCS Fabric technology supports active-active links for
all traffic from the virtualized compute servers to the EMC VNXe storage
arrays. EMC unified storage platforms provide network high availability or
redundancy by using link aggregation. Link aggregation enables multiple
active Ethernet connections to appear as a single link with a single MAC
address, and potentially multiple IP addresses. In this solution, Link
Aggregation Control Protocol (LACP) is configured on VNXe, combining
multiple Ethernet ports into a single virtual device. If a link is lost in the
Ethernet port, the link fails over to another port. All network traffic is
distributed across the active links.
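The way an aggregate spreads traffic across its member links can be illustrated with a toy flow-hashing sketch. Real LACP hashing is performed in switch hardware on MAC/IP/port fields; the function and link names below are purely conceptual:

```python
import zlib

def member_link(flow_id: str, active_links: list) -> str:
    """Pick a member link for a flow by hashing its identifier.

    Every flow maps deterministically to one active link, so all links
    carry traffic; if a link fails, its flows re-hash onto the survivors.
    """
    return active_links[zlib.crc32(flow_id.encode()) % len(active_links)]

links = ["eth0", "eth1", "eth2", "eth3"]
flows = [f"10.0.0.{i}->10.0.1.1:3260" for i in range(8)]  # example iSCSI flows
print({f: member_link(f, links) for f in flows})

# After a link failure, the same flows redistribute across the survivors:
links_after_failure = ["eth0", "eth2", "eth3"]
print({f: member_link(f, links_after_failure) for f in flows})
```

The key property, and the reason the aggregate appears as a single link with one MAC address, is that the hash keeps each flow on one physical link (preserving packet order) while the set of flows as a whole uses every active link.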
Brocade VCS Fabric technology offers unique features to support
virtualized server and storage environments. Brocade hypervisor
automation, for example, provides secure connectivity and full visibility
into virtualized server resources through dynamic learning and activation
of port profiles. With port profiles configured, the VDX switches support
Hyper-V mobility between Microsoft Windows servers.
Storage
The storage layer is a key component of any cloud infrastructure
solution; it stores and serves the data generated by applications and
operating systems within the datacenter. A centralized storage platform
often increases storage efficiency and management flexibility, and
reduces total cost of ownership. In this VSPEX solution, the EMC VNXe
series provides virtualization at the storage layer.
The EMC VNX family is optimized for virtual applications, delivering
industry-leading innovation and enterprise capabilities for file and block storage in
a scalable, easy-to-use solution. This next-generation storage platform
combines powerful and flexible hardware with advanced efficiency,
management, and protection software to meet the demanding needs of
today’s enterprises.
The VNXe series is powered by Intel Xeon processors, for intelligent
storage that automatically and efficiently scales in performance, while
ensuring data integrity and security.
The VNXe series is purpose-built for IT managers in smaller environments
and the VNX series is designed to meet the high-performance, high-
scalability requirements of midsize and large enterprises. Table 1 shows the
customer benefits.
Table 1. VNXe customer benefits

Feature
Next-generation unified storage, optimized for virtualized applications
Capacity optimization features, including compression, deduplication, thin provisioning, and application-centric copies
High availability, designed to deliver five 9s availability
Simplified management with EMC Unisphere™, a single management interface for all network-attached storage (NAS), storage area network (SAN), and replication needs
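As a quick sanity check on the "five 9s" figure in Table 1, the corresponding downtime budget works out to roughly five minutes per year:

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Unplanned downtime budget per year for a given availability level."""
    return 365.25 * 24 * 60 * (1 - availability)

print(round(downtime_minutes_per_year(0.99999), 2))  # five 9s: ~5.26 minutes
print(round(downtime_minutes_per_year(0.999), 0))    # three 9s: ~526 minutes
```

The contrast between the two rows is why "designed to deliver five 9s" is a meaningful claim: each added 9 cuts the allowed downtime by a factor of ten.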
Software Suites
Local Protection Suite—Increases productivity with snapshots of
production data.
Remote Protection Suite—Protects data against localized failures,
outages, and disasters.
Application Protection Suite—Automates application copies and
provides replica management.
Security and Compliance Suite—Keeps data safe from changes,
deletions, and malicious activity.
Software Packs
VNXe Total Value Pack—Includes the Remote Protection,
Application Protection and Security and Compliance Suite.
Backup and recovery
EMC backup and recovery solutions, EMC Avamar Business Edition and
EMC Data Domain, deliver the protection confidence and efficiency
needed to accelerate deployment of VSPEX private clouds.
Our solutions are proven to reduce backup times by 90% and speed
recoveries with single step restore for worry-free protection. And our
protection storage systems add another layer of assurance, with end-to-
end verification and self-healing for ensured recovery.
Our solutions also deliver big saving. With industry-leading deduplication,
you can reduce backup storage by 10-30x, backup management time by
81%, and WAN bandwidth by 99% for efficient DR —delivering a 7-month
payback on average. You'll be able to scale simply and efficiently as your
environment grows.
Other technologies
In addition to the required technical components for EMC VSPEX solutions,
other technologies may provide additional value depending on the
specific use case. These include, but are not limited to the technologies
listed below.
EMC XtremSW Cache (optional)
EMC XtremSW Cache™ is a server Flash caching solution that reduces
latency and increases throughput to improve application performance by
using intelligent caching software and PCIe Flash technology.
Server-side Flash caching for maximum speed
XtremSW Cache software caches the most frequently referenced data on
the server-based PCIe card, thereby putting the data closer to the
application.
XtremSW Cache caching optimization automatically adapts to changing
workloads by determining which data is most frequently referenced and
promoting it to the server Flash card. This means that the “hottest” or most
active data automatically resides on the PCIe card in the server for faster
access.
XtremSW Cache offloads read traffic from the storage array, which
allows the array to allocate greater processing power to other workloads.
While one workload is accelerated with XtremSW Cache, the array's
performance for other workloads is maintained or even slightly enhanced.
Write-through caching to the array for total protection
XtremSW Cache accelerates reads and protects data by using a
write-through cache to the storage array to deliver persistent high
availability, integrity, and disaster recovery.
Application agnostic
XtremSW Cache is transparent to applications, so no rewriting, retesting, or
recertification is required to deploy XtremSW Cache in the environment.
Minimum impact on system resources
XtremSW Cache does not require a significant amount of memory or CPU
cycles, because all Flash and wear-leveling management is performed on
the PCIe card rather than by the server. Unlike some other PCIe solutions,
XtremSW Cache places no significant overhead on server resources.
XtremSW Cache creates the most efficient and intelligent I/O path from the
application to the datastore, which results in an infrastructure that is
dynamically optimized for performance, intelligence, and protection for
both physical and virtual environments.
XtremSW Cache active/passive clustering support
The XtremSW Cache clustering scripts ensure that stale data is never
retrieved. The scripts use cluster management events to trigger a
mechanism that purges the cache. An XtremSW Cache-enabled
active/passive cluster ensures data integrity and accelerates application
performance.
XtremSW Cache performance considerations
The following are the XtremSW Cache performance considerations:
On a write request, XtremSW Cache first writes to the array, then to
the cache, and then completes the application I/O.
On a read request, XtremSW Cache satisfies the request with
cached data or, when the data is not present, retrieves the data
from the array, writes it to the cache, and then returns it to the
application. The trip to the array can be on the order of
milliseconds; therefore, the array limits how fast the cache can work.
As the number of writes increases, XtremSW Cache performance
decreases.
XtremSW Cache is most effective for workloads with 70 percent
reads or more and with small, random I/O (8 KB is ideal). I/O
larger than 128 KB is not cached in XtremSW Cache v1.5.
Note For more information, refer to the XtremSW Cache Installation and
Administration Guide v1.5.
Chapter 4 Solution Architecture
Overview
This chapter presents the following topics:
Solution Overview
Solution architecture
Server configuration guidelines
Brocade network configuration guidelines
Storage configuration guidelines
High availability and failover
Backup and recovery configuration guidelines
Sizing guidelines
Reference workload
Applying the reference workload
Implementing the reference architectures
Quick assessment
Solution Overview
VSPEX Proven Infrastructure solutions are built with proven best-of-breed
technologies to create a complete virtualization solution that enables you
to make an informed decision when choosing and sizing the hypervisor,
compute, networking, and storage layers. VSPEX eliminates virtualization
planning and configuration burdens by leveraging extensive
interoperability, functional, and performance testing by EMC. VSPEX
accelerates your IT Transformation to cloud-based computing by enabling
faster deployment, more choice, higher efficiency, and lower risk.
This section is intended to be a comprehensive guide to the major aspects
of this solution. Server capacity is specified in generic terms for required
minimums of CPU, memory, and network interfaces; the customer is free to
select the server and networking hardware that meet or exceed the
stated minimums. The specified storage architecture, along with a system
meeting the server and network requirements outlined, is validated by
EMC to provide high levels of performance while delivering a highly
available architecture for your private cloud deployment.
Each VSPEX Proven Infrastructure balances the storage, network, and
compute resources needed for a set number of virtual machines, which
have been validated by EMC. In practice, each virtual machine has its
own set of requirements that rarely fit a predefined idea of what a virtual
machine should be. In any discussion about virtual infrastructures, it is
important to first define a reference workload. Not all servers perform the
same tasks, and it is impractical to build a reference that takes into
account every possible combination of workload characteristics.
Solution architecture
The VSPEX Proven Infrastructure for Microsoft Hyper-V private clouds with
EMC VNXe is validated at two different points of scale, one with up to 50
virtual machines, and the other with up to 100 virtual machines. The
defined configurations form the basis of creating a custom solution.
Note VSPEX uses the concept of a Reference Workload to describe and
define a virtual machine. Therefore, one physical or virtual server in
an existing environment may not be equal to one virtual machine in
a VSPEX solution. Evaluate your workload in terms of the reference
to achieve an appropriate point of scale.
The architecture diagram shown in Figure 4 characterizes the validated
infrastructure with a Brocade VDX solution for up to 50 virtual machines.
Figure 4. Logical architecture for 50 virtual machines
The architecture diagram shown in Figure 5 characterizes the infrastructure
with a Brocade VDX solution validated for up to 100 virtual machines.
Figure 5. Logical architecture for 100 virtual machines
Note The networking components of either solution can be implemented
using 1 GbE or 10 GbE IP networks, provided that sufficient
bandwidth and redundancy are available to meet the listed requirements.
The architecture includes the following key components:
Microsoft Hyper-V—Provides a common virtualization layer to host a server
environment. The specifics of the validated environment are listed in Table
2. Hyper-V provides a highly available infrastructure through features such
as:
Live Migration — Provides live migration of virtual machines within a
virtual infrastructure cluster, with no virtual machine downtime or
service disruption.
Live Storage Migration — Provides live migration of virtual machine
disk files within and across storage arrays with no virtual machine
downtime or service disruption.
Failover Clustering High Availability (HA) – Detects and provides
rapid recovery for a failed virtual machine in a cluster.
Dynamic Optimization (DO) – Provides load balancing of
computing capacity in a cluster with support of SCVMM.
Microsoft System Center Virtual Machine Manager (SCVMM)—SCVMM is
not required for this solution. However, if deployed, it (or its corresponding
function in Microsoft System Center Essentials) simplifies provisioning,
management, and monitoring of the Hyper-V environment.
Microsoft SQL Server 2012—SCVMM, if used, requires a SQL Server
database instance to store configuration and monitoring details.
DNS Server — DNS services are required for the various solution
components to perform name resolution. The Microsoft DNS service
running on a Windows Server 2012 is used.
Active Directory Server — Active Directory services are required for the
various solution components to function properly. The Microsoft Active
Directory Service running on a Windows Server 2012 is used.
Brocade VDX 6710/6720 Ethernet Fabric Network — All network traffic is
carried by the Brocade Ethernet Fabric network with redundant cabling
and switches. User and management traffic is carried over a shared
network while iSCSI storage traffic is carried over a private, non-routable
subnet.
EMC VNXe 3150 array—Provides storage by presenting Internet Small
Computer System Interface (iSCSI) datastores to Hyper-V hosts for up to 50
virtual machines.
EMC VNXe 3300 array—Provides storage by presenting Internet Small
Computer System Interface (iSCSI) datastores to Hyper-V hosts for up to
100 virtual machines.
These datastores for both deployment sizes are created by using
application-aware wizards included in the EMC Unisphere interface.
VNXe series storage arrays include the following components:
Storage processors (SPs) support block and file data with UltraFlex™
I/O technology that supports the iSCSI, CIFS, and NFS protocols. The SPs
provide access for all external hosts and for the file side of the VNXe
array.
Battery backup units are battery units within each storage processor
and provide enough power to each storage processor to ensure
that any data in flight is destaged to the vault area in the event of a
power failure. This ensures that no writes are lost. Upon restart of
the array, the pending writes are reconciled and persisted.
Disk-array Enclosures (DAE) house the drives used in the array.
Table 2 lists the hardware used in this solution.
Table 2. Solution hardware
Hardware Configuration Notes
Hyper-V servers
Memory:
2 GB RAM per virtual machine
100 GB RAM across all servers for the 50-virtual-machine configuration
200 GB RAM across all servers for the 100-virtual-machine configuration
2 GB RAM reservation per host for the hypervisor
CPU:
One vCPU per virtual machine
One to four vCPUs per physical core
Network:
Two 10 GbE NIC ports per server
Note To implement Microsoft Hyper-V High Availability (HA) functionality and to meet the listed minimums, the infrastructure should include one additional server.
Notes: Configured as a single Hyper-V cluster.
Brocade network infrastructure
Minimum switching capacity:
Two physical VDX 6710/6720 switches*
One 1 GbE port per storage processor for management
Two 10 GbE ports per storage processor for data
Notes: Redundant Brocade VDX Ethernet Fabric configuration
*Brocade Ethernet Fabric switch options, for 50 and 100 virtual machines:
1 GbE iSCSI server option: two VDX 6710 (48-port), with six 1 GbE ports per Hyper-V server
10 GbE iSCSI server option: two VDX 6720 (24-port), with two 10 GbE ports per Hyper-V server
Storage
Common:
Two storage processors (active/active)
Two 10 GbE interfaces per storage processor for data
For 50 virtual machines:
EMC VNXe 3150
Forty-five 300 GB 15k RPM 3.5-inch SAS disks (9 x 300 GB 4+1 R5 Performance Drive Packs)
Two 300 GB 15k RPM 3.5-inch SAS disks as hot spares
For 100 virtual machines:
EMC VNXe 3300
Seventy-seven 300 GB 15k RPM 3.5-inch SAS disks (11 x 300 GB 6+1 R5 Performance Drive Packs)
Three 300 GB 15k RPM 3.5-inch SAS disks as hot spares
Notes: Include the initial disk pack on the VNXe.
Shared infrastructure
In most cases, a customer environment already has infrastructure services
such as Active Directory and DNS configured. The setup of these services
is beyond the scope of this document.
If this configuration is being implemented without existing infrastructure,
a minimum number of additional servers is required:
Two physical servers
16 GB RAM per server
Four processor cores per server
Two 10 GbE ports per server
Notes: These servers and the roles they fulfill may already exist in the customer environment; however, they must exist before VSPEX is deployed.
EMC Next-Generation Backup
For 50 virtual machines: Avamar Business Edition, ½ capacity
For 100 virtual machines: Avamar Business Edition, full capacity
Table 3 lists the software used in this solution.
Table 3. Solution software
Software Configuration
Microsoft Hyper-V
Operating system for Hyper-V hosts: Windows Server 2012 Datacenter Edition (Datacenter Edition is necessary to support the number of virtual machines in this solution)
System Center Virtual Machine Manager: Version 2012 SP1
Microsoft SQL Server: Version 2012 Enterprise Edition
VNXe
Software version: 2.2.0.16150
Next-Generation Backup
Avamar Business Edition: 7.0 SP1, for up to 100 virtual machines
Server configuration guidelines
When designing and ordering the compute/server layer of the VSPEX
solution, several factors may alter the final purchase. From a virtualization
perspective, if a system workload is well estimated, features like Dynamic
Memory and Smart Paging can reduce the aggregate memory
requirement.
If the virtual machine pool does not have a high level of peak or
concurrent usage, the number of vCPUs may be reduced. Conversely, if
the applications being deployed are highly computational in nature, the
number of CPUs and memory to be purchased may need to increase.
Microsoft Hyper-V has a number of advanced features that help to
maximize performance and overall resource utilization. The most
important of these are in the area of memory management. This section
describes some of these features and the items to consider in the
environment.
In general, you can consider the memory consumed by the virtual machines
on a single hypervisor as a pool of resources. Figure 6 shows an example.
Figure 6. Hypervisor memory consumption
This basic concept is enhanced by understanding the technologies
presented in this section.
Dynamic Memory
Dynamic Memory, which was introduced in Windows Server 2008 R2 SP1,
increases physical memory efficiency by treating memory as a shared
resource and allocating it to the virtual machines dynamically. The actual
memory used by each virtual machine is adjusted on demand. Dynamic
Memory enables more virtual machines to run by reclaiming unused
memory from idle virtual machines. In Windows Server 2012, Dynamic
Memory also enables a dynamic increase of the maximum memory available
to virtual machines.
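With the Hyper-V PowerShell module in Windows Server 2012, Dynamic Memory can be enabled per virtual machine. The following is a minimal sketch; the VM name and the minimum/maximum values are illustrative, not part of the validated configuration:

```shell
# Hypothetical VM name; run in an elevated PowerShell session on a Hyper-V host.
# Enable Dynamic Memory with a 2 GB startup allocation and an illustrative 4 GB ceiling.
Set-VMMemory -VMName "VM01" `
    -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB `
    -StartupBytes 2GB `
    -MaximumBytes 4GB

# Verify the resulting memory settings.
Get-VMMemory -VMName "VM01"
```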
Smart Paging
Even with Dynamic Memory, Hyper-V allows more virtual machines than
the available physical memory can support, because there is most likely a
gap between each virtual machine's minimum memory and its startup
memory. Smart Paging is a memory management technique that uses disk
resources as a temporary memory replacement: it swaps less-used memory
out to disk storage and swaps it back in when needed, at the cost of
degraded performance. Hyper-V continues to use guest paging when
the host memory is oversubscribed, because it is more efficient than Smart Paging.
Non-Uniform Memory Access
Non-Uniform Memory Access (NUMA) is a multi-node computer
technology that enables a CPU to access remote-node memory. This type
of memory access is costly in terms of performance, so Windows Server
2012 employs a process known as processor affinity, which strives to keep
threads pinned to a particular CPU to avoid remote-node memory access.
In previous versions of Windows, this feature was available only to the host.
Windows Server 2012 extends this functionality to the virtual machines,
which can now realize improved performance in SMP environments.
This section provides guidelines to configure server memory for this solution.
The guidelines take into account Hyper-V memory overhead and the
virtual machine memory settings.
Hyper-V memory overhead
Virtualized memory has some associated overhead, which includes the
memory consumed by Hyper-V, the parent partition, and additional
overhead for each virtual machine. Leave at least 2 GB memory for
Hyper-V parent partition for this solution.
Virtual machine memory
In this solution, each virtual machine gets 2 GB memory in fixed mode.
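This fixed allocation can be expressed with the same Hyper-V cmdlet, shown here as a hedged sketch with a placeholder VM name:

```shell
# Hypothetical VM name. Assign the static 2 GB per virtual machine used in this solution.
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $false -StartupBytes 2GB
```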
Brocade network configuration guidelines
This section provides guidelines for setting up a redundant, highly available
network configuration for this VSPEX solution. The guidelines take into
account jumbo frames, VLANs, and Multiple Connections per Session (MC/S).
For detailed network resource requirements, refer to Table 4.
Table 4. Network hardware
Hardware Configuration Notes
Network infrastructure
Minimum switching capacity:
Two physical switches
Two 10 GbE ports per Hyper-V server (optionally, six 1 GbE ports per Hyper-V server)
One 1 GbE port per storage processor for management
Two 10 GbE ports per storage processor for data
Notes: Redundant Brocade VDX Ethernet Fabric switch configuration
It is a best practice to isolate network traffic so that the traffic between
hosts and storage, traffic between hosts and clients, and management
traffic all move over isolated networks. In some cases, physical isolation
may be required for regulatory or policy compliance reasons, but in many
cases logical isolation using VLANs is sufficient. This solution calls for a
minimum of three VLANs, for the following usage:
Client access
Storage
Management/Live Migration
Figure 7 depicts these VLANs.
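On the Brocade VDX switches, the three VLANs can be created and trunked to the host-facing ports along these lines. This is a sketch only: the VLAN IDs and interface name are examples, and exact syntax varies by Brocade NOS release:

```shell
! Example VLAN IDs and interface only; verify syntax against your NOS release.
configure terminal
interface vlan 10   ! client access
interface vlan 20   ! iSCSI storage (private, non-routable)
interface vlan 30   ! management / Live Migration
! Trunk a Hyper-V host-facing port and allow the three VLANs.
interface TenGigabitEthernet 1/0/1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 10,20,30
```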
Figure 7. Required networks
Note Figure 7 demonstrates the network connectivity requirements for a
VNXe 3300 using 10 GbE network connections (1 GbE for the
Management Network). A similar topology should be created
when using the VNXe 3150 array.
The client access network is for users of the system, or clients, to
communicate with the infrastructure. The Storage Network is used for
communication between the compute layer and the storage layer. The
Management network is used for administrators to have a dedicated way
to access the management connections on the storage array, network
switches, and hosts.
Note Some best practices call for additional network isolation for cluster
traffic, virtualization layer communication, and other features.
These additional networks can be implemented if necessary, but
they are not required.
Enable jumbo frames
Brocade VDX Series switches support the transport of jumbo frames. This
EMC VSPEX private cloud solution recommends an MTU of 9216 (jumbo
frames) for efficient storage and migration traffic. Jumbo frames are
enabled by default on the Brocade ISL trunks. However, to provide
end-to-end jumbo frame support on the network, this feature must also be
enabled on the interfaces connected to the Microsoft Hyper-V hosts and
the VNXe. The default Maximum Transmission Unit (MTU) on these
interfaces is 1500; it is set to 9216 to optimize the network for jumbo
frame support.
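A minimal sketch of the interface-level change on a VDX switch (the interface name is an example, and syntax varies by NOS release):

```shell
! Raise the MTU from the 1500 default to 9216 on host- and array-facing ports.
configure terminal
interface TenGigabitEthernet 1/0/1
 mtu 9216
```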
MC/S
Multiple Connections per Session (MC/S) is configured on each Hyper-V
host so that each host network interface has one iSCSI session to each
VNXe storage processor (SP) interface. In this solution, four iSCSI sessions
are configured between each host and each VNXe SP (each VNXe iSCSI
server).
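One way to establish these per-interface sessions is with the Windows Server 2012 iSCSI initiator cmdlets. The addresses and IQN below are placeholders, and the exact session count you build depends on your NIC and SP interface layout:

```shell
# Placeholder portal addresses and target IQN.
# Register each VNXe iSCSI server (SP interface) as a target portal...
New-IscsiTargetPortal -TargetPortalAddress "192.168.20.10"
New-IscsiTargetPortal -TargetPortalAddress "192.168.20.11"

# ...then log in once per host-NIC / SP-interface pairing.
Connect-IscsiTarget -NodeAddress "iqn.1992-04.com.emc:vnxe.example" `
    -TargetPortalAddress "192.168.20.10" `
    -InitiatorPortalAddress "192.168.20.101" `
    -IsPersistent $true

# Confirm the expected number of sessions per storage processor.
Get-IscsiSession
```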
Link Aggregation
A link aggregation resembles an Ethernet channel but uses the Link
Aggregation Control Protocol (LACP) IEEE 802.3ad standard. The IEEE
802.3ad standard supports link aggregations with two or more ports. All
ports in the aggregation must have the same speed and be full duplex. In
this solution, LACP can be configured on the customer infrastructure
network, combining multiple Ethernet ports into a single virtual device. If a
link is lost on an Ethernet port, traffic fails over to another port. All network
traffic is distributed across the active links.
Brocade Virtual Link Aggregation Group (vLAG)
Brocade Virtual Link Aggregation Groups (vLAGs) are used for the
Microsoft Hyper-V host and customer infrastructure. In the case of the
VNXe, a dynamic Link Aggregation Control Protocol (LACP) vLAG is not
used with MC/S and iSCSI. While Brocade ISLs are used as interconnects
between Brocade VDX switches within a Brocade VCS fabric, industry
standard LACP LAGs are supported for connecting to other network
devices outside the Brocade VCS fabric. Typically, LACP LAGs can only be
created using ports from a single physical switch to a second physical
switch. In a Brocade VCS fabric, a vLAG can be created using ports from
two Brocade VDX switches to a device to which both VDX switches are
connected. This provides an additional degree of device-level
redundancy, while providing active-active link-level load balancing.
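A vLAG sketch, assuming two VDX switches in the same VCS fabric with one host-facing port each (the port and channel numbers are examples):

```shell
! Run on each of the two fabric switches; using the same port-channel number
! on both forms a vLAG across the pair toward the attached device.
configure terminal
interface TenGigabitEthernet 1/0/5
 channel-group 10 mode active type standard
 no shutdown
```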
Brocade Inter-Switch Link (ISL) Trunks
In the VSPEX stack, Brocade Inter-Switch Link (ISL) Trunking is used within the
Brocade VCS fabric to provide additional redundancy and load
balancing between the iSCSI clients and iSCSI storage. Typically, multiple
links between two switches are bundled together in a Link Aggregation
Group (LAG) to provide redundancy and load balancing. Setting up a
LAG requires lines of configuration on the switches and selecting a hash-
based algorithm for load balancing based on source-destination IP or
MAC addresses.
All flows with the same hash traverse the same link, regardless of the total
number of links in a LAG. This might result in some links within a LAG, such
as those carrying flows to a storage target, being over utilized and packets
being dropped, while other links in the LAG remain underutilized. Instead
of LAG-based switch interconnects, Brocade VCS Ethernet fabrics
automatically form ISL trunks when multiple connections are added
between two Brocade VDX® switches. Simply adding another cable
increases bandwidth, providing linear scalability of switch-to-switch traffic,
and this does not require any configuration on the switch. In addition, ISL
trunks use a frame-by-frame load balancing technique, which evenly
balances traffic across all members of the ISL trunk group.
Equal-Cost Multipath (ECMP)
A standard link-state routing protocol that runs at Layer 2 determines
whether there are Equal-Cost Multipaths (ECMPs) between RBridges in an Ethernet
fabric and load balances the traffic to make use of all available ECMPs. If
a neighbor switch is reachable via several interfaces with different
bandwidths, all of them are treated as “equal-cost” paths. While it is
possible to set the link cost based on the link speed, such an algorithm
complicates the operation of the fabric. Simplicity is a key value of
Brocade VCS Fabric technology, so an implementation is chosen in the
test case that does not consider the bandwidth of the interface when
selecting equal-cost paths. This is a key feature needed to expand
network capacity, to keep ahead of customer bandwidth requirements.
Pause Flow Control
Brocade VDX Series switches support the Pause Flow Control feature. IEEE
802.3x Ethernet pause and Ethernet Priority-Based Flow Control (PFC) are
used to prevent dropped frames by slowing traffic at the source end of a
link. When a port on a switch or host is not ready to receive more traffic
from the source, perhaps due to congestion, it sends pause frames to the
source to pause the traffic flow. When the congestion is cleared, the port
stops requesting the source to pause traffic flow, and traffic resumes
without any frame drop. When Ethernet pause is enabled, pause frames
are sent to the traffic source. Similarly, when PFC is enabled, there is no
frame drop; pause frames are sent to the source switch.
Storage configuration guidelines
Hyper-V supports more than one method of using storage when hosting
virtual machines. This solution was tested using iSCSI, and the storage
layout described adheres to all current best practices. A customer or
architect with the required knowledge can make modifications based on
the system's usage and load if necessary.
Table 5 lists the required hardware for the storage configuration.
Table 5. Storage hardware
Hardware Configuration Notes
Storage
Common:
Two storage processors (active/active)
Two 10 GbE interfaces per storage processor
For 50 virtual machines:
EMC VNXe 3150
Forty-five 300 GB 15k RPM 3.5-inch SAS disks (9 x 300 GB 4+1 R5 Performance Drive Packs)
Two 300 GB 15k RPM 3.5-inch SAS disks as hot spares
For 100 virtual machines:
EMC VNXe 3300
Seventy-seven 300 GB 15k RPM 3.5-inch SAS disks (11 x 300 GB 6+1 R5 Performance Drive Packs)
Three 300 GB 15k RPM 3.5-inch SAS disks as hot spares
Notes: Include the initial disk pack on the VNXe.
This section provides guidelines to set up the storage layer of the solution to
provide high availability and the expected level of performance.
Hyper-V storage virtualization for VSPEX
Windows Server 2012 Hyper-V and Failover Clustering leverage Cluster
Shared Volumes (CSV) v2 and the new virtual hard disk format (VHDX) to
virtualize storage presented from an external shared storage system to
host virtual machines.
Figure 8. Hyper-V virtual disk types
Cluster Shared Volumes v2
Cluster Shared Volumes (CSV) was introduced in Windows Server 2008 R2.
CSV enables all cluster nodes to have simultaneous access to the shared
storage for hosting virtual machines. Windows Server 2012 introduces a
number of new capabilities with CSV v2, including flexible application and
file storage, integration with other Windows Server 2012 features, a single
namespace, and improved backup and restore.
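Adding a clustered disk as a CSV is a one-cmdlet operation on Windows Server 2012; the disk name below is a placeholder:

```shell
# Hypothetical cluster disk name. The volume then appears under
# C:\ClusterStorage on every node in the cluster.
Import-Module FailoverClusters
Add-ClusterSharedVolume -Name "Cluster Disk 1"
Get-ClusterSharedVolume
```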
New Virtual Hard Disk format
Hyper-V in Windows Server 2012 contains an update to the VHD format
called VHDX, which has much larger capacity and built-in resiliency. The
main new features of VHDX format are:
Support for virtual hard disk storage with the capacity of up to 64 TB
Additional protection against data corruption during power failures
by logging updates to the VHDX metadata structures
Optimal structure alignment of the virtual hard disk format to suit
large sector disks
The VHDX format also has the following features:
Larger block sizes for dynamic and differential disks, which enables
the disks to meet the needs of the workload
A 4 KB logical sector size, which increases performance when the
disk is used by applications and workloads that are designed for
4 KB sectors
The ability to store custom metadata about the file that the user
might want to record, such as the operating system version or
applied updates
Space reclamation features that can result in a smaller file size and
enable the underlying physical storage device to reclaim unused
space (Trim, for example, requires direct-attached storage or SCSI
disks and Trim-compatible hardware)
Figure 9 shows the overall storage layout of the 50 virtual machine solution.
Figure 9. Storage layout for 50 virtual machines
Storage layout overview
The architecture for up to 50 virtual machines uses the following
configuration:
- Forty-five 300 GB SAS disks allocated to a single storage pool as nine 4+1 RAID 5 groups (sold as nine packs of five disks)
- At least one hot spare allocated for every 30 disks of a given type
- At least four iSCSI LUNs allocated to the Hyper-V cluster from the single storage pool to serve as datastores for the virtual servers
Figure 10 shows the overall storage layout of the 100 virtual machine
solution.
Figure 10. Storage layout for 100 virtual machines
Storage layout overview
The architecture for up to 100 virtual machines uses the following
configuration:
- Seventy-seven 300 GB SAS disks allocated to a single storage pool as eleven 6+1 RAID 5 groups (sold as 11 packs of seven disks)
- At least one hot spare disk allocated for every 30 disks of a given type
- At least 10 iSCSI LUNs allocated to the Hyper-V cluster from the single storage pool to serve as datastores for the virtual servers
Note: If more capacity is required in either configuration, larger drives may
be substituted. To meet the load recommendations, the drives must all
be 15k rpm and the same size. If different sizes are used, storage layout
algorithms may give sub-optimal results.
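The disk counts in both layouts follow directly from the RAID group arithmetic. The sketch below reproduces them; note that treating the one-hot-spare-per-30-disks rule as rounding up is an assumption, not something the document states explicitly.

```python
import math

def raid5_pool_disks(groups: int, data_disks: int, parity_disks: int = 1) -> int:
    """Total disks consumed by `groups` RAID 5 groups of data + parity disks."""
    return groups * (data_disks + parity_disks)

def hot_spares(total_disks: int, disks_per_spare: int = 30) -> int:
    """At least one hot spare per 30 disks of a given type (assumed to round up)."""
    return math.ceil(total_disks / disks_per_spare)

# 50-VM layout: nine 4+1 RAID 5 groups; 100-VM layout: eleven 6+1 RAID 5 groups
disks_50 = raid5_pool_disks(groups=9, data_disks=4)
disks_100 = raid5_pool_disks(groups=11, data_disks=6)
print(disks_50, hot_spares(disks_50))    # 45 2
print(disks_100, hot_spares(disks_100))  # 77 3
```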
High availability and failover
This VSPEX solution provides a highly available virtualized server, network,
and storage infrastructure. Implemented as described in this guide, it can
survive single-unit failures with minimal or no impact to business
operations.
Virtualization layer
Configure high availability in the virtualization layer, and configure the
hypervisor to automatically restart failed virtual machines. Figure 11
illustrates the hypervisor layer responding to a failure in the compute layer.
Figure 11. High Availability at the virtualization layer
By implementing high availability at the virtualization layer, the
infrastructure attempts to keep as many services running as possible, even
in the event of a hardware failure.
Compute layer
Use enterprise-class servers designed for the datacenter to implement the
compute layer when possible. This type of server has redundant power
supplies, which should be connected to separate power distribution units
(PDUs) in accordance with your server vendor's best practices.
Figure 12. Redundant power supplies
Configure high availability in the virtualization layer. The compute layer
must be configured with enough resources so that the total number of
available resources meets the needs of the environment, even with a
server failure, as demonstrated in Figure 11.
Brocade VDX network layer
The advanced networking features of the VNX family and Brocade VDX
with VCS Ethernet Fabric provide protection against network connection
failures at the array. Each Hyper-V host has multiple connections to the user
and storage Ethernet networks to guard against link failures. Spread these
connections across multiple Brocade Ethernet Fabric switches to guard
against component failure in the network.
Figure 13. Network layer High Availability
Note: Figure 13 shows a highly available network topology based on the
VNXe 3300. Construct a similar topology when using the VNXe 3150.
By ensuring that there are no single points of failure in the network layer,
the compute layer can access storage and communicate with users even
if a component fails.
Storage layer
The VNX family is designed for five-nines (99.999 percent) availability by
using redundant components throughout the array. All of the array
components are capable of continued operation in case of hardware
failure. The RAID disk configuration on the array protects against data loss
caused by individual disk failures, and the available hot spare drives can
be dynamically allocated to replace a failing disk, as shown in Figure 14.
Figure 14. VNXe series High Availability
EMC Storage arrays are designed to be highly available by default.
Configure the storage arrays according to the installation guides to ensure
that no single unit failures cause data loss or unavailability.
Backup and recovery configuration guidelines
This section provides guidelines to set up a backup and recovery
environment for this VSPEX solution. It describes how to characterize and
design the backup environment.
Backup characteristics
This VSPEX solution was sized with the application environment profile
shown in Table 6.
Table 6. Backup profile characteristics

Profile characteristic       Value
Number of users              500 for 50 virtual machines
                             1,000 for 100 virtual machines
Number of virtual machines   50 for 50 virtual machines
                             100 for 100 virtual machines
                             (20% database, 80% unstructured)
Exchange data                0.5 TB for 50 virtual machines
                             1 TB for 100 virtual machines
                             (1 GB mailbox per user)
SharePoint data              0.25 TB for 50 virtual machines
                             0.5 TB for 100 virtual machines
SQL Server data              0.25 TB for 50 virtual machines
                             0.5 TB for 100 virtual machines
User data                    2.5 TB for 50 virtual machines
                             5 TB for 100 virtual machines
                             (5.0 GB per user)

Daily change rate for the applications
Exchange data                10%
SharePoint data              2%
SQL Server data              5%
User data                    2%

Retention per data type
All database data            14 dailies
User data                    30 dailies, 4 weeklies, 1 monthly
Backup layout for up to 100 virtual machines
Avamar Business Edition is a purpose-built backup appliance that
provides a conveniently sized, turnkey, affordable, deduplicated backup
solution. Designed for mid-market companies, it features simplified
management, making it ideal for organizations with limited IT resources.
With built-in storage resiliency, it eliminates the requirement and
expense of a second replicated system. Powered by industry-leading EMC
Avamar software, Avamar Business Edition delivers fast, daily full
backups along with one-step recovery for VSPEX Proven Infrastructures.
Sizing guidelines
The following sections define the reference workload used to size and
implement the VSPEX architectures, provide guidance on correlating that
reference workload to actual customer workloads, and describe how that
correlation may change the end delivery from the server and network
perspective.
You can modify the storage definition by adding drives for greater
capacity and performance. The disk layouts are created to provide
support for the appropriate number of virtual machines at the defined
performance level along with typical operations such as snapshots.
Decreasing the number of recommended drives or stepping down to a
lower performing array type can result in lower IOPS per virtual machine
and a reduced user experience due to higher response times.
Reference workload
When considering an existing server to move into a virtual infrastructure,
you have the opportunity to gain efficiency by right-sizing the virtual
hardware resources assigned to that system.
Each VSPEX Proven Infrastructure balances the storage, network, and
compute resources needed for a set number of virtual machines that have
been validated by EMC. In practice, each virtual machine has its own set
of requirements that rarely fit a predefined idea of what a virtual machine
should be. In any discussion about virtual infrastructures, it is important to
first define a reference workload. Not all servers perform the same tasks,
and it is impractical to build a reference model that takes into account
every possible combination of workload characteristics.
To simplify the discussion, we have defined a representative customer
reference workload. By comparing your actual customer usage to this
reference workload, you can extrapolate which reference architecture to
choose.
Defining the reference workload
For the VSPEX solutions, the reference workload is defined as a single virtual
machine. Table 7 lists the characteristics of this virtual machine.
Table 7. Virtual machine characteristics

Characteristic                                         Value
Virtual machine operating system                       Microsoft Windows Server 2012 Datacenter Edition
Virtual processors per virtual machine                 1
RAM per virtual machine                                2 GB
Available storage capacity per virtual machine         100 GB
I/O operations per second (IOPS) per virtual machine   25
I/O pattern                                            Random
I/O read/write ratio                                   2:1
This specification for a virtual machine is not intended to represent any
specific application. Rather, it represents a single common point of
reference against which other virtual machines can be measured.
Applying the reference workload
The reference architectures create a pool of resources that are sufficient
to host a target number of Reference virtual machines with the
characteristics shown in Table 7. The customer virtual machines may not
exactly match the specifications. In that case, define a single specific
customer virtual machine as the equivalent of a number of Reference
virtual machines, and assume the virtual machines are in use in the pool.
Continue to provision virtual machines from the resource pool until no
resources remain.
Example 1: Custom-built application
A small custom-built application server needs to move into this
infrastructure. The physical hardware that supports the application is not
fully utilized. A careful analysis of the existing application reveals that it
can use one processor and needs 3 GB of memory to run normally. The I/O
workload ranges from 4 IOPS at idle to a peak of 15 IOPS when busy. The
entire application consumes about 30 GB of local hard drive storage.
Based on the numbers, the following resources are required from the
resource pool:
- CPU resources for one virtual machine
- Memory resources for two virtual machines
- Storage capacity for one virtual machine
- I/Os for one virtual machine
In this example, a single virtual machine uses the resources for two of the
Reference virtual machines. If the original pool has the resources to
provide 100 Reference virtual machines, the resources for 98 Reference
virtual machines remain.
Example 2: Point of sale system
The database server for a customer's point of sale system needs to move
into this virtual infrastructure. It is currently running on a physical system
with four CPUs and 16 GB of memory. It uses 200 GB of storage and
generates 200 IOPS during an average busy cycle.
The following resources are required to virtualize this application:
- CPUs of four Reference virtual machines
- Memory of eight Reference virtual machines
- Storage of two Reference virtual machines
- I/Os of eight Reference virtual machines
In this case, the one virtual machine uses the resources of eight Reference
virtual machines. To implement this one machine on a pool for 100
Reference virtual machines, the resources of eight Reference virtual
machines are consumed and resources for 92 Reference virtual machines
remain.
Example 3: Web server
The customer's web server needs to move into this virtual infrastructure. It
is currently running on a physical system with two CPUs and 8 GB of
memory. It uses 25 GB of storage and generates 50 IOPS during an
average busy cycle.
The following resources are required to virtualize this application:
- CPUs of two Reference virtual machines
- Memory of four Reference virtual machines
- Storage of one Reference virtual machine
- I/Os of two Reference virtual machines
In this case, the one virtual machine would use the resources of four
Reference virtual machines. If the configuration is implemented on a
resource pool for 100 Reference virtual machines, resources for 96
Reference virtual machines remain.
Example 4: Decision-support database
The database server for a customer's decision-support system needs to
move into this virtual infrastructure. It is currently running on a physical
system with 10 CPUs and 64 GB of memory. It uses 5 TB of storage and
generates 700 IOPS during an average busy cycle.
The following resources are required to virtualize this application:
- CPUs of 10 Reference virtual machines
- Memory of 32 Reference virtual machines
- Storage of 52 Reference virtual machines
- I/Os of 28 Reference virtual machines
In this case, the one virtual machine uses the resources of 52 Reference
virtual machines. If this configuration is implemented on a resource pool
for 100 Reference virtual machines, resources for 48 Reference virtual
machines remain.
Summary of examples
The four examples illustrate the flexibility of the resource pool model. In all
four cases, the workloads simply reduce the amount of available resources
in the pool. All four examples can be implemented on the same virtual
infrastructure with an initial capacity for 100 Reference virtual machines,
leaving resources for 34 Reference virtual machines in the pool, as shown
in Figure 15.
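The running pool balance across the four examples reduces to simple subtraction; a quick sketch of the accounting:

```python
# Reference VM pool accounting for the four example workloads.
pool_size = 100
consumed = {
    "custom-built application": 2,
    "point of sale system": 8,
    "web server": 4,
    "decision-support database": 52,
}
remaining = pool_size - sum(consumed.values())
print(remaining)  # 34
```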
Figure 15. Resource pool flexibility
In more advanced cases, there may be tradeoffs between memory and
I/O or other relationships where increasing the amount of one resource
decreases the need for another. In these cases, the interactions between
resource allocations become highly complex, and are outside the scope
of the document. Once the change in resource balance has been
examined and the new level of requirements is known, these virtual
machines can be added to the infrastructure using the method described
in the examples.
Implementing the reference architectures
The reference architectures require a set of hardware to be available for
the CPU, memory, network, and storage needs of the system. In this VSPEX
solution, these are presented as general requirements that are
independent of any particular implementation. This section describes
some considerations for implementing the requirements.
Resource types
The reference architectures define the hardware requirements for this
VSPEX solution in terms of the following basic types of resources:
- CPU resources
- Memory resources
- Brocade network resources
- Storage resources
This section describes the resource types, how to use them in the reference
architectures, and key considerations for implementing them in a
customer environment.
CPU resources
The architectures define the number of required CPU cores, rather than a
specific processor type or configuration. New deployments are expected
to use recent revisions of common processor technologies, which are
assumed to perform as well as, or better than, the systems used to
validate the solution.
In any running system, it is important to monitor the utilization of resources
and adapt as needed. The Reference virtual machine and required
hardware resources in the reference architectures assume that there are
no more than four virtual CPUs for each physical processor core (4:1 ratio).
In most cases, this provides an appropriate level of resources for the
hosted virtual machines; however, this ratio may not be appropriate in all
use cases. Monitor the CPU utilization at the hypervisor layer to determine
if more resources are required.
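As an illustration of the consolidation arithmetic, the 4:1 assumption maps virtual CPUs to a minimum physical core count as follows (the helper name is ours, not from the reference architecture):

```python
import math

VCPUS_PER_CORE = 4  # validated ratio: no more than four vCPUs per physical core

def min_physical_cores(total_vcpus: int, ratio: int = VCPUS_PER_CORE) -> int:
    """Minimum physical cores needed to host a given number of virtual CPUs."""
    return math.ceil(total_vcpus / ratio)

print(min_physical_cores(100))  # 25 cores for 100 single-vCPU Reference VMs
```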
Memory resources
Each virtual server in the reference architectures is defined to have 2 GB of
memory. In a virtual environment, it is common, often due to budget
constraints, to provision virtual machines with more memory than the
hypervisor physically has. This memory overcommitment technique takes
advantage of the fact that each virtual machine may not fully use the
memory allocated to it, so it can make business sense to oversubscribe
memory to some degree. The administrator is responsible for monitoring
the oversubscription rate so that the bottleneck does not shift from the
server to the storage subsystem through swapping.
This solution is validated with statically assigned memory and no
overcommitment of memory resources. If memory overcommitment is
used in a real-world environment, regularly monitor the system memory
utilization and associated page file I/O activity to ensure that a memory
shortfall does not cause unexpected results.
Brocade network resources
The reference architecture outlines the minimum needs of the system. If
additional bandwidth is needed, add capability at both the storage array
and the hypervisor host to meet the requirements. The options for
Brocade network connectivity on the server depend on the type of server
and whether it supports 1 GbE or 10 GbE connectivity. The storage arrays
include a number of network ports, with the option to add 10 GbE ports
using EMC FLEX I/O modules.
For reference purposes in the validated environment, EMC assumes that
each virtual machine generates 25 IOPS with an average size of 8 KB. This
means that each virtual machine generates at least 200 KB/s of traffic on
the storage network. For an environment rated for 100 virtual machines,
this comes to a minimum of approximately 20 MB/s, which is well within
the bounds of modern networks. However, this does not account for other
operations that need additional bandwidth, for example:
- User network traffic
- Virtual machine migration
- Administrative and management operations
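The bandwidth floor described above can be reproduced from the stated assumptions; a minimal sketch:

```python
# Minimum storage-network traffic implied by the reference workload.
iops_per_vm = 25   # I/Os per second per virtual machine
io_size_kb = 8     # average I/O size in KB
vm_count = 100     # environment rated for 100 virtual machines

per_vm_kb_s = iops_per_vm * io_size_kb      # 200 KB/s per virtual machine
total_mb_s = per_vm_kb_s * vm_count / 1000  # about 20 MB/s for the environment
print(per_vm_kb_s, total_mb_s)  # 200 20.0
```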
The requirements for each of these vary depending on how the
environment is being used. It is not practical to provide concrete numbers
in this context. However, the network described in the reference
architecture for each solution should be sufficient to handle average
workloads for the preceding use cases. The specific network layer
connectivity for the Brocade VDX Fabric solution is defined in Chapter 5.
Regardless of the network traffic requirements, always have at least two
physical network connections shared by a logical network so that a single
link failure does not affect the availability of the system. Design the
network so that the aggregate bandwidth in the event of a failure is
sufficient to accommodate the full workload.
Storage resources
The reference architectures contain layouts for the disks used in the
validation of the system. Each layout balances the available storage
capacity with the performance capability of the drives. There are a few
layers to consider when examining storage sizing. Specifically, the array
has a collection of disks assigned to a storage pool. From that storage
pool, you can provision datastores to the Microsoft Hyper-V cluster. Each
layer has a specific configuration that is defined for the solution and
documented in the deployment guide.
It is generally acceptable to replace drive types with a type that has more
capacity with the same performance characteristics or with ones that
have higher performance characteristics and the same capacity.
Similarly, it is acceptable to change the placement of drives in the drive
shelves in order to comply with updated or new drive shelf arrangements.
In other cases where there is a need to deviate from the proposed number
and type of drives specified, or the specified pool and datastore layouts,
ensure that the target layout delivers the same or greater resources to the
system.
Implementation summary
The requirements stated in the reference architectures are what EMC
considers the minimum set of resources needed to handle the workloads,
based on the stated definition of a reference virtual server. In any
customer implementation, the load of a system varies over time as users
interact with the system. If the customer virtual machines differ
significantly from the reference definition, the system may require
additional resources.
Quick assessment
An assessment of the customer environment helps ensure that you
implement the correct VSPEX solution. This section provides an easy-to-use
worksheet to simplify the sizing calculations, and help assess the customer
environment.
Summarize the applications that are planned for migration into the VSPEX
private cloud. For each application, determine the number of virtual
CPUs, the amount of memory, the required storage performance, the
required storage capacity, and the number of Reference virtual machines
required from the resource pool. Applying the reference workload
provides examples of this process.
Fill out a row in the worksheet for each application, as shown in Table 8.
Table 8. Blank worksheet row

Application   Row                        CPU        Memory   IOPS   Capacity   Equivalent
                                         (virtual   (GB)            (GB)       Reference
                                         CPUs)                                 virtual machines
Example       Resource requirements
application   Equivalent Reference
              virtual machines
Fill out the resource requirements for the application. The row requires
inputs on four different resources: CPU, Memory, IOPS, and Capacity.
CPU requirements
Optimizing CPU utilization is a significant goal for almost any virtualization
project. A simple view of the virtualization operation suggests a one-to-one
mapping between physical CPU cores and virtual CPU cores, regardless of
the physical CPU utilization. In reality, consider whether the target
application can effectively use all of the presented CPUs. Use a
performance-monitoring tool, such as Microsoft perfmon, to examine the
CPU Utilization counter for each CPU. If utilization is equivalent across
CPUs, implement that number of virtual CPUs when moving into the virtual
infrastructure. However, if some CPUs are used and some are not, consider
decreasing the number of virtual CPUs required.
In any operation involving performance monitoring, it is a best practice to
collect data samples for a period of time that includes all of the
operational use cases of the system. Use either the maximum or 95th
percentile value of the resource requirements for planning purposes.
Memory requirements
Server memory plays a key role in ensuring application functionality and
performance, and each server process has different targets for the
acceptable amount of available memory. When moving an application
into a virtual environment, consider the current memory available to the
system, and monitor the free memory with a performance-monitoring tool
such as perfmon to determine whether it is being used efficiently.
Storage performance requirements
The storage performance requirements for an application are usually the
least understood aspect of performance. Three components become
important when discussing the I/O performance of a system:
- The number of requests coming in, or IOPS
- The size of the requests, or I/O size (a request for 4 KB of data is significantly easier and faster to process than a request for 4 MB of data)
- The average I/O response time, or latency
I/O operations per second (IOPS)
The Reference virtual machine calls for 25 I/O operations per second. To
monitor this on an existing system, use a performance-monitoring tool such
as perfmon, which provides several relevant counters:
- Logical Disk\Disk Transfers/sec
- Logical Disk\Disk Reads/sec
- Logical Disk\Disk Writes/sec
The Reference virtual machine assumes a 2:1 read/write ratio. Use these
counters to determine the total number of IOPS and the approximate
ratio of reads to writes for the customer application.
I/O size
The I/O size is important because smaller I/O requests are faster and easier
to process than large I/O requests. The Reference virtual machine
assumes an average I/O request size of 8 KB, which is appropriate for a
large range of applications. Use perfmon or another appropriate tool to
monitor the Logical Disk\Avg. Disk Bytes/Transfer counter to see the
average I/O size. Most applications use I/O sizes that are even powers of
2 KB; 4 KB, 8 KB, 16 KB, and 32 KB are common. Because the performance
counter reports a simple average, however, it is common to see values
such as 11 KB or 15 KB instead of these common sizes.
The Reference virtual machine assumes an 8 KB I/O size. If the average
customer I/O size is less than 8 KB, use the observed IOPS number.
However, if the average I/O size is significantly higher, apply a scaling
factor to account for the large I/O size. A safe estimate is to divide the I/O
size by 8 KB and use that factor. For example, if the application is using
mostly 32 KB I/O requests, use a factor of four (32 KB / 8 KB = 4). If that
application is doing 100 IOPS at 32 KB, the factor indicates to plan for 400
IOPS since the Reference virtual machine assumed 8 KB I/O sizes.
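That safe estimate can be written down directly. In the sketch below, rounding the factor up and leaving sub-8 KB workloads unscaled are our assumptions about how the rule is applied:

```python
import math

REFERENCE_IO_KB = 8  # the Reference virtual machine assumes 8 KB I/Os

def scaled_iops(observed_iops: int, avg_io_kb: float) -> int:
    """Scale observed IOPS to 8 KB-equivalent IOPS for sizing purposes."""
    factor = max(1, math.ceil(avg_io_kb / REFERENCE_IO_KB))
    return observed_iops * factor

print(scaled_iops(100, 32))  # 400: a factor of four for 32 KB I/Os
print(scaled_iops(100, 6))   # 100: below 8 KB, use the observed number
```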
I/O latency
The average I/O response time, or I/O latency, is a measurement of how
quickly the storage system processes I/O requests. The VSPEX solutions are
designed to meet a target average I/O latency of 20 ms. The
recommendations in the Sizing guidelines section should allow the system
to continue to meet that target; however, it is worthwhile to monitor the
system and re-evaluate the resource pool utilization if needed. To monitor
I/O latency, use the Logical Disk\Avg. Disk sec/Transfer counter in
perfmon. If the I/O latency is continuously over the target, re-evaluate the
virtual machines in the environment to ensure that they are not using more
resources than intended.
Storage capacity requirements
The storage capacity requirement for a running application is usually the
easiest resource to quantify. Determine how much space on disk the
system is using, and add an appropriate factor to accommodate growth.
For example, to virtualize a server that is currently using 40 GB of a 200 GB
internal drive, with anticipated growth of approximately 20 percent over
the next year, 48 GB are required. EMC also recommends reserving space
for regular maintenance patches and swap files. In addition, some file
systems, such as Microsoft NTFS, degrade in performance if they become
too full.
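The worked example above reduces to one line of arithmetic; as a sketch:

```python
def capacity_with_growth(used_gb: float, growth_rate: float = 0.20) -> float:
    """Provisioned capacity: current usage plus an anticipated growth allowance."""
    return used_gb * (1 + growth_rate)

print(capacity_with_growth(40))  # 48.0 GB for 40 GB used and 20% growth
```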
Determining equivalent Reference virtual machines
With all of the resources defined, determine an appropriate value for the
equivalent Reference virtual machines line by using the relationships in
Table 9. Round all values up to the nearest whole number.
Table 9. Reference virtual machine resources

Resource   Value for Reference   Relationship between requirements and
           virtual machine       equivalent Reference virtual machines
CPU        1                     Equivalent Reference virtual machines = resource requirements
Memory     2                     Equivalent Reference virtual machines = (resource requirements)/2
IOPS       25                    Equivalent Reference virtual machines = (resource requirements)/25
Capacity   100                   Equivalent Reference virtual machines = (resource requirements)/100
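The Table 9 relationships, together with the round-up rule and the row-maximum rule used in the worksheet, can be sketched as a small helper (the function and field names are illustrative, not part of the solution):

```python
import math

# Per-resource allotment of one Reference virtual machine (Table 9)
REFERENCE_VM = {"cpu": 1, "memory_gb": 2, "iops": 25, "capacity_gb": 100}

def worksheet_row(cpu, memory_gb, iops, capacity_gb):
    """Equivalent Reference VMs per resource (rounded up) and the row maximum."""
    req = {"cpu": cpu, "memory_gb": memory_gb, "iops": iops, "capacity_gb": capacity_gb}
    per_resource = {name: math.ceil(req[name] / REFERENCE_VM[name]) for name in req}
    return per_resource, max(per_resource.values())

# Example 2 (point of sale): 4 vCPUs, 16 GB, 200 IOPS, 200 GB
per_resource, equivalent = worksheet_row(4, 16, 200, 200)
print(per_resource)  # {'cpu': 4, 'memory_gb': 8, 'iops': 8, 'capacity_gb': 2}
print(equivalent)    # 8
```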
For example, the point of sale system used in Example 2: Point of sale
system earlier in the paper requires 4 CPUs, 16 GB of memory, 200 IOPS and
200 GB of storage. This translates to four Reference virtual machines of
CPU, eight Reference virtual machines of memory, eight Reference virtual
machines of IOPS, and two Reference virtual machines of capacity. Table
10 demonstrates how that machine fits into the worksheet row. Use the
maximum value of the row to fill in the column for equivalent Reference
virtual machines. Eight Reference virtual machines are required in this
example.
Table 10. Example worksheet row

Application   Row                        CPU        Memory   IOPS   Capacity   Equivalent
                                         (virtual   (GB)            (GB)       Reference
                                         CPUs)                                 virtual machines
Example       Resource requirements      4          16       200    200
application   Equivalent Reference       4          8        8      2          8
              virtual machines
Figure 16. Required resource from the Reference virtual machine pool
Once the worksheet has been filled out for each application that the
customer wants to migrate into the virtual infrastructure, compute the sum
of the equivalent Reference virtual machines column on the right side of
the worksheet, as shown in Table 11, to calculate the total number of
Reference virtual machines required in the pool. In the example, the
results of the calculations from Table 9 are shown for clarity, along with
the values, rounded up to the nearest whole number, to use.
Table 11. Example applications

                                                    Server resources                  Storage resources
Application                                         CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent Reference virtual machines
Example application #1: Custom-built application
  Resource requirements                             1                    3             15     30
  Equivalent Reference virtual machines             1                    2             1      1               2
Example application #2: Point of sale system
  Resource requirements                             4                    16            200    200
  Equivalent Reference virtual machines             4                    8             8      2               8
Example application #3: Web server
  Resource requirements                             2                    8             50     25
  Equivalent Reference virtual machines             2                    4             2      1               4
Example application #4: Decision support database
  Resource requirements                             10                   64            700    5120 (5 TB)
  Equivalent Reference virtual machines             10                   32            28     52              52
Total equivalent Reference virtual machines                                                                   66
The VSPEX private cloud solutions define discrete resource pool sizes.
Figure 17 shows 34 Reference virtual machines still available after
applying all four example applications to the 100 virtual machine solution.
Figure 17. Aggregate resource requirements from the Reference virtual machine
pool
In the case of Table 11, the customer requires 66 virtual machines of
capability from the pool. Therefore, the 100 virtual machine resource pool
provides sufficient resources for the current needs as well as room for
growth.
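The pool-fit check described above can likewise be sketched in Python. This is an illustrative calculation using the equivalent Reference virtual machine values from Table 11 and the discrete 50 and 100 virtual machine pool sizes this solution defines:

```python
# Equivalent Reference virtual machines per application (from Table 11).
applications = {
    "custom-built application": 2,
    "point of sale system": 8,
    "web server": 4,
    "decision support database": 52,
}

POOL_SIZES = (50, 100)  # discrete VSPEX private cloud pool sizes

required = sum(applications.values())
# Choose the smallest pool that covers the requirement.
pool = next(size for size in POOL_SIZES if size >= required)
headroom = pool - required

print(required, pool, headroom)  # 66 100 34
```

The 34 Reference virtual machines of headroom match the room for growth shown in Figure 17.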
Fine tuning hardware resources
In most cases, the recommended hardware for servers and storage is sized
appropriately based on the process described. However, in some cases
there may be a requirement to further customize the hardware resources
that are available to the system. While a complete description of system
architecture is beyond the scope of this document, additional
customization can be done at this point.
Storage resources
In some applications, there is a need to separate application data from
other workloads. The storage layouts in the VSPEX architectures put all of
the virtual machines in a single resource pool. In order to achieve
workload separation, purchase additional disk drives for the application
workload and add them to a dedicated pool.
Do not reduce the size of the main resource pool, or the capability of the
pool, in order to support application isolation.
The storage layouts presented in the 50 and 100 virtual machine solutions
are designed to balance many different factors in terms of high
availability, performance, and data protection. Changing the
components of the pool can have significant and difficult-to-predict
impacts on other areas of the system.
Server resources
For the server resources in the VSPEX private cloud solution, it is possible to
customize the hardware resources for varying workloads. Figure 18 is an
example.
Figure 18. Customizing server resources
To achieve this customization, total the resource requirements for the
server components, as shown in the Server resource component totals row
of Table 12, which adds up the server resource requirements from all of
the applications in the table.
Table 12. Server resource component totals

                                                    Server resources                  Storage resources
Application                                         CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent Reference virtual machines
Example application #1: Custom-built application
  Resource requirements                             1                    3             15     30
  Equivalent Reference virtual machines             1                    2             1      1               2
Example application #2: Point of sale system
  Resource requirements                             4                    16            200    200
  Equivalent Reference virtual machines             4                    8             8      2               8
Example application #3: Web server
  Resource requirements                             2                    8             50     25
  Equivalent Reference virtual machines             2                    4             2      1               4
Example application #4: Decision support database
  Resource requirements                             10                   64            700    5120 (5 TB)
  Equivalent Reference virtual machines             10                   32            28     52              52
Total equivalent Reference virtual machines                                                                   66
Server resource component totals                    17                   155
In this example, the target architecture required 17 virtual CPUs and 155
GB of memory. This translates to five physical processor cores and 155 GB
of memory, plus 2 GB for the hypervisor on each physical server. In
contrast, the 100 Reference virtual machine resource pool documented in
the VSPEX solution calls for 200 GB of memory plus 2 GB for each physical
server to run the hypervisor, and at least 25 physical processor cores. In this
environment, the solution can be effectively implemented with fewer
server resources.
Note Keep high availability requirements in mind when customizing the
resource pool hardware.
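The vCPU-to-core translation above can be sketched as follows. This is a hedged illustration assuming the 4:1 vCPU-to-physical-core consolidation ratio implied by the example (17 vCPUs rounding up to 5 cores, and 100 Reference virtual machines requiring at least 25 cores); the helper name is ours:

```python
import math

VCPUS_PER_CORE = 4  # consolidation ratio implied by the example sizing

def physical_cores_needed(total_vcpus, vcpus_per_core=VCPUS_PER_CORE):
    """Round the required vCPU count up to whole physical cores."""
    return math.ceil(total_vcpus / vcpus_per_core)

# Server component total from Table 12: 17 virtual CPUs.
print(physical_cores_needed(17))   # 5
# Full 100 Reference virtual machine pool (1 vCPU each).
print(physical_cores_needed(100))  # 25
```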
Table 13 shows a blank worksheet.
Table 13. Blank customer worksheet

                                                    Server resources                  Storage resources
Application                                         CPU (virtual CPUs)   Memory (GB)   IOPS   Capacity (GB)   Equivalent Reference virtual machines

  Resource requirements
  Equivalent Reference virtual machines

  Resource requirements
  Equivalent Reference virtual machines

  Resource requirements
  Equivalent Reference virtual machines

  Resource requirements
  Equivalent Reference virtual machines

  Resource requirements
  Equivalent Reference virtual machines

Total equivalent Reference virtual machines
Server resource component totals
Chapter 5: VSPEX Configuration Guidelines
This chapter presents the following topics:
Overview
Pre-deployment tasks
Customer configuration data
Prepare and Configure Brocade VDX switches
Brocade VDX 6710 and 6720 Switch Configuration Summary
Brocade VDX 6710 Configuration
Brocade VDX 6720 Configuration
Prepare and configure storage array
Install and configure Hyper-V hosts
Install and configure SQL server database
System Center Virtual Machine Manager server deployment
Summary
Overview
The deployment process is divided into the stages shown in Table 14, along
with references to the sections where the relevant procedures are
provided. Upon completion of the deployment, the VSPEX infrastructure
should be ready for integration with the existing customer network and
server infrastructure.
Table 14. Deployment process overview

Stage   Description                                              Reference and documentation
1       Verify prerequisites                                     Pre-deployment tasks
2       Obtain the deployment tools                              Pre-deployment tasks
3       Gather customer configuration data                       Customer configuration data
4       Rack and cable the components                            Refer to the vendor documentation
5       Configure the switches and networks, and connect to      Prepare and Configure Brocade VDX switches
        the customer network
6       Install and configure the VNXe                           Prepare and configure storage array
7       Configure virtual machine datastores                     Prepare and configure storage array
8       Install and configure the servers                        Install and configure Hyper-V hosts
9       Set up SQL Server (used by SCVMM)                        Install and configure SQL server database
10      Install and configure SCVMM                              System Center Virtual Machine Manager server deployment
Pre-deployment tasks
Pre-deployment tasks include procedures that are not directly part of the
environment installation and configuration, but whose results are needed
at the time of installation. Examples of pre-deployment tasks are the
collection of hostnames, IP addresses, VLAN IDs, license keys, installation
media, and so on. Perform these tasks before the customer visit to
decrease the time required onsite.
Table 15. Tasks for pre-deployment

Task              Description                                                  Reference
Gather documents  Gather the related documents listed in Appendix C. These     Appendix C: EMC documentation
                  documents are used throughout this document to provide
                  details on setup procedures and deployment best
                  practices for the components of the solution.
Gather tools      Gather the required and optional tools for the               Table 16: Deployment prerequisites checklist
                  deployment. Use Table 16 to confirm that all equipment,
                  software, and appropriate licenses are available before
                  the deployment process.
Gather data       Collect the customer-specific configuration data for         Appendix B
                  networking, naming, and required accounts. Enter this
                  information into Appendix B for reference during the
                  deployment process.
Deployment prerequisites
Table 16 itemizes the hardware, software, and license requirements to
configure the solution. For additional information on hardware and
software, refer to Table 2 and Table 3.
Table 16. Deployment prerequisites checklist

Requirement   Description                                                      Reference
Hardware      Physical servers to host virtual servers: sufficient physical    Table 2: Solution hardware
              server capacity to host 50 or 100 virtual machines.
              Windows Server 2012 servers to host virtual infrastructure
              servers.
              Note: This requirement may be covered by the existing
              infrastructure.
              Brocade VDX fabric networking: switch port capacity and
              capabilities as required by the virtual server infrastructure,
              with sufficient 1 GbE or 10 GbE ports to connect the physical
              servers hosting 50-100 virtual machines, the VNXe, and the
              customer infrastructure.
              EMC VNXe 3150 (50 virtual machines) or VNXe 3300 (100
              virtual machines) multiprotocol storage array with the
              required disk layout.
Software      SCVMM 2012 installation media.
              Microsoft Windows Server 2012 installation media.
              Microsoft SQL Server 2012 or newer installation media.
              Note: This requirement may be covered by the existing
              infrastructure.
Licenses      Microsoft Windows Server 2012 Datacenter Edition license
              keys.
              Note: This requirement may be covered by an existing
              Microsoft Key Management Server (KMS).
              Microsoft SQL Server license key.
              Note: This requirement may be covered by the existing
              infrastructure.
              SCVMM 2012 license keys.
Customer configuration data
To reduce the onsite time, assemble information such as IP addresses and
hostnames as part of the planning process.
Appendix B provides a table to maintain a record of relevant information.
Expand or contract this form as required. Information may be added,
modified, and recorded as deployment progresses.
Additionally, complete the VNXe Series Configuration Worksheet, available
on EMC Online Support, to provide the most comprehensive array-specific
information.
Prepare and Configure Brocade VDX switches
This section provides the requirements for Brocade network infrastructure
needed to support this architecture. For validated levels of performance
and high availability, this solution requires the switching capacity that is
provided in Appendix B. Brocade VCS (Virtual Cluster Switching) Fabric
technology is an Ethernet technology that allows you to create flatter,
virtualized, and converged data center networks. Brocade VCS Fabric
technology is elastic, permitting you to start small, typically at the access
layer, and expand your network at your own pace.
Brocade VCS Fabric technology is built upon three core design principles:
- Performance
- Automation
- Resiliency
The Brocade VDX switches with VCS Fabric technology are deployed
redundantly to form an Ethernet fabric for the VSPEX networking layer. To
the rest of the network, the Ethernet fabric appears as a single logical
chassis; the member switches exchange information with each other using
distributed intelligence.
This section describes the Brocade VDX with VCS Fabrics network switch
configuration procedure for the infrastructure connectivity between
Microsoft Hyper-V servers, existing customer network, and iSCSI attached
VNXe storage. At the point of deployment, the new equipment is being
connected to the existing customer network and potentially existing
compute servers with either 1 or 10 GbE attached NICs.
Brocade VDX Switch Platform Considerations
This VSPEX Private Cloud solution is designed with the VDX 6710 for 1 GbE
attached Microsoft Hyper-V servers and the VDX 6720 (24/60 port) switches
for 10 GbE attached Microsoft Hyper-V servers, and is enabled with VCS
Fabric technology.
The VCS Fabric technology has the following characteristics:
- It is an Ethernet fabric switched network. The Ethernet fabric utilizes
  an emerging standard called Transparent Interconnection of Lots of
  Links (TRILL) as the underlying technology.
- All switches automatically know about each other and all connected
  physical and logical devices.
- All paths in the fabric are available. Traffic is always distributed
  across equal-cost paths; traffic from a source to a destination can
  travel across multiple paths.
- Traffic travels across the shortest path.
- If a single link fails, traffic is automatically rerouted to other
  available paths. For example, if one of the links in Active Path #1 goes
  down, traffic is seamlessly rerouted across Active Path #2.
- Spanning Tree Protocol (STP) is not necessary because the Ethernet
  fabric appears as a single logical switch to connected servers, devices,
  and the rest of the network.
- Traffic can be switched from one Ethernet fabric path to another
  Ethernet fabric path.
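The equal-cost multipath behavior described above can be illustrated with a small flow-hashing sketch. This is illustrative Python, not Brocade code; the path and host names are hypothetical, and a real fabric hashes on frame headers in hardware:

```python
import hashlib

# Two equal-cost paths through the fabric (names are illustrative).
paths = ["Active Path #1", "Active Path #2"]

def pick_path(src, dst, available=None):
    """Hash the flow so a given source/destination pair sticks to one path."""
    available = available or paths
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    return available[digest[0] % len(available)]

flow_path = pick_path("host-a", "vnxe-storage")
# Per-flow stickiness: the same flow always maps to the same path.
assert pick_path("host-a", "vnxe-storage") == flow_path

# If the chosen path fails, the flow is rerouted to a surviving path.
surviving = [p for p in paths if p != flow_path]
rerouted = pick_path("host-a", "vnxe-storage", surviving)
assert rerouted != flow_path
```

Hashing keeps each flow on one path (preserving frame order) while spreading different flows across all equal-cost paths.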
VCS is enabled by default on the Brocade VDX 6710. If VCS has been
disabled, re-enable it with the following command; configure the VCS ID
and the RBridge ID only if VCS needed to be re-enabled.
switch# vcs enable
In addition, it is important to consider the airflow direction of the
switches. Brocade VDX switches are available in both port-side exhaust
and port-side intake configurations; choose the appropriate airflow
direction for your hot-aisle/cold-aisle design. For more information, refer
to the Brocade VDX 6710 Hardware Reference Manual or the Brocade
VDX 6720 Hardware Reference Manual, as provided in Appendix B.
Prepare Brocade Network Infrastructure
The infrastructure network requires redundant network links for each
Windows host, the storage array, the switch interconnect ports, and the
switch uplink ports. This configuration provides both redundancy and
additional network bandwidth. This configuration is required regardless of
whether the network infrastructure for the solution already exists or is being
deployed alongside other components of the solution.
Figure 19 shows a sample redundant Ethernet infrastructure for this solution.
The diagram illustrates the use of redundant switches and links to ensure
that no single points of failure exist in the network connectivity.
Figure 19. Sample Ethernet network architecture
Complete Network Cabling
Connect Brocade switch ports to all servers, storage arrays, inter-switch
links, and uplinks. Ensure that all solution servers, storage arrays, switch
interconnects, and switch uplinks have redundant connections. Ensure
that the uplinks are connected to the existing customer network.
The Brocade VDX 6710 Switch Installation Guide and the Brocade VDX
6720 Switch Installation Guide provide instructions on racking, cabling, and
powering the VDX 6710/6720. There are no specific setup steps for this
solution.
Note: At this point, the new equipment is being connected to the existing
customer network. Be careful that unforeseen interactions do not
cause service issues on the customer network.
Brocade VDX 6710 and 6720 Switch Configuration Summary
Listed below is the procedure required to deploy the Brocade VDX 6710
and VDX 6720 switches with VCS Fabric Technology in the VSPEX Private
Cloud Solution from 50 to 100 Virtual Machines.
Table 17. Brocade VDX 6710 and VDX 6720 Configuration Steps
Brocade VDX Configuration Steps
Step 1 Verify VDX NOS Licenses
Step 2 Assign and Verify VCS ID and RBridge ID
Step 3 Assign Switch Name
Step 4 VCS Fabric ISL Port Configuration
Step 5 Create required VLANs
Step 6 Create vLAG for Microsoft Server
Step 7 Configure Switch Interfaces for VNXe
Step 8 Connecting the VCS Fabric to customer’s infrastructure
Step 9 Configure MTU and Jumbo Frames
Step 10 AMPP Configuration for live migrations
See the end of this chapter for related documents.
Brocade VDX 6710 Configuration
Use the following procedure to configure a VDX 6710 based fabric.
During the switch configuration process, some of the configuration
commands may require a switch restart. To save settings across restarts,
run the copy running-config startup-config command after making any
configuration changes.
Note: Before running a command that requires a switch restart, back up
the switch configuration using the copy running-config startup-config
command, as shown:
BRCD6710# copy running-config startup-config
This operation will modify your startup configuration. Do you
want to continue? [y/n]:y
Step 1: Verify VDX NOS Licenses
Before starting the switch configurations, make sure you have the required
licenses available for the VDX 6710 Switches. In the VSPEX Private Cloud
offering for up to 100 Virtual Machines, the Brocade VCS Fabric license is
built into NOS.
Managing Licenses
The following management tasks and associated commands apply to
both permanent and temporary licenses.
A. Displaying the Switch License ID
The switch license ID identifies the switch for which the license is valid. You
will need the switch license ID when you activate a license key, if
applicable.
To display the switch license ID, enter the show license id command in the
privileged EXEC mode, as shown.
BRCD6710# show license id
Rbridge-Id License ID
===============================================================
00:00:05:33:54:C6:3E
B. Displaying a License
You can display installed licenses with the show license command. The
following example displays a Brocade VDX 6710 licensed for a VCS fabric.
This configuration does not include FCoE features.
BRCD6710# show license
rbridge-id: 21
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
VCS Fabric license
Feature name:VCS_FABRIC
Refer to the Network OS Administrator's Guide Supporting Network OS
v3.0.1 in Appendix C for additional licensing-related information.
Step 2: Assign and Verify VCS ID and RBridge ID
Assign every switch in a VCS fabric the same VCS Fabric ID (VCS ID) and a
unique RBridge ID. The VCS ID is similar to a Fabric ID in FC fabrics and the
RBridge ID is similar to a Domain ID. The default VCS ID is set to 1 on each
VDX switch so it does not need to be changed in a one-cluster
implementation. The RBridge ID is also set to 1 by default on each VDX
switch, but each switch needs its own unique ID.
Value range for RBridge ID is 1-239.
Value range for VCS ID is 1-8192.
Assign the RBridge ID, as shown
BRCD6710# vcs rbridge-id 21
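The documented ID ranges can be captured in a small pre-check, for instance when scripting fabric bring-up. This is a hypothetical helper of ours, not part of NOS; only the ranges themselves come from the text above:

```python
def validate_vcs_ids(vcs_id, rbridge_id):
    """Check VCS ID and RBridge ID against the documented ranges."""
    if not 1 <= vcs_id <= 8192:
        raise ValueError(f"VCS ID {vcs_id} is outside the range 1-8192")
    if not 1 <= rbridge_id <= 239:
        raise ValueError(f"RBridge ID {rbridge_id} is outside the range 1-239")
    return True

# Matches the example fabric: default VCS ID 1, RBridge ID 21.
print(validate_vcs_ids(1, 21))  # True
```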
Note: Changing the RBridge ID requires a switch restart to clear any
existing configuration on the switch. Before changing the VCS ID or
the RBridge ID, back up the switch configuration using the
copy running-config startup-config command.
After assigning a VCS or RBridge ID, verify the configuration using the
“show vcs” command. Please note that the correct Config Mode for VCS
is “Local-Only,” as shown:
BRCD6710# show vcs
Config Mode: Local-Only
VCS ID: 1
Total Number of Nodes 2
Rbridge-Id WWN Management IP Status Host Name
21 >10:00:00:05:33:52:21:8A* 10.246.54.145 Online VDX-6710-21
22 10:00:00:05:33:51:A9:E5 10.246.54.146 Online VDX-6710-22
“>” denotes coordinator or principal switch.
“*” denotes local switch.
Step 3: Assign Switch Name
Every switch is assigned the default host name of sw0. Use the
switch-attributes command to set a meaningful host name, as shown:
BRCD6710(config)# switch-attributes 21 host-name BRCD6710-RB21
Note: To save settings across restarts run the copy running-config startup-config
command after making any configuration changes.
Step 4: VCS Fabric ISL Port Configuration
The VDX platform comes preconfigured with a default port configuration
that enables ISL and Trunking for easy and automatic VCS fabric
formation. However, for edge port devices the port configuration requires
editing to accommodate specific connections.
The interface format is: rbridge id/slot/port number
For example: 21/0/49
The default port configuration for the 10GbE ports can be seen with the
show running-configuration command, as shown:
BRCD6710# show running-configuration interface TenGigabitEthernet 21/0/49
!
interface TenGigabitEthernet 21/0/49
fabric isl enable
no shutdown
!
<truncated output>
There are two types of ports in a VCS fabric: ISL ports and edge ports.
ISL ports connect VCS fabric switches, whereas edge ports connect to
end devices or to switches and routers that are not in VCS fabric mode.
Figure 20. VCS Fabric port types
Configuring Fabric ISLs and Trunks
Brocade ISLs connect VDX switches in VCS mode. All ISL ports connected
to the same neighbor VDX switch attempt to form a trunk. Trunk formation
requires that all ports between the switches are set to the same speed and
are part of the same port group.
The recommendation is to have at least two trunks with at least two links in
a solution, but the number of required trunks depends on I/O requirements
and the switch model. The maximum number of ports allowed per trunk
group is normally eight, but the VDX 6710 only has 6 ports that can be used
as fabric ISLs. Shown below are the port groups for each VDX platform.
Depending on the platform solution and bandwidth requirements, it may
be necessary to increase the number of trunks or links per trunk.
Figure 21. VDX 6710-54
As shown in Figure 21, ports 49-54 on the VDX 6710 are 10G ports and form
a port group. It is recommended that the VDXs in the VSPEX architecture
have Fabric ISLs between them. Between two VDX 6710s, this can be
achieved by connecting cables between any two 10G ports on the
switches. The ISLs are self-forming. You can use the fabric isl enable, fabric
trunk enable, no fabric isl enable, and no fabric trunk enable commands
to toggle the port states, if needed. Below is the running configuration of
an ISL port on RB21, as an example.
BRCD6710# show running-config interface TenGigabitethernet 21/0/49
interface TenGigabitEthernet 21/0/49
fabric isl enable
fabric trunk enable
no shutdown
Verify Fabric ISL and Trunk Configuration
BRCD6710-RB21# show fabric isl
Rbridge-id: 21 #ISLs: 2
Src Src Nbr Nbr
Index Interface Index Interface Nbr-WWN BW Trunk Nbr-Name
-----------------------------------------------------------------
49 Te 21/0/49 49 Te 22/0/49 10:00:00:05:33:40:31:93 20G Yes
"BRCD6710-RB22"
BRCD6710-RB21# show fabric islports
Name: BRCD6710-RB21
State: Online
Role: Fabric Subordinate
VCS Id: 1
Config Mode:Local-Only
Rbridge-id: 21
WWN: 10:00:00:05:33:6d:7f:77
FCF MAC: 00:05:33:6d:7f:77
Index Interface State Operational State
=================================================================
1 gi 21/0/1 Down
2 gi 21/0/2 Down
3 gi 21/0/3 Down
Output Truncated
49 Te 21/0/49 Up ISL (Trunk port, Primary is Te 21/0/50)
50 Te 21/0/50 Up ISL 10:00:00:05:33:00:77:80 "BRCD6710-RB22"
(upstream)(Trunk Primary)
BRCD6710-RB21# show fabric trunk
Rbridge-id: 21
Trunk Src Source Nbr Nbr
Group Index Interface Index Interface Nbr-WWN
-----------------------------------------------------------------
1 49 Te 21/0/49 49 Te 22/0/49 10:00:00:05:33:6F:27:57
1 50 Te 21/0/50 50 Te 22/0/50 10:00:00:05:33:6F:27:57
Step 5: Create required VLANs
The steps in this section provide guidelines for creating the required
VLANs, as listed below:

VLAN Name          VLAN ID   VLAN Description
Management VLAN    10        Management traffic
Storage VLAN       20        iSCSI traffic
Cluster VLAN       30        Cluster live migration traffic
To create a VLAN interface, perform the following steps from privileged
EXEC mode.
1. Enter the configure terminal command to access global
configuration mode.
BRCD6710-RB21# configure terminal
Entering configuration mode terminal
BRCD6710-RB21(config)#
2. Enter the interface VLAN command to assign the VLAN interface
number.
BRCD6710-RB21(config)# interface Vlan 20
BRCD6710-RB21(config-Vlan-20)#
3. Create the other required VLANs as described in the table above. To
view the defined VLANs on the RBridge, use the show vlan brief command.
BRCD6710-RB21# show vlan brief
Total Number of VLANs configured : 4
VLAN Name State Ports
(F)-FCoE, (u)-Untagged, (t)-Tagged, (c)-Converged
======== ================================================
1 default ACTIVE Gi 21/0/27(t) Po 44(t) Po 55(t)
10 VLAN0010 ACTIVE Gi 21/0/27(t) Po 44(t) Po 55(t)
20 VLAN0020 ACTIVE Gi 21/0/27(t) Po 44(t) Po 55(t)
30 VLAN0030 ACTIVE Gi 21/0/27(t) Po 44(t) Po 55(t)
Figure 22. Creating VLANs
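For repeatable deployments, the VLAN creation commands above can be generated from the VLAN plan rather than typed by hand. This is our own illustrative helper, not a Brocade tool; the plan dictionary mirrors the VLAN table in Step 5 of Table 17:

```python
# VLAN plan (IDs and purposes as given in the VLAN table).
VLAN_PLAN = {10: "Management", 20: "Storage (iSCSI)", 30: "Cluster live migration"}

def vlan_commands(plan):
    """Emit the 'interface Vlan <id>' commands entered in global config mode."""
    return [f"interface Vlan {vlan_id}" for vlan_id in sorted(plan)]

for command in vlan_commands(VLAN_PLAN):
    print(command)
```

Generating the commands from one source of truth keeps the switch configuration consistent with the documented VLAN plan.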
Step 6: Create vLAG for Microsoft Server
1. Configure vLAG Port-channel Interface on Brocade VDX 6710-RB21 for
Host_A and Host_B.
BRCD6710-RB21# configure terminal
BRCD6710-RB21(config)# interface Port-channel 44
BRCD6710-RB21(config-Port-channel-44)# mtu 9216
BRCD6710-RB21(config-Port-channel-44)# speed 1000
BRCD6710-RB21(config-Port-channel-44)# description Host_A-vLAG-44
BRCD6710-RB21(config-Port-channel-44)# switchport
BRCD6710-RB21(config-Port-channel-44)# switchport mode trunk
BRCD6710-RB21(config-Port-channel-44)# switchport trunk allowed
vlan all
BRCD6710-RB21(config-Port-channel-44)# no shutdown
BRCD6710-RB21# configure terminal
BRCD6710-RB21(config)# interface Port-channel 55
BRCD6710-RB21(config-Port-channel-55)# mtu 9216
BRCD6710-RB21(config-Port-channel-55)# speed 1000
BRCD6710-RB21(config-Port-channel-55)# description Host_B-vLAG-55
BRCD6710-RB21(config-Port-channel-55)# switchport
BRCD6710-RB21(config-Port-channel-55)# switchport mode trunk
BRCD6710-RB21(config-Port-channel-55)# switchport trunk allowed
vlan all
BRCD6710-RB21(config-Port-channel-55)# no shutdown
2. Configure interfaces GigabitEthernet 21/0/10 and 21/0/11 on
Brocade VDX 6710-RB21.
BRCD6710-RB21# configure terminal
BRCD6710-RB21(config)# interface GigabitEthernet 21/0/10
BRCD6710-RB21(conf-if-gi-21/0/10)# description Host_A-vLAG-44
BRCD6710-RB21(conf-if-gi-21/0/10)# channel-group 44 mode active
type standard
BRCD6710-RB21(conf-if-gi-21/0/10)# lacp timeout long
BRCD6710-RB21(conf-if-gi-21/0/10)# no shutdown
BRCD6710-RB21# configure terminal
BRCD6710-RB21(config)# interface GigabitEthernet 21/0/11
BRCD6710-RB21(conf-if-gi-21/0/11)# description Host_B-vLAG-55
BRCD6710-RB21(conf-if-gi-21/0/11)# channel-group 55 mode active
type standard
BRCD6710-RB21(conf-if-gi-21/0/11)# lacp timeout long
BRCD6710-RB21(conf-if-gi-21/0/11)# no shutdown
3. Configure vLAG Port-channel Interface on Brocade VDX 6710-RB22 for
Host_A and Host_B.
BRCD6710-RB22# configure terminal
BRCD6710-RB22(config)# interface Port-channel 44
BRCD6710-RB22(config-Port-channel-44)# mtu 9216
BRCD6710-RB22(config-Port-channel-44)# speed 1000
BRCD6710-RB22(config-Port-channel-44)# description Host_A-vLAG-44
BRCD6710-RB22(config-Port-channel-44)# switchport
BRCD6710-RB22(config-Port-channel-44)# switchport mode trunk
BRCD6710-RB22(config-Port-channel-44)# switchport trunk allowed
vlan all
BRCD6710-RB22(config-Port-channel-44)# no shutdown
BRCD6710-RB22# configure terminal
BRCD6710-RB22(config)# interface Port-channel 55
BRCD6710-RB22(config-Port-channel-55)# mtu 9216
BRCD6710-RB22(config-Port-channel-55)# speed 1000
BRCD6710-RB22(config-Port-channel-55)# description Host_B-vLAG-55
BRCD6710-RB22(config-Port-channel-55)# switchport
BRCD6710-RB22(config-Port-channel-55)# switchport mode trunk
BRCD6710-RB22(config-Port-channel-55)# switchport trunk allowed
vlan all
BRCD6710-RB22(config-Port-channel-55)# no shutdown
4. Configure interfaces GigabitEthernet 22/0/10 and 22/0/11 on Brocade
VDX 6710-RB22.
BRCD6710-RB22# configure terminal
BRCD6710-RB22(config)# interface GigabitEthernet 22/0/10
BRCD6710-RB22(conf-if-gi-22/0/10)# description Host_A-vLAG-44
BRCD6710-RB22(conf-if-gi-22/0/10)# channel-group 44 mode active
type standard
BRCD6710-RB22(conf-if-gi-22/0/10)# lacp timeout long
BRCD6710-RB22(conf-if-gi-22/0/10)# no shutdown
VSPEX Configuration Guidelines
EMC® VSPEX™ with Brocade Networking Solutions for Private Cloud
Microsoft Windows Server 2012 with Hyper-V for up to 100 Virtual
Machines Enabled by Brocade VDX with VCS Fabric Technology,
EMC VNXe and EMC Next-Generation Backup
86
BRCD6710-RB22# configure terminal
BRCD6710-RB22(config)# interface GigabitEthernet 22/0/11
BRCD6710-RB22(conf-if-gi-22/0/11)# description Host_B-vLAG-55
BRCD6710-RB22(conf-if-gi-22/0/11)# channel-group 55 mode active type standard
BRCD6710-RB22(conf-if-gi-22/0/11)# lacp timeout long
BRCD6710-RB22(conf-if-gi-22/0/11)# no shutdown
5. Validate vLAG Port-channel Interface on Brocade VDX 6710-RB21 and
VDX 6710-RB22 to Host_A and Host_B.
BRCD6710-RB21# show interface Port-channel 44
Port-channel 44 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adee
Current address is 0005.448c.adee
Description: Host_A-vLAG-44
Interface index (ifindex) is 671088673
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 1000 Mbit
Allowed Member Speed : 1000 Mbit
BRCD6710-RB21# show interface Port-channel 55
Port-channel 55 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adee
Current address is 0005.448c.adee
Description: Host_B-vLAG-55
Interface index (ifindex) is 671088673
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 1000 Mbit
Allowed Member Speed : 1000 Mbit
BRCD6710-RB22# show interface Port-channel 44
Port-channel 44 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adce
Current address is 0005.448c.adce
Description: Host_A-vLAG-44
Interface index (ifindex) is 671088973
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 1000 Mbit
Allowed Member Speed : 1000 Mbit
BRCD6710-RB22# show interface Port-channel 55
Port-channel 55 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adee
Current address is 0005.448c.adee
Description: Host_B-vLAG-55
Interface index (ifindex) is 671088673
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 1000 Mbit
Allowed Member Speed : 1000 Mbit
6. Validate Interface GigabitEthernet 21/0/10 and 21/0/11 on Brocade
VDX6710-RB21 and Interface GigabitEthernet 22/0/10 and 22/0/11 on
Brocade VDX6710-RB22.
BRCD6710-RB21# show interface GigabitEthernet 21/0/10
GigabitEthernet 21/0/10 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_A-vLAG-44
Interface index (ifindex) is 671088673
MTU 9216 bytes
LineSpeed : 1000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
BRCD6710-RB21# show interface GigabitEthernet 21/0/11
GigabitEthernet 21/0/11 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_B-vLAG 55
Interface index (ifindex) is 671088673
MTU 9216 bytes
LineSpeed : 1000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
BRCD6710-RB22# show interface GigabitEthernet 22/0/10
GigabitEthernet 22/0/10 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_A-vLAG 44
Interface index (ifindex) is 671088973
MTU 9216 bytes
LineSpeed : 1000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
BRCD6710-RB22# show interface GigabitEthernet 22/0/11
GigabitEthernet 22/0/11 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_B-vLAG 55
Interface index (ifindex) is 671088973
MTU 9216 bytes
LineSpeed : 1000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
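When many interfaces must be validated, the status lines in the show interface transcripts above can also be checked programmatically from captured CLI output. The following is an illustrative Python sketch (not a NOS utility); it assumes the output has been saved to text in the format shown above:

```python
import re

def links_up(show_output: str) -> dict:
    """Map each interface named in 'show interface' output to True
    when both the link and the line protocol are reported up."""
    status = {}
    # Matches lines such as:
    #   GigabitEthernet 21/0/10 is up, line protocol is up (connected)
    pattern = re.compile(
        r"^(\S+ \d+/\d+/\d+) is (\w+), line protocol is (\w+)", re.M)
    for name, link, proto in pattern.findall(show_output):
        status[name] = (link == "up" and proto == "up")
    return status

sample = """GigabitEthernet 21/0/10 is up, line protocol is up (connected)
GigabitEthernet 21/0/11 is down, line protocol is down (link down)"""
print(links_up(sample))
```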
Step 7: Configure Switch Interfaces for VNXe
1. Configure Switch Interfaces for VNXe connections on RB21 and RB22.
BRCD6710-RB21# configure terminal
Entering configuration mode terminal
BRCD6710-RB21(config)# interface GigabitEthernet 21/0/27
BRCD6710-RB21(conf-if-gi-21/0/27)#
BRCD6710-RB21(conf-if-gi-21/0/27)# mtu 9216
BRCD6710-RB21(conf-if-gi-21/0/27)# description VNXe-Port-eth2
BRCD6710-RB21(conf-if-gi-21/0/27)# switchport
BRCD6710-RB21(conf-if-gi-21/0/27)# switchport mode trunk
BRCD6710-RB21(conf-if-gi-21/0/27)# switchport trunk allowed vlan all
BRCD6710-RB21(conf-if-gi-21/0/27)# switchport trunk tag native-vlan
BRCD6710-RB21(conf-if-gi-21/0/27)# qos flowcontrol tx on rx on
BRCD6710-RB21(conf-if-gi-21/0/27)# no shutdown
BRCD6710-RB21# configure terminal
Entering configuration mode terminal
BRCD6710-RB21(config)# interface GigabitEthernet 21/0/28
BRCD6710-RB21(conf-if-gi-21/0/28)#
BRCD6710-RB21(conf-if-gi-21/0/28)# mtu 9216
BRCD6710-RB21(conf-if-gi-21/0/28)# description VNXe-Port-eth4
BRCD6710-RB21(conf-if-gi-21/0/28)# switchport
BRCD6710-RB21(conf-if-gi-21/0/28)# switchport mode trunk
BRCD6710-RB21(conf-if-gi-21/0/28)# switchport trunk allowed vlan all
BRCD6710-RB21(conf-if-gi-21/0/28)# switchport trunk tag native-vlan
BRCD6710-RB21(conf-if-gi-21/0/28)# qos flowcontrol tx on rx on
BRCD6710-RB21(conf-if-gi-21/0/28)# no shutdown
BRCD6710-RB22# configure terminal
Entering configuration mode terminal
BRCD6710-RB22(config)# interface GigabitEthernet 22/0/27
BRCD6710-RB22(conf-if-gi-22/0/27)#
BRCD6710-RB22(conf-if-gi-22/0/27)# mtu 9216
BRCD6710-RB22(conf-if-gi-22/0/27)# description VNXe-Port-eth3
BRCD6710-RB22(conf-if-gi-22/0/27)# switchport
BRCD6710-RB22(conf-if-gi-22/0/27)# switchport mode trunk
BRCD6710-RB22(conf-if-gi-22/0/27)# switchport trunk allowed vlan all
BRCD6710-RB22(conf-if-gi-22/0/27)# switchport trunk tag native-vlan
BRCD6710-RB22(conf-if-gi-22/0/27)# qos flowcontrol tx on rx on
BRCD6710-RB22(conf-if-gi-22/0/27)# no shutdown
BRCD6710-RB22# configure terminal
Entering configuration mode terminal
BRCD6710-RB22(config)# interface GigabitEthernet 22/0/28
BRCD6710-RB22(conf-if-gi-22/0/28)#
BRCD6710-RB22(conf-if-gi-22/0/28)# mtu 9216
BRCD6710-RB22(conf-if-gi-22/0/28)# description VNXe-Port-eth5
BRCD6710-RB22(conf-if-gi-22/0/28)# switchport
BRCD6710-RB22(conf-if-gi-22/0/28)# switchport mode trunk
BRCD6710-RB22(conf-if-gi-22/0/28)# switchport trunk allowed vlan all
BRCD6710-RB22(conf-if-gi-22/0/28)# switchport trunk tag native-vlan
BRCD6710-RB22(conf-if-gi-22/0/28)# qos flowcontrol tx on rx on
BRCD6710-RB22(conf-if-gi-22/0/28)# no shutdown
2. Validate the GigabitEthernet interfaces on Brocade VDX 6710-RB21 and
VDX 6710-RB22 connecting to the VNXe.
BRCD6710-RB21# show interface gigabitethernet 21/0/27
GigabitEthernet 21/0/27 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.3392.6402
Current address is 0005.3392.6402
Fixed Copper RJ45 Media Present
Description: VNXe-Port-eth2
Interface index (ifindex) is 8993480844
MTU 9216 bytes
LineSpeed : 1000 Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 1w1d22h
Queueing strategy: fifo
Receive Statistics:
525356 packets, 45326840 bytes
Unicasts: 499599, Multicasts: 25311, Broadcasts: 297
64-byte pkts: 0, Over 64-byte pkts: 497559, Over 127-byte pkts: 26735
Over 255-byte pkts: 348, Over 511-byte pkts: 410, Over 1023-byte pkts: 0
Over 1518-byte pkts(Jumbo): 304
Runts: 0, Jabbers: 0, CRC: 0, Overruns: 0
Errors: 149, Discards: 0
Transmit Statistics:
639020 packets, 53714014 bytes
Unicasts: 477953, Multicasts: 34663, Broadcasts: 126404
Underruns: 0
Errors: 0, Discards: 0
Rate info (interval 299 seconds):
Input 0.000000 Mbits/sec, 0 packets/sec, 0.00% of line-rate
Output 0.000512 Mbits/sec, 1 packets/sec, 0.00% of line-rate
Time since last interface status change: 1d14h02m
BRCD6710-RB22# show interface gigabitethernet 22/0/27
GigabitEthernet 22/0/27 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.3393.8364
Current address is 0005.3393.8364
Fixed Copper RJ45 Media Present
Description: VNXe-Port-eth3
Interface index (ifindex) is 4698513548
MTU 9216 bytes
LineSpeed : 1000 Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 1w1d22h
Queueing strategy: fifo
Receive Statistics:
5281 packets, 670248 bytes
Unicasts: 1, Multicasts: 5191, Broadcasts: 89
64-byte pkts: 0, Over 64-byte pkts: 90, Over 127-byte pkts: 5191
Over 255-byte pkts: 0, Over 511-byte pkts: 0, Over 1023-byte pkts: 0
Over 1518-byte pkts(Jumbo): 0
Runts: 0, Jabbers: 0, CRC: 0, Overruns: 0
Errors: 0, Discards: 0
Transmit Statistics:
495890 packets, 61240812 bytes
Unicasts: 88, Multicasts: 455892, Broadcasts: 39910
Underruns: 0
Errors: 0, Discards: 0
Rate info (interval 299 seconds):
Input 0.000000 Mbits/sec, 0 packets/sec, 0.00% of line-rate
Output 0.000000 Mbits/sec, 0 packets/sec, 0.00% of line-rate
Time since last interface status change: 1d08h36m
Step 8: Connecting the VCS Fabric to an existing Infrastructure through Uplinks
Brocade VDX 6710 switches can be uplinked to make the fabric accessible
from an existing network infrastructure. On VDX 6710 platforms, use the
10G uplinks (ports 49-54) for this purpose. The uplink configuration
should match whether the customer’s network uses tagged or untagged
traffic.
The following example can be used as a guideline for connecting the VCS
fabric to an existing infrastructure network:
Figure 23. Example VCS/VDX network topology with Infrastructure connectivity
Creating virtual link aggregation groups (vLAGs) to the Infrastructure
Network
Create vLAGs from each RBridge to the infrastructure switches that in turn
provide access to resources at the core network.
This example illustrates the configuration for RB21 and RB22.
1. Use the channel-group command to configure interfaces as
members of a port channel to the infrastructure switches that
interface to the core. This example uses port channel 4 on Grp1,
RB21.
BRCD6710-RB21# configure terminal
BRCD6710-RB21(config)# in te 21/0/49
BRCD6710-RB21(conf-if-te-21/0/49)# channel-group 4 mode passive type standard
BRCD6710-RB21(conf-if-te-21/0/49)# in te 21/0/50
BRCD6710-RB21(conf-if-te-21/0/50)# channel-group 4 mode passive type standard
2. Use the switchport command to configure the port channel
interface. In the following example, it is assigned to trunk mode and
allows all VLANs on the port channel.
BRCD6710-RB21(conf-if-te-21/0/50)# interface port-channel 4
BRCD6710-RB21(config-Port-channel-4)# switchport
BRCD6710-RB21(config-Port-channel-4)# switchport mode trunk
BRCD6710-RB21(config-Port-channel-4)# switchport trunk allowed
vlan all
BRCD6710-RB21(config-Port-channel-4)# no shutdown
3. Configure RB22 as shown above.
BRCD6710-RB22# configure terminal
BRCD6710-RB22(config)# in te 22/0/49
BRCD6710-RB22(conf-if-te-22/0/49)# channel-group 4 mode active type standard
BRCD6710-RB22(conf-if-te-22/0/49)# in te 22/0/50
BRCD6710-RB22(conf-if-te-22/0/50)# channel-group 4 mode active type standard
BRCD6710-RB22(config)# interface port-channel 4
BRCD6710-RB22(config-Port-channel-4)# switchport
BRCD6710-RB22(config-Port-channel-4)# switchport mode trunk
BRCD6710-RB22(config-Port-channel-4)# switchport trunk allowed vlan all
BRCD6710-RB22(config-Port-channel-4)# no shutdown
4. Use the do show port-chan command to confirm that the vLAG
comes up and is configured correctly.
Note: The LAG must be configured on the MLX MCT as well before the
vLAG can become operational.
BRCD6710-RB21(config-Port-channel-4)# do show port-chan 4
LACP Aggregator: Po 4 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
rbridge-id: 21 (2)
rbridge-id: 22 (2)
Admin Key: 0004 - Oper Key 0004
Partner System ID - 0x0001,01-80-c2-00-00-01
Partner Oper Key 30002
Member ports on rbridge-id 21:
Link: Te 21/0/49 (0x151810000F) sync: 1 *
Link: Te 21/0/50 (0x1518110010) sync: 1
BRCD6710-RB22(config-Port-channel-4)# do show port-channel 4
LACP Aggregator: Po 4 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
rbridge-id: 21 (2)
rbridge-id: 22 (2)
Admin Key: 0004 - Oper Key 0004
Partner System ID - 0x0001,01-80-c2-00-00-01
Partner Oper Key 30002
Member ports on rbridge-id 22:
Link: Te 22/0/49 (0x161810000F) sync: 1
Link: Te 22/0/50 (0x1618110010) sync: 1
Brocade recommends using jumbo frames for an iSCSI-based architecture
such as this. Set the MTU to 9216 on the switch ports used for storage
traffic (iSCSI, CIFS, or NFS). Consult the Brocade configuration guide for
additional details.
Configuring MTU
Note: This must be performed on all RBridges where a given port-channel
interface is located. In this example, interface Port-channel 44 is on
RBridge 21 and RBridge 22, so the configuration is applied on both
RBridge 21 and RBridge 22.
Example of enabling jumbo frame support on the applicable VDX
interfaces:
BRCD6710# configure terminal
BRCD6710(config)# interface Port-channel 44
BRCD6710(config-Port-channel-44)# mtu
(<NUMBER:1522-9216>) (9216): 9216
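After the MTU is set, jumbo connectivity is commonly verified end to end with a don’t-fragment ping sized to the largest payload the MTU permits: subtract the 20-byte IP header and 8-byte ICMP header from the IP MTU. A small Python sketch of that arithmetic (the helper name is illustrative; the MTU values come from this design):

```python
def max_icmp_payload(mtu: int, ip_header: int = 20, icmp_header: int = 8) -> int:
    """Largest ICMP echo payload that fits in one frame at the given IP MTU."""
    return mtu - ip_header - icmp_header

# Switch ports in this design use MTU 9216; hosts typically use MTU 9000.
print(max_icmp_payload(9216))  # 9188
print(max_icmp_payload(9000))  # 8972, e.g. on Linux: ping -M do -s 8972 <target>
```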
Brocade AMPP (Automatic Migration of Port Profiles) technology enhances
network-side virtual machine mobility by allowing VM migration across
physical switches, switch ports, and collision domains. In traditional
networks, such migrations usually require manual configuration changes,
because moving a VM across physical servers and switches can leave
network policies asymmetric: the port setting information must be
identical at the destination switch and port.
Brocade VCS fabrics automatically move the port profile in
synchronization with a VM moving to a different physical server. This allows
VMs to be migrated without the need to manually configure network ports
on the destination switch.
Refer to the AMPP Configuration section of the Network OS Administration
Guide for AMPP configuration and monitoring.
Step 9 - Configure MTU and Jumbo Frames
Step 10 - AMPP configuration for live migrations
Brocade VDX 6720 Configuration
Use the following procedure to configure a VDX 6720-based fabric.
During the switch configuration process, some configuration commands
may require a switch restart. To save settings across restarts, run the
copy running-config startup-config command after making any
configuration changes.
Note: Before running a command that requires a switch restart, back up
the switch configuration using the copy running-config startup-config
command, as shown:
BRCD6720# copy running-config startup-config
This operation will modify your startup configuration. Do you want
to continue? [y/n]:y
Before starting the switch configurations, make sure you have the required
licenses available for the VDX 6720 Switches. In the VSPEX Private Cloud
offering for up to 100 Virtual Machines, the Brocade VCS Fabric license is
built into NOS.
VDX 6720-24 and VDX 6720-60 have a Ports on Demand (PoD) incremental
license feature.
Managing Licenses
The following management tasks and associated commands apply to
both permanent and temporary licenses.
Note: License management in Network OS v3.0.1 is supported only on the local
RBridge. You cannot configure or display licenses on remote nodes in the
fabric.
A. Displaying the Switch License ID
The switch license ID identifies the switch for which the license is valid. You
will need the switch license ID when you activate a license key, if
applicable.
To display the switch license ID, enter the show license id command in the
privileged EXEC mode, as shown.
VDX6720# show license id
Rbridge-Id License ID
===================================================
22 10:00:00:05:33:51:A9:E5
Step 1: Verify VDX NOS Licenses
B. Displaying a License
You can display installed licenses with the show license command. The
following example displays a Brocade VDX 6720 licensed for a VCS fabric.
This configuration does not include FCoE features.
VDX6720# show license
rbridge-id: 22
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Ports on Demand license - additional 10 port upgrade
license
Feature name:PORTS_ON_DEMAND_1
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Ports on Demand license - additional 10 port upgrade
license
Feature name:PORTS_ON_DEMAND_2
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
VCS Fabric license
Feature name:VCS_FABRIC
Refer to the Network OS Administrator’s Guide Supporting Network OS v3.0.1,
listed in Appendix C, for additional licensing-related information.
Assign every switch in a VCS fabric the same VCS Fabric ID (VCS ID) and a
unique RBridge ID. The VCS ID is similar to a Fabric ID in FC fabrics and the
RBridge ID is similar to a Domain ID. The default VCS ID is set to 1 on each
VDX switch so it does not need to be changed in a one-cluster
implementation. The RBridge ID is also set to 1 by default on each VDX
switch, but if more than one switch is to be added to the fabric then each
switch needs its own unique ID.
Value range for RBridge ID is 1-239.
Value range for VCS ID is 1-8192.
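The documented ranges can be captured in a small validation helper before planning IDs for a larger fabric. This is an illustrative Python sketch, not part of NOS:

```python
def validate_vcs_ids(vcs_id: int, rbridge_id: int) -> None:
    """Check a proposed VCS ID / RBridge ID pair against the
    documented ranges (VCS ID 1-8192, RBridge ID 1-239)."""
    if not 1 <= vcs_id <= 8192:
        raise ValueError(f"VCS ID {vcs_id} outside 1-8192")
    if not 1 <= rbridge_id <= 239:
        raise ValueError(f"RBridge ID {rbridge_id} outside 1-239")

validate_vcs_ids(1, 21)  # the values used in this design: no error raised
```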
Assign the RBridge ID, as shown
BRCD6720# vcs rbridge-id 21
Note: Changing the RBridge ID requires a switch restart to clear any existing
configuration on the switch. Before changing the VCS ID or the RBridge ID
back up the switch configuration using the copy running-config startup-
config command.
Step 2: Assign and Verify VCS ID and RBridge ID
After assigning a VCS or RBridge ID, verify the configuration using the show
vcs command. Please note that the correct Config Mode for VCS is
“Local-Only,” as shown:
BRCD6720# show vcs
Config Mode: Local-Only
VCS ID 1
Total Number of Nodes: 2
Rbridge-Id WWN Management IP Status Host Names
21 >10:00:00:05:33:52:21:8A* 10.246.54.145 Online VDX-6720-21
22 10:00:00:05:33:51:A9:E5 10.246.54.146 Online VDX-6720-22
“>” denotes coordinator or principal switch. “*” denotes local switch
Every switch is assigned the default host name “sw0”. Change it to
something recognizable for easier management by using the
switch-attributes command, as shown:
BRCD6720# configure terminal
BRCD6720(config)# switch-attributes 21 host-name BRCD6720-RB21
Note: To save settings across restarts run the copy running-config startup-config
command after making any configuration changes.
The VDX platform comes preconfigured with a default port configuration
that enables ISL and trunking for easy and automatic VCS fabric
formation. However, for edge port devices the port configuration requires
editing to accommodate specific connections.
The interface format is:
rbridge-id/slot/port, for example 21/0/49
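As a small illustration, such a designation can be split into its components. The helper below is hypothetical (not a NOS utility):

```python
def parse_interface(name: str) -> tuple[int, int, int]:
    """Split an interface designation such as '21/0/49' into
    (rbridge_id, slot, port)."""
    rbridge, slot, port = (int(part) for part in name.split("/"))
    return rbridge, slot, port

print(parse_interface("21/0/49"))  # (21, 0, 49)
```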
The default port configuration for the 10GbE ports can be seen with the
show running-configuration command, as shown:
BRCD6720# show running-configuration interface
TenGigabitEthernet 21/0/49
!
interface TenGigabitEthernet 21/0/49
fabric isl enable
fabric trunk enable
no shutdown
!
….
<truncated output>
There are two types of ports in a VCS fabric: ISL ports and edge ports.
ISL ports connect VCS fabric switches, whereas edge ports connect to
end devices or to switches and routers not in VCS fabric mode.
Step 3: Assign Switch Name
Step 4: VCS Fabric ISL Port Configuration
Figure 24. Port types
Configuring Fabric ISLs and Trunks
Brocade ISLs connect VDX switches in VCS mode. All ISL ports connected
to the same neighbor VDX switch attempt to form a trunk. Trunk formation
requires that all ports between the switches are set to the same speed and
are part of the same port group.
The recommendation is to have at least two trunks with at least two links in
a solution, but the number of required trunks depends on I/O requirements
and the switch model. The maximum number of ports allowed per trunk
group is normally eight. Shown below are the port groups for the VDX 6720
platforms.
Depending on the platform solution and bandwidth requirements, it may
be necessary to increase the number of trunks or links per trunk.
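The trunk-formation rule above (members must share the same speed and the same port group, with a normal maximum of eight ports per trunk group) can be sketched as a grouping function. This is a hypothetical model of the rule in Python, not a NOS API:

```python
from collections import defaultdict

def form_trunks(links, max_per_trunk=8):
    """Group candidate ISLs by (speed, port_group) and split any
    group larger than max_per_trunk, mirroring the trunk rule."""
    groups = defaultdict(list)
    for port, speed, port_group in links:
        groups[(speed, port_group)].append(port)
    trunks = []
    for members in groups.values():
        for i in range(0, len(members), max_per_trunk):
            trunks.append(members[i:i + max_per_trunk])
    return trunks

links = [("Te 21/0/49", "10G", 1), ("Te 21/0/50", "10G", 1),
         ("Te 21/0/51", "10G", 2)]
print(form_trunks(links))  # [['Te 21/0/49', 'Te 21/0/50'], ['Te 21/0/51']]
```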
Figure 25. VDX 6720-24
Figure 26. VDX 6720-60
It is recommended that the VDXs in the VSPEX architecture have Fabric ISLs
between them. Between two VDX 6720 switches, this can be achieved by
connecting cables between any two 10G ports on the switches. The ISLs
are self-forming. You can use the fabric isl enable, fabric trunk enable, no
fabric isl enable, and no fabric trunk enable commands to toggle the port
states, if needed. The following example shows the running configuration
of an ISL port on RB21.
BRCD6720# show running-config interface TenGigabitethernet 21/0/49
interface TenGigabitEthernet 21/0/49
fabric isl enable
fabric trunk enable
no shutdown
Verify Fabric ISL and Trunk Configuration
BRCD6720-RB21# show fabric isl
Rbridge-id: 21 #ISLs: 2
Src Src Nbr Nbr
Index Interface Index Interface Nbr-WWN BW Trunk Nbr-Name
49 Te 21/0/49 49 Te 22/0/49 10:00:00:05:33:40:31:93 20G Yes
"BRCD6720-RB22"
BRCD6720-RB21# show fabric islports
Name: BRCD6720-RB21
State: Online
Role: Fabric Subordinate
VCS Id: 1
Config Mode:Local-Only
Rbridge-id: 21
WWN: 10:00:00:05:33:6d:7f:77
FCF MAC: 00:05:33:6d:7f:77
Index InterfaceState Operational State
1 Te 21/0/1 Down
2 Te 21/0/2 Down
3 Te 21/0/3 Down
Output Truncated
49 Te 21/0/49 Up ISL (Trunk port, Primary is Te 21/0/50)
50 Te 21/0/50 Up ISL 10:00:00:05:33:00:77:80 "BRCD6720-RB22"
(upstream)(Trunk Primary)
BRCD6720-RB21# show fabric trunk
Rbridge-id: 21
Trunk Src Source Nbr Nbr
Group Index Interface Index Interface Nbr-WWN
------------------------------------------------------------------
1 49 Te 21/0/49 49 Te 22/0/49 10:00:00:05:33:6F:27:57
1 50 Te 21/0/50 50 Te 22/0/50 10:00:00:05:33:6F:27:57
The steps in this section provide guidelines for creating the required
VLANs, listed below:

VLAN Name          VLAN ID   VLAN Description
Storage VLAN       20        This VLAN is for iSCSI traffic
Cluster VLAN       30        This VLAN is for cluster live migration
Management VLAN    10        Management VLAN
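The VLAN plan above can also be expressed as data and sanity-checked before configuration. The helper below is illustrative (the VLAN IDs are from the table; the function is not a NOS utility):

```python
VLAN_PLAN = {
    "Management": 10,  # Management VLAN
    "Storage": 20,     # iSCSI traffic
    "Cluster": 30,     # cluster live migration
}

def check_vlan_plan(plan: dict) -> None:
    """Ensure VLAN IDs are unique and within the standard 1-4094 range."""
    ids = list(plan.values())
    assert len(ids) == len(set(ids)), "duplicate VLAN IDs"
    assert all(1 <= vid <= 4094 for vid in ids), "VLAN ID out of range"

check_vlan_plan(VLAN_PLAN)  # no error: the plan is consistent
```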
To create a VLAN interface, perform the following steps from privileged
EXEC mode.
1. Enter the configure terminal command to access global
configuration mode.
BRCD6720-RB21# configure terminal
Entering configuration mode terminal
BRCD6720-RB21(config)#
2. Enter the interface vlan command to assign the VLAN interface
number.
BRCD6720-RB21(config)# interface Vlan 20
BRCD6720-RB21(config-Vlan-20)#
3. Create the other required VLANs as described in the table above. To
view the VLANs defined on the RBridge, use the show vlan brief command:
BRCD6720-RB21# show vlan brief
Total Number of VLANs configured : 4
VLAN Name State Ports
(F)-FCoE, (u)-Untagged, (t)-Tagged, (c)-Converged
================================================
1 default ACTIVE Te 21/0/27(t) Po 44(t) Po 55(t)
10 VLAN0010 ACTIVE Te 21/0/27(t) Po 44(t) Po 55(t)
20 VLAN0020 ACTIVE Te 21/0/27(t) Po 44(t) Po 55(t)
30 VLAN0030 ACTIVE Te 21/0/27(t) Po 44(t) Po 55(t)
Step 5: Create required VLANs
[Figure shows the topology: Brocade VDX6720-RB21 and VDX6720-RB22
joined by a 20G Brocade trunk, with 10G links forming vLAG Po 44 to
Host A and vLAG Po 55 to Host B, plus 10G connections to the VNXe.]
Figure 27. Creating VLANs
1. Configure vLAG Port-channel Interface on Brocade VDX 6720-
RB21 for Host_A and Host_B.
BRCD6720-RB21# configure terminal
BRCD6720-RB21(config)# interface Port-channel 44
BRCD6720-RB21(config-Port-channel-44)# mtu 9216
BRCD6720-RB21(config-Port-channel-44)# speed 10000
BRCD6720-RB21(config-Port-channel-44)# description Host_A-vLAG-44
BRCD6720-RB21(config-Port-channel-44)# switchport
BRCD6720-RB21(config-Port-channel-44)# switchport mode trunk
BRCD6720-RB21(config-Port-channel-44)# switchport trunk allowed vlan all
BRCD6720-RB21(config-Port-channel-44)# no shutdown
BRCD6720-RB21# configure terminal
BRCD6720-RB21(config)# interface Port-channel 55
BRCD6720-RB21(config-Port-channel-55)# mtu 9216
BRCD6720-RB21(config-Port-channel-55)# speed 10000
BRCD6720-RB21(config-Port-channel-55)# description Host_B-vLAG-55
BRCD6720-RB21(config-Port-channel-55)# switchport
BRCD6720-RB21(config-Port-channel-55)# switchport mode trunk
BRCD6720-RB21(config-Port-channel-55)# switchport trunk allowed vlan all
BRCD6720-RB21(config-Port-channel-55)# no shutdown
Step 6: Create vLAG for Microsoft Server
2. Configure Interface TenGigabitEthernet 21/0/10 and 21/0/11 on
Brocade VDX6720-RB21.
BRCD6720-RB21# configure terminal
BRCD6720-RB21(config)# interface TenGigabitEthernet 21/0/10
BRCD6720-RB21(conf-if-te-21/0/10)# description Host_A-vLAG-44
BRCD6720-RB21(conf-if-te-21/0/10)# channel-group 44 mode active type standard
BRCD6720-RB21(conf-if-te-21/0/10)# lacp timeout long
BRCD6720-RB21(conf-if-te-21/0/10)# no shutdown
BRCD6720-RB21# configure terminal
BRCD6720-RB21(config)# interface TenGigabitEthernet 21/0/11
BRCD6720-RB21(conf-if-te-21/0/11)# description Host_B-vLAG-55
BRCD6720-RB21(conf-if-te-21/0/11)# channel-group 55 mode active type standard
BRCD6720-RB21(conf-if-te-21/0/11)# lacp timeout long
BRCD6720-RB21(conf-if-te-21/0/11)# no shutdown
3. Configure vLAG Port-channel Interface on Brocade VDX 6720-
RB22 for Host_A and Host_B.
BRCD6720-RB22# configure terminal
BRCD6720-RB22(config)# interface Port-channel 44
BRCD6720-RB22(config-Port-channel-44)# mtu 9216
BRCD6720-RB22(config-Port-channel-44)# speed 10000
BRCD6720-RB22(config-Port-channel-44)# description Host_A-vLAG-44
BRCD6720-RB22(config-Port-channel-44)# switchport
BRCD6720-RB22(config-Port-channel-44)# switchport mode trunk
BRCD6720-RB22(config-Port-channel-44)# switchport trunk allowed vlan all
BRCD6720-RB22(config-Port-channel-44)# no shutdown
BRCD6720-RB22# configure terminal
BRCD6720-RB22(config)# interface Port-channel 55
BRCD6720-RB22(config-Port-channel-55)# mtu 9216
BRCD6720-RB22(config-Port-channel-55)# speed 10000
BRCD6720-RB22(config-Port-channel-55)# description Host_B-vLAG-55
BRCD6720-RB22(config-Port-channel-55)# switchport
BRCD6720-RB22(config-Port-channel-55)# switchport mode trunk
BRCD6720-RB22(config-Port-channel-55)# switchport trunk allowed vlan all
BRCD6720-RB22(config-Port-channel-55)# no shutdown
4. Configure Interface TenGigabitEthernet 22/0/10 and 22/0/11 on
Brocade VDX6720-RB22.
BRCD6720-RB22# configure terminal
BRCD6720-RB22(config)# interface TenGigabitEthernet 22/0/10
BRCD6720-RB22(conf-if-te-22/0/10)# description Host_A-vLAG-44
BRCD6720-RB22(conf-if-te-22/0/10)# channel-group 44 mode active type standard
BRCD6720-RB22(conf-if-te-22/0/10)# lacp timeout long
BRCD6720-RB22(conf-if-te-22/0/10)# no shutdown
BRCD6720-RB22# configure terminal
BRCD6720-RB22(config)# interface TenGigabitEthernet 22/0/11
BRCD6720-RB22(conf-if-te-22/0/11)# description Host_B-vLAG-55
BRCD6720-RB22(conf-if-te-22/0/11)# channel-group 55 mode active type standard
BRCD6720-RB22(conf-if-te-22/0/11)# lacp timeout long
BRCD6720-RB22(conf-if-te-22/0/11)# no shutdown
5. Validate vLAG Port-channel Interface on Brocade VDX 6720-RB21
and VDX 6720-RB22 to Host_A and Host_B.
BRCD6720-RB21# show interface Port-channel 44
Port-channel 44 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adee
Current address is 0005.448c.adee
Description: Host_A-vLAG-44
Interface index (ifindex) is 672088673
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 10000 Mbit
Allowed Member Speed : 10000 Mbit
BRCD6720-RB21# show interface Port-channel 55
Port-channel 55 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adee
Current address is 0005.448c.adee
Description: Host_B-vLAG-55
Interface index (ifindex) is 672088673
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 10000 Mbit
Allowed Member Speed : 10000 Mbit
BRCD6720-RB22# show interface Port-channel 44
Port-channel 44 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adce
Current address is 0005.448c.adce
Description: Host_A-vLAG-44
Interface index (ifindex) is 672088973
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 10000 Mbit
Allowed Member Speed : 10000 Mbit
BRCD6720-RB22# show interface Port-channel 55
Port-channel 55 is up, line protocol is up
Hardware is AGGREGATE, address is 0005.448c.adee
Current address is 0005.448c.adee
Description: Host_B-vLAG-55
Interface index (ifindex) is 672088673
Minimum number of links to bring Port-channel up is 1
MTU 9216 bytes
LineSpeed Actual : 10000 Mbit
Allowed Member Speed : 10000 Mbit
6. Validate interfaces TenGigabitEthernet 21/0/10 and 21/0/11 on Brocade
VDX 6720-RB21, and interfaces TenGigabitEthernet 22/0/10 and 22/0/11 on
Brocade VDX 6720-RB22.
BRCD6720-RB21# show interface TenGigabitEthernet 21/0/10
TenGigabitEthernet 21/0/10 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_A-vLAG-44
Interface index (ifindex) is 672088673
MTU 9216 bytes
LineSpeed : 10000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
BRCD6720-RB21# show interface TenGigabitEthernet 21/0/11
TenGigabitEthernet 21/0/11 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_B-vLAG 55
Interface index (ifindex) is 672088673
MTU 9216 bytes
LineSpeed : 10000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
BRCD6720-RB22# show interface TenGigabitEthernet 22/0/10
TenGigabitEthernet 22/0/10 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_A-vLAG 44
Interface index (ifindex) is 672088973
MTU 9216 bytes
LineSpeed : 10000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
BRCD6720-RB22# show interface TenGigabitEthernet 22/0/11
TenGigabitEthernet 22/0/11 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.448c.adb6
Current address is 0005.448c.adb6
Description: Host_B-vLAG 55
Interface index (ifindex) is 672088973
MTU 9216 bytes
LineSpeed : 10000 Mbit, Duplex: Full
Flowcontrol rx: off, tx: off
Step 7: Configure switch interfaces for VNXe
1. Configure the switch interfaces for the VNXe connections on RB21 and
RB22.
BRCD6720-RB21# configure terminal
Entering configuration mode terminal
BRCD6720-RB21(config)# interface TenGigabitEthernet 21/0/27
BRCD6720-RB21(conf-if-te-21/0/27)#
BRCD6720-RB21(conf-if-te-21/0/27)# mtu 9216
BRCD6720-RB21(conf-if-te-21/0/27)# description VNXe-Port-eth2
BRCD6720-RB21(conf-if-te-21/0/27)# switchport
BRCD6720-RB21(conf-if-te-21/0/27)# switchport mode trunk
BRCD6720-RB21(conf-if-te-21/0/27)# switchport trunk allowed vlan all
BRCD6720-RB21(conf-if-te-21/0/27)# switchport trunk tag native-vlan
BRCD6720-RB21(conf-if-te-21/0/27)# qos flowcontrol tx on rx on
BRCD6720-RB21(conf-if-te-21/0/27)# no shutdown
BRCD6720-RB21# configure terminal
Entering configuration mode terminal
BRCD6720-RB21(config)# interface TenGigabitEthernet 21/0/28
BRCD6720-RB21(conf-if-te-21/0/28)#
BRCD6720-RB21(conf-if-te-21/0/28)# mtu 9216
BRCD6720-RB21(conf-if-te-21/0/28)# description VNXe-Port-eth4
BRCD6720-RB21(conf-if-te-21/0/28)# switchport
BRCD6720-RB21(conf-if-te-21/0/28)# switchport mode trunk
BRCD6720-RB21(conf-if-te-21/0/28)# switchport trunk allowed vlan all
BRCD6720-RB21(conf-if-te-21/0/28)# switchport trunk tag native-vlan
BRCD6720-RB21(conf-if-te-21/0/28)# qos flowcontrol tx on rx on
BRCD6720-RB21(conf-if-te-21/0/28)# no shutdown
BRCD6720-RB22# configure terminal
Entering configuration mode terminal
BRCD6720-RB22(config)# interface TenGigabitEthernet 22/0/27
BRCD6720-RB22(conf-if-te-22/0/27)#
BRCD6720-RB22(conf-if-te-22/0/27)# mtu 9216
BRCD6720-RB22(conf-if-te-22/0/27)# description VNXe-Port-eth3
BRCD6720-RB22(conf-if-te-22/0/27)# switchport
BRCD6720-RB22(conf-if-te-22/0/27)# switchport mode trunk
BRCD6720-RB22(conf-if-te-22/0/27)# switchport trunk allowed vlan all
BRCD6720-RB22(conf-if-te-22/0/27)# switchport trunk tag native-vlan
BRCD6720-RB22(conf-if-te-22/0/27)# qos flowcontrol tx on rx on
BRCD6720-RB22(conf-if-te-22/0/27)# no shutdown
BRCD6720-RB22# configure terminal
Entering configuration mode terminal
BRCD6720-RB22(config)# interface TenGigabitEthernet 22/0/28
BRCD6720-RB22(conf-if-te-22/0/28)#
BRCD6720-RB22(conf-if-te-22/0/28)# mtu 9216
BRCD6720-RB22(conf-if-te-22/0/28)# description VNXe-Port-eth5
BRCD6720-RB22(conf-if-te-22/0/28)# switchport
BRCD6720-RB22(conf-if-te-22/0/28)# switchport mode trunk
BRCD6720-RB22(conf-if-te-22/0/28)# switchport trunk allowed vlan all
BRCD6720-RB22(conf-if-te-22/0/28)# switchport trunk tag native-vlan
BRCD6720-RB22(conf-if-te-22/0/28)# qos flowcontrol tx on rx on
BRCD6720-RB22(conf-if-te-22/0/28)# no shutdown
2. Validate TenGigabitEthernet Interface on Brocade VDX 6720-RB21
and VDX 6720-RB22 to VNXe.
BRCD6720-RB21# show interface TenGigabitEthernet 21/0/27
TenGigabitEthernet 21/0/27 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.3392.6402
Current address is 0005.3392.6402
Fixed Copper RJ45 Media Present
Description: VNXe-Port-eth2
Interface index (ifindex) is 8993480844
MTU 9216 bytes
LineSpeed : 10000 Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 1w1d22h
Queueing strategy: fifo
Receive Statistics:
525356 packets, 45326840 bytes
Unicasts: 499599, Multicasts: 25311, Broadcasts: 297
64-byte pkts: 0, Over 64-byte pkts: 497559, Over 127-byte
pkts: 26735
Over 255-byte pkts: 348, Over 511-byte pkts: 410, Over 1023-
byte pkts: 0
Over 1518-byte pkts(Jumbo): 304
Runts: 0, Jabbers: 0, CRC: 0, Overruns: 0
Errors: 149, Discards: 0
Transmit Statistics:
639020 packets, 53714014 bytes
Unicasts: 477953, Multicasts: 34663, Broadcasts: 126404
Underruns: 0
Errors: 0, Discards: 0
Rate info (interval 299 seconds):
Input 0.000000 Mbits/sec, 0 packets/sec, 0.00% of line-rate
Output 0.000512 Mbits/sec, 1 packets/sec, 0.00% of line-rate
Time since last interface status change: 1d14h02m
BRCD6720-RB22# show interface TenGigabitEthernet 22/0/27
TenGigabitEthernet 22/0/27 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0005.3393.8364
Current address is 0005.3393.8364
Fixed Copper RJ45 Media Present
Description: VNXe-Port-eth3
Interface index (ifindex) is 4698513548
MTU 9216 bytes
LineSpeed : 10000 Mbit, Duplex: Full
Flowcontrol rx: on, tx: on
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 1w1d22h
Queueing strategy: fifo
Receive Statistics:
5281 packets, 670248 bytes
Unicasts: 1, Multicasts: 5191, Broadcasts: 89
64-byte pkts: 0, Over 64-byte pkts: 90, Over 127-byte pkts:
5191
Over 255-byte pkts: 0, Over 511-byte pkts: 0, Over 1023-byte
pkts: 0
Over 1518-byte pkts(Jumbo): 0
Runts: 0, Jabbers: 0, CRC: 0, Overruns: 0
Errors: 0, Discards: 0
Transmit Statistics:
495890 packets, 61240812 bytes
Unicasts: 88, Multicasts: 455892, Broadcasts: 39910
Underruns: 0
Errors: 0, Discards: 0
Rate info (interval 299 seconds):
Input 0.000000 Mbits/sec, 0 packets/sec, 0.00% of line-rate
Output 0.000000 Mbits/sec, 0 packets/sec, 0.00% of line-rate
Time since last interface status change: 1d08h36m
Step 8: Connect the VCS fabric to an existing infrastructure through uplinks
Brocade VDX 6720 switches can be uplinked to an existing network
infrastructure. On VDX 6720 platforms, use the 10 GbE uplink ports
(ports 49-54) for this purpose. Configure the uplink to match whether the
customer's network uses tagged or untagged traffic.
The following example can be used as a guideline for connecting the VCS
fabric to an existing infrastructure network:
Figure 28. Example VCS/VDX network topology with Infrastructure connectivity
Creating virtual link aggregation groups (vLAGs) to the Infrastructure
Network
Create vLAGs from each RBridge to the infrastructure switches, which in
turn provide access to resources at the core network.
This example illustrates the configuration for RB21 and RB22.
1. Use the channel-group command to configure interfaces as
members of a port channel to the infrastructure switches that
interface to the core. This example uses port channel 4 on Grp1,
RB21.
BRCD6720-RB21# configure terminal
BRCD6720-RB21(config)# in te 21/0/49
BRCD6720-RB21(conf-if-te-21/0/49)# channel-group 4 mode passive type standard
BRCD6720-RB21(conf-if-te-21/0/49)# in te 21/0/50
BRCD6720-RB21(conf-if-te-21/0/50)# channel-group 4 mode passive type standard
2. Use the switchport command to configure the port channel
interface. Here we assign it to trunk mode and allow all VLANs on
the port channel.
BRCD6720-RB21(conf-if-te-21/0/50)# interface port-channel 4
BRCD6720-RB21(config-Port-channel-4)# switchport
BRCD6720-RB21(config-Port-channel-4)# switchport mode trunk
BRCD6720-RB21(config-Port-channel-4)# switchport trunk allowed vlan all
BRCD6720-RB21(config-Port-channel-4)# no shutdown
3. Configure RB22 as shown above.
BRCD6720-RB22# configure terminal
BRCD6720-RB22(config)# in te 22/0/49
BRCD6720-RB22(conf-if-te-22/0/49)# channel-group 4 mode active type standard
BRCD6720-RB22(conf-if-te-22/0/49)# in te 22/0/50
BRCD6720-RB22(conf-if-te-22/0/50)# channel-group 4 mode active type standard
BRCD6720-RB22(config)# interface port-channel 4
BRCD6720-RB22(config-Port-channel-4)# switchport
BRCD6720-RB22(config-Port-channel-4)# switchport mode trunk
BRCD6720-RB22(config-Port-channel-4)# switchport trunk allowed vlan all
BRCD6720-RB22(config-Port-channel-4)# no shutdown
4. Use the do show port-chan command to confirm that the vLAG
comes up and is configured correctly.
Note: The LAG must be configured on the MLX MCT as well before the vLAG can
become operational.
BRCD6720-RB21(config-Port-channel-4)# do show port-chan 4
LACP Aggregator: Po 4 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
rbridge-id: 21 (2)
rbridge-id: 22 (2)
Admin Key: 0004 - Oper Key 0004
Partner System ID - 0x0001,01-80-c2-00-00-01
Partner Oper Key 30002
Member ports on rbridge-id 21:
Link: Te 21/0/49 (0x151810000F) sync: 1 *
Link: Te 21/0/50 (0x1518110010) sync: 1
BRCD6720-RB22(config-Port-channel-4)# do show port-channel 4
LACP Aggregator: Po 4 (vLAG)
Aggregator type: Standard
Ignore-split is enabled
Member rbridges:
rbridge-id: 21 (2)
rbridge-id: 22 (2)
Admin Key: 0004 - Oper Key 0004
Partner System ID - 0x0001,01-80-c2-00-00-01
Partner Oper Key 30002
Member ports on rbridge-id 22:
Link: Te 22/0/49 (0x161810000F) sync: 1
Link: Te 22/0/50 (0x1618110010) sync: 1
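The sync flag in the show port-channel output above indicates whether each member link has converged in LACP. As a quick sanity check, that output can be screened programmatically; the following Python sketch assumes only the "Link: ... sync: N" line format shown in the transcripts above, and the sample text is illustrative:

```python
import re

def unsynced_links(show_output):
    """Return member links whose LACP sync flag is not 1, based on the
    'Link: Te x/y/z (0x...) sync: N' lines in 'show port-channel' output."""
    links = re.findall(
        r"Link:\s+(\S+\s+\S+)\s+\(0x[0-9A-Fa-f]+\)\s+sync:\s+(\d)",
        show_output)
    return [name for name, sync in links if sync != "1"]

# Illustrative sample modeled on the transcript above.
sample = """Member ports on rbridge-id 21:
  Link: Te 21/0/49 (0x151810000F) sync: 1 *
  Link: Te 21/0/50 (0x1518110010) sync: 0
"""
print(unsynced_links(sample))  # ['Te 21/0/50'] -> still negotiating
```

An empty result means every member link reported sync: 1, matching the healthy outputs shown above.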
Step 9: Configure MTU and jumbo frames
Brocade recommends using jumbo frames for an iSCSI-based architecture
such as this one. Set the MTU to 9216 on the switch ports used for iSCSI,
CIFS, or NFS storage traffic. Consult the Brocade configuration guide for
additional details.
Note: This must be performed on all RBridges where a given port-channel
interface is located. In this example, interface Port-channel 44 spans
RBridge 21 and RBridge 22, so apply the configuration on both RBridges.
The following example enables jumbo frame support on the applicable VDX
interfaces:
BRCD6720# configure terminal
BRCD6720(config)# interface Port-channel 44
BRCD6720(config-Port-channel-44)# mtu
(<NUMBER:1522-9216>) (9216): 9216
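The benefit of jumbo frames can be illustrated with simple arithmetic: a larger MTU means fewer frames, and therefore fewer per-frame header and processing costs, for the same iSCSI transfer. A rough Python sketch, assuming only basic IP and TCP headers (20 bytes each) and ignoring Ethernet framing and iSCSI PDU overhead:

```python
def frames_for(transfer_bytes, mtu, l3l4_overhead=40):
    """Frames needed to carry transfer_bytes of TCP payload at a given MTU.

    l3l4_overhead: IP (20) + TCP (20) header bytes per frame; an
    approximation that ignores TCP options and iSCSI PDU headers.
    """
    payload_per_frame = mtu - l3l4_overhead
    return -(-transfer_bytes // payload_per_frame)  # ceiling division

# A 1 MiB iSCSI write needs far fewer frames with a jumbo MTU:
print(frames_for(1 << 20, 1500))  # 719 frames at the standard MTU
print(frames_for(1 << 20, 9000))  # 118 frames with jumbo frames
```

The roughly six-fold reduction in frame count is why jumbo frames are recommended for the storage network.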
Step 10: Configure AMPP for live migrations
Brocade AMPP (Automatic Migration of Port Profiles) technology enhances
network-side virtual machine migration by allowing VM migration across
physical switches, switch ports, and collision domains. In traditional
networks, port migration usually requires manual configuration changes,
because VM migration across physical servers and switches can result in
asymmetrical network policies. Port settings must be identical at the
destination switch and port.
Brocade VCS Fabrics support automatically moving the port profile in
synchronization with a VM moving to a different physical server. This allows
VMs to be migrated without the need for network ports to be manually
configured on the destination switch.
Refer to the AMPP Configuration section of the Network OS Administration
Guide for AMPP configuration and monitoring.
Prepare and configure storage array
This section describes how to configure the VNXe storage array and
provision storage for this VSPEX solution.
Overview
Table 18 lists the tasks for configuring the VNXe series to provide Hyper-V
datastores, backed by iSCSI servers, to the Windows hosts.
Table 18. Tasks for storage configuration

Task: Set up initial VNXe configuration
Description: Configure the IP address information and other key parameters on the VNXe.
Reference: VNXe 3150 or VNXe 3300 System Installation Guide; VNXe Series Configuration Worksheet

Task: Provision storage for Hyper-V datastores
Description: Create iSCSI servers (targets) to be presented to the Windows servers (iSCSI initiators) as Hyper-V datastores hosting the virtual servers.
Prepare VNXe
The VNXe 3150 or VNXe 3300 System Installation Guide provides instructions
on assembly, racking, cabling, and powering the VNXe. There are no
specific setup steps for this solution.
Set up initial VNXe configuration
After completing the initial VNXe setup, configure key information about
the existing environment so that the storage array can communicate with
it. Configure the following items in accordance with your IT datacenter
policies and existing infrastructure information:
DNS
NTP
Storage network interfaces
Storage network IP address
CIFS services and Active Directory domain membership
The reference documents listed in Table 18 provide more information on
how to configure the VNXe platform. The Storage layout for 50 virtual
machines and Storage layout for 100 virtual machines sections provide
more information on the disk layout.
Provision storage for iSCSI datastores
Complete the following steps in EMC Unisphere to configure iSCSI servers
on the VNXe array to store virtual servers:
1. Create a pool with the appropriate number of disks.
a. In Unisphere, select System > Storage Pools.
b. Select Configure Disks and manually create a new pool by Disk
Type for SAS drives. The validated configuration uses a single
pool with 45 drives (for 50 virtual machines) or 77 drives (for 100
virtual machines). In other scenarios, create separate pools. The
Storage configuration guidelines section provides additional
information.
Note Create your hot spare disks at this point. Refer to the
VNXe 3150 or VNXe 3300 System Installation Guide for
additional information.
Figure 9 depicts the target storage layout for 50 virtual
machines while Figure 10 depicts the target storage layout for
100 virtual machines.
Note As a performance best practice, all of the drives in the pool
should be of the same size and speed.
2. Create an iSCSI server.
a. In Unisphere, select Settings > iSCSI Server Settings > Add iSCSI
Server. The wizard appears.
b. Refer to VNXe 3150/VNXe 3300 System Installation Guide for
detailed instructions to create an iSCSI server.
3. Create a Hyper-V storage resource.
a. In Unisphere, select Storage > Hyper-V > Create.
Create an iSCSI datastore in the pool and on the iSCSI server. The
size of the datastore is determined by the number of virtual
machines that it contains. The Storage configuration guidelines
section provides additional information about partitioning
virtual machines into separate datastores. The validated
configuration uses four 1.5 TB datastores (for 50 virtual
machines) or ten 750 GB datastores (for 100 virtual machines of
70 GB each).
Note Do not enable Thin Provisioning.
b. If snapshot data protection is needed, configure the protection
space.
The validated configuration also enables the use of array-based
snapshots to maintain point-in-time views of the datastores. The
snapshots can be used as sources for backups or other use
cases. When utilizing snapshots, consider the issues that the
customers may experience.
There is a short-term increase in the I/O latency when taking an
iSCSI snapshot. To avoid this increase being noticeable, do not
set multiple snapshots to occur on the same schedule.
When the most recent snapshot is deleted on a large LUN, a
new snapshot cannot be created until this process completes,
which may take considerable time. To avoid this situation, use
the snapshot scheduling tool in Unisphere.
Note This solution is validated with VNXe Operating
Environment version 2.2.0.16150. There is a known issue
with array-based snapshots in this version that is
addressed in a hot fix. Later revisions of the VNXe
Operating Environment will incorporate the necessary
changes. Contact EMC Customer Support or reference
primus article emc293164 to obtain this hot fix.
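The advice above about not letting multiple snapshots start on the same schedule can be reduced to a simple offset calculation. A hypothetical Python sketch — the datastore names, the 01:00 start time, and the 30-minute gap are illustrative assumptions, not EMC recommendations:

```python
from datetime import datetime, timedelta

def staggered_schedule(datastores, start="01:00", gap_minutes=30):
    """Assign each datastore a snapshot start time offset by gap_minutes,
    so no two snapshots begin on the same schedule."""
    base = datetime.strptime(start, "%H:%M")
    return {ds: (base + timedelta(minutes=i * gap_minutes)).strftime("%H:%M")
            for i, ds in enumerate(datastores)}

print(staggered_schedule(["DS01", "DS02", "DS03"]))
# {'DS01': '01:00', 'DS02': '01:30', 'DS03': '02:00'}
```

The computed times would then be entered into the Unisphere snapshot scheduling tool, one schedule per datastore.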
Install and configure Hyper-V hosts
This section provides the requirements for the installation and configuration
of the Windows hosts and infrastructure servers to support the architecture.
Table 19 describes the tasks that must be completed.
Table 19. Tasks for server installation

Task: Install Windows hosts
Description: Install Windows Server 2012 on the physical servers that are deployed for the solution.
Reference: http://technet.microsoft.com/en-us/library/jj134246.aspx

Task: Install Hyper-V and configure Failover Clustering
Description: 1. Add the Hyper-V server role. 2. Add the Failover Clustering feature. 3. Create and configure the Hyper-V cluster.
Reference: http://technet.microsoft.com/en-us/library/jj134246.aspx

Task: Configure Windows host networking
Description: Configure Windows host networking, including NIC teaming and Multiple Connections per Session (MC/S).
Reference: http://technet.microsoft.com/en-us/library/jj134246.aspx

Task: Publish VNXe datastores to Hyper-V
Description: Configure the VNXe to allow the Hyper-V hosts to access the datastores created in the Prepare and configure storage array section.
Reference: VNXe System Installation Guide

Task: Connect to Hyper-V datastores
Description: Connect the Hyper-V datastores to the Windows hosts as Cluster Shared Volumes (CSV) in the Hyper-V failover cluster.
Reference: Using a VNXe System with Microsoft Windows Hyper-V; http://technet.microsoft.com/en-us/library/jj612868.aspx
Install Hyper-V and configure failover clustering
To install and configure Failover Clustering, complete the following steps:
1. Install and patch Windows Server 2012 on each Windows host.
2. Configure the Hyper-V role and the Failover Clustering feature.
Table 19 provides the steps and references to accomplish the
configuration tasks.
Configure Windows host networking
To ensure optimal performance and availability, the following numbers of
network interface cards (NICs) are required:
At least one NIC is used for virtual machine networking and
management (can be separated by network or VLAN if necessary).
At least two NICs are required for iSCSI connection (configured as
MC/S or MPIO).
At least one NIC is used for Live Migration.
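Before cluster build-out, each host's NIC counts can be checked mechanically against the minimums listed above. A hypothetical Python sketch — the role labels are invented names for the three roles in the list:

```python
# Minimum NICs per role, per the list above (role labels are invented).
MINIMUMS = {"vm_and_management": 1, "iscsi": 2, "live_migration": 1}

def validate_nics(allocation):
    """Return the roles whose NIC count falls below the minimum."""
    return [role for role, need in MINIMUMS.items()
            if allocation.get(role, 0) < need]

host = {"vm_and_management": 2, "iscsi": 2, "live_migration": 1}
print(validate_nics(host))  # [] -> this host meets the minimums
```

An empty list means the host meets the minimums; any role names returned indicate where additional NICs are needed.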
Publish VNXe datastores to Hyper-V
At the end of the Prepare and configure storage array section, you have
datastores ready to be published to the hypervisor. With the hypervisors
installed, return to Unisphere and add the Hyper-V servers to the list of
hosts that are allowed to access the datastores.
Connect Hyper-V datastores
Connect the datastores configured in the Prepare and configure storage
array section to the appropriate Windows hosts as Cluster Shared Volumes.
The datastores configured for the following storage are used:
Virtual server storage
Infrastructure virtual machine storage (if required)
SQL Server storage (if required)
Using a VNXe System with Microsoft Windows Hyper-V provides the
instructions on how to connect the Hyper-V datastores to the Windows
host.
After the datastores are connected and formatted on one of the hosts,
add the clustered disks as CSV disks.
The process for configuring these settings is outlined in the Microsoft
document Using Live Migration with Cluster Shared Volumes in Windows
Server 2008 R2.
Server capacity is required for two purposes in the solution:
To support the new virtualized server infrastructure.
To support the required infrastructure services such as
authentication/authorization, DNS, and database.
For information on minimum infrastructure services hosting requirements,
refer to Table 2. If existing infrastructure services meet the requirements, the
hardware listed for infrastructure services is not required.
Memory configuration
Proper sizing and configuration of the solution requires care when
configuring server memory. This section provides an overview of how
memory is managed in a Hyper-V environment.
Memory virtualization techniques, such as Dynamic Memory, enable the
hypervisor to abstract physical host resources to provide resource
isolation across multiple virtual machines while avoiding resource
exhaustion. Where advanced processors (such as Intel processors with
Extended Page Table, or EPT, support) are deployed, this abstraction
takes place within the CPU. Otherwise, it occurs within the hypervisor
itself.
The hypervisor provides multiple techniques to maximize the use of system
resources such as memory. However, it is not a best practice to
substantially overcommit resources, because this can lead to poor system
performance. The exact implications of memory overcommitment in a
real-world environment are difficult to predict; the more overcommitted
your memory resources are, the more performance can suffer from
resource exhaustion.
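A rough planning aid is to express overcommitment as the ratio of memory assigned to virtual machines to the physical memory of the host. A minimal Python sketch; the figures in the example are illustrative, not sizing guidance from this document:

```python
def overcommit_ratio(vm_memory_gb, host_memory_gb):
    """Total VM-assigned memory divided by the host's physical memory.
    A ratio above 1.0 means memory is overcommitted."""
    return sum(vm_memory_gb) / host_memory_gb

# Example: twelve 2 GB virtual machines on a 32 GB host.
ratio = overcommit_ratio([2.0] * 12, 32.0)
print(f"overcommit ratio: {ratio:.2f}")  # 0.75 -> not overcommitted
```

Keeping the ratio at or below 1.0 across hosts avoids the performance risks described above.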
Install and configure SQL server database
Overview
Most customers use management tools to provision and manage their
server virtualization solution, although this is not required. These
management tools typically require a database back end. SCVMM uses
SQL Server 2012 as its database platform.
This section describes how to set up and configure a SQL Server database
for the solution. At the end of this section, Microsoft SQL Server is
installed on a virtual machine, with the SCVMM-required databases
configured. Table 20 shows the detailed setup tasks.
Table 20. Tasks for SQL Server database setup

Task: Create a virtual machine for Microsoft SQL Server
Description: Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements.
Reference: http://msdn.microsoft.com/en-us/library/ms143506.aspx

Task: Install Microsoft Windows on the virtual machine
Description: Install Microsoft Windows Server 2012 Standard Edition on the virtual machine.
Reference: http://technet.microsoft.com/en-us/library/jj134246.aspx

Task: Install Microsoft SQL Server
Description: Install Microsoft SQL Server on the designated virtual machine.
Reference: http://technet.microsoft.com/en-us/library/bb500395.aspx

Task: Configure SQL Server for SCVMM
Description: Configure a remote SQL Server instance ready for SCVMM to use.
Reference: http://technet.microsoft.com/en-us/library/gg610656.aspx
Note The customer environment may already contain a SQL Server that is
designated for this role. In that case, refer to the section Configure
SQL Server for SCVMM.
Create a virtual machine for Microsoft SQL Server
Create the virtual machine with enough computing resources on one of
the Windows servers designated for infrastructure virtual machines, and
use the datastore designated for the shared infrastructure.
Install Microsoft Windows on the virtual machine
SQL Server must run on Microsoft Windows Server. Install the required
Windows Server version on the virtual machine and select the appropriate
network, time, and authentication settings.
Install SQL Server
Install SQL Server on the virtual machine from the SQL Server installation
media. The Microsoft TechNet website provides information on how to
install SQL Server.
One of the installable components in the SQL Server installer is the SQL
Server Management Studio (SSMS). Install this component on the SQL
server directly or on an administrator console. SSMS must be installed on at
least one system.
To change the default path for storing data files, perform the following
steps:
1. Right-click the server object in SSMS and select Database
Properties. The Properties dialog appears.
2. Change the default data and log directories for new databases
created on the server.
Configure SQL Server for SCVMM
To use SCVMM in this solution, configure SQL Server for remote
connections. The requirements and steps to configure it correctly are
available in the article Configuring a Remote Instance of SQL Server for
VMM. Refer to the list of documents in Appendix C for more information.
It is a best practice to create individual login accounts for each service
that accesses a database on the SQL Server.
System Center Virtual Machine Manager server deployment
This section provides information on how to configure System Center Virtual
Machine Manager (SCVMM). Complete the tasks in Table 21.
Table 21. Tasks for SCVMM configuration

Task: Create the SCVMM host VM
Description: Create a virtual machine to be used as the SCVMM server.

Task: Install the SCVMM guest OS
Description: Install Windows Server 2012 Datacenter Edition on the SCVMM host virtual machine.

Task: Install the SCVMM server
Description: Install the SCVMM server.
Reference: http://technet.microsoft.com/en-us/library/cc764327.aspx

Task: Install the SCVMM Management Console
Description: Install the SCVMM Management Console.
Reference: http://technet.microsoft.com/en-us/library/bb740758.aspx

Task: Install the SCVMM agent locally on the hosts
Description: Install the SCVMM agent locally on the hosts that are managed by SCVMM.
Reference: http://technet.microsoft.com/en-us/library/bb740757.aspx

Task: Add a Hyper-V cluster into SCVMM
Description: Add the Hyper-V cluster (see Install and configure Hyper-V hosts) into SCVMM.
Reference: http://technet.microsoft.com/en-us/library/gg610671.aspx

Task: Create a virtual machine in SCVMM
Description: Create a virtual machine in SCVMM.
Reference: http://technet.microsoft.com/en-us/library/gg610679.aspx

Task: Create a template virtual machine
Description: Create a template virtual machine from the existing virtual machine. Create the hardware profile and Guest Operating System profile during the procedure.
Reference: http://technet.microsoft.com/en-us/library/bb740832.aspx

Task: Deploy virtual machines from the template virtual machine
Description: Deploy the virtual machines from the template virtual machine.
Reference: http://technet.microsoft.com/en-us/library/bb963734.aspx
Create the SCVMM host virtual machine
If the SCVMM server is to be deployed as a virtual machine on a Hyper-V
server that is installed as part of this solution, connect directly to an
infrastructure Hyper-V server by using Hyper-V Manager.
Create a virtual machine on the Microsoft Hyper-V server with the
customer's guest OS configuration, using the infrastructure server
datastore presented from the storage array.
The memory and processor requirements for the SCVMM server depend on
the number of the managed Hyper-V hosts and virtual machines.
Install the SCVMM guest OS
Install the guest OS on the SCVMM host virtual machine: install the
required Windows Server version and select the appropriate network,
time, and authentication settings.
Install the SCVMM server
Before installing the SCVMM server, set up the VMM database and the
default library server. Refer to the article Installing the VMM Server to
install the SCVMM server.
Install the SCVMM Management Console
The SCVMM Management Console is a client tool for managing the SCVMM
server. Install the VMM Management Console on the same computer as
the VMM server. Refer to the article Installing the VMM Administrator
Console to install the SCVMM Management Console.
Install the SCVMM agent locally on a host
If there are hosts that must be managed on a perimeter network, install a
VMM agent locally on each host before adding it to VMM. Optionally,
install a VMM agent locally on a host in a domain before adding the host
to VMM. Refer to the article Installing a VMM Agent Locally on a Host for
instructions.
Add a Hyper-V cluster into SCVMM
Add the deployed Microsoft Hyper-V cluster to SCVMM, which then
manages the cluster. Refer to the article How to Add a Host Cluster to
VMM to add the Hyper-V cluster.
Create a virtual machine in SCVMM
Create a virtual machine in SCVMM; this virtual machine will be converted
into a virtual machine template. After the virtual machine is installed,
install the software and change the Windows and application settings.
Refer to the article How to Create a Virtual Machine with a Blank Virtual
Hard Disk to create a virtual machine.
Create a SCVMM host virtual machine
Install the SCVMM guest OS
Install the SCVMM server
Install the SCVMM Management Console
Install the SCVMM agent locally on a host
Add a Hyper-V cluster into SCVMM
Create a virtual machine in SCVMM
Create a template virtual machine
The virtual machine is removed after it is converted into a template. Back up the virtual machine, because it may be destroyed during template creation. Create a hardware profile and a guest operating system profile while you create the template; you can use these profiles to deploy further virtual machines. Refer to the article How to Create a Template from a Virtual Machine to create the template.

Deploy virtual machines from the template virtual machine
Refer to the article How to Deploy a Virtual Machine to deploy virtual machines from the template. When using the deployment wizard, you can save the generated PowerShell scripts and reuse them to deploy other virtual machines with the same configuration.

Summary
This chapter provided the steps required to deploy and configure the physical and logical components of the VSPEX solution. At this point, you should have a fully functional VSPEX solution.
Chapter 6 Validating the Solution

This chapter presents the following topics:
Overview
Post-install checklist
Deploy and test a single virtual server
Verify the redundancy of the solution components
Overview
This chapter provides a list of items to be reviewed once the solution has
been configured. The goal of this chapter is to verify the configuration and
functionality of specific aspects of the solution, and ensure that the
configuration supports core availability requirements.
Complete the tasks in Table 22.
Table 22. Tasks for testing the installation

Post-install checklist
- Verify that adequate virtual ports exist on each Hyper-V host virtual switch. (Reference: http://blogs.technet.com/b/gavinmcshera/archive/2011/03/27/3416313.aspx)
- Verify that each Hyper-V host has access to the required datastores and VLANs. (References: http://social.technet.microsoft.com/wiki/contents/articles/151.hyper-v-virtual-networking-survival-guide-en-us.aspx; Using a VNXe System with Microsoft Windows Hyper-V)
- Verify that the Live Migration interfaces are configured correctly on all Hyper-V hosts. (Reference: http://technet.microsoft.com/en-us/library/hh831435.aspx)

Deploy and test a single virtual server
- Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. (Reference: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/VIR310)

Verify redundancy of the solution components
- Reboot each storage processor in turn, and ensure that storage connectivity is maintained.
- Disable each of the redundant switches in turn and verify that the Hyper-V host, virtual machine, and storage array connectivity remains intact. (Reference: vendor documentation)
- On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. (Reference: http://technet.microsoft.com/en-us/library/gg610576.aspx)
Post-install checklist
The following configuration items are critical to the functionality of the solution and should be verified prior to deployment into production. On each Hyper-V server, verify the following items:
- The VLAN for virtual machine networking is configured correctly.
- The iSCSI storage networking is configured correctly and each server has access to the required Hyper-V datastores.
- A network interface card (NIC) is configured correctly for Live Migration.
Deploy and test a single virtual server
To verify the operation of the solution, deploy a virtual machine and confirm that the procedure completes as expected.
Verify the following items:
- The virtual machine is added to the applicable domain.
- The virtual machine has access to the expected networks.
- You can log in to the virtual machine.
Verify the redundancy of the solution components
To ensure that the components of the solution maintain availability
requirements, it is important to test specific scenarios related to
maintenance or a hardware failure.
1. Reboot each VNXe storage processor in turn and verify that connectivity to Hyper-V datastores is maintained throughout each reboot:
a. In Unisphere, navigate to Settings > Service System.
b. In the System Components pane, select Storage Processor SPA.
c. In the Service Actions pane, select Reboot.
d. Click Execute service action.
e. During the reboot cycle, verify that the datastores remain present on the Hyper-V hosts.
f. Wait until the SP has finished rebooting and is available in Unisphere.
g. Repeat steps b to f for Storage Processor SPB.
2. To verify that the network redundancy features function as
expected, disable each of the redundant switching infrastructures
in turn. Verify that all the components of the solution maintain
connectivity to each other and any existing client infrastructure.
3. On a Hyper-V host that contains at least one virtual machine,
restart the host and verify that the virtual machine can successfully
migrate to an alternate host.
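The storage-connectivity checks in steps 1 and 2 can be scripted rather than observed by eye. The following is a minimal sketch, not part of the validated solution: it simply polls the iSCSI data ports (TCP 3260) of the storage processors so that a path loss during an SP reboot or switch outage shows up immediately. The target addresses you pass in would be your array's iSCSI server IPs from the customer configuration data sheet.

```python
import socket

def reachable(host, port=3260, timeout=2.0):
    """Return True if a TCP connection to the iSCSI data port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_paths(targets, probe=reachable):
    """Probe each iSCSI target address and report per-path status.

    During an SP reboot, at least one path per datastore should stay True.
    """
    return {host: probe(host) for host in targets}

# e.g. check_paths(["192.168.10.10", "192.168.10.11"])  # hypothetical SPA/SPB IPs
```

Run the check in a loop while performing steps 1 and 2, and treat any interval in which all paths report False as a redundancy failure.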
Appendix A Bill of Materials
This appendix presents the following topic:
Bill of materials
Bill of materials
Table 23. List of components used in the VSPEX solution for 50 virtual machines

Microsoft Hyper-V servers
- CPU: 1 x vCPU per virtual machine; 4 x vCPUs per physical core; 50 x vCPUs total; minimum of 13 physical processor cores
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host; minimum of 102 GB RAM
- Network (10 GbE): 2 x 10 GbE NICs per server

Note: To implement Microsoft Hyper-V High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Brocade network infrastructure
- Common: 2 x physical switches with Inter-Switch Links (ISLs) forming an active/active redundant network; 1 x 1 GbE port per storage processor for management
- 1 GbE network (VDX 6710): 6 x 1 GbE ports per Hyper-V server; 2 x 1 GbE ports per storage processor for data
- 10 GbE network (VDX 6720): 2 x 10 GbE ports per Hyper-V server; 2 x 10 GbE ports per storage processor for data

EMC Next-Generation Backup
- Avamar: 1 x Avamar Business Edition – Half Capacity

EMC VNXe series storage array
- Common: EMC VNXe3150; 2 x storage processors (active/active); 45 x 300 GB 15k rpm 3.5-inch SAS disks; 2 x 300 GB 15k rpm 3.5-inch SAS disks as hot spares
- 10 GbE network: 1 x 10 GbE I/O module for each storage processor (each module includes two ports)
Table 24. List of components used in the VSPEX solution for 100 virtual machines

Microsoft Hyper-V servers
- CPU: 1 x vCPU per virtual machine; 4 x vCPUs per physical core; 100 x vCPUs total; minimum of 25 physical processor cores
- Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per Hyper-V host; minimum of 202 GB RAM
- Network (10 GbE): 2 x 10 GbE NICs per server

Note: To implement Microsoft Hyper-V High Availability (HA) functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure
- Common: 2 x physical switches; 1 x 1 GbE port per storage processor for management
- 10 GbE network: 2 x 10 GbE ports per Hyper-V server; 2 x 10 GbE ports per storage processor

EMC Next-Generation Backup
- Avamar: 1 x Avamar Business Edition

EMC VNXe series storage array
- Common: EMC VNXe3300; 2 x storage processors (active/active); 77 x 300 GB 15k rpm 3.5-inch SAS disks; 3 x 300 GB 15k rpm 3.5-inch SAS disks as hot spares
- 10 GbE network: 1 x 10 GbE I/O module for each storage processor (each module includes two ports)
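The CPU and memory minimums in Tables 23 and 24 follow directly from the stated per-virtual-machine figures. The sketch below reproduces that arithmetic: 1 vCPU per VM at a 4:1 vCPU-to-core ratio, plus 2 GB of RAM per VM and a 2 GB host reservation (the tables' minimums count a single reservation). It illustrates the sizing rule only and is not a substitute for the VSPEX sizing tools.

```python
import math

def vspex_minimums(vms, vcpus_per_core=4, gb_per_vm=2, host_reserve_gb=2):
    """Return (minimum physical cores, minimum GB RAM) for a VM count,
    using the per-VM figures listed in Tables 23 and 24."""
    cores = math.ceil(vms / vcpus_per_core)   # 1 vCPU per VM, 4:1 consolidation
    ram = vms * gb_per_vm + host_reserve_gb   # per-VM RAM plus host reservation
    return cores, ram

# 50 VMs -> (13, 102); 100 VMs -> (25, 202), matching the tables.
```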
Appendix B Customer Configuration Data Sheet

This appendix presents the following topic:
Customer configuration data sheet
Customer configuration data sheet
Before you start the configuration, gather customer-specific network and host configuration information. The following tables help you assemble the required network and host address, numbering, and naming information. This worksheet can also be used as a "leave behind" document for future reference.
The VNXe Series Configuration Worksheet should be cross-referenced to
confirm customer information.
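Because the worksheet doubles as a leave-behind document, it can also help to verify programmatically that every row was filled in before deployment begins. The sketch below is illustrative only: the section and field names mirror Tables 25 and 27, and any worksheet values you pass in are placeholders for the customer's actual data.

```python
# Required worksheet rows, mirroring Tables 25 (common servers) and 27 (array).
REQUIRED = {
    "Common servers": [
        "Domain Controller", "DNS Primary", "DNS Secondary", "DHCP",
        "NTP", "SMTP", "SNMP",
        "System Center Virtual Machine Manager", "SQL Server",
    ],
    "Array": [
        "Array name", "Admin account", "Management IP",
        "Storage pool name", "Datastore name", "iSCSI Server IP",
    ],
}

def missing_entries(worksheet):
    """Return (section, field) pairs that still have no value recorded."""
    return [
        (section, field)
        for section, fields in REQUIRED.items()
        for field in fields
        if not worksheet.get(section, {}).get(field)
    ]
```

Running the check against a partially completed worksheet lists exactly the rows that still need customer input.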
Table 25. Common server information
Server name Purpose Primary IP
Domain Controller
DNS Primary
DNS Secondary
DHCP
NTP
SMTP
SNMP
System Center Virtual
Machine Manager
SQL Server
Table 26. Hyper-V server information
Server name | Purpose | Primary IP | Private net (storage) addresses
Hyper-V Host 1
Hyper-V Host 2
…
Table 27. Array information
Array name
Admin account
Management IP
Storage pool name
Datastore name
iSCSI Server IP
Table 28. Network infrastructure information
Name | Purpose | IP | Subnet mask | Default gateway
Ethernet Switch 1
Ethernet Switch 2
…
Table 29. VLAN information
Name | Network Purpose | VLAN ID | Allowed subnets
Virtual Machine Networking
Management
iSCSI Storage Network
Public (client access)
Live Migration (optional)
Table 30. Service accounts
Account | Purpose | Password (optional, secure appropriately)
Windows Server administrator
Array administrator
SCVMM administrator
SQL Server administrator
Appendix C References
This appendix presents the following topic:
References
References

The following documents, located on the EMC Online Support website, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.

EMC documentation
- VNXe System Installation Guide
- VNXe Series Configuration Worksheet
- EMC Backup and Recovery Options for VSPEX Private Clouds
- Using a VNXe System with Microsoft Windows Hyper-V

Other documentation
For documentation on Microsoft SQL Server, Hyper-V, and Microsoft System Center Virtual Machine Manager (SCVMM), refer to the following articles:
- Installing the VMM Server
- How to Add a Host Cluster to VMM
- How to Create a Template from a Virtual Machine
- Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2
- Configuring a Remote Instance of SQL Server for VMM
- Installing Virtual Machine Manager
- Installing the VMM Administrator Console
- Installing a VMM Agent Locally on a Host
- Adding Hyper-V Hosts and Host Clusters to VMM
- How to Create a Virtual Machine with a Blank Virtual Hard Disk
- How to Deploy a Virtual Machine
- Installing Windows Server 2012
- Hardware and Software Requirements for Installing SQL Server 2012
- Install SQL Server 2012
- How to Install a VMM Management Server
Appendix D About VSPEX
This appendix presents the following topic:
About VSPEX
About VSPEX
EMC has joined forces with industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructures. Built with best-of-breed technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk.
Validation by EMC ensures predictable performance and enables customers to select technology that leverages their existing IT infrastructure while significantly reducing planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers looking to gain the simplicity that is characteristic of truly converged infrastructures, while at the same time gaining more choice in individual solution components.
VSPEX solutions are proven by EMC, and are packaged and sold
exclusively by EMC channel partners. VSPEX provides channel partners
with more opportunity, a faster sales cycle, and end-to-end enablement.
By working even more closely together, EMC and its channel partners can
now deliver infrastructure that accelerates the journey to the cloud for
more customers.
Appendix E Validation with Microsoft Hyper-V Fast Track v3

This appendix presents the following topics:
Overview
Business case for validation
Process requirements
Additional resources
Overview
The Microsoft Hyper-V Fast Track Program is a reference architecture
validation framework designed by Microsoft to validate end-to-end
virtualization solutions comprised of Microsoft software products. These
software products have been tightly integrated and tested with specific
hardware components, and built and configured according to best
practices defined by Microsoft and the hardware vendors. Customers
receive a fully built, ready-to-run solution at their site. Microsoft handles
primary support in conjunction with the solution owner (hardware vendors
and/or system integrators) to ensure end-to-end solution support.
Unlike the EMC VSPEX Proven Infrastructure solutions, which offer partners the flexibility to choose the solution components, Microsoft Hyper-V Fast Track Program solutions are locked configurations based on specific end-to-end architectures. Similar to the Windows Logo Program, any significant changes (such as a different HBA or BIOS) invalidate the architecture unless Microsoft validates the changes.
VSPEX Proven Infrastructure solutions provide a valuable platform to serve as potential Microsoft Hyper-V Fast Track Program validated solutions, because much of the heavy lifting, such as sizing and performance validation, is completed by EMC. Customers also benefit from a solution that has been thoroughly tested, validated, and approved by Microsoft. This section describes the steps for EMC VSPEX partners to take a VSPEX Proven Infrastructure solution through the Microsoft Hyper-V Fast Track Program.
Business case for validation
The release of Microsoft Windows Server 2012 R2 introduces significant
product enhancements, and is Microsoft’s second-generation cloud-
optimized server operating system. Microsoft identified key areas or pillars
to focus on, including:
Continuous Availability
Virtualization
Performance
Additionally, the release of the Microsoft System Center 2012 R2 product
suite introduces powerful, flexible new tools to integrate with the new
features of Windows Server 2012 R2. System Center Orchestrator, Virtual
Machine Manager, Operations Manager, and Data Protection Manager
provide customers the tools to cohesively build and manage virtualized
cloud infrastructures.
The Microsoft Hyper-V Fast Track Program, now in its third iteration,
incorporates these products into a pre-built, bundled cloud solution based
on collective best practices. This eliminates design guesswork and
implementation problems, and allows organizations to implement cloud-based solutions rapidly within their IT infrastructure. Furthermore, because the end-to-end configuration is tested and validated, customers avoid many of the issues that arise in complex, multi-tiered environments, such as driver or firmware incompatibilities.
EMC VSPEX partners that certify VSPEX Proven Infrastructures in the
Microsoft Hyper-V Fast Track Program can create additional revenue
streams from the services that comprise virtualization solutions.
Process requirements

Solution validation for the Microsoft Hyper-V Fast Track Program is a significant endeavor. Using a VSPEX Proven Infrastructure solution as a basis eliminates a significant portion of the required work. Any VSPEX Proven Infrastructure that uses Microsoft Windows Server 2012 (or later) as the hypervisor is a viable candidate.

Step one: Core prerequisites
An EMC VSPEX partner must also be a Microsoft Gold partner. Obtain Microsoft Hyper-V Fast Track Program v3 documentation and program guidelines directly from Microsoft by sending a request to the following alias: [email protected]. Upon receipt, thoroughly review the documentation and program requirements to become familiar with the process.
There are certain support obligations defined in the Microsoft Hyper-V Fast Track Program. Contact Microsoft, or refer to the program documentation, for further details.

Step two: Select the VSPEX Proven Infrastructure platform
Select any VSPEX Proven Infrastructure solution based on Microsoft Windows Server 2012.
Step three: Define additional Microsoft Hyper-V Fast Track Program components
After choosing the base VSPEX Proven Infrastructure, partners must define additional architectural requirements to comply with the Microsoft Hyper-V Fast Track Program guidelines and requirements. Program documentation classifies these components as described in Table 31.

Table 31. Hyper-V Fast Track component classification
- Mandatory: Required to pass Microsoft validation.
- Recommended: Optional. This is an industry-standard recommendation, but it is not required to pass Microsoft validation.
- Optional: Presents an alternate method to consider, and is not required to pass Microsoft validation.

Partners must ensure that all mandatory components are included in the solution. EMC strongly advises partners to include recommended components to ensure the solution is robust and competitive.

Partners must make the following changes to a VSPEX Proven Infrastructure:
- All hardware components must be logo certified for Windows Server 2012. Refer to http://www.windowsservercatalog.com/ for device certification information. Use the WHCK process and the SysDev Dashboard portal as starting points for the certification process, and send proof of certification to the Microsoft Hyper-V Fast Track Program Team for review.
- Provide a SKU, part number, or another simple and efficient process to purchase or resell the solution. Send details of the ordering process to the Microsoft Hyper-V Fast Track Program Team for review.
- Servers must meet the following minimum requirements:
  - 2 to 4 server nodes with clustering installed (cluster nodes)
  - Dual processor sockets, with 6 cores per socket (12 cores total)
  - 32 GB RAM (4 GB per virtual machine and management host)
  - 1 Gigabit Ethernet (GbE) cluster interconnect
  - Additional network isolation for cluster heartbeat traffic
- Ensure the environment meets the following minimum network requirements:
  - Two physically separate networks. The cluster heartbeat network must be on a distinctly separate subnet from the hosted network traffic.
  - 1 GbE or greater network adapter for internal communications, and 1 GbE or greater network adapter for external LAN communications, for each node.
  - 1 GbE or greater network speed for Live Migration traffic and cluster communication. EMC recommends a dedicated 10 GbE network for Live Migration.
  - Do not share the virtual machine network adapter with the host operating system.
  - EMC and Microsoft do not support configurations with a single network connection.
- Configure network teaming so that:
  - The solution can withstand the loss of any single adapter without losing server connectivity.
  - The solution uses NIC teaming to provide high availability for the virtual machine networks. Microsoft supports third-party teaming or Microsoft teaming.
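The requirement that the cluster heartbeat network sit on a distinctly separate subnet from hosted traffic can be sanity-checked with the Python standard library. The sketch below is illustrative only, and the CIDR blocks shown are example values, not addresses from any validated configuration.

```python
import ipaddress

def heartbeat_is_separate(heartbeat_cidr, hosted_cidrs):
    """True if the heartbeat subnet overlaps none of the hosted subnets."""
    hb = ipaddress.ip_network(heartbeat_cidr)
    return all(not hb.overlaps(ipaddress.ip_network(c)) for c in hosted_cidrs)

# Example: a dedicated /24 for heartbeat, separate from VM and storage networks.
ok = heartbeat_is_separate("10.10.30.0/24", ["10.10.10.0/24", "10.10.20.0/24"])
```

A False result means the planned heartbeat subnet overlaps a hosted subnet and the addressing plan needs to change before validation.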
Step four: Build a detailed bill of materials
Create a detailed bill of materials that includes the following major components:
- Hardware manufacturer, model, firmware, BIOS, and driver versions, and vendor part number for:
  - Servers
  - HBAs
  - Switches
  - Storage arrays
- Software
- Any other major components

Step five: Test the environment
Install and configure the end-to-end environment. Run the Windows Cluster Validation Tool to verify the environment configuration and Failover Clustering support. Send the results of this test to the Microsoft Hyper-V Fast Track Program Team for review. Refer to http://technet.microsoft.com/en-us/library/jj134244.aspx for more information about the Windows Cluster Validation Tool.

Step six: Document and publish the solution
Use the available solution template from the Microsoft Hyper-V Fast Track Program Team, or create a solution document based on the appropriate VSPEX Proven Infrastructure Design Guide. Add the additional required content per step three, and then submit the final solution document to Microsoft and EMC for posting. An example solution created by Cisco and EMC, which follows the Microsoft Hyper-V Fast Track Program v2 guidelines, is available at http://www.cisco.com/en/US/netsol/ns1203/index.html.
Additional resources
Microsoft Hyper-V Fast Track Program v3 documentation is available only to Microsoft partners, although some material exists on the Microsoft Partner Portal, TechNet, and various Microsoft blog sites. For the best results, engage directly with the Microsoft Hyper-V Fast Track Program v3 Partner Program Management Team via their email alias at [email protected]. Alternatively, Microsoft partners can work through their Microsoft Technical Account Managers (TAMs). The public website is http://www.microsoft.com/en-us/server-cloud/private-cloud/fast-track.aspx.