EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX SERIES (FC), VMWARE vSPHERE 4.1, AND CITRIX XENDESKTOP 5

Proven Solutions Guide

EMC UNIFIED STORAGE SOLUTIONS

Abstract

This Proven Solutions Guide provides a detailed summary of the tests performed to validate an EMC infrastructure for virtual desktops enabled by VMware vSphere™ 4.1, and Citrix XenDesktop™ 5 with an EMC® VNX5300™ unified storage platform. This paper focuses on sizing and scalability, and highlights new features introduced in EMC VNX™, VMware vSphere™, and Citrix XenDesktop. EMC unified storage leverages advanced technologies like EMC FAST Cache to optimize performance of the virtual desktop environment, helping to support service-level agreements.

June 2011



Copyright © 2011 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

VMware, ESX, VMware vCenter, VMware View, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

VMware and/or Citrix may have substituted components of the original environment in this document with hardware of a similar (or higher) specification to the original equipment used in the EMC Proven Solution. The content contained in this document originated from a validated EMC Proven Solution. The modification introduced by VMware and/or Citrix may have caused changes in performance, functionality, or scalability of the original solution. Please refer to www.emc.com/solutions for further information on validated EMC Proven Solutions.

Part Number: H8252.1

Table of contents

1 Introduction
    Introduction to the new EMC VNX series for unified storage
        Software suites available
        Software packages available
    Document overview
        Use case definition
        Purpose
        Scope
        Not in scope
        Audience
        Prerequisites
        Terminology
    Reference architecture
        Corresponding reference architecture
        Reference architecture logical diagram
    Configuration
        Test results
        Hardware resources
        Software resources

2 Citrix Virtual Desktop Infrastructure
    Citrix XenDesktop 5
        Introduction
        Deploying Citrix XenDesktop components
        Citrix XenDesktop controller
        Machine Creation Services (MCS)
    vSphere 4.1 infrastructure
        vSphere 4.1 overview
        vSphere cluster
    Windows infrastructure
        Introduction
        Microsoft Active Directory
        Microsoft SQL Server
        DNS Server
        DHCP Server

3 Storage Design
    EMC VNX series storage architecture
        Introduction
        Storage layout
        EMC VNX FAST Cache
        EMC VNX VAAI Support
        EMC VSI for VMware vSphere
        EMC PowerPath Virtual Edition
        vCenter Server storage layout
        VNX shared file systems
        Roaming profiles and folder redirection
        EMC VNX for File Home Directory feature
        Profile export
        Capacity

4 Network Design
    Considerations
        Physical design considerations
        Logical design considerations
        Link aggregation
    VNX for file network configuration
        Data Mover ports
        LACP configuration on the Data Mover
        ESX network configuration
        Increase the number of vSwitch virtual ports
    Cisco 6509 configuration
        Overview
        Cabling
        Server uplinks
        Data Movers
    Fibre Channel network configuration
        Introduction
        Zone configuration

5 Installation and Configuration
    Installation overview
    VMware vSphere components
        vCenter Server CPU requirement
        Appropriate access to vCenter SDK
        PowerPath Virtual Edition
    Citrix XenDesktop components
        Citrix XenDesktop installation overview
        Citrix XenDesktop machine catalog configuration
        Throttle commands to vCenter Server
        Virtual desktop idle pool settings
    Storage components
        Storage pools
        EMC VNX VAAI Support
        Enable FAST Cache
        VNX Home Directory feature

6 Testing and Validation
    Validated environment profile
        Profile characteristics
        Use cases
        Login VSI
        Login VSI launcher
        FAST Cache configuration
    Boot storm results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        ESX CPU load
        ESX disk response time
    Antivirus results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        ESX CPU load
        ESX disk response time
    Patch install results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        ESX CPU load
        ESX disk response time
    Login VSI results
        Test methodology
        Login VSI result summary
        Login storm timing
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS
        Storage processor utilization
        FAST Cache IOPS
        ESX CPU load
        ESX disk response time
    FAST Cache benefits
        Case study

7 Conclusion
    Summary
    References
        White papers
        Other documentation

List of Tables

Table 1. Terminology
Table 2. Solution hardware
Table 3. Solution software
Table 4. File systems
Table 5. Environment profile

List of Figures

Figure 1. Reference architecture
Figure 2. Storage layout
Figure 3. UNC path for roaming profiles
Figure 4. Rear view of the two VNX5300 Data Movers
Figure 5. vSwitch configuration
Figure 6. Load-balancing policy
Figure 7. vSwitch virtual ports
Figure 8. Zone configuration
Figure 9. Select Create Catalog
Figure 10. Select the Machine type
Figure 11. Select the cluster host
Figure 12. Specify the number of virtual machines
Figure 13. Select an Active Directory location
Figure 14. Review the summary
Figure 15. Select Hosts
Figure 16. Select Change details
Figure 17. Change Host Details window
Figure 18. Advanced Host Details window
Figure 19. Eight thick LUNs of 500 GB
Figure 20. Hardware Acceleration
Figure 21. FAST Cache tab
Figure 22. Enable FAST Cache
Figure 23. MMC snap-in
Figure 24. Sample virtual desktop properties
Figure 25. Boot storm - Disk IOPS for a single SAS drive
Figure 26. Boot storm - LUN IOPS and response time
Figure 27. Boot storm - Storage processor total IOPS
Figure 28. Boot storm - Storage processor utilization
Figure 29. Boot storm - FAST Cache IOPS
Figure 30. Boot storm - ESX CPU load
Figure 31. Boot storm - Average Guest Millisecond/Command counter
Figure 32. Antivirus - Disk I/O for a single SAS drive
Figure 33. Antivirus - LUN IOPS and response time
Figure 34. Antivirus - Storage processor IOPS
Figure 35. Antivirus - Storage processor utilization
Figure 36. Antivirus - FAST Cache IOPS
Figure 37. Antivirus - ESX CPU load
Figure 38. Antivirus - Average Guest Millisecond/Command counter
Figure 39. Patch install - Disk IOPS for a single SAS drive
Figure 40. Patch install - LUN IOPS and response time
Figure 41. Patch install - Storage processor IOPS
Figure 42. Patch install - Storage processor utilization
Figure 43. Patch install - FAST Cache IOPS
Figure 44. Patch install - ESX CPU load
Figure 45. Patch install - Average Guest Millisecond/Command counter
Figure 46. Login VSI response times
Figure 47. Login storm timing
Figure 48. Login VSI - Disk IOPS for a single SAS drive
Figure 49. Login VSI - LUN IOPS and response time
Figure 50. Login VSI - Storage processor IOPS
Figure 51. Login VSI - Storage processor utilization
Figure 52. Login VSI - FAST Cache IOPS
Figure 53. Login VSI - ESX CPU load
Figure 54. Login VSI - Average Guest Millisecond/Command counter
Figure 55. Boot storm - average latency comparison
Figure 56. Antivirus scan - scan time comparison
Figure 57. Patch storm - average latency comparison

1 Introduction

This chapter introduces the solution and its components, and includes the following sections:

Introduction to the new EMC VNX series for unified storage

Document overview

Reference architecture

Configuration

Introduction to the new EMC VNX series for unified storage

The EMC™ VNX® series is a new family of unified storage platforms that brings EMC Celerra® and EMC CLARiiON® together into a single product line. This innovative series meets the needs of environments that require simplicity, efficiency, and performance while keeping up with the demands of data growth, pervasive virtualization, and budget pressures. Customers can benefit from new VNX features such as:

Next-generation unified storage, optimized for virtualized applications

Automated tiering with Flash and Fully Automated Storage Tiering for Virtual Pools (FAST VP) that can be optimized for the highest system performance and lowest storage cost simultaneously

Multiprotocol support for file, block, and object with object access through Atmos™ Virtual Edition (Atmos VE)

Simplified management with EMC Unisphere™ for a single management framework for all NAS, SAN, and replication needs

Up to three times improvement in performance with the latest Intel multicore CPUs, optimized for Flash

6 Gb/s SAS back end with the latest drive technologies supported:

3.5” 100 GB and 200 GB Flash drives; 3.5” 300 GB and 600 GB 15k or 10k rpm SAS drives; and 3.5” 2 TB 7.2k rpm NL-SAS drives

2.5” 300 GB and 600 GB 10k rpm SAS

Expanded EMC UltraFlex™ I/O connectivity—Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), Network File System (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet

The VNX series includes five new software suites and two new software packages, making it easier to attain the maximum overall benefits.


VNX FAST Suite—Automatically optimizes for the highest system performance and the lowest storage cost simultaneously.

VNX Local Protection Suite—Practices safe data protection and repurposing.

VNX Remote Protection Suite—Protects data against localized failures, outages, and disasters.

VNX Application Protection Suite—Automates application copies and proves compliance.

VNX Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.

VNX Total Protection Pack—Includes local, remote, and application protection suites.

VNX Total Efficiency Package—Includes all five software suites (not available for the VNX5100™).

VNX Total Value Package—Includes all three protection software suites and the Security and Compliance Suite (the VNX5100 exclusively supports this package).

Document overview

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently facing its customers.

This Proven Solutions Guide summarizes a series of best practices that were discovered or validated during testing of the EMC Infrastructure for Virtual Desktops Enabled by EMC VNX™ Series, VMware vSphere™ 4.1, and Citrix XenDesktop 5 solution by using the following products:

EMC VNX series

Citrix XenDesktop 5

VMware vSphere 4.1

EMC PowerPath® Virtual Edition

Use case definition

The following five use cases are examined in this solution:

Boot storm

Antivirus scan

Microsoft security patch install

Login storm


User workload simulated with Login VSI tool

Chapter 6: Testing and Validation contains the test definitions and results for each use case.

Purpose

The purpose of this solution is to provide a virtualized infrastructure for virtual desktops powered by Citrix XenDesktop 5, VMware vSphere 4.1, EMC VNX series, VNX FAST Cache, and storage pools.

This solution includes all the attributes required to run this environment, such as hardware and software, including Active Directory, and the required Citrix XenDesktop configuration.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training.

Scope

This Proven Solutions Guide contains the results observed from testing the EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 solution. The objectives of this testing are to establish:

A reference architecture of validated hardware and software that permits easy and repeatable deployment of the solution.

The storage best practices to configure the solution in a manner that provides optimal performance, scalability, and protection in the context of the midtier enterprise market.

Not in scope

Implementation instructions are beyond the scope of this document. Information on how to install and configure Citrix XenDesktop 5 components, vSphere 4.1, and the required EMC products is likewise out of scope; links to supporting documentation for these products are supplied where applicable.

Audience

The intended audience for this Proven Solutions Guide is:

EMC, VMware, and Citrix customers

EMC, VMware, and Citrix partners

Internal EMC, VMware, and Citrix personnel

Prerequisites

It is assumed that the reader has a general knowledge of the following products:

VMware vSphere 4.1

Citrix XenDesktop 5

EMC VNX series

Terminology

Table 1 provides terms frequently used in this paper.


Table 1. Terminology

EMC VNX FAST Cache: A feature that enables the use of Flash drives (EFDs) as an expanded cache layer for the array.

Machine Creation Services (MCS): A collection of services that work together to create virtual desktops from a master desktop image on demand, optimizing storage utilization and providing a pristine virtual desktop to each user every time they log on.

Login VSI: A third-party benchmarking tool, developed by Login Consultants, that simulates real-world VDI workloads by using an AutoIT script and determines the maximum system capacity based on the response time of the users.

Reference architecture

Corresponding reference architecture

This Proven Solutions Guide has a corresponding Reference Architecture document that is available on EMC Powerlink® and EMC.com. The EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 — Reference Architecture provides more details.

If you do not have access to these documents, contact your EMC representative.

The reference architecture and the results in this Proven Solutions Guide are valid for 1,000 Windows 7 virtual desktops conforming to the workload described in “Validated environment profile” in Chapter 6: Testing and Validation.


Reference architecture logical diagram

Figure 1 depicts the logical architecture of the midsize solution.

Figure 1. Reference architecture


Configuration

Test results

Chapter 6: Testing and Validation provides more information on the performance results.

Hardware resources

Table 2 lists the hardware used for the solution.

Table 2. Solution hardware

EMC VNX5300™ (quantity: 1) – VNX shared storage
  Two Data Movers (active/standby)
  Three DAEs configured with:
    Thirty 300 GB, 15k rpm, 3.5” SAS disks
    Nine 2 TB, 7,200 rpm, 3.5” NL-SAS disks
    Three 100 GB, 3.5” Flash drives

Intel-based servers (quantity: 3) – ESX® cluster to host infrastructure virtual machines
  Memory: 20 GB RAM
  CPU: Two Intel Xeon E5450 3.0 GHz quad-core processors
  Internal storage: One 67 GB disk
  External storage: VNX5300 (FC)
  HBA: One QLogic ISP2532 with dual 8 Gb ports
  NIC: Two Broadcom NetXtreme II BCM 1000 Base-T adapters

Intel-based servers (quantity: 16) – Two ESX clusters to host 1,000 virtual desktops
  Memory: 72 GB RAM
  CPU: Two Intel Xeon X5550 2.6 GHz quad-core processors
  Internal storage: One 67 GB disk
  External storage: VNX5300 (FC)
  HBA: One QLogic ISP2532 with dual 8 Gb ports
  NIC: Four Broadcom NetXtreme II BCM 1000 Base-T adapters

Brocade DS5100 (quantity: 2) – Redundant SAN A/B configuration
  Twenty-four 8 Gb ports


Software resources

Table 3 lists the software used for the solution.

Table 3. Solution software

EMC VNX5300
  VNX OE for File: Release 7.0.12
  VNX OE for Block: Release 31 (05.31.000.5.006)
  VSI for VMware vSphere: Unified Storage Management: Version 4.1
  VSI for VMware vSphere: Storage Viewer: Version 4.0.1
  VSI for VMware vSphere: Path Management: Version 4.0.1

XenDesktop Desktop Virtualization
  Citrix XenDesktop Controller: Version 5 Platinum Edition
  OS for XenDesktop Controller: Windows Server 2008 R2 Enterprise Edition
  Microsoft SQL Server: Version 2008 Enterprise Edition (64-bit)

VMware vSphere
  ESX Server: ESX 4.1 (Build 208167)
  vCenter Server: 4.1 (Build 208111)
  OS for vCenter Server: Windows Server 2008 R2 Enterprise Edition
  PowerPath Virtual Edition: 5.4 SP2 (build 298)

Virtual desktops (note: this software is used to generate the test load)
  Operating system: Microsoft Windows 7 Enterprise (32-bit)
  Microsoft Office: Office 2007 Version 12
  Internet Explorer: 8.0.7600.16385
  Adobe Reader: 9.1.0
  McAfee VirusScan: 8.7.0i Enterprise
  Adobe Flash Player: 10
  Bullzip PDF Printer: 6.0.0.865
  Login VSI (VDI workload generator): 2.1.2 Pro edition


2 Citrix Virtual Desktop Infrastructure

This chapter describes the general design and layout instructions that apply to the specific components used during the development of this solution. This chapter includes the following sections:

Citrix XenDesktop 5

vSphere 4.1 infrastructure

Windows infrastructure

Citrix XenDesktop 5

Introduction

Citrix XenDesktop offers a powerful and flexible desktop virtualization solution that allows you to deliver on-demand virtual desktops and applications to users anywhere, using any type of device. With XenDesktop’s FlexCast delivery technology, you can deliver every type of virtual desktop, tailored to individual performance and personalization needs, for unprecedented flexibility and mobility.

Powered by Citrix HDX technologies, XenDesktop provides a superior user experience with Flash multimedia and applications, 3D graphics, webcams, audio, and branch office delivery, while using less bandwidth than alternative solutions. The high-speed delivery protocol provides unparalleled responsiveness over any network, including low bandwidth and high latency WAN connections.

This solution is deployed using two XenDesktop 5 controllers that are capable of scaling up to 1,000 virtual desktops.

Deploying Citrix XenDesktop components

The core elements of a Citrix XenDesktop 5 implementation are:

Citrix XenDesktop 5 controllers

Citrix License Server

VMware vSphere 4.1

Additionally, the following components are required to provide the infrastructure for a XenDesktop 5 deployment:

Microsoft Active Directory

Microsoft SQL Server

DNS Server

DHCP Server


Citrix XenDesktop controller

The Citrix XenDesktop controller is the central management location for virtual desktops and has the following key roles:

Broker connections between users and virtual desktops

Control the creation and retirement of virtual desktop images

Assign users to desktops

Control the state of the virtual desktops

Control access to the virtual desktops

Two XenDesktop 5 controllers were used in this solution to provide high availability as well as load balancing of brokered desktop connections. A Citrix License Server is installed on one of the controllers.

Machine Creation Services (MCS)

Machine Creation Services (MCS) is a new provisioning mechanism introduced in XenDesktop 5. It is integrated with Desktop Studio, the new XenDesktop management interface, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management.

MCS allows several types of machines to be managed within a catalog in Desktop Studio, including dedicated and pooled machines. Desktop customization persists on dedicated machines, while pooled machines are used where non-persistent desktops are appropriate.

In this solution, 1,000 virtual desktops that are running Windows 7 were provisioned by using MCS. The desktops were deployed from two dedicated machine catalogs.

Desktops provisioned using MCS share a common base image within a catalog. Because of this, the base image is typically accessed with sufficient frequency to naturally leverage EMC VNX FAST Cache, where frequently accessed data is promoted to Flash drives to provide optimal I/O response time with fewer physical disks.

vSphere 4.1 infrastructure

vSphere 4.1 overview

VMware vSphere 4.1 is the market-leading virtualization hypervisor used across thousands of IT environments around the world. VMware vSphere 4.1 can transform or virtualize computer hardware resources, including the CPU, RAM, hard disk, and network controller, to create fully functional virtual machines that run their own operating systems and applications just like physical computers.

The high-availability features in VMware vSphere 4.1 along with VMware Distributed Resource Scheduler (DRS) and Storage vMotion® enable seamless migration of virtual desktops from one ESX server to another with minimal or no impact to customers’ usage.

vSphere cluster

Two vSphere clusters were deployed to house the 1,000 desktops in this solution. Each cluster consists of eight ESX servers supporting 500 desktops, which results in 62 to 63 virtual machines per ESX server. Each cluster has access to four datastores for desktop provisioning, for a total of 125 virtual machines per datastore.
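This sizing falls out of simple arithmetic. The following sketch merely restates the figures from this section; it is illustrative only and not an EMC sizing tool:

```python
# Desktop distribution across the two vSphere clusters described above.
# All inputs are figures from this guide; the script only restates the math.
import math

TOTAL_DESKTOPS = 1000
CLUSTERS = 2
HOSTS_PER_CLUSTER = 8
DATASTORES_PER_CLUSTER = 4

desktops_per_cluster = TOTAL_DESKTOPS // CLUSTERS                       # 500
vms_per_host_min = desktops_per_cluster // HOSTS_PER_CLUSTER            # 62
vms_per_host_max = math.ceil(desktops_per_cluster / HOSTS_PER_CLUSTER)  # 63
vms_per_datastore = desktops_per_cluster // DATASTORES_PER_CLUSTER      # 125

print(f"{desktops_per_cluster} desktops per cluster")
print(f"{vms_per_host_min}-{vms_per_host_max} virtual machines per ESX host")
print(f"{vms_per_datastore} virtual machines per datastore")
```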

The Infrastructure cluster consists of three ESX servers and holds the following virtual machines:

Windows 2008 R2 domain controller—provides DNS, Active Directory, and DHCP services.

SQL Server 2008 SP2 on Windows 2008 R2—provides databases for vCenter Server and XenDesktop 5 controllers.

vCenter Server on Windows 2008 R2—provides management services for the VMware clusters and XenDesktop controllers.

XenDesktop 5 controllers on Windows 2008 R2—provide services for managing virtual desktops.

Windows infrastructure

Introduction

Microsoft Windows provides the infrastructure used to support the virtual desktops and includes the following components:

Microsoft Active Directory

Microsoft SQL Server

DNS Server

DHCP Server

Microsoft Active Directory

The Windows domain controller runs the Active Directory service that provides the framework to manage and support the virtual desktop environment. Active Directory performs the following functions:

Manages the identities of users and their information

Applies group policy objects

Deploys software and updates

Microsoft SQL Server

Microsoft SQL Server is a relational database management system (RDBMS). A dedicated SQL Server 2008 SP2 instance is used to provide the required databases to vCenter Server and the XenDesktop controllers.

DNS Server

DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients.

In this solution, the DNS role is enabled on the domain controller.


DHCP Server

The DHCP Server provides the IP address, DNS server name, gateway address, and other information to the virtual desktops.

In this solution, the DHCP role is enabled on the domain controller. The DHCP scope is configured to accommodate the range of IP addresses for 1,000 or more virtual desktop machines.
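The guide does not specify the subnet used for the desktop network. As an illustration only, with hypothetical address ranges, Python's standard ipaddress module shows how large a scope must be to accommodate 1,000 or more desktops:

```python
import ipaddress

# Hypothetical subnets -- the actual address range used in the solution
# is not documented in this guide.
for cidr in ("10.0.0.0/22", "10.0.0.0/21"):
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    print(f"{net}: {usable} usable host addresses")

# 10.0.0.0/22: 1022 usable host addresses (barely fits 1,000 desktops)
# 10.0.0.0/21: 2046 usable host addresses (comfortable headroom)
```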


3 Storage Design

The storage design described in this section applies to the specific components of this solution.

EMC VNX series storage architecture

Introduction

The EMC VNX series is a dedicated network server optimized for file and block access that delivers high-end features in a scalable and easy-to-use package.

The VNX series delivers a single-box block and file solution, which offers a centralized point of management for distributed environments. This makes it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and to provide multiprotocol block access. Administrators can take advantage of simultaneous support for the NFS and CIFS protocols, allowing Windows and Linux/UNIX clients to share files by using the sophisticated file-locking mechanisms of VNX for File, and can use VNX for Block for high-bandwidth or latency-sensitive applications.

This solution uses both block-based and file-based storage to leverage the benefits that each of the following provides:

Block-based storage over Fibre Channel (FC) is used to store the VMDK files for all virtual desktops. This has the following benefits:

Block storage leverages the VAAI APIs (introduced in vSphere 4.1), which provide hardware-accelerated copy to improve performance and granular VMFS locking to improve scaling.

Unified Storage Management plug-in provides seamless integration with VMware vSphere to simplify the provisioning of datastores or VMs.

PowerPath Virtual Edition (PowerPath/VE) provides better performance and scalability than the native multipathing options.

File-based storage is provided by a CIFS share. This has the following benefits:

Redirection of user data and roaming profiles to a central location for easy backup and administration.

Single instancing and compression of unstructured user data to provide the highest storage utilization and efficiency.

This section explains the configuration of the storage that was provided over FC to the ESX cluster to store the VMDK images and the storage that was provided over CIFS to redirect user data and roaming profiles.


Storage layout

The following figure shows the storage layout of the disks.

Figure 2. Storage layout

The following storage configuration was used in the solution:

Four SAS disks (0_0 to 0_3) are used for the VNX OE.

Disks 0_4, 1_10, and 2_13 are hot spares. These disks are denoted as hot spare in the storage layout diagram.

Twenty SAS disks (0_5 to 0_14 and 1_0 to 1_9) on the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. Eight LUNs of 500 GB each are carved out of the pool and presented to the ESX servers.

Two Flash drives (1_11 and 1_12) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

Five SAS disks (2_0 to 2_4) on the RAID 5 storage pool 2 are used to store the infrastructure virtual machines. Two LUNs of 500 GB each are carved out of the pool and presented to the ESX servers.

Eight NL-SAS disks (2_5 to 2_12) on the RAID 6 (6+2) group are used to store user data and roaming profiles. Two VNX file systems are created from the NL-SAS storage pool, a 2 TB file system for profiles and a 4 TB file system for user data.

Disks 1_13, 1_14, and 2_14 are unbound. They are not used for testing this solution.

EMC VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables Flash drives to be used as an expanded cache layer for the array. The VNX5300 is configured with two 100 GB Flash drives in a RAID 1 configuration, providing 93 GB of read/write-capable cache. This is the minimum FAST Cache configuration; larger configurations are supported for scaling beyond 1,000 desktops.


FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to Flash drives. This dramatically improves the response times for the active data and reduces data hot spots that can occur within the LUN.

FAST Cache is an extended read/write cache that enables XenDesktop to deliver consistent performance at Flash drive speeds by absorbing read-heavy activities such as boot storms and antivirus scans, and write-heavy workloads such as operating system patches and application updates. This extended read/write cache is an ideal caching mechanism for MCS in XenDesktop 5 because the base desktop image and other active user data are accessed frequently enough that the data is serviced directly from the Flash drives, without having to access slower drives at a lower storage tier.
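EMC does not publish the internal promotion algorithm, so the following toy model is only a sketch of the behavior described above: accesses are counted per 64 KB chunk, and chunks that cross an assumed threshold are serviced from Flash on subsequent reads. The threshold value and data structures here are illustrative, not the array's actual policy.

```python
CHUNK_SIZE = 64 * 1024   # FAST Cache tracks data in 64 KB chunks
PROMOTE_THRESHOLD = 3    # assumed access count; the real policy is internal to VNX

access_counts: dict[int, int] = {}  # chunk index -> accesses observed
fast_cache: set[int] = set()        # chunk indexes promoted to Flash

def read(offset: int) -> str:
    """Return the tier that services a read at the given byte offset."""
    chunk = offset // CHUNK_SIZE
    if chunk in fast_cache:
        return "flash"  # already promoted: serviced by FAST Cache
    access_counts[chunk] = access_counts.get(chunk, 0) + 1
    if access_counts[chunk] >= PROMOTE_THRESHOLD:
        fast_cache.add(chunk)  # hot chunk is copied to the Flash tier
    return "sas"  # cold chunk: serviced by the underlying SAS drives

# A boot storm re-reads the shared base image, so the same chunks heat up fast.
for _ in range(5):
    print(read(offset=128 * 1024))  # sas, sas, sas, flash, flash
```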

EMC VNX VAAI Support

Hardware acceleration with the VMware vStorage APIs for Array Integration (VAAI) is a storage enhancement in vSphere 4.1 that allows ESX to offload specific storage operations to compliant storage hardware such as the EMC VNX series. With storage hardware assistance, ESX performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

EMC VSI for VMware vSphere

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to VMware vSphere Client that provides a single management interface used for managing EMC storage within the vSphere environment. Features can be added and removed from VSI independently, providing flexibility for customizing VSI user environments. Features are managed using the VSI Feature Manager. VSI provides a unified user experience, allowing new features to be introduced rapidly in response to changing customer requirements.

The following features were used during the validation testing:

Storage Viewer (SV) – extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware ESX hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

Path Management – provides a mechanism for changing the multipath policy for groups of LUNs based on storage class and virtualization object. This feature works with devices managed by VMware Native Multipathing and PowerPath/VE.

Unified Storage Management – simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new NFS and VMFS datastores, and RDM volumes, seamlessly within the vSphere Client.

Refer to the EMC VSI for VMware vSphere product guides on Powerlink for more information.


Each datastore that is used to store VMDK files is placed on VNX5300 storage over FC. PowerPath/VE is enabled for all FC-based LUNs to use all available paths efficiently and to minimize the effect of micro-bursting I/O patterns.

A storage pool of 20 x 300 GB SAS drives was configured on the VNX to provide the storage required for virtual desktops. Eight LUNs were carved out of the pool to present to the ESX clusters as eight datastores. Each of these 500 GB datastores accommodates 125 virtual machines. This allows each desktop to grow to a maximum average size of 4 GB. The pool of desktops created in XenDesktop is balanced across all eight datastores.
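The capacity arithmetic behind this layout is straightforward:

8 datastores x 500 GB = 4 TB of usable desktop storage
4 TB / 1,000 desktops = 4 GB average per desktop (vmdk plus vswap)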

Virtual desktops use two VNX shared file systems, one for user profiles and the other to redirect user storage. Each file system is exported to the environment through a CIFS share.

The following table shows the file systems used for user profiles and redirected user storage.

Table 4. File systems

File system Use Size

profiles_fs Users’ profile data 2 TB

userdata1_fs Users’ data 4 TB

The local user profile is not recommended in a VDI environment. A performance penalty is incurred when a new local profile is created whenever a user logs in to a new desktop image. On the other hand, roaming profiles and folder redirection allow user data to be stored centrally on a network location that resides on a CIFS share hosted by VNX. This reduces the performance hit during user logon while allowing user data to roam with the profiles.

Alternative profile management tools such as Citrix User Profile Manager, or a third-party tool such as AppSense Environment Manager, provide more advanced and granular features to manage various user profile scenarios. Refer to User Profiles for XenApp and XenDesktop on the Citrix website for further details.

The EMC VNX for File Home Directory feature uses the userdata1_fs file system to automatically map the H: drive of each virtual desktop to the users’ own dedicated subfolder on the share. This ensures that each user has exclusive rights to a dedicated home drive share. This export does not need to be created manually. The Home Directory feature automatically maps this share for each user.

The Documents folder for each user is also redirected to this share. This allows users to recover the data in the Documents folder by using the VNX Snapshots for File. The file system is thin provisioned with an initial size of 1 TB, and extends automatically when more space is required.


The profiles_fs file system is used to store user roaming profiles. It is exported through CIFS. The UNC path to the export is configured in Active Directory for roaming profiles as shown in the following figure:

Figure 3. UNC path for roaming profiles

The file systems leverage Virtual Provisioning™ and compression to provide flexibility and increased storage efficiency. With single instancing and compression enabled, unstructured data such as user documents typically yields about a 50 percent reduction in consumed storage.

The VNX file systems for user profiles and documents are configured as follows:

profiles_fs is configured to consume 2 TB of space. Assuming 50 percent space savings, each profile can grow up to 4 GB in size. The file system can be extended if more space is needed.

userdata1_fs is configured to consume 4 TB of space. Assuming 50 percent space savings, each user will be able to store 8 GB of data. The file system can be extended if more space is needed.
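The per-user figures above follow from simple arithmetic under the 50 percent savings assumption:

profiles_fs: 2 TB x 2 (effective capacity after savings) / 1,000 users = 4 GB per profile
userdata1_fs: 4 TB x 2 (effective capacity after savings) / 1,000 users = 8 GB of data per user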


4 Network Design

This chapter describes the network design used in this solution and contains the following sections:

Considerations

VNX for file network configuration

Cisco 6509 configuration

Fibre Channel network configuration

Considerations

EMC recommends that network switches support gigabit Ethernet (GbE) connections and Link Aggregation Control Protocol (LACP) or Etherchannel, and that the switch ports support copper-based media.

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

The IP scheme for the virtual desktop network must be designed with enough IP addresses in one or more subnets for the DHCP Server to assign them to each virtual desktop.

VNX platforms provide network high availability or redundancy by using link aggregation. This is one of the methods used to address the problem of link or switch failure.

Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address and potentially multiple IP addresses.

In this solution, LACP is configured on the VNX, combining two GbE ports into a single virtual device. If one of the aggregated links fails, traffic fails over to the remaining port, and all network traffic is distributed across the active links.


VNX for file network configuration

The VNX5300 is configured with two Data Movers. The Data Movers can be configured in an active/active or active/passive configuration. In the active/passive configuration, the passive Data Mover serves as a failover device for the active Data Mover. In this solution, the Data Movers operate in the active/passive mode.

The VNX5300 Data Movers are configured with four 1 Gb interfaces on a single I/O module. Link Aggregation Control Protocol (LACP) is used to combine ports cge-2-0 and cge-2-1 into a single virtual device, lacp1, which carries virtual machine traffic, home folder access, and external access for roaming profiles. Ports cge-2-2 and cge-2-3 were left free for future expansion.

The following figure shows the rear view of the two VNX5300 Data Movers, each with four copper Gigabit Ethernet (cge) ports in I/O expansion slot 2.

Figure 4. Rear view of the two VNX5300 Data Movers

To configure the link aggregation that uses cge-2-0 and cge-2-1 on Data Mover 2, run the following command:

$ server_sysconfig server_2 -virtual -name <Device Name> -create trk -option "device=cge-2-0,cge-2-1 protocol=lacp"

To verify if the ports are channeled correctly, run the following command:

$ server_sysconfig server_2 -virtual -info lacp1

server_2:

*** Trunk lacp1: Link is Up ***

*** Trunk lacp1: Timeout is Short ***

*** Trunk lacp1: Statistical Load Balancing is IP ***

Device Local Grp Remote Grp Link LACP Duplex Speed

--------------------------------------------------------------

cge-2-0 10003 5888 Up Up Full 1000 Mbs

cge-2-1 10003 5888 Up Up Full 1000 Mbs

The remote group number must match for both ports, and the LACP status must be “Up.” Also verify that the expected speed and duplex are reported.


All network interfaces in this solution use 1 GbE connections. All ESX servers have two onboard GbE controllers that are configured for NIC teaming to provide multipathing and network load balancing. The following diagram shows the vSwitch configuration in vCenter Server.

Figure 5. vSwitch configuration

The NIC teaming load balancing policy for the vSwitch needs to be set to Route based on IP hash as shown below. This policy is compatible with the Etherchannel port channel configured on the network switch.

Figure 6. Load-balancing policy

A vSwitch, by default, is configured with 24 or 120 virtual ports (depending on the ESX version), which may not be sufficient in a VDI environment. On the ESX servers that host the virtual desktops, each port is consumed by a virtual desktop. Set the number of ports based on the number of virtual desktops that will run on each ESX server.

Note Reboot the ESX server for the changes to take effect.


Figure 7. vSwitch virtual ports

If an ESX server goes down or is placed in maintenance mode, the other ESX servers in the cluster must accommodate the additional virtual desktops that are migrated from the offline server. Take this worst-case scenario into account when determining the maximum number of virtual ports per vSwitch. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP address from the DHCP server. A worked sizing check follows.
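Using this solution's numbers (two clusters of eight ESX servers, 500 desktops per cluster, from the validated environment profile):

500 desktops / 8 hosts = 62.5, or about 63 desktops per host in normal operation
500 desktops / 7 hosts = 71.4, or about 72 desktops per host with one host offline

The default of 24 virtual ports is therefore insufficient, while a setting of 120 or more ports per vSwitch covers the worst case with headroom to spare.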

Cisco 6509 configuration

The nine-slot Cisco Catalyst 6509-E switch provides high port densities that are ideal for many wiring closet, distribution, and core network deployments, as well as data center deployments.

In this solution, the ESX server and VNX Data Mover cabling is evenly spread across two WS-X6748 1 Gb line cards to provide redundancy and load balancing of the network traffic.

The server uplinks to the switch are configured in an Etherchannel port channel group to increase the utilization of server network resources and provide redundancy.


Etherchannel in static mode (mode on) is compatible with the vSwitch NIC teaming load-balancing option Route based on IP hash. A static channel is required on the server-facing ports because the standard vSwitch does not negotiate LACP.

The following is an example of the Etherchannel configuration for one of the server ports:

description 8/10 9048-43 rtpsol189-1

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 274,516-527

switchport mode trunk

no ip address

spanning-tree portfast trunk

channel-group 22 mode on

The network ports for each VNX5300 Data Mover are connected to the 6509-E switch. The Data Mover 2 ports cge-2-0 and cge-2-1 are configured with LACP to provide redundancy in case of a NIC or port failure.

The following is an example of the switch configuration for one of the Data Mover ports:

description 7/4 9047-4 rtpsol22-dm2.0

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 274,516-527

switchport mode trunk

mtu 9216

no ip address

spanning-tree portfast trunk

channel-group 23 mode active

Fibre Channel network configuration

Two Brocade DS5100 series FC switches are used to provide the storage network for this solution. The switches are configured in a SAN A/SAN B configuration to provide a fully redundant fabric.

Each server has a single connection to each fabric to provide load-balancing and failover capabilities. Each storage processor has two links to the SAN fabrics for a total of four available front-end ports. The zoning is configured so that each server has four available paths to the storage array.


Single-initiator, multiple-target zoning is used in this solution. Each server initiator is zoned to two storage targets on the array. The following diagram shows the zone configuration for the SAN A fabric.
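The four available paths per server noted earlier follow directly from this zoning: two initiators per server (one per fabric), each zoned to two storage targets, yields 2 x 2 = 4 paths to the array.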

Figure 8. Zone configuration


5 Installation and Configuration

This chapter describes how to install and configure this solution and includes the following sections:

Installation overview

VMware vSphere components

Storage components

Installation overview

This section provides an overview of the configuration of the following components:

Desktop pools

Storage pools

FAST Cache

VNX Home Directory

PowerPath/VE

The installation and configuration steps for the following components are available on the Citrix (www.citrix.com) and VMware (www.vmware.com) websites:

Citrix XenDesktop 5

VMware ESX 4.1

VMware vSphere 4.1

The installation and configuration of the following components are not covered:

Microsoft System Center Configuration Manager (SCCM)

Microsoft Active Directory, DNS, and DHCP

Microsoft SQL Server 2008 SP2


VMware vSphere components

Boot storm tests have shown that concurrent power on/off tasks of virtual machines in vCenter impose a heavy CPU load on the vCenter server. It is recommended to configure at least four vCPUs for the vCenter Server virtual machine to manage 1,000 desktops.

Citrix XenDesktop 5 controllers require appropriate access to communicate with the SDK of the VMware vCenter Server. This is achieved by one of the following methods depending on the security requirements:

HTTPS access to vCenter SDK:

1. On the VMware vCenter Server, replace the default SSL certificate. The Replacing vCenter Server Certificates paper on the VMware website provides more details on how to replace the default SSL certificate.

2. Open an MMC and the Certificates snap-in on the XenDesktop controllers.

3. Select Certificates > Trusted Root Certification Authorities Certificates and import the trusted root certificate for the SSL certificate created in step 1.

HTTP access to vCenter SDK:

1. Log in to the vCenter Server and open the C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\proxy.xml file.

2. Navigate to the tag where serverNamespace=/sdk. Do not modify the /sdkTunnel properties.

<e id="5">
   <_type>vim.ProxyService.LocalServiceSpec</_type>
   <accessMode>httpAndHttps</accessMode>
   <port>8085</port>
   <serverNamespace>/sdk</serverNamespace>
</e>

3. Change accessMode to httpAndHttps. Alternatively, set accessMode to httpOnly to disable HTTPS.

4. Repeat steps 2 and 3 for serverNamespace=/. This change is required in XenDesktop 5 in order for MCS to function correctly. It is not required in previous XenDesktop versions.

5. Save the file and restart the vmware-hostd process using the following command. You may have to reboot the vCenter Server if SDK is inaccessible after restarting the process:

service mgmt-vmware restart


PowerPath/VE 5.4 SP2 supports ESX 4.1. The EMC PowerPath/VE for VMware vSphere Installation and Administration Guide available on Powerlink provides the procedure to install and configure PowerPath/VE. There are no special configuration requirements for this solution.

The PowerPath/VE binaries and support documentation are available on Powerlink.

Citrix XenDesktop components

The Citrix online product documentation (or Citrix eDocs) available on the Citrix website has detailed procedures to install XenDesktop 5. There are no special configuration requirements for this solution.

In this solution, persistent desktops were created using Machine Creation Services (MCS) to allow users to maintain their desktop customization. Complete the following steps to create a dedicated machine catalog with persistent desktops.

1. Open Desktop Studio on one of the XenDesktop 5 controllers in Start > All Programs > Citrix > Desktop Studio.

2. Select Machines in the left-hand pane.

3. Right-click Machines and select Create Catalog from the context menu.

Figure 9. Select Create Catalog


4. On the Machine Type page, select Dedicated from the Machine type list box and click Next.

Figure 10. Select the Machine type

5. On the Master Image page, select the cluster host from which the virtual desktops are to be deployed. Click the … button to select a virtual machine or VM snapshot as the master image. Click Next to proceed.

Figure 11. Select the cluster host


6. On the Number of VMs page, type in the number of virtual machines to create and adjust the VM specification if needed. Click Next.

Figure 12. Specify the number of virtual machines

7. On the Create Accounts page, select an Active Directory container in which the computer accounts are to be created. Enter the account naming scheme of your choice. For example, a scheme of xd#### creates computer account names xd0001 through xd0500. These names are also used when the virtual machines are created. Click Next.

Figure 13. Select an Active Directory location

8. On the Administrators page, make any required changes and click Next.


9. On the Summary page, verify the settings for the catalog, name the catalog, and click Finish to start the deployment of the machines.

Figure 14. Review the summary

10. You can check the machine creation status by monitoring the green progress bar while you browse the list of catalogs. Once the machines are created, they need to be added to a desktop group before any user can access the virtual desktops.

The number of concurrent requests sent from XenDesktop controllers to the vCenter Server can be adjusted to either expedite powering on/off of virtual machines, or back off the number of concurrent operations so the VMware infrastructure is not overwhelmed. Complete the following steps to change the throttle rate:

1. Open Desktop Studio on one of the XenDesktop 5 controllers in Start > All Programs > Citrix > Desktop Studio.

2. Expand Configuration and select Hosts in the left-hand pane.

Figure 15. Select Hosts


3. In the right-hand pane, right-click on the existing host connection and select Change details.

Figure 16. Select Change details

4. In the Change Host Details window, click Advanced.

Figure 17. Change Host Details window

5. In the Advanced Host Details window, make the required changes and click OK twice to save the changes.

Figure 18. Advanced Host Details window


XenDesktop controllers manage the number of idle virtual desktops based on the time of day, and automatically optimize the idle pool settings of a desktop group based on the number of virtual desktops in the group.

These default idle pool settings should be adjusted to customer requirements so that virtual machines are powered on in advance, avoiding a boot storm scenario. During the validation testing, the idle desktop count was set to match the number of desktops in the group to ensure that all desktops were powered on in a steady state and ready for client connections immediately.

Complete the following steps to change the idle pool settings for a desktop group that consists of dedicated machines:

1. Open Desktop Studio on one of the XenDesktop 5 controllers in Start > All Programs > Citrix > Desktop Studio.

2. Select Desktop Studio in the left-hand pane.

3. Select the PowerShell tab in the right-hand pane.

4. Click Launch PowerShell in the bottom-right corner of the right-hand pane.

5. In the PowerShell window, enter the following command where XD5DG1 is the desktop group name, and PeakBufferSizePercent and OffPeakBufferSizePercent are percentages of desktops to be powered up during peak and off-peak hours:

PS C:\> Set-BrokerDesktopGroup XD5DG1 -PeakBufferSizePercent 100 -OffPeakBufferSizePercent 100
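To confirm that the change took effect, the values can be read back in the same PowerShell session; a minimal sketch, assuming the same desktop group name XD5DG1:

PS C:\> Get-BrokerDesktopGroup XD5DG1 | Select-Object Name, PeakBufferSizePercent, OffPeakBufferSizePercent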


Storage components

Storage pools in the EMC VNX OE support heterogeneous drive pools. In this solution, a RAID 5 storage pool was configured from 20 SAS drives. Eight thick LUNs of 500 GB each were created from this storage pool as shown in the following figure. FAST Cache was enabled for the pool.

Figure 19. Eight thick LUNs of 500 GB
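For reference, an equivalent pool and LUN layout can also be scripted with naviseccli. The following is a minimal sketch rather than the validated procedure; the SP address, disk IDs, object names, and LUN number are illustrative, and the exact switches should be verified against the VNX CLI reference for the installed OE release:

naviseccli -h <SP_IP> storagepool -create -disks 1_0_0 1_0_1 [remaining 18 disk IDs] -rtype r_5 -name "VDI_Pool"

naviseccli -h <SP_IP> lun -create -type NonThin -capacity 500 -sq gb -poolName "VDI_Pool" -name "VDI_DS1" -l 1

FAST Cache is then enabled for the pool from its Advanced properties in Unisphere, as described later in this chapter.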

VAAI is enabled by default on the VNX storage array. To confirm from an ESX server that the storage system supports VAAI, verify that the value in the Hardware Acceleration column reads Supported while browsing the datastores, as shown below:

Figure 20. Hardware Acceleration

ESX 4.1 supports three VAAI primitives on block-based storage:

Full copy enables the storage arrays to make full copies of the data on the array without having the ESX server read and write the data.

Block zeroing enables storage arrays to zero out a large number of blocks to speed up provisioning of virtual machines.

Hardware-assisted locking provides an alternative means to protect the metadata for VMFS cluster file systems, thereby improving the scalability of large ESX server farms sharing a datastore.


The following ESX advanced settings determine if the primitives are to be enabled (1) or disabled (0).

1. Full copy – DataMover.HardwareAcceleratedMove

2. Block zeroing – DataMover.HardwareAcceleratedInit

3. Hardware-assisted locking – VMFS3.HardwareAcceleratedLocking

Refer to the What’s New in VMware vSphere 4.1 – Storage white paper for further details on storage enhancements in vSphere 4.1.
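These settings can be queried or changed from the ESX service console with esxcfg-advcfg; a brief sketch for the full copy primitive (the same pattern applies to the other two settings):

# Query the current value (1 = enabled, 0 = disabled)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove

# Enable the primitive
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove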

FAST Cache is enabled as an array-wide feature in the system properties of the array in Unisphere™. Click the FAST Cache tab, then click Create, and select the eligible Flash drives to create the FAST Cache. There are no user-configurable parameters for the FAST Cache.

Figure 21. FAST Cache tab


To enable FAST Cache for any LUN in a pool, go to the properties of the pool in Unisphere™ and click the Advanced tab. Select Enabled to enable FAST Cache, as shown in the following figure.

Figure 22. Enable FAST Cache

The VNX Home Directory installer is available on the NAS Tools and Application CD for each VNX OE for file release, and can be downloaded from Powerlink.

After the VNX Home Directory feature is installed, use the Microsoft Management Console (MMC) snap-in to configure the feature. A sample configuration is shown in the following two figures.

Figure 23. MMC snap-in


For any user account that ends with a numeric suffix between 1 and 1,000, the sample configuration shown in the following figure automatically creates a user home directory on the userdata1_fs file system, in the format \userdata1_fs\<domain>\<user>, and maps the H: drive to this path.

Each user has exclusive rights to the folder.

Figure 24. Sample virtual desktop properties


6 Testing and Validation

This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing was to characterize the performance of the solution and its component subsystems during the following scenarios:

Boot storm of all desktops

McAfee antivirus full scan on all desktops

Security patch install with Microsoft SCCM

User workload testing using Login VSI

Validated environment profile

The solution was validated with the following environment profile.

Table 5. Environment profile

Profile characteristic Value

Number of virtual desktops 1,000

Virtual desktop OS Windows 7 Enterprise (32-bit)

CPU per virtual desktop 1 vCPU

Number of virtual desktops per CPU core 7.8125

RAM per virtual desktop 1 GB

Desktop provisioning method Machine Creation Services (MCS)

Average storage available for each virtual desktop 4 GB (vmdk and vswap)

Average IOPS per virtual desktop at steady state 6 IOPS

Average peak IOPS per virtual desktop during boot storm 84 IOPS

Number of datastores to store virtual desktops 8

Number of virtual desktops per data store 125


Disk and RAID type for datastores RAID 5, 300 GB, 15k rpm, 3.5” SAS disks

Disk and RAID type for CIFS shares to host roaming user profiles and home directories RAID 6, 2 TB, 7200 rpm, 3.5” NL-SAS disks

Number of VMware clusters 2

Number of ESX servers per cluster 8

Number of VMs per cluster 500

Four common use cases were executed to validate whether the solution performed as expected under heavy load situations.

The tested use cases are listed below:

Simultaneous boot of all desktops

Full antivirus scan of all desktops

Installation of a security update using SCCM on all desktops

Login and steady state user load simulated using the Login VSI medium workload

In each use case, a number of key metrics are presented showing the overall performance of the solution.

To run a user load against the desktops, the Login Virtual Session Indexer (Login VSI) tool was used. Login VSI provides a way to gauge the maximum number of users a desktop environment can support. The Login VSI workload can be categorized as light, medium, heavy, or custom. A medium workload was selected for testing and had the following characteristics:

The workload emulated a medium knowledge worker who uses Microsoft Office, Internet Explorer, and Adobe Acrobat Reader.

After a session started, the medium workload repeated every 12 minutes.

The response time was measured every 2 minutes during each loop.

The medium workload opened up to five applications simultaneously.

The type rate was 160 ms for each character.

The medium workload in VSI 2.0 was approximately 35 percent more resource-intensive than VSI 1.0.

Approximately 2 minutes of idle time were included to simulate real-world users.


Each loop of the medium workload opened and used the following:

Microsoft Outlook 2007—Browsed 10 messages.

Microsoft Internet Explorer—One instance was left open to BBC.co.uk, one instance browsed Wired.com, Lonelyplanet.com, and a heavy Flash application gettheglass.com (not used with MediumNoFlash workload).

Microsoft Word 2007—One instance to measure the response time and one instance to edit the document.

Bullzip PDF Printer and Adobe Acrobat Reader—The Word document was printed and the PDF was reviewed.

Microsoft Excel 2007—A very large sheet was opened and random operations were performed.

Microsoft PowerPoint 2007—A presentation was reviewed and edited.

7-zip—Using the command line version, the output of the session was zipped.

A Login VSI launcher is a Windows system that launches desktop sessions on target virtual desktops. There are two types of launchers—master and slave. There is only one master in a given test bed and there can be as many slave launchers as required.

The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM when the GDI limit has not been tuned (the default). With the GDI limit tuned, this limit extends to 60 sessions per two-core machine.

In this validated testing, 1,000 desktop sessions were launched from 48 launcher virtual machines, resulting in approximately 21 sessions established per launcher. Each launcher virtual machine is allocated two vCPUs and 4 GB of RAM. There were no bottlenecks observed on the launchers during the VSI-based tests.

For all tests, FAST Cache was enabled for the storage pool holding the eight datastores used to house the 1,000 desktops.


Boot storm results

This test was conducted by selecting all the desktops in vCenter Server and selecting Power On. Overlays are added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs achieved a steady state.

For the boot storm test, all the desktops were powered on within 5 minutes and achieved steady state approximately 3 minutes later. The total start-to-finish time was approximately 8 minutes for all desktops to register with the XenDesktop controllers. The following sections show the key storage and host metrics captured while the desktop pool powered up.

The following graph shows the disk IOPS for a single SAS drive in the storage pool that stores the eight datastores for virtual desktops. Because the statistics from all the drives in the pool were similar, a single drive is reported for the purposes of clarity and readability of the graph.

Figure 25. Boot storm - Disk IOPS for a single SAS drive

During peak load, the disk serviced a maximum of 94 IOPS. FAST Cache helped to reduce the disk load.


The following graph shows the LUN IOPS and response time from one of the datastores. Because the statistics from all LUNs were similar, a single LUN is reported for the purposes of clarity and readability of the graph.

Figure 26. Boot storm - LUN IOPS and response time

During peak load, the LUN response time did not exceed 6 ms and the datastore serviced over 12,000 IOPS.


The following graph shows the total IOPS serviced by the storage processor during the test.

Figure 27. Boot storm - Storage processor total IOPS


The following graph shows the storage processor utilization during the test. The pool-based LUNs were split evenly across both SPs to balance the load.

Figure 28. Boot storm - Storage processor utilization

The virtual desktops generated high levels of I/O during the peak load of the boot storm test, while the SP utilization remained below 60 percent.


The following graph shows the IOPS serviced from FAST Cache during the boot storm test.

Figure 29. Boot storm - FAST Cache IOPS

At peak load, FAST Cache serviced over 80,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced over 19,000 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 105 SAS drives to achieve the same level of performance. However, EMC does not recommend using a 105:2 ratio for SAS to SSD replacement. EMC's recommended ratio is 20:1 because workloads may vary.
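The drive-count estimate is simple division:

19,000 IOPS / 180 IOPS per 15k rpm SAS drive ≈ 105 drives

The same arithmetic produces the drive estimates quoted for the antivirus, patch install, and Login VSI tests later in this chapter.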


The following graph shows the CPU load from the ESX servers in the VMware clusters. All servers had similar results. Therefore, a single server is reported.

Figure 30. Boot storm - ESX CPU load

The ESX server briefly achieved a CPU utilization of approximately 47 percent during peak load in this test. It should be noted that hyperthreading was enabled to double the number of logical CPUs.


The following graph shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations issued to the storage array.

Figure 31. Boot storm - Average Guest Millisecond/Command counter

The GAVG values for the virtual desktop storage on the datastores were below 8 ms. This indicates excellent performance under this load.

Antivirus results

This test was conducted by scheduling a full scan of all desktops through a custom script using McAfee 8.7. The full scans were started over the course of 15 minutes on all desktops. The total start-to-finish time was approximately 77 minutes.


The following graph shows the disk I/O for a single SAS drive in the storage pool that contains the eight datastores. Because the statistics from all drives in the pool were similar, only a single drive is reported for clarity and readability of the graph.

Figure 32. Antivirus - Disk I/O for a single SAS drive

Although individual drives in the pool serviced as many as 437 IOPS, the disk response time stayed at or below 10 ms.


The following graph shows the LUN IOPS and response time from one of the datastores. Because the statistics from all the LUNs were similar, only a single LUN is reported for clarity and readability of the graph.

Figure 33. Antivirus - LUN IOPS and response time

During peak load, the LUN response time remained within 22 ms, and the datastore serviced over 5,500 IOPS. The majority of the read I/O was serviced by FAST Cache rather than by the underlying pool drives.


The following graph shows the total IOPS serviced by the storage processor during the test.

Figure 34. Antivirus - Storage processor IOPS


The following graph shows the storage processor utilization during the test.

Figure 35. Antivirus - Storage processor utilization

The antivirus scan operations caused moderate CPU utilization during peak load. The load is shared between two SPs during the scan of each collection. The EMC VNX series has sufficient scalability headroom for this workload.


The following graph shows the IOPS serviced from FAST Cache during the test.

Figure 36. Antivirus - FAST Cache IOPS

At peak load, FAST Cache serviced over 24,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced almost all of the 24,000 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 133 SAS drives to achieve the same level of performance. However, EMC does not recommend using a 133:2 ratio for SAS to SSD replacement. EMC's recommended ratio is 20:1 because workloads may vary.


The following graph shows the CPU load from the ESX servers in the VMware clusters. A single server is reported because all servers had similar results.

Figure 37. Antivirus - ESX CPU load

The CPU load on the ESX server was well within acceptable limits during this test. It should be noted that hyperthreading was enabled to double the number of logical CPUs.


The following graph shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations issued to the storage array.

Figure 38. Antivirus - Average Guest Millisecond/Command counter

The peak GAVG value for the virtual desktop storage never exceeded 31 ms. FAST Cache serviced an enormous number of read I/O operations during this test.

Patch install results

This test was performed by pushing a security update to all desktops using Microsoft System Center Configuration Manager (SCCM). The desktops were divided into five collections containing 200 desktops each. The collections were configured to install updates in a 1-minute staggered schedule an hour after the patch was downloaded. This caused all patches to be installed within 7 minutes.


The following graph shows the disk IOPS for a single SAS drive in the storage pool that stores the virtual desktop datastores. Because the statistics from all drives in the pool were similar, the statistics of a single drive are shown in the graphs for clarity and readability.

Figure 39. Patch install - Disk IOPS for a single SAS drive

The drives did not get saturated during the patch download phase. During the patch installation phase, the disk serviced as many as 213 IOPS at peak load while a spike of 30 ms response time was recorded within the 7-minute interval.


The following graph shows the LUN IOPS and response time from one of the datastores. Because the statistics from all LUNs in the pool were similar, the statistics of a single LUN are shown in the graphs for clarity and readability.

Figure 40. Patch install - LUN IOPS and response time

During peak load, the LUN response time stayed below 11 ms, and the datastore serviced nearly 1,500 IOPS.


The following graph shows the total IOPS serviced by the storage processor during the test.

Figure 41. Patch install - Storage processor IOPS

During peak load, the storage processors serviced over 12,000 IOPS. The load is shared between two SPs during the patch install operation of each collection.


The following graph shows the storage processor utilization during the test.

Figure 42. Patch install - Storage processor utilization

The patch install operations caused moderate CPU utilization during peak load. The EMC VNX series has sufficient scalability headroom for this workload.


The following graph shows the IOPS serviced from FAST Cache during the test.

Figure 43. Patch install - FAST Cache IOPS

At peak load, FAST Cache serviced over 8,500 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced over 5,000 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 28 SAS drives to achieve the same level of performance.


The following graph shows the CPU load from the ESX servers in the VMware clusters. Because all servers had similar results, the results from a single server are shown in the graph.

Figure 44. Patch install - ESX CPU load

The ESX server CPU load was well within the acceptable limits during the test. It should be noted that hyperthreading was enabled to double the number of logical CPUs.


The following graph shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for the I/O operations issued to the storage array.

Figure 45. Patch install - Average Guest Millisecond/Command counter

The GAVG values for virtual desktop storage on the datastores were below 10 ms for almost all data points.

Login VSI results

This test was conducted by scheduling 1,000 users to connect over Independent Computing Architecture (ICA) connections in a 56-minute window and start the Login VSI-medium workload. This workload was run for one hour in a steady state to observe the load on the system.

The following graph shows the response time compared to the number of active desktop sessions, as generated by the Login VSI launchers. It shows that the average response time increases only marginally as the user count increases. Throughout the test run, the average response time stays below 800 ms, leaving plenty of headroom below the 2,000 ms gating metric.


There is a spike of 3,477 ms in maximum response time towards the end of the test run when concurrent user logoff takes place.

Figure 46. Login VSI response times

To simulate a login storm, the 1,000 desktops were first powered on and brought to steady state by setting the idle desktop count to 1,000. The login time of each session was then measured by starting a Login VSI test that establishes the sessions at a custom interval of three seconds. The 1,000 sessions were logged in within 56 minutes, a period that models a burst of login activity in the opening hour of a production environment.

The Login VSI tool has a built-in login timer that measures from the start of the logon script defined in the Active Directory group policy to the start of the Login VSI workload for each session. Although it does not measure the total login time from the end user's perspective, the measurement gives a good indication of how sessions are affected in a login storm scenario.

The following figure shows the trend of the login time in seconds as sessions are started in rapid succession. The average login time for 1,000 sessions is approximately 3.5 seconds. The maximum login time is recorded at 8.6 seconds, while the minimum login time is 1.6 seconds.


All users received their desktop sessions within a reasonable delay.

Figure 47. Login storm timing

The following graph shows the disk IOPS for a single SAS drive in the storage pool. Because the statistics from all drives were similar, only a single drive is reported for clarity and readability of the graphs.

Figure 48. Login VSI - Disk IOPS for a single SAS drive

During peak load, the SAS disk serviced less than 100 IOPS and the disk response time was less than 8 ms.


The following graph shows the LUN IOPS and response time from one of the datastores. Because the statistics from all LUNs were similar, only a single LUN is reported for clarity and readability of the graphs.

Figure 49. Login VSI - LUN IOPS and response time

During peak load, the LUN response time remained within 3 ms and the datastore serviced over 1,200 IOPS.


The following graph shows the total IOPS serviced by the storage processor during the test.

Figure 50. Login VSI - Storage processor IOPS

The following graph shows the storage processor utilization during the test.

Figure 51. Login VSI - Storage processor utilization

The storage processor peak utilization was below 30 percent during the logon storm. The load is shared between two SPs during the VSI load test.


The following graph shows the IOPS serviced from FAST Cache during the test.

Figure 52. Login VSI - FAST Cache IOPS

At peak load, FAST Cache serviced over 8,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced over 6,500 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 36 SAS drives to achieve the same level of performance.


The following graph shows the CPU load from the ESX servers in the VMware clusters. A single server is reported because all servers had similar results.

Figure 53. Login VSI - ESX CPU load

The CPU load on the ESX server never exceeded 30 percent during steady state. It should be noted that hyperthreading was enabled to double the number of logical CPUs.


The following graph shows the Average Guest Millisecond/Command counter, which is shown as GAVG in esxtop. This counter represents the response time for I/O operations issued to the storage array.

Figure 54. Login VSI - Average Guest Millisecond/Command counter

The GAVG values for the virtual desktop storage on the datastores were well below 3 ms during the peak load.


FAST Cache benefits

To illustrate the benefits of enabling FAST Cache in a desktop virtualization environment, a study was conducted to compare performance with and without FAST Cache. The non-FAST Cache configuration required 50 SAS drives in a storage pool, compared with the baseline of 20 SAS drives backed by FAST Cache on two Flash drives; the two Flash drives therefore displaced 30 SAS drives, a 15:1 drive-savings ratio, as worked out below. The summary graphs that follow demonstrate how the FAST Cache benefits are realized in each use case.
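The drive-savings arithmetic:

50 SAS drives (non-FAST Cache) - 20 SAS drives (baseline) = 30 SAS drives displaced
30 SAS drives / 2 Flash drives = 15:1 savings ratio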

The following graph shows that the peak host response time during the boot storm is roughly three times higher without FAST Cache than with it.

Figure 55. Boot storm – average latency comparison

(Chart: average guest latency in milliseconds during the 12 minutes after powering on 1,000 desktop VMs, comparing the 2 Flash + 20 SAS configuration with the 50 SAS configuration; the peak response time without FAST Cache is roughly three times higher.)


The following graph shows that the antivirus scan completed in 77 minutes with FAST Cache enabled, compared with 101 minutes for the non-FAST Cache configuration, a saving of 24 minutes, or roughly 24 percent of the overall scan time.

Figure 56. Antivirus scan – scan time comparison

(Chart: average guest latency in milliseconds over the minutes taken to scan 1,000 desktops, comparing the 2 Flash + 20 SAS configuration with the 50 SAS configuration; the FAST Cache configuration finishes roughly 24 percent faster.)


The graph below shows that the peak host response time during the patch storm is roughly two and a half times higher without FAST Cache than with it.

Figure 57. Patch storm – average latency comparison

(Chart: average guest latency in milliseconds through the patch download and patch install phases, comparing the 2 Flash + 20 SAS configuration with the 50 SAS configuration; the peak response time without FAST Cache is roughly 2.5 times higher.)


7 Conclusion

This chapter summarizes our test results and includes the following sections:

Summary

References

Summary

EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment, as shown in the previous chapter. It not only reduces response times for both read and write workloads, but also supports more users on fewer drives, resulting in lower power, cooling, and rack space consumption, and greater IOPS density from fewer drives.

References

Refer to the following white papers, available on Powerlink, for information about solutions similar to the one described in this paper:

EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5—Reference Architecture

Deploying Microsoft Windows 7 Virtual Desktops with VMware View—Applied Best Practices Guide

EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure—Applied Best Practices


If you do not have access to the above content, contact your EMC representative.

The following documents are available on the Citrix website:

Citrix eDocs (Documentation Library)
