Running Higher-Education Applications on Virtual Desktops using Citrix XenDesktop on VMware vSphere 4

A Proof of Concept

Trenton Potgieter

May 2010

© Dell Confidential



Table of Contents

List of Figures
List of Tables
Executive Summary
Introduction
Test Results and Analysis
Best Practices, Recommendations and Observations
Conclusion
Appendix A: Infrastructure Specifications
Appendix B: References

List of Figures

Figure 1: Physical Architecture
Figure 2: Logical Architecture
Figure 3: Adobe Reader Performance Results
Figure 4: Autodesk AutoCAD Performance Results

List of Tables

Table 1: Desktop Virtual Machine Specifications
Table 2: Virtual Hardware Specifications
Table 3: Software Specifications
Table 4: Server Hardware Specifications
Table 5: Storage Hardware Specifications


Section 1

Executive Summary

Universities and Higher Education customers have different computing profiles than typical corporate users. In addition to day-to-day office and collaboration software, they need applications for learning and for producing complex projects. These applications typically require high-performance compute and graphics environments, so the default approach has been to run each one on a standalone computer with enough compute and graphics power to meet its needs. IT departments have long struggled to manage these expensive applications and to maintain the resulting plethora of compute environments. The ability to centrally manage end users' compute environments using Virtual Desktop Infrastructure (VDI) allows IT departments to address this problem, and VDI has already proven very successful with traditional office applications. This whitepaper examines some typical Higher Education software applications to determine whether they are likely candidates for VDI and, once verified, how they scale.


Section 2

Introduction

Virtual Desktop Infrastructure (VDI) has many components, as well as many different vendor technologies in its architecture. Two of the most widespread technologies are based on the VMware vSphere product portfolio and Citrix XenDesktop. While some institutions prefer to stick to a single-vendor VDI strategy, other institutions that already have a virtualization strategy in place may need to integrate different VDI technologies on top of the existing virtualization platform. To this end, the VDI solution used for this testing was a hybrid solution comprised of Citrix XenDesktop, Citrix Provisioning Server and VMware vSphere 4.0.1. To simulate user load, Citrix EdgeSight for Load Testing (ESLT) was used. Two applications (Adobe Reader and Autodesk AutoCAD) were used with ESLT to compare a typical office application with a Higher Education application. In addition to the automated testing of these applications using ESLT, equivalent manual tests were performed to verify the feasibility and functionality of each application within the VDI environment. It is also important to note that the tests conducted on these applications may differ from real-world use cases, as they were simple and not exhaustive enough to fully stress the applications.

Architecture

To accomplish this hybrid-solution testing of both VMware and Citrix technologies, the physical architecture (as depicted in Figure 1) was comprised of two Dell PowerEdge R710 servers (see Appendix A for an exact description of the physical hardware used) running VMware ESX 4.0.1. A Dell PowerEdge R300 server (see Appendix A) was used as the VMware vCenter management server. The servers were networked together using a 1Gb/s Dell PowerConnect 5448 switch, with each R710 server having two 1Gb/s links to this network. Storage for the architecture was provided by a single Dell | EqualLogic PS6000XV iSCSI storage array (see Appendix A). The R710 servers were connected to the PS6000XV array through a PowerConnect 6248 switch via two 1Gb/s connections. The PowerConnect switch configuration was optimized for iSCSI traffic (see Section 4: Best Practices and Recommendations for further information on how the switch was optimized for iSCSI traffic). Various models of laptop, desktop and workstation hardware were used to drive the manual as well as the automated application testing: the Dell Precision M6300 and Precision M6500 were used to generate the simulated Citrix ESLT virtual users, while the Dell Latitude E6400, Dell OptiPlex FX160 and Dell OptiPlex 780 FLX were used to test the manual feasibility and functionality of the logical and physical architecture.


Figure 1: Physical Architecture

Built on top of the physical architecture is the logical/virtual architecture, as depicted in Figure 2. It is comprised of the following components/tiers:

Simulated End-users:
Using the Dell Precision 6300 and 6500 mobile workstations, up to 100 users were simulated using ESLT.

Presentation Tier:
The Presentation Tier (running on a single Dell PowerEdge R710) hosts two virtual machines running Citrix XenApp. These virtual machines publish the Internet Explorer application to allow the ESLT users to access the architecture and test the applications.

Connection Tier:
The Connection Tier (running on the same Dell PowerEdge R710 mentioned above) houses the back-end infrastructure applications and services as follows:

- Citrix XenDesktop, Desktop Delivery Controller (DDC): This virtual machine manages the simulated end-user requests for a virtual machine desktop. The DDC also provides an interface to provision, deploy and manage the desktop infrastructure.
- Citrix Provisioning Server (PVS): This virtual machine allows for the creation of a master operating system (OS) image with all the appropriate end-user applications installed. The pristine OS image is then streamed to the desktop virtual machines for use by end-users.
- Active Directory Domain Controller (AD): This virtual machine allows for the centralized management of both infrastructure access and authentication, password and profile management for the simulated end-users. This server also provides network name resolution and dynamic TCP/IP address allocation for the infrastructure.
- SQL Database Server: A virtual machine running Microsoft SQL Server houses the various databases used by the DDC and PVS services as a central management repository.

Desktop Tier:
An additional Dell PowerEdge R710 (ESX 4.0.1) hosts the Windows XP desktops, pre-loaded with the applications that the simulated end-users will use.

Figure 2: Logical Architecture


Test Methodology

The main objective of the test methodology is to perform testing on a few of the applications that Universities and Higher Education users would potentially use on a daily basis. Below is a description of the two main testing methods used to achieve this objective:

Feasibility Testing:

The first part of the overall testing methodology was to manually perform basic application feasibility testing. This testing initially involved installing the appropriate application(s) onto the master image stored on the PVS virtual machine. This version of the master image was streamed to the virtual machines being used as Windows XP desktops for the end-users. Various basic functionality tests were then performed against the applications. This included launching a version of Internet Explorer published on the Citrix XenApp servers and then using this browser to connect to a desktop virtual machine. The following tasks were then performed using the test applications:

- Launched the specific application and verified that there were no licensing issues, since the application was loaded on a single image and streamed to the desktop virtual machine.
- Maximized and minimized the application to verify full-screen redirection of the application to the end-user laptop or desktop.
- Performed various tasks specific to the individual applications listed below. These included opening various sample files specific to the application and then performing random tasks within the application, such as manipulating the document/picture/object, changing a few of its attributes and then saving the newly altered document/picture/object.
- Closed down the application and logged off from the desktop virtual machine.

The above testing methodology implements a single end-user connection to the VDI architecture. Once the user had connected to the DDC, the following applications were tested using the above methodology, with the desktop virtual machine rebooted after each application:

- Adobe Acrobat Reader 9.3
- Autodesk AutoCAD 2011
- Adobe Director 11
- Adobe Illustrator Creative Suite 5
- Adobe Photoshop Creative Suite 5
- Adobe Dreamweaver Creative Suite 5
- ChemBIO 3D 12.0
- ChemBIO Draw 12.0

Performance Testing:

The second part of the testing methodology involved simulating one, five, ten, twenty and thirty end-users automatically from the Dell Precision M6300 and Precision M6500, using ESLT. This testing was specific to the Adobe Reader and Autodesk AutoCAD applications. These two applications were chosen from the above list because AutoCAD is widely used within this type of environment, is extremely taxing on physical resources such as CPU and memory, and is extremely graphics intensive. Adobe Reader was chosen as the second application because it too is widely used and, being far less resource intensive than AutoCAD, it provides a comparative baseline for the results. The performance tests included the following tasks:

Adobe Acrobat Reader:
- Connect to an idle desktop virtual machine and launch Adobe Reader.
- Maximize and minimize the application to ensure proper full-screen redirection to the end-user client running the ESLT tests.
- Open a 5MB graphically intensive PDF document.
- Scroll down, one page at a time, to the end of the 48-page document, and close down Adobe Reader.
- Re-launch Adobe Reader and open a 15MB PDF document.
- Scroll down, one page at a time, to the end of the 230-page document, and close down Adobe Reader.
- Re-launch Adobe Reader and open a 25MB PDF document.
- Quickly scroll down through the document and close down Adobe Reader.
- Log off from the desktop virtual machine.

Autodesk AutoCAD:
- Connect to an idle desktop virtual machine and launch Autodesk AutoCAD.
- Select to use the trial license.
- Open a sample 1MB AutoCAD drawing.
- Maximize the application to ensure the appropriate screen redirection to the end-user client.
- Close the application.
- Re-launch Autodesk AutoCAD and reload the sample AutoCAD drawing.
- Close down the application and log off from the desktop virtual machine.

To try to mimic the type of user experience that would occur in a real-world scenario, the appropriate number of desktop virtual machines was pre-staged before each test began, using the Idle Pool Settings and Logoff Behavior options in the Desktop Group Properties configuration. For example, for the 20-user test, 20 desktop virtual machines are idle, waiting for users to initiate the first connection. After executing the testing script, the end-user logs off the desktop virtual machine. That virtual machine is then deleted and another one is provisioned. Once a new end-user initiates a connection to the DDC, the virtual machine starts with a fresh operating system image streamed to it, and the testing script is re-executed.

A number of tools that allow for the automation of load generation and performance measurement were evaluated against the VDI architecture. Of those evaluated, the tool that integrated most easily into the testing methodology was Citrix EdgeSight for Load Testing (ESLT). ESLT is designed specifically to test the load and performance of Citrix XenApp solutions, so in order to use this tool for the VDI architecture, Citrix XenApp also had to be used as the front-end interface for users to access the virtual desktop machines and the applications. To generate the appropriate load for executing the test methodology, a script was customized for each individual test user as well as for each instance of the Adobe Reader and Autodesk AutoCAD applications. This script was then recorded and played back with 1, 5, 10, 20 and finally 30 users.

Even though the above tests were executed in various increments of users, due to the nature and timing of each user's individual script, there was no way to verify that the users were concurrent at any particular point in time.

One disadvantage of using ESLT is that an additional tier (the Presentation Tier) had to be added to the overall architecture, since ESLT natively works only with XenApp and not XenDesktop. This introduced additional licensing requirements for XenApp and added complexity to the overall architecture, as well as additional load on the ESX hosts, the network and the storage. Veeam Monitor for VMware and the VMware Overview Performance charts were used to monitor and capture the CPU and memory performance results from the ESX hosts on which the desktop virtual machines were running. Veeam Monitor allows for comprehensive reporting on resource consumption and workload data, while the VMware Overview Performance charts allow the same data to be captured in real time.


For a detailed overview of these results, please refer to Section 3. Monitoring and reporting of the storage performance was done using Dell | EqualLogic SAN HeadQuarters (SANHQ). SANHQ is a client/server application that runs on a Microsoft Windows server and uses SNMP to query the storage groups; it collects data over time and stores it on the server for later retrieval and analysis. These storage results are also covered in Section 3.
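The per-test averages reported in Section 3 (CPU, memory and IOPS) can be reproduced from such exports with a small aggregation script. A minimal sketch is shown below; the file name and column names are assumptions for illustration, not the actual Veeam Monitor or SANHQ export format:

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical per-sample export of the monitoring data (the column names
# below are assumptions; real Veeam Monitor and SANHQ exports differ and
# would need to be mapped to these fields first).
SAMPLES = "monitoring_export.csv"  # columns: users, cpu_pct, mem_pct, read_iops, write_iops
METRICS = ("cpu_pct", "mem_pct", "read_iops", "write_iops")

def summarize(path):
    """Average each metric per user-count test run."""
    runs = defaultdict(lambda: defaultdict(list))
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            for metric in METRICS:
                runs[int(row["users"])][metric].append(float(row[metric]))
    return {users: {m: round(mean(v), 1) for m, v in per_metric.items()}
            for users, per_metric in sorted(runs.items())}

if __name__ == "__main__":
    for users, averages in summarize(SAMPLES).items():
        print(f"{users:>2} users: {averages}")
```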


Section 3

Test Results and Analysis

Once the applications were selected as viable candidates for the VDI environment, it was necessary to see how many concurrent users could use an application before exceeding the CPU, memory and disk thresholds of the architecture. Before the testing process could begin, 100 virtual machines were provisioned and a number of them brought online, waiting for test users to connect and execute the test scripts. The number of online virtual machines corresponds to the number of concurrent users executing the test scripts; for instance, if 30 concurrent users are tested, then 30 of the 100 virtual machines are online. The charts below detail what happened during the testing for Adobe Reader and Autodesk AutoCAD:

Figure 3: Adobe Reader Performance Results

[Chart: CPU Avg. (%), Mem Avg. (%), Read IOPS and Write IOPS plotted for the 1, 5, 10, 20 and 30 user tests.]


Figure 4: Autodesk AutoCAD Performance Results

[Chart: CPU Avg. (%), Mem Avg. (%), Read IOPS and Write IOPS plotted for the 1, 5, 10, 20 and 30 user tests.]

As can be seen from Figures 3 and 4, covering Adobe Reader and Autodesk AutoCAD respectively, each test series started with a single user and ended with 30 users. It is evident that the architecture reaches its maximum threshold at around 20 users. Memory utilization goes down once the hypervisor's memory management takes effect, while disk read I/O goes up significantly. The data suggests that the hypervisor's memory management, which pages memory to disk through its own swap mechanism and the balloon driver, is responsible for lowering the physical host's memory consumption. This is a clear indication that increasing the number of users from 20 to 30 places a significant burden on physical CPU, memory and disk resources. Even though the testing framework provides no option to measure the basic response time for each individual user, as the number of test users increased it became visually evident from each user session that the application response time diminishes, causing the testing script to reset.

As already stated, every time a user logs off, the XenDesktop logoff behavior and idle-pool settings shut down and delete the virtual machine. Once a new test session begins, the DDC provisions a new virtual machine (if one is not already pre-allocated) and boots it with a fresh, streamed operating system image, allowing new users to log in and start the test. Although this process happens in the background, the 100 pre-allocated virtual machines are eventually used up, causing new virtual machines to be provisioned. The results clearly show this occurring as a distinct increase in disk reads and writes while the ESX host deletes and creates virtual machines during the 20 and 30 user tests. This increase in disk activity delays new users connecting into the infrastructure to start the tests, consuming additional physical CPU, memory and disk resources during the 20 and 30 user tests.
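As a rough cross-check of the memory behaviour described above, the sketch below compares the memory configured for each test against the 48GB of RAM in the desktop-tier R710 (figures from Appendix A: 2048MB per desktop virtual machine). The per-VM overhead value is an assumed round number, and the sketch ignores the ESX host's own memory footprint:

```python
# Back-of-the-envelope memory overcommit estimate for the desktop-tier host.
# 2048MB per desktop VM and 48GB host RAM come from Appendix A; the per-VM
# virtualization overhead is an assumed round number, not a measured value.
HOST_RAM_GB = 48
VM_RAM_GB = 2048 / 1024
VM_OVERHEAD_GB = 0.1  # assumption

for users in (1, 5, 10, 20, 30):
    configured_gb = users * (VM_RAM_GB + VM_OVERHEAD_GB)
    ratio = configured_gb / HOST_RAM_GB
    note = "overcommitted -> ballooning/swapping likely" if ratio > 1 else "fits in physical RAM"
    print(f"{users:>2} desktops: ~{configured_gb:5.1f} GB configured ({ratio:4.0%} of host RAM) {note}")
```

Under these assumptions the 30-user test configures roughly 63GB of guest memory against 48GB of physical RAM, which is consistent with the ballooning and swapping observed in the results.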



Section 4

Best Practices, Recommendations and Observations

Below are some Dell Services best practices as well as some recommendations arising from the testing performed on the VDI architecture. References for these are listed in Appendix B.

Storage

The Dell | EqualLogic array was configured using RAID 50. It is typically a best practice to configure either RAID 10 (for optimum performance and redundancy) or RAID 50, but since only a single array was used, we chose RAID 50 for a balance of performance, redundancy and capacity in this case. Dell also recommends using logical unit (LUN) sizes of around 500GB for the VMware Virtual Machine File System (VMFS) volumes. Virtual machines with similar disk I/O workload types were stored on the same VMFS volumes; for example, the desktop virtual machines were all located on the same VMFS volume, as they had the same guest operating system and the test workload was the same for each machine. The infrastructure virtual machines were located on a separate VMFS volume, as they shared the same guest operating system and a similar application workload.
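As a rough illustration of the capacity side of that trade-off, the sketch below estimates usable space for the 16 x 450GB array under RAID 10 and RAID 50, and the number of roughly 500GB VMFS volumes each would yield. It assumes 2 hot spares and, for RAID 50, two parity spans; the actual EqualLogic layout and reserved space will differ somewhat:

```python
# Rough usable-capacity comparison for a 16 x 450GB array (see Table 5).
# Assumes 2 hot spares and, for RAID 50, two RAID-5 spans; the real
# EqualLogic layout reserves additional space, so these are estimates only.
TOTAL_DISKS, HOT_SPARES, DISK_GB, LUN_GB = 16, 2, 450, 500
data_disks = TOTAL_DISKS - HOT_SPARES

def usable_raid10(disks: int) -> int:
    return (disks // 2) * DISK_GB          # half the disks hold mirror copies

def usable_raid50(disks: int, spans: int = 2) -> int:
    return (disks - spans) * DISK_GB       # one disk's worth of parity per span

for name, usable in (("RAID 10", usable_raid10(data_disks)),
                     ("RAID 50", usable_raid50(data_disks))):
    print(f"{name}: ~{usable} GB usable -> ~{usable // LUN_GB} x {LUN_GB}GB VMFS LUNs")
```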

Desktop Delivery Controller

As per the recommendations from Citrix, the DDC virtual machine was configured with two virtual CPUs and 4GB of virtual RAM. Additionally, a separate virtual machine running Microsoft SQL Server 2005 (SP3) hosted the database for the DDC, to ensure that all of the DDC virtual hardware resources are dedicated to managing the desktop virtual machine group and user connections.

Active Directory

The various Citrix components that make up the VDI architecture require different licenses. Although the DDC is typically the first component of the architecture to be installed, it is recommended that the Citrix Licensing server (required by the application components to verify whether they are licensed to function) be installed on the Active Directory server.

VMware ESX Hosts

Following the best practices for configuring the iSCSI initiator on each Dell PowerEdge R710 running as an ESX host, six iSCSI initiators were used, three per physical Network Interface Card (NIC). Additionally, the native ESX multi-path (MPIO) driver was used to distribute iSCSI I/O across the initiators. Since multi-pathing originates at the initiator level, there is no need to configure any form of link aggregation on the physical SAN switches. However, link aggregation had to be configured on the physical switches handling the standard network traffic for the VDI infrastructure virtual machines and the client desktop virtual machines. Adding more than one physical NIC to a virtual switch (vSwitch) effectively creates a link-aggregated team, so a matching configuration needs to be applied to the ports on the physical switch to which the ESX host is connected. To improve connectivity between the virtual machines and the physical network, the IP-hash NIC teaming algorithm was configured on the vSwitch.
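The sketch below illustrates, in simplified form, why an IP-hash teaming policy has to be paired with a matching link-aggregation group on the physical switch: the uplink is chosen from the source/destination IP pair, so traffic for different destinations leaves (and must be able to return) on different physical ports. The hash shown is an illustration only, not the exact algorithm ESX 4 implements:

```python
import ipaddress

UPLINKS = 2  # physical NICs in the vSwitch team

def uplink_for(src: str, dst: str) -> int:
    """Pick an uplink index from the source/destination IP pair (simplified IP-hash)."""
    return (int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))) % UPLINKS

# Different destinations can map to different uplinks, which is why the
# physical switch ports must be configured as one aggregated group.
for dst in ("10.10.0.20", "10.10.0.21", "10.10.0.22"):
    print(f"10.10.0.10 -> {dst}: uplink {uplink_for('10.10.0.10', dst)}")
```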


vCenter Server

To avoid any significant licensing or network configuration issues that running vCenter on a virtual machine might have introduced into the testing environment, a separate physical server was used. This is neither a specific best practice nor a recommendation for the VDI architecture. Rather, a physical server was preferred for this test to accommodate the multiple configuration changes required for the testing, as well as the performance requirements of running Microsoft SQL 2005 as the vCenter database. Additionally, to ensure that all the infrastructure virtual machines as well as the desktop virtual machines remained on dedicated ESX hosts for the purposes of generating appropriate performance data for testing the VDI architecture, the Distributed Resource Scheduling (DRS) and VMware High Availability (HA) technologies were both disabled. It is important to note, however, that it is a Dell best practice to implement these technologies in a real-world scenario, as doing so ensures higher availability of critical applications and user sessions as well as better physical resource utilization. It is also important to note that the DDC and the PVS servers need to communicate with the vCenter Server in order to automate the management and provisioning of the desktop virtual machines.

Desktop Provisioning Server

As with a physical PVS implementation, network and disk I/O (streaming the desktop operating system at boot time) are the biggest causes of resource contention. To offset these issues, it is typically a best practice to store the "master" operating system image on a separate LUN with a higher-performance RAID level such as RAID 10. It is also recommended that as much network bandwidth as possible be dedicated to the connection between the desktop clients and the provisioning server, either by using a separate, dedicated network or by using link aggregation on the connection between the provisioning server and the switch to which it is connected. These factors also come into play in a virtual environment. Therefore, it is recommended that the LUN containing the "master" operating system image be mapped as a raw disk to the PVS virtual machine using VMware's Raw Device Mapping (RDM), and that this physical LUN be configured as RAID 10. In addition, Dell recommends that a separate virtual SCSI controller be configured to access the RDM, which allows for an additional SCSI command queue to the disk. If an RDM is not possible, then at least assign a separate Virtual Machine Disk (VMDK) to the PVS virtual machine with its own virtual SCSI controller, with the VMDK residing on a RAID 10 LUN.

Since a single Dell | EqualLogic array was used (with a RAID 50 configuration) for this test, a combination of these best practices was implemented. A separate VMDK was created on a separate LUN and then added to the PVS virtual machine, and the "master" operating system image was stored on this separate virtual disk. The PVS virtual machine was then given an additional virtual SCSI controller to access this disk. It is important to note that during testing, as multiple desktop virtual machines were provisioned and then destroyed, the disk cache (containing the disk changes for each individual desktop virtual machine) can grow significantly and use disk space quickly. It is therefore important to monitor the disk space on the LUN containing the "master" operating system image. Additionally, a separate virtual machine running Microsoft SQL Server 2005 (SP3) hosted the database for the PVS virtual machine to ensure that all of the PVS virtual hardware resources are dedicated to managing the streaming and disk access of the "master" image to the desktop virtual machines.
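A simple scheduled free-space check against the volume holding the master image and write cache would catch that growth before it becomes a problem. The sketch below uses only the Python standard library; the drive path and 20% threshold are placeholders, not Dell-recommended values:

```python
import shutil
import sys

# Placeholder path for the volume holding the PVS "master" image and write
# cache (on a Windows PVS server this would be a drive letter such as D:\).
VOLUME = "D:\\"
MIN_FREE_PCT = 20  # arbitrary example threshold

def free_space_ok(path: str, min_free_pct: float) -> bool:
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100
    print(f"{path}: {free_pct:.1f}% free ({usage.free // 2**30} GB of {usage.total // 2**30} GB)")
    return free_pct >= min_free_pct

if __name__ == "__main__":
    # A non-zero exit code lets a scheduled task or monitoring agent raise an alert.
    sys.exit(0 if free_space_ok(VOLUME, MIN_FREE_PCT) else 1)
```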


Network

Although Figures 3 and 4 do not depict network utilization during the tests, the way the DDC and PVS work together, constantly streaming the operating system image and handling the cache reads and writes made by each virtual machine, significantly impacts network performance and utilization. Even though it was not part of the test architecture, consideration should be given to placing the PVS server on the same vSwitch as the desktop virtual machines to contain this network traffic within the ESX hypervisor and thus optimize physical CPU utilization.

XenDesktop and vSphere integration

To better integrate the Citrix and VMware technologies, Citrix provides a wizard, loaded on the PVS server, to automatically create the PVS collection and desktop groups. PVS requires access to the SDK on the vCenter server to automate the provisioning. Access to the SDK occurs via an SSL connection over HTTPS. The Citrix best practices strongly suggest ensuring that the appropriate SSL certificates are installed and configured on both the vCenter server and the PVS server so that this communication occurs securely. This can be complicated to set up, and Appendix B provides references on how to do it. To simplify the connectivity between PVS and vCenter for testing purposes, the SDK was accessed over an HTTP connection, as it was less complicated to configure.
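A quick way to see whether the vCenter SDK endpoint answers over HTTPS with a trusted certificate, or only over plain HTTP as was used here, is a small probe such as the sketch below. The host name is a placeholder; /sdk is the standard vCenter SDK path:

```python
import http.client
import ssl

VCENTER = "vcenter.example.local"  # placeholder host name

def probe(scheme: str) -> str:
    """Report how the vCenter /sdk endpoint responds over http or https."""
    try:
        if scheme == "https":
            conn = http.client.HTTPSConnection(
                VCENTER, 443, timeout=5, context=ssl.create_default_context())
        else:
            conn = http.client.HTTPConnection(VCENTER, 80, timeout=5)
        conn.request("GET", "/sdk")
        return f"{scheme}: HTTP {conn.getresponse().status}"
    except ssl.SSLCertVerificationError as err:
        return f"{scheme}: certificate not trusted ({err.reason})"
    except OSError as err:
        return f"{scheme}: unreachable ({err})"

for scheme in ("https", "http"):
    print(probe(scheme))
```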

Although it can be more complex to design and install a desktop virtualization platform on software that is different from the server virtualization platform, there are many benefits to doing so. The Citrix application layer requires a number of additional application components such as Microsoft SQL, PVS and, in this instance, XenApp, whereas the underlying virtualization layer from VMware is straightforward to install and configure, with a wealth of documentation to assist in the process. However, once correctly configured, the virtual desktops are very easily created and provisioned. Once online, the desktops are also easily managed, but because there are many interlinked applications, it can be very difficult to re-configure and change the environment. The recommendation is therefore to test the architecture thoroughly in a lab and standardize on a solution before deploying it into production, so as not to incur too many modifications. In addition, ensure that the appropriate licenses are installed and configured for the various solution components, as these can be difficult to manage and maintain given the many different licenses required.


Section 5

Conclusion

Although day-to-day usage within a University or Higher Education environment may vary by institution, the test results demonstrate that many of these applications will work within a VDI architecture. Additionally, the test results provide a design reference architecture. By load testing a typical office application as well as a resource-intensive application, we have provided a range of performance results that can be used as a guide when planning a VDI deployment. The best practices mentioned are ideas that can assist in designing a particular configuration; how they are applied will either increase the number of users the architecture can accommodate at the expense of application response and user experience, or vice versa. Overall, the information provided can assist Universities and Higher Education customers in designing, testing and integrating this hybrid VDI solution into their existing virtualization infrastructure.


Section 6

Appendix A: Infrastructure Specifications

Below is a detailed description of the physical and virtual hardware specifications:

Table 1: Desktop Virtual Machine Specifications

Desktop Operating System: Windows XP
System Specifications:
- Windows XP Service Pack 3
- 1 vCPU
- 2048MB RAM
- 24GB Hard Disk streamed via the Desktop Provisioning Server

Table 2: Virtual Hardware Specifications

XenApp Server
- Purpose: Internet Explorer 8 Published Application
- Operating System: Windows 2008 SP2 (x86)
- Processor: 2 vCPU
- Memory: 4096MB RAM
- Disk: 40GB Hard Disk
- Network Adapter: Virtual Intel E1000 1Gb/s Adapter

Desktop Provisioning Server
- Purpose: Client Desktop Operating System Image Streaming
- Operating System: Windows 2003 Standard Edition R2 SP2 (x86)
- Processor: 2 vCPU
- Memory: 4096MB RAM
- Disk: 100GB Hard Disk
- Network Adapter: Virtual Intel E1000 1Gb/s Adapter

Desktop Delivery Controller
- Purpose: Desktop User Connection Broker/Session Management
- Operating System: Windows 2003 Standard Edition R2 SP2 (x86)
- Processor: 2 vCPU
- Memory: 4096MB RAM
- Disk: 8GB Hard Disk
- Network Adapter: Virtual Intel E1000 1Gb/s Adapter

Microsoft SQL 2005 Server
- Purpose: Back-end Database for the Provisioning Server and Desktop Delivery Controller
- Operating System: Windows 2003 Standard Edition R2 SP2 (x86)
- Processor: 2 vCPU
- Memory: 4096MB RAM
- Disk: 36GB Hard Disk
- Network Adapter: Virtual Intel E1000 1Gb/s Adapter

Microsoft Active Directory Server
- Purpose: User Access Control; DNS Resolution; IP Address Assignment
- Operating System: Windows 2003 Standard Edition R2 SP2 (x86)
- Processor: 1 vCPU
- Memory: 1024MB RAM
- Disk: 8GB Hard Disk
- Network Adapter: Virtual Intel E1000 1Gb/s Adapter

Table 3: Software Specifications

- User Load Generation: Citrix EdgeSight for Load Testing
- Desktop Provisioning: Citrix Provisioning Server 5.1 Service Pack 1
- User Front-End: Citrix XenApp 5.01
- Desktop Delivery: Citrix XenDesktop 4 Feature Pack 1
- Application Database: Microsoft SQL 2005 Standard Edition
- Virtualization Hypervisor: VMware ESX 4.0.1
- Virtual Machine Management: VMware vCenter 4.0.1

Table 4: Server Hardware Specifications

VMware ESX Host
- Model: Dell PowerEdge R710
- Operating System: VMware ESX 4.0.1
- Processor: 2 x 2.925GHz Quad Core Intel X5570 Processors
- Memory: 48GB RAM
- Disk: 2 x 73GB (15,000 RPM) SAS Hard Disks (RAID 1)
- Network Adapter: 4 x Broadcom NetXtreme Network Interface Cards

VMware vCenter Server
- Model: Dell PowerEdge R200
- Operating System: Windows 2003 Standard Edition R2 SP2 (x86)
- Processor: 1 x 3.16GHz Quad Core Intel X5460 Processor
- Memory: 16GB RAM
- Disk: 1 x 146GB (15,000 RPM) SAS Hard Disk
- Network Adapter: 4 x Broadcom NetXtreme Network Interface Cards


Table 5: Storage Hardware Specifications

VMware ESX Host Shared Storage
- Model: Dell | EqualLogic PS6000XV
- Disk: 16 x 450GB (15,000 RPM) SAS Hard Disks (RAID 50 with 2 x Hot Spare)

Appendix B: References

- Configuring VMware vSphere Software iSCSI with Dell | EqualLogic PS Series Storage
- EdgeSight for Load Testing
- Best Practices for XenDesktop Scalability
- Monitoring Your PS Series SAN with SANHQ
- Using XenDesktop with VMware
- XenDesktop Modular Reference Architecture
- Evaluating XenDesktop Enterprise Edition
- Veeam Monitor for VMware