
Technical white paper

Sizing and design decisions when virtualizing Microsoft Exchange with HP 3PAR StoreServ and ProLiant BL460c Gen8 using VMware vSphere 5

Table of contents Executive summary ...................................................................................................................................................................... 2

Introduction .................................................................................................................................................................................... 2

Why virtualize Microsoft Exchange using HP BL460c Gen8 server blades? ................................................................ 2

Why deploy Exchange using HP 3PAR StoreServ Storage? ............................................................................................. 4

Overview .......................................................................................................................................................................................... 7

HP ProLiant BL460c Gen8 server blades ............................................................................................................................. 7

HP BladeSystem ........................................................................................................................................................................ 9

Virtual Connect technology ................................................................................................................................................... 11

Sizing considerations .................................................................................................................................................................. 15

Mailbox Server Virtual CPU sizing considerations ............................................................................................................ 15

CAS and HT Servers Virtual CPU sizing considerations ................................................................................................... 17

Balancing vCPU resources..................................................................................................................................................... 18

Virtual memory sizing considerations ................................................................................................................................ 19

Storage sizing considerations .............................................................................................................................................. 20

Configuration and setup ............................................................................................................................................................ 21

HP 3PAR StoreServ provisioning steps .............................................................................................................................. 21

3PAR StoreServ Adaptive Optimization setup .................................................................................................................. 25

Measuring AO performance .................................................................................................................................................. 27

Virtual Connect ........................................................................................................................................................................ 31

VMware vSphere 5 .................................................................................................................................................................. 33

Implementing a proof-of-concept .......................................................................................................................................... 39

Summary ....................................................................................................................................................................................... 40

For more information ................................................................................................................................................................. 41


Executive summary

IT organizations face many sizing and design decisions when deploying and managing Microsoft® Exchange Server 2010 (Exchange 2010). Dynamic business requirements, ever-increasing mailbox capacities, and service levels have resulted in direct pressure on IT resources and budgets. These changing business requirements require new approaches to deploy, manage and maintain Exchange 2010 infrastructure.

Traditional enterprise Exchange designs, which dedicate physical servers and Direct Attached Storage (DAS), have come under increased scrutiny as IT organizations rethink current application deployment models1 in their data centers. According to a Forrester Consulting survey2 commissioned by HP, 44% of the IT organizations surveyed are virtualizing their test, development and production email environments.

For Exchange 2010, virtualization and advanced Storage Area Network (SAN) features provide value to IT organizations looking to increase data center efficiency by pooling resources. Virtualization and SANs provide increased agility to dynamically scale mailboxes and enhance business continuity. While scaling up resources is the most prevalent design concern, the opposite situation also occurs, where resources have been overprovisioned. In physical deployments, dynamically scaling system resources or reallocating overprovisioned resources is a difficult task. With virtualization and SAN deployments, however, resources can be adjusted and "right-sized" as business demands change.

While virtualization of Exchange is not necessarily new, it has now reached the level of operational maturity needed to support enterprise deployments. The sizing guidelines and design decisions in this white paper describe important details on how the HP ProLiant BL460c Gen8 server blades and 3PAR StoreServ storage can be configured and deployed to support virtualized Exchange 2010 enterprise deployments.

Key findings: This white paper provides sizing and design decision guidance for enterprise customers looking to virtualize multiple Exchange servers and thousands of mailboxes. The white paper details sizing considerations and recommendations for virtualizing Exchange using HP ProLiant BL460c Gen8 server blades (BL460c Gen8). Virtualizing Exchange using HP BL460c Gen8 servers and VMware vSphere 5 provides new levels of high availability and flexibility by combining the benefits of VMware vMotion live migration technology with the HP Virtual Connect Flex-10 networking architecture.

Using HP 3PAR StoreServ 10400 storage technologies such as Adaptive Optimization (AO), Dynamic Optimization (DO) and Thin Provisioning (TP), IT organizations can manage storage I/O performance while also gaining space efficiency. The 3PAR StoreServ technologies allow administrators to move and reconfigure storage resources as needed without disruption to the Exchange environment. The HP 3PAR StoreServ can rapidly provision and manage large numbers of virtual volumes and present them to hosts, saving hours of manual configuration and provisioning tasks.

Target audience: This white paper is intended to assist IT decision makers, storage administrators, Microsoft Exchange architects and VMware administrators involved in the planning and deployment of virtualized Exchange using HP 3PAR StoreServ and HP ProLiant Gen8 servers within VMware vSphere 5 environments.

This white paper describes testing performed in November 2012.

Introduction

Why virtualize Microsoft Exchange using HP BL460c Gen8 server blades?

Server consolidation through virtualization is common practice in many IT organizations; however, large-scale adoption of virtualization of Exchange workloads has been slow. The slow adoption rate is due to two main issues:

• Microsoft’s Exchange virtualization support policies.

• Effectiveness of server consolidation for larger Exchange mailbox workloads.

The first issue, Exchange support policy, had been a barrier to many organizations concerned about deploying a Microsoft-supported configuration. Prior to Exchange Server 2003, Microsoft did not support virtualizing any of the Exchange roles. Microsoft introduced official virtualization support with Exchange 2003, albeit very limited in scope and practical only for the smallest deployments. Support expanded significantly with the release of Exchange Server 2007, adding virtualization support for the Hub Transport (HT), Client Access Server (CAS) and Mailbox (MBX) roles. Virtualization of the Unified Messaging (UM) role remained unsupported. More importantly, Exchange 2007 virtualization did not support combining database replication features such as Exchange cluster continuous replication (CCR) with hypervisor-based failover clustering.

1 The Economics of Virtualization: Moving Toward an Application-Based Cost Model vmware.com/files/pdf/Virtualization-application-based-cost-model-WP-EN.pdf
2 Email Is a Key Workload for Storage Strategy http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=c03609287

With testing assistance from the HP Alliance Performance and Solutions group, Microsoft revised the support policy for Exchange 2010 SP1 (and greater) allowing the combination of both Database Availability Group (DAG) and hypervisor-based failover functionality including live migration.

Combining support for both DAG and hypervisor-based failover clustering provides a new level of agility to IT organizations previously not available for Exchange 2010. Exchange 2010 Virtual Machines (VMs) participating in a DAG can now be hosted in hypervisor failover clusters. Hypervisor-based high availability (HA) allows Exchange VMs to move between cluster nodes when using shared SAN storage. For unplanned events, VMs automatically reboot on other physical hosts in the cluster. For planned events, VMs can be migrated using live migration tools such as VMware vMotion. Live migration technologies allow Exchange VMs to remain operational during the migration between hypervisor nodes without perceived downtime for clients.

Adding hypervisor HA features to an Exchange DAG configuration allows for additional layers of protection and management options during service events. To assist in identifying configurations that have been tested and validated with Microsoft products, Microsoft has developed a Server Virtualization Validation Program (SVVP). For information about the program see: windowsservercatalog.com/svvp.aspx?svvppage=svvp.htm

The second issue, the effectiveness of server consolidation, is a common concern expressed by Exchange architects and administrators sizing Exchange servers. Exchange deployments require proper sizing of system resources such as the CPU, memory and storage I/O subsystems. Enterprise mailbox server deployments typically require significant CPU, memory and storage resources, which leads to deployment on dedicated physical hardware. While this continues to be the common objection to virtualization, significant advances in server performance and underlying load-balancing product features such as VMware Distributed Resource Scheduler (DRS) have made this argument less relevant.

HP BL460c Gen8
To understand the advances in system performance, it is helpful to compare the different CPU architectures used in Exchange 2010 deployments, but making direct CPU comparisons is not straightforward. Comparing processor architectures requires megacycle calculations to be normalized to account for the differences in processor architectures. Microsoft defines megacycles as a unit of work referencing a baseline system used for Exchange 2010 testing. The normalized results, referred to as adjusted megacycles, compare Exchange workload performance of the baseline system and newer processor architectures.

Figure 1 compares the HP ProLiant DL380 G5 server3 (DL380 G5), used as the Exchange 2010 baseline system, with newer generations of HP ProLiant BL460c server blades. Microsoft selected the HP ProLiant DL380 G5 during initial Exchange 2010 testing to define baseline CPU megacycle requirements. Figure 1 depicts a significant increase in CPU performance relative to Exchange 2010 workloads. The HP ProLiant DL380 G5 baseline server configured with two processors (8 cores) is capable of producing 26,664 megacycles, while an HP ProLiant BL460c Gen8 configured with two processors (16 cores) is capable of 118,210 megacycles. This is a 3.4 times increase in adjusted megacycle performance moving from the Generation 5 to the Generation 8 server platform.

3 Configured with two Intel Xeon X5470 4-core processors (total of 8 cores)


Figure 1. Rapidly increasing CPU processing power

In terms of the number of Exchange users that a physical server could support, an HP ProLiant DL380 G5 configured in a DAG with a 200 messages sent and received per user per day workload profile could handle up to 4,800 users (at ~78% CPU utilization). Typical enterprise Exchange 2010 mailbox server deployments range between 3,000 and 9,000 mailboxes depending on the messages sent and received per user per day workload profiles used. Sizing for the same 200 messages per user per day workload profile on the HP ProLiant BL460c Gen8 server shows it could handle up to 21,000 users (at ~78% CPU utilization). This is significantly higher than the number of users handled by most large mailbox servers deployed in enterprise IT organizations today. This gap between the current hardware capabilities of the BL460c Gen8 and the typical enterprise deployment of a mailbox server allows for consolidation of resources using virtualization.
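To make the arithmetic behind these user counts concrete, the Python sketch below estimates supported mailboxes from adjusted megacycles, a target CPU ceiling and a per-mailbox megacycle cost. The ~4.3 megacycles per mailbox figure is an illustrative assumption for a 200 messages/day DAG profile including passive-copy overhead, not a value taken from this paper; actual sizing should use the Microsoft Exchange 2010 Mailbox Server Role Requirements Calculator, which also accounts for memory, failover targets and megacycle headroom.

# Rough mailbox-count estimate from adjusted megacycles (illustrative only).
# Assumption: ~4.3 megacycles per mailbox for a 200 msg/day profile in a DAG,
# including passive-copy overhead; the Microsoft calculator is authoritative.

def estimated_mailboxes(adjusted_megacycles, target_cpu_util=0.78,
                        megacycles_per_mailbox=4.3):
    """Return the approximate number of mailboxes a server can host."""
    return int(adjusted_megacycles * target_cpu_util / megacycles_per_mailbox)

platforms = {
    "DL380 G5 (8 cores)":     26664,   # baseline adjusted megacycles
    "BL460c Gen8 (16 cores)": 118210,  # adjusted megacycles from figure 1
}

for name, megacycles in platforms.items():
    print(f"{name}: ~{estimated_mailboxes(megacycles):,} mailboxes")
# Prints roughly 4,800 and 21,400, in line with the figures quoted above.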

Why deploy Exchange using HP 3PAR StoreServ Storage?

Designing storage solutions for Exchange has always been complex, requiring detailed knowledge of both user workload requirements and storage subsystem designs. With each major release of Exchange, the user I/O requirements have dropped while mailbox capacity requirements have increased. Exchange 2010 is no different: with multiple database copies and the emphasis on large mailbox storage, capacity requirements have grown significantly.

To handle the increases in storage capacity needed, Microsoft has optimized Exchange 2010 to support lower cost, higher capacity, midline disk storage. Midline disks provide large disk capacities but operate at slower I/O performance. To make up for the slower I/O performance, Microsoft made significant changes to the Extensible Storage Engine (ESE) in Exchange 2010. Instead of the smaller, more random I/O typical of prior versions of Exchange, the Exchange 2010 ESE performs fewer, larger, more sequential I/Os. These improvements in the ESE, along with other design changes, have significantly reduced the I/O performance requirements, making midline disks suitable for Exchange 2010 workloads. However, deploying midline drives introduces a new set of considerations and challenges when designing storage configurations for Exchange 2010.

Midline disks provide significant capacity increases, allowing for larger mailbox capacities at a lower cost per GB. Consequently, larger mailboxes require larger database sizes. The recommended maximum database sizes have increased from 200GB in Exchange 20074 to 2TB in Exchange 20105. Such significant increases in database size now affect maintenance events such as rebuilding a RAID array after a disk failure. Maintenance events that took only a few hours to complete with smaller databases and enterprise storage can now take tens of hours to complete when you combine very large databases and midline disks.

Microsoft does not recommend deploying RAID5 with midline disks for Exchange. The added overhead of parity reconstruction with RAID5 can take many hours, and potentially days, to complete on larger RAID arrays. During this rebuild time, the RAID array is under heavy load, resulting in suboptimal user performance. The extended rebuild times also increase the risk of a second drive failure, which would result in data loss for the entire RAID array.

4 When deployed with CCR for HA
5 When deployed in a DAG for HA

[Figure 1 data: adjusted megacycles by server platform. DL380 G5 (X5470 3.33GHz 4-core processors, 8 cores total): 26,664; BL460c G6 (X5570 2.93GHz 4-core processors, 8 cores total): 44,973; BL460c G7 (X5675 3.06GHz 6-core processors, 12 cores total): 71,993; BL460c Gen8 (E5-2680 2.7GHz 8-core processors, 16 cores total): 118,210.]


To reduce the inherent risks related to extended RAID rebuild times, Microsoft recommends that DAS configurations be deployed using either RAID1/10 or RAID-less6 (JBOD) disk configurations. HP expands on the Microsoft guidance for DAS configurations by recommending either RAID1 two-disk mirror sets or single RAID-less disk configurations. Each mirror set or single RAID-less disk would then contain a single Exchange database. In the RAID1 scenario, there is still a rebuild process to recover from a failed disk, but the impact is isolated to one database.

RAID-less configurations operate differently in the event of a disk failure. In this scenario, the connections to the mailbox server holding the failed database are automatically failed over to another mailbox server in the DAG that holds a copy of the database. The recovery process requires manual steps to replace the failed disk, prepare the new disk in the Windows® OS, and then perform a reseed of the database to the new disk. Since the reseeding process is disk intensive, another passive copy of the database should be the source of the reseed. This avoids any potential client performance issues with the active copy. The reseed process copies the entire database, transaction logs and content indexes over the replication network to the target server. Depending on the size of the database and the replication network link speed, this can take many hours to complete. After the reseed process is completed and the database is in a healthy state, the database can be switched back to the original server.

While the DAG failover/failback functionality is relatively quick, typically less than 30 seconds, routine service events with DAS storage configurations require Exchange administrators to manage and monitor the end-to-end recovery process. If not properly managed, disk rebuild or reseed tasks can quickly degrade end-user performance. For small organizations, the extra administration time may not be a significant issue. However, as the number of users grows into the tens of thousands, end-to-end recovery can become a significant IT resource issue.

Deploying large mailboxes using DAS creates other challenges beyond routine maintenance. Exchange 2010 I/O performance demands are the same whether a mailbox contains a couple hundred megabytes or a couple of gigabytes of messaging data; I/O requirements are based on user workload profiles, not mailbox capacity. To support the I/O requirements using midline disks, DAS solutions require fully provisioned storage when deployed with Windows Server 2008 or Windows Server 2008 R27. However, at initial deployment, user capacity requirements are considerably less.

Storage capacity may remain underutilized for an extended period as users grow into their new multi-gigabyte mailboxes, especially if the mailbox capacity plan assumes two- or three-year growth period. During this period, DAS may be fully deployed, but not fully utilized.

DAS challenges:

• Requires special Exchange administrative attention during routine maintenance events.

• Requires fully provisioned storage to meet I/O requirements.

HP 3PAR StoreServ Storage
HP 3PAR StoreServ solutions address these storage challenges by decoupling many of the storage management functions from day-to-day Exchange administration. Advanced functionality such as chunklet-based RAID and wide striping of data across multiple hard drives improves I/O performance and reduces administration.

Chunklet-based RAID divides physical drives into multiple equally sized slices called chunklets. The chunklet size is 1GB on the HP 3PAR StoreServ 10400, and each chunklet can be thought of as its own small disk. RAID groups are constructed from chunklets on separate drives throughout the array (see the right side of figure 2). Wide striping provides the full performance capabilities of the array to small volumes without provisioning excess capacity and without creating hot spots on a subset of physical drives; other chunklets on the physical drives remain available for other volumes. The left side of figure 2 shows what RAID groups look like on traditional SAN arrays and DAS configurations, where storage is provisioned directly from pools of physical disks. The traditional RAID sets represent unbalanced I/O loads on a subset of physical disks, which can result in performance issues, especially during service events that draw higher than normal I/O.

6 RAID-less configurations require a minimum of 3 copies of the database; see: http://technet.microsoft.com/en-us/library/ee832790.aspx#Psm
7 Windows Server 2012 supports thin provisioned volumes


Figure 2. Wide striping on HP 3PAR Storage compared to traditional RAID

Chunklet-based RAID improves rebuild performance in the event of a physical disk failure. Unlike DAS storage, where the entire physical disk is allocated to a single disk array and must be rebuilt in full, chunklet-based RAID is distributed across many physical disks. During a physical disk failure, only the chunklets stored on the failed physical disk need to be rebuilt. Wide striping reduces the overhead during reconstruction of the chunklets, allowing the volumes to maintain normal response times for Exchange users.
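The Python sketch below is a conceptual illustration of why chunklet-based wide striping contains the impact of a drive failure; it is not the HP 3PAR OS algorithm, and the array dimensions are made-up example values.

# Conceptual sketch of chunklet-based wide striping (illustrative only;
# not the HP 3PAR OS algorithm). Chunklets are 1GB slices of physical drives;
# each RAID set draws its chunklets from distinct drives across the array.
import itertools

NUM_DRIVES = 32          # physical drives in the example array
CHUNKLETS_PER_DRIVE = 8  # 1GB chunklets carved from each drive
RAID_SET_WIDTH = 4       # chunklets per RAID set (e.g. RAID5 3+1)

# Round-robin across drives so consecutive chunklets land on different spindles.
drive_cycle = itertools.cycle(range(NUM_DRIVES))
chunklets = [(next(drive_cycle), i) for i in range(NUM_DRIVES * CHUNKLETS_PER_DRIVE)]

# Group chunklets into RAID sets of RAID_SET_WIDTH members.
raid_sets = [chunklets[i:i + RAID_SET_WIDTH]
             for i in range(0, len(chunklets), RAID_SET_WIDTH)]

failed_drive = 7
affected = [rs for rs in raid_sets if any(drive == failed_drive for drive, _ in rs)]
print(f"RAID sets touched by the failure: {len(affected)} of {len(raid_sets)}")
# Only the chunklets on the failed drive are rebuilt, spreading rebuild I/O
# across the remaining drives.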

Thin provisioning

HP 3PAR StoreServ 10400 volumes can also be thin provisioned, allowing administrators to deploy storage capacity for Exchange as needed. Figure 3 shows the traditional array's dedicate-on-allocation methodology, where allocated and purchased physical disk capacity are the same. In contrast, HP 3PAR StoreServ thin provisioning uses a dedicate-on-write approach that requires fewer disks to meet the required net array capacity.

Figure 3. HP 3PAR StoreServ Thin Provisioning
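As a rough, hypothetical capacity illustration of the dedicate-on-write benefit (the mailbox count, quota and fill ratio below are example values, not figures from this paper):

# Hypothetical capacity example (example numbers only) contrasting
# dedicate-on-allocation (fully provisioned) with dedicate-on-write (thin).
mailboxes = 5000
mailbox_quota_gb = 5          # multi-year target mailbox size
initial_fill_ratio = 0.20     # users start with ~20% of their quota consumed

fully_provisioned_tb = mailboxes * mailbox_quota_gb / 1024
thin_written_tb = fully_provisioned_tb * initial_fill_ratio

print(f"Fully provisioned up front: ~{fully_provisioned_tb:.1f} TB")
print(f"Actually written at deployment: ~{thin_written_tb:.1f} TB")
# With thin provisioning, the remaining capacity can be purchased and added
# later, as mailboxes grow into their quotas.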

Dynamic Optimization

HP 3PAR Dynamic Optimization (DO) software allows the underlying characteristics of a volume to be changed transparently and without disruption of service. Using this software ensures that drive type, RAID level and configuration, and high availability options can all be changed simply, easily, and non-disruptively. Unlike traditional arrays, where a poor initial choice can be difficult and/or costly to change, Dynamic Optimization allows changes to be made easily. DO provides the ability to migrate an Exchange database virtual volume from one type of RAID level to another or to a different storage tier.


Adaptive Optimization

HP 3PAR Adaptive Optimization (AO) allows different tiers of storage (SSD, Fibre Channel and Nearline) to be used together as part of the same logical volume using a rules-based data movement engine. On a scheduled basis, the AO engine analyzes performance (access rates) for sub-volume data regions, then selects the most active regions (those with the highest I/O rates) to be autonomically moved to a faster storage tier (SSD or Fibre Channel). As data regions become less active, the data movement engine moves them to slower storage tiers (Fibre Channel or Nearline).

Exchange user access patterns can vary significantly depending on business activities and the age of the messages. Large multi-gigabyte mailboxes will likely have some portions that are very active on a day-to-day basis, such as the inbox folder and calendar, while the majority of mailbox data is likely to remain inactive, consuming capacity but generating little I/O. This pattern of a relatively small set of active (hot) messages and large quantities of inactive (cold) messages is an ideal workload for AO.
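The sketch below is a simplified, hypothetical model of rules-based sub-volume tiering; it does not reflect the actual HP 3PAR AO engine or its policies, only the general idea of ranking regions by recent I/O rate and placing the hottest regions on the fastest tier.

# Simplified, hypothetical model of rules-based sub-volume tiering.
# Not the HP 3PAR AO implementation -- just the general idea: rank regions
# by recent access rate and place the hottest on the fastest tier.

def assign_tiers(region_io_rates, ssd_regions, fc_regions):
    """region_io_rates: dict of region_id -> recent I/O rate.
    Returns dict of region_id -> tier name."""
    ranked = sorted(region_io_rates, key=region_io_rates.get, reverse=True)
    placement = {}
    for rank, region in enumerate(ranked):
        if rank < ssd_regions:
            placement[region] = "Tier 0 (SSD)"
        elif rank < ssd_regions + fc_regions:
            placement[region] = "Tier 1 (Fibre Channel)"
        else:
            placement[region] = "Tier 2 (Nearline)"
    return placement

# Example: a small inbox/calendar working set is hot, older mail is cold.
rates = {"inbox": 950, "calendar": 400, "2011-archive": 12, "2010-archive": 3}
print(assign_tiers(rates, ssd_regions=1, fc_regions=1))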

Virtualization and SAN technologies provide a rich set of features and functionality to address common concerns that IT organizations face when managing and monitoring enterprise deployments of Exchange 2010, and combining these technologies gives administrators the tools to manage both unplanned and routine service events. The overview section briefly describes the HP ProLiant BL460c Gen8 and HP 3PAR StoreServ 10400 hardware technologies detailed in this paper.

Overview

HP ProLiant BL460c Gen8 server blades and HP 3PAR StoreServ 10400 are key hardware enablers when virtualizing Exchange 2010. The sizing and design decisions in this paper detail HP hardware and software technologies used with VMware vSphere 5. However, before getting into specific decision details, it is important to provide an overview of the recommended HP hardware platforms.

HP ProLiant BL460c Gen8 server blades

The HP ProLiant Gen8 family addresses the complex management challenges that server administrators face in today's virtual data center. The HP ProLiant Gen8 servers combine more than 150 design innovations and have more than 900 patents pending. The HP ProLiant BL460c Gen8 design increases the system resources available for deploying Exchange in a virtualized environment. System resource features include:

• Increased processor performance, with 33% more cores8 − up to 8 cores / 16 threads per processor

• Increased memory capacity, with 33% more DIMM slots− supporting up to 512GB memory9

• Increased I/O bandwidth with PCIe 3.0 with two x16 mezzanine slots and integrated I/O controller on processor die

• Increased network connectivity options with HP FlexibleLOM adapters – 1Gb (4 port) and 10Gb (2 port) modules

Figure 4 depicts the system board layout of the BL460c Gen8 server blade, featuring increased DIMM slots, FlexibleLOM adapter and the PCIe 3.0 mezzanine slots.

8 Compared to BL460c G7
9 Requires two processors


Figure 4. HP ProLiant BL460c Gen8 server blade

Processors
The HP ProLiant BL460c Gen8 server blade uses multi-core Intel® Xeon® E5-2600 processors that provide greater integration and improved bandwidth interfaces (QuickPath Interconnect (QPI), memory, and PCI Express), resulting in almost 2x higher overall system performance than previous-generation processors. The processors include up to 20 MB of Level 3 cache, two QPI links, and four Direct Media Interface (DMI) 2.0 lanes. The integrated four-channel memory controller supports up to two DDR3 DIMMs per channel10 with data rates up to 1600 Mega Transfers per second (MT/s).

The x4 DMI link is a bi-directional chip-to-chip interconnect between the processor and chipset, providing a total of 20Gb/s in each direction. Figure 5 depicts a processor block diagram of the ProLiant BL460c Gen8 using the Xeon E5-2600 series processors11.

10 In the BL460c Gen8 server; other ProLiant platforms support up to 3 DIMMs per channel
11 Depicted with 8 cores per processor; however, the number of processor cores varies by model


Figure 5. Processor block diagram of the BL460c Gen8 server platform using Xeon E5-2600 series processors

Memory

Server virtualization has significantly increased the need for more memory. As a result, DRAM manufacturers have increased chip component densities to support higher memory capacities. The combination of higher memory demand, component complexity, and availability requirements have increased the importance of system memory.

HP SmartMemory is a unique technology introduced for the HP ProLiant Gen8 Server series. Unlike third-party memory, HP SmartMemory has passed the rigorous HP qualification and test process. HP SmartMemory unlocks certain performance and high efficiency features optimized for HP ProLiant Gen8 servers:

• HP SmartMemory uses up to 20% less power than third-party memory while achieving the same performance. At low voltage (1.35 V), HP SmartMemory operates at DDR3-1333 MHz with one and two DIMMs per channel. In comparison, third-party memory requires 1.5 V to achieve the same performance.

• HP SmartMemory performance is 25% higher with Unbuffered ECC (UDIMM) memory and capable of supporting 1333 MT/s with up to two DIMMs per channel at 1.35V. Third-party memory supports 1066 MT/s bandwidth at higher operating voltage (1.5V).

FlexibleLOM modules
The idea behind LAN-on-motherboard (LOM) was to provide essential connectivity without consuming an expansion slot or mezzanine connector. The implication is that the choice has to be made in the design phase and cannot be revisited until the design is changed. With the FlexibleLOM module approach in select ProLiant Gen8 servers, not only can you choose from several different technologies, speeds and features, but you can do so without sacrificing expandability. This also means that if new connectivity technologies come out, you can upgrade existing deployed ProLiant Gen8 servers if desired. FlexibleLOM modules available for BL460c Gen8 servers include the Ethernet 10Gb 2-port 560FLB FIO Adapter, the FlexFabric 10Gb 2-port 554FLB FIO Adapter and the Flex-10 10Gb 2-port 530FLB FIO Adapter.

HP BladeSystem

The HP BladeSystem c7000 enclosure represents an evolution of the entire rack-mounted infrastructure. It consolidates all the supporting infrastructure elements − power, cooling, and I/O infrastructure for the c-Class components into a single infrastructure-in-a-box. The BladeSystem c7000 enclosure can be configured with the following:

• Up to eight full-height or 16 half-height server, storage, or other option blades

• Up to eight interconnect modules simultaneously supporting a variety of network interconnect fabrics

• Up to 10 Active Cool 200 fans


• Up to six power supplies

• Redundant BladeSystem Onboard Administrator (OA) management modules (optional active-standby design)

The BladeSystem c7000 enclosure is 10U high. It includes a shared, five terabit-per-second, high-speed NonStop midplane for a wire-once connection of server blades to the network and shared storage. A pooled-power backplane delivers power and ensures that the full capacity of the power supplies remains available to all server blades and interconnects. The enclosure comes with a single-phase AC, a three-phase AC, or a 48 V DC power subsystem to meet the needs of your data center power infrastructure. Figures 6 and 7 show both the front and rear views of the BladeSystem c7000 enclosure. The BladeSystem c7000 enclosure and the BladeSystem c3000 enclosure support many of the same critical components such as servers, interconnects, mezzanine cards, storage blades, power supplies, and fans.

Figure 6. BladeSystem c7000 enclosure

Figure 7. Rear view


Enclosure management

The HP BladeSystem c7000 Enclosure has extensive embedded management capabilities based on three management elements:

• BladeSystem Onboard Administrator (OA) or BladeSystem OA with KVM

• ProLiant OA powered by Integrated Lights-Out 2 (iLO 2), iLO 3 or iLO 4 management processors that are integrated on the server blades

• Interconnect module management such as the HP Virtual Connect Manager or Virtual Connect Enterprise Manager

These integrated management elements provide powerful hardware management for remote administration and local diagnostics, as well as component and enclosure troubleshooting.

BladeSystem Onboard Administrator
The brain of the c-Class enclosure is the BladeSystem OA module located in the enclosure. It performs four management functions for the enclosure:

• Detecting component insertion and removal

• Identifying components and required connectivity

• Managing power and cooling

• Controlling components

IT administrators can access the OA in three different ways:

• Through the web browser graphical user interface (GUI)

• Through the scriptable command line interface (CLI)

• Through the built-in Insight Display diagnostic LCD panel on the front of the c-Class enclosure

An optional second OA in the BladeSystem c7000 enclosure provides redundancy for these functions.

Virtual Connect technology

HP developed Virtual Connect technology to simplify networking configuration in an HP BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure. It adds a hardware abstraction layer that removes the direct coupling between them. Server administrators can physically wire the uplinks from the enclosure to its network connections once, and then manage the network addresses and uplink paths through Virtual Connect software. Using Virtual Connect interconnect modules provides the following capabilities:

• Reduces the number of cables required for an enclosure, compared to using pass-through modules

• Reduces the number of edge switches that LAN and SAN administrators must manage

• Allows pre-provisioning of the network and SAN; server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators

• Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency and improving performance

• Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for East/West traffic flow12, which is becoming more prevalent at the server edge with the growth of server virtualization.

Without Virtual Connect abstraction, changes to server hardware (for example, replacing the system board during a service event) often result in changes to the MAC13 addresses and WWNs14. The server administrator must then contact the LAN/SAN administrators, give them the updated addresses, and wait for them to make the appropriate updates to their infrastructure. With Virtual Connect, a server profile holds the MAC addresses and WWNs constant, so the server administrator can apply the same networking profile to new hardware. This can significantly reduce the time of a service event.

Virtual Connect Flex-10 technology further simplifies network interconnects. Flex-10 technology lets you split a 10Gb Ethernet port into four physical function NICs (called FlexNICs). This lets you replace multiple, lower-bandwidth NICs with a single 10Gb adapter. Prior to Flex-10, a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers. Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware up to 50% by consolidating all the NIC connections onto two 10Gb ports.

12 Server-to-server network traffic vs. North/South server-to-client traffic
13 Media Access Control - address uniquely assigned to a network interface
14 World Wide Name - a unique identifier used in storage technologies

Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10Gb port. Virtual Connect FlexFabric modules and FlexFabric adapters can converge Ethernet, Fibre Channel, or accelerated iSCSI traffic into a single 10Gb data stream, and the 10Gb physical link can be partitioned into four virtual functions with adjustable bandwidth per virtual function. FlexFabric adapters reduce management complexity; the number of NICs, HBAs, and interconnect modules needed; and the associated power and operational costs. Using FlexFabric technology lets you reduce the hardware requirements by 95% for a full enclosure of 16 virtualized servers, from 40 components to 2 FlexFabric modules.

The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR Storage Systems, which eliminates the intermediate SAN infrastructure. Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager.

Figure 8. Direct-Attach SAN architecture

Virtual Connect components Virtual Connect is a portfolio of interconnect modules, adapters, embedded software, and an optional management application:

• Virtual Connect interconnect modules – FlexFabric, Flex-10 or Fibre Channel modules plug directly into the interconnect bays in the rear of the HP BladeSystem c-Class enclosure. The modules connect to the server blades through the enclosure midplane. The Ethernet-based modules support 1 or 10Gb uplinks and downlinks, allowing you to purchase 1Gb SFPs and upgrade to 10Gb SFP+ transceivers when the rest of your infrastructure is ready to support them.

• Flex-10 and FlexFabric adapters are available as either LAN-on-motherboard (LOM) devices or mezzanine cards. Virtual Connect technology also works with 1Gb adapters and FlexibleLOM modules for ProLiant BL Gen8 servers.

• Virtual Connect Manager (VCM) firmware is embedded in the Virtual Connect Flex-10 and FlexFabric interconnect modules. VCM manages a single domain of up to four enclosures.

• Virtual Connect Enterprise Manager (VCEM) is an optional software application that lets you manage up to 250 Virtual Connect domains and up to 1000 enclosures within those domains. The VCEM software provides automation and group-based management capabilities beyond what VCM offers.

HP 3PAR StoreServ 10400

HP 3PAR StoreServ 10400 is ideal for storage virtualization of Exchange 2010. It combines highly virtualized, autonomically managed, and dynamically tiered storage arrays with advanced internal virtualization capabilities to increase administrative efficiency, system utilization, and storage performance.

Architecture

HP 3PAR StoreServ combines open technologies with extensive innovations in hardware and software design. HP 3PAR StoreServ features a high-speed, full-mesh, passive system backplane that joins multiple controller nodes (the high-performance data movement engines of the architecture) to form a cache-coherent, mesh-active cluster. This low-latency interconnect allows for tight coordination among the controller nodes and a simplified software model.

In HP 3PAR StoreServ 10400, the controller nodes are paired via Fibre Channel connections from each node in the pair to the dual-ported drive chassis owned by that pair.

High availability is built into the HP 3PAR StoreServ architecture. Unlike other approaches, the system offers both hardware and software fault tolerance by running a separate instance of the HP 3PAR StoreServ Operating System on each controller node, thus ensuring the availability of user data. With this design, software and firmware failures, a significant cause of unplanned downtime in other architectures, are greatly reduced.

HP 3PAR StoreServ 10400 Components

• Drive Chassis (4U) and Drive Magazines: HP 3PAR StoreServ 10400 can support up to six drive chassis in the primary rack and eight additional in an expansion rack. Each drive chassis supports a minimum of two and a maximum of ten drive magazines, with each drive magazine containing four disks (40 disks per drive chassis). Additional drive chassis and drive magazines can be added to the system non-disruptively as capacity needs grow.

• Full-mesh controller backplane is a passive circuit board that contains slots for controller nodes. Each controller node slot is connected to every other controller node slot by a high-speed link, forming a full-mesh interconnect between all controller nodes in the cluster. The 3PAR StoreServ 10400 uses a four-node backplane that supports two or four controller nodes.

• Controller Node contains two high-performance, proprietary HP 3PAR StoreServ Gen4 ASICs optimized for data movement between three I/O buses, a three memory-bank data cache, and seven high-speed links to the other controller nodes over the full-mesh backplane. These ASICs perform parity calculations (for RAID 5 and RAID MP/Fast RAID 6) on the data cache and calculate the CRC Logical Block Guard used by the T10 Data Integrity Feature (DIF) to validate data stored on drives.

• Service Processor (SP) is used to maintain, troubleshoot, and upgrade the 3PAR StoreServ.

Figure 9 graphically depicts HP 3PAR StoreServ 10400 major components.

Figure 9. HP 3PAR StoreServ 10400 Components



HP 3PAR StoreServ Storage concepts and terminology

HP 3PAR StoreServ Storage includes both the hardware components that physically store your data and the software applications that manage your data. HP 3PAR StoreServ comprises the various layers shown from left to right in figure 10.

Figure 10. 3PAR StoreServ storage layers

• Physical Disk is a hard drive mounted in a drive magazine located in the drive chassis (SSD, Fibre Channel and Nearline).

• Chunklets are drawn from physical disks, and logical disks (LDs) are created from chunklets. The HP 3PAR StoreServ OS divides physical disks into multiple equally sized slices called chunklets. The chunklet size is 1GB for HP 3PAR StoreServ 10400; RAID groups are constructed from chunklets throughout the array.

• Logical Disk is a collection of physical disk chunklets arranged as rows of RAID sets. Each RAID set is made up of chunklets from different physical disks. Logical disks are pooled together in CPGs, which allocate space to virtual volumes. The 3PAR StoreServ OS automatically creates the underlying logical disks when you create CPGs. The RAID type, space allocation, growth increments and other logical disk parameters can be set when you create a CPG or modified later.

• Common Provisioning Groups (CPG) are user created virtual pools of logical disks that allocate space to virtual volumes on demand. A CPG allows up to 4,095 virtual volumes to share the CPG's resources. Both fully provisioned virtual volumes and thinly provisioned virtual volumes draw space from the CPG's logical disk pool.

• Virtual Volumes (VV) are user created and draw their resources from CPGs, with volumes exported as LUNs to hosts. Virtual volumes are the only data layer visible to the hosts. You can create physical copies or virtual copy snapshots of virtual volumes that remain available if the original base volume becomes unavailable.

• Fully Provisioned Virtual Volume is a volume that uses logical disks that belong to a logical disk pool drawn from the same CPG. Unlike thinly provisioned virtual volumes, fully provisioned virtual volumes have a set amount of user space that is allocated for user data. The fully provisioned volume size is fixed, and the size limit is 16 TB.

• Thin Provisioned Virtual Volume (TPVV) is a volume that uses logical disks that belong to a logical disk pool. TPVVs associated with the same CPG draw space from that pool as needed, allocating space on demand in small increments from each controller node. As the volumes that draw space from the CPG require additional storage, the HP 3PAR StoreServ OS automatically creates additional logical disks and adds them to the pool until the CPG reaches the user-defined growth limit, which restricts the CPG’s maximum size.

• Dynamic Optimization (DO) provides the ability to migrate whole virtual volumes from one RAID level to another or to a different storage tier. DO allows you to change the underlying drive type, RAID level and configuration, which is especially helpful when adapting to new storage performance or capacity requirements.

• Adaptive Optimization (AO) allows different tiers of storage (SSD, Fibre Channel and Nearline) to be used together as part of the same logical volume using a rules-based data movement engine. On a scheduled basis, the AO engine analyzes performance (access rates) for sub-volume data regions, then selects the most active regions (those with the highest I/O rates) to be moved to faster storage tiers (SSD or Fibre Channel). As data regions become less active, the data movement engine moves these regions to slower storage tiers (Fibre Channel or Nearline).

Figure 11 compares DO LUN movement between three storage tiers (Tier 0 SSD, Tier 1 Fibre Channel and Tier 2 Nearline) with AO sub-LUN block region movement between storage tiers.


Figure 11. Dynamic optimization and Adaptive optimization

Sizing considerations

User workload and availability options differ significantly with each new version of Exchange, making capacity planning and hardware sizing a rather difficult task, especially for the mailbox role. To reduce the complexity of sizing Exchange 2010, Microsoft provides detailed system resource requirements for sizing different user workload profiles. User workload profiles are based on the number of messages sent/received per user per day and the average size of those messages. The workload profiles account for peak times, when there may be significant spikes in mailbox activity. The mailbox profiles provide estimated CPU, memory and storage I/O resource requirements for each mailbox user. Table 1 lists the most common Microsoft user workload profiles used in sizing Exchange 2010. For additional details on larger message profiles, refer to the Microsoft TechNet article: http://technet.microsoft.com/en-us/library/ee712771.aspx.

Table 1. User workload profile

Messages sent/received per mailbox per day | Megacycles for active mailbox | Megacycles for passive mailbox | Database cache per mailbox (MB) | Single copy database IOPS (no DAG) | Multi-copy database IOPS (DAG)
50  | 1 | 0.15 | 3  | 0.06 | 0.05
100 | 2 | 0.30 | 6  | 0.12 | 0.10
150 | 3 | 0.45 | 9  | 0.18 | 0.15
200 | 4 | 0.60 | 12 | 0.24 | 0.20
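As an example of how the Table 1 values are applied, the short Python calculation below multiplies the per-mailbox figures by a hypothetical mailbox count; the 4,000-mailbox, 150 messages/day DAG inputs are illustrative assumptions, not recommendations from this paper.

# Example use of the Table 1 per-mailbox values (DAG, 150 messages/day profile).
# The mailbox count is a hypothetical input chosen for illustration.
mailboxes = 4000
db_cache_mb_per_mailbox = 9      # from Table 1
dag_iops_per_mailbox = 0.15      # from Table 1 (multi-copy database IOPS)
active_megacycles_per_mailbox = 3

print(f"Database cache:    {mailboxes * db_cache_mb_per_mailbox / 1024:.0f} GB")
print(f"Database IOPS:     {mailboxes * dag_iops_per_mailbox:.0f} IOPS")
print(f"Active megacycles: {mailboxes * active_megacycles_per_mailbox:,}")
# ~35 GB of database cache, ~600 IOPS and 12,000 active megacycles; the
# Microsoft calculator adds passive-copy, failover and overhead factors on top.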

Mailbox Server Virtual CPU sizing considerations

Microsoft refers to megacycles when sizing physical CPU resources. The values in table 1 are based on testing conducted using an HP ProLiant DL380 G5 with two X5470 3.33GHz processors (8 cores), the platform Microsoft used to calculate megacycle requirements for the different user workload profiles listed in its sizing article. These baseline megacycle requirements must be adjusted when sizing an Exchange mailbox server with newer CPU architectures.

Sizing with newer CPU architectures requires calculations to be normalized for differences between the baseline platform and newer architectures. The normalized results, referred to as available adjusted megacycles, account for performance improvements in processor architectures. To determine the available adjusted megacycles, Microsoft uses the SPECint2006 rate result to calculate the new processor performance. The SPECint2006 is an independent performance benchmark maintained by the Standard Performance Evaluation Corporation.
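For reference, the underlying normalization can be expressed using the baseline values Microsoft's sizing guidance cites for the HP ProLiant DL380 G5 (a SPECint2006 rate of 150 across 8 cores, or 18.75 per core, at 3,333 megacycles per core); treat these baseline figures as assumptions drawn from that guidance rather than values stated elsewhere in this paper. Applying the formula to the BL460c Gen8 values used later in this section reproduces the adjusted megacycle figure quoted with figure 1:

Adjusted megacycles per core = (New platform per-core SPECint2006 rate value / 18.75) * 3,333

BL460c Gen8: (665 / 16) / 18.75 * 3,333 = ~7,388 adjusted megacycles per core
7,388 * 16 cores = ~118,210 available adjusted megacycles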


To determine the SPECint2006 rate results of given server:

1. Go to spec.org.

2. Select Results tab, highlight CPU2006 and then select Search CPU2006 Results.

3. In the Available Configurations drop down box select SPECint2006 Rates.

4. In the Simple Request section select System from the drop down list.

5. In adjacent Matches field enter the server model to be queried.

6. Then click the Execute Simple Fetch button.

Figure 12 details the SPECint2006 rate result query for the BL460c Gen8 server.

Figure 12. SPECint2006 rate result query

Figure 13 details the results returned when querying the BL460c Gen8 server. The list details all the processor configurations that have been benchmarked by the Standard Performance Evaluation Corporation.

Figure 13. Matching query returned 14 systems


Using the Microsoft Exchange 2010 Mailbox Server Role Requirements Calculator, the rate result (referred to as the Rate Value) can be used to calculate the Mailbox server role's CPU utilization. This rate result calculates the available adjusted megacycles for the physical server. With newer versions of the calculator, Microsoft has added the Server Role Virtualization option to size the Mailbox Server as a guest virtual machine (VM)15 (see figure 14). When Server Role Virtualization is set to Yes, you can provide virtual CPU (vCPU) sizing inputs for the Mailbox Server VM.

Figure 14. Server Role Virtualization

In virtualized deployments, the SPECint2006 Rate Value needs to be recalculated to adjust for the number of vCPUs to be assigned to the VM relative to what is available to the entire physical server.

In the virtualized calculation, you determine the per-core value and then multiply it by the number of vCPUs to be assigned to the VM. Using the SPECint2006 Rate Value of 665 for the physical server, you recalculate the virtual SPECint2006 Rate Value for a 4-vCPU VM with the following formulas.

SPECint2006 Rate Result / Total number of cores = Per-core value

665 / 16 = ~41.6

Number of vCPUs in guest VM * Per-core value = Virtual SPECint2006 Rate Value

4 * 41.6 = 166.4

In this example, the VM with four vCPUs has a SPECint2006 Rate Value of 166.4; this value can then be used in the Exchange 2010 Mailbox Server Role Requirements Calculator as depicted in figure 15. It is important to note that when calculating vCPU or CPU resources, Hyper-Threading is not part of the sizing calculations. While there are performance benefits to Hyper-Threading, sizing assumptions should only use physical CPU cores.

Figure 15. Mailbox Server Guest Machines SPECint2006 Rate Value

The Exchange 2010 Mailbox Server Role Requirements Calculator also accounts for virtualization overhead and allows you to specify a hypervisor CPU adjustment factor (see figure 16), which is used to adjust the megacycles available to the VM. The default is 10%, which should be suitable for most sizing scenarios.

Figure 16. Rate Value and Hypervisor CPU adjustment factor

Once you have specified the SPECint2006 Rate Value for the number of vCPUs to be assigned to the VM and entered the Hypervisor CPU Adjustment Factor, you size the Mailbox server in the same manner as a physical server.
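A minimal Python sketch of the recalculation above, using the example numbers from the text. The final line is only illustrative of how the 10% adjustment factor reduces what the VM can use; the calculator itself applies the factor to the available megacycles.

# Sketch of the virtual SPECint2006 Rate Value recalculation; numbers match the example above.
physical_rate_value = 665                    # SPECint2006 rate result for the 16-core physical server
physical_cores = 16
vcpus = 4                                    # vCPUs assigned to the Mailbox server VM
hypervisor_overhead = 0.10                   # default hypervisor CPU adjustment factor

per_core_value = round(physical_rate_value / physical_cores, 1)   # ~41.6, rounded as in the text
virtual_rate_value = vcpus * per_core_value                        # 166.4, entered into the calculator
effective_after_overhead = virtual_rate_value * (1 - hypervisor_overhead)  # ~149.8 (illustration only)
print(per_core_value, round(virtual_rate_value, 1), round(effective_after_overhead, 1))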

CAS and HT Servers Virtual CPU sizing considerations

After the Mailbox server VM sizing requirements have been determined for virtual CPU resources, you can use the following sizing formulas to determine the CAS and HT server VM requirements. Sizing the CAS and HT server VMs is the same as sizing physical servers and is based on CPU core ratios relative to the Mailbox servers.

The CAS server requires significant CPU resources and Microsoft recommends the following formula to determine the number of CPUs needed:

(Number of required mailbox CPUs) × 3 ÷ 4 = number of CAS CPUs

15 Version 16.1 or greater

(Calculator inputs shown in figures 14, 15 and 16: Global Catalog Server Architecture = 64-bit; Server Multi-Role Configuration (MBX+CAS+HT) = No; Server Role Virtualization = Yes; Mailbox Server Guest Machines = 4 processor cores with a SPECint2006 Rate Value of 166.4; Hypervisor CPU Adjustment Factor = 10%.)


Using the CAS formula, a Mailbox server VM configured with 4 vCPUs would require a CAS server VM configured with 3 vCPUs. The HT server VM, on the other hand, does not have significant vCPU resource requirements, and Microsoft recommends a 5:1 ratio16 between Mailbox and HT vCPUs. Using the four-vCPU Mailbox server VM example, the HT server VM would only need 0.8 vCPUs17 to support the workload.

A simpler method is to combine the CAS and HT roles into a single server VM and then use a 1:1 vCPU ratio between the mailbox server VM and CAS/HT server VM. This method reduces the number of VMs and makes placement of VMs on physical hosts less complex, especially when working with failure scenarios.
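A small sketch of the ratios described above, using the four-vCPU Mailbox server VM example from the text; the rounding step is the only addition.

# Illustrative sizing of CAS and HT vCPUs from the Mailbox VM vCPU count, per the ratios above.
import math

mailbox_vcpus = 4
cas_vcpus = math.ceil(mailbox_vcpus * 3 / 4)   # 3:4 CAS-to-Mailbox ratio -> 3 vCPUs
ht_vcpus = math.ceil(mailbox_vcpus / 5)        # 5:1 Mailbox-to-HT ratio (with AV) -> 0.8, rounded up to 1
combined_cas_ht_vcpus = mailbox_vcpus          # simpler 1:1 ratio for a combined CAS/HT VM
print(cas_vcpus, ht_vcpus, combined_cas_ht_vcpus)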

Balancing vCPU resources

Sizing Exchange servers in a virtualized environment requires striking a balance between the number of CPU cores available in the physical host and the number of vCPUs needed to support the workloads of the individual Exchange VMs. Virtualizing Exchange allows more flexibility when assigning the number of vCPUs to a VM. However, it is important to distribute vCPU resources evenly across the Exchange VMs. For example, the BL460c Gen8 server with 16 cores can evenly distribute vCPUs using combinations of 2, 4 or 8 vCPUs per VM. While sizing Exchange VMs with 3, 6 or 9 vCPUs is possible, it becomes difficult to balance VM workloads with server vCPU resources. This is especially important when designing for failure scenarios.

Figure 17 depicts an example four-node vSphere cluster, with each physical server configured with 16 CPU cores and supporting up to four VMs with four vCPUs each. The example depicts six Mailbox server VMs and six CAS/HT server VMs hosted by the four physical servers. During normal operations, each physical server will host three Exchange VMs. In a failure scenario, where one physical server is unavailable, the remaining three physical servers will each support four Exchange VMs.

Note

There are additional considerations about placement of the Mailbox and CAS/HT server VMs to preserve Exchange DAG functionality that will be addressed in the Distributed Resource Scheduler section.

Figure 17. Example resource distribution

Evenly balancing vCPU resources has additional benefits by reducing the complexity when designing for failure scenarios. Balancing vCPUs allows failover scenarios to be used without having to undersubscribe or oversubscribe vCPU resources. Depending on the sizing requirements, you may need to model different processor configurations and mailbox workloads to find an optimal configuration.
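To show why even distribution matters, here is a quick check of the example in figure 17 (all counts are taken from the text): during normal operation each host runs three 4-vCPU VMs, and after one host failure each surviving host runs four, exactly matching its 16 physical cores without oversubscription.

# Rough check of the example cluster: vCPU demand versus physical cores in normal and failure scenarios.
cores_per_host = 16
hosts = 4
vcpus_per_vm = 4
vms = 12                                        # six Mailbox VMs plus six CAS/HT VMs

normal_vcpus_per_host = (vms / hosts) * vcpus_per_vm          # 3 VMs -> 12 vCPUs on 16 cores
failure_vcpus_per_host = (vms / (hosts - 1)) * vcpus_per_vm   # 4 VMs -> 16 vCPUs on 16 cores
print(normal_vcpus_per_host, failure_vcpus_per_host, cores_per_host)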

16 When using Anti-virus software on the HT role
17 Must be rounded up to 1 vCPU



Sizing and design considerations

• Use the virtualization option in the calculator to size Mailbox Server Guest Machines vCPU configurations.

• Recalculate the SPECint2006 Rate Result to size Mailbox Server Guest Machines.

• Model different CPU and Mailbox profiles to determine an optimal configuration – balance vCPU resources with the mailbox server requirements.

• Deploy combined CAS/HT server VMs in a 1:1 ratio to the mailbox server VMs.

Virtual memory sizing considerations

Sizing memory for a virtualized environment is not significantly different from a physical server in terms of memory needed for an individual Exchange server VM. Exchange 2010 relies heavily on memory to optimize performance. Table 1 provides the database cache per mailbox guidelines for the different user workload profiles. Proper memory sizing ensures that there is sufficient memory available for mailbox database cache. Exchange 2010 relies on mailbox database cache to reduce disk I/O. If memory available to Exchange is reduced, mailbox database cache will decrease resulting in higher disk I/O.
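The database cache column of table 1 drives the bulk of Mailbox server memory sizing. A minimal sketch follows; the 5,000-user count, the 150-message profile, and the 4 GB base OS/Exchange overhead are assumptions for illustration, not calculator output.

# Illustrative estimate of Mailbox VM memory from the table 1 database cache figure.
users = 5000                 # assumed mailboxes hosted by this VM
cache_per_mailbox_mb = 9     # 150-message profile from table 1
base_overhead_gb = 4         # assumed OS / Exchange base memory, not from the calculator

db_cache_gb = users * cache_per_mailbox_mb / 1024          # ~44 GB of database cache
estimated_vm_memory_gb = db_cache_gb + base_overhead_gb    # ~48 GB; size the VM with this memory reserved
print(round(estimated_vm_memory_gb))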

Hypervisors provide features that attempt to reclaim memory resources from running VMs using memory overcommit techniques18. Memory overcommit techniques work well for VMs that do not rely significantly on memory caching. However, for Exchange it is important to disable hypervisor-based memory overcommit techniques19. These techniques can dynamically reduce the amount of memory available to the VM, which can result in poor Exchange performance.

Physical memory

There are special considerations when sizing memory for the physical host. Virtualizing multiple Enterprise mailbox server VMs requires a significant amount of memory.

The HP ProLiant BL460c Gen8 server supports up to 16 DIMMs when configured with two processors. Each processor has four memory channels, with each channel supporting one or two DIMMs. Depending on the type of DIMM used, memory can be optimized for performance, capacity and/or low power requirements.

There are three types of DIMMs available for HP ProLiant BL460c Gen8 server:

• Unbuffered DIMMs (UDIMMs) represent the most basic type of memory module. With UDIMMs, all address and control signals, as well as the data lines, connect directly to the memory controller across the DIMM connector. UDIMMs offer the fastest memory speeds, lowest latencies, and (relatively) low power consumption. However, they are limited in capacity.

• Registered DIMMs (RDIMMs) lessen direct electrical loading by having a register on the DIMM to buffer the Address and Command signals between the DRAMs and the memory controller. This allows each memory channel to support up to two dual-rank DIMMs increasing the amount of memory that the ProLiant BL460c Gen8 server can support. With RDIMMs, the partial buffering slightly increases both power consumption and memory latency.

• Load Reduced DIMMs (LRDIMMs) use a memory buffer on all memory signals and perform rank multiplication. Rank multiplication allows ProLiant BL460c Gen8 servers to support two quad-ranked DIMMs on a memory channel. LRDIMMs provide the largest possible memory footprint per server. However, LRDIMMs use the most power and have the highest latencies for the same memory clock speeds.

Rules for populating processors and DIMM slots

• If only one processor is installed, only half of the DIMM slots are available.

• To maximize performance, balance the total memory capacity between all installed processors and load the channels similarly whenever possible.

• When two processors are installed, balance the DIMMs across the two processors.

• White DIMM slots denote the first slot to be populated in a channel.

• Place the DIMMs with the highest number of ranks in the white slot when mixing DIMMs of different ranks on the same channel.

• Use the DDR3 Memory Configuration Tool located at: hp.com/go/ddr3memory-configurator to determine the proper placement of DIMMs for your system.

18 Microsoft Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments, Dynamic Memory Allocation Considerations: http://technet.microsoft.com/en-us/library/cc794548(v=exchg.80).aspx

19 Microsoft Support Policies and Recommendations for Exchange Servers in Hardware Virtualization Environments, Dynamic Memory Allocation Considerations: http://technet.microsoft.com/en-us/library/cc794548(v=exchg.80).aspx


Rules for DIMM types

• Do not mix UDIMMs, RDIMMs, or LRDIMMs.

• RDIMMs operating at either 1.35V or 1.5V may be mixed in any order, but the system will operate at the higher voltage.

• DIMMs of different speeds may be mixed in any order – the server will select the lowest common speed.

Sizing and design considerations

• For virtualization of Exchange, RDIMMs are recommended. RDIMMs provide greater memory capacities than UDIMMs and offer better performance at lower costs than LRDIMMs.

• For best performance, use one DIMM per channel and distribute DIMMs across all memory channels. This configuration reduces latency and increases aggregate throughput.

• Select CPU and DIMMs with 1600 MT/s maximum memory speeds. Memory running at 1600 MT/s has a 20% lower latency than memory running at 1333 MT/s.

Storage sizing considerations

Thin provisioning and Adaptive Optimization using 3PAR StoreServ

The thin provisioning feature of HP 3PAR StoreServ allows administrators to increase back-end storage utilization by using the available storage capacity more efficiently. Thin provisioning helps to reduce the storage required during the initial deployment of large mailboxes, when the capacity actually consumed on disk is significantly lower than the provisioned mailbox size.

From a Windows Server and Exchange design perspective, storage appears fully provisioned, while the back-end storage capacity is only allocated on write. This approach allows you to configure the Exchange storage layout based on the end-state design, without having to allocate all disk resources at initial deployment.

The Adaptive Optimization feature allows you to create a synthetic LUN, which can be comprised of different disk types or tiers. AO moves sub-LUN data regions between storage tiers based on usage patterns. For example, AO policies can be configured to use three storage tiers (SSD, Fibre Channel and Nearline), allowing active sub-LUN data regions to be placed on the faster FC or SSD tiers, while relatively inactive data regions can be moved to the slower, higher capacity Nearline tier. Adaptive Optimization runs on a user-defined schedule (not as user access patterns change), analyzes how data was accessed in the previous (user-defined) measurement period, and moves sub-LUN regions of data based on the observed access patterns and its analysis algorithm.

AO and TP allow administrators to increase back-end storage utilization and tune synthetic LUNs using different storage tiers; however, sizing Exchange requires some additional considerations. Sizing tools such as the Microsoft Exchange 2010 Mailbox Server Role Requirements Calculator are not aware of thin provisioning and Adaptive Optimization, so it is important that you still meet the I/O performance and initial storage capacity boundaries of the solution.

Storage sizing

Designing storage for Exchange involves striking a balance between the capacity and performance needed to support the expected load on the storage subsystems. The following steps review the process of determining the I/O performance and capacity requirements for Exchange. In steps one and two, you can use the Microsoft Exchange 2010 Mailbox Server Role Requirements Calculator to model different storage configurations. Steps one and two establish the minimum boundaries needed to meet the I/O performance and capacity requirements of the solution. These minimum boundaries are then used in steps three and four when factoring in TP and AO (a worked sketch follows the four steps below).

Step one

• Determine the storage performance (IOPS) required to support the quantity of users at their estimated user workload profile.

• Determine the storage capacity (MB or GB) on disk required to support the quantity of users at their mailbox size.

Note: Determine the highest requirement between performance and capacity as this is your minimum boundary.

Step two

• Determine the disk type (SSD, Fibre Channel, and Nearline) you wish to use and its performance characteristics so that you can calculate the performance of the disk(s) in a RAID configuration. Ensure you have enough drives in the storage tier to meet the performance requirements in step one.

• Determine the disk type you wish to use and its capacity so that you can calculate the quantity of the disks in a RAID configuration to meet the capacity required.


Note: The performance or capacity boundary you determined in step one is critical to include in this step.

Step three

• Use the number of IOPS that are required to support the user workload from steps one and two to calculate the distribution of disks required.

• Ensure that the initial capacity allowed for the migration of user mailbox data accounts for the initial capacity design as well as the capacity required before you need to grow your thinly provisioned LUNs (adding more storage).

Note: Step three is about the use of thin provisioning after the previous steps have been done. The performance and capacity boundaries you discovered in steps one and two are used to choose the appropriate mix of drive types to support the performance and capacity.

Step four

• When using Adaptive Optimization, choose a storage tier to support the initial I/O performance and capacity requirements determined in step three.

• After defining the initial tier storage capacity, determine how much capacity can be allocated for the other storage tiers for sub-LUN data moves.

Note: Step four is about the use of Adaptive Optimization. The FC tier is the recommended tier to deploy vLUNs when using AO. AO can then move sub-LUN data to different storage tiers based on AO policies.
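Pulling steps one and two together, the following is a minimal Python sketch of the boundary calculation. The user count, mailbox size, headroom factor, and per-drive IOPS and capacity figures are assumptions for illustration only; use the calculator and your actual drive specifications for real designs.

# Illustrative performance/capacity boundary check for steps one and two (all inputs assumed).
import math

users = 5000
iops_per_mailbox = 0.15            # 150-message DAG profile from table 1
mailbox_size_gb = 2                # assumed mailbox quota
io_headroom = 1.2                  # assumed 20% I/O headroom

required_iops = users * iops_per_mailbox * io_headroom         # 900 IOPS (performance boundary)
required_capacity_gb = users * mailbox_size_gb                  # 10,000 GB (capacity boundary, before DB overhead)

# Step two: translate boundaries into drive counts for the chosen tier (RAID10 FC assumed).
drive_iops = 180                   # assumed usable IOPS per 15K FC drive
drive_capacity_gb = 450            # assumed usable capacity per drive
drives_for_iops = math.ceil(required_iops / drive_iops)                          # 5 drives
drives_for_capacity = math.ceil(required_capacity_gb * 2 / drive_capacity_gb)    # RAID10 doubles raw need -> 45 drives
print(max(drives_for_iops, drives_for_capacity))                                  # capacity is the binding boundary here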

Sizing and design considerations

• Size the I/O Performance and Capacity requirements first, before TP and AO.

• Ensure I/O performance is met when enabling thin provisioning.

• Size TP for initial capacity, plus reserve capacity before having to add additional physical storage.

• Deploy vLUNs on the FC tier when using Adaptive Optimization and allow AO to move data to other tiers.

Configuration and setup

HP 3PAR StoreServ provisioning steps

Deploying Exchange 2010 in an enterprise environment requires a significant amount of time and effort. Configuring storage for thousands of mailboxes requires provisioning potentially hundreds of storage volumes to support both the active and passive Exchange database copies. For DAS deployments, storage arrays are configured at each individual server using an Array Configuration Utility (ACU). Depending on RAID functionality, disks will need to be configured as RAID disk sets or as individual RAID-less disks. For RAID-based configurations, spare disks and controller hardware should be added to provide additional fault tolerant functionality. There are many manual configuration steps when deploying large-scale DAS solutions. While command line scripting reduces the number of tasks, it is still a significant effort to provision storage across hundreds of DAS volumes.

With HP 3PAR StoreServ 10400, storage provisioning and management are performed using HP 3PAR StoreServ Management Console (MC) graphical interface or Command line interface (CLI). You can quickly create virtual volumes in large batches without having to manually define which physical drives are to be used and how data protection needs to be set up.

HP 3PAR StoreServ autonomic storage management allows virtual volumes or server hosts to be grouped together, referred to as sets, which are managed as a single object. Adding an object to an autonomic group applies all previously performed provisioning actions to the new member. For example, when a new host is added to a set, all volumes are autonomically exported to the new host with no administrative intervention. Similarly, when a new volume is added to a set, the volume is also automatically exported to all hosts in the set.

Creating thinly provisioned virtual volumes and virtual volume sets using the MC

1. From the Menu bar, select Actions > Provisioning > Virtual Volume > Create Virtual Volume.

2. Use the Create Virtual Volume wizard to create a base volume.

3. In the System field, select the StoreServ array to place the new virtual volumes.


4. In the Name field enter a VV label (when creating multiple VVs a number is appended to the name field for each VV label).

5. In the Size field enter the VV capacity (this is the fully provisioned volume size).

6. In the Provisioning Allocation Settings field select thinly provisioned.

7. Select a User Common Provisioning Group (CPG must already be configured).

8. Enter value for Allocation warning (avoid setting allocation limits).

9. In the Count Field, enter the number of VVs to be created. In 3PAR OS 3.1.2 this is under the grouping section and is labeled as Number of Volume(s) (1-999).

10. In the Set Name field, enter the new or existing name to be associated with this grouping of VVs.

11. Select Show Advanced Counters; in 3PAR OS 3.1.2 this is called show advanced options.

12. On the Policies and Geometry step, select Zero Detect (in 3PAR OS 3.1.2, this is already selected).

Figure 18 provides an example configuration of the Create Virtual Volume wizard used to create the virtual volumes. This example will create 100 thin provisioned VVs labeled Exchange.0 – Exchange.99.

Figure 18. Creating virtual volumes

You can also create the virtual volumes using the CLI with the createvv command. Referring to the example in figure 18, the same input arguments are provided to the createvv command.

Note

The CLI is case sensitive, use lower case for commands and match case for parameters entered.

Cli% createvv -tpvv -usr_aw 50 -cnt 100 -pol zero_detect EXCH-R10-FC Exchange 1500G

After creating the virtual volumes in the CLI, you can group them into a new virtual volume set using the createvvset command:

Cli% createvvset -cnt 100 Exchange Exchange.0


To display the results of the newly created VVs, use the showvv command.

Cli% showvv Exchange.*

Unlike DAS deployments, where storage volumes can only be presented to a single server, SAN storage volumes can be presented to multiple servers. Presenting the same storage volumes to multiple servers allows VMs to be moved between physical hosts in the vSphere cluster. Exchange VMs can be live migrated using vMotion or, in the event of a physical host failure, automatically failed over to another node and restarted. However, the process of presenting large numbers of storage volumes to multiple hosts is not a trivial task.

To simplify this task, you can use the MC to manage host sets. The concept of host sets is similar to virtual volume sets, where host servers are grouped together for management. Maintenance tasks such as adding a new host do not require manual steps to export all the virtual volumes to the new host. Once a new host is added to the host set, all the exported VVs presented to the host set will automatically be presented to the newly added host. To present a VV to a host, you can use the MC to create VLUNs by exporting a VV or VV set to a host or host set.

Creating VLUNs using the MC

1. From the Menu bar, select Actions > Provisioning > VLUN > Create VLUN.

2. Select the HP 3PAR StoreServ system containing the VV set.

3. Select the Virtual Volume Set radio button and highlight the virtual volume set to be presented (exported).

4. In the Settings step, select Host Set and highlight the host set to be exported to. In 3PAR OS 3.1.2 this is called Export To.

5. Enter LUN ID to specify starting range for LUN number.

You can also create the VLUNs using the CLI with the createvlun command. The following example creates 100 VLUNs, labeled Exchange.0 through Exchange.99, exporting them to a host set named ESX_Hosts. Note that the first LUN ID starts at 10 in this example. As a best practice, leave gaps in the LUN ID numbers when exporting VV sets. This gap allows room to add additional VVs to an existing VV set. Exporting VVs requires the VLUN LUN IDs to be contiguous within the same VV set.

Cli% createvlun -cnt 100 Exchange.0 10 set:ESX_Hosts

To display which VLUNs have been exported to a host set, use the showvlun command.

Cli% showvlun -host set:ESX_Hosts

Configuration and setup considerations

• Create multiple virtual volumes using virtual volume sets.

• Create host sets for all physical hosts in the vSphere cluster.

• When exporting VLUNs leave gaps in VLUN IDs, allowing room to add VVs as needed.

Virtualized Exchange environments involve large numbers of storage volumes; all needing to be exported to each member of the vSphere cluster. Additional host servers can be added to the cluster as computing demands grow. For each additional host, all shared storage volumes must be exported to the new server.

HP 3PAR StoreServ can be configured using direct or fabric connections. For direct connections, see the Virtual Connect section for more information; for fabric connections, ensure that the FC switches have been zoned to allow the hosts to access the HP 3PAR StoreServ. The hosts' worldwide names (WWNs) need to be visible to the array before you can create the host in the MC or CLI.

Creating a host in a host set using the MC

1. From the Menu bar, select Actions > Hosts > Create Host.

2. Use the Create host wizard to create new hosts and host set.

3. In the System field select the StoreServ array.

4. In the Name field enter a host name.

5. In the Set Name field enter a new or existing Host set to be associated with this host.


6. In the Persona field select 6 Generic-legacy (see figure 19). In 3PAR OS 3.1.2 select 11 VMware.

7. On the Fibre Channel step, select the host WWNs from available list and add them to the assigned list.

Figure 19. Create host and host set

You can also create the hosts using the CLI with the createhost command. The following example creates a host named ESX-host1, using host persona 6 and its two WWNs; the host is then added to a host set named Exchange-Cluster1.

Cli% createhost -persona 6 ESX-host1 10000000C9724AB2 10000000C97244FE

After creating the hosts in the CLI, you can group them into a new host set using the createhostset command:

Cli% createhostset Exchange-Cluster1

Cli% createhostset -add Exchange-Cluster1 ESX-host1

To display the results of the newly created hosts and host sets, use the showhost and showhostset commands.

Cli% showhost ESX-host1

Cli% showhostset Exchange-Cluster1

Configuration and setup considerations

• Configure at least two WWNs per host for fault tolerance and multipath load balancing.

• Use a host set to provide autonomic access to VLUNs exported to the vSphere cluster.


3PAR StoreServ Adaptive Optimization setup

AO configuration has changed with the release of HP 3PAR StoreServ OS 3.1.2. AO is now integrated into the HP 3PAR StoreServ OS, instead of being part of System Reporter. AO configuration is performed using the HP 3PAR StoreServ MC or CLI. Figure 20 demonstrates launching AO from the MC. Figure 21 details AO policy configuration using 3.1.1 and prior versions.

Figure 20. Create AO Configuration (version 3.1.2)

Figure 21. HP 3PAR StoreServ InForm 3.1.1 and prior versions of the System Reporter AO policy configuration


The Create AO Configuration wizard allows you to configure AO policies. The following options should be defined in each AO policy.

Mode

AO policies can be configured using one of three modes: performance, balanced and cost. Performance mode biases the tiering algorithm to move more data into faster tiers, while cost mode biases the tiering algorithm to move more data into the slower tiers. Balanced mode strikes a balance between cost and performance.

Tiers

AO policies allow you to define up to three different storage tiers; the tiers are numbered 0, 1 and 2, with tier 0 considered the highest performance and tier 2 the lowest. Each storage tier is defined by a Common Provisioning Group (CPG), which defines the RAID level, disks used, disk type and speed. AO uses the cost and performance characteristics of each tier to tune application workloads. Tiers also define how much capacity AO can use by specifying the size a given tier can grow to.

AO policies should be defined per application workload, allowing each policy to be configured to suit a given type of workload. Each application can have its own AO policy, which can be based on a different set of CPG definitions. For instance, a SQL Server AO policy may use a CPG defined with the SSD tier for very high I/O requirements, while an AO policy for Exchange may use a CPG with FC to meet its I/O requirements. For enterprise applications hosted by multiple servers, the application servers can share the same set of CPGs, allowing a single AO policy to manage the application workloads common to multiple servers.

AO uses the following parameters to analyze the workload to make decisions on data region movement between storage tiers.

• Average tier service times

• Average tier access rate densities

• Space available in the tiers

Schedule AO

In 3PAR OS 3.1.2, AO has the option to run now or to create a schedule. The schedule can be defined to run hourly, daily, multiple times daily, only once, or using the advanced options. The schedule feature also has an analyze-only option, which reports on data regions without moving them.

In 3PAR OS 3.1.1 and older, the schedule feature has three fields: date, weekday and hour. You can select a date, weekday and hour to run; you also have the option to select multiple values, such as Monday and Tuesday, or use the * (asterisk) to run at every interval. For instance, if you select * for the date, weekday and hour fields, AO will run every day of the month, every day of the week, every hour.

Measurement duration

The measurement hours are closely tied to the schedule settings. The schedule defines when AO executes and the measurement hours define how many hours the analysis will consider. For example, if AO is set to run every day at 5:00PM with a measurement window of 24 hours, AO will analyze data from the prior 24-hour period. The measurement period should align with peak application workloads, allowing AO to analyze data when the system is under its highest loads.

The analysis calculates the access rate (I/O accesses / (GiB * minute)) over the measurement interval for all VV regions in the CPGs, as well as the average access rates for each CPG (tier). Based on the results of the analysis, data is then moved from one tier to the other as needed.
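Conceptually, the access rate density input to that analysis is just I/O divided by occupied space and time. The sketch below is a rough illustration of the arithmetic only, not HP's algorithm; the region size, I/O counts and 24-hour window are assumptions chosen to show how a "hot" region compares with a tier average.

# Rough illustration of access rate density: I/O accesses / (GiB * minute) over the measurement window.
measurement_minutes = 24 * 60

def access_rate_density(io_accesses, region_size_gib, minutes=measurement_minutes):
    return io_accesses / (region_size_gib * minutes)

hot_region = access_rate_density(io_accesses=250000, region_size_gib=1)              # ~173.6
fc_tier_average = access_rate_density(io_accesses=40000000, region_size_gib=20000)   # ~1.4
# A region far above the FC tier average is a candidate to stay on (or move to) the faster tier.
print(round(hot_region, 1), round(fc_tier_average, 2))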

Exchange 2010 AO policy recommendations

For Exchange 2010, the AO recommendations use CPGs configured with RAID10. RAID10 offers better I/O performance, which is especially important when using thin provisioning with Nearline storage.

Recommended tiers:

• Tier 0 - SSD disks not used (optional use, generally not required for Exchange workloads)

• Tier 1 - FC disks configured as RAID10 (use to host the virtual volumes)

• Tier 2 - NL disks configured as RAID10 (used to move I/O data regions with lower I/O access rates)

Defining the AO schedule time depends on the Exchange workload peak period. If Exchange peak periods occur during the typical work week (Monday through Friday), select "*" for the date field of the schedule, and then multi-select the work week days Monday, Tuesday, Wednesday, Thursday and Friday. The hour field should be set to run shortly after the peak period for the day, allowing AO to perform analysis and data movement during off hours.


The measurement hours should be set to 24 hours, which will analyze the entire previous day. However, you can specify a shorter period to match just the peak period. Using a shorter measurement period will result in higher levels of data being maintained or migrated to faster tiers. The data can only be moved once per measurement period. If the AO policy is set to execute more than once during the same measurement period, data previously moved will not be eligible to be moved again until the next measurement period.

For Exchange workloads using thin provisioning, the mode should be initially set to “performance” to allow increased I/O performance using the Fibre Channel tier.

Configuration and setup considerations

• Define the AO schedule to run only during the work week and after the peak period.

• Use a 24-hour measurement period.

• Use performance mode initially to maintain I/O performance, and then adjust to more conservative modes if needed.

Measuring AO performance

To analyze capacity and performance within HP 3PAR StoreServ, System Reporter can be used. System Reporter provides several report options including quick, scheduled and custom reports. To measure and analyze Exchange storage capacity and performance, two custom reports are particularly useful:

• CPG space vs. time usage report

• Region I/O density report

CPG Space vs. Time Report

The CPG Space vs. Time Report details how space is consumed in the array over a specified time period. For Exchange testing this report measures how storage space is distributed between the two tiers of storage defined by the AO policy. The report can also be helpful in determining the application workload steady state, where AO has normalized the space used on each of the storage tiers.

To create a custom report, click the Custom Reports tab in System Reporter and select the following parameters:

1. Sample resolution: “Hourly”.

2. Report: “CPG Space”.

3. Selected Systems: choose the array containing the CPGs from the available systems.

4. Click the “Build Report Menu” button (see Figure 22).

Figure 22. Build CPG Space report


On the Hourly CPG Space Usage screen (see figure 23), configure the following sections:

1. Section 1, Report Selection, Compare option: “CPG”.

2. Section 2, Time Selection: define Begin and End times to analyze.

3. Section 3, Object Selection: select the Exchange NL and FC CPGs to be compared.

4. Section 4, Format Selection: no changes were necessary.

5. Use the “Generate Report” button to create the report.

Figure 23. Hourly CPG Space Usage

The CPG Used Space graph measures the CPG space used over a specific time period. In this example, the Exchange databases were initially located on the Nearline storage (denoted by the green line). However, over a several-day Exchange simulation20 period, AO analyzed data usage patterns and performed data region moves to the FC tier (denoted by the red line) in figure 24. The CPG Used Space graph provides details on how the storage capacity for the Exchange vLUNs is distributed across the storage tiers defined in the AO policy.

Figure 24. CPG Used Space

20 Using the Microsoft LoadGen tool


Region I/O density report

The Region I/O density report shows the I/O density distribution and capacity utilization between I/O regions. The report produces graphs, which are helpful in understanding how capacity is distributed across the CPGs and the active I/O data regions. To create a region I/O density report, click the Custom Reports tab and select the following parameters.

1. Sample resolution: “Hourly”.

2. Report: “Adaptive Optimization”.

3. Select Systems: choose the array from the available systems.

4. Click the “Build Report Menu” button (see figure 25).

Figure 25. Build Adaptive Optimization report.

On the Hourly Adaptive Optimization screen (see figure 26), configure the following sections:

1. Section 1, Report Selection, Report Type: “Region I/O Density”.

2. Section 2, Time Selection: define Begin and End times.

3. Section 3, Object Selection: select the Exchange NL and FC CPGs to be compared.

4. Section 4, Format Selection, Report Information: “Charts and Table”.

Figure 26. Region I/O Density


Figure 27 shows the first histogram graph of I/O rate density information for the tiered environment. The green portion of the stacked bar data indicates the capacity used by Exchange of the NL CPG tier, while the red portion of the stacked bar data indicates the capacity used by Exchange on the FC CPG tier.

Looking at the histogram in this example, there was a wide distribution of capacity across the I/O rate density regions, indicating that Exchange has varying degrees of I/O density. The Exchange databases' capacity is distributed between the Nearline tier (sum of all green bars) and the Fibre Channel tier (sum of all red bars).

Figure 27. Space distribution

The data regions are broken out from left to right with increasing I/O rate densities. The first data region bucket contains the I/O rate density from 0 up to 0.015625 I/O / (GiB * min). Each successive bucket doubles the previous value.
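A small sketch of how those bucket boundaries grow, assuming the doubling described above:

# Bucket upper bounds for the region I/O density histogram: first bucket 0..0.015625, each next bucket doubles.
first_upper_bound = 0.015625   # I/O accesses / (GiB * min)
buckets = [first_upper_bound * 2 ** i for i in range(8)]
print(buckets)   # [0.015625, 0.03125, 0.0625, 0.125, 0.25, 0.5, 1.0, 2.0]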

Figure 28 is the second histogram of the AO region I/O density report. This histogram shows the distribution of I/O across the data regions. Similar to space usage, I/O accesses are distributed over a wide range of data region buckets. However, in this example the FC tier, providing only 16% of the Exchange storage capacity, provides nearly half of the I/O accesses (~46%).

Figure 28. I/O access distribution

Configuration and setup considerations

• Use the CPG Space vs. Time report to monitor AO region moves and determine when steady state has been achieved and how capacity is distributed between storage tiers.

• Use the AO region I/O density report to measure CPG capacity per tier and compare it with I/O region density to determine how capacity and I/O accesses are distributed between storage tiers.


Virtual Connect

Virtual Connect is a physical-layer machine abstraction technology, just as hypervisor software abstracts physical servers into virtual machines. Virtual Connect consists of interconnect modules for Ethernet and Fibre Channel and includes embedded software for configuring single HP BladeSystem c-Class enclosures. The software and hardware presented in the Virtual Connect modules provides the abstraction functions so the external SAN and Ethernet networks appear as a pool of connections within the HP BladeSystem enclosure.

Instead of using the physical MAC addresses and WWNs created when the NICs or host bus adapters (HBAs) are manufactured, Virtual Connect Manager creates specific server connection profiles. The profiles assign unique MAC addresses and WWNs, which can then be associated with HP BladeSystem enclosure bays so that these addresses are held constant. The server profile that Virtual Connect manages includes the server's LAN and SAN connectivity settings, Virtual Connect network assignments, managed MAC addresses and WWNs, and Fibre Channel boot parameters.

This is similar to the concept a hypervisor follows when configuring virtual machines. In a virtual machine environment, the hypervisor defines the VM settings and logical IDs, including the network node address or virtual NIC identities, for each virtual machine. The hypervisor software implements a layer 2 bridging device (a virtual switch) to provide external network connectivity to the virtual machines. With Virtual Connect, the Virtual Connect Manager defines the NIC identities for each physical server blade and uses a hardware implementation of a layer 2 bridge to provide external network connectivity to the server blades.

Flex-10 for Virtual Connect

Flex-10 technology is an extension of HP Virtual Connect technology. It is a hardware-based, I/O abstraction solution that allows you to partition a 10Gb Ethernet connection, to regulate the bandwidth of each partition, and to prioritize each of those partitions. A single 10Gb network port can represent up to four physical NIC devices, or FlexNICs, for a total bandwidth of 10Gbps. Flex-10 hardware consists of two components: a 10Gb Flex-10 LAN-on-motherboard (LOM) device or 10Gb Flex-10 mezzanine card; and the HP Virtual Connect Flex-10 10Gb Ethernet Module.

The 10Gb Flex-10 LOM or mezzanine card connects to a 10Gb server port on the BL460c Gen8 server blade. To the server, each of the four FlexNICs appears as a discrete NIC, with its own driver. Server Administrators can define the bandwidth available to each FlexNIC from 100Mb to 10Gb through the Virtual Connect interface. While the FlexNICs share the same physical port, traffic flow for each FlexNIC is isolated with its own MAC address and virtual local area network (VLAN). The Virtual Connect Flex-10 Ethernet Module recognizes Flex-10 connections from the server as part of a Virtual Connect server profile. Figure 29 depicts the logical representation of FlexNICs connection to the interconnect module in the c7000 chassis.

Figure 29. Flex-10 logical representation of the FlexNICs


Exchange virtualization network requirements

Virtualizing Exchange requires multiple networks configured to isolate protocols and functions. For the vSphere 5 hypervisor, two networks should be defined. The first is used to isolate the management network, connect the ESXi servers together and allow management from the vCenter console. While this network does not require a lot of bandwidth, it is latency sensitive and needs to be highly available. The management network should be isolated from other networks and protocols to ensure vSphere cluster communication between nodes is not delayed. The second network is used for vMotion, allowing live migration of Exchange virtual machines between cluster nodes. This link requires a significant amount of bandwidth to move VMs with large amounts of memory between nodes.

From an Exchange virtual machine perspective, Exchange needs two networks. The first is for client traffic; this network is used for public traffic connecting to the mail services and needs relatively low latency and high availability. While each Exchange VM does not need a multi-gigabit network for client traffic, the physical host should be configured with two 1Gb networks (one from each NIC port) to handle the aggregated traffic of multiple Exchange VMs. The second Exchange network is used for mailbox database replication. This network can use a significant amount of bandwidth, especially when seeding databases between servers.

Virtual Connect devices are managed using Virtual Connect Manager (VCM) web application. VCM allows you to define server profiles, Ethernet networks and SAN fabrics. Table 2 lists the recommended network configurations to be defined in VCM for Exchange virtualization.

Table 2. VC networks using Flex-10

Network | Bandwidth
MAPI Client network | 1Gb
vMotion | 4Gb
Replication | 4Gb
HA (ESXi management traffic) | 1Gb

FlexNICs to network assignments

The BL460c Gen8 server blade, configured with a 10Gb FlexibleLOM adapter, allows you to create up to eight FlexNICs, four per port. The FlexNICs are labeled LOM:1 for port 1 and LOM:2 for port 2 and defined as a, b, c and d. To provide high availability and aggregate bandwidth, LOMs from both ports can be assigned to the same network. Figure 30 details the network assignments when using all four FlexNICs for networking. In this example, LOM:1-a (port 1) and LOM:2-a (port 2) have both been assigned to the MAPI network.

Figure 30. FlexNIC to network assignment
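Since the four FlexNICs on a port share its 10Gb of bandwidth, a quick check that the table 2 assignments fit within each port is worthwhile (this sketch simply assumes the assignments are mirrored on both LOM ports for redundancy, as in figure 30):

# Verify the per-port FlexNIC bandwidth allocation from table 2 does not exceed the 10Gb physical port.
flexnic_gb = {"MAPI": 1, "vMotion": 4, "Replication": 4, "HA (ESXi mgmt)": 1}
port_capacity_gb = 10
allocated = sum(flexnic_gb.values())      # 10 Gb
assert allocated <= port_capacity_gb, "FlexNIC partitions exceed the 10Gb port"
print(allocated, "of", port_capacity_gb, "Gb allocated per port")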


Convergence with Virtual Connect FlexFabric adapters

Virtual Connect provides both fabric-attached and direct-attached options when connecting to 3PAR StoreServ systems. Traditional fabric-based solutions use SAN switches connected to either 20-port or 24-port Virtual Connect Fibre Channel Modules. The VC Fibre Channel modules connect to the blade servers using HBA mezzanine cards installed in each server blade. Traditional Fibre Channel configurations offer the flexibility of connecting many different storage and backup devices deployed in your environment, but fabric-based SANs are also complex and expensive to deploy and maintain.

As an alternative to a fabric-attached SAN, HP provides support for a direct-attached SAN with the 3PAR StoreServ. Using the Virtual Connect FlexFabric adapters, you can converge Ethernet, Fibre Channel, or accelerated iSCSI traffic into a single 10Gb data stream instead of maintaining entirely separate networks for Ethernet and FC. FlexFabric removes the need for additional HBA mezzanine adapters, and instead uses the FlexFabric 10Gb 2-port 554FLB FIO Adapter installed in the BL460c Gen8 server blade FlexibleLOM connector (see figure 4).

A FlexFabric adapter provides standard NIC functions, FlexNIC capabilities, and Fibre Channel or iSCSI FlexHBA capability. Each FlexFabric adapter contains two 10Gb Ethernet ports that you can partition into four Flex-10 physical functions per port, either as FlexNICs or FlexHBAs. FlexHBA is an actual PCIe physical function on the FlexFabric adapter that you can configure to handle storage traffic. The server ROM and hypervisor recognize the PCIe function as an HBA device.

Note

FlexHBAs can only be defined as the second physical function of each FlexFabric adapter port.

The FlexFabric adapter encapsulates Fibre Channel packets as FCoE and consolidates the Fibre Channel and Ethernet traffic into one 10Gb data stream. The FlexFabric interconnect module separates the converged traffic. Fibre Channel and Ethernet traffic continue beyond the FlexFabric edge using the existing native Ethernet and Fibre Channel infrastructure.

Using FlexFabric, the server LOM’s physical functions can be configured as either FlexNIC or FlexHBA21, eliminating the need for an additional HBA mezzanine card in each server. The VC FlexFabric modules also replace both the VC Flex-10 network modules as well as the VC Fibre Channel modules, reducing the requirement from four modules down to two. For more details and how to deploy, download the HP Virtual Connect Fibre Channel Networking Scenarios cookbook at http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01702940/c01702940.pdf.

Configuration and setup considerations

• Isolate MAPI, replication, management and vMotion traffic and protocols into separate VC networks.

• Configure FlexNICs to meet the bandwidth and availability needs of the networks, especially the MAPI, replication and vMotion networks.

• To reduce SAN infrastructure, consider a Direct-Attach SAN using FlexFabric VC modules and server FlexFabric LOM adapters.

VMware vSphere 5

Distributed Resource Scheduler

Virtualizing Exchange 2010 deployments allows you to combine the DAG features available in Exchange 2010 with hypervisor failover clustering. Hypervisor failover clustering provides the capability to move VMs between hosts in the event of a physical host failure or during planned maintenance events. However, Exchange is not virtualization aware, so there is no logic or policy built into Exchange to help determine the proper placement of Exchange VMs on physical hosts.

It is important to preserve Exchange native HA functionality by ensuring DAG members containing copies of the same database are placed on separate physical hosts. It is also important to ensure that VMs containing the CAS and HT roles are distributed between the physical hosts to prevent a wide scale service disruption in case of a physical host failure. While Exchange does not have built in logic or polices, VMware vSphere 5 allows you to create rules to manage VM placement, using the Distributed Resource Scheduler (DRS).

DRS allows you to manage and distribute resources within the vSphere cluster. There are two types of rule sets: VM to VM and VM to host, each of which can be set as either an affinity or anti-affinity rule. VM to VM rules allow you to define whether VMs should be kept together on the same physical host (affinity rule) or separated onto different physical hosts (anti-affinity rule). VM to host rules are similar to VM to VM rules in that they define whether a VM is, or is not, placed on a particular physical host.

21 Only one FlexHBA per LOM port

By default, VM to VM rules are defined as "should run" rules, which means that DRS will attempt to enforce the rule; however, if the rule cannot be enforced it can be overridden. For example, if a rule indicates that two VMs need to run on separate physical hosts, but one of the physical hosts is down, then both VMs will be placed (overriding the rule) on the remaining host. VM to VM rules that have been overridden will be listed with a violation status.

VM to host rules support both "should" and "must" variants, which define whether the rule is strictly enforced. When a rule is defined as "must", the VM will not be powered on if doing so would violate the rule.

DRS automation

Depending how you configure DRS automation, rules are processed either manually or automatically. DRS provides three levels of rule automation:

• Manual, VM placement or migration recommendations are only suggested.

• Partially automated, VMs are automatically placed at power on, but DRS will only suggest migration recommendations.

• Fully automated, VMs are automatically placed at power on and will be automatically migrated from one physical host to another during failure scenarios.

Manual and partially automated modes require server administrators to manually apply the DRS recommendations, while in fully automated mode, VMs will be automatically migrated using vMotion. Setting automation to fully automated allows system resources to be migrated during a physical host outage and then restored after the failed physical host is recovered. Figure 31 displays a VM to VM rule violation due to the target host being down.

Figure 31. Rule violation

DRS processes the rule based on the level of automation that is set. In Figure 32, DRS is set to partially automated, which can automatically power on VMs but will prompt to apply the DRS recommendation when migrating a VM between physical hosts.

Figure 32. Apply DRS recommendation


Anti-affinity rules for Mailbox server VMs

For Mailbox VMs, anti-affinity rules allow you to separate mailbox VMs, so that they are not placed together on the same physical host. To create an anti-affinity rule:

1. Select Rules from vSphere DRS.

2. Select Add to bring up the rule window.

3. In the Name field, enter a descriptive name for the rule.

4. In the Type field, select "Separate MBX servers".

5. Click the Add button to add the mailbox virtual machines to be separated.

Figure 33 depicts an anti-affinity rule defined with three mailbox VMs.

Figure 33. Separate MBX servers

This anti-affinity rule will attempt to place the Mailbox VMs on separate physical hosts in the vSphere cluster whenever possible. It is important to design the anti-affinity rules to be consistent with your production configuration. Using the four-node cluster example in the Balancing vCPU resources section, the six Mailbox server VMs can be divided into two logical groups of three Mailbox server VMs each. Within each group, the Mailbox server VMs can be used to maintain a set of mailbox databases and their replicas. The DRS anti-affinity rules can be used to prevent these Mailbox VMs from being hosted on the same physical server. By ensuring the Mailbox server VMs containing copies of the same database are hosted by separate physical servers, the Exchange DAG functionality is preserved. Figure 34 depicts how Mailbox server VMs can be separated using DRS anti-affinity rules.


Figure 34. Anti-affinity rules mailbox server VMs

Anti-affinity rules for CAS/HT server VMs

Similar to the anti-affinity rules used to separate mailbox VMs, CAS/HT anti-affinity rules should be defined to separate the CAS/HT VMs. For CAS/HT, the anti-affinity rules distribute VMs across physical nodes to reduce the impact of a physical node failure. In this configuration no more than two CAS/HT server VMs would be impacted by a physical node failure, requiring those VMs to fail over to another physical node. Figure 35 depicts how CAS/HT server VMs can be separated using DRS anti-affinity rules.

Figure 35. Anti-affinity rules CAS/HT server VMs

Configuration and setup considerations

• Use DRS to create and configure VM to VM anti-affinity rules to separate mailbox server VMs containing copies of the same databases.

• Use DRS to create and configure VM to VM anti-affinity rules to separate CAS/HT server VMs, reducing the number of CAS/HT server VMs needing to be failed over in the event of a physical node failure.

• Set automation to fully automated to allow VMs to be moved during and after a failure scenario.



vMotion

VMware vMotion is used to live migrate VMs between ESXi hosts with no perceptible downtime. vMotion is a key component of DRS functionality, allowing VMs to be moved based on policy and rule settings. vMotion enables the migration of VMs under production workloads while users are actively using system resources. The live migration process first copies the execution state of a VM from the source physical host to the target physical host. The execution state consists of three components:

• Virtual devices (CPU, network and disk adapters)

• External connections (networking and SCSI devices)

• Virtual machine’s physical memory

The VM’s physical memory component is the largest of the three components and requires the most time to transfer. The transfer of physical memory is done in three phases:

• Phase 1: Guest trace, which places traces on the VM’s memory pages to track modifications made during the transfer process.

• Phase 2: Pre-copy, which performs an initial copy of the source VM’s memory to the target VM and then performs iterative copies of memory pages that are modified while the VM continues to operate.

• Phase 3: Switchover, during which the VM is quiesced (paused) on the source physical host, allowing a final sync of memory changes before control of the VM is transferred to the target physical host.
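
For reference, a live migration of this kind can also be initiated manually with VMware PowerCLI; the VM and host names below are illustrative assumptions.

# Live migrate a mailbox VM to another ESXi host in the cluster (hypothetical names)
Move-VM -VM "MBX1" -Destination (Get-VMHost -Name "esxhost02.example.local")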

During phase 2, there is a noticeable spike in source physical host CPU usage as the memory transfer occurs; Figure 36 measures CPU usage on the source physical host before, during and after the vMotion migration. In this example, the source physical host contains two mailbox VMs that are both running an Exchange simulation using Microsoft LoadGen. Before migrating one of the mailbox VMs, the source physical host CPU usage averaged approximately 34%. During the vMotion migration, CPU usage peaked momentarily at 52% as memory pages were copied. After the migration completed, CPU usage on the source physical host dropped to 22%.

Figure 36. vMotion CPU usage

In phase 3, the mailbox VM is quiesced to allow a final sync of memory page modifications to be transferred to the target. During this switchover process, Exchange experiences a brief service latency spike. In this example, the Exchange Information Store I/O database read average latency briefly spiked as the VM was quiesced and switched over. Figure 37 details the latency spike as the VM is quiesced and then returns to normal.

Note

When vMotion is used, it is important that there are available CPU resources to handle the CPU spike so that the migration process is not prolonged. VMware recommends that you leave vCPU headroom for vMotion tasks (approximately 30% of one vCPU in each physical host).


Figure 37. Latency spike during final sync

Microsoft Cluster settings for vMotion

The Microsoft Cluster service’s SameSubnetDelay default setting of 1000ms (1 second) will likely cause the Exchange databases to fail over during a migration of a mailbox VM. To prevent database failovers during live migrations, increase the SameSubnetDelay from 1000ms to 2000ms and the SameSubnetThreshold from 5 to 10 missed heartbeats. These settings are per cluster and only need to be changed on one Exchange mailbox server per DAG.

To change the cluster settings, log on to any Exchange mailbox server in each DAG, launch PowerShell, and enter the following commands.

Import-module FailoverClusters

(Get-Cluster).SameSubnetThreshold=10

(Get-Cluster).SameSubnetDelay=2000
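
The updated values can be verified from the same PowerShell session; for example:

# Confirm the heartbeat settings on the DAG's underlying cluster
Get-Cluster | Format-List Name,SameSubnetDelay,SameSubnetThreshold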

vMotion network settings

vMotion depends on high speed network links between hosts to perform migrations and should be placed on dedicated high speed network links. For large Exchange VMs with large physical memory configurations, vMotion migration performance can be increased by using multiple network adapters. Configuring multiple networks allows vMotion to load balance traffic across multiple VMkernel ports. In Table 2 (Virtual Connect section), vMotion networks are defined using two 4Gb VC networks, one from each FlexibleLOM port. The two 4Gb networks are combined in a single vSphere vSwitch by adding the two vmnic adapters. Figure 38 details the vSwitch configuration with two VMkernel ports.

Figure 38. vSwitch VMkernel ports
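
A configuration along these lines can be scripted with VMware PowerCLI. The sketch below is illustrative only; the host name, vSwitch and port group names, and IP addresses are assumptions.

# Add two vMotion-enabled VMkernel ports to the existing vSwitch (hypothetical names and addresses)
$esxHost = Get-VMHost -Name "esxhost01.example.local"
New-VMHostNetworkAdapter -VMHost $esxHost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-1" -IP 192.168.10.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $esxHost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-2" -IP 192.168.10.12 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
# For multi-NIC vMotion, also set each vMotion port group to use a different active vmnic (NIC teaming override)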


Using two vmnics, vMotion can utilize both networks as part of the migration process. Figure 39 demonstrates the combined network bandwidth of the two 4Gb VC networks. In this figure, traffic is balanced equally across the two vmnics during the vMotion migration and consumes nearly the entire aggregated 8Gb network link.

Figure 39. vMotion load balancing

Configuration and setup considerations

• Size additional CPU headroom, approximately 30% of one vCPU in each physical host for vMotion tasks.

• Adjust SameSubnetDelay and SameSubnetThreshold for each Microsoft cluster (per DAG) to prevent Exchange database failovers during live migrations.

• Use multiple networks to provide appropriate bandwidth for vMotion.

HP Customized ESXi 5 images

The new ProLiant Gen8 servers require the use of the HP Customized image for installation. The drivers for the new network and storage controllers in the ProLiant Gen8 servers are integrated in the HP Customized image and are not part of the generic ESXi image distributed by VMware. ESXi requires the drivers for these controllers to be integrated into the image because they cannot be inserted during installation. The HP Customized images for ESXi are available on the HP ProLiant servers website. For the image download page, go to: http://h18004.www1.hp.com/products/servers/software/vmware/esxi-image.html

Implementing a proof-of-concept

As a matter of best practice for all deployments, HP recommends implementing a proof-of-concept using a test environment that matches as closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HP Services representative (hp.com/large/contact/enterprise/index.html) or your HP partner.


Summary

HP ProLiant BL460c Gen8 server blades and HP 3PAR StoreServ 10400 Storage are key hardware enablers when virtualizing Exchange 2010. Virtualization and advanced SAN features allow Exchange 2010 to be right-sized, so that solutions can be scaled up or down as business needs change.

Virtualizing Exchange 2010 using HP BL460c Gen8 servers and VMware vSphere 5 also provides flexibility to quickly adjust as workload demands change. Unlike physical deployments, which lock in server and storage resources to Exchange 2010, virtualized deployments allow system resources to be more effectively utilized across the entire data center. Using HP BL460c Gen8 servers and VMware vSphere 5 provides new levels of high availability by combining the benefits of VMware vMotion live migration technology with HP Virtual Connect Flex-10 networking architecture.

Using HP 3PAR StoreServ 10400 storage technologies such as Adaptive Optimization, Dynamic Optimization and Thin Provisioning, IT organizations can manage storage I/O performance while also gaining space efficiency by pooling resources. The 3PAR storage technologies allow administrators to move and reconfigure storage resources as needed without disruption to the Exchange environment.

The sizing and design guidance described in this document has been developed to assist in the decision process for deploying Exchange 2010 in a virtualized environment.


For more information

HP BladeSystem, hp.com/go/blades

HP ProLiant, hp.com/go/ProLiant

HP Servers, hp.com/go/servers

HP 3PAR StoreServ P10000 Storage Systems, hp.com/go/P10000

HP 3PAR StoreServ Storage Family, hp.com/go/3PAR

HP Storage, hp.com/go/storage

HP Storage for VMware, hp.com/go/storage/vmware

HP Sizer for Microsoft Exchange Server 2010, hp.com/solutions/microsoft/exchange2010/sizer

HP/VMware technical papers

Reference Architecture: Implementing HP 3PAR V400 and ProLiant BL460c Gen8 with Microsoft Exchange 2010 running on VMware vSphere 5, http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-3845ENW

HP 3PAR StoreServ Storage and VMware vSphere 5 best practices, http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-3286ENW

Adaptive optimization for HP 3PAR Storage, http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA4-0867ENW

HP 3PAR VMware ESX Implementation Guide, http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03290624/c03290624.pdf

VMware vSphere Storage DRS Interoperability, vmware.com/files/pdf/techpaper/vsphere-storage-drs-interoperability.pdf

HP 3PAR Documentation

HP 3PAR System Reporter Software overview, h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-0371ENW

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.

Sign up for updates

hp.com/go/getupdated

© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries.

4AA4-5558ENW, February 2013