
CLARiiON CX3 Application Performance: A Comparison of Midrange Storage Platforms

Technology Concepts and Business Considerations

Abstract

This white paper compares the performance of the EMC® CLARiiON® CX3 models 10, 20, 40, and 80, with the CLARiiON CX models 300, 500, and 700 on three application workloads. In addition, it compares the performance of the CX3 series with the NetApp FAS3050c, HP EVA8000, and IBM DS4800 model 88 with the same application workloads.

July 2007


Copyright © 2007 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part Number H2904


Table of Contents

Executive summary
Introduction
    Audience
Using application workloads to test
    Overview of tested configurations
        EMC CLARiiON CX3-10c, CX3-20, CX3-40, and CX3-80
        EMC CLARiiON CX300, CX500, and CX700
        NetApp FAS3050c
        HP EVA8000
        IBM DS4800
    Performance comparison of application workloads
        OLTP application workload
        Exchange application workload
        Backup-to-disk application workload
Conclusion
Appendix: Array configuration settings


Executive summary

Controlled tests clearly demonstrate the performance superiority of the EMC® CLARiiON® CX3 UltraScale™ series over CLARiiON's previous-generation CX series and over comparable storage systems from NetApp, IBM, and HP. The tests, which simulated Exchange, OLTP, and backup-to-disk workloads, are described in this white paper along with detailed configuration information for each storage system.

Introduction

This white paper describes performance tests comparing EMC CLARiiON CX storage systems (models 300, 500, and 700) with CX3 UltraScale™ storage systems (models 10, 20, 40, and 80). The CLARiiON CX3 series is then compared with its nearest competition: the NetApp FAS3050c, IBM DS4800 model 88, and HP EVA8000. This paper also discusses the application workloads used in the testing (OLTP, Exchange, and backup-to-disk) and the storage system configurations tested.

As shown in the following tables, the results of these extensive performance tests clearly demonstrate the performance superiority of the CX3 UltraScale series systems over previous generation CX series and competitive offerings by NetApp, IBM, and HP.

Table 1. Application performance results for the CX and CX3 series

CLARiiON CX3 versus CX application performance

| Application workload | CX300 (60 disks, 2 GB cache) | CX3-10c (60 disks, 2 GB cache) | CX500 (120 disks, 4 GB cache) | CX3-20 (120 disks, 4 GB cache) | CX700 (240 disks, 8 GB cache) | CX3-40 (240 disks, 8 GB cache) | CX3-80 (480 disks, 16 GB cache) |
|---|---|---|---|---|---|---|---|
| OLTP (transactions/min) | 14,983 | 19,148 | 34,250 | 34,097 | 52,408 | 59,630 | 72,372 |
| Exchange (# users) | 7,040 | 7,799 | 12,579 | 16,154 | 20,080 | 23,836 | 28,655 |
| Backup-to-disk (MB/s) | 228 | 393 | 330 | 471 | 870 | 881 | 1,121 |
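The generational deltas in Table 1 are easier to see as percentages. A quick sketch, using only the figures from the table and pairing each CX model with the CX3 model tested at the same disk and cache count:

```python
# Percentage improvement of each CX3 model over the CX model tested with
# the same disk count and cache size (all figures taken from Table 1).
pairs = {
    "OLTP (TPM)":       [("CX300", 14983, "CX3-10c", 19148),
                         ("CX500", 34250, "CX3-20", 34097),
                         ("CX700", 52408, "CX3-40", 59630)],
    "Exchange (users)": [("CX300", 7040, "CX3-10c", 7799),
                         ("CX500", 12579, "CX3-20", 16154),
                         ("CX700", 20080, "CX3-40", 23836)],
    "Backup (MB/s)":    [("CX300", 228, "CX3-10c", 393),
                         ("CX500", 330, "CX3-20", 471),
                         ("CX700", 870, "CX3-40", 881)],
}

for workload, rows in pairs.items():
    for old_name, old, new_name, new in rows:
        delta = 100.0 * (new - old) / old
        print(f"{workload}: {new_name} vs {old_name}: {delta:+.1f}%")
```

Note that the comparison is not uniformly one-sided at equal configurations: the CX3-20's OLTP figure (34,097 TPM) is fractionally below the CX500's (34,250 TPM), while its Exchange and backup results are well ahead.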

Table 2. Application performance results for the CX3 series and competitive storage systems

CLARiiON CX3 series versus FAS3050c, EVA8000, and DS4800-88

| Application workload | CX3-10c (60 disks, 2 GB cache) | FAS3050c (262 disks, 8 GB cache) | CX3-20 (120 disks, 4 GB cache) | EVA8000 (240 disks, 8 GB cache) | CX3-40 (240 disks, 8 GB cache) | DS4800-88 (224 disks, 16 GB cache) | CX3-80 (480 disks, 16 GB cache) |
|---|---|---|---|---|---|---|---|
| OLTP (transactions/min) | 19,148 | 12,396 | 34,097 | 44,810 | 59,630 | 63,559 | 72,372 |
| Exchange (# users) | 7,799 | 10,239 | 16,154 | 19,277 | 23,836 | 25,396 | 28,655 |
| Backup-to-disk (MB/s) | 393 | 229 | 471 | 480 | 881 | 985 | 1,121 |

This white paper begins with an overview of the storage systems that were tested. It then describes the simulated application workloads, and discusses the performance of each storage system.


Audience

This white paper is for customers, including storage architects and administrators, and anyone involved in evaluating, acquiring, managing, operating, or designing networked storage environments.

Using application workloads to test

You can use many different types of workloads to compare the performance of storage systems. For example, using narrowly focused workloads that generate cache-hit reads and cache-hit writes produces very large I/Os-per-second (IOPS) throughput and megabytes-per-second (MB/s) bandwidth. However, this is not a realistic simulation, because it only exercises the storage system's front-end ports and cache, and no customer application will ever achieve this throughput or bandwidth. The tests described here focus on three application-inspired workloads (OLTP, Exchange, and backup-to-disk) because they are more representative of real-world customer environments.

Overview of tested configurations

To understand the performance results, it is important to understand the basic architecture of each storage system and the specific configurations that were tested.

EMC CLARiiON CX3-10c, CX3-20, CX3-40, and CX3-80

The EMC CLARiiON CX3 UltraScale series, launched in May 2006, is the third-generation CLARiiON CX storage platform and introduces the industry-leading UltraScale architecture and new levels of scalability, reliability, and ease-of-use capabilities.¹ The new CX3 UltraScale architecture uses more powerful storage processors, high-speed 4 Gb/s front-end and back-end Fibre Channel (FC) technology, and state-of-the-art PCI Express interconnect technology to deliver full 4 Gb/s performance from end to end.

As with earlier CX series models, all CX3 UltraScale models include dual storage processors (SPs). The major differences between the model 10c, model 20, model 40, and model 80 are the number and speed of the CPUs in the SPs, the number of front- and back-end ports, the size of the system memory, and the maximum number of supported disks. In October 2006, EMC announced the availability of an FC/iSCSI combo option for models 20 and 40, which provides simultaneous FC and iSCSI front-end connectivity within the same CX3 frame. In February 2007, EMC introduced the model 10c, which offers a low-cost entry point into the CX3 UltraScale series. At this time, EMC also updated the FC design of the models 20 and 40 to meet new connectivity demands.

The CLARiiON CX3 series supports standard 73 GB and 146 GB high-performance 4 Gb/s 15k rpm Fibre Channel disk drives and 73 GB², 146 GB, and 300 GB 2 Gb/s 10k rpm Fibre Channel disk drives. CX3 systems also support 500 GB high-capacity, low-cost Serial ATA II (SATA II) 7.2k rpm disk drives with integrated 4 Gb/s Fibre Channel adapters; and 500 GB high-capacity, low-cost, 2 Gb/s 7.2k rpm Fibre Channel disk drives (also known as Low-Cost Fibre Channel, or LCFC).

The CLARiiON CX3 series also introduces a number of host connectivity options. In addition to the standard FC host connections offered on most base models, models 10, 20, and 40 are available in FC/iSCSI combination models that allow a multi-protocol host attach. These options are summarized in Table 3. Figure 1 illustrates the basic architecture of the CX3 models 10c, 20, 40, and 80. Figure 1 also shows the FC host attach configurations³ for the FC application performance tests discussed later.


¹ Introduction to the CX3 UltraScale Series, white paper, EMC Corporation, June 2007.

² The CX3-10c does not support 73 GB 10k rpm drives; all other models do.

³ The iSCSI connections in the CX3-10c diagram were not active for the FC portion of testing.



Table 3. CX3 UltraScale series configuration options

| | CX3-10c | CX3-20 | CX3-20c | CX3-40 | CX3-40c | CX3-80 |
|---|---|---|---|---|---|---|
| Total # of 4 Gb/s FC host ports | 4 | 12 | 4 | 8 | 4 | 8 |
| Total # of copper GbE iSCSI connections | 4 | N/A | 8 | N/A | 8 | N/A |
| Total # of 4 Gb/s FC disk ports | 2 | 2 | 2 | 8 | 4 | 8 |
| Max # of drives per storage system | 60 | 120 | 120 | 240 | 240 | 480 |


CX3 Model 10c
- Single 1.8 GHz CPU/SP
- 2 FE FC ports/SP @ 4 Gb/s
- 2 iSCSI GbE ports/SP
- 1 BE FC port/SP @ 4 Gb/s
- 1 GB memory/SP
- 60 15k rpm, 4-Gb/s FC disks

CX3 Model 80
- Dual 3.6 GHz CPUs/SP
- 4 FE FC ports/SP @ 4 Gb/s
- 4 BE FC ports/SP @ 4 Gb/s
- 8 GB memory/SP
- 480 15k rpm, 4-Gb/s FC disks

Figure 1. CLARiiON CX3 UltraScale models 10c, 20, 40, and 80 architecture and test configurations

All CLARiiON CX3 UltraScale series systems run a custom operating environment called FLARE®, and are managed via the EMC Navisphere® Management Suite, which is web-based and runs directly on the storage system. No external management appliance is required.

CX3 Model 20
- Single 2.8 GHz CPU/SP
- 6 FE FC ports/SP @ 4 Gb/s
- 1 BE FC port/SP @ 4 Gb/s
- 2 GB memory/SP
- 120 15k rpm, 4-Gb/s FC disks

CX3 Model 40
- Dual 2.8 GHz CPUs/SP
- 4 FE FC ports/SP @ 4 Gb/s
- 4 BE FC ports/SP @ 4 Gb/s
- 4 GB memory/SP
- 240 15k rpm, 4-Gb/s FC disks

The CLARiiON consolidates groups of disk drives into units called RAID groups, which can contain up to 16 disk drives. Each RAID group implements a specific RAID type: 0, 1, 1/0, 3, or 5. The data-to-parity ratio depends on the number of drives in the group—for a five-disk RAID group the ratio is 4-to-1; for a 16-disk RAID group it is 15-to-1. Within these RAID groups, users create LUNs to present as storage units to hosts. Multiple LUNs on a storage system can be striped or concatenated together and presented to a host as a single large LUN called a metaLUN. This allows a single LUN to be striped across more than one RAID group, thereby increasing the number of disk spindles and providing a much wider range of performance and configuration options. Additional LUNs can be added dynamically, allowing metaLUNs to be expanded on the fly.
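The striping arithmetic behind a metaLUN can be sketched as follows; the 1 MB stripe element size and simple round-robin LUN ordering are illustrative assumptions for this sketch, not documented CLARiiON defaults:

```python
# Toy model of metaLUN striping: a logical byte address on the metaLUN is
# mapped to (component LUN index, byte offset within that LUN). The 1 MB
# stripe element size is an assumption for illustration only.
STRIPE_ELEMENT = 1024 * 1024  # bytes per stripe element (assumed)

def metalun_map(addr_bytes: int, n_luns: int) -> tuple[int, int]:
    """Return (component_lun_index, byte_offset_within_that_lun)."""
    element = addr_bytes // STRIPE_ELEMENT      # which stripe element
    lun = element % n_luns                      # round-robin across LUNs
    stripe_row = element // n_luns              # full rows already written
    offset = stripe_row * STRIPE_ELEMENT + addr_bytes % STRIPE_ELEMENT
    return lun, offset

# With 4 component LUNs, consecutive 1 MB elements rotate across
# LUNs 0, 1, 2, 3, 0, 1, ... so sequential I/O engages every underlying
# RAID group (and its spindles) rather than a single one.
```

This is why a metaLUN spread over several RAID groups engages more spindles for the same host-visible LUN: adjacent elements land on different groups.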

EMC CLARiiON CX300, CX500, and CX700

The major differences among the CLARiiON CX series (CX300, CX500, and CX700) are the number and speed of the CPUs in the dual SPs; the number of front- and back-end ports; the size of the system memory; and the maximum number of supported disks. The CLARiiON CX series supports the following:

• Standard 73-GB and 146-GB 2-Gb/s, 15k rpm Fibre Channel disk drives
• Standard 73-GB, 146-GB, and 300-GB 2-Gb/s, 10k rpm Fibre Channel disk drives
• Standard 500-GB high-capacity, low-cost, 2-Gb/s, 7.2k-rpm SATA disk drives

The CX series runs the FLARE operating environment, is managed via the EMC Navisphere Management Suite, and supports RAID 0, 1, 1/0, 3, or 5 LUNs.

With the exception of the CX300, which continues to be in high demand, the CLARiiON CX series has been superseded by the CLARiiON CX3 UltraScale series.


Figure 2 illustrates the basic architecture of the CX300, CX500, and CX700, and the hardware configurations used in the performance tests.

CX300
- Single 800 MHz CPU/SP
- 2 FE FC ports/SP @ 2 Gb/s
- 1 BE FC port/SP @ 2 Gb/s
- 1 GB memory/SP
- 60 15k rpm, 2-Gb/s FC disks

CX500
- Dual 1.6 GHz CPUs/SP
- 2 FE FC ports/SP @ 2 Gb/s
- 2 BE FC ports/SP @ 2 Gb/s
- 2 GB memory/SP
- 120 15k rpm, 2-Gb/s FC disks

CX700
- Dual 3.0 GHz CPUs/SP
- 4 FE FC ports/SP @ 2 Gb/s
- 4 BE FC ports/SP @ 2 Gb/s
- 4 GB memory/SP
- 240 15k rpm, 2-Gb/s FC disks

Figure 2. CX300, CX500, and CX700 architecture and tested configurations


NetApp FAS3050c

Network Appliance's fabric-attached storage (FAS) systems provide block-level data access over Fibre Channel and iSCSI storage area networks (SANs), and support CIFS, NFS, and HTTP file access protocols for network-attached storage (NAS).⁴ The midrange FAS3000 series (consisting of the FAS3020 and FAS3050) was announced in May 2005.

All FAS systems run the Data ONTAP operating system and use the WAFL (Write Anywhere File Layout) file system, which was designed specifically to meet the requirements of a NAS appliance.⁵ Within Data ONTAP, aggregates are built using one or more disk RAID groups, and WAFL volumes are defined on top of aggregates. Block data LUNs are created on top of WAFL volumes to support block-level SAN environments. The FAS3020 and FAS3050 support both Fibre Channel and SATA disk drives. For Fibre Channel disks, single-parity RAID 4 (7+1) and (13+1) RAID groups are supported, as well as double-parity RAID-DP (14+2) and (26+2).

WAFL writes new or updated data blocks to a new block location on disk. To reduce physical I/O, WAFL batches individual writes together and tries to use free blocks in the same RAID stripe to reduce updates to the parity disk. However, as more of the available disk capacity is used, free blocks become scarcer and more scattered across different RAID stripes, and more physical I/O is required to write a block. Tests have shown that this fragmentation leads to performance degradation over time.
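This effect can be illustrated with a toy allocator model (not NetApp's actual allocation logic): as utilization rises, each stripe holds fewer free blocks, so a fixed batch of writes must touch more stripes, and each touched stripe implies another parity update:

```python
import random

# Toy model of write-anywhere batching (illustrative only, not NetApp's
# allocator). A stripe holds 14 data blocks, as in a 14+2 RAID-DP group.
# The allocator greedily fills the emptiest stripes first; as utilization
# rises, stripes have fewer free blocks, so a fixed batch of new writes
# scatters across more stripes, each costing a parity update.
def stripes_touched(utilization: float, batch: int = 1000,
                    blocks_per_stripe: int = 14, n_stripes: int = 100_000,
                    seed: int = 0) -> int:
    rng = random.Random(seed)
    # Free-block count per stripe under uniform random utilization.
    free = sorted(
        (sum(rng.random() > utilization for _ in range(blocks_per_stripe))
         for _ in range(n_stripes)),
        reverse=True,
    )
    placed = touched = 0
    for f in free:                  # pick the emptiest stripes first
        if placed >= batch or f == 0:
            break
        placed += f
        touched += 1
    return touched
```

In this model a half-full system absorbs a 1,000-block batch in well under a hundred stripes, while at 95 percent utilization the same batch scatters across several times as many, which mirrors the degradation the paper describes.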

To make the NetApp test results repeatable and realistic, before each test LUNs were destroyed and new LUNs were created and prefilled with data. For the OLTP and Exchange tests described later, LUNs were prefilled to 50 percent of their usable capacity with streams of sequential writes to simulate the use case in which a database was already present. During the tests, all read and write requests were addressed to already written data blocks. For more information about NetApp filers, see the NetApp Performance in SAN Environments Technical Note.

The high-end FAS3050c (dual-controller active/active cluster) model used in our tests included a total of 8 GB of ECC system memory, 1 GB of nonvolatile memory (NVRAM), eight onboard 1 Gb/s Ethernet ports, eight 2 Gb/s Fibre Channel ports (configurable as storage-attached initiator or host-attached target), and eight PCI-X slots for I/O expandability. A fully configured system can support up to 336 disk drives on eight FC loops. To compare the performance of systems with similar usable capacity, we performed our tests with 262 disks on six FC loops.

⁴ Larson, Jim, Block Management with Data ONTAP 7G: FlexVol, FlexClone, and Space Guarantees, TR3348, Network Appliance, Inc., October 2004.

⁵ Hitz, Dave, James Lau, and Michael Malcolm, File System Design for an NFS File Server Appliance, TR3002, Network Appliance, Inc., March 1995.

CLARiiON CX3 Application Performance: A Comparison of Midrange Storage Platforms Technology Concepts and Business Considerations 10


Figure 3 illustrates the basic architecture of the NetApp FAS3050c and the hardware configuration used in the performance tests.

FAS3050c
- Dual 3.6 GHz CPUs/controller
- 4 FE FC ports/controller @ 2 Gb/s
- 3 BE FC active ports/controller @ 2 Gb/s
- 3 BE FC passive ports/controller @ 2 Gb/s
- 4 GB ECC memory/controller
- 512 MB nonvolatile memory/controller
- 262 15k rpm, 4-Gb/s FC disks

Figure 3. NetApp FAS3050c architecture and tested configuration

HP EVA8000

Compaq Computer Corp. introduced the Enterprise Virtual Array (EVA) midrange storage line in October 2001, shortly before it merged with Hewlett-Packard Co. The EVA was a significant departure from the previous generation of Digital/Compaq HSG80 storage systems and provided no easy upgrade path from previous generations. The primary marketing point for the EVA was its virtualization technology. In May 2005, HP updated its EVA line with three new models that replaced the existing EVA3000 and 5000.

The EVA currently comes in three variations: EVA4000, 6000, and 8000, with the EVA8000 being the largest of the line.⁶ In February 2006, HP announced support for 4 Gb/s host connections. The latest EVA8000 consists of two controllers, each with 4 GB of memory, four 4 Gb/s Fibre Channel front-end (FE) host ports, and four 2 Gb/s Fibre Channel back-end (BE) device ports. Up to eighteen 14-slot disk shelves can be configured to support a total of 240 drives (the last 12 slots in the 18th shelf cannot be used). The EVA8000 supports 72 and 146 GB, 2-Gb/s, 15k rpm Fibre Channel disk drives as well as 146 and 300 GB 10k rpm drives. The EVA also supports low-cost 250 and 500 GB FATA drives, which utilize standard SATA drive mechanics with a Fibre Channel interface directly on the unit.

The EVA divides groups of physical disks into logically coupled units called disk groups. One or more virtual disks, Vdisks, are defined within a disk group and presented to hosts as LUNs. RAID 0, 1, and 5 are supported, with all virtual RAID 5 (VRAID5) LUNs using a 4+1 data-to-parity ratio regardless of the number of disks in the disk group. Data is spread across every spindle in the disk group for all RAID types. The minimum size of a disk group is eight physical drives, while the maximum size is all available disk drives. Best performance is usually obtained by configuring fewer, larger disk groups.

⁶ HP StorageWorks 4000/6000/8000 Enterprise Virtual Array (EVA) Family, Hewlett-Packard Development Company, LP, February 2006.
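One consequence of the fixed 4+1 VRAID5 ratio is that parity overhead stays at 20 percent no matter how large the disk group grows, whereas a CLARiiON RAID 5 group's overhead shrinks with group size (4+1 costs 1/5 of capacity, 15+1 costs 1/16). A capacity sketch, with an assumed 146 GB drive size chosen for illustration:

```python
# Usable-capacity comparison: EVA VRAID5 always uses a 4+1 data-to-parity
# ratio, so parity consumes 1/5 of capacity regardless of disk-group size.
# A CLARiiON RAID 5 group gives up one disk's worth of capacity per group,
# so larger groups waste proportionally less. Drive size is illustrative.
DISK_GB = 146  # assumed drive size for this sketch

def vraid5_usable(n_disks: int) -> float:
    """EVA VRAID5: fixed 4+1 ratio across the whole disk group."""
    return n_disks * DISK_GB * (4 / 5)

def clariion_raid5_usable(n_disks: int, group_size: int) -> float:
    """CLARiiON RAID 5: one parity disk's worth of capacity per group."""
    n_groups = n_disks // group_size
    return n_groups * (group_size - 1) * DISK_GB

# For 240 disks: VRAID5 yields 240 * 146 * 0.8 = 28,032 GB usable, while
# fifteen 16-disk RAID 5 groups yield 15 * 15 * 146 = 32,850 GB.
```

The trade-off, as the paper notes, is that the EVA spreads every Vdisk across all spindles in the group, which favors wide striping over capacity efficiency.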


Figure 4 is a block diagram of the EVA8000 architecture and hardware configuration used in the tests. The EVA8000 test results reported in this white paper are based on the use of 4 Gb/s host ports and 72 GB 15k rpm 2-Gb/s Fibre Channel disks. When conducting EVA8000 performance tests, you must prefill locations that will be read with actual data, because read I/O requests that access locations that do not contain actual data have unrealistically short response times. To avoid this problem, the LUNs used in the OLTP and Exchange tests were prefilled to 50 percent of their capacity by streams of sequential writes; this simulated the use case in which a database was already present.

EVA8000
- Dual HSV210 controllers
- 4 FE FC ports/controller @ 4 Gb/s
- 4 BE FC ports/controller @ 2 Gb/s
- 4 GB memory/controller
- 240 15k rpm, 2-Gb/s FC disks

Figure 4. HP EVA8000 architecture and test configuration

IBM DS4800

Engenio Information Technologies, Inc. announced the 6998 storage system in April 2005.⁷ In June 2005, IBM announced the DS4800, a rebranded version of the Engenio 6998. The DS4800 processor is based on Engenio's XBB architecture: a dual-chip approach with a dedicated XOR engine and a standard CPU to handle data movement. The DS4800 uses a 32-bit 2.4 GHz Intel Xeon CPU to move data.

The DS4800 is available in three models: 82, 84, and 88, with up to 2, 4, or 8 GB of system memory, respectively, per controller. Each model supports up to 224 disk drives. Supported drives include standard 10k rpm (36, 73, 146, and 300 GB) and 15k rpm (36, 73, and 146 GB) Fibre Channel drives and 7.2k rpm 400 GB SATA drives. A 16-drive, 4 Gb/s Fibre Channel enclosure, the EXP810, was announced in May 2006. The DS4800 supports a total of eight 4 Gb/s Fibre Channel host-side front-end (FE) ports and eight 4 Gb/s back-end (BE) device ports.

In the DS4800, disks are arranged into disk groups (which are analogous to CLARiiON RAID groups) with up to 30 drives per group. All LUNs in a group share the same RAID level. RAID levels 0, 1, 3, 5, and 1/0 are supported.

⁷ Technical Overview: 6998 Storage System – SAN Storage for Open Systems, Engenio Information Technologies, Inc., April 11, 2005.


Figure 5 illustrates the DS4800's basic architecture and the DS4800 model 88 hardware configuration for the performance tests.⁸ The model 88 system was fully configured with a total of 16 GB of system memory and 224 146-GB, 15k rpm, 4-Gb/s Fibre Channel disks packaged in 14 EXP810 disk expansion enclosures. The connections between the controllers illustrated in the figure are physically part of the back-end Fibre Channel disk channels. Disk I/O and inter-controller communication share the same FC loops. Since all cached writes and cross-controller communication can add significant traffic, the CLARiiON design dedicates separate communication links for this purpose.

DS4800 Model 88
- Dual 2.4 GHz controllers
- 4 FE FC ports/controller @ 4 Gb/s
- 4 BE FC ports/controller @ 4 Gb/s
- 8 GB memory/controller
- 224 15k rpm, 4-Gb/s FC disks

Figure 5. IBM DS4800 model 88 architecture and tested configuration

Performance comparison of application workloads

For the application workloads described below, EMC made every effort to configure each storage system to obtain the best possible application performance, thus providing a fair, apples-to-apples comparison of the storage systems. The tests were driven by the same number and type of servers and host bus adapters, and the servers were monitored to ensure that they did not limit performance. Please note that this data shows how each system performs on an application-like workload when there is no replication (or other optional services) running on the CX3 or competitive arrays. While you can use these performance results to compare the relative performance of each system, you should not use these results as sizing guidelines for specific customer configurations.

⁸ The DS4800 array evaluated in the May 2005 version of this document was the less powerful model 84. It had a total of 8 GB of data cache rather than 16 GB, and 2 Gb/s rather than 4 Gb/s Fibre Channel back-end ports.


Table 4 summarizes the results of the application workload performance tests across the existing CLARiiON CX and new CX3 series when tested with the FC configurations described previously.

Table 4. Results of application workload testing for CX3 series vs. CX series

CX3 versus CX application performance

| Application workload | CX300 (60 disks, 2 GB cache) | CX3-10c (60 disks, 2 GB cache) | CX500 (120 disks, 4 GB cache) | CX3-20 (120 disks, 4 GB cache) | CX700 (240 disks, 8 GB cache) | CX3-40 (240 disks, 8 GB cache) | CX3-80 (480 disks, 16 GB cache) |
|---|---|---|---|---|---|---|---|
| OLTP (transactions/min) | 14,983 | 19,148 | 34,250 | 34,097 | 52,408 | 59,630 | 72,372 |
| Exchange (# users) | 7,040 | 7,799 | 12,579 | 16,154 | 20,080 | 23,836 | 28,655 |
| Backup-to-disk (MB/s) | 228 | 393 | 330 | 471 | 870 | 881 | 1,121 |

Table 5 compares the results of the CX3 series models to competitors' offerings under the FC configurations outlined in earlier sections. The results show the clear performance advantages of the CX3 series over the equivalent competitive offerings.

Table 5. Results of application workload testing for CX3 series vs. competitive offerings

CLARiiON CX3 series versus FAS3050c, EVA8000, and DS4800-88

| Application workload | CX3-10c (60 disks, 2 GB cache) | FAS3050c (262 disks, 8 GB cache) | CX3-20 (120 disks, 4 GB cache) | EVA8000 (240 disks, 8 GB cache) | CX3-40 (240 disks, 8 GB cache) | DS4800-88 (224 disks, 16 GB cache) | CX3-80 (480 disks, 16 GB cache) |
|---|---|---|---|---|---|---|---|
| OLTP (transactions/min) | 19,148 | 12,396 | 34,097 | 44,810 | 59,630 | 63,559 | 72,372 |
| Exchange (# users) | 7,799 | 10,239 | 16,154 | 19,277 | 23,836 | 25,396 | 28,655 |
| Backup-to-disk (MB/s) | 393 | 229 | 471 | 480 | 881 | 985 | 1,121 |

The following sections provide more information about the workloads, configurations, and results for the tests summarized in these tables. See the “Appendix: Array configuration settings” section for information about the tunable array configuration parameters used in these tests.

OLTP application workload

An OLTP application workload simulator measured the performance of the storage systems under conditions similar to what customers might see in their OLTP environments. The tests simulated the I/O pattern generated by a number of concurrent users, where each user submits transactions consisting of:

• Twenty-one random 2 KB reads to disk simulating database reads
• Nine random 2 KB writes to disk simulating database writes
• One log write of 512 bytes, where multiple log writes were grouped together to form a sequential write to disk varying in size from 4 KB to 14 KB


To represent an environment where user data consumes 50 percent of the configured disk capacity, the simulator generated random database reads and writes that were uniformly distributed over one-half of each LUN configured for database data. Each user waited an average of 10 seconds after completing a transaction before submitting another transaction.

Each time the test was run, the transaction response times and throughput in transactions per minute (TPM) were recorded. The number of concurrent users in a test run was increased until the storage system was saturated. Database checkpoint processing and logging activity were simulated during each run. Two checkpoints, each simulating a 150 MB dump of the host database cache, were included in the measurement interval.
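These workload parameters make it possible to translate transaction rates into approximate I/O rates, and user counts into transaction rates. A rough sketch (treating the grouped log writes as one I/O per transaction, an approximation):

```python
# Back-of-envelope conversions from the OLTP workload definition: each
# transaction issues 21 random 2 KB reads, 9 random 2 KB writes, and
# (after grouping) roughly one sequential log write; users think for an
# average of 10 seconds between transactions.
READS_PER_TXN = 21
WRITES_PER_TXN = 9
LOG_WRITES_PER_TXN = 1  # grouped log writes approximated as a single I/O

def iops_from_tpm(tpm: float) -> float:
    """Approximate host I/O rate implied by a transaction rate."""
    ios_per_txn = READS_PER_TXN + WRITES_PER_TXN + LOG_WRITES_PER_TXN
    return tpm / 60.0 * ios_per_txn

def tpm_from_users(n_users: int, resp_time_s: float,
                   think_time_s: float = 10.0) -> float:
    """Little's law: each user completes one transaction per cycle of
    response time plus think time."""
    return n_users * 60.0 / (resp_time_s + think_time_s)

# Example: the CX3-80's 72,372 TPM corresponds to roughly
# 72,372 / 60 * 31, i.e. about 37,400 host I/Os per second.
```

This also explains why saturating the arrays required thousands of simulated users: with a 10-second think time, each user contributes at most about six transactions per minute.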

The number and type of disks and the RAID configurations used in the OLTP tests were as follows:

• CX3-10: 60 15k rpm, 4 Gb/s FC disks configured as 11 RAID 5 (4+1) database LUNs, one RAID 1/0 (2+2) log LUN, and one spare disk

• CX3-20: 120 15k rpm, 4 Gb/s FC disks configured as 23 RAID 5 (4+1) database LUNs, one RAID 1/0 (2+2) log LUN, and one spare disk

• CX3-40: 240 15k rpm, 4 Gb/s FC disks configured as 47 RAID 5 (4+1) database LUNs, one RAID 1/0 (2+2) log LUN, and one spare disk

• CX3-80: 480 15k rpm, 4 Gb/s FC disks configured as 95 RAID 5 (4+1) database LUNs, one RAID 1/0 (2+2) log LUN, and one spare disk

• CX300: 60 15k rpm, 2 Gb/s FC disks configured as 11 RAID 5 (4+1) database LUNs, one RAID 1/0 (2+2) log LUN, and one spare disk

• CX500: 120 15k rpm, 2 Gb/s FC disks configured as 23 RAID 5 (4+1) database LUNs, one RAID 1/0 (2+2) log LUN, and one spare disk

• CX700: 240 15k rpm, 2 Gb/s FC disks configured as 47 RAID 5 (4+1) database LUNs, one RAID 1/0 (2+2) log LUN, and one spare disk

• NetApp FAS3050c: 262 15k rpm, 4 Gb/s FC disks with six disks reserved for the Data ONTAP root volume. The remaining 256 disks were configured as 16 RAID-DP (14+2) disk groups with two LUNs defined per group, 31 assigned as database LUNs, and one log LUN.

• DS4800-88: 224 15k rpm, 4 Gb/s FC disks configured as 44 RAID 5 (4+1) database LUNs, and one RAID 1/0 (2+2) log LUN

• EVA8000: 240 15k rpm, 2 Gb/s FC disks configured as a single disk group with 47 VRAID5 (4+1) database LUNs, and one VRAID5 (4+1) log LUN.


Figure 6 shows the average transaction response time in milliseconds (ms) versus the throughput in transactions per minute (TPM), as the number of concurrent users increased for the CX and CX3 series systems. When comparing storage systems, A outperforms B if it can support higher levels of throughput at lower transaction response times. The CX3-10 outperformed the CX300; the CX3-20 outperformed the CX300 and CX500; the CX3-40 outperformed the CX500 and the CX700; and the CX3-80 outperformed the CX3-40.

[Line chart: average transaction response time (msec, 0-2000) versus throughput (0-80,000 TPM) for CX300 (60 disks), CX500 (120 disks), CX700 (240 disks), CX3-10 (60 disks), CX3-20 (120 disks), CX3-40 (240 disks), and CX3-80 (480 disks).]

Figure 6. CX3 versus CX: OLTP transaction response time versus throughput


In practice, each OLTP-like application has its own acceptable response time threshold for a transaction; sometimes this is a tunable parameter. We selected a transaction response time threshold of 2 seconds, a typical value for TPC-C reported numbers. (Recall that one transaction = 21 reads + 9 writes + 1 log write.) The simulator has used this value as a cutoff point for years; it was not chosen to hide data or to make one system look better than another. Figure 7 shows the number of transactions per minute (TPM) that each storage system supported with an average transaction response time of 2 seconds (2000 ms). In these tests:

• The CX3-10 supported 28 percent more TPM than the CX300.
• The CX3-20 supported 128 percent more TPM than the CX300 and an equivalent number of TPM to the CX500 (although the CX3-20 outperforms the CX500 at lower response time constraints).
• The CX3-40 supported 74 percent more TPM than the CX500 and 14 percent more than the CX700.
• The CX3-80 supported 38 percent more than the CX700 and 21 percent more than the CX3-40.

[Bar chart: transactions per minute (TPM) at a 2-second average transaction response time. CX300 = 14,983; CX3-10 = 19,148; CX500 = 34,250; CX3-20 = 34,097; CX700 = 52,408; CX3-40 = 59,630; CX3-80 = 72,372.]

Figure 7. CX3 versus CX series: OLTP throughput capacity at 2 second transaction response time


Figure 8 shows the average transaction response time versus transaction throughput for the CLARiiON CX3 series systems (models 10, 20, 40, and 80) versus the NetApp FAS3050c, HP EVA8000, and IBM DS4800 (model 88), as the number of concurrent users increased. The shape of the FAS3050c curve shows that the system exhibited unusual behavior as the number of users increased. We reran this test several times, and the results were similar each time. We attribute these results to the dynamic performance of the WAFL file system.

The CX3-20 significantly outperformed the NetApp FAS3050c, the CLARiiON CX3-40 outperformed the FAS3050c and the EVA8000, and the CX3-80 outperformed them all. Although the CX3-10 was unable to achieve the same maximum number of TPMs as the FAS3050c, for those TPM measurements common to the two systems the CX3-10 offered a substantially better average transaction response time. This is an impressive feat, as the FAS3050c was equipped with more than four times as many disks as the CX3-10 in this test.

[Line chart: average transaction response time (msec, 0-2000) versus throughput (0-80,000 TPM) for FAS3050c (262 disks), DS4800-88 (224 disks), EVA8000 (240 disks), CX3-10 (60 disks), CX3-20 (120 disks), CX3-40 (240 disks), and CX3-80 (480 disks).]

Figure 8. CX3 versus competitors: OLTP transaction response time versus throughput


Figure 9 shows the number of transactions per minute (TPM) that each storage system supported with an average transaction response time of 2 seconds (2000 ms). In these tests:

• The CX3-20 (configured with 120 disks) supported 40 percent more TPM than the FAS3050c (configured with 262 disks).

• The CX3-40 supported 144 percent more TPM than the FAS3050c and 30 percent more than the EVA8000 when configured with a similar number of disks.

• The CX3-80, when configured with 480 disks, supported more TPM than each of the other systems.

[Bar chart: transactions per minute (TPM) at a 2-second average transaction response time. CX3-10 = 19,148; FAS3050c = 24,390; CX3-20 = 34,097; EVA8000 = 45,701; CX3-40 = 59,630; DS4800-88 = 64,532; CX3-80 = 72,372.]

Figure 9. CX3 versus competitors: OLTP throughput capacity at 2 second transaction response time

Exchange application workload

These tests used an Exchange application workload based on I/O traces of user environments running Microsoft Exchange 2003 and Microsoft's LoadSim tool. Note that this information should not be used to size actual Microsoft Exchange environments; its purpose is to provide a uniform workload for comparing systems under the same constraints. The workload consisted of 4 KB random reads and 4 KB random writes, with a 2-to-1 ratio of reads to writes. An external load generator submitted concurrent threads of random reads and writes to 50 percent of the usable data capacity of each system. Within each thread, a new random read or write was initiated as soon as the previous I/O completed, resulting in one parallel I/O per thread. To put a heavy I/O load on the systems, 32 concurrent threads were submitted to each RAID 5 (4+1) LUN, and 52 concurrent threads were submitted to each of the two LUNs associated with each NetApp RAID-DP (14+2) disk group. This resulted in approximately the same number of parallel I/Os per physical disk: an average of 6.4 per disk for the RAID 5 (4+1) disk groups and 6.5 per disk for the RAID-DP (14+2) disk groups.
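The thread-count arithmetic above can be checked directly; this small sketch (names are ours, not from the paper) shows why the two RAID layouts saw nearly identical per-disk parallelism:

```python
def parallel_ios_per_disk(threads_per_lun, luns_per_group, disks_per_group):
    """Average outstanding I/Os per physical disk for one disk group,
    with one outstanding I/O per thread."""
    return threads_per_lun * luns_per_group / disks_per_group

# RAID 5 (4+1): 32 threads against the group's single LUN, 5 disks.
raid5_load = parallel_ios_per_disk(32, 1, 5)      # 6.4 I/Os per disk
# RAID-DP (14+2): 52 threads against each of 2 LUNs, 16 disks.
raid_dp_load = parallel_ios_per_disk(52, 2, 16)   # 6.5 I/Os per disk
```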

The number and type of disks and the RAID configurations used in the Exchange tests were as follows:

• CX3-10: 60 15k rpm, 4 Gb/s FC disks configured as 12 RAID 5 (4+1) LUNs
• CX3-20: 120 15k rpm, 4 Gb/s FC disks configured as 24 RAID 5 (4+1) LUNs
• CX3-40: 240 15k rpm, 4 Gb/s FC disks configured as 48 RAID 5 (4+1) LUNs


• CX3-80: 480 15k rpm, 4 Gb/s FC disks configured as 96 RAID 5 (4+1) LUNs
• CX300: 60 15k rpm, 2 Gb/s FC disks configured as 12 RAID 5 (4+1) LUNs
• CX500: 120 15k rpm, 2 Gb/s FC disks configured as 24 RAID 5 (4+1) LUNs
• CX700: 240 15k rpm, 2 Gb/s FC disks configured as 48 RAID 5 (4+1) LUNs
• NetApp FAS3050c: 262 15k rpm, 4 Gb/s FC disks with six disks reserved for the Data ONTAP root volume; the remaining 256 disks were configured as 16 RAID-DP (14+2) disk groups, with two LUNs defined per group
• DS4800-88: 220 15k rpm, 4 Gb/s FC disks configured as 44 RAID 5 (4+1) LUNs
• EVA8000: 240 15k rpm, 2 Gb/s FC disks configured as a single disk group with 48 VRAID5 (4+1) LUNs

The sustained throughput in I/Os per second (IOPS) was measured for each configuration. The throughput was then converted into the number of Exchange users the configuration could support if each system was configured to run at 80 percent of its Exchange IOPS rating and the average Exchange user performed one I/O per second. Therefore:

# Exchange Users = 80% of maximum Exchange IOPS

Figure 10 compares the number of users the CLARiiON CX3 series and the CX series supported with the simulated Exchange workload. In these tests:

• The CX3-10 supported 11 percent more simulated Exchange users than the CX300.
• The CX3-20 supported 107 percent more simulated Exchange users than the CX3-10 and 28 percent more simulated Exchange users than the CX500.
• The CX3-40 supported 48 percent more than the CX3-20 and 19 percent more than the CX700.
• The CX3-80 supported 20 percent more than the CX3-40.
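The user-count conversion is a one-line formula; a minimal sketch (the function name and the 10,000 IOPS example are ours, purely illustrative):

```python
def exchange_users(max_exchange_iops):
    """Supported Exchange users, assuming the array runs at 80% of its
    measured Exchange IOPS rating and each user generates one I/O per second."""
    return int(max_exchange_iops * 0.80)

# e.g., a hypothetical array sustaining 10,000 Exchange IOPS:
print(exchange_users(10_000))  # 8000 users
```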

[Bar chart: number of supported Exchange users. CX300 = 7,040; CX3-10 = 7,799; CX500 = 12,579; CX3-20 = 16,154; CX700 = 20,080; CX3-40 = 23,836; CX3-80 = 28,655.]

Figure 10. CX3 series versus CX series: Number of supported Exchange users

Figure 11 compares the CLARiiON CX3 systems with the FAS3050c, the EVA8000, and the DS4800 model 88. It is interesting to note that the FAS3050c outperformed the CX3-10 by only 31 percent, although the FAS3050c had over four times as many disks as the CX3-10. (The CX3-10 had 60 active disks, while the FAS3050c had 256 active disks.) In these tests:

• The CX3-20 supported 58 percent more users than the FAS3050c.
• The CX3-40 supported 133 percent more users than the FAS3050c and 24 percent more than the EVA8000.


• The CX3-80 supported 180 percent more than the FAS3050c, 49 percent more than the EVA8000, and 13 percent more than the DS4800.

[Bar chart: number of supported Exchange users. CX3-10 = 7,799; FAS3050c = 10,239; CX3-20 = 16,154; EVA8000 = 19,277; CX3-40 = 23,836; DS4800-88 = 25,396; CX3-80 = 28,655.]

Figure 11. CX3 series versus competition: Number of supported Exchange users

Backup-to-disk application workload

These tests used a backup-to-disk workload consisting of concurrent streams of 256 KB sequential writes. The tests used the following disk configurations:

• CX3-10: 60 15k rpm, 4 Gb/s FC disks configured as 12 RAID 5 (4+1) LUNs
• CX3-20: 120 15k rpm, 4 Gb/s FC disks configured as 24 RAID 5 (4+1) LUNs
• CX3-40: 240 15k rpm, 4 Gb/s FC disks configured as 48 RAID 5 (4+1) LUNs
• CX3-80: 480 15k rpm, 4 Gb/s FC disks configured as 96 RAID 5 (4+1) LUNs
• CX300: 60 15k rpm, 2 Gb/s FC disks configured as 12 RAID 5 (4+1) LUNs
• CX500: 120 15k rpm, 2 Gb/s FC disks configured as 24 RAID 5 (4+1) LUNs
• CX700: 240 15k rpm, 2 Gb/s FC disks configured as 48 RAID 5 (4+1) LUNs
• NetApp FAS3050c: 262 15k rpm, 4 Gb/s FC disks with six disks reserved for the Data ONTAP root volume and the remaining 256 disks configured as 16 RAID-DP (14+2) disk groups, with two LUNs defined per group
• DS4800-88: 220 15k rpm, 4 Gb/s FC disks configured as 44 RAID 5 (4+1) LUNs
• EVA8000: 240 15k rpm, 2 Gb/s FC disks configured as a single disk group with 48 VRAID5 (4+1) LUNs

An external load generator submitted parallel streams of sequential writes, with one stream (thread) per LUN, and measured the I/O throughput in MB/s. Maximum MB/s throughput for sequential writes is usually obtained when large writes bypass the data cache and are written directly to disk. The CLARiiON and HP EVA8000 systems have tunable parameters to enable cache bypass (write-through). The IBM DS4800 and NetApp FAS3050c do not have tunable parameters for this function, but their internal algorithms are designed to achieve the same result.
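The one-stream-per-LUN harness described above can be sketched as follows. This is an illustrative stand-in, not the actual load generator: ordinary files stand in for LUNs, and all names are invented:

```python
import threading

CHUNK = 256 * 1024  # 256 KB sequential writes

def write_stream(path, total_bytes, results, idx):
    """One backup stream: sequential 256 KB writes to a single target
    (a file standing in for a LUN). The next write is issued only after
    the previous one completes, i.e., one outstanding I/O per stream."""
    buf = b"\0" * CHUNK
    written = 0
    with open(path, "wb", buffering=0) as f:
        while written < total_bytes:
            f.write(buf)
            written += CHUNK
    results[idx] = written

def run_streams(paths, bytes_per_stream):
    """Run one thread (stream) per LUN in parallel, as the harness did,
    and return the total bytes written across all streams."""
    results = [0] * len(paths)
    threads = [
        threading.Thread(target=write_stream, args=(p, bytes_per_stream, results, i))
        for i, p in enumerate(paths)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)
```

Dividing the returned byte count by the elapsed wall-clock time gives the aggregate MB/s figure reported in the charts below.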

Figure 12 shows the backup-to-disk MB/s throughput capacity of the CLARiiON CX3 and CX series. In these tests:

• The CX3-10 achieved 72 percent higher bandwidth than the CX300 and 19 percent higher bandwidth than the CX500.


• The CX3-20 achieved 20 percent more bandwidth than the CX3-10.
• The CX3-40 achieved 124 percent higher bandwidth than the CX3-10, 87 percent more than the CX3-20, and just slightly more than the CX700.
• The CX3-80 achieved 27 percent more than the CX3-40.

[Bar chart: backup-to-disk throughput (MB/s). CX300 = 228; CX500 = 330; CX3-10 = 393; CX3-20 = 471; CX700 = 870; CX3-40 = 881; CX3-80 = 1,121.]

Figure 12. CX3 versus CX: Backup-to-disk throughput (MB/s)


Figure 13 shows the backup-to-disk throughput capacity of the CLARiiON CX3 series systems in MB/s versus the NetApp FAS3050c, the HP EVA8000, and the IBM DS4800 model 88.[9] In these tests:

• The CX3-10 achieved 72 percent higher bandwidth than the FAS3050c.
• The CX3-20 achieved 106 percent higher backup-to-disk bandwidth than the FAS3050c.
• The CX3-40 achieved 285 percent more than the FAS3050c and 84 percent more than the EVA8000.
• The CX3-80 achieved 390 percent more backup throughput than the FAS3050c, 134 percent more than the EVA8000, and 14 percent more than the DS4800-88.

[Bar chart: backup-to-disk throughput (MB/s). FAS3050c = 229; CX3-10 = 393; CX3-20 = 471; EVA8000 = 480; CX3-40 = 881; DS4800-88 = 985; CX3-80 = 1,121.]

Figure 13. CX3 series versus competition: Backup-to-disk throughput (MB/s)

Conclusion

While the application workload tests described in this white paper may not precisely represent a customer's environment, they do provide a consistent workload that can be used to compare the relative performance of the tested storage systems on applications with different I/O characteristics. These tests clearly demonstrate the performance superiority of the CLARiiON UltraScale CX3 storage systems over competitors' offerings (the NetApp FAS3050c, HP EVA8000, and IBM DS4800 model 88) and over the previous generation of CLARiiON CX series storage systems.

9 HP’s published sequential write performance for the EVA8000 is 533.88 MB/s using 264 KB writes, which is 11 percent more than the 480 MB/s reported here for 512 KB writes. HP obtained the 533.88 MB/s number by using more than one sequential stream per LUN.


Appendix: Array configuration settings

For each array, the three test workloads (OLTP, Exchange, and Backup-to-disk) shared all settings except the cache page size (CPS, or CBS on the DS4800), the cache write-aside size (CWA), and, on the EVA8000, the write cache mode; those per-workload differences are listed separately below.

EMC CX3-10c
Common: Total Cache = 2 GB, Sys Mem = 1428 MB, RC = 206 MB, WC = 206 MB, Prefetch = Variable, IO Size Mult = 4, Max Prefetch = 4096, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Hi Watermark = 80, Low Watermark = 60, SES = 64 KB, Sniff Verify = Enabled
Per workload: CPS = 2 KB (OLTP), 4 KB (Exchange), 16 KB (Backup-to-disk); CWA = 1 MB (OLTP, Exchange), 128 KB (Backup-to-disk)

EMC CX3-20 and CX3-20c
Common: Total Cache = 4 GB, Sys Mem = 1990 MB, RC = 250 MB, WC = 803 MB, Prefetch = Variable, IO Size Mult = 4, Max Prefetch = 4096, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Hi Watermark = 80, Low Watermark = 60, SES = 64 KB, Sniff Verify = Enabled
Per workload: CPS = 2 KB (OLTP), 4 KB (Exchange), 16 KB (Backup-to-disk); CWA = 1 MB (OLTP, Exchange), 128 KB (Backup-to-disk)

EMC CX3-40 and CX3-40c
Common: Total Cache = 8 GB, Sys Mem = 2160 MB, RC = 968 MB, WC = 2 GB, Prefetch = Variable, IO Size Mult = 4, Max Prefetch = 4096, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Hi Watermark = 80, Low Watermark = 60, SES = 64 KB, Sniff Verify = Enabled
Per workload: CPS = 2 KB (OLTP), 4 KB (Exchange), 16 KB (Backup-to-disk); CWA = 1 MB (OLTP, Exchange), 128 KB (Backup-to-disk)

EMC CX3-80
Common: Total Cache = 16 GB, Sys Mem = 2928 MB, RC = 1 GB, WC = 2 GB, Prefetch = Variable, IO Size Mult = 4, Max Prefetch = 4096, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Hi Watermark = 80, Low Watermark = 60, SES = 64 KB, Sniff Verify = Enabled
Per workload: CPS = 2 KB (OLTP), 4 KB (Exchange), 16 KB (Backup-to-disk); CWA = 1 MB (OLTP, Exchange), 128 KB (Backup-to-disk)

EMC CX300
Common: Total Cache = 2 GB, Sys Mem = 984 MB, RC = 163 MB, WC = 400 MB, Prefetch = Variable, IO Size Mult = 4, Max Prefetch = 4096, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Hi Watermark = 80, Low Watermark = 60, SES = 64 KB, Sniff Verify = Enabled
Per workload: CPS = 2 KB (OLTP), 4 KB (Exchange), 16 KB (Backup-to-disk); CWA = 1 MB (OLTP, Exchange), 128 KB (Backup-to-disk)

EMC CX500
Common: Total Cache = 4 GB, Sys Mem = 1160 MB, RC = 400 MB, WC = 1 GB, Prefetch = Variable, IO Size Mult = 4, Max Prefetch = 4096, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Hi Watermark = 80, Low Watermark = 60, SES = 64 KB, Sniff Verify = Enabled
Per workload: CPS = 2 KB (OLTP), 4 KB (Exchange), 16 KB (Backup-to-disk); CWA = 1 MB (OLTP, Exchange), 128 KB (Backup-to-disk)

EMC CX700
Common: Total Cache = 8 GB, Sys Mem = 1650 MB, RC = 1 GB, WC = 2 GB, Prefetch = Variable, IO Size Mult = 4, Max Prefetch = 4096, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Hi Watermark = 80, Low Watermark = 60, SES = 64 KB, Sniff Verify = Enabled
Per workload: CPS = 2 KB (OLTP), 4 KB (Exchange), 16 KB (Backup-to-disk); CWA = 1 MB (OLTP, Exchange), 128 KB (Backup-to-disk)

NetApp FAS3050c
All workloads: Total ECC Memory = 8 GB, Total NVRAM = 1 GB, Snapshot space = 0, Other = Data ONTAP 7.2 default settings

IBM DS4800
Common: Total Cache = 16 GB, RC = Not Tunable, WC = Not Tunable, ReadAhead = 1, RC Enabled = Y, WC Enabled = Y, Mirrored WC = Y, Flush Start = 80, Flush Stop = 80, Seg Size = 64 KB, Media Scan = Disabled
Per workload: CBS = 4 KB (OLTP), 16 KB (Exchange, Backup-to-disk)

HP EVA8000
Common: Total Cache = 8 GB, RC = 2 GB, WC = 1 GB, Control Cache = 2 GB, RC Enabled = Y, Mirrored WC = Y, Redundancy = VRAID5, Preferred Path = Balanced
Per workload: Write cache mode = Write-Back (OLTP, Exchange), Write-Through (Backup-to-disk)
