Exploiting New IBM Hardware Announcements to Optimize Storage Performance
Presented by Tim Conley (ATS Group)
Storage Session
Lecture Presentation
Session Title: Exploiting New IBM Hardware Announcements to Optimize Storage Performance
Abstract: Come see some of the latest hardware and software releases from IBM and the performance benefits that can be gained by upgrading to the new hardware announced at Edge 2014, specifically the new SVC DH8 nodes and the V7000 Gen2 platform, both running the new 7.3 code stream. We will provide real-world workload results on the new hardware compared to the previous generation of hardware and software. We will also present the latest FlashSystem 840 performance, utilized behind the virtualized layers. Customer benchmark results will be reviewed to give the audience examples of how their workloads may be enhanced.
Products: SAN Volume Controller (DH8 nodes), V7000 Gen2, FlashSystem 840, SONAS, and V7000 Unified.
ATS – IBM Innovation Center, Malvern, PA
Technologies: SVC, V7000, SONAS, XIV, PureSystems, POWER8, ESX, GPFS, etc.
ATS – IBM Beta Program
ATS has had the opportunity to participate in IBM Beta programs since 2010. Current Beta programs:
- SVC DH8 nodes
- V7000 Gen2
- Storwize 7.3 code
- SONAS 1.5
- V7000 Unified 1.5
ATS – Performance Testing
Performance testing:
- SVC DH8 nodes
  - 7.3 code
  - Use of one and two 4-port FC cards
  - Compression with accelerator card
- V7000 Gen2
  - 7.3 code
  - Compression with accelerator card
- SVC CG8 nodes
  - 7.2 code testing
  - 7.3 code testing
- SONAS
  - 1.4 code testing
  - 1.5 code testing
- V7000 Unified
  - 1.4 code testing
  - 1.5 code testing
New Hardware
[Photo: SVC DH8 – front view]
[Photo: V7000 – front view]
New Hardware
New 4-port Fibre Channel card
- Up to three (3) cards, 12 ports (all usable)
Compression Accelerator adapter
- Up to two (2), minimum one (1); the V7000 Gen2 takes one (1)
Easy Tier 3
Tier 0   Tier 1   Tier 2
SSD      ENT      NL
SSD      ENT      NONE
SSD      NL       NONE
NONE     ENT      NL
SSD      NONE     NONE
NONE     ENT      NONE
NONE     NONE     NL
- Supports any combination of the three tiers
- Tiers data up/down
- Load balances within a tier (see the sketch below)
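To make the table concrete, here is a minimal sketch (our own illustration, not IBM's implementation; the helper name is hypothetical) that checks a proposed pool layout against the combinations listed above:

# Direct transcription of the Easy Tier 3 table above; None marks an absent tier.
SUPPORTED_COMBOS = {
    ("SSD", "ENT", "NL"),
    ("SSD", "ENT", None),
    ("SSD", "NL", None),
    (None, "ENT", "NL"),
    ("SSD", None, None),
    (None, "ENT", None),
    (None, None, "NL"),
}

def easy_tier_supported(tier0, tier1, tier2):
    """True if the (Tier 0, Tier 1, Tier 2) layout appears in the table."""
    return (tier0, tier1, tier2) in SUPPORTED_COMBOS

print(easy_tier_supported("SSD", "ENT", "NL"))  # True: full three-tier pool
print(easy_tier_supported("SSD", "NL", None))   # True: with no ENT drives, NL acts as Tier 1

Note that with only SSD and NL drive classes present, NL occupies the Tier 1 slot rather than Tier 2.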
Performance Testing
Storage Cloud – Example #1
[Diagram: a 3.5 PB government private storage cloud. Two SVC clusters (SVC3 and SVC4) each virtualize a V7000 Tier 0 layer and existing DS8000s with 1.6 PB of FC Tier 1 drives; a DS8000 provides 320 TB of nearline Tier 2 drives (1.6 PB + 1.6 PB + 320 TB ≈ 3.5 PB).]
Performance Testing
Test setup is as follows:
» Base 2145-CG8s (no RtC or HBA hardware additions); 2-node cluster running 7.2.0.4.
» 60% reads, 4K, 50:50 random:sequential
» 40% writes, 32K, 50:50 random:sequential
» Three Windows 2008 R2 VMs are used, each running on a separate ESXi host.
» ESXi multipathing is set to round robin for all LUNs.
» Each VM is given four 100 GiB volumes, mapped to the VMs as raw device mappings (RDMs).
» Iometer is used to generate the workload against the raw SCSI disks, with no file systems on the volumes. It is configured to use one worker thread per LUN and 30 outstanding I/Os per LUN.
» Back-end SVC mdisks are all FlashSystem 840 volumes.
» At the start of the test, no compressed volumes exist on the SVC.
Performance Testing
» Running on the same hardware, SVC 7.3 sustained ~200 MiB/s more throughput:
» 1.3GB/sec vs 1.5GB/sec
[Chart: CG8 performance test, SVC 7.2 vs 7.3]
Performance Testing
» SVC 7.3 sustained ~15,000 more IOps:
» 85K IOps vs 100K IOps
[Chart: 7.2 vs 7.3]
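As a quick sanity check (our own arithmetic, not part of the deck), the quoted IOps and throughput figures are mutually consistent with the 60/40 read/write mix at 4K/32K from the test setup:

# Average I/O size for the stated mix: 60% reads at 4 KiB, 40% writes at 32 KiB.
avg_io_kib = 0.60 * 4 + 0.40 * 32          # = 15.2 KiB per I/O

for label, iops in [("SVC 7.2", 85_000), ("SVC 7.3", 100_000)]:
    gb_s = iops * avg_io_kib * 1024 / 1e9  # decimal GB/sec
    print(f"{label}: {iops:,} IOps -> ~{gb_s:.1f} GB/sec")
# Prints ~1.3 and ~1.6 GB/sec, in line with the reported 1.3 and 1.5 GB/sec.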
Performance Testing
» SVC 7.3 maintained ~1 ms lower write service times; read service times were essentially unchanged:
[Chart: 7.2 vs 7.3]
Performance Testing
» Node CPU utilization was up ~7%:
» Overall performance is the same or better on 7.3 vs. 7.2 on the same CG8 hardware
» Our results indicate that SVC 7.3 performs as well as or better than SVC 7.2 on the same hardware
[Chart: 7.2 vs 7.3]
Performance Testing
» As with 7.2 on the CG8s, the creation of a single compressed volume in the I/O group caused a decrease in throughput and IOps:
[Chart: 7.2 vs 7.3]
Performance Testing
» As with 7.2 on the CG8s, the creation of a single compressed volume in the I/O group caused an increase in service times:
[Chart: 7.2 vs 7.3]
Performance Testing
» As with 7.2 on the CG8s, the creation of a single compressed volume in the I/O group caused an increase in CPU utilization:
[Chart: 7.2 vs 7.3]
Performance Testing
» Result Summary
» Turning on compression on an SVC that is already highly utilized (CPU) will hurt performance
» Compression is best used at lower SVC CPU utilization
» Compression functionality requires the additional second-CPU feature in the DH8 nodes
» The example below shows a real-world customer with compression active and zero impact on write service times
» Saving 43 TB with compression in use
Performance Testing
» Running with the same level of code, the DH8 nodes sustained ~200 MiB/s more throughput:
» 1.5GB/sec vs 1.7GB/sec
[Chart: CG8 vs DH8]
Performance Testing
» The DH8 nodes sustained ~12,000 more IOps:
» 100K IOps vs 112K IOps
[Chart: CG8 vs DH8]
Performance Testing
» The DH8 nodes maintained similar read service times and ~1 ms better write service times:
[Chart: CG8 vs DH8]
Performance Testing
» Node CPU utilization was significantly lower, as expected:
» 92% vs 65%
[Chart: CG8 vs DH8]
Performance Testing
» Connected two additional FC ports per node (2nd 4-port adapter) for inter-node communication only.
» Throughput increased by ~200 MiB/s, 1.7GB/sec vs 1.9GB/sec
[Chart: DH8]
Performance Testing
» Transfers per second increased by ~10,000 IOps:
» 112K IOps vs 122K IOps
[Chart: DH8]
Performance Testing
» Even with the additional workload, write service times decreased ~0.4 ms; read service times dropped ~0.1 ms:
[Chart: DH8]
Performance Testing
» By offloading some of the cache-mirroring work from the original four FC ports, those ports exhibited lower utilization and therefore more headroom for host and storage traffic:
» 450-500MB/sec down to 300MB/sec
[Chart: DH8, 6 ports vs 4 ports]
Performance Testing
» Result Summary
» DH8 nodes with 4 FC ports outperform equivalent CG8 nodes by 15%
» DH8 nodes with 6 FC ports outperform equivalent CG8 nodes by 25%
» DH8 reduces latency by up to 20%
» DH8 still has 35% CPU headroom available vs. 8% on the CG8
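These summary figures can be reproduced, approximately, from the numbers quoted on the preceding slides; the back-of-the-envelope check below is ours, not from the deck:

def pct_gain(base, new):
    return (new - base) / base * 100

cg8 = (1.5, 100_000)          # GB/sec and IOps on SVC 7.3
dh8_4port = (1.7, 112_000)
dh8_6port = (1.9, 122_000)

print(f"4 FC ports: +{pct_gain(cg8[0], dh8_4port[0]):.0f}% GB/sec, "
      f"+{pct_gain(cg8[1], dh8_4port[1]):.0f}% IOps")  # ~13% / ~12%, the ~15% claim
print(f"6 FC ports: +{pct_gain(cg8[0], dh8_6port[0]):.0f}% GB/sec, "
      f"+{pct_gain(cg8[1], dh8_6port[1]):.0f}% IOps")  # ~27% / ~22%, the ~25% claim
print(f"CPU headroom: CG8 {100 - 92}% vs DH8 {100 - 65}%")  # 8% vs 35%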
Easy Tier Performance Testing
» Started Easy Tier in a 3-tier configuration.
» Shows data moving from the source Flash to XIV and DS5100.
Easy Tier Performance Testing
» Started Easy Tier in a 3-tier configuration.
» Slight latency increase overall after data movement; 85% of data moved off Flash.
Easy Tier Performance Testing
» Started Easy Tier load balancing.
» Shows data moving to new XIV volumes with zero latency impact; movement is immediate.
Easy Tier Performance Testing
» Updated STAT tool output.
Performance Testing
» Result Summary
» Easy Tier 3 promotes and demotes data among three tiers
» Performance at the volume level is not significantly changed
» Data is placed on the proper tiers
» Easy Tier in load-balancing mode immediately moves hot extents only
» The prior balancing mode had to move many more extents to properly balance data
Compression Performance Testing
» Simulated Oracle benchmark testing
» 100% random reads, for worst-case compression results
» 70/30 R/W ratio
» 175 MB/sec at 25K IOps (implied average I/O size derived below)
» AIX 7.1, Pure 460 node LPAR
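A quick derivation (ours, not from the deck) of the average transfer size implied by those two figures:

throughput_bytes = 175e6      # 175 MB/sec, decimal
iops = 25_000
avg_io_bytes = throughput_bytes / iops
print(f"Implied average I/O size: {avg_io_bytes / 1024:.1f} KiB")
# ~6.8 KiB, consistent with a small-block, Oracle-style workload.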
Compression Performance Testing
» CPU statistics for compression
» Shows the increase in CPU used for compression as data moves: 7% to 20%
Compression Performance Testing
» Compression testing on the V7000
» Slight increase in read service times: 0.4 ms vs 0.8 ms
Compression Performance Testing
» Compression testing on the V7000
» Zero impact on write service times; all 4 volumes identical
Compression Performance Testing
» Result Summary
» Zero impact on write service times
» Slight increase in read service times with a 100% random read workload, the worst-case scenario for compression
» More cores and processing available to significantly increase the overall compression workload
Questions & Answers
SaaS-built innovations by ATS to empower C-level management to IT administrators | © 2007-2011 ATS Group. All rights reserved.
Thank you!