
HPE Reference Architecture for Oracle 12c OLTP and OLAP workload on HPE Superdome X and HPE 3PAR StoreServ All Flash array

Reference Architecture


Contents

Executive summary
Introduction
Solution overview
Solution components
Hardware
Software
Oracle Database
HPE Application Tuner Express (HPE-ATX)
Best practices and configuration guidance for Oracle on HPE Superdome X and the HPE 3PAR StoreServ 8450 All Flash array
Workload description
Capacity and sizing
Analysis and recommendations
Implementing a proof-of-concept
HPE Database Performance Profiler (DPP)
Summary
Appendix A: Bill of materials
Appendix B: udev device permission rules
Appendix C: udev rules
Appendix D: /etc/sysctl.conf
Appendix E1: init.ora
Appendix E2: init.ora
Appendix F: multipath.conf
Appendix G: ATX script – used for all OLAP and 2-blade OLTP
Appendix H: ATX script – used for 4-blade OLTP only
Resources and additional links


Executive summary
Hybrid transactional and analytical processing is an emerging application architecture that “breaks the wall” between transaction processing and analytics1,2. Traditional architectures separated transactional and analytical systems. The need to respond to the inflection points within a business means that after-the-fact analysis is no longer adequate. Business inflection points are transient opportunities and, as such, need to be exploited in real time. Combining OLTP transactions with OLAP queries allows analytics to be run in real time on in-flight transactional data. Combining OLTP and OLAP requires a large, scale-up system architecture that is able to meet those additional demands.

Today’s businesses demand faster transaction processing speeds, scalable capacity, consolidation and increased flexibility. This Reference Architecture is intended to show customers the performance benefits they can expect from real-time analytics on the HPE Integrity Superdome X paired with HPE 3PAR StoreServ 8450 All Flash arrays running Oracle 12.2. We tested 1, 2 and 4-blade nPars with a data-intensive Oracle workload running both OLTP and OLAP instances, to showcase the linear scalability of the solution.

Hewlett Packard Enterprise is committed to helping customers get more thorough utilization out of their servers. HPE has created a software product, HPE Application Tuner Express (HPE-ATX), that helps customers do just that by reducing the latency introduced by accessing memory that is not local to the processor. During our testing, we used HPE-ATX both to start the database, homing the Oracle processes to specific processors, and to start the Oracle Listener, which allowed us to home remote connections to specific processors and their associated memory.

Target audience: This Hewlett Packard Enterprise Reference Architecture (RA) is designed for IT professionals who use, program, manage, or administer large databases that require high performance. Specifically, this information is intended for those who evaluate, recommend, or design new and existing IT high performance architectures. Additionally, CIOs may be interested in this document as an aid to guide their organizations to determine when to implement an Oracle OLTP environment alongside an Oracle In-Memory solution for their Oracle online analytical processing (OLAP) environments and the performance characteristics associated with those implementations.

This Reference Architecture describes testing completed in August 2017.

Document purpose: The purpose of this document is to describe a Reference Architecture that demonstrates the performance implications of deploying Oracle 12.2 running both an online transaction processing (OLTP) workload and an online analytical processing (OLAP) workload using Oracle In-Memory on the HPE Superdome X server and HPE 3PAR StoreServ 8450 All Flash array.

Disclaimer: Products sold prior to the separation of Hewlett-Packard Company into Hewlett Packard Enterprise Company and HP Inc. on November 1, 2015 may have a product name and model number that differ from current models.

Introduction
As compute and storage power increase, the price relative to the amount of resources consumed declines. As the price per gigabyte (GB) and the compute power required to collect and store data decline, businesses are able to economically store more data and automate more functions. The result of this additional automation is a vast amount of data which can be used to make decisions about where to guide the business in the future.

Customers have a choice in how to deploy their online analytical processing (OLAP).

The more traditional method is to have a system that is used as a “data generator”. This is generally an online transaction processing (OLTP) system implemented to run the day-to-day business. In this scenario, at prescribed intervals, businesses perform a data extract from the OLTP system and transfer the data to the OLAP system, on which analytics is then performed. There are, however, some caveats to using this type of scheme. Some of those issues are:

• Latency associated with when the data becomes available – this has the propensity to cause a business to miss near term trend lines, which could cause an oversight or delay the recognition of an inflection point.

1 Market Guide for HTAP-Enabling In-Memory Computing Technologies. 04/15/2017 gartner.com
2 Hybrid Transaction/Analytical Processing Will Foster Opportunities for Dramatic Business Innovation. 04/15/2017 gartner.com


• ETL (Extract, Transform and Load) – This is the process by which you extract data from the OLTP environment, transport it over to the OLAP environment, cleanse the data and load it into a data warehouse. There can be several issues with this process. As mentioned before, it introduces latency. There can also be problems with transforming the data from the OLTP representation to the OLAP representation.

• Having dual systems, one for OLTP and one for OLAP, increases the number of Oracle licenses required to perform the processing on each of the systems.

• Each of the systems must be provisioned such that they are capable of meeting the peak processing needs.

As systems and storage have become more capable and robust, a new way of solving this problem has emerged: combining the OLTP and OLAP processing on the same system.

The HPE Superdome X, paired with the HPE 3PAR 8450 All Flash array, makes the transition from separate systems to a single system feasible.

Combining the two types of processing has the following benefits:

• Reduced latency – Since the data to be analyzed is in situ, there is no latency involved with having to wait for a data extract. Once an OLTP transaction has completed, the data that has been modified is available for analysis and reporting.

• ETL – This process may be totally eliminated.

• Licenses – The number of Oracle licenses is reduced, because the total number of processing cores is reduced.

• The system only has to be provisioned for the peak OLTP workload plus any OLAP workload occurring at the same time, or vice versa. In fact, one workload could be eliminated in favor of the other, should the need arise.

The primary question this paper is meant to answer is: what is the performance profile of the resulting hybrid OLTP/OLAP system?

The HPE Superdome X is a modular, bladed design that scales from 1 to 8 blades, in configurations of 1, 2, 3, 4, 6 or 8 blades. Each blade has 2 processors, with each processor having between 4 and 24 cores. Each blade can be configured with up to 6TB of memory, meaning the HPE Superdome X can hold a total of 48TB of memory.

This paper explores:

• What happens if I run my OLTP during the day and then run my OLAP in the evening, effectively isolating the performance requirements? What, then, does the performance of my workloads look like as I expand the partition on which I am running the Oracle database?

• What happens if the OLTP environment is run at the same time as the OLAP environment? In that case, what is the impact to performance when those two disparate workloads are run at the same time?

Additionally, this paper will cover best practices for both OLTP and OLAP configurations and will address the following OLAP specific questions:

• How much memory do I need to migrate my tables or columns into memory?

• Oracle 12.2 offers four memory compression levels with the In-Memory feature: Compress for Query Low, Compress for Query High, Compress for Capacity Low, and Compress for Capacity High. This paper will compare the memory requirements for each compression type. It will further provide insight into the differing CPU requirements for each compression type. The compression types are described as follows (an illustrative example of selecting each level is shown after this list):

– Compress for Query Low – Optimized for query performance (default)

– Compress for Query High – Optimized for query performance as well as space savings

– Compress for Capacity Low – Balanced, with a greater bias towards space saving

– Compress for Capacity High – Optimized for space saving

• Concurrency impact when multiple users are performing queries at the same time.
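
For reference, the compression level is chosen per object when a table (or partition, or individual columns) is marked for the In-Memory column store. The statements below are a minimal illustrative sketch, not the exact commands used in this testing; the table name ORDERS is a placeholder, only one of the four MEMCOMPRESS clauses would normally be applied to a given object, and the 450G value simply matches the in-memory size used later in this paper for the 300GB schema.

-- Reserve the In-Memory column store area (static parameter; takes effect after a restart)
ALTER SYSTEM SET inmemory_size = 450G SCOPE=SPFILE;

-- Mark a (placeholder) table for in-memory population at a chosen compression level
ALTER TABLE orders INMEMORY MEMCOMPRESS FOR QUERY LOW;
ALTER TABLE orders INMEMORY MEMCOMPRESS FOR QUERY HIGH;
ALTER TABLE orders INMEMORY MEMCOMPRESS FOR CAPACITY LOW;
ALTER TABLE orders INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;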


Solution overview
For this Reference Architecture, HPE chose an HPE Superdome X server, testing with 1, 2 and 4 blades. Each blade had 3TB of memory installed, so the configurations had 3, 6 and 12TB of total memory respectively. Each blade had 2 X dual-port 16Gb Fibre Channel I/O cards. The HPE 3PAR StoreServ 8450 All Flash array was chosen for storage. For this RA, we used two 3PAR 8450 arrays due to the amount of storage consumed. We believe the performance of the total configuration would not have varied materially had only one HPE 3PAR StoreServ 8450 been used, rather than two.

Table 1 shows how the storage was configured and how the LUNs were used; an illustrative sketch of creating such LUNs follows the table.

Table 1. Storage device configuration and use

  Drive size   RAID   Provisioning type   Number of LUNs   Usage
  256GB        10     Thin                1                Boot volume and Oracle software
  756GB        10     Thin                16               Tablespaces and indexes
  41GB         10     Thin                16               Redo logs and the rollback tablespace
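
Although the array-side configuration steps are not reproduced in this paper, LUNs like those in table 1 are typically created and exported with the HPE 3PAR CLI. The commands below are an illustrative sketch only: the CPG name SSD_r1, the host name sdx_npar1 and the volume base names are assumptions, exact option syntax can vary by HPE 3PAR OS release, and in this RA the volumes were actually split across two arrays rather than created on one.

# Create a RAID-10 (RAID 1) CPG on the SSDs
createcpg -t r1 SSD_r1
# Create 16 thin-provisioned 756GB data volumes and 16 thin-provisioned 41GB redo volumes
createvv -tpvv -cnt 16 SSD_r1 DATA 756g
createvv -tpvv -cnt 16 SSD_r1 REDO 41g
# Create the 256GB boot / Oracle software volume
createvv -tpvv SSD_r1 BOOT 256g
# Export each volume to the Superdome X nPar (host definition assumed to exist;
# repeat, or use a volume set, for the remaining volumes)
createvlun DATA.0 auto sdx_npar1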

Figure 1 provides a detailed physical layout that shows how the server and storage were configured.


[Figure 1 diagram not reproduced. Its component callouts are:
• HPE Superdome X Gen9: 1 – 4 blades, each with 2 X Intel Xeon E7-8891 v4 10-core 2.8 GHz processors, 3TB memory, 2 X dual-port HPE QMH2672 16Gb FC HBAs and 1 X HPE FlexFabric 20Gb 2-port 650FLB adapter
• 2 X HPE SN6000B 16Gb FC SAN switches
• HPE 3PAR StoreServ 8450 (array 1): 4-node, 384 GiB of cache, 10 expansion shelves, 96 X 400GB MLC SSDs
• HPE 3PAR StoreServ 8450 (array 2): 4-node, 384 GiB of cache, 8 expansion shelves, 80 X 400GB MLC SSDs]

Figure 1. Physical layout of the HPE Superdome X Gen9 server and the HPE 3PAR StoreServ 8450 All Flash arrays


Solution components
The solution utilized an HPE Superdome X server connected to two HPE 3PAR StoreServ 8450 All Flash arrays. As stated earlier, we believe the performance of the total configuration would not have varied materially had only one HPE 3PAR StoreServ 8450 been used, rather than two. Each blade had two dual-port 16Gb Fibre Channel cards, one dual-port 20Gb FlexFabric HBA and 3TB of memory. The connections between the HPE Superdome X and the HPE 3PAR StoreServ 8450 arrays were made using the 16Gb Fibre Channel ports and a pair of HPE SN6000B SAN switches. The connection for IP access was made using the first port of the FlexFabric card on the first blade in the configuration.

We tested with 1, 2 and 4 blade configurations. The configurations had the following characteristics:

1 Blade:

• 2 x Intel® Xeon® E7-8891 v4 10-core processors with a clock speed of 2.8GHz

• 3TB of memory

• 2 X dual-port 16Gb Fibre Channel HBAs

• 1 X dual-port 20Gb FlexFabric HBA (only the first port was used)

2 Blade:

• 4 x Intel Xeon E7-8891 v4 10-core processors with a clock speed of 2.8GHz

• 6TB of memory

• 4 X dual-port 16Gb Fibre Channel HBAs

• 2 X dual-port 20Gb FlexFabric HBAs (only the first port of the HBA on blade 1 was used)

4 Blade:

• 8 x Intel Xeon E7-8891 v4 10-core processors with a clock speed of 2.8GHz

• 12TB of memory

• 8 X dual-port 16Gb Fibre Channel HBAs

• 4 X dual-port 20Gb FlexFabric HBAs (only the first port of the HBA on blade 1 was used)

We used two HPE 3PAR StoreServ All Flash arrays. The reason we used two was for capacity rather than performance considerations. The HPE 3PAR StoreServ 8450 All Flash arrays were configured as shown below:

HPE 3PAR StoreServ All Flash array 1:

• 4-Node

• 384GiB of cache

• 8 X 16Gb Fibre Channel ports

• 10 X Expansion Shelves

• 96 X 400GB MLC SSDs

HPE 3PAR StoreServ All Flash array 2:

• 4-Node

• 384GiB of cache

• 8 X 16Gb Fibre Channel ports


• 8 X Expansion Shelves

• 80 X 400GB MLC SSDs

Software:

• Oracle 12c Enterprise Edition – Version 12.2.0.1.0

• Red Hat® Enterprise Linux® (RHEL) – Version 7.3

• HammerDB – Version 2.21

Hardware

HPE Superdome X

Figure 2. HPE Superdome X Enclosure populated with 8 server blades

HPE Superdome X is an HPE server that represents a new category of x86 modular, mission-critical systems to consolidate all tiers of critical applications on a common platform. Engineered with trusted HPE Superdome 2 reliability, the HPE Superdome X includes a modular, bladed design, and shares HPE BladeSystem efficiencies including a common server management framework, supported from x86 to the HPE Superdome 2. With breakthrough innovations such as the fault-tolerant Crossbar Fabric and Error Analysis Engine coupled with hard partitioning capabilities, the HPE Superdome X sets the standard for mission-critical x86 computing.

HPE Superdome X offers scalability that surpasses the market, flexibility through HPE nPars, and mission critical RAS functionality. In summary:

• Support for up to 16 Intel Xeon E7 v4 and E7 v3 family processors

• 384 DIMM slots with up to 48TB of DDR4 memory, providing a large memory footprint for the most demanding applications

• 16 FlexLOM slots (2 per blade) providing LAN on motherboard configuration flexibility

• 24 Mezz PCIe gen3 slots (3 per blade) for maximum IO bandwidth connectivity to LAN, SAN, and InfiniBand

• Built-in shared DVD


• HPE nPars: 1, 2, 3, 4, 6³ or 8 blades, with multiple nPars supported for greater system reliability and licensing optimization

• Error Analysis Engine self-heals by driving response to failures, minimizing human error

The HPE BladeSystem Superdome Enclosure is the building block for the HPE Superdome X. Each compute enclosure supports 15 x fans, 12 x power supplies, associated power cords, 2 x Onboard Administrator (OA) modules, 2 x Global Partitioning Services Modules (GPSM), and 4 x HPE Crossbar Fabric Modules (XFMs). Configurations of 1 to 8 blades can be populated in an enclosure, with support for hard partitions (nPars) containing 1, 2, 3, 4, 6³ or 8 blades. Multiple HPE nPars of different sizes are supported within a single enclosure.

HPE BL920s Server Blade

Figure 3. HPE BL920s Server Blade

Each server blade has the following specifications:

• Includes two Intel E7 v4 or E7 v3 CPUs (The included CPU depends on which blade model is purchased.)

• 48 DIMM slots for DDR4 memory

• 3 Mezzanine slots (1 x8, 2 x16) PCIe gen3

• 2 FlexLOM slots

• The XNC2 chipset to enable smooth, high-performance scalability from 2 to 16 sockets in addition to mission-critical class RAS features.

HPE BL920s Gen9 Server Blade with Intel Xeon v4 and v3 processors
Each server blade supports two of the following processors:

Using Intel Xeon v4 processors:

• Intel Xeon Processors: E7-8894 v4 24-core/2.4 GHz/165W/60MB

• Intel Xeon Processors: E7-8893 v4 4-core/3.2 GHz/140W/60MB

• Intel Xeon Processors: E7-8891 v4 10-core/2.8 GHz/165W/60MB

• Intel Xeon Processors: E7-8890 v4 24-core/2.2 GHz/165 W/60MB

3 BL920 Gen9 (v4) blades only


• Intel Xeon Processors: E7-8880 v4 22-core/2.2 GHz/150 W/55MB

• Intel Xeon Processors: E7-8855 v4 14-core/2.1 GHz/140 W/35MB

Memory options:

• HPE 128GB (4 x 32GB) DDR4-2400 CAS-17 LRDIMM Memory Kit

• HPE 256GB (4 x 64GB) DDR4-2400 CAS-17 LRDIMM Memory Kit

• HPE 512GB (4 x 128GB) DDR4-2400 CAS-17 LRDIMM Memory Kit

Using Intel Xeon v3 processors, each server blade supports two of these processors:

• Intel Xeon Processor E7-8891 v3 10-core/2.8 GHz/165 W/45 MB

• Intel Xeon Processor E7-4850 v3 14-core/2.2 GHz/115 W/35 MB

• Intel Xeon Processor E7-8880 v3 18-core/2.3 GHz/150 W/45 MB

• Intel Xeon Processor E7-8890 v3 18-core/2.5 GHz/165 W/45 MB

• Intel Xeon Processor E7-8893 v3 4-core/3.2 GHz/140 W/45 MB

Memory options:

• HPE 64GB (4 x 16GB) DDR4-2133 CAS-15-15-15 LRDIMM Memory Kit

• HPE 128GB (4 x 32GB) DDR4-2133 CAS-15-15-15 LRDIMM Memory Kit

HPE 3PAR StoreServ All Flash array

Figure 4. HPE 3PAR StoreServ 4-node All Flash

HPE 3PAR StoreServ 8000 storage delivers the performance advantages of a purpose-built, flash optimized architecture without compromising resiliency, data services, or data mobility. A flash optimized architecture reduces the performance bottlenecks that can choke hybrid and general-purpose disk arrays. However, unlike other purpose-built flash arrays, HPE 3PAR StoreServ 8000 does not require you to introduce an entirely new architecture into your environment to achieve flash optimized performance. As a result, you don’t have to sacrifice rich, Tier-1 data services, quad-node resiliency, or flexibility to get midrange affordability. A choice of all flash, converged flash, and tiered flash models gives you a range of options that support true convergence of block and file protocols, all flash array performance and the use of spinning media to further optimize costs.

Page 11: HPE Reference Architecture for Oracle 12c OLTP and OLAP ......HPE Reference Architecture for Oracle 12c OLTP and OLAP workload on HPE Superdome X and HPE 3PAR StoreServ All Flash a

Reference Architecture Page 11

The HPE 3PAR StoreServ Architecture was designed to provide cost-effective single-system scalability through a cache-coherent, multi-node clustered implementation. This architecture begins with a multifunction node design and, like a modular array, requires just two initial controller nodes for redundancy. However, unlike traditional modular arrays, enhanced direct interconnects are provided between the controllers to facilitate Mesh-Active processing. Unlike legacy Active/Active controller architectures—where each LUN (or volume) is active on only a single controller—this Mesh-Active design allows each LUN to be active on every controller in the system, thus forming a mesh. This design delivers robust, load-balanced performance and greater headroom for cost-effective scalability, overcoming the trade-offs typically associated with modular and monolithic storage arrays.

With rich capabilities, the lowest possible cost for all flash performance, and non-disruptive scalability to four nodes, HPE 3PAR StoreServ 8000 storage eliminates tradeoffs. You no longer need to choose between affordability and Tier-1 resiliency or flash optimized performance and Tier-1 data services. That’s because HPE 3PAR StoreServ 8000 storage shares the same flash optimized architecture and software stack with the entire family of HPE 3PAR StoreServ arrays, so you’ll not only get an industry-leading storage platform, but a storage platform that you can grow into, not out of.

When combined with high-density SSDs, HPE 3PAR StoreServ compaction and compression technologies lower the cost of flash storage to below that of traditional 10K spinning media.

In cases where there is a large amount of duplicate data, HPE 3PAR StoreServ Thin Deduplication software also improves write throughput and performance. Other storage architectures that support deduplication are not able to offer these benefits at the same capacity and scale at the same performance level.4

HPE 3PAR StoreServ compression technology is particularly useful for Oracle databases. In an Oracle proof of concept, HPE was able to achieve a 2 to 1 compression ratio. Compression is part of Adaptive Data Reduction, a collection of technologies that come standard with 3PAR StoreServ designed to reduce data footprint. Adaptive Data Reduction includes Zero Detect, deduplication, compression and Data Packing. When used alone or in combination, these technologies maximize flash capacity, reduce total cost, and improve flash media endurance.

Unique technologies extend your flash investments
HPE innovations around flash not only help bring down the cost of flash media, but HPE 3PAR Gen5 Thin Express ASICs within each node also provide an efficient, silicon-based, zero-detection mechanism that “thins” your storage and extends your flash media investments. These ASICs power inline de-duplication for data compaction that removes allocated but unused space without impacting your production workloads, which has the added benefit of extending the life of flash-based media by avoiding unnecessary writes. The unique adaptive read and write feature also serves to extend the life of flash drives by automatically matching host I/O size for reads and writes.

In addition, while other architectures generally reserve entire drives as spares, the HPE 3PAR StoreServ architecture reserves spare chunklets within each drive. Sparing policies are adjusted automatically and on the fly to avoid using flash for sparing, thus lengthening media lifespan and helping to drive down performance costs. A five-year warranty on all HPE 3PAR StoreServ flash drives protects your storage architecture investment.

Databases
Database performance and availability are so critical that many organizations deploy generous capacity and hire expensive management resources to maintain the required service levels. HPE 3PAR StoreServ storage removes these inefficiencies. For example, with HPE 3PAR StoreServ Thin Persistence software and the Oracle Automatic Storage Management (ASM) Storage Reclamation Utility (ASRU), your Oracle databases stay thin by automatically reclaiming stranded database capacity. HPE also offers cost-effective Oracle-aware snapshot technologies, which also benefit from HPE 3PAR StoreServ compression cost savings.

Quality of Service (QoS)
Quality of service (QoS) is an essential component for delivering modern, highly scalable multi-tenant storage architectures. The use of QoS moves advanced storage systems away from the legacy approach of delivering I/O requests with “best effort” in mind and tackles the problem of “noisy neighbors” by delivering predictable tiered service levels and managing “burst I/O” regardless of other users in a shared system. Mature QoS solutions meet the requirements of controlling service metrics such as throughput, bandwidth, and latency without requiring the system administrator to manually balance physical resources. These capabilities eliminate the last barrier to consolidation by delivering assured QoS levels without having to physically partition resources or maintain discrete storage silos.

4 Subject to qualification and compliance with the HPE 3PAR Get Thinner Guarantee Program Terms and Conditions, which will be provided by your HPE Sales or Channel Partner representative


HPE 3PAR StoreServ Priority Optimization software enables service levels for applications and workloads as business requirements dictate, enabling administrators to provision storage performance in a manner similar to provisioning storage capacity. This allows the creation of differing service levels to protect mission-critical applications in enterprise environments by assigning a minimum goal for I/O per second and bandwidth, and by assigning a latency goal so that performance for a specific tenant or application is assured. It is also possible to assign maximum performance limits on workloads with lower service-level requirements to make sure that high-priority applications receive the resources they need to meet service levels.

HPE 3PAR Adaptive Data Reduction technologies

HPE 3PAR Thin Provisioning
Thin provisioning allows a volume to be created and made available to a host without the need to dedicate physical storage until it is actually needed. HPE 3PAR Thin Provisioning software has long been considered the gold standard in thin provisioning for its simplicity and efficiency. It is the most comprehensive thin provisioning software solution available, allowing enterprises to purchase only the disk capacity they actually need and only when they actually need it.

HPE 3PAR Compression
Compression works by examining data streams for opportunities to reduce the size of the actual data. The HPE 3PAR inline lossless compression algorithm is specifically designed to work on a flash-native block size to drive efficiency and performance, and it leverages a series of different technologies to offer the highest possible savings.

HPE 3PAR Deduplication
With the increasing use of SSD media, deduplication for primary storage arrays has become critical. The cost differential between SSDs and hard disk drives (HDDs) requires compaction technologies like thin provisioning and deduplication to make flash-based media more cost-efficient. The widespread deployment of server virtualization is also driving the demand for primary storage deduplication.

HPE 3PAR Thin Persistence
HPE 3PAR Thin Persistence software is an optional feature that keeps thin provisioned virtual volumes (TPVVs) and read/write snapshots of TPVVs small by detecting pages of zeros during data transfers and not allocating space for the zeros. This feature works in real time and analyzes the data before it is written to the source TPVV or read/write snapshot of the TPVV. Freed blocks of 16 KB of contiguous space are returned to the source volume, and freed blocks of 128 MB of contiguous space are returned to the CPG for use by other volumes.

HPE 3PAR Peer Persistence
HPE 3PAR Peer Persistence can be deployed to provide customers with a highly available stretched cluster, a cluster that spans two data centers. A stretched Oracle RAC cluster with HPE 3PAR Peer Persistence protects services from site disasters and expands storage load balancing to the multi-site data center level. The stretched cluster can span metropolitan distances (up to 5 ms roundtrip latency for the Fibre Channel [FC] replication link, generally about a 500 km roundtrip) allowing administrators to move storage workloads seamlessly between sites, adapting to changing demand while continuing to meet service-level requirements.

HPE 3PAR Peer Persistence combines HPE 3PAR StoreServ storage systems for multi-site level flexibility and availability. HPE Remote Copy synchronous replication between arrays offers storage disaster tolerance. HPE Remote Copy is a component of HPE 3PAR Peer Persistence. Peer Persistence adds the ability to redirect host I/O from the primary storage system to the secondary storage system transparently.


Figure 5 is a pictorial representation of fine-grained virtualization and system-wide striping.

Figure 5. HPE 3PAR StoreServ 8400 data layers

Software

Oracle Database
Oracle Database 12c is available in a choice of editions that can scale from small to large single servers and clusters of servers. The available editions are:

• Oracle Database 12c Standard Edition 2: Delivers unprecedented ease-of-use, power and price/performance for database applications on servers that have a maximum capacity of two sockets5.

• Oracle Database 12c Enterprise Edition: Available for single or clustered servers with no socket limitation. It provides efficient, reliable and secure data management for mission-critical transactional applications, query-intensive big data warehouses and mixed workloads5.

For this paper, Oracle Database 12c Enterprise Edition, which is required in order to utilize the In-Memory Option, was installed. In addition to all of the features available with Oracle Database 12c Standard Edition 2, Oracle Database 12c Enterprise Edition has the following options:

• Oracle Active Data Guard

• Oracle Advanced Analytics

• Oracle Advanced Compression

• Oracle Advanced Security

• Oracle Database In-Memory

• Oracle Database Vault

• Oracle TimesTen Application-Tier Database Cache

• Oracle Label Security

• Oracle Multitenant

• Oracle On-line Analytical Processing

5 Source: Oracle Database 12c Product Family white paper

For more information, refer to: https://docs.oracle.com/cd/B28359_01/license.111/b28287/editions.htm


• Oracle Partitioning

• Oracle Real Application Clusters

• Oracle RAC One Node

• Oracle Real Application Testing

• Oracle Spatial and Graph

The following figure, figure 6, is a graphical depiction of Oracle’s dual-format architecture. Oracle maintains the table as a row-based table and, when it is migrated to memory, also maintains a columnar store6. A query for checking columnar-store population is shown after the figure.

Figure 6. Oracle dual-format architecture
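
To verify which objects have been populated into the In-Memory column store, and at what compression level, the standard dynamic performance view V$IM_SEGMENTS can be queried. The query below is a minimal sketch; the column selection is illustrative.

-- Show population status and compression level of in-memory segments
SELECT segment_name,
       inmemory_compression,
       populate_status,
       bytes               AS on_disk_bytes,
       inmemory_size       AS in_memory_bytes,
       bytes_not_populated
FROM   v$im_segments;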

HPE Application Tuner Express (HPE-ATX)
HPE Application Tuner Express (HPE-ATX) is a utility that enables HPE Linux customers’ applications to achieve maximum performance while running on larger x86 servers. With HPE-ATX, application execution is aligned with the data in memory, resulting in increased performance even on servers with four sockets.

Since HPE-ATX runs alongside your applications, no changes to your applications are required to realize the performance benefits of HPE-ATX.

These performance gains are possible because many x86 applications in use today were designed for older 2- and 4-socket systems. Scaling these applications onto systems with more sockets was never designed in, leading to significant application performance issues on large systems.

HPE-ATX helps applications run much more efficiently and perform better in larger system configurations.

HPE-ATX has a number of configuration policies (a rough conceptual analogy using standard Linux tooling follows this list). They are:

• Round Robin Tree – Use a round robin distribution for processes. Include the root process/thread and all of its descendants.

• Round Robin Flat – Use a round robin distribution for processes. Include the root process/thread and only its direct descendants.

• Fill First Tree – Completely allocate all cores and threads within a physical processor then move on to the next processor. Include the root process/thread and all of its descendants.

• Fill First Flat – Completely allocate all cores and threads within a physical processor then move on to the next processor. Include the root process/thread and only its direct descendants.

• Pack – The root process/thread and all of its descendants will be launched on the same NUMA node/processor.
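
The HPE-ATX commands used to launch the Oracle database and listener in this testing are reproduced in Appendices G and H. As a rough conceptual analogy only (this is not HPE-ATX syntax), the standard Linux numactl utility can pin a launched process and its children to a single NUMA node, which is similar in spirit to the Pack policy described above; the script name below is a placeholder.

# Bind the launched process tree to the CPUs and memory of NUMA node 0
numactl --cpunodebind=0 --membind=0 ./start_app.sh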

HPE-ATX is fully supported by HPE and can be downloaded from HPE Software Depot.

6 Contained in the Oracle Database In-Memory with Oracle 12c Release 2 Technical Overview, dated August 2017: oracle.com/technetwork/database/in-memory/overview/twp-oracle-database-in-memory-2245633.pdf


Best practices and configuration guidance for Oracle on HPE Superdome X and the HPE 3PAR StoreServ 8450 All Flash array

HPE Superdome X Compute Module BIOS settings
• Hyper-Threading – Enabled

• Intel Turbo Boost – Enabled

RHEL configuration
• Create udev rules to set the following device options for the LUNs and the required settings for the Oracle volumes (per the values in Appendices B and C); a minimal illustrative sketch follows this list.

– Set the sysfs “rotational” value for SSD disks to 0.

– Set the sysfs “rq_affinity” value for each device to 2.

– Set “I/O scheduler” to noop.

– Set the tuned profile to throughput-performance.

– Set permissions and ownership for Oracle volumes.
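
The full rules used in this testing are listed in Appendices B and C. The fragment below is only a minimal sketch of what such rules look like; the file name, the device match patterns and the multipath alias data01 are assumptions, not the values used in this RA.

# /etc/udev/rules.d/99-oracle-storage.rules (illustrative)
# Mark the array LUNs as non-rotational, set rq_affinity to 2 and use the noop I/O scheduler
ACTION=="add|change", KERNEL=="sd*", ATTR{queue/rotational}="0", ATTR{queue/rq_affinity}="2", ATTR{queue/scheduler}="noop"
# Give the Oracle user ownership of an ASM device (multipath alias is a placeholder)
ACTION=="add|change", ENV{DM_NAME}=="data01", OWNER="oracle", GROUP="dba", MODE="0660"

The tuned profile is set once from the shell:

tuned-adm profile throughput-performance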

HPE 3PAR StoreServ 8450 All Flash array space allocation
We found during testing that configuring the redo logs and the undo tablespace as RAID-5 LUNs provided a slight performance advantage over configuring them as RAID-10. However, Oracle recommends nothing less than RAID-10 and, as a result, that is how we configured these spaces.

The HPE 3PAR StoreServ 8450 All Flash arrays had the following LUN definitions (a sketch of the corresponding ASM disk group creation follows this list):

• 16 x 756GB RAID-10 LUNs for the database, tablespaces, indexes and the temporary tablespace, which were split identically across the two arrays. This was labeled within Oracle ASM as DATA.

• 16 x 41GB RAID-10 LUNs for the redo log and the undo tablespace, which were also split identically across the two arrays. This was labeled within Oracle ASM as REDO.

• 1 X 256GB RAID-10 LUN for the boot volume, RHEL and the Oracle software
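
Once the LUNs are presented to the host (see the multipath configuration in Appendix F), the corresponding ASM disk groups can be created. The statements below are a minimal sketch run from the ASM instance: the /dev/mapper device names are placeholders for the multipath aliases, and external redundancy is shown only as an assumption (the arrays already provide RAID-10 protection), not as a statement of how the tested disk groups were defined.

-- Disk group for the database, tablespaces, indexes and temp (16 x 756GB LUNs)
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/mapper/data*';

-- Disk group for the redo logs and the undo tablespace (16 x 41GB LUNs)
CREATE DISKGROUP REDO EXTERNAL REDUNDANCY DISK '/dev/mapper/redo*';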

Oracle configuration best practices
The Oracle database configuration highlights were as follows (an illustrative parameter and DDL sketch follows this list):

• Set the SGA large enough to include the specified in-memory area plus a standard Oracle database buffer cache that is big enough to avoid as many physical reads as possible. During testing, 200GB was found to be the optimal size for the SGA, apart from the amount of in-memory space consumed.

When testing with the 300GB schema, the buffer cache size was set to 650GB. When using the Oracle In-Memory Option with the 300GB schema, the in-memory size was set to 450GB. The entire schema was able to load into memory with this setting when using the Oracle Compress for Query Low compression algorithm. We found during testing that OLTP performance peaked when using 200GB of SGA space for the rest of the memory structures, apart from the in-memory space.

• Create two large redo log file spaces of 300GB to minimize log file switching and reduce log file waits.

Notes:

1. Customer implementations should create their log files at a size that will cause a log file switch to occur at a frequency that meets their business need.

2. Writes to the redo log files were limited during OLAP testing, but heavy during OLTP testing.

• Create an undo tablespace of 200GB.

• Create a temporary tablespace of 1TB.


• Set the number of processes to a level that will allow all intended users to connect and process data. During the testing, we used 3000 for this parameter, although we never approached this number during the test sequence.

• Set the number of open cursors to a level that will not constrict Oracle processes. This was set to 3000 during testing. Again, we never approached this number of open cursors during the tests.
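
The complete parameter files used in this testing are reproduced in Appendices E1 and E2. The fragment below is a minimal sketch that maps the sizing guidance above to concrete settings; it is one reasonable interpretation (for the 300GB-schema In-Memory configuration), not a copy of the appendices, and the disk group names assume the DATA and REDO groups described earlier.

# init.ora fragment (illustrative)
sga_target    = 650G
inmemory_size = 450G
processes     = 3000
open_cursors  = 3000

-- Redo logs, undo and temporary tablespace, sized per the guidance above
ALTER DATABASE ADD LOGFILE ('+REDO') SIZE 300G;
ALTER DATABASE ADD LOGFILE ('+REDO') SIZE 300G;
CREATE BIGFILE UNDO TABLESPACE undotbs1 DATAFILE '+REDO' SIZE 200G;
CREATE BIGFILE TEMPORARY TABLESPACE temp TEMPFILE '+DATA' SIZE 1T;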

HPE-ATX best practices
• Start the Oracle Listener process using the Round Robin Flat policy.

• Start the Oracle Database processes using the Round Robin Tree policy7.

Workload description
HPE tested Oracle using HammerDB, an open-source tool. For this test sequence, HammerDB was used to implement an OLAP-type workload as well as an OLTP workload.

For the online analytical processing (OLAP) workload, with the exception of sorting and the update function, the entire test is based on reading a large amount of data. This test is meant to emulate a Decision Support System (DSS), which represents a typical workload of business users inquiring about the performance of their business. This test is represented by a set of business focused ad-hoc queries and the tests were measured by the amount of time taken to complete each discrete query as well as the amount of time to complete all of the queries. In all, 22 separate queries are part of this test scenario. The timed results were normalized and used to compare test configurations. Other metrics measured during the workload came from the operating system.

The tests were performed on a schema size of 300GB.

We used two different connection counts for our OLAP tests, 1 and 5 users. The reason for using the two connection counts is to give the reader a feel for the level of scaling available.

For the online transaction processing (OLTP) portion of the workload, HammerDB provides a real-world type scenario that consumes both CPU for the application logic and I/O. The HammerDB tool implements an OLTP-type workload (60 percent read and 40 percent write) with small I/O sizes of a random nature. The transaction results were normalized and used to compare test configurations. Other metrics measured during the workload came from the operating system and/or standard Oracle Automatic Workload Repository (AWR) statistics reports.

The OLTP test, performed on a schema with 5,000 warehouses and 1.8TB in size, was both highly CPU and moderately I/O intensive. The environment was tuned for maximum user transactions. After the database was tuned, the transactions were recorded at different connection levels. Because customer workloads vary in characteristics, the measurement was made with a focus on maximum transactions.

We used several different Oracle connection counts for our OLTP tests. The results of various user count tests can be seen in the following graphs.

7 The command to launch the Oracle Database using HPE-ATX is included in Appendices G and H.


Standalone OLTP results
The following graph, figure 7, shows the performance of OLTP transactions as we scale from 1 to 4 blades in the configuration. The results have been normalized, such that the 1-blade, 50 connection result has been set to 100%. All other percentages are scaled relative to that result. When running the 4-blade tests, a second instance was added to the configuration to address locking contention associated with the HammerDB test scenario.

Figure 7. Standalone OLTP results relative to the 1-blade, 50 connection count outcome.

[Chart: “OLTP Results – Based on 1 Blade 50 Connection Count”. X-axis: Number of Oracle Connections (50–300); Y-axes: Percentage Transactional Throughput and Percentage CPU Utilization. Series: 1 Blade, 2 Blades, 4 Blades, plus Average CPU for the 1-, 2- and 4-blade configurations. Callout: up to a 200% improvement in performance with the 4-blade nPar.]


Standalone OLAP results
The next graph, figure 8, shows what happens when we run a 5 user, 32 thread OLAP workload. The 1-blade metric is set to 100% for each compression type. All 2-blade and 4-blade results are relative to the 1-blade outcome.

Figure 8. Standalone OLAP results based on each compression type 1 blade outcome, 5 users, 32 threads.

As you can see from the above graph, in every instance, the times were reduced by adding blades into the configuration.

[Chart: “300GB 5 User Schema 32 Thread Time to Solve – Based on 1 blade for each compression type”. X-axis categories: Query Low, Query High, Capacity Low, Capacity High; Y-axis: Relative Performance (0%–300%). Series: 1 Blade, 2 Blade, 4 Blade. Callout: a 190% performance improvement between a 1-blade and a 4-blade configuration.]


Next, let’s look at a 5 user, 64-thread OLAP run in figure 9. Here we’re beginning to see the true power of being able to expand the server to meet the performance requirements.

Figure 9. Standalone OLAP results based on each compression type 1 blade outcome, 5 users, 64 threads.

[Chart: “300GB 5 User Schema 64 Thread Time to Solve – Based on 1 blade for each compression type”. Same layout as figure 8. Callout: another 180% performance improvement from a 1-blade configuration to a 4-blade configuration.]


And finally, for the standalone results, let’s look at the 5 user, 128 thread result in figure 10. The difference in performance is more pronounced here, as the scaling of the server becomes more of a requirement because we are consuming and needing more processor performance.

Figure 10. Standalone OLAP results based on each compression type 1 blade outcome, 5 users, 128 threads.

[Chart: “300GB 5 User Schema 128 Thread Time to Solve – Based on 1 blade for each compression type”. Same layout as figure 8, with the Y-axis extending to 325%. Callout: as the requirement for CPU increases, the delta between the 1-blade and the 4-blade configuration widens.]


OLTP results while running OLAP concurrently
Now let’s look at what happens when we run both workloads concurrently.

The next graph, figure 11, is a head-to-head comparison that demonstrates the impact to OLTP when running an OLAP workload at the same time. In this graph, we normalize the standalone OLTP result for each configuration, at each connection count, to 100%. The throughput percentages show what happens when we run the OLTP workload at the same time as an OLAP workload of a single user running with 32 threads.

For example, if a 1-blade, 50 connection standalone OLTP configuration achieves 100 transactions per minute, we set that to 100%. Then, when the OLAP workload is added and the OLTP throughput drops to 55 transactions per minute, that is shown in the graph as 55%, compared to the standalone configuration.

Figure 11. OLTP results when running with an OLAP workload simultaneously.

As expected there is an impact associated with introducing an additional workload. The highest impacts were due to the amount of available CPU. With the 1-blade configuration, the available CPU was almost consumed with 50 connections, whereas this did not happen with the 2-blade configuration until 200 connections. As the number of connections is scaled, that impact is mitigated on 1-blade and 2-blade configurations. This is due to the fact that we were not consuming all of the available processing power with the lower connection counts on the 2-blade and 4-blade configurations. In fact, even at the lower connection level, the 2 blade and the 4 blade configuration handled the additional load well.

[Chart: percentage transactional throughput versus number of Oracle connections (50-300) for 1-, 2- and 4-blade configurations, each running concurrently with OLAP. Callout: in all cases, the 4-blade configuration running simultaneously with OLAP delivered at least 80% of the same workload run standalone.]


The next question to answer is: what happens if I want to run an OLTP environment simultaneously with an OLAP environment without the OLTP environment incurring an impact, or if I want to scale my OLTP environment to provide additional throughput? The following graph, figure 12, shows those results. Here we compare the 1-, 2- and 4-blade results with the 1-blade standalone result at each connection count. For example, if a 1-blade standalone OLTP environment running with 50 connections produced 100 transactions, that would be set to 100%. If a 1-blade, 50-connection OLTP environment running at the same time as the OLAP workload produced 75 transactions, that would be set to 75%. If we then add a blade to make it a 2-blade configuration, run the same 50-connection test concurrently with the OLAP workload, and that produces 125 transactions, we set that result to 125%.

We make that same type of comparison at each connection count: 50, 100, 150, and so on.
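As a minimal illustration of this normalization (a sketch only; the transaction counts below are placeholders, not measured results):

awk 'BEGIN {
  baseline = 100                      # 1-blade standalone throughput (assumed tpm)
  n = split("75 125 210", tpm, " ")   # 1-, 2-, 4-blade throughput with OLAP running (assumed tpm)
  split("1 2 4", blades, " ")
  for (i = 1; i <= n; i++)
    printf "%s-blade + OLAP: %.0f%%\n", blades[i], 100 * tpm[i] / baseline
}'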

Figure 12. OLTP results when running with an OLAP workload simultaneously, based on the 1-blade result for each connection count.

[Chart: percentage transactional throughput versus number of Oracle connections (50-300) for 1-, 2- and 4-blade configurations running concurrently with OLAP, relative to the 1-blade result at each connection count. Callout: up to a 110% performance improvement in the number of OLTP transactions with the 4-blade system when running OLAP concurrently.]


OLAP results while running OLTP concurrently

Now let’s look at the impact of running the OLAP workload while introducing a concurrent OLTP load.

The next graph, figure 13, compares a standalone OLAP workload against the same OLAP workload running simultaneously with a 50-connection OLTP workload. In each case, the standalone OLAP result for each compression type is set to 100%; that baseline is not plotted here, but corresponds to the results in figure 8. The 1-, 2- and 4-blade OLAP results are relative to the standalone OLAP outcome for the same blade count. The number of concurrent OLAP users is 5, and each user performed their query across 32 threads.

Figure 13. OLAP results when running with an OLTP workload simultaneously, based on the standalone OLAP result for the same blade count.

[Chart: relative performance by compression type for 1-, 2- and 4-blade configurations, each running OLAP (5 users, 32 threads) concurrently with OLTP, relative to the standalone OLAP result for the same blade count. Callout: as CPU becomes more critical, the 4-blade configuration delivers over 90% of the throughput of the same OLAP workload run alone.]


The following graph, figure 14, shows what happens when we scale the environment. The 100% comparison point in this graph is the 1-blade standalone result for each compression type. The number of concurrent OLAP users is 5, and each user performed their query across 32 threads.

Figure 14. OLAP results when running with an OLTP workload simultaneously, based on the standalone OLAP result on a 1 blade configuration.

[Chart: relative performance by compression type for 1-, 2- and 4-blade configurations running OLAP (5 users, 32 threads) concurrently with OLTP, relative to the 1-blade standalone result. Callout: a 149% performance improvement going from 1 blade to 4 blades.]


The next graph, figure 15, shows the result for the 5-user, 64-thread run executed concurrently with OLTP. Here each blade configuration is compared against a 100% value given to the same OLAP run without anything else running concurrently. The 100% result is not plotted on this graph, but is contained in the graph in figure 9.

Figure 15. OLAP results when running with an OLTP workload simultaneously, based on the standalone OLAP result for the same blade count.

[Chart: relative performance by compression type, OLAP (5 users, 64 threads) running concurrently with OLTP versus standalone, for the same blade count. Callout: every 4-blade result is in excess of 90% of the standalone result.]


The following graph, figure 16, shows what happens when we scale the environment. It is the same data as the previous graph, but here everything is compared to the 1-blade configuration for each compression type. The number of concurrent OLAP users is 5, and each user performed their query across 64 threads.

Figure 16. OLAP results when running with an OLTP workload simultaneously, based on the standalone OLAP result on a 1 blade configuration.

[Chart: relative performance by compression type, OLAP (5 users, 64 threads) running concurrently with OLTP, relative to the 1-blade standalone result. Callout: a 160% performance improvement going from 1 blade to 4 blades.]


The next graph, figure 17, depicts the scaling of the 5-user OLAP workload, with each user running with 128 concurrent threads. As with the other comparison graphs, the 100% baseline is set by the same OLAP workload running standalone; here too, the 100% result is not plotted on this graph, but is contained in the graph in figure 10.

Figure 17. OLAP results when running with an OLTP workload simultaneously, based on the standalone OLAP result for the same blade count.

[Chart: relative performance by compression type, OLAP (5 users, 128 threads) running concurrently with OLTP versus standalone, for the same blade count. Callout: average performance for the 4-blade configuration was 90% of the same OLAP workload run standalone.]


Finally, the next graph, figure 18, presents the same information, but with all percentages relative to the 1-blade configuration running the OLAP workload standalone. In this graph, there are 5 users running with 128 concurrent threads.

Figure 18. OLAP results when running with an OLTP workload simultaneously, based on the standalone OLAP result on a 1-blade configuration.

[Chart: relative performance by compression type, OLAP (5 users, 128 threads) running concurrently with OLTP, relative to the 1-blade standalone result. Callout: a 185% performance improvement going from a 1-blade to a 4-blade configuration.]

Capacity and sizing

The HPE Integrity Superdome X can scale from 1 to 8 blades, in supported configurations of 1, 2, 3, 4, 6 or 8 blades. Each blade can add up to 3TB of memory, for a total memory footprint of up to 24TB.

The following table, table 2, shows the amount of storage space taken when storing a 300GB OLAP schema on disk. It also shows the amount of memory space taken for the various compression policies when those tables are placed in-memory.

Table 2. Oracle table and in-memory sizes for the 300GB schema

Table Name | On-Disk Size GB | Query Low Size GB | Compress Factor | Query High Size GB | Compress Factor | Capacity Low Size GB | Compress Factor | Capacity High Size GB | Compress Factor
LINEITEM | 241.47 | 124.55 | 1.94 | 99.50 | 2.43 | 70.63 | 3.42 | 50.51 | 4.78
ORDERS | 53.03 | 43.44 | 1.22 | 36.05 | 1.47 | 18.86 | 2.81 | 11.93 | 4.45
PARTSUPP | 38.81 | 34.96 | 1.11 | 33.55 | 1.16 | 14.32 | 2.71 | 7.96 | 4.87
PART | 8.88 | 3.74 | 2.38 | 3.21 | 2.76 | 1.78 | 5.00 | 1.32 | 6.70
CUSTOMER | 7.19 | 7.32 | 0.98 | 6.28 | 1.14 | 3.19 | 2.26 | 2.20 | 3.26
SUPPLIER | 0.45 | 0.47 | 0.94 | 0.41 | 1.08 | 0.21 | 2.16 | 0.14 | 3.14
Total | 349.83 | 214.48 | 1.63 | 179.00 | 1.95 | 108.99 | 3.21 | 74.06 | 4.72



The next table, table 3, shows the amount of storage space taken when storing a 3TB OLAP schema on disk and compares that to the amount of memory taken when that same set of tables is loaded into memory.

Table 3. Oracle table and in-memory sizes for the 3TB schema

Table Name | On-Disk Size GB | Query Low Size GB | Compress Factor | Query High Size GB | Compress Factor | Capacity Low Size GB | Compress Factor | Capacity High Size GB | Compress Factor
LINEITEM | 2,444.36 | 1,292.25 | 1.89 | 1,020.59 | 2.40 | 715.13 | 3.42 | 511.40 | 4.78
ORDERS | 536.07 | 469.94 | 1.14 | 387.84 | 1.38 | 190.75 | 2.81 | 126.23 | 4.25
PARTSUPP | 391.05 | 351.99 | 1.11 | 338.62 | 1.15 | 145.63 | 2.69 | 80.35 | 4.87
PART | 89.00 | 37.87 | 2.35 | 32.33 | 2.75 | 18.77 | 4.74 | 12.85 | 6.93
CUSTOMER | 72.00 | 73.02 | 0.99 | 63.15 | 1.14 | 32.70 | 2.20 | 21.66 | 3.32
SUPPLIER | 4.38 | 4.76 | 0.92 | 4.12 | 1.06 | 2.08 | 2.10 | 1.42 | 3.09
Total | 3,536.86 | 2,229.83 | 1.59 | 1,846.65 | 1.92 | 1,105.06 | 3.20 | 753.91 | 4.69

As you can see from the above tables, if each blade were installed with 3TB of memory, a 4-blade configuration could accommodate up to five 3TB schemas using the compress for query low setting. The same configuration would allow for up to sixteen 3TB schemas using the compress for capacity high scheme.
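A quick back-of-the-envelope check of that claim, using the in-memory totals from Table 3 (a sketch only; 3TB of memory per blade is the maximum configuration, not the configuration tested):

awk 'BEGIN {
  mem_gb       = 4 * 3 * 1024   # 4 blades x 3TB of memory each, in GB
  query_low_gb = 2229.83        # one 3TB schema in memory, Query Low (Table 3)
  cap_high_gb  = 753.91         # one 3TB schema in memory, Capacity High (Table 3)
  printf "Query Low:     %d schemas\n", int(mem_gb / query_low_gb)   # ~5
  printf "Capacity High: %d schemas\n", int(mem_gb / cap_high_gb)    # ~16
}'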

In addition, Oracle In-Memory allows loading entire tablespaces, tables, or individual columns within a table. As the scaling numbers above show, the HPE Superdome X allows a very large schema to be loaded into memory.
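As a hedged illustration only (these statements are not taken from the test scripts, and the schema owner tpch is an assumption), a table is placed in the in-memory column store with one of the four compression policies compared above, and its population can then be checked:

sqlplus / as sysdba <<'EOF'
-- mark the table for the column store with the chosen compression policy
ALTER TABLE tpch.lineitem INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;
-- confirm population progress and the resulting in-memory footprint
SELECT segment_name, inmemory_size, bytes_not_populated FROM v$im_segments;
EXIT;
EOF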

The ability to utilize from 1 to 8 blades in a single nPar allows the HPE Superdome X to load truly massive databases into memory. Additionally, the ability to run OLTP applications side-by-side with OLAP applications allows for the elimination of ETL and removes the latency associated with transferring data from OLTP databases to OLAP databases.

Analysis and recommendations

As represented in the graphs, the HPE Superdome X scales in a close to linear fashion. It scaled to 150% when going from a 1-blade configuration to a 4-blade configuration during the OLTP tests. During the OLAP tests, the HPE Superdome X scaled to almost 200% when going from a 1-blade to a 4-blade configuration in the 5-user, 128-thread tests. Additionally, when combining OLTP and OLAP workloads on the same configuration, the impact to each respective workload is in the 10-20% range, a strong indication that the HPE Superdome X server can absorb the additional load.

It is beyond the scope of this paper to estimate the amount of memory required to read a specific set of tablespaces, tables or columns into memory. However, as demonstrated by the above tables, the on-disk space required was 1.59 times the memory needed to lift the tables into memory when using Query Low compression, and 4.69 times when using Capacity High compression. In the case of the 3TB database, the on-disk space consumed was approximately 3.6TB, while the in-memory footprint varied from a high of 2.23TB using Query Low compression to a low of 754GB using the Capacity High compression algorithm. This means that a schema quite a bit larger than 12TB could be lifted into 12TB of memory, depending on your compression and performance tradeoffs.

Likewise, when the on-disk storage space consumed was 358GB, the in-memory footprint ranged from a high of 214GB using Query Low to a low of just 74GB using Capacity High.

This means that a schema could be up to 4.72 times the size of the intended memory target using this sample data.
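To put “quite a bit larger” into rough numbers (a sketch using the measured compression factors above, not an additional test result):

awk 'BEGIN {
  mem_tb = 12                                                        # 4 blades x 3TB of memory
  printf "Query Low     (1.59x): ~%.0f TB on disk\n", mem_tb * 1.59  # ~19TB
  printf "Capacity High (4.69x): ~%.0f TB on disk\n", mem_tb * 4.69  # ~56TB
}'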

Implementing a proof-of-concept

As you can see from the performance results, differences between implementations and data access patterns can cause a given environment’s performance to vary from what was tested as part of this paper. As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.


HPE Database Performance Profiler (DPP)

HPE also offers an Oracle Performance and Cost Assessment to qualified customers, using the HPE Database Performance Profiler (DPP). DPP uses performance data collected on existing Oracle databases to identify performance bottlenecks that are inflating software license costs. The result of this assessment includes specific recommendations as well as a summary that estimates the potential benefits associated with the proposal. Please contact your HPE Account Manager for full details.

DPP collects utilization and inventory data to allow HPE to understand the database workload and provide fact-based recommendations addressing license consolidation and/or reduction, complexity, availability, consolidated server and storage footprint, performance, and TCO/ROI. Specific data indicates infrastructure changes with the most impact on identified performance issues.

Summary

As shown in the graphs, the HPE Superdome X is a capable platform on which to deploy an integrated Oracle OLTP/OLAP environment. It scales in an almost linear fashion when moving from 1-blade to 2-blade to 4-blade configurations. It also supports very large memory footprints, which allows businesses to move more disk-based tablespaces, tables and columns into memory and enhances its attractiveness for Oracle In-Memory environments.

The HPE 3PAR StoreServ 8450 All Flash array is also a capable platform on which to deploy Oracle databases. During testing, the workloads rarely waited on I/O operations to complete, which in turn kept the processors busier than would otherwise have been possible.


Appendix A: Bill of materials

Note
Part numbers are as of the time of testing and are subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details. hpe.com/us/en/services/consulting.html

Table 4a. Bill of materials for the HPE Superdome X server

Qty Part number Description

Rack Infrastructure

1 H6J66A HPE 42U 600x1075mm Advanced Shock Rack

1 H6J66A 001 HPE Factory Express Base Racking Service

HPE Superdome X

1 AT147B HPE Superdome X Base Enclosure

1 AT147B 001 HPE Superdome X Local Power

1 AT152A HPE Superdome X Advanced Par LTU

2 787635-B21 HPE 6127XLG Blade Switch Opt Kit

4 C8S47A Brocade 16Gb/28c PP+ Embedded SAN Switch

8 H7B45A HPE BL920s Gen9 2.8GHz 20c Svr Blade

96 H7B83A HPE DDR4 256GB (4x64GB) Mem Module Kit

8 700763-B21 HPE FlexFabric 20Gb 2P 650FLB Adptr

16 710608-B21 HPE QMH2672 16Gb FC HBA

8 BD505A HPE iLO Adv incl 3yr TSU 1-Svr Lic

2 H8B55A HPE Mtrd Swtchd 14.4kVA/CS8365C/NA/J PDU

1 H6J85A HPE Rack Hardware Kit

1 BW906A HPE 42U 1075mm Side Panel Kit

48 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl

48 QK724A HPE B-series 16Gb SFP+SW XCVR

Table 4b. Bill of materials for the first HPE 3PAR StoreServ 8450 All Flash array

Qty Part number Description

Rack Infrastructure

1 BW904A HPE 42U 600x1075mm Enterprise Shock Rack

1 BW904A 001 HPE Factory Express Base Racking Service

HPE 3PAR StoreServ 8450 All Flash array

1 H6Z25A HPE 3PAR StoreServ 8450 4N Stor Cnt Base

4 H6Z00A HPE 3PAR 8000 4-pt 16Gb FC Adapter

16 N9Y06A HPE 3PAR 8000 400GB SFF SSD

1 L7C17A HPE 3PAR 8450 OS Suite Base LTU

96 L7C18A HPE 3PAR 8450 OS Suite Drive LTU

2 QR480B HPE SN6000B 16Gb 48/48 FC Switch

96 QK724A HPE B-series 16Gb SFP+SW XCVR


10 H6Z26A HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl

80 N9Y06A HPE 3PAR 8000 400GB SFF SSD

1 K2R28A HPE 3PAR StoreServ SPS Service Processor

1 TK808A HPE Rack Front Door Cover Kit

80 QK735A HPE Premier Flex LC/LC OM4 2f 15m Cbl

16 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl

4 H5M58A HPE Basic 4.9kVA/L6-30P/C13/NA/J PDU

1 BW906A HPE 42U 1075mm Side Panel Kit

1 BD362A HPE 3PAR StoreServ Mgmt/Core SW Media

1 BD363A HPE 3PAR OS Suite Latest Media

1 BD365A HPE 3PAR SP SW Latest Media

1 TC472A HPE Intelligent Inft Analyzer SW v2 LTU

Table 4c. Bill of materials for the second HPE 3PAR StoreServ 8450 All Flash array

Qty Part number Description

Rack Infrastructure

1 BW904A HPE 42U 600x1075mm Enterprise Shock Rack

1 BW904A 001 HPE Factory Express Base Racking Service

HPE 3PAR 8450 StoreServ All Flash array

1 H6Z25A HPE 3PAR StoreServ 8450 4N Stor Cnt Base

4 H6Z00A HPE 3PAR 8000 4-pt 16Gb FC Adapter

16 N9Y06A HPE 3PAR 8000 400GB SFF SSD

1 L7C17A HPE 3PAR 8450 OS Suite Base LTU

80 L7C18A HPE 3PAR 8450 OS Suite Drive LTU

2 QR480B HPE SN6000B 16Gb 48/48 FC Switch

96 QK724A HPE B-series 16Gb SFP+SW XCVR

8 H6Z26A HPE 3PAR 8000 SFF(2.5in) SAS Drive Encl

64 N9Y06A HPE 3PAR 8000 400GB SFF SSD

1 K2R28A HPE 3PAR StoreServ SPS Service Processor

1 TK808A HPE Rack Front Door Cover Kit

80 QK735A HPE Premier Flex LC/LC OM4 2f 15m Cbl

16 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl

4 H5M58A HPE Basic 4.9kVA/L6-30P/C13/NA/J PDU

1 BW906A HPE 42U 1075mm Side Panel Kit

1 BD362A HPE 3PAR StoreServ Mgmt/Core SW Media

1 BD363A HPE 3PAR OS Suite Latest Media

1 BD365A HPE 3PAR SP SW Latest Media

1 TC472A HPE Intelligent Inft Analyzer SW v2 LTU


Appendix B: udev device permission rules

To allow for persistent permissions and Oracle database access across reboots, a udev rules file named /etc/udev/rules.d/12-dm-permission.rules was created to set the required ownership of the Oracle ASM LUNs.

ENV{DM_NAME}=="data01", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data02", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data03", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data04", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data05", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data06", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data07", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="data08", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta01", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta02", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta03", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta04", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta05", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta06", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta07", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="dta08", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo1", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo2", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo3", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo4", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo5", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo6", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo7", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdo8", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob1", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob2", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob3", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob4", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob5", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob6", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob7", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
ENV{DM_NAME}=="rdob8", OWNER:="oracle", GROUP:="oinstall", MODE:="660"
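A minimal sketch of how such a rules file is typically activated and checked after creation (standard udev and coreutils commands; not part of the original procedure):

udevadm control --reload-rules
udevadm trigger --type=devices --action=change
# dereference the /dev/mapper symlinks and confirm oracle:oinstall ownership with mode 660
ls -lL /dev/mapper/data01 /dev/mapper/dta01 /dev/mapper/rdo1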

Appendix C: udev rules

A udev rules file was created to set the rotational latency, the I/O scheduler and rq_affinity. The name of this file was /etc/udev/rules.d/10-3par.rules.

ACTION=="add|change", KERNEL=="dm-*", PROGRAM="/bin/bash -c 'cat /sys/block/$name/slaves/*/device/vendor | grep 3PARdata'", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="noop", ATTR{queue/rq_affinity}="2", ATTR{queue/nomerges}="1", ATTR{queue/nr_requests}="128"
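One way to verify that the 3PAR multipath devices picked up these queue settings (a hedged example, not part of the paper’s procedure):

for d in /sys/block/dm-*; do
  # only report devices whose underlying paths are 3PARdata LUNs
  grep -qs 3PARdata "$d"/slaves/*/device/vendor || continue
  echo "$(basename "$d"): scheduler=$(cat "$d/queue/scheduler") rotational=$(cat "$d/queue/rotational")"
done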


Appendix D: /etc/sysctl.conf

To support the Oracle database, the following /etc/sysctl.conf file was used to set OS kernel parameters.

# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
#kernel.shmmni = 4096
kernel.shmmni = 16384
kernel.shmall = 2684354560
kernel.shmmax = 10995116277760
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 4194304
net.ipv4.ip_local_port_range = 9000 65500
vm.nr_hugepages = 1101004
vm.hugetlb_shm_group = 54322
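These settings would typically be applied and checked as shown below (a sketch; note that vm.nr_hugepages = 1101004 corresponds to roughly 2.1TB of 2MB huge pages):

sysctl -p /etc/sysctl.conf
# confirm the huge page pool before starting the Oracle instances
grep -i hugepages /proc/meminfo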

Appendix E1: init.ora

The following init.ora file was used when starting the first instance, which included all in-memory processing.

sdx.__data_transfer_cache_size=0
sdx.__db_cache_size=67914170368
sdx.__java_pool_size=1879048192
sdx.__large_pool_size=1879048192
sdx.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
sdx.__sga_target=650G
sdx.__shared_io_pool_size=536870912
sdx.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/sdx/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='+DATA/SDX/CONTROLFILE/current.261.950786907'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='sdx'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sdxXDB)'
*.open_cursors=3000
*.processes=3000
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=650G
*.undo_tablespace='UNDOTBS1'
result_cache_max_size=794304K
_high_priority_processes='VKTM*|LG*'
lock_sga=TRUE
use_large_pages='ONLY'
_max_outstanding_log_writes=4
LOG_BUFFER=1G
inmemory_size=450G
HASH_AREA_SIZE=67108864
sdx.__pga_aggregate_target=51546M
*.pga_aggregate_target=51546M
parallel_servers_target=1000
sdx.__shared_pool_size=8589934592
_fast_cursor_reexecute=true

Appendix E2: init.ora

The following init.ora file was used when starting the second instance, which was for OLTP processing when adding blades 3 and 4 to the configuration.

sdx2.__data_transfer_cache_size=0
sdx2.__db_cache_size=67914170368
sdx2.__java_pool_size=1879048192
sdx2.__large_pool_size=1879048192
sdx2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
sdx2.__sga_target=650G
sdx2.__shared_io_pool_size=536870912
sdx2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/sdx2/adump'
*.audit_trail='db'
*.compatible='12.2.0'
*.control_files='+DATA/SDX2/CONTROLFILE/current.272.949851623','+REDO/SDX2/CONTROLFILE/current.259.949851623'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='sdx2'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=sdx2XDB)'
*.open_cursors=3000
*.processes=3000
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=200G
*.undo_tablespace='UNDOTBS1'
result_cache_max_size=794304K
_high_priority_processes='VKTM*|LG*'
lock_sga=TRUE
use_large_pages='ONLY'
_max_outstanding_log_writes=4
LOG_BUFFER=1G
HASH_AREA_SIZE=67108864
sdx2.__pga_aggregate_target=51546M
*.pga_aggregate_target=51546M
parallel_servers_target=1000
sdx2.__shared_pool_size=8589934592
_fast_cursor_reexecute=true

Appendix F: multipath.conf

The following multipath.conf was used to place names on the multipath device files so they could be more easily identified for use with the Oracle database.

defaults {
    find_multipaths yes
    user_friendly_names yes
}
blacklist {
}
multipaths {
    multipath {
        wwid 360002ac000000000000000180001d940
        alias data01
    }
    multipath {
        wwid 360002ac000000000000000190001d940
        alias data02
    }
    multipath {
        wwid 360002ac0000000000000001a0001d940
        alias data03
    }
    multipath {
        wwid 360002ac0000000000000001b0001d940
        alias data04
    }
    multipath {
        wwid 360002ac0000000000000001c0001d940
        alias data05
    }
    multipath {
        wwid 360002ac0000000000000001d0001d940
        alias data06
    }
    multipath {
        wwid 360002ac0000000000000001e0001d940
        alias data07
    }
    multipath {
        wwid 360002ac0000000000000001f0001d940
        alias data08
    }
    multipath {
        wwid 360002ac0000000000000000a0001d944
        alias dta01
    }
    multipath {
        wwid 360002ac0000000000000000b0001d944
        alias dta02
    }
    multipath {
        wwid 360002ac0000000000000000c0001d944
        alias dta03
    }
    multipath {
        wwid 360002ac0000000000000000d0001d944
        alias dta04
    }
    multipath {
        wwid 360002ac0000000000000000e0001d944
        alias dta05
    }
    multipath {
        wwid 360002ac0000000000000000f0001d944
        alias dta06
    }
    multipath {
        wwid 360002ac000000000000000100001d944
        alias dta07
    }
    multipath {
        wwid 360002ac000000000000000110001d944
        alias dta08
    }
    multipath {
        wwid 360002ac000000000000000210001d944
        alias fs_new1
    }
    multipath {
        wwid 360002ac000000000000000200001d944
        alias fs_new2
    }
    multipath {
        wwid 360002ac0000000000000001f0001d944
        alias fs_new3
    }
    multipath {
        wwid 360002ac0000000000000001e0001d944
        alias fs_new4
    }
    multipath {
        wwid 360002ac0000000000000001d0001d944
        alias fs_new5
    }
    multipath {
        wwid 360002ac0000000000000001c0001d944
        alias fs_new6
    }
    multipath {
        wwid 360002ac0000000000000001b0001d944
        alias fs_new7
    }
    multipath {
        wwid 360002ac0000000000000001a0001d944
        alias fs_new8
    }
    multipath {
        wwid 360002ac000000000000000280001d940
        alias rdo1
    }
    multipath {
        wwid 360002ac000000000000000290001d940
        alias rdo2
    }
    multipath {
        wwid 360002ac0000000000000002a0001d940
        alias rdo3
    }
    multipath {
        wwid 360002ac0000000000000002b0001d940
        alias rdo4
    }
    multipath {
        wwid 360002ac0000000000000002c0001d940
        alias rdo5
    }
    multipath {
        wwid 360002ac0000000000000002d0001d940
        alias rdo6
    }
    multipath {
        wwid 360002ac0000000000000002e0001d940
        alias rdo7
    }
    multipath {
        wwid 360002ac0000000000000002f0001d940
        alias rdo8
    }
    multipath {
        wwid 360002ac0000000000000002c0001d944
        alias rdob1
    }
    multipath {
        wwid 360002ac0000000000000002b0001d944
        alias rdob2
    }
    multipath {
        wwid 360002ac0000000000000002a0001d944
        alias rdob3
    }
    multipath {
        wwid 360002ac0000000000000002d0001d944
        alias rdob4
    }
    multipath {
        wwid 360002ac0000000000000002e0001d944
        alias rdob5
    }
    multipath {
        wwid 360002ac0000000000000002f0001d944
        alias rdob6
    }
    multipath {
        wwid 360002ac000000000000000300001d944
        alias rdob7
    }
    multipath {
        wwid 360002ac000000000000000310001d944
        alias rdob8
    }
}
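A hedged sketch of how the maps are typically reloaded and the aliases verified after editing multipath.conf (standard device-mapper-multipath commands, not taken from the paper):

multipath -r                                    # reload the multipath maps
multipath -ll | grep -c 3PARdata                # count the 3PAR paths discovered
ls /dev/mapper | grep -E '^(data|dta|rdo|rdob|fs_new)' | sort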

Appendix G: ATX script – used for all OLAP and 2-blade OLTP

The following script was used to start the first instance. This instance was used for the OLAP testing for all blade configurations and it was used for OLTP testing on the 2-blade configuration.

#!/bin/bash
hpe-atx -p rr_flat -n 0-3 -l listener.log lsnrctl start sdx
hpe-atx -p rr_tree -n 0-3 -l atx.log srvctl start database -db sdx
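One simple, hedged way to see how the resulting Oracle processes spread across the processors after launch (not part of the paper’s procedure; the process-name pattern assumes the sdx instance):

# list the processor each Oracle background process last ran on
ps -eo pid,psr,comm | grep 'ora_.*_sdx'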

Appendix H: ATX script – used for 4-blade OLTP only

The following script was used to start the second instance. This instance was used for the OLTP testing when adding blades 3 and 4 to the configuration.

#!/bin/bash
hpe-atx -p rr_flat -n 4-7 -l listener2.log lsnrctl start sdx2
hpe-atx -p rr_tree -n 4-7 -l atx2.log srvctl start database -db sdx2



© Copyright 2017-2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

a00026056enw, April 2018, Rev. 1

Resources and additional links

HPE Superdome X Servers hpe.com/servers/SuperdomeX

HPE 3PAR StoreServ hpe.com/storage/3par

HPE 3PAR Thin Technologies http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA3-8987ENW

HPE 3PAR Peer Persistence Software http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=4AA4-3533ENW

HPE Reference Architectures hpe.com/info/ra

HPE Servers hpe.com/servers

HPE Storage hpe.com/storage

HPE Networking hpe.com/networking

HPE Technology Consulting Services hpe.com/us/en/services/consulting.html

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.