
HPE Reference Architecture for Oracle 12c RAC Scaling on HPE ProLiant DL380 Gen9 and HPE 3PAR StoreServ 8450 All Flash

Technical white paper


Contents
Executive summary
Introduction
Solution overview
Solution components
    HPE ProLiant DL380 Gen9 server
    HPE 3PAR StoreServ 8450 All Flash array
    Oracle Database
    HPE Insight Cluster Management Utility
Best practices and configuration guidance for the Oracle RAC solution
Capacity and sizing
    Workload description
    Workload results
    Analysis and recommendations
Summary
    Implementing a proof-of-concept
Appendix A: Bill of materials
Appendix B: Oracle configuration parameters
Appendix C: Linux kernel parameters
Appendix D: Multipath configuration
Appendix E: HPE 3PAR StoreServ 8450 udev device permission rules
Appendix F: udev rules
Resources and additional links


Executive summary
In today's IT organizations, the demands of rapid database implementations continue to escalate. Faster transaction processing speeds, capacity-based scaling, increased flexibility, high availability and business continuity are required to meet the needs of the 24/7 business.

This Reference Architecture is intended to provide customers with the expected performance implications associated with deploying Oracle Real Application Clusters (RAC) 12c on HPE ProLiant DL380 Gen9 servers and HPE 3PAR StoreServ 8450 All Flash storage, and to demonstrate scaling from one to four server nodes while providing a highly available Oracle database. This white paper will help customers plan to provide the appropriate level of performance and continue to meet the SLAs that are a key requirement for their business enterprises.

Testing performed as part of this Reference Architecture establishes a baseline level of performance based on a Single-Instance version of the Oracle database. That baseline was then compared to 1, 2, 3 and 4-node Oracle RAC deployments. While Oracle RAC One Node performed at a level somewhat less than that of the Single-Instance deployment, the testing exhibited a near linear scaling effect when adding nodes to the cluster.

Target audience: This Hewlett Packard Enterprise Reference Architecture (RA) is designed for IT professionals who use, program, manage, or administer large databases that require high performance. Specifically, this information is intended for those who evaluate, recommend, or design new and existing IT high performance architectures. Additionally, CIOs may be interested in this document as an aid to guide their organizations to determine when to implement Oracle RAC and the performance characteristics associated with that implementation.

This RA describes testing completed in August 2016.

Document purpose: The purpose of this document is to describe a Reference Architecture that organizations can utilize to plan for their deployment of Oracle RAC whether they are converting from an Oracle Single-Instance Database design model or adding servers to their existing Oracle RAC databases.

Introduction
Information Technology departments are under pressure to add value to the business, improve existing infrastructure, enable growth and reduce overhead. As Oracle database footprints grow, IT departments have to decide whether to grow their database infrastructure vertically onto a larger Single-Instance server, or horizontally across a number of cooperating servers in a clustered Oracle RAC environment.

There are costs and benefits to both deployment scenarios. The benefits associated with Oracle RAC deployments are:

• High Availability – Should a server that is part of the RAC environment fail, there is no interruption of database services. It should be noted, however, that while the cluster will continue to operate and deliver database services to the organization, it will do so in a degraded fashion until the failed member can be returned to the cluster.

• Rolling patching and upgrades – A server can be taken offline seamlessly so that it can be patched and then returned to the cluster, all while database services continue to be delivered to the organization.

• Growth – Oracle RAC clusters can be expanded on a node-by-node basis. A cluster that starts with 2 servers can be expanded to 3 servers and later to 4 or more servers. Each additional server brings additional throughput to the cluster. The total number of nodes that can be added to a cluster while still increasing performance is instance dependent. One organization may have an instance design that yields no additional performance beyond two nodes, while another may have an instance that benefits from the addition of a fifth, sixth, or more nodes. Each organization should test its specific deployment to ensure that the addition of a RAC node results in a performance improvement.

While all of these benefits are available with the deployment of Oracle RAC, they come with the cost of additional licenses. In addition to licensing the servers for the Oracle database, you must also license them for RAC. Each license is based on a per-core license model when deploying Oracle Enterprise Edition.
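As a rough, hedged illustration only (license counts must be verified against Oracle's current licensing policy and core factor table): the four-node configuration tested here has 2 processors x 10 cores = 20 cores per server, or 80 cores across the cluster. Assuming the published 0.5 core factor for Intel Xeon processors, that works out to 80 x 0.5 = 40 processor licenses for Oracle Database Enterprise Edition plus 40 processor licenses for the RAC option.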

Solution overview
This Reference Architecture demonstrates the performance differences when moving from an Oracle Single-Instance deployment to a RAC deployment. Oracle RAC One Node is a high availability option that customers can use to minimize downtime while saving money on Oracle licenses. With RAC One Node, a secondary node is part of the cluster but does not run the Oracle database; should the node running the Oracle database become incapacitated, the Oracle instance fails over to the secondary node.


This RA then compares an Oracle RAC One Node configuration against a configuration running 2, 3 and 4 Oracle RAC nodes to analyze the level of scaling that may be expected when scaling out to larger and larger horizontal RAC deployments.

This provides organizations with an example of how one instance scales using RAC and serves as a starting point for evaluating whether to deploy an Oracle RAC solution.

Figure 1 is a depiction of how each node was deployed during our test scenarios.

The baseline test configuration was a Single-Instance Oracle database on a single server. Testing then progressed to an Oracle RAC One Node solution on the same single server. After that we added a second server and ran a two node Oracle RAC instance. This was followed by a three node and a four node configuration.

[Figure 1 content: four servers, labeled ORA RAC 01 through ORA RAC 04, attached to the shared storage array; the panels illustrate the Single Instance/RAC One Node, two node RAC, three node RAC, and four node RAC test configurations.]

Figure 1. Logical diagram showing four Oracle RAC Nodes


Figure 2 provides a more detailed depiction of how a two-node RAC solution is configured. It is important to note that there are two network configurations. The Public Network is how users connect to the environment. The Private Interconnect is utilized as the heartbeat network for the servers in the RAC cluster; this private heartbeat network should be accessible only by the servers that are cooperating in this specific RAC cluster instance.
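As a minimal sketch of how these two networks are typically named and resolved on each node, the following /etc/hosts fragment uses illustrative host names and addresses (not those used in this testing); in practice the public, virtual IP (VIP), and SCAN names are usually registered in DNS.

# Sketch only: names and addresses below are assumptions for illustration.
cat >> /etc/hosts <<'EOF'
# Public network - client connections arrive here
10.1.1.11      dl380rac01
10.1.1.12      dl380rac02
# Private interconnect - cluster heartbeat, reachable only by the RAC nodes
192.168.10.11  dl380rac01-priv
192.168.10.12  dl380rac02-priv
# Virtual IPs used by the node listeners for fast client failover
10.1.1.21      dl380rac01-vip
10.1.1.22      dl380rac02-vip
EOF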

[Figure 2 content: Oracle RAC Node 1 and Oracle RAC Node 2, each a physical machine with two HBAs and two network ports (ens3f0/ens3f1); Fibre Channel connections to storage; a Private Network between the nodes and a Public Network through network switches to a driver server.]

Figure 2. Logical architectural depiction of how a two-node Oracle RAC cluster communicates.


Figure 3 provides a detailed, logical layout of how all servers were connected via the Fibre Channel network to the storage, along with how the storage was allocated and provisioned.

[Figure 3 content:
HPE 3PAR StoreServ 8450 AFA - 4-node, 384GiB of cache, 8 x expansion shelves, 24 x 16Gb FC ports (run at 8Gb), 80 x 480GB MLC SSD. LUN layout: DATA - 8 x 512GB RAID-1 LUNs; REDOa - 8 x 128GB RAID-5 LUNs; REDOb - 8 x 128GB RAID-5 LUNs; VOTING - 1 x 16GB RAID-5 LUN; OCR - 1 x 16GB RAID-5 LUN. All LUNs exported to all servers.
4 x HPE ProLiant DL380 Gen9 - 2 x E5-2689 v4 10-core 3.1GHz processors, 512GB memory, 2 x 480GB 12Gb RI-3 SSD (OS), 2 x 8Gb dual-port PCIe Fibre Channel HBAs, 4 x 10Gb 2-port Ethernet NICs (only 2 ports were used).
Connectivity - 3PAR StoreServ 8450 controller nodes and expansion drive shelves, HPE 5930 switch, 24 x 16Gb Fibre Channel run at 8Gb.]

Figure 3. Physical layout of the HPE ProLiant DL380 Gen9 servers and the HPE 3PAR StoreServ 8450 All Flash environment.


Solution components
This solution had many configurations, each one founded upon and expanding upon its predecessor.

The complete solution was made up of four HPE ProLiant DL380 Gen9 servers connected to an HPE 3PAR StoreServ 8450 All Flash array (AFA) using a pair of HPE SN6000B SAN switches for redundancy.

Connections between the servers on the private network, as well as client traffic on the public network were routed through an HPE 5930 Top of Rack (TOR) switch.

Each of the HPE ProLiant DL380 Gen9 servers had the following configuration:

• 2 x Intel® Xeon® E5-2689 v4 10-core processors with a clock speed of 3.1GHz

• 512GB of memory

• 2 x HPE 480GB 12G SAS RI-3 SSDs for OS and ORACLE_HOME

• 2 x HPE 82E 8Gb Dual-port Fibre Channel HBA

• 2 x dual port 10GbE network cards

The HPE 3PAR StoreServ 8450 All Flash array had the following configuration:

• 4-nodes

• 384GiB of cache

• 8 x expansion shelves

• 80 x 480GB SAS MLC SFF SSD

• 24 x 16Gb Fibre Channel ports – run at 8Gb.

Software

• Oracle 12c – Version 12.1.0.2.0 for both the Grid component as well as the database component

• Red Hat® Enterprise Linux – Version 7.2

HPE ProLiant DL380 Gen9 server

Figure 4. HPE ProLiant DL380 Gen9 server

The HPE ProLiant DL380 Gen9, the world's best-selling server, has been updated with the latest Intel E5-2600 v4 processors and 2400MHz DDR4 memory. It is designed to adapt to the needs of any environment, from large enterprise to remote office/branch office (ROBO), offering enhanced reliability, serviceability, and continuous availability.


With the HPE ProLiant DL380 Gen9 server, you can deploy a single platform to handle a wide variety of enterprise workloads:

• Storage-centric applications such as the Oracle database: Remove bottlenecks and improve performance

• Data warehousing/analytics: Find the information you need, when you need it, to enable better business decisions

• Virtualization: Consolidate your server footprint by running multiple virtual machines on a single DL380

• Big Data: Manage exponential growth in your data volumes - structured, unstructured, and semi-structured

• Customer relationship management (CRM): Gain a 360-degree view of your data to improve customer satisfaction and loyalty

• Enterprise resource planning (ERP): Trust the DL380 Gen9 to help you run your business in near real time

• Virtual desktop infrastructure (VDI): Deploy remote desktop services to provide your workers with the flexibility they need to work anywhere, at any time, using almost any device

• SAP: Streamline your business processes through consistency and real-time transparency into your end-to-end corporate data.

To support your heterogeneous IT environment, the HPE ProLiant DL380 Gen9 server supports Microsoft® Windows® and Linux® operating systems, as well as VMware® and Citrix® virtualization environments.

The HPE ProLiant DL380 Gen9 delivers industry-leading performance and energy efficiency, delivering faster business results and quicker returns on your investment. The HPE ProLiant DL380 Gen9 posts up to 27% performance gain by using the Intel E5-2600 v4 processors versus the previous version E5-2600 v3 processors [1], and up to 12% performance gain with new 2400MHz DDR4 memory [2]. Power saving features, such as ENERGY STAR® rated systems, and 96 percent efficient Titanium HPE Flexible Slot power supplies, help to drive down energy consumption and costs.

HPE 3PAR StoreServ 8450 All Flash array

Figure 5. HPE 3PAR StoreServ 8450 All Flash array

The business world is moving rapidly, and corporate success is defined by how quickly an enterprise can turn ideas into value. This means that a well-tuned IT environment has never been more important to doing business than it is today. Storage infrastructures must be simpler, smarter, faster, more flexible, and more business aligned than ever to meet the needs of this rapid pace.

[1] E5-2600 v4 processors provide up to 27% performance gain vs. previous-generation E5-2600 v3s. Average performance based on key industry-standard benchmark calculations submitted by OEMs as of 16 March 2016, comparing the 2-socket Intel Xeon processor E5-2600 v3 family to the v4 family. Key industry benchmarks include: SPECint*_rate_base2006, SPECint*_base2006 (Speed), SPECfp*_rate_base2006, SPECfp*_base2006 (Speed), SPECmpiL*_base2007, SPECmpiM*_base2007, SPECompG*_base2012, SPECvirt_sc*2013, VMmark* 2.5 performance (matched pairs), TPC-E*, SPECjEnterprise*2010, Two-tier SAP SD* Windows*/Linux, 1-Node TPC-H* 1TB, TPCx-BB* and SPECjbb*2015 MultiJVM. See intel.com/performance/datacenter for full configuration details.
[2] Based upon the 2400MHz clock speed of DDR4 memory available on HPE ProLiant DL380 Gen9 systems with E5-2600 v4 processors versus the 2133MHz clock speed of DDR3 memory available on HPE ProLiant DL380 Gen9 servers with E5-2600 v3 processors.


With a flexible, flash-optimized architecture, HPE 3PAR StoreServ storage provides the only primary storage architecture you’ll ever need. Regardless of whether you are a midsize enterprise experiencing rapid growth, a large enterprise looking to support IT as a Service (ITaaS), or a global service provider building a hybrid or private cloud, HPE 3PAR StoreServ storage features a modern architecture to support better business outcomes. A range of models brings Tier 1 data services to the midrange, delivering all-flash array performance for the cost of a spinning disk array, and providing mission-critical resiliency and quality of service (QoS).

Flash-optimized architecture featuring a Mesh-Active design
HPE 3PAR StoreServ storage features a Mesh-Active design based on a unique system of controller interconnects. This flash-optimized architecture combines the benefits of monolithic and modular architectures while eliminating price premiums, scaling complexities, and the performance bottlenecks of legacy storage designs.

Unlike legacy Active-Active controller architectures, the HPE 3PAR Mesh-Active design allows each volume to be active on every controller in the system. This delivers robust, load-balanced performance and greater headroom for cost-effective scalability.

A high-speed, full-mesh interconnection joins multiple storage controllers to form a cache-coherent, flash-optimized Mesh-Active cluster that is ideal for low-latency, high-performance, and internode communication. Purpose-built HPE 3PAR Gen5 Thin Express ASICs in each node connect all controllers via dedicated, high-bandwidth, low-latency links and spread I/O workloads widely across the array using direct memory access (DMA) to reduce latency times.

Fine-grained virtualization and system-wide striping
The HPE 3PAR architecture uses three levels of storage virtualization to drive up capacity utilization and accelerate performance. This fine-grained approach to storage virtualization:

• Divides each physical disk into granular allocation units that can be independently assigned and dynamically reassigned to different logical disks to create virtual volumes

• Enables mixed RAID levels on the same physical drive

• Supports flash and other nonvolatile memory types

Logical disks are the virtualization layer in which QoS parameters are applied (availability level, drive media type, RAID level, etc.). This enables sub-LUN tiering and system-wide striping of data, increasing capacity utilization and performance levels. Fine-grained virtualization combined with system-wide striping enables uniform I/O patterns by spreading wear evenly and system-wide. System-wide sparing also helps guard against performance degradation if there is a media failure by enabling faster, “many-to-many” rebuilds.


The following, figure 6, is a pictorial representation of fine-grained virtualization and system-wide striping.

Figure 6. HPE 3PAR StoreServ 8450 data layers


Unique technologies extend your flash investments
HPE innovations around flash not only help bring down the cost of flash media, but HPE 3PAR Gen5 Thin Express ASICs within each node also provide an efficient, silicon-based, zero-detection mechanism that "thins" your storage and extends your flash media investments. These ASICs power inline deduplication for data compaction that removes allocated but unused space without impacting your production workloads, which has the added benefit of extending the life of flash-based media by avoiding unnecessary writes. The unique adaptive read and write feature also serves to extend the life of flash drives by automatically matching host I/O size for reads and writes.

In addition, while other architectures generally reserve entire drives as spares, the HPE 3PAR architecture reserves spare chunklets within each drive. Sparing policies are adjusted automatically and on the fly to avoid using flash for sparing, thus lengthening media lifespan and helping to drive down performance costs. A five-year warranty on all HPE 3PAR StoreServ flash drives protects your storage architecture investment.

Databases
Database performance and availability are so critical that many organizations deploy generous capacity and hire expensive management resources to maintain the required service levels. HPE 3PAR StoreServ storage removes these inefficiencies. For example, with HPE 3PAR Thin Persistence software and the new Oracle Automatic Storage Management (ASM) Storage Reclamation Utility (ASRU), your Oracle databases stay thin by automatically reclaiming stranded database capacity. HPE also offers cost-effective Oracle-aware snapshot technologies.

Quality of Service (QoS)
Quality of service (QoS) is an essential component for delivering modern, highly scalable multi-tenant storage architectures. The use of QoS moves advanced storage systems away from the legacy approach of delivering I/O requests with "best effort" in mind and tackles the problem of "noisy neighbors" by delivering predictable tiered service levels and managing "burst I/O" regardless of other users in a shared system. Mature QoS solutions meet the requirements of controlling service metrics such as throughput, bandwidth, and latency without requiring the system administrator to manually balance physical resources. These capabilities eliminate the last barrier to consolidation by delivering assured QoS levels without having to physically partition resources or maintain discrete storage silos.

HPE 3PAR Priority Optimization software enables service levels for applications and workloads as business requirements dictate, enabling administrators to provision storage performance in a manner similar to provisioning storage capacity. This allows the creation of differing service levels to protect mission-critical applications in enterprise environments by assigning a minimum goal for I/O per second and bandwidth, and by assigning a latency goal so that performance for a specific tenant or application is assured. It is also possible to assign maximum performance limits on workloads with lower service-level requirements to make sure that high-priority applications receive the resources they need to meet service levels.

HPE 3PAR Thin Provisioning
Since its introduction in 2002, HPE 3PAR Thin Provisioning software has been widely considered the gold standard in thin provisioning. This thin provisioning solution leverages the system's dedicate-on-write capabilities to make storage more efficient and more compact, allowing customers to purchase only the storage capacity they actually need and only as they actually need it.

HPE 3PAR Thin Persistence
HPE 3PAR Thin Persistence software is an optional feature that keeps TPVVs and read/write snapshots of TPVVs small by detecting pages of zeros during data transfers and not allocating space for the zeros. This feature works in real time and analyzes the data before it is written to the source TPVV or read/write snapshot of the TPVV. Freed blocks of 16 KB of contiguous space are returned to the source volume, and freed blocks of 128 MB of contiguous space are returned to the CPG for use by other volumes.

HPE 3PAR Peer Persistence
HPE 3PAR Peer Persistence can be deployed to provide customers with a highly available stretched cluster, a cluster that spans two data centers. A stretched RAC cluster with HPE 3PAR Peer Persistence protects services from site disasters and expands storage load balancing to the multi-site data center level. The stretched cluster described can span metropolitan distances (up to 5 ms roundtrip latency for the Fibre Channel [FC] replication link, generally about a 500 km roundtrip) allowing administrators to move storage workloads seamlessly between sites, adapting to changing demand while continuing to meet service-level requirements.

Oracle 12c RAC combines servers to create a resilient client connect and compute infrastructure for Oracle databases. HPE 3PAR Peer Persistence combines HPE 3PAR Storage systems for multi-site level flexibility and availability. HPE Remote Copy synchronous replication between arrays offers storage disaster tolerance. HPE Remote Copy is a component of HPE 3PAR Peer Persistence. Peer Persistence adds the ability to redirect host IO from the primary storage system to the secondary storage system transparently.


Oracle Database
Oracle Database 12c is available in a choice of editions that can scale from small to large single servers and clusters of servers. The available editions are:

• Oracle Database 12c Standard Edition 2: Delivers unprecedented ease-of-use, power and price/performance for database applications on servers that have a maximum capacity of two sockets. [3]

• Oracle Database 12c Enterprise Edition: Available for single or clustered servers with no socket limitation. It provides efficient, reliable and secure data management for mission-critical transactional applications, query-intensive big data warehouses and mixed workloads. [3]

For this paper, Oracle Database 12c Enterprise Edition was installed. While RAC can be deployed with Standard Edition 2, there are limitations with regard to the number of threads that can be active for any one instance, which limit the performance available with that edition. In addition to all of the features available with Oracle Database 12c Standard Edition 2, Oracle Database 12c Enterprise Edition offers the following options:

• Oracle Active Data Guard

• Oracle Advanced Analytics

• Oracle Advanced Compression

• Oracle Advanced Security

• Oracle Database In-Memory

• Oracle Database Vault

• Oracle TimesTen Application-Tier Database Cache

• Oracle Label Security

• Oracle Multitenant

• Oracle On-line Analytical Processing

• Oracle Partitioning

• Oracle Real Application Clusters

• Oracle RAC One Node

• Oracle Real Application Testing

• Oracle Spatial and Graph

HPE Insight Cluster Management Utility
HPE Insight Cluster Management Utility (CMU) is an integrated, easy-to-use tool for provisioning, management, and monitoring in clusters of any scale. CMU makes cluster management more efficient and error free and enables administrators to optimize the use of their resources.

Ideally, in an Oracle RAC deployment scenario, all nodes should be configured exactly alike. All nodes should receive the same package binaries and the same kernel configuration parameters, and the same users and groups need to be created with the same UIDs and GIDs, so that whether a client lands on one node or another is entirely transparent.

In order to ensure all nodes were configured alike, we used HPE Insight Cluster Management Utility (CMU). We configured the first node to the point of deploying Oracle Grid software. We then backed up the operating system, which included all parameter settings and software binaries that are part of the operating system using HPE CMU.

We then used CMU to install that ‘golden image’ to all nodes in the cluster ensuring they were all configured identically. At that point, all that was needed was to create passwordless SSH between all nodes. Once that was completed, we were able to install the Oracle Grid software.
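The following is a minimal sketch of how passwordless SSH between the nodes might be set up for the software-owner accounts; the node names are illustrative assumptions, and the sshsetup script shipped with the Oracle installation media can be used instead.

# Sketch only: run as the oracle user (repeat for the grid user) on the first node.
# The node names below are examples, not the hostnames used in this testing.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa           # generate a key pair once
for node in dl380rac01 dl380rac02 dl380rac03 dl380rac04; do
    ssh-copy-id "${node}"                          # append the public key on each node
done
for node in dl380rac01 dl380rac02 dl380rac03 dl380rac04; do
    ssh "${node}" hostname                         # verify no password prompt appears
done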

[3] Source: Oracle Database 12c Product Family white paper, oracle.com/technetwork/database/oracle-database-editions-wp-12c-1896124.pdf


The following figure shows CMU monitoring the cluster. The first node is being utilized at 100% and the second node at 40%, while nodes 3 and 4 are not performing any work.

Figure 7. HPE CMU monitoring the CPU load on all nodes within the cluster.

Best practices and configuration guidance for the Oracle RAC solution
To optimize the configuration for Oracle, the following changes were made to the hardware, firmware, and software. All servers were configured in the same way.

HPE ProLiant BIOS settings
• Hyper-Threading - Enabled

• Intel Turbo Boost - Enabled

• HPE Power Profile - Maximum Performance


RHEL configuration
• Create udev rules to set the following device options for the SSD LUNs and the required settings for the Oracle volumes (per the values in Appendix E and Appendix F); a sketch of such a rule file follows this list.

– Set the sysfs "rotational" value for SSD disks to 0.

– Set the sysfs "rq_affinity" value for each device to 2.

Note: Request completions all occurring on core 0 caused a bottleneck; setting the rq_affinity value to 2 resolved this problem.

– Set the I/O scheduler to noop.

– Set permissions and ownership for the Oracle volumes.

• Volume size - Virtual volumes should all be the same size and SSD type for each Oracle ASM disk group.

• Use the recommended multipath parameters (see Appendix D) to maintain high availability while also maximizing performance and minimizing latencies.
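A minimal sketch of the kind of udev rule file these bullets describe is shown below; the file path, device-matching patterns, and ownership names are illustrative assumptions, and the rules actually used in this configuration are listed in Appendix E and Appendix F.

# Sketch only - see Appendix E and F for the rules used in this testing.
cat > /etc/udev/rules.d/99-oracle-storage.rules <<'EOF'
# Mark the LUNs as non-rotational, spread request completions, and use the noop scheduler
ACTION=="add|change", KERNEL=="sd*", ATTR{queue/rotational}="0"
ACTION=="add|change", KERNEL=="sd*", ATTR{queue/rq_affinity}="2"
ACTION=="add|change", KERNEL=="sd*", ATTR{queue/scheduler}="noop"
# Give the Oracle ASM multipath devices the ownership and permissions Oracle expects
# (the alias names and grid:asmadmin ownership are assumptions for illustration)
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="data*|redo*|voting*|ocr*", OWNER="grid", GROUP="asmadmin", MODE="0660"
EOF
udevadm control --reload-rules && udevadm trigger   # apply the new rules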

HPE 3PAR StoreServ space allocation
All servers used shared storage for the database. They all had the following LUN definitions (a sketch showing how LUNs like these can be presented to Oracle ASM follows the list):

• 8 x 512GB RAID-1 LUNs for the database, tablespaces, indexes and undo tablespace. This was labeled DATA.

• 8 x 128GB RAID-5 LUNs for the redo log space number 1. This was labeled REDOa.

• 8 x 128GB RAID-5 LUNs for the redo log space number 2. This was labeled REDOb.

• 1 x 16GB RAID-5 LUN for voting. This was labeled VOTING.

• 1 x 16GB RAID-5 LUN as a database recovery file destination. This was called OCR.
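The following is a minimal sketch, under assumed multipath alias names, of how such LUNs can be presented to Oracle ASM with external redundancy (the array already provides the RAID-1 and RAID-5 protection listed above); the disk groups in this testing could equally be created through the Grid installer or ASMCA, and the device paths shown are illustrative only.

# Sketch only: run as the grid user against the local ASM instance; paths are assumptions.
export ORACLE_SID=+ASM1
sqlplus / as sysasm <<'EOF'
-- External redundancy because the 3PAR array already protects the LUNs
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/mapper/data1', '/dev/mapper/data2', '/dev/mapper/data3', '/dev/mapper/data4',
       '/dev/mapper/data5', '/dev/mapper/data6', '/dev/mapper/data7', '/dev/mapper/data8';
-- REDOA, REDOB, VOTING, and OCR follow the same pattern with their own LUNs
EOF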


The following, figure 8, is a graphical depiction of how the storage was laid out and how it was connected to each node in the cluster.

[Figure 8 content: the shared LUN groups (8 x 512GB LUNs - DATA; 8 x 128GB LUNs - REDOa; 8 x 128GB LUNs - REDOb; 1 x 16GB LUN - VOTING; 1 x 16GB LUN - OCR) presented through Oracle ASM to RAC Nodes 1 through 4, each running RHEL 7.2, Oracle Grid, Oracle ASM, and the Oracle Database.]

Figure 8. Graphical depiction of storage layout and connections to all nodes participating in the cluster

Oracle configuration best practices

Oracle RAC configurations use instances local to each node tied together in a gridded infrastructure. Each server runs a local instance. The Listener process ties all of this together by acting as the central location where all connection requests are made. The Listener then passes those connection requests to a local instance. The private network is used for communication between nodes to keep all instances synchronized.

For Oracle configuration, on a per-instance basis, the following is recommended (a brief example of applying these settings follows the list):

• Disable automatic memory management, if applicable.

• Set buffer cache memory size large enough, per your implementation, to avoid physical reads. For this testing, the buffer cache size was set to 128GB.

• Create two large redo log file spaces of 220GB to minimize log file switching and reduce log file waits. One redo log file was placed in REDOa and the other was placed in REDOb.

• Create an undo tablespace of 200GB.

• Set the number of processes to a level that will allow all intended users to connect and process data. During our testing we used 3000 for this parameter setting.

• Set the number of open cursors to a level that will not constrict Oracle processes. This was set to 3000.
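The following is a minimal sketch, assuming an SPFILE-managed RAC database, of applying the parameter values cited above; the full parameter file used in this testing appears in Appendix B.

# Sketch only: assumes an SPFILE and a maintenance window for the static parameters.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET db_cache_size=128G SCOPE=SPFILE SID='*';   -- buffer cache sized to avoid physical reads
ALTER SYSTEM SET processes=3000    SCOPE=SPFILE SID='*';
ALTER SYSTEM SET open_cursors=3000 SCOPE=SPFILE SID='*';
EOF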


Figure 9 depicts a high level guide with the steps needed to install Oracle RAC on HPE ProLiant servers running Red Hat Enterprise Linux.

[Figure 9 content - Oracle RAC deployment process on HPE ProLiant servers with Red Hat Enterprise Linux:
Server and OS installation steps - install RHEL 7.2; create a public and a private network interface on each node; install and configure NTP; configure multipathing; set LUN names to recognizable names on all nodes; disable SELinux; install the required RHEL software packages; create the Oracle groups and users; set user limits for the oracle and grid users; create sudo rules for user oracle; add the DATA, REDO, VOTING and OCR LUNs and export them to all nodes; create secure shell connectivity between nodes; create a golden image using CMU and distribute the image to the 3 additional RAC servers.
Install Oracle Grid Infrastructure - install and configure the Grid Infrastructure; create the Oracle disk groups DATA, REDOa, REDOb, VOTING and OCR; perform post grid installation tasks; verify the Grid Infrastructure; run cluster commands.
Install Oracle DB software and install the database - install the Oracle database software on the cluster; ensure the disk groups are seen on all nodes and set permissions on all nodes; use DBCA to create a RAC database.
Oracle RAC deployment on HPE ProLiant servers running RHEL is then complete.]

Figure 9. High level steps for installing Oracle RAC on HPE ProLiant Servers

Capacity and sizing
We tested an Oracle Single-Instance Database on a single server. We then created an Oracle RAC cluster and tested Oracle RAC One Node, then two, three, and finally four node RAC. The goal of the testing was to provide IT departments with an idea of the performance available with each deployment method, and the relative scaling when growing the size of the RAC cluster. It should be noted that the server and storage infrastructure was held consistent throughout. Single-Instance was run on the same node as the Oracle RAC One Node. That node was then used when we extended to a two node RAC. Those two nodes were then used when the RAC was extended to three nodes, and so on.

During our testing, we did not see a reduction in the scalability available by adding additional RAC nodes. However, we do expect that at some node count and number of transactions, there will be a reduction in the effectiveness of adding a node. Determining at what node count that reduction in effectiveness occurs is outside the scope of this testing and paper.


The following graph (Figure 10) depicts the maximum performance available with each deployment model. All results are normalized to the Single-Instance result. The number underneath the bar is the number of connections at which we achieved the maximum number of transactions within each deployment model. It should be noted that each connection had no think or typing time, thus each connection represents multiple thousands of real-life users.

Figure 10. Maximum throughput. The connection count that generated the maximum transactions is below each bar.

You will notice that between Single-Instance and Oracle RAC One Node, there is a performance dip. This is attributable to the fact that there is overhead associated with making sure the shared storage is kept consistent when using RAC. Since we were already using all of the available processing power in the server when running Single-Instance, the manifestation of that additional processing for the shared storage for RAC, is that fewer compute resources were available for transactional processing.

You will also note that near linear scaling was achieved as we added nodes to the RAC configuration. While the scaling is near linear, HPE testing revealed that the increments are not a full 100%, 200% and 300%. HPE attributes this to the overhead incurred in keeping all of the nodes in sync.

Please see the graphs in the Workload results section for more detail regarding the outcome of specific performance tests that were executed as part of this Reference Architecture.

Workload description
The Oracle workload was tested using HammerDB, an open-source tool. The HammerDB tool implements an OLTP-type workload (60 percent read and 40 percent write) with small I/O sizes of a random nature. The transaction results were normalized and used to compare test configurations. Other metrics measured during the workload come from the operating system and/or standard Oracle Automatic Workload Repository (AWR) statistics reports.

The OLTP test, performed on a database 1.8TB in size, was both highly CPU and moderately I/O intensive. The environment was tuned for maximum user transactions. After the database was tuned, the transactions were recorded at different connection levels. Because customer workloads vary in characteristics, the measurement was made with a focus on maximum transactions.

Oracle Enterprise Database version 12.1.0.2.0 was used in these test configurations.
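The following is a minimal sketch of how a HammerDB schema build can be scripted from its command-line interface; the connect string, warehouse count, and password shown are illustrative assumptions rather than the settings used in this testing, and the dictionary keys should be checked against the HammerDB release in use.

# Sketch only: values and the connect string are assumptions for illustration.
cat > tpcc_build.tcl <<'EOF'
dbset db ora
dbset bm TPC-C
diset connection system_password manager
diset connection instance racscan:1521/DL380RAC
diset tpcc count_ware 1000
buildschema
EOF
./hammerdbcli auto tpcc_build.tcl

Timed driver runs were then executed against the schema at the connection (virtual user) counts shown in the graphs that follow.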

[Figure 10 data - maximum throughput normalized to Single-Instance, with the connection count at which the maximum was reached: Single-Instance 100% (250 connections); RAC One Node 78% (150); two node RAC 141% (250); three node RAC 215% (250); four node RAC 268% (275). Throughput scales linearly from 1 to 4 RAC nodes.]


We used several different Oracle connection counts for our tests. The results of various user count tests can be seen in the following graphs.

Workload results
The following graph, Figure 11, depicts the performance that was achieved with each user count. The results are anchored to the 25 connection count for the Single-Instance, which has been set to 100%. All other percentages are relative to that result.

Figure 11. Oracle RAC results from tests relative to the 25 connection count, Single-Instance result

[Figure 11 content: throughput percentage versus the number of Oracle connections (25 to 300) for Single-Instance, RAC One Node, and two, three and four node RAC. RAC One Node performance improvement tails off at 75 connections, 2 node tails off at 100, 3 node at 175, and 4 node at 275 connections.]


The following graph, Figure 12, is the same data, but we made the Single-Instance result 100% for each connection count. This was done to see the exact performance levels available for each connection count vis-à-vis the Oracle Single-Instance result at the same connection count.

Figure 12. Results from tests relative to the Single-Instance result for each connection count.

[Figure 12 annotation: using the Single-Instance result for each connection count more clearly illustrates the scaling effect.]


The following graph, Figure 13, shows the CPU utilization for each connection count and each configuration. You will note that Single-Instance had the highest CPU utilization, followed by RAC One Node, two node RAC, three node RAC, and four node RAC respectively. We attributed the result to the fact that as we add nodes to the cluster, more and more time is spent coordinating activities between all nodes within the cluster.

Figure 13. CPU utilization at various user counts and deployment configurations

The following graph, Figure 14, is a graphical depiction of the transaction latency incurred during the testing at various user counts.



The higher transactional latency at the lower node counts is attributable to the limited ability of those deployment scenarios to scale. You will notice that Oracle RAC One Node had the highest transactional latency, followed by Single-Instance, and then by the 2, 3 and 4 node RAC configurations.

Figure 14. Transaction latency for all deployment options and connection counts

[Figure 14 content: transaction latency in milliseconds versus the number of Oracle connections (25 to 300) for each deployment option; four node RAC had the lowest transactional latency.]


Analysis and recommendations
As shown in the above graphs, scaling from 1 to 4 node Oracle RAC deployments demonstrates a fairly linear performance curve. The RAC performance numbers are broken out in the following graph, Figure 15. In this graph, we include only the RAC results and we set the RAC One Node number to 100% for each connection count.

Figure 15. Results for RAC only deployment options based on Oracle RAC One Node results



The following graph, Figure 16, shows the average throughput improvement percentage across all connection counts as compared to Oracle RAC One Node. It further shows the maximum throughput improvement percentage that was attained for each deployment model. It should be noted that the average performance number is artificially low because, at low CPU utilization levels, having more cooperating servers made a smaller difference in the ability to process additional transactions.

Figure 16. Average and maximum performance improvement for each deployment based on RAC One Node

Two node RAC provided 1.65 to 1.82 times the throughput of RAC One Node. Three node RAC provided 2.27 to 2.79 times, and four node RAC provided 2.64 to 3.46 times, the throughput of RAC One Node.

It should be noted, however, that performance for each specific Oracle instance may vary. Because of this, HPE recommends that customers perform a proof of concept with their own data to determine the level of scaling they should expect.

Further, HPE does not believe these scaling numbers will continue ad infinitum. It is beyond the scope of this paper to determine where performance improvements start to decrease when adding additional nodes.

One of the key findings of this paper is that Oracle Single-Instance performed better than Oracle RAC One Node. Customers need to be cognizant of this data point if planning to deploy RAC One Node. However, RAC One Node provides a high availability benefit which may be appealing to customers not wanting to incur the expense of deploying a two node RAC.



Summary
The HPE ProLiant DL380 Gen9 is a capable, scalable server on which to deploy Oracle RAC. When paired with an HPE 3PAR StoreServ 8450 All Flash array, it forms a formidable combination that allows Oracle RAC deployments to scale in a near linear fashion. Binding the entire environment together is the HPE 5930 network switch, which allowed us to create a public network on which the clients connect, a private network on which the nodes stay synchronized, and the data channel for communication between the HPE 3PAR StoreServ 8450 and the HPE ProLiant DL380 Gen9 servers. This combination allows a business to grow at its own pace, acquiring the equipment required to satisfy today's business requirements rather than overspending on present-day equipment solely to plan for future growth.

Customers deploying Oracle RAC also receive the benefit of a high availability platform that allows for zero (or close to zero when using RAC One Node) unplanned downtime. If a node were to fail, the cluster and database remain available to the business, albeit with less than optimal performance.

Customers can use this Reference Architecture to determine, at a high level, the impact of moving from Oracle Single-Instance to Oracle RAC, and use the information to determine at which deployment option or node count to step into an Oracle RAC solution.

This Reference Architecture describes solution testing performed in July and August, 2016.

Implementing a proof-of-concept
As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.


Appendix A: Bill of materials
The following bill of materials (BOM) contains electronic license to use (E-LTU) parts. Electronic software license delivery is now available in most countries. HPE recommends purchasing electronic products over physical products (when available) for faster delivery and for the convenience of not tracking and managing confidential paper licenses. For more information, please contact your reseller or an HPE representative.

Note: Part numbers are at time of publication/testing and subject to change. The bill of materials does not include complete support options or other rack and power requirements. If you have questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details, hpe.com/us/en/services/consulting.html.

Table 1a. Bill of materials for the 4-node RAC solution. In order to determine the parts required for a lesser deployment, for instance 3-node, reduce the number of servers by 1.

Qty Part Number Description

Rack and Server infrastructure

1 H6J66A HPE 42U 600x1075mm Advanced Shock Rack

1 H6J66A 001 HPE Factory Express Base Racking Service

4 719064-B21 HPE DL380 Gen9 8SFF CTO Server

4 719064-B21 ABA U.S. - English localization

4 817949-L21 HPE DL380 Gen9 E5-2689v4 FIO Kit

4 817949-B21 HPE DL380 Gen9 E5-2689v4 Kit

64 805353-B21 HPE 32GB 2Rx4 PC4-2400T-L Kit

8 816562-B21 HPE 480GB 12Gb SAS RI-3 SFF SC SSD

4 749974-B21 HPE Smart Array P440ar/2G FIO Controller

4 652503-B21 HPE Ethernet 10Gb 2P 530SFP+ Adptr

4 727054-B21 HPE Ethernet 10Gb 2-port 562FLR-SFP+Adpt

4 733660-B21 HPE 2U SFF Easy Install Rail Kit

8 AJ763B HPE 82E 8Gb Dual-port PCI-e FC HBA

8 720478-B21 HPE 500W FS Plat Ht Plg Pwr Supply Kit

4 BD505A HPE iLO Adv incl 3yr TSU 1-Svr Lic

2 H8B55A HPE Mtrd Swtchd 14.4kVA/CS8365C/NA/J PDU

1 H6J85A HPE Rack Hardware Kit

1 BW930A HPE Air Flow Optimization Kit

1 BW930A B01 Include with complete system

1 BW906A HPE 42U 1075mm Side Panel Kit

1 H6J66A HPE 42U 600x1075mm Advanced Shock Rack

1 H6J66A 001 HPE Factory Express Base Racking Service

HPE 3PAR StoreServ 8450 All Flash

1 BW904A HPE 42U 600x1075mm Enterprise Shock Rack

1 BW904A 001 HPE Factory Express Base Racking Service

1 H6Z25A HPE 3PAR StoreServ 8450 4N Stor Cnt Base

4 H6Z00A HPE 3PAR StoreServ 8000 4-pt 16Gb FC Adapter

16 K2Q95A HPE 3PAR StoreServ 8000 480GB SFF SSD

1 L7C17A HPE 3PAR StoreServ 8450 OS Suite Base LTU

80 L7C18A HPE 3PAR StoreServ 8450 OS Suite Drive LTU

8 H6Z26A HPE 3PAR StoreServ 8000 SFF(2.5in) SAS Drive Encl

64 K2Q95A HPE 3PAR StoreServ 8000 480GB SFF SSD

1 K2R28A HPE 3PAR StoreServ SPS Service Processor

1 TK808A HPE Rack Front Door Cover Kit

80 QK735A HPE Premier Flex LC/LC OM4 2f 15m Cbl

16 QK734A HPE Premier Flex LC/LC OM4 2f 5m Cbl

4 H5M58A HPE Basic 4.9kVA/L6-30P/C13/NA/J PDU

1 BW906A HPE 42U 1075mm Side Panel Kit

1 BD362A HPE 3PAR StoreServ Mgmt/Core SW Media

1 BD363A HPE 3PAR OS Suite Latest Media

1 BD365A HPE 3PAR SP SW Latest Media

1 TC472A HPE Intelligent Inft Analyzer SW v2 LTU

HPE 5930 network switch

1 JG505A HPE 59xx CTO Switch Solution

1 JH379A HPE 5930 2-slot 2QSFP BF AC Bdl

1 JH180A HPE 5930 24p SFP+ and 2p QSFP+ Mod

1 JH184A HPE 5930 24p Conv Port and 2p QSFP+ Mod

1 JG505A HPE 59xx CTO Switch Solution

1 JG510A HPE 5900AF 48G 4XG 2QSFP+ Switch

2 JC680A HPE 58x0AF 650W AC Power Supply

2 JC682A HPE 58x0AF Bck(pwr) Frt(prt) Fan Tray

Appendix B: Oracle configuration parameters

DL380RAC_2.__data_transfer_cache_size=0
DL380RAC_4.__data_transfer_cache_size=0
DL380RAC_3.__data_transfer_cache_size=0
DL380RAC_1.__data_transfer_cache_size=0
DL380RAC_2.__db_cache_size=137975824384
DL380RAC_4.__db_cache_size=137975824384
DL380RAC_3.__db_cache_size=137975824384
DL380RAC_1.__db_cache_size=137975824384
DL380RAC_2.__java_pool_size=3758096384
DL380RAC_4.__java_pool_size=3758096384
DL380RAC_3.__java_pool_size=3758096384
DL380RAC_1.__java_pool_size=3758096384
DL380RAC_2.__large_pool_size=2147483648
DL380RAC_4.__large_pool_size=2147483648
DL380RAC_3.__large_pool_size=2147483648
DL380RAC_1.__large_pool_size=2147483648
DL380RAC_2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DL380RAC_3.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DL380RAC_4.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DL380RAC_1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DL380RAC_2.__pga_aggregate_target=54223962112
DL380RAC_4.__pga_aggregate_target=54223962112
DL380RAC_3.__pga_aggregate_target=54223962112
DL380RAC_1.__pga_aggregate_target=54223962112
DL380RAC_2.__sga_target=162671886336
DL380RAC_4.__sga_target=162671886336
DL380RAC_3.__sga_target=162671886336
DL380RAC_1.__sga_target=162671886336
DL380RAC_2.__shared_io_pool_size=536870912
DL380RAC_4.__shared_io_pool_size=536870912
DL380RAC_3.__shared_io_pool_size=536870912
DL380RAC_1.__shared_io_pool_size=536870912
DL380RAC_2.__shared_pool_size=17716740096
DL380RAC_4.__shared_pool_size=17716740096
DL380RAC_3.__shared_pool_size=17716740096
DL380RAC_1.__shared_pool_size=17716740096
DL380RAC_2.__streams_pool_size=0
DL380RAC_4.__streams_pool_size=0
DL380RAC_3.__streams_pool_size=0
DL380RAC_1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/DL380RAC/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='+DATA/DL380RAC/CONTROLFILE/current.267.919341699','+OCR/DL380RAC/CONTROLFILE/current.256.919341699'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='DL380RAC'
*.db_recovery_file_dest='+OCR'
*.db_recovery_file_dest_size=5535m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DL380RACXDB)'
*.open_cursors=3000
*.pga_aggregate_target=51560m
*.processes=3000
*.remote_login_passwordfile='exclusive'
*.sga_target=154680m
DL380RAC_1.undo_tablespace='UNDOTBS1'
DL380RAC_2.undo_tablespace='UNDOTBS2'
DL380RAC_3.undo_tablespace='UNDOTBS3'
DL380RAC_4.undo_tablespace='UNDOTBS4'
_high_priority_processes='VKTM*|LG*'
lock_sga=true
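
Because lock_sga=true pins the SGA in memory and the SGA is backed by the HugePages reserved in Appendix C, the oracle and grid users need memlock limits at least as large as the SGA. The limits.conf fragment below is a minimal sketch: the file name is hypothetical and the value (in KB) is simply the 154680m SGA target converted to kilobytes (154680 MB x 1024 = 158,392,320 KB), not a setting captured from the tested systems.

# Hypothetical file: /etc/security/limits.d/99-grid-oracle.conf
# memlock is expressed in KB; size it to at least the SGA
oracle   soft   memlock   158392320
oracle   hard   memlock   158392320
grid     soft   memlock   158392320
grid     hard   memlock   158392320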

Appendix C: Linux kernel parameters

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 132015897
kernel.shmmax = 270368557056
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.wmem_default = 262144
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
vm.nr_hugepages = 150021
vm.hugetlb_shm_group = 54322
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 300000
kernel.numa_balancing = 0
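
A minimal sketch of how these parameters might be applied and verified on each node using standard sysctl tooling; the drop-in file name under /etc/sysctl.d/ is an assumption, not part of the tested configuration.

# Place the parameters above in a drop-in file (hypothetical name), then apply and verify
sysctl -p /etc/sysctl.d/98-oracle-rac.conf
sysctl vm.nr_hugepages kernel.shmmax             # confirm the values took effect
grep HugePages_Total /proc/meminfo               # should report 150021 once the HugePages allocation succeeds (a reboot may be required to reserve this many)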

Appendix D: Multipath configuration
The following parameters were included in the /etc/multipath.conf file; these are the settings for RHEL 7.2 and HPE 3PAR Persona 2 (ALUA). Note that the multipath device files were given aliases so that all nodes see the same, human-readable device names.

defaults {
    polling_interval 10
    user_friendly_names yes
    find_multipaths yes
}
devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker tur
        features "0"
        hardware_handler "1 alua"
        prio alua
        failback immediate
        rr_weight uniform
        no_path_retry 18
        rr_min_io_rq 1
        detect_prio yes
    }
}
blacklist {
    devnode "^(ram|zram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
multipaths {
    multipath {
        wwid 360002ac000000000000000b700019f6e
        alias redo01
    }
    multipath {
        wwid 360002ac0000000000000011700019f6e
        alias voting
    }
    multipath {
        wwid 360002ac000000000000000b600019f6e
        alias redo02
    }
    multipath {
        wwid 360002ac000000000000000d400019f6e
        alias data01
    }
    multipath {
        wwid 360002ac000000000000000b500019f6e
        alias redo03
    }
    multipath {
        wwid 360002ac000000000000000d300019f6e
        alias data02
    }
    multipath {
        wwid 360002ac000000000000000b400019f6e
        alias redo04
    }
    multipath {
        wwid 360002ac000000000000000d200019f6e
        alias data03
    }
    multipath {
        wwid 360002ac000000000000000b300019f6e
        alias redo05
    }
    multipath {
        wwid 360002ac000000000000000d100019f6e
        alias data04
    }
    multipath {
        wwid 360002ac000000000000000d000019f6e
        alias data05
    }
    multipath {
        wwid 360002ac000000000000000cf00019f6e
        alias data06
    }
    multipath {
        wwid 360002ac000000000000000ce00019f6e
        alias data07
    }
    multipath {
        wwid 360002ac000000000000000cd00019f6e
        alias data08
    }
    multipath {
        wwid 360002ac000000000000000ba00019f6e
        alias redo06
    }
    multipath {
        wwid 360002ac000000000000000b900019f6e
        alias redo07
    }
    multipath {
        wwid 360002ac000000000000000b800019f6e
        alias redo08
    }
    multipath {
        wwid 360002ac0000000000000011800019f6e
        alias ocr
    }
    multipath {
        wwid 360002ac0000000000000012000019f6e
        alias redob01
    }
    multipath {
        wwid 360002ac0000000000000011f00019f6e
        alias redob02
    }
    multipath {
        wwid 360002ac0000000000000011e00019f6e
        alias redob03
    }
    multipath {
        wwid 360002ac0000000000000011d00019f6e
        alias redob04
    }
    multipath {
        wwid 360002ac0000000000000011c00019f6e
        alias redob05
    }
    multipath {
        wwid 360002ac0000000000000011b00019f6e
        alias redob06
    }
    multipath {
        wwid 360002ac0000000000000011a00019f6e
        alias redob07
    }
    multipath {
        wwid 360002ac0000000000000011900019f6e
        alias redob08
    }
}
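
Once /etc/multipath.conf is in place on a node, the aliases can be activated and checked with standard device-mapper-multipath commands; the sequence below is a generic sketch rather than a step recorded from this solution.

systemctl reload multipathd                                  # re-read /etc/multipath.conf
multipath -r                                                 # reload the multipath maps so the aliases take effect
multipath -ll data01                                         # verify the alias, paths and ALUA priority groups for one LUN
ls /dev/mapper/ | grep -E '^(data|redo|ocr|voting)'          # alias names should be identical on every node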

Appendix E: HPE 3PAR StoreServ 8450 udev device permission rules
For HPE 3PAR StoreServ 8450 storage configuration, a udev rules file named /etc/udev/rules.d/12-dm-permission.rules was created to set the required ownership of the Oracle ASM LUNs:

ENV{DM_NAME}=="data01", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data02", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data03", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data04", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data05", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data06", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data07", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="data08", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo01", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo02", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo03", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo04", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo05", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo06", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo07", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redo08", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="voting", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="ocr", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob01", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob02", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob03", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob04", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob05", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob06", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob07", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="redob08", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
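
The new permission rules can be applied to the existing device-mapper devices without a reboot; the commands below are standard udev tooling, shown here as a sketch rather than as part of the documented build.

udevadm control --reload-rules                       # re-read the files under /etc/udev/rules.d/
udevadm trigger --type=devices --action=change       # re-run the rules against existing devices
ls -lL /dev/mapper/data01 /dev/mapper/voting         # should now show grid:asmadmin with mode 660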

Appendix F: udev rules
For the HPE 3PAR StoreServ 8450, a rules file named /etc/udev/rules.d/10-3par.rules was created to set the rotational flag, I/O scheduler, rq_affinity, nomerges, and nr_requests queue attributes.

ACTION=="add|change", KERNEL=="dm-*", PROGRAM="/bin/bash -c 'cat /sys/block/$name/slaves/*/device/vendor | grep 3PARdata'", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="noop", ATTR{queue/rq_affinity}="2", ATTR{queue/nomerges}="1", ATTR{queue/nr_requests}="128"
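
To confirm the rule matched the 3PAR LUNs, the resulting queue attributes can be read back from sysfs; dm-2 below is only an example device name, and the whole check is a generic sketch, not part of the original procedure.

grep -H . /sys/block/dm-*/queue/scheduler                  # the scheduler shown in brackets should read [noop] for 3PAR devices
grep -H . /sys/block/dm-*/queue/rotational                 # should be 0 for the 3PAR LUNs
udevadm info --query=property /dev/dm-2 | grep DM_NAME     # confirm which LUN alias this dm device corresponds to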


© Copyright 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. SAP is a registered trademark of SAP, Inc. Microsoft and Windows are trademarks of Microsoft Corporation. VMWare is a registered trademark of VMWare Corporation. Citrix is a registered trademark of Citrix Corporation. ENERGY STAR is a registered mark owned by the U.S. government.

4AA6-7809ENW, September 2016

Resources and additional links
HPE Reference Architectures, hpe.com/info/ra

HPE Servers, hpe.com/servers

HPE Storage, hpe.com/storage

HPE ProLiant DL380 Gen9 Server, www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=7271241

HPE 3PAR StoreServ 8450 All Flash, https://www.hpe.com/us/en/storage/3par.html

HPE 3PAR StoreServ Storage Concepts Guide HPE 3PAR OS 3.2.2, http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04204225

HPE Networking, hpe.com/networking

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.