Oracle RAC 12c (12.1.0.2) Operational Best Practices - A result of true collaboration
Copyright © 2014, Oracle and/or its affiliates. All rights reserved. |
Oracle RAC 12c (12.1.0.2) Operational Best Practices
A result of true collaboration
Markus Michalewicz, Director of Product Management, Oracle Real Application Clusters
April 14th, 2015
@OracleRACpm
http://www.linkedin.com/in/markusmichalewicz
http://www.slideshare.net/MarkusMichalewicz
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
Operational Best Practices
The deck organizes its best practices as a matrix of areas against use cases:
• Areas: Storage, OS, Network, Cluster, DB (plus Installation / Update)
• Use cases: Generic Clusters, Extended Cluster, Dedicated (OLTP / DWH), Consolidated Environments
http://www.slideshare.net/MarkusMichalewicz/oracle-rac-12c-collaborate-best-practices-ioug-2014-version
Program Agenda
1. New in Oracle RAC 12.1.0.2 (Install)
Operational Best Practices for:
2. Generic Clusters
3. Extended Cluster
4. Dedicated Environments
5. Consolidated Environments
Appendices A – D
New in 12.1.0.2 Install: GIMR – No Choice Anymore
The Grid Infrastructure Management Repository (GIMR), optional in 12.1.0.1, is always installed with 12.1.0.2.
• Single-instance Oracle Database 12c Container Database with one PDB
  – The resource is called "ora.mgmtdb"
  – Future consolidation planned
  – Installed on one of the (HUB) nodes
  – Managed as a failover database
  – Stored in the first ASM disk group created
Recommendation: Change in Disk Group Creation
12.1.0.1 disk group creation: start with the "GRID" disk group.
12.1.0.2 disk group creation: start with the "GIMR" (GIMR-hosting) disk group.
• The GIMR typically does not require redundancy for its disk group – hence, do not share it with the GRID DG.
• Clusterware files (Voting Files and OCR) are easy to relocate – see the example in Appendix A.
• More information:
  – How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
  – Managing the Cluster Health Monitor Repository (Doc ID 1921105.1)
More information in Appendix A
For Upgrades, Follow the OCR
• Creating a new disk group (DG) for the GIMR works for fresh installs.
• When upgrading to 12.1.0.2, the GIMR is placed in the same DG as the OCR.
  – If OCR mirrors are used, the first location in the list will be used.
  – For future versions the following rules apply:
    • If a GIMR is present (12c or later), the new GIMR is placed in the same DG as the current GIMR.
    • If no GIMR is present, the new GIMR is placed in the same DG as the Voting Disk(s) (ER 19661882).
• To define the GIMR DG location during upgrades: place "the OCR" in the desired DG using ocrconfig (an online operation), then upgrade, and finally relocate the OCR back to its original location, if required.
New in 12.1.0.2: Recommendation to Use Flex Cluster
12.1.0.1: go with a Standard Cluster. 12.1.0.2: use a Flex Cluster (includes Flex ASM by default).
One exception: if installing for an Extended Oracle RAC cluster, use Standard Cluster + Flex ASM.
12.1.0.2 – Four Storage Options
1. Standard ASM – pre-12c ASM configuration mode
2. Oracle Flex ASM – recommended
3. ASM Client Cluster – new in 12.1.0.2, assuming a "Standard Cluster" install was chosen (not Flex Cluster)
4. Non-ASM managed storage
What is an ASM Client Cluster?
[Diagram] Two models are contrasted:
• Flex Cluster (12c: 12.1.0.1 / 12.1.0.2 / …): one cluster in which tightly coupled HUB nodes (Node1 … NodeN) run Oracle Clusterware and ASM against Flex ASM managed shared storage, while loosely coupled Leaf Nodes (LeafNode1 … LeafNodeN) host applications (AppA, AppB, …).
• ASM Client Clusters (CC, 12.1.0.2+ going forward): independent Oracle Grid Infrastructure clusters (CC1, CC2, …) which use the RAC DB Cluster's Flex ASM storage to store their Voting Disks and OCR – no directly attached shared storage required.
Continue to Use Leaf Nodes for Applications in 12.1.0.2
DBCA works despite running Leaf Nodes:
[GRID]> olsnodes -s -t
germany   Active Unpinned
argentina Active Unpinned
brazil    Active Unpinned
italy     Active Unpinned
spain     Active Unpinned
More information in Appendix D
New Network Flexibility in 12.1.0.2 – Recommendation
Install what's necessary; configure what's desired (update later).
More information in Appendix B
Automatic Diagnostic Repository (ADR)
• Oracle Grid Infrastructure now supports the Automatic Diagnostic Repository.
• ADR simplifies log analysis by
  • centralizing most logs under a defined folder structure: ADR_base/diag/{asm, rdbms, tnslsnr, clients, crs, (others)}
  • maintaining a history of logs
  • providing its own command line tool to manage diagnostic information
More information in Appendix C
Operational Best Practices – Generic Clusters
(Matrix recap: areas Storage, OS, Network, Cluster, DB for the Generic Clusters use case.)
Generic Clusters – Storage (matrix: Storage → Appendix A)
Step 1: Create "GRID" Disk Group – Generic Cluster
Step 2: Move Clusterware Files
Step 3: Move ASM SPFILE / password file
More information in Appendix A
Generic Clusters – OS / Memory
Avoid memory pressure!
• Use Memory Guard – activated by default with 12.1.0.2.
• Use Solid State Disks (SSDs) to host swap space.
  – More information in "My Oracle Support" (MOS) note 1671605.1 – "Use Solid State Disks to host swap space in order to increase node availability".
• Use HugePages for the SGA (Linux) – more information in MOS notes 361323.1 & 401749.1.
• Avoid Transparent HugePages (Linux 6) – see the alert in MOS note 1557478.1.
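The HugePages guidance above comes down to simple arithmetic: the configured number of huge pages must cover the combined SGA of all instances on the node. A minimal sketch of that calculation, assuming 2 MB huge pages and an illustrative 2 GB SGA (MOS note 401749.1 ships the full sizing script, which inspects the running instances):

```shell
# HugePages sizing sketch (Linux, 2 MB huge pages assumed; values are illustrative).
sga_mb=2048                     # combined SGA of all instances on this node, in MB
hugepage_kb=2048                # "Hugepagesize" from /proc/meminfo on most x86_64 systems
pages=$(( sga_mb * 1024 / hugepage_kb ))
echo "vm.nr_hugepages = $pages" # set via /etc/sysctl.conf; also check memlock limits
```

Remember that Transparent HugePages must still be disabled separately, as the slide notes.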
Generic Clusters – OS / OraChk and TFA
• OraChk – formerly RACcheck, a.k.a. ExaChk
• RAC Configuration Audit Tool – details in MOS note 1268927.1
• Checks "Oracle" (databases):
  – Standalone Database
  – Grid Infrastructure & Oracle RAC
  – Maximum Availability Architecture (MAA) validation (if configured)
  – Oracle hardware setup configuration
• Trace File Analyzer (TFA) – more information in MOS note 1513912.1
TFA – Efficiency from A to Z
Generic Clusters – OS Summary (matrix: OS → Memory Config + OraChk / TFA)
Generic Clusters – Network
• Define "normal".
• Size the interconnect for aggregated throughput.
• Use redundancy (HAIPs) for load balancing.
• Use different subnets for the interconnect.
• Use Jumbo Frames wherever possible, and ensure the entire infrastructure supports them.
  – Without them, an 8K data block sent between nodes must be fragmented to fit a 1500-byte MTU on Send() and reassembled on Receive().
More information in Appendix B
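To illustrate the Jumbo Frames point above: an 8K data block shipped across the interconnect has to be fragmented to fit the MTU and reassembled on the receiving side. A back-of-the-envelope sketch (assuming a plain 20-byte IPv4 header per fragment; real UDP/IP overhead differs slightly):

```shell
# Fragments needed to carry one 8 KB block at a given MTU (20-byte IPv4 header assumed).
block=8192
for mtu in 1500 9000; do
  payload=$(( mtu - 20 ))                       # usable bytes per fragment
  frags=$(( (block + payload - 1) / payload ))  # ceiling division
  echo "MTU $mtu: $frags fragment(s)"
done
# MTU 1500: 6 fragment(s)
# MTU 9000: 1 fragment(s)
```

With a 9000-byte jumbo frame MTU the block fits in a single frame, removing the fragmentation and reassembly work on every block transfer.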
Virtual Generic Clusters? – Use Ping Targets with 12.1.0.2
• Fact: in virtual environments, certain network components are "virtualized".
• Consequence: sometimes, network failures are not reflected in the guest environment.
• Reason: OS commands run in the guest fail to detect the network failure as the "virtual NIC" remains "up".
• Result: corrective actions may not be performed.
• Solution: Ping Targets.
(Virtual) Generic Clusters – Use Ping Targets on the Public Network
• Ping Targets are new in Oracle RAC 12.1.0.2.
• Ping Targets use a probe to a given destination (IP) in order to determine network availability.
• Ping Targets are used in addition to local checks.
• Ping Targets are used on the public network only – private networks already use constant heartbeating.
• Ping Targets should be chosen carefully:
  • Availability of the ping target is important.
  • More than one target can be defined for redundancy.
  • Ping target failures should be meaningful.
  • Example: pinging a central switch (ping response probably needs to be enabled) between the clients and the database servers.

[GRID]> su
Password:
[GRID]> srvctl modify network -k 1 -pingtarget "<UsefulTargetIP(s)>"
[GRID]> exit
[GRID]> srvctl config network -k 1
Network 1 exists
Subnet IPv4: 10.1.1.0/255.255.255.0/eth0, static
Subnet IPv6:
Ping Targets: <UsefulTargetIP(s)>
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
Generic Clusters – Network Summary (matrix: Network → as discussed + Appendix B)
Generic Clusters – Cluster (matrix: Cluster → Appendix D)
1. Install / maintain HUBs, add Leaf Nodes
2. Adding nodes to the cluster
3. Use Leaf Nodes for non-DB use cases
Extended Oracle RAC
From an Oracle perspective, an Extended RAC installation is in use as soon as data (using Oracle ASM) is mirrored between independent storage arrays. (Exadata Storage Cells are excluded from this definition.)
ER: open to make "Extended Oracle RAC" a distinguishable configuration.
Extended Cluster – Storage (matrix: Storage → Appendix A)
Step 1: Create "GRID" Disk Group – Extended Cluster
Step 2: Move Clusterware Files
Step 3: Move ASM SPFILE / password file
Step 4: "srvctl modify asm -count ALL"
How to Make Use of "ASM_PREFERRED_READ_FAILURE_GROUPS"?
Extended Oracle RAC – use Standard Cluster + Flex ASM.
• What are "ASM_PREFERRED_READ_FAILURE_GROUPS"?
  – The ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter is a comma-delimited list of strings that specifies the failure groups that should be preferentially read by the given instance. This parameter is generally used only for clustered ASM instances and its value can be different on different nodes.
  – Example: "diskgroup_name1.failure_group_name1, ..."
  – For Extended RAC, the paper "Oracle Real Application Clusters on Extended Distance Clusters" (p. 26) suggests using the "ASM_PREFERRED_READ_FAILURE_GROUPS parameter to go to the local mirror instead of going to any available mirror."
• What is the issue?
  – With Flex ASM, by default only three ASM instances are started in the cluster.
  – Assuming the Extended RAC uses more than three nodes (e.g. two nodes on each side), a cross-side failover of an ASM instance becomes likely.
  – Such a cross-side failover breaks the "local mirror" logic, as the local mirror is side-dependent.
• Solution:
  – Follow the recommendation to use "srvctl modify asm -count ALL" for Extended Oracle RAC implementations when using Flex ASM. This reduces the statistical likelihood of a cross-side failover.
  – For future releases, ER / BUG 17045279 – "ASM_PREFERRED_READ DOES NOT WORK WITH FLEX ASM" addresses this problem.
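The two knobs discussed above can be sketched as follows; disk group, failure group, and instance names are hypothetical, and the statements are an illustrative sketch rather than a verified procedure:

```sql
-- Point each side's ASM instance at its local mirror (names are hypothetical).
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SIDEA' SID='+ASM1';
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SIDEB' SID='+ASM2';
```

Combined with "srvctl modify asm -count ALL", every node keeps a local ASM instance, so the preferred-read setting stays side-correct.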
Extended Cluster – OS (matrix: OS → as for Generic Clusters)
More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF) – http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf
Extended Cluster – Network
• Define "normal". The goal in an Extended RAC setup is to hide the distance; any latency increase might (!) impact application performance.
• VLANs are fully supported for Oracle RAC – for more information, see: http://www.oracle.com/technetwork/database/database-technologies/clusterware/overview/interconnect-vlan-06072012-1657506.pdf
• Vertical subnet separation is not supported.
More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF) – http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf
Extended Cluster – Network Summary (matrix: Network → as discussed + Appendix B)
Extended Cluster – Cluster Summary (matrix: Cluster → as Generic; Appendix D)
The goal in an Extended RAC setup is to hide the distance.
Dedicated Environments – Only a Few Items to Consider (matrix recap)
Dedicated Environments – Network
[GRID]> srvctl config scan -all
SCAN name: cupscan.cupgnsdom.localdomain, Network: 1    <- SCAN on Network 1
Subnet IPv4: 10.1.1.0/255.255.255.0/eth0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.1.1.55
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN name: cupscan2, Network: 2                         <- SCAN on Network 2
Subnet IPv4: 10.2.2.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.2.2.55
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

More information:
• Valid Node Checking For Registration (VNCR) (Doc ID 1600630.1)
• How to Enable VNCR on RAC Database to Register only Local Instances (Doc ID 1914282.1)
• Oracle Real Application Clusters – Overview of SCAN – http://www.oracle.com/technetwork/database/options/clustering/overview/scan-129069.pdf
Dedicated Environments – Network Summary (matrix: Network → Appendix B + as discussed)
Dedicated Environments – Database (DB)
• Problem: patching and upgrades. Solution: Rapid Home Provisioning.
• Problem: memory consumption. Solution: memory caps.
• Problem: number of connections. Solution: various, mostly using connection pools.
Dedicated Environments – Database (DB)
New in Oracle Database 12c: SGA and PGA aggregate targets can be limited – see the documentation for "PGA_AGGREGATE_LIMIT".

[DB]> sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 18 18:57:30 2014
…
SQL> show parameter pga
NAME                          TYPE        VALUE
----------------------------- ----------- -------
pga_aggregate_limit           big integer 2G
pga_aggregate_target          big integer 211M
SQL> show parameter sga
NAME                          TYPE        VALUE
----------------------------- ----------- -------
lock_sga                      boolean     FALSE
pre_page_sga                  boolean     TRUE
sga_max_size                  big integer 636M
sga_target                    big integer 636M
unified_audit_sga_queue_size  integer     1048576

Connection handling:
1. Do not handle connection storms – prevent them.
2. Limit the number of connections to the database.
3. Use connection pools where possible:
   • Oracle Universal Connection Pool (UCP) – http://docs.oracle.com/database/121/JJUCP/rac.htm#JJUCP8197
4. Ensure applications close connections.
   • If the number of active connections is considerably lower than the number of open connections, consider using "Database Resident Connection Pooling" – http://docs.oracle.com/database/121/JJDBC/drcp.htm#JJDBC29023
5. If you cannot prevent the storm, slow it down.
   • Use listener parameters to mitigate the negative side effects of a connection storm. Most of these parameters can also be used with SCAN.
6. Services can be assigned to one subnet at a time. You control the subnet, you control the service.
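The memory caps mentioned above can be sketched like this; the sizes are illustrative values, not recommendations:

```sql
-- Cap PGA and SGA for an instance (illustrative sizes).
ALTER SYSTEM SET pga_aggregate_limit = 2G SCOPE=BOTH   SID='*';  -- hard PGA cap (new in 12c)
ALTER SYSTEM SET sga_max_size        = 4G SCOPE=SPFILE SID='*';  -- static: takes effect on restart
ALTER SYSTEM SET sga_target          = 4G SCOPE=SPFILE SID='*';
```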
Dedicated Environments – Database Summary (matrix: DB → as discussed)
Consolidated Environments – Network Summary (matrix: Network → as Dedicated + as discussed)
Consolidated Environments – No VMs → 2 Main Choices
Database consolidation:
• Multiple database instances running on a server.
• Need to manage memory across instances.
• Use Instance Caging and QoS (in a RAC cluster).
Use Oracle Multitenant:
• A limited number of Container DB instances to manage.
• Memory allocation on the server is simplified.
• Instance Caging may not be needed (QoS still beneficial).
[Diagram: a HUB cluster (argentina, germany, brazil, italy) running consolidated instances, with Instance Caging shown as CPU_Count=5 and CPU_Count=3.]
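Instance Caging, as referenced above, amounts to two settings per instance: a CPU_COUNT and an active Resource Manager plan. A sketch matching the CPU_Count values on the slide:

```sql
-- Cage this instance to 5 CPUs; a Resource Manager plan must be active for the cage to be enforced.
ALTER SYSTEM SET cpu_count             = 5              SCOPE=BOTH SID='*';
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH SID='*';
```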
Consolidated Environments – Make Them Dedicated …
Use Oracle Multitenant:
• Can be operated as a Dedicated Environment, at least from the cluster perspective, if only one Container Database instance per server is used.
More information:
• http://www.oracle.com/technetwork/database/focus-areas/database-cloud/database-cons-best-practices-1561461.pdf
• http://www.oracle.com/technetwork/database/options/clustering/overview/rac-cloud-consolidation-1928888.pdf
Consolidated Environments – Database (DB) Summary (matrix: DB → as discussed / as above)
Specifically for Oracle Multitenant on Oracle RAC, see: http://www.slideshare.net/MarkusMichalewicz/oracle-multitenant-meets-oracle-rac-ioug-2014-version
Appendix A
Creating the "GRID" disk group to place the Oracle Clusterware files and the ASM files
Create "GRID" Disk Group – Generic Cluster
Use a "quorum" failure group whenever possible.
Create "GRID" Disk Group – Extended Cluster
• More information: http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf
• Use logical names illustrating the disk destination.
• Use a quorum for ALL (not only GRID) disk groups used in an Extended Cluster.
• Use an NFS destination for the quorum Voting Disk.
Move Clusterware Files
Replace the Voting Disk location:
[GRID]> crsctl query css votedisk
##  STATE    File Universal Id                 File Name  Disk group
--  -----    -----------------                 ---------  ----------
 1. ONLINE   8bec21793ee84fd3bfc6831746bf60b4  (/dev/sde) [GIMR]
Located 1 voting disk(s).
[GRID]> crsctl replace votedisk +GRID
Successful addition of voting disk 7a205a2588d44f1db~10fc91ecd334.
Successful addition of voting disk 8c05b220cfcc4f6�f5752b6763a18ac.
Successful addition of voting disk 223006a9c28e4fd5bf3b58a465fcb66a.
Successful deletion of voting disk 8bec21793ee84fd3bfc6831746bf60b4.
Successfully replaced voting disk group with +GRID.
CRS-4266: Voting file(s) successfully replaced
[GRID]> crsctl query css votedisk
##  STATE    File Universal Id                 File Name  Disk group
--  -----    -----------------                 ---------  ----------
 1. ONLINE   7a205a2588d44f1db~10fc91ecd334    (/dev/sdd) [GRID]
 2. ONLINE   8c05b220cfcc4f6�f5752b6763a18ac   (/dev/sdb) [GRID]
 3. ONLINE   223006a9c28e4fd5bf3b58a465fcb66a  (/dev/sdc) [GRID]
Located 3 voting disk(s).

Add an OCR location (as root):
[GRID]> whoami
root
[GRID]> ocrconfig -add +GRID
[GRID]> ocrcheck
Status of Oracle Cluster Registry is as follows:
  Version                  : 4
  Total space (kbytes)     : 409568
  Used space (kbytes)      : 2984
  Available space (kbytes) : 406584
  ID                       : 759001629
  Device/File Name         : +GIMR
    Device/File integrity check succeeded
  Device/File Name         : +GRID
    Device/File integrity check succeeded
  Device/File not configured
  ...
  Cluster registry integrity check succeeded
  Logical corruption check succeeded

Use "ocrconfig -delete +GIMR" if you want to "replace" and maintain a single OCR location.
Move ASM SPFILE – See Also MOS Note 1638177.1
The default ASM SPFILE location is the first disk group created (here: GIMR).

[GRID]> export ORACLE_SID=+ASM1
[GRID]> sqlplus / as sysasm
…
SQL> show parameter spfile
NAME    TYPE    VALUE
------- ------- ---------------------------------------------------------
spfile  string  +GIMR/cup-cluster/ASMPARAMETERFILE/registry.253.857666347

# Change the location:
SQL> create pfile='/tmp/ASM.pfile' from spfile;
File created.
SQL> create spfile='+GRID' from pfile='/tmp/ASM.pfile';
File created.

# NOTE: the running instance still shows the old location until restarted:
SQL> show parameter spfile
NAME    TYPE    VALUE
------- ------- ---------------------------------------------------------
spfile  string  +GIMR/cup-cluster/ASMPARAMETERFILE/registry.253.857666347

Use "gpnptool get" and filter for "ASMPARAMETERFILE" to see the updated ASM SPFILE location in the GPnP profile prior to restarting.

Perform a rolling ASM instance restart, facilitated by Flex ASM – 12c DB instances remain running:
[GRID]> srvctl status asm
ASM is running on argentina,brazil,germany
[GRID]> srvctl stop asm -n germany -f
[GRID]> srvctl status asm -n germany
ASM is not running on germany
[GRID]> srvctl start asm -n germany
[GRID]> srvctl status asm -n germany
ASM is running on germany
[GRID]> crsctl stat res ora.mgmtdb
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on argentina
Roll the restart through the remaining nodes of the cluster.
Move ASM Password File
The default ASM shared password file location is the same as for the SPFILE (here: +GIMR). Moving it is an online operation, and srvctl path-checks the new value.

[GRID]> srvctl config asm
ASM home: <CRS home>
Password file: +GIMR/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[GRID]> srvctl modify asm -pwfile +GRID/orapwASM
[GRID]> srvctl config asm
ASM home: <CRS home>
Password file: +GRID/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

Note the path checking:
[GRID]> srvctl modify asm -pwfile GRID
[GRID]> srvctl config asm
ASM home: <CRS home>
Password file: GRID
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[GRID]> srvctl modify asm -pwfile +GRID
PRKO-3270 : The specified password file +GRID does not conform to an ASM path syntax
Use the correct ASM path syntax!
Appendix B
Creating public and private (DHCP-based) networks including SCAN and SCAN Listeners
Add Public Network – DHCP
Step 1: Add the network.
[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.0.0
[GRID]> oifcfg setif -global "*"/10.2.2.0:public
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.
[GRID]> su
Password:
[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype dhcp
[GRID]> exit

Result:
[GRID]> srvctl config network -k 2
Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name             Target   State    Server     State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
…
ora.net2.network
                 OFFLINE  OFFLINE  argentina  STABLE
                 OFFLINE  OFFLINE  brazil     STABLE
                 OFFLINE  OFFLINE  germany    STABLE
…
Add Public Network – DHCP
Step 2: Add SCAN / SCAN_LISTENER to the new network (as required).
[GRID]> su
Password:
[GRID]> srvctl update gns -advertise MyScan -address 10.2.2.20
# A SCAN name is needed; a DHCP network requires dynamic VIP resolution via GNS.
[GRID]> srvctl modify gns -verify MyScan
The name "MyScan" is advertised through GNS.
[GRID]> srvctl add scan -k 2
PRKO-2082 : Missing mandatory option -scanname
[GRID]> su
Password:
[GRID]> srvctl add scan -k 2 -scanname MyScan
[GRID]> exit
[GRID]> srvctl add scan_listener -k 2

Result:
[GRID]> srvctl config scan -k 2
SCAN name: MyScan.cupgnsdom.localdomain, Network: 2
Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
(repeated for each of the three SCAN VIPs)
[GRID]> srvctl config scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
(repeated for LISTENER_SCAN2_NET2 and LISTENER_SCAN3_NET2)
oifcfg commands Result (ifconfig -a on HUB)
57
Add Private Network – DHCP
[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.0.0
eth3 172.149.0.0

[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

[GRID]> oifcfg setif -global "*"/172.149.0.0:cluster_interconnect,asm

[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
* 172.149.0.0 global cluster_interconnect,asm
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.
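The iflist/getif comparison above can be cross-checked mechanically. A minimal sketch, assuming the embedded sample text stands in for live `oifcfg iflist` / `oifcfg getif` output, that flags OS interfaces not yet globally classified:

```shell
# Sketch: compare oifcfg iflist (what the OS sees) against oifcfg getif
# (what Clusterware has classified globally). Sample output is embedded;
# on a real cluster, capture the live command output instead.
iflist="eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth3 172.149.0.0"

getif="eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm"

# An interface is unclassified if neither its name nor a wildcard
# entry for its subnet appears in the getif output.
unclassified=$(echo "$iflist" | while read ifname subnet; do
  echo "$getif" | grep -q -e "^$ifname " -e "^\* $subnet " \
    || echo "$ifname $subnet"
done)
echo "unclassified interfaces:"
echo "$unclassified"
```

With the sample above, eth1 and eth3 are reported as unclassified, matching the subnets the slide then assigns with `oifcfg setif`.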
BEFORE
eth3      Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:172.149.2.7  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:52 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:20974 (20.4 KiB)  TX bytes:4230 (4.1 KiB)

AFTER
eth3      Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:172.149.2.7  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1161 errors:0 dropped:0 overruns:0 frame:0
          TX packets:864 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:720040 (703.1 KiB)  TX bytes:500289 (488.5 KiB)
eth3:1    Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:169.254.245.67  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
HAIPs will only be used for load balancing once at least the DB / ASM instances, if not the node, are restarted. They are considered for failover immediately.
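The `eth3:1` alias in the AFTER output is the HAIP; all HAIPs live in the link-local 169.254.0.0/16 range. A minimal sketch that extracts them, with the embedded sample standing in for real `ifconfig -a` output:

```shell
# Sketch: list HAIP (link-local 169.254.0.0/16) addresses from
# ifconfig-style output. The sample below stands in for `ifconfig -a`.
sample="eth3    inet addr:172.149.2.7  Bcast:172.149.15.255
eth3:1  inet addr:169.254.245.67  Bcast:169.254.255.255"

haips=$(echo "$sample" | awk '/inet addr:169\.254\./ {
  sub(/.*inet addr:/, "");  # drop everything up to the address
  print $1                  # first field is now the IP itself
}')
echo "HAIP addresses: $haips"
```

Only the 169.254.x.x alias is reported; the static 172.149.x.x interconnect address is filtered out.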
ifconfig -a on HUB – excerpt ifconfig -a on Leaf – excerpt
58
Side note: Leaf Nodes don’t host HAIPs!
eth2      Link encap:Ethernet  HWaddr 08:00:27:AD:DC:FD
          inet addr:192.168.7.11  Bcast:192.168.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fead:dcfd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9303 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6112 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8344479 (7.9 MiB)  TX bytes:2400797 (2.2 MiB)
eth2:1    Link encap:Ethernet  HWaddr 08:00:27:AD:DC:FD
          inet addr:169.254.190.250  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth3      Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:172.149.2.5  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4729 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5195 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1555796 (1.4 MiB)  TX bytes:2128607 (2.0 MiB)
eth3:1    Link encap:Ethernet  HWaddr 08:00:27:1E:2B:FE
          inet addr:169.254.6.142  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth2      Link encap:Ethernet  HWaddr 08:00:27:CC:98:C3
          inet addr:192.168.7.15  Bcast:192.168.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fecc:98c3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7218 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11354 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2644101 (2.5 MiB)  TX bytes:13979129 (13.3 MiB)
eth3      Link encap:Ethernet  HWaddr 08:00:27:06:D5:93
          inet addr:172.149.2.6  Bcast:172.149.15.255  Mask:255.255.240.0
          inet6 addr: fe80::a00:27ff:fe06:d593/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6074 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5591 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2262521 (2.1 MiB)  TX bytes:1680094 (1.6 MiB)
HAIPs on the interconnect are only used by ASM / DB instances. Leaf Nodes do not host those instances and hence do not host HAIPs. CSSD (the node management daemon) uses a different redundancy approach.
Step 1: Add network Result
59
Add Public Network – STATIC
[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.128.0
eth3 172.149.0.0
eth3 169.254.0.0

# Assuming you have NO global public interface defined on subnet 10.2.2.0
[GRID]> oifcfg setif -global "*"/10.2.2.0:public
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 172.149.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.
[GRID]> su
Password:
[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype STATIC
[GRID]> srvctl config network -k 2
Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
…
ora.net2.network
               OFFLINE OFFLINE      argentina                STABLE
               OFFLINE OFFLINE      brazil                   STABLE
               OFFLINE OFFLINE      germany                  STABLE
…
Step 2: Add VIPs Result
60
Add Public Network – STATIC
[GRID]> srvctl add vip -node germany -address germany-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node argentina -address argentina-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node brazil -address brazil-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl config vip -n germany
VIP exists: network number 1, hosting node germany
VIP Name: germany-vip
VIP IPv4 Address: 10.1.1.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 2, hosting node germany
VIP Name: germany-vip2
VIP IPv4 Address: 10.2.2.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
[GRID]> srvctl start vip -n germany -k 2
[GRID]> srvctl start vip -n argentina -k 2
[GRID]> srvctl start vip -n brazil -k 2
[GRID]> srvctl status vip -n germany
VIP germany-vip is enabled
VIP germany-vip is running on node: germany
VIP germany-vip2 is enabled
VIP germany-vip2 is running on node: germany
[GRID]> srvctl status vip -n argentina
VIP argentina-vip is enabled
VIP argentina-vip is running on node: argentina
VIP argentina-vip2 is enabled
VIP argentina-vip2 is running on node: argentina
[GRID]> srvctl status vip -n brazil
VIP brazil-vip is enabled
VIP brazil-vip is running on node: brazil
VIP brazil-vip2 is enabled
VIP brazil-vip2 is running on node: brazil
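The per-node add/start commands above follow a fixed pattern, so they can be generated for review before anything touches the cluster. A dry-run sketch (node list, `-vip2` naming, and netmask taken from this example; nothing is executed):

```shell
# Dry-run sketch: print the srvctl commands for adding and starting a
# VIP on network 2 for each node. Review the output first, then run
# the add commands as root.
NETNUM=2
NETMASK=255.255.255.0
cmds=$(for node in germany argentina brazil; do
  echo "srvctl add vip -node $node -address ${node}-vip2/$NETMASK -netnum $NETNUM"
  echo "srvctl start vip -n $node -k $NETNUM"
done)
echo "$cmds"
```

Generating the commands first keeps the per-node naming convention (`<node>-vip2`) consistent and makes a three-node change reviewable as a single block.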
Step 3: Add SCAN / SCAN_LISTENER to the new network (as required) Result
61
Add Public Network – STATIC
# as root
[GRID]> srvctl add scan -scanname cupscan2 -k 2
[GRID]> exit
[GRID]> srvctl add scan_listener -k 2 -endpoints 1522
[GRID]> srvctl status scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is not running
[GRID]> srvctl start scan_listener -k 2
[GRID]> srvctl status scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is running on node brazil
[GRID]> srvctl status scan -k 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node brazil
Appendix C Automatic Diagnostic Repository (ADR) support for Oracle Grid Infrastructure
62
• The ADR is a file-based repository for diagnostic data such as traces, dumps, the alert log, health monitor reports, and more.
• ADR helps prevent, detect, diagnose, and resolve problems.
• ADR comes with its own command line tool (adrci) for easy access to, and management of, diagnostic information for Oracle GI + DB.
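Conceptually, each ADR home is just a leaf directory under ADR_base/diag, and `adrci> show homes` enumerates them. A mock sketch of that layout (all paths below are invented for illustration, not taken from a real installation):

```shell
# Sketch: mimic what `adrci> show homes` lists by building a mock
# ADR_base/diag tree and enumerating its leaf directories.
# All paths below are invented for illustration.
base=$(mktemp -d)
mkdir -p "$base/diag/asm/+asm/+ASM1" \
         "$base/diag/crs/germany/crs" \
         "$base/diag/tnslsnr/germany/listener"

# Each deepest directory corresponds to one ADR home.
homes=$(cd "$base" && find diag -mindepth 3 -type d | sort)
echo "ADR Homes:"
echo "$homes"
rm -rf "$base"
```

The product directories (asm, crs, tnslsnr, …) sit directly under diag, with one home per instance or listener beneath them; this mirrors the `show homes` output shown on the next slide.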
63
Automatic Diagnostic Repository (ADR) Convenience
ADR_base
└── diag
    ├── asm
    ├── rdbms
    ├── tnslsnr
    ├── clients
    ├── crs
    └── (others)
adrci adrci incident management
64
Some Management Examples
[GRID]> adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Sep 18 11:35:31 2014
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u01/app/grid"
adrci> show homes
ADR Homes:
diag/rdbms/_mgmtdb/-MGMTDB
diag/tnslsnr/germany/asmnet1lsnr_asm
diag/tnslsnr/germany/listener_scan1
diag/tnslsnr/germany/listener
diag/tnslsnr/germany/mgmtlsnr
diag/asm/+asm/+ASM1
diag/crs/germany/crs
diag/clients/user_grid/host_2998292599_82
diag/clients/user_oracle/host_2998292599_82
diag/clients/user_root/host_2998292599_82
[GRID]> adrci
ADR base = "/u01/app/grid"
…
adrci> show incident;
ADR Home = /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB:
*************************************************************************
INCIDENT_ID  PROBLEM_KEY                                          CREATE_TIME
-----------  ---------------------------------------------------  ------------------------------
12073        ORA 700 [kskvmstatact: excessive swapping observed]  2014-09-08 17:44:56.580000 -07:00
36081        ORA 700 [kskvmstatact: excessive swapping observed]  2014-09-14 20:11:17.388000 -07:00
40881        ORA 700 [kskvmstatact: excessive swapping observed]  2014-09-16 15:30:18.319000 -07:00
…
adrci> set home diag/rdbms/_mgmtdb/-MGMTDB
adrci> ips create package incident 12073;
Created package 1 based on incident id 12073, correlation level typical
adrci> ips generate package 1 in /tmp
Generated package 1 in file /tmp/ORA700ksk_20140918110411_COM_1.zip, mode complete
[GRID]> ls -lart /tmp
-rw-r--r--. 1 grid oinstall 811806 Sep 18 11:05 ORA700ksk_20140918110411_COM_1.zip
Binary / Log per Node     Space Requirement
Grid Infra. (GI) Home     ~6.6 GB
RAC DB Home               ~5.5 GB
TFA Repository            10 GB
GI Daemon Traces          ~2.6 GB
ASM Traces                ~9 GB
DB Traces                 1.5 GB per DB per month
Listener Traces           60 MB per node per month

Total over 3 months:
• For 2 RAC DBs:   ~43 GB
• For 100 RAC DBs: ~483 GB
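The 3-month totals can be sanity-checked against the per-component figures above. A sketch in awk (per node; listener traces taken as ~0.06 GB per month):

```shell
# Sanity check for the 3-month per-node totals above.
# Fixed: GI home 6.6 + DB home 5.5 + TFA 10 + GI daemon traces 2.6
#        + ASM traces 9 (all GB).
# Variable: DB traces 1.5 GB per DB per month, listener ~0.06 GB/month.
total() {
  awk -v dbs="$1" 'BEGIN {
    fixed = 6.6 + 5.5 + 10 + 2.6 + 9
    printf "%.1f\n", fixed + 1.5 * dbs * 3 + 0.06 * 3
  }'
}
echo "2 RAC DBs:   $(total 2) GB"    # matches the ~43 GB figure
echo "100 RAC DBs: $(total 100) GB"  # matches the ~483 GB figure
```

The DB trace term dominates at scale: at 100 databases it contributes 450 GB of the ~483 GB total.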
• Flex ASM vs. Standard ASM, Flex Cluster vs. Standard Cluster – does not make a difference for ADR!
65
Space Requirements, Exceptions, and Rules
[Slide diagram – exception components and logs listed: gnsd, ocssd, ocssdrim, havip, exportfs NFS helper, hanfs, ghc, ghs, mgmtdb, agent, APX, gns, mount, some OC4J logs, some GI home logs]
Appendix D Flex Cluster – add nodes as needed
66
Initial installation: HUB nodes only. Add Leafs later (addNode)
67
Recommendation: Install HUB Nodes, Add Leaf Nodes
68
Add "argentina" as a HUB Node – addNode Part 1
69
Add "argentina" as a HUB Node – addNode Part 2
70
Add Leaf Nodes – addNode in Short
Note: Leaf nodes do not require a virtual node name (VIP). Application VIPs for non-DB use cases need to be added manually later.
Normal, can be ignored.
Database installer suggestion Consider Use Case
71
Continue to use Leaf Nodes for Applications in 12.1.0.2
Useful if "spain" is likely to become a HUB node at some point in time.
DBCA Despite running Leaf Nodes
72
Continue to use Leaf Nodes for Applications in 12.1.0.2
[GRID]> olsnodes -s -t
germany   Active Unpinned
argentina Active Unpinned
brazil    Active Unpinned
italy     Active Unpinned
spain     Active Unpinned
Leaf Listener (OFFLINE/OFFLINE) Trace File Analyzer (TFA)
73
Some Examples of Resources running on Leaf Nodes
[grid@spain Desktop]$ . grid_profile
[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       argentina                STABLE
               ONLINE  ONLINE       brazil                   STABLE
               ONLINE  ONLINE       germany                  STABLE
…
ora.LISTENER.lsnr
               ONLINE  ONLINE       argentina                STABLE
               ONLINE  ONLINE       brazil                   STABLE
               ONLINE  ONLINE       germany                  STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      italy                    STABLE
               OFFLINE OFFLINE      spain                    STABLE
ora.net1.network
               ONLINE  ONLINE       argentina                STABLE
               ONLINE  ONLINE       brazil                   STABLE
               ONLINE  ONLINE       germany                  STABLE
[GRID]> ps -ef | grep grid_1
root 1431 1 0 14:12 ? 00:00:19 /u01/app/12.1.0/grid_1/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/RATFA.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/je-5.0.84.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/ojdbc6.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/commons-io-2.2.jar oracle.rat.tfa.TFAMain /u01/app/12.1.0/grid_1/tfa/spain/tfa_home