RAC ASM 1 clusterware - IBM · 2012-05-16 · Unofficial documentation (IBM Internal) 5/72 IBM...


Unofficial documentation (IBM Internal) 1/72 IBM Oracle Center (IOC)

Oracle RAC with ASM

Part 1

IBM Systems and Technology Group IBM Oracle Center

March 2012

[email protected]


I. Table of Contents

II. INFORMATION ......................................................... 3
    CHANGE HISTORY ...................................................... 3
III. NOTICE ............................................................. 4
IV. IBM POWER SYSTEM SETUP FOR ORACLE 11GR2 RAC INSTALLATION ............ 5
1 INTRODUCTION .......................................................... 5
2 POWER SYSTEM SETUP AND LOGICAL PARTITIONS CONFIGURATION ............... 6
    2.1 HMC VERSION ..................................................... 6
    2.2 THE SERVERS ..................................................... 6
    2.3 THE PHYSICAL ADAPTER RESOURCES .................................. 8
    2.4 VIRTUAL CDS AND VIRTUAL OPTICAL DRIVES ......................... 11
    2.5 GLOBAL SETUP ................................................... 15
    2.6 THE NODES PARTITION CONFIGURATION .............................. 15
3 AIX CONFIGURATION FOR ORACLE RAC ..................................... 18
    3.1 ADMIN, SCAN AND VIRTUAL ADDRESSES
    3.2 NETWORK TIME PROTOCOL (NTP) .................................... 23
    3.3 DOMAIN NAME SERVER (DNS) ....................................... 24
4 PREPARATION FOR RAC .................................................. 26
    4.1 USER'S CREATION FOR RAC ........................................ 26
    4.2 CREATE REQUIRED DIRECTORIES .................................... 29
    4.3 ASSIGNED CAPABILITIES TO USERS ................................. 29
5 PREPARE DISKS
    5.1 PREPARE DISKS FOR ASM DISKS GROUPS
6 USING CLUSTER VERIFICATION UTILITY ................................... 38
    6.1 ORACLE 11GR2 GRID INFRASTRUCTURE INSTALLATION .................. 40
    6.2 PREPARE THE ENVIRONMENT ........................................ 40
    6.3 INSTALL THE CLUSTERWARE ENVIRONMENT ............................ 42


II. INFORMATION

Authors
    François Martin [IBM] – IT Specialist
    Claire NOIRAULT – Senior Specialist

Email address
    [email protected]

Abstract
    This document is designed to help the reader implement Oracle 11gR2 Grid
    Infrastructure on IBM Power Systems running AIX 7.

Version
    1.0.0

Contacts
    IBM Oracle Center (Montpellier, France)

Change history

The demo described in this document was implemented in October 2010.

We applied the Oracle and IBM recommendations that we have defined for hundreds of customers, based on our experience with architecture, IBM Power Systems, AIX, PowerVM virtualization, and the Oracle clusterware and database components.

Version  Date        Editor           Reviewers  Editing description
1.0      03/31/2011  NOIRAULT Claire


III. Notice

This document is based on our experiences. It is not official IBM documentation. It will be updated regularly, and we are open to any additions or feedback from your own experiences!

This document is presented "As-Is" and IBM does not assume responsibility for the statements expressed herein. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of a specific Statement of General Direction. Any performance data contained herein was determined in a controlled environment; results obtained in other operating environments may therefore vary significantly. Some measurements quoted in this document may have been made on development-level systems, and there is no guarantee these measurements will be the same on generally available systems. Some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. The information contained in this paper resulted from:

• Oracle and IBM documentation
• Experience gained in the IBM Oracle Center

Information in this document concerning non-IBM products was obtained from the suppliers of these products, published announcement material or other publicly available sources. IBM has not tested these products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of these products. Your comments are important to us, and we thank those who have sent us feedback about how this document helped them in their implementation. We want our technical papers to be as helpful as possible.


IV. IBM Power System setup for Oracle 11gR2 RAC installation

1 Introduction

This part of the manual guides you through the configuration and infrastructure setup of the IBM Power System prior to the Oracle 11gR2 Grid Infrastructure installation. It describes a step-by-step scenario to prepare the environment for the AIX logical partitions. This setup must be completed before the Oracle installation.

Many parts relate to the AIX OS and PowerVM virtualization features and can be implemented for both RAC and non-RAC environments. Only the disk clustering and the node interconnect network setup are mandatory for a RAC environment.

The document explains the installation on a virtual environment, including VIO Servers and usage of PowerVM virtualization features. It can also be used for the setup of a physical configuration.

It contains details about the main concepts and the features of IBM PowerVM Virtualization which result in a flexible architecture for the Oracle implementation.

The system environment is defined for a typical configuration and can be customized depending on customer requirements. You can contact the authors of this document to discuss architecture design.

One of the key points is to show and explain how the PowerVM virtualization features can be combined with Oracle RAC to help you optimize resource usage.

PowerVM virtualization features have been integrated into Power Systems for many years; you will use some of the main features and tools in the following steps.


2 Power system setup and logical partitions configuration

The Oracle RDBMS software is certified against the AIX version, not the hardware, so the following configuration can be implemented on any IBM Power system, for both POWER6 and POWER7 processor-based systems.

Let’s introduce the environment given as example.

2.1 HMC version

A Hardware Management Console (HMC) is connected to the service processor of both systems in order to manage the server resources and the logical partitions (LPARs). Two separate HMCs could also have been used.

2.2 The servers

Two Power systems are configured.

The server Server-8233-E8B-SN1041B6P will host two nodes (LPM_Demo_01-118 and LPM_Demo_03-120), and the other server, Server-8233-E8B-SN1041B7P, will host the third node of the Oracle RAC cluster (LPM_Demo_02-119).


The example configuration uses POWER7 processor-based systems; a mixed configuration of POWER6 and POWER7 processor-based systems would also be possible.

{icc-118a:root}/ # lsattr -EHl proc0
attribute   value          description            user_settable
frequency   3550000000     Processor Speed        False
smt_enabled true           Processor SMT enabled  False
smt_threads 4              Processor SMT threads  False
state       enable         Processor state        False
type        PowerPC_POWER7 Processor type         False
{icc-118a:root}/ #

The Oracle RAC environment is fully virtualized, and two Virtual I/O Server partitions have been created and installed on each server (VIOS version 2.2) for high availability.

vios $ ioslevel
2.2.0.10-FP-24
vios $


2.3 The physical adapter resources

Ethernet and Fibre Channel adapters are assigned to the VIOS partitions, providing network access to the LAN and storage access to the SAN. All the node partitions are therefore fully virtualized and do not use any physical I/O resource.

2.3.1 Vios1a on Server-8233-E8B-SN1041B6P

2.3.2 Vios2a on Server-8233-E8B-SN1041B6P


2.3.3 Vios1b on Server-8233-E8B-SN1041B7P

2.3.4 Vios2b on Server-8233-E8B-SN1041B7P

There is no restriction on assigning physical resources to the RAC node partitions, mixing them with virtual adapters, and using redundant path mechanisms such as EtherChannel or heterogeneous multipathing (MPIO/SDDPCM, etc.).

Node LPARs are defined with virtual Ethernet adapters and are connected to the virtual LANs defined in the Power Hypervisor virtual switch.

Node LPARs get access to the SAN disk logical units (LUNs) through the VIOS, each virtual Fibre Channel client adapter having its own WWPN.

{icc-118a:root}/ # lsdev | grep Fibre
fcs0     Available C8-T1       Virtual Fibre Channel Client Adapter
fcs1     Available C9-T1       Virtual Fibre Channel Client Adapter
fcs2     Available 10-T1       Virtual Fibre Channel Client Adapter
fcs3     Available 11-T1       Virtual Fibre Channel Client Adapter
sfwcomm0 Available C9-T1-01-FF Fibre Channel Storage Framework Comm


sfwcomm1 Available 11-T1-01-FF Fibre Channel Storage Framework Comm
sfwcomm2 Available C8-T1-01-FF Fibre Channel Storage Framework Comm
sfwcomm3 Available 10-T1-01-FF Fibre Channel Storage Framework Comm
{icc-118a:root}/ #

Other LPARs can be hosted on the servers for other purposes and workload consolidation.

Node LPM-118:

VIOS    VIOS adapter ID  Client adapter ID  WWPN
VIOS1a  23               8                  c050760344280112
VIOS1a  25               10                 c050760344280116
VIOS2a  24               9                  c050760344280114
VIOS2a  26               11                 c050760344280118

Node LPM-120:

VIOS    VIOS adapter ID  Client adapter ID  WWPN
VIOS1a  55               8                  c05076034428011a
VIOS1a  57               10                 c05076034428011e
VIOS2a  56               9                  c05076034428011c
VIOS2a  58               11                 c050760344280120

Node LPM-119:

VIOS    VIOS adapter ID  Client adapter ID  WWPN
VIOS1b  39               8                  c05076034427014c
VIOS1b  41               10                 c050760344270150
VIOS2b  40               9                  c05076034427014e
VIOS2b  42               11                 c050760344270150


2.4 Virtual CDs and Virtual Optical Drives

You can create a virtual optical drive on the VIO Server and map it to your client partition (nodes icc-118 and icc-119) using the virtual SCSI protocol. A CD/DVD drive is then available on the client partition after running the cfgmgr command (the well-known device configuration command in AIX).

Virtual CDs are file-backed devices. They can be created blank, or burned as ISO images containing the files of a directory. These CDs are stored in the /var/vio/VMLibrary directory.

These virtual CDs can be loaded into the virtual optical drives and then used from the client partitions.

Blank CDs are usually defined as writable and can be loaded into one client drive at a time, for backup purposes.

In our example, one virtual ISO CD has been burned with the Oracle binaries and is available read-only in the virtual media repository of the VIOS. This is how the nodes will access the Oracle distribution for the installation, as well as the GPFS filesets.

See the following VIO Server command examples to understand the process and purpose:


2.4.1 Virtual Media Repository commands

You can create a virtual optical device and map it to one of your nodes with the following command on the VIOS:

$ mkvdev -fbo -vadapter vhost# -dev node#cd

Don't forget to delete it, when no longer needed, with:

$ rmdev -dev node#cd
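The per-node device naming above (node#cd) lends itself to scripting. A minimal sketch that only prints the mkvdev commands for a three-node cluster; the vhost numbers are an assumption, so check the real adapter mapping with lsmap -all on your VIOS before running anything:

```shell
# Print (not run) the mkvdev command for each node's virtual optical device.
# vhost0..vhost2 are placeholders for the real vhost adapters.
for n in 1 2 3; do
  vhost="vhost$((n - 1))"
  echo "mkvdev -fbo -vadapter $vhost -dev node${n}cd"
done
```

Reviewing the printed commands before executing them avoids mapping a device to the wrong vhost adapter.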

2.4.2 Command examples

$ mksp -f coderepossp hdisk22
coderepossp
$ mksp -fb fbpool -sp coderepossp -size 20G
fbpool
File system created successfully.
20888756 kilobytes total disk space.
New File System size is 41943040
$ mkrep -sp coderepossp -size 20G
Virtual Media Repository Created
Repository created within "VMLibrary" logical volume
$ lsrep
Size(mb) Free(mb) Parent Pool Parent Size Parent Free
   20396    20396 coderepossp      139904       98944
$ df -gt
Filesystem     GB blocks  Used  Free %Used Mounted on
/dev/hd4            0.50  0.16  0.34   32% /
/dev/hd2            5.50  4.57  0.93   84% /usr
/dev/hd9var         1.00  0.24  0.76   24% /var
/dev/hd3            3.50  0.01  3.49    1% /tmp
/dev/hd1           10.00  0.01  9.99    1% /home
/dev/hd11admin      0.50  0.00  0.50    1% /admin
/proc                  -     -     -     - /proc
/dev/hd10opt        3.50  1.12  2.38   33% /opt
/dev/livedump       0.50  0.00  0.50    1% /var/adm/ras/livedump
/dev/fbpool        20.00  0.08 19.92    1% /var/vio/storagepools/fbpool
/dev/VMLibrary     20.00  0.08 19.92    1% /var/vio/VMLibrary

2.4.3 Virtual optical drive commands

On the VIOS partition, use the following commands, given as examples, to create a virtual CD device and map it to the virtual client partition.

To create the virtual optical device:

$ mkvdev -fbo -vadapter vhost0 -dev node1cd
node1cd Available
$

To create and burn the ISO CD containing the files from the /home/padmin/… directory, run the following command as root (execute oem_setup_env first):

$ echo "mkisofs -R -o /var/vio/VMLibrary/OracleCDs.iso /home/padmin/WS_11gR2_Distrib" | oem_setup_env
$

To create a blank CD of 5 GB:

$ mkvopt -name aixmksysb -size 5G
$


To list the available virtual drives and display the loaded CDs:

$ lsvopt
VTD     Media       Size(mb)
node1cd No Media    n/a
node2cd scripts.iso 1
node3cd scripts.iso 1

To list the virtual CDs:

$ lsrep
Size(mb) Free(mb) Parent Pool Parent Size Parent Free
   20270     2289 coderepossp      139904      119552

Name          File Size Optical  Access
OracleCDs.iso     12860 None     ro
aixmksysb          5120 None     rw
scripts.iso           1 node2cd  ro
scripts.iso           1 node3cd  ro

To load a virtual cd in the virtual drive:

$ loadopt -disk aixmksysb -vtd node1cd
$ lsrep
Size(mb) Free(mb) Parent Pool Parent Size Parent Free
   20270     2289 coderepossp      139904      119552

Name          File Size Optical  Access
OracleCDs.iso     12860 None     rw
aixmksysb          5120 node1cd  rw
scripts.iso           1 node2cd  ro
scripts.iso           1 node3cd  ro

To unload a virtual drive:

$ unloadopt -vtd node1cd
$ lsrep
Size(mb) Free(mb) Parent Pool Parent Size Parent Free
   20269     7497 coderepossp      139904      119552

Name          File Size Optical  Access
OracleCDs.iso     12771 None     rw
scripts.iso           1 node2cd  ro
scripts.iso           1 node3cd  ro

To delete a virtual cd:

$ rmvopt -name aixopt1
$

To delete a virtual optical device:

$ rmdev -dev node1cd
$


Another option, for non-simultaneous sharing, is a 1:N drive: a single virtual optical drive on VIOS1/VIOS2 can be mapped to any partition (node n, node n+1, node n+2), but only one vSCSI mapping can be active at a time.

2.4.4 Example of an AIX system backup using the virtual CD and VIOS technique

On the AIX client, run the cfgmgr command to configure the virtual drive after the VIO Server configuration.

$ cfgmgr

Create a system backup image of the rootvg with the mksysb command, in UDF format, to your virtual DVD-RAM:

$ smitty sysbackup
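The smitty panel drives the mksysb command underneath. A dry-run sketch of the non-interactive equivalent; the device name /dev/cd1 is an assumption (it is whatever cfgmgr created for the virtual drive on your node, see lsdev -Cc cdrom), and the mksysb flags should be verified against your AIX level:

```shell
# Print (not run) the backup command for the virtual DVD-RAM drive.
# /dev/cd1 is a placeholder for the device cfgmgr configured.
vdrive=/dev/cd1
echo "mksysb -i $vdrive"   # -i regenerates /image.data before the backup
```

Running the printed command on the node writes the rootvg image to the blank virtual CD loaded in that drive.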


2.5 Global setup

In this example, our cluster is made of three RAC nodes.

Node name  Logical partition name  IP address
icc-118    LPM_Demo_01-118         9.52.156.118
icc-119    LPM_Demo_02-119         9.52.156.119
icc-120    LPM_Demo_03-120         9.52.156.120

2.6 The nodes partition configuration

The node partitions can be defined as micro-partitions with uncapped parameters; in the configuration captured below, however, they run as dedicated-processor, capped partitions. In our example, the nodes icc-118 and icc-119 are identical.

{icc-118a:root}/ # lparstat -i
Node Name                                  : icc-118a
Partition Name                             : LPM_Demo_01-118
Partition Number                           : 31
Type                                       : Dedicated-SMT-4
Mode                                       : Capped
Entitled Capacity                          : 2.00
Partition Group-ID                         : 32799
Shared Pool ID                             : -
Online Virtual CPUs                        : 2
Maximum Virtual CPUs                       : 4
Minimum Virtual CPUs                       : 1
Online Memory                              : 6144 MB
Maximum Memory                             : 8192 MB
Minimum Memory                             : 1024 MB
Variable Capacity Weight                   : -
Minimum Capacity                           : 1.00
Maximum Capacity                           : 4.00
Capacity Increment                         : 1.00
Maximum Physical CPUs in system            : 32
Active Physical CPUs in system             : 32


Active CPUs in Pool                        : -
Shared Physical CPUs in system             : 0
Maximum Capacity of Pool                   : 0
Entitled Capacity of Pool                  : 0
Unallocated Capacity                       : -
Physical CPU Percentage                    : 100.00%
Unallocated Weight                         : -
Memory Mode                                : Dedicated
Total I/O Memory Entitlement               : -
Variable Memory Capacity Weight            : -
Memory Pool ID                             : -
Physical Memory in the Pool                : -
Hypervisor Page Size                       : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement         : -
Memory Group ID of LPAR                    : -
Desired Virtual CPUs                       : 2
Desired Memory                             : 6144 MB
Desired Variable Capacity Weight           : -
Desired Capacity                           : 2.00
Target Memory Expansion Factor             : -
Target Memory Expansion Size               : -
Power Saving Mode                          : Disabled
{icc-118a:root}/ #

{icc119a:root}/ # lparstat -i
Node Name                                  : icc-119a
Partition Name                             : LPM_Demo_02-119
Partition Number                           : 32
Type                                       : Dedicated-SMT-4
Mode                                       : Capped
Entitled Capacity                          : 2.00
Partition Group-ID                         : 32800
Shared Pool ID                             : -
Online Virtual CPUs                        : 2
Maximum Virtual CPUs                       : 4
Minimum Virtual CPUs                       : 1
Online Memory                              : 6144 MB
Maximum Memory                             : 8192 MB
Minimum Memory                             : 1024 MB
Variable Capacity Weight                   : -
Minimum Capacity                           : 1.00
Maximum Capacity                           : 4.00
Capacity Increment                         : 1.00
Maximum Physical CPUs in system            : 32
Active Physical CPUs in system             : 32
Active CPUs in Pool                        : -
Shared Physical CPUs in system             : 0
Maximum Capacity of Pool                   : 0
Entitled Capacity of Pool                  : 0
Unallocated Capacity                       : -
Physical CPU Percentage                    : 100.00%
Unallocated Weight                         : -
Memory Mode                                : Dedicated
Total I/O Memory Entitlement               : -
Variable Memory Capacity Weight            : -
Memory Pool ID                             : -
Physical Memory in the Pool                : -
Hypervisor Page Size                       : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement         : -
Memory Group ID of LPAR                    : -
Desired Virtual CPUs                       : 2
Desired Memory                             : 6144 MB


Desired Variable Capacity Weight           : -
Desired Capacity                           : 2.00
Target Memory Expansion Factor             : -
Target Memory Expansion Size               : -
Power Saving Mode                          : Disabled
{icc119a:root}/ #

{icc120a:root}/ # lparstat -i
Node Name                                  : icc-120a
Partition Name                             : LPM_Demo_03-120
Partition Number                           : 33
Type                                       : Dedicated-SMT-4
Mode                                       : Capped
Entitled Capacity                          : 2.00
Partition Group-ID                         : 32801
Shared Pool ID                             : -
Online Virtual CPUs                        : 2
Maximum Virtual CPUs                       : 4
Minimum Virtual CPUs                       : 1
Online Memory                              : 6144 MB
Maximum Memory                             : 8192 MB
Minimum Memory                             : 1024 MB
Variable Capacity Weight                   : -
Minimum Capacity                           : 1.00
Maximum Capacity                           : 4.00
Capacity Increment                         : 1.00
Maximum Physical CPUs in system            : 32
Active Physical CPUs in system             : 32
Active CPUs in Pool                        : -
Shared Physical CPUs in system             : 0
Maximum Capacity of Pool                   : 0
Entitled Capacity of Pool                  : 0
Unallocated Capacity                       : -
Physical CPU Percentage                    : 100.00%
Unallocated Weight                         : -
Memory Mode                                : Dedicated
Total I/O Memory Entitlement               : -
Variable Memory Capacity Weight            : -
Memory Pool ID                             : -
Physical Memory in the Pool                : -
Hypervisor Page Size                       : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement         : -
Memory Group ID of LPAR                    : -
Desired Virtual CPUs                       : 2
Desired Memory                             : 6144 MB
Desired Variable Capacity Weight           : -
Desired Capacity                           : 2.00
Target Memory Expansion Factor             : -
Target Memory Expansion Size               : -
Power Saving Mode                          : Disabled
{icc120a:root}/ #


3 AIX configuration for Oracle RAC

The following steps guide you through the prerequisite configuration of your node partitions for the Oracle RAC/CRS configuration.

3.1 Admin, SCAN and virtual addresses

An administration network has been previously defined for your nodes (e.g. the 1.1.1.x IP addresses). You now have to configure an additional network for the RAC interconnect and add virtual IP addresses (VIPs) on the administration network.


Host name                    icc-118        icc-119        icc-120

Public address (en0)         9.52.156.118   9.52.156.119   9.52.156.120

Interconnect address (en1)   192.168.2.118  192.168.2.119  192.168.2.120
                             icc-118X       icc-119X       icc-120X

Admin address (en2)          1.1.1.118      1.1.1.119      1.1.1.120
                             icc-118a       icc-119a       icc-120a

VIP address (en2,            1.1.1.218      1.1.1.219      1.1.1.220
established by RAC)          icc-118a-vip   icc-119a-vip   icc-120a-vip

SCAN address (cluster DNS)   1.1.1.18       1.1.1.19       1.1.1.20
                             iccioccluster  iccioccluster  iccioccluster

All en0, en1 and en2 interfaces are defined using the virtual adapters ent0, ent1 and ent2.
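The address plan above is regular enough to be generated mechanically; a sketch that prints the per-node /etc/hosts entries (the host names and the 9.52.156.x/1.1.1.x/192.168.2.x plan come from the table, the loop itself is ours):

```shell
# Print the /etc/hosts entries for the three nodes from the address plan.
# Review the output before appending it to /etc/hosts on each node.
for n in 118 119 120; do
  printf '%s\t%s\n' "9.52.156.$n"        "icc-$n"
  printf '%s\t%s\n' "1.1.1.$n"           "icc-${n}a"
  printf '%s\t%s\n' "192.168.2.$n"       "icc-${n}X"
  printf '%s\t%s\n' "1.1.1.$((n + 100))" "icc-${n}a-vip"   # VIP = admin + 100
done
```

Generating the entries once and distributing the same block to every node avoids the copy/paste drift visible in hand-edited hosts files.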

Network    VIOS adapter                  icc-118                icc-119                icc-120

ethernet2  VIOS1a_B6P-93 ent2 (slot 2)   ent0 (slot 12) VLAN 1                         ent0 (slot 12) VLAN 1
ethernet2  VIOS1a_B6P-93 ent2 (slot 2)   ent2 (slot 14) VLAN 1                         ent0 (slot 14) VLAN 1
ethernet3  VIOS1a_B6P-93 ent3 (slot 3)   ent1 (slot 13) VLAN 1                         ent1 (slot 13) VLAN 1
ethernet2  VIOS1b_B7P-95 ent2 (slot 2)                          ent0 (slot 12) VLAN 1
ethernet2  VIOS1b_B7P-95 ent2 (slot 2)                          ent2 (slot 14) VLAN 1
ethernet3  VIOS1b_B7P-95 ent3 (slot 3)                          ent1 (slot 13) VLAN 1

1 Use the smitty chinet command from the AIX CLI to configure network 9.52.156.n on the en0 interface, 192.168.2.n on en1 and 1.1.1.n on en2.

2 Verify your configuration with ifconfig.

In our grid configuration, the interface en0 will not be used: en2 will be the public network and en1 will be the interconnect network.
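The smitty chinet panel drives the chdev command underneath. A dry-run sketch of the non-interactive equivalent for one node; the node number, netmask and flags are assumptions to be checked against your AIX level and address plan:

```shell
# Print (not run) the chdev commands that configure the interconnect (en1)
# and admin/public (en2) interfaces for one node of the plan above.
node=118   # assumption: configuring icc-118
echo "chdev -l en1 -a netaddr=192.168.2.$node -a netmask=255.255.255.0 -a state=up"
echo "chdev -l en2 -a netaddr=1.1.1.$node -a netmask=255.255.255.0 -a state=up"
```

Printing first makes it easy to diff the intended settings against the table in section 3.1 before touching a live interface.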


{icc-118a:root}/ # ifconfig -a
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 9.52.156.118 netmask 0xffffff00 broadcast 9.52.156.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 192.168.2.118 netmask 0xffffff00 broadcast 192.168.2.255
         tcp_sendspace 65536 tcp_recvspace 65536 rfc1323 1
en2: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 1.1.1.118 netmask 0xffffff00 broadcast 1.1.1.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1%1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{icc-118a:root}/ #

{icc119a:root}/ # ifconfig -a
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 9.52.156.119 netmask 0xffffff00 broadcast 9.52.156.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 192.168.2.119 netmask 0xffffff00 broadcast 192.168.2.255
         tcp_sendspace 65536 tcp_recvspace 65536 rfc1323 1
en2: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 1.1.1.119 netmask 0xffffff00 broadcast 1.1.1.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1%1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{icc119a:root}/ #

{icc120a:root}/ # ifconfig -a
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 9.52.156.120 netmask 0xffffff00 broadcast 9.52.156.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en1: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 192.168.2.120 netmask 0xffffff00 broadcast 192.168.2.255
         tcp_sendspace 65536 tcp_recvspace 65536 rfc1323 1


en2: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 1.1.1.120 netmask 0xffffff00 broadcast 1.1.1.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
        inet6 ::1%1/0
         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
{icc120a:root}/ #

2 Edit the /etc/hosts file on each node and add the corresponding addresses of the other nodes of your cluster.

{icc-118a:root}/ # cat /etc/hosts
127.0.0.1      loopback localhost  # loopback (lo0) name/address
::1            loopback localhost  # IPv6 loopback (lo0)
#
9.52.156.118   icc-118
1.1.1.118      icc-118a
192.168.2.118  icc-118X
1.1.1.218      icc-118a-vip
#1.1.1.18      iccioccluster
#
9.52.156.119   icc-119
1.1.1.119      icc-119a
192.168.2.119  icc-119X
1.1.1.219      icc-119a-vip
#1.1.1.19      iccioccluster
#
9.52.156.120   icc-120
1.1.1.120      icc-120a
192.168.2.120  icc-120X
1.1.1.220      icc-120a-vip
#1.1.1.18      iccioccluster
{icc118a:root}/

{icc119a:root}/ # cat /etc/hosts
127.0.0.1       loopback localhost      # loopback (lo0) name/address
::1             loopback localhost      # IPv6 loopback (lo0)
#
9.52.156.118    icc-118
1.1.1.118       icc-118a
192.168.2.118   icc-118X
1.1.1.218       icc-118a-vip
#1.1.1.18       iccioccluster
#
9.52.156.119    icc-119
1.1.1.119       icc-119a
192.168.2.119   icc-119X
1.1.1.219       icc-119a-vip
#1.1.1.19       iccioccluster
#
9.52.156.120    icc-120
1.1.1.120       icc-120a
192.168.2.120   icc-120X
#1.1.1.220      icc-120a-vip
# 1.1.1.19      iccioccluster
{icc119a:root}/ #


{icc120a:root}/ # cat /etc/hosts
127.0.0.1       loopback localhost      # loopback (lo0) name/address
::1             loopback localhost      # IPv6 loopback (lo0)
#
9.52.156.118    icc-118
1.1.1.118       icc-118a
192.168.2.118   icc-118X
1.1.1.218       icc-118a-vip
#1.1.1.18       iccioccluster
#
9.52.156.119    icc-119
1.1.1.119       icc-119a
192.168.2.119   icc-119X
1.1.1.219       icc-119a-vip
#1.1.1.19       iccioccluster
#
9.52.156.120    icc-120
1.1.1.120       icc-120a
192.168.2.120   icc-120X
1.1.1.220       icc-120a-vip
#
#1.1.1.20       iccioccluster
{icc120a:root}/ #
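The /etc/hosts files must stay consistent across the three nodes. One way to compare them is to checksum the active entries on each node; the `hosts_fingerprint` helper below is our own illustrative sketch, not an AIX tool.

```shell
# Illustrative helper (our own, not a standard command): checksum the
# non-comment address/name pairs of a hosts file. Running the same command
# on every node should print an identical value when the files agree.
hosts_fingerprint() {
  grep -v '^#' "$1" | awk 'NF >= 2 { print $1, $2 }' | sort | cksum
}

hosts_fingerprint /etc/hosts
```

Run it on icc-118a, icc-119a and icc-120a and compare the printed checksums.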


3.2 Network Time Protocol (NTP)

NTP is a protocol designed to synchronize the clocks of computers over a network. It is formalized by RFCs released by the IETF.

The ntpd daemon maintains the system time of day in synchronism with Internet standard time servers by exchanging messages with one or more configured servers at designated poll intervals.

Under ordinary conditions, ntpd adjusts the clock in small steps so that the timescale is effectively continuous and without discontinuities. Under conditions of extreme network congestion, the roundtrip delay jitter can exceed three seconds and the synchronization distance, which is equal to one-half the roundtrip delay plus error budget terms, can become very large. The ntpd algorithms discard sample offsets exceeding 128 ms, unless the interval during which no sample offset is less than 128 ms exceeds 900 s. The first sample after that, no matter what the offset, steps the clock to the indicated time. In practice this reduces the false alarm rate, where the clock would be stepped in error, to a vanishingly low incidence.

As a result of this behavior, once the clock has been set, it very rarely strays by more than 128 ms, even under extreme cases of network path congestion. Sometimes, in particular when ntpd is first started, the error might exceed 128 ms. With RAC, this behavior is unacceptable. If the -x option is included on the command line, the clock will never be stepped and only slew corrections will be used.

The -x option sets the threshold to 600 s, which is well within the accuracy window to set the clock manually.

To configure the NTP daemon, proceed as follows:

1 Edit the file /etc/ntp.conf and add the following lines

# Broadcast client, no authentication.
# broadcastclient
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
authenticate no
OPTIONS="-x"

2 Run /usr/sbin/xntpd -x to start the NTP daemon.

Don't forget the -x option.

3 To start the NTP daemon automatically at reboot with -x (slewing option), edit the file /etc/rc.tcpip, uncomment the following line, and add the -x option.

# Start up Network Time Protocol (NTP) daemon
start /usr/sbin/xntpd "$src_running" "-x"

You can then verify the daemon with lssrc -a | grep ntp.

To start the NTP daemon manually with -x (slewing option), enter:

{iccxx:root}/ # startsrc -s xntpd -a "-x"
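To confirm the daemon really got the -x flag on every node, you can inspect the process table. The `check_slew` helper below is our own illustrative sketch (not an AIX tool); it only filters the ps output.

```shell
# Illustrative helper (our own): succeeds when the process list piped to it
# contains an xntpd process that was started with the -x (slew) option.
check_slew() {
  grep '[x]ntpd' | grep -q -- '-x'
}

ps -ef | check_slew && echo "xntpd is running with -x" \
                    || echo "WARNING: xntpd is not running with -x"
```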

Once the cluster is installed, you can ignore the following message in the clusterware alert logs of the cluster.

[ctssd(2687544)]CRS-2409:The clock on host rac1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.


3.3 Domain Name Server (DNS)

A DNS server has been set up on a partition of the administration network. Its purpose is to resolve the cluster name.

1 Edit /etc/resolv.conf file and add the following lines.

nameserver 9.0.9.1
domain usca.ibm.com

2 Edit the /etc/netsvc.conf file and add the following line to specify the name resolution order.

Names are first resolved from the local /etc/hosts file; if not found there, the DNS is queried.

hosts=local,bind

If you need more information to install DNS, see the document

3.3.1 SCAN – Single Client Access Name

The Single Client Access Name (SCAN) is a single point of access for all applications connecting to an Oracle 11gR2 RAC cluster, and allows consistent connections without the need to know how many nodes are in the cluster. Virtual IPs are still used internally and can still be used for connections, but the initial connection to the cluster is made via the SCAN. Connections to any database in the cluster will be made via the SCAN.

Unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and associated with multiple IP addresses, not just one address.

The SCAN works by being able to resolve to multiple IP addresses, reflecting multiple listeners in the cluster handling public client connections. When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered. Finally, the client establishes a connection to the service through the listener on the node where the service is offered. All of these actions take place transparently to the client, without any explicit configuration required in the client.

The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address. If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start Oracle grid infrastructure installation from the server node1, the cluster name is mycluster, and the GNS domain is grid.example.com, then the SCAN Name is mycluster-scan.grid.example.com.

In our case, we decided not to use the GNS domain and the DHCP server, in order to control the IP address allocation.

1 On icc-118a, as root user:

{icc-118a:root}/ # nslookup iccioccluster
Server:         9.52.156.118
Address:        9.52.156.118#53
Non-authoritative answer:
Name:   iccioccluster
Address: 1.1.1.18
{icc-118a:root}/ # nslookup iccioccluster
Server:         9.52.156.118
Address:        9.52.156.118#53
Non-authoritative answer:
Name:   iccioccluster
Address: 1.1.1.19
{icc-118a:root}/ # nslookup iccioccluster
Server:         9.52.156.118
Address:        9.52.156.118#53
Non-authoritative answer:
Name:   iccioccluster
Address: 1.1.1.20
{icc-118a:root}/ #

2 Repeat the step 1 commands on the icc-119a and icc-120a nodes as root user.
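Each nslookup answer above can be reduced to just the returned address. The `parse_scan_addr` filter below is our own illustrative helper (not an Oracle or AIX tool): it skips the DNS server's own Address line (the one ending in #53) and prints only the answer, so the three round-robin answers can be collected and checked against 1.1.1.18/19/20.

```shell
# Illustrative filter (our own): read nslookup output on stdin and print
# only the answer address, skipping the "Address: <server>#53" line.
parse_scan_addr() {
  awk '/^Address/ && $2 !~ /#53/ { print $2 }'
}

# Guarded example: only runs where nslookup is available.
command -v nslookup >/dev/null && nslookup iccioccluster 2>/dev/null | parse_scan_addr
```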


4 Preparation for RAC

4.1 User Creation for RAC

For ease of administration, we will use a different user for each installation.

User oracle will be responsible for ORACLE_HOME and user grid will be responsible for GRID_HOME installation.

1 On each node, connect as root and create the required UNIX groups with the following commands:

On icc-118a, as root user

{icc-118a:root}/ # /usr/bin/mkgroup -'A' id='1000' adms='root' oinstall
{icc-118a:root}/ # /usr/bin/mkgroup -'A' id='1100' adms='root' asmadmin
{icc-118a:root}/ # /usr/bin/mkgroup -'A' id='1200' adms='root' dba
{icc-118a:root}/ # /usr/bin/mkgroup -'A' id='1201' adms='root' oper
{icc-118a:root}/ # /usr/bin/mkgroup -'A' id='1300' adms='root' asmdba
{icc-118a:root}/ # /usr/bin/mkgroup -'A' id='1301' adms='root' asmoper
{icc-118a:root}/ # /usr/bin/mkgroup -'A' id='1600' adms='root' orauser

On icc-119a, as root user

On icc-120a, as root user

2 On each node, connect as root, create the required users grid and oracle, and assign them the required UNIX groups using the following commands.

On icc-118a, as root user:

{icc-118a:root}/ # /usr/bin/mkuser id='1100' pgrp='oinstall' groups='dba,asmadmin,oper,asmdba,asmoper' admgroups='system' home='/home/grid' grid
{icc-118a:root}/ # /usr/bin/mkuser id='1101' pgrp='oinstall' groups='dba,asmdba' admgroups='system' home='/home/oracle' oracle
{icc-118a:root}/ #

On icc-119a, as root user

On icc-120a, as root user

Be sure that all the group and user IDs are identical across the nodes.
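A quick way to compare the IDs across the nodes is to normalize the output of `id` and reduce it to a checksum; the value must be identical on every node. The `id_fingerprint` helper below is our own illustrative sketch, shown here with root instead of grid/oracle.

```shell
# Illustrative helper (our own): normalize `id <user>` output (the order of
# supplementary groups can vary between nodes) and reduce it to a checksum.
id_fingerprint() {
  id "$1" | tr ' ,' '\n\n' | sort | cksum
}

# Run the same command for grid and oracle on every node and compare values.
id_fingerprint root
```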

3 Setup user password for grid and oracle users.

Connect as root and set the password for users grid and oracle with the following commands on each node.

On icc-118a, as root user:

{icc-118a:root}/ # passwd grid
Changing password for "grid"
grid's New password:
Enter the new password again:
{icc-118a:root}/ # passwd oracle
Changing password for "oracle"
oracle's New password:
Enter the new password again:
{icc-118a:root}/ #

On icc-119a, as root user

On icc-120a, as root user


4 Validate user password for grid and oracle users

Before installing Oracle RAC, you must validate the user passwords on each node.

On icc-118a for user grid

{icc-118a:root}/ # telnet icc-118a
Trying...
Connected to icc-118a.
Escape character is '^]'.
telnet (icc-118a)

AIX Version 7
Copyright IBM Corporation, 1982, 2011.
login: …
$ exit
Connection closed.
{icc-118a:root}/ #

Repeat for user oracle on icc-118a

Then exit, and validate the grid and oracle user passwords on icc-119a.

Then exit, and validate the grid and oracle user passwords on icc-120a.

5 Setup ssh for grid and oracle users

To set up ssh, you can proceed the old way: wait for the Oracle Universal Installer (OUI) to do it for you, use the Oracle-provided sshUserSetup.sh script, or do it manually right now.

Usage: ./sshUserSetup.sh -user <user name> [ -hosts "<space separated hostlist>" | -hostfile <absolute path of cluster configuration file> ] [ -advanced ] [ -verify ] [ -exverify ] [ -logfile <desired absolute path of logfile> ] [ -confirm ] [ -shared ] [ -help ] [ -usePassphrase ] [ -noPromptPassphrase ]

To solve an SSH setup issue (tracked as Oracle bug 9044791) while installing with the Oracle Universal Installer, use the following workaround.

4.2 ssh

There are many ssh versions. For this demo, we used:

$ ssh -v
OpenSSH_5.8p1, OpenSSL 0.9.8r 8 Feb 2011
usage: ssh [-1246AaCfgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-e escape_char] [-F configfile]
           [-I pkcs11] [-i identity_file]
           [-L [bind_address:]port:host:hostport]
           [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
           [-R [bind_address:]port:host:hostport] [-S ctl_path]
           [-W host:port] [-w local_tun[:remote_tun]]
           [user@]hostname [command]
$

1 On all nodes (icc-118a, icc-119a and icc-120a), as root user, apply the workaround, or run the following script as root user from the first node.

for node in icc-118 icc-119 icc-120
do
  echo $node
  ssh $node cp /usr/bin/ssh-keygen /usr/local/bin/ssh-keygen
  ssh $node cp /usr/bin/ssh /usr/local/bin/ssh
done

2 On all nodes (icc-118a, icc-119a and icc-120a), as grid user

# cd /home/grid/
# mkdir .ssh
# chmod 700 .ssh
# cd .ssh
# ssh-keygen -t rsa

3 Then, only on icc-118a

# cat id_rsa.pub >> authorized_keys
# scp authorized_keys icc-119a:/home/grid/.ssh/
# scp authorized_keys icc-120a:/home/grid/.ssh/

4 Then only on icc-119a

# cat id_rsa.pub >> authorized_keys
# scp authorized_keys icc-118a:/home/grid/.ssh/
# scp authorized_keys icc-120a:/home/grid/.ssh/

5 Then only on icc-120a

# cat id_rsa.pub >> authorized_keys
# scp authorized_keys icc-118a:/home/grid/.ssh/
# scp authorized_keys icc-119a:/home/grid/.ssh/

6 On icc-118a, icc-119a and icc-120a, verify:

ssh icc-120a date
ssh icc-119a date
ssh icc-118a date


4.3 Create Required Directories

On each node, connect as root and create the directories as follows:

1 On icc-118a, as root user, enter the following commands.

{icc-118a:root}/ # /usr/bin/mkdir -p /oracle/app/11.2.0/grid
{icc-118a:root}/ # /usr/bin/chown -R grid:oinstall /oracle
{icc-118a:root}/ # /usr/bin/mkdir /oracle/app/oracle
{icc-118a:root}/ # /usr/bin/chown oracle:oinstall /oracle/app/oracle
{icc-118a:root}/ # /usr/bin/chmod -R 775 /oracle
{icc-118a:root}/ #

2 Repeat step 1 on icc-119a and icc-120a as root user.

4.4 Assign Capabilities to Users

On each node, connect as root and give the following capabilities and limits to user grid and user oracle:

1 On icc-118a, as root user:

{icc-118a:root}/ # /usr/sbin/lsuser -a capabilities grid
grid capabilities=
{icc-118a:root}/ # /usr/sbin/lsuser -a capabilities oracle
oracle capabilities=
{icc-118a:root}/ # /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
{icc-118a:root}/ # /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
{icc-118a:root}/ # /usr/sbin/lsuser -a capabilities grid
grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{icc-118a:root}/ # /usr/sbin/lsuser -a capabilities oracle
oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

2 Repeat step 1 on icc-119a and icc-120a as root user:

{icc-119a:root}/ # /usr/sbin/lsuser -a capabilities grid
grid capabilities=
{icc-119a:root}/ # /usr/sbin/lsuser -a capabilities oracle
oracle capabilities=
{icc-119a:root}/ # /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
{icc-119a:root}/ # /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
{icc-119a:root}/ # /usr/sbin/lsuser -a capabilities grid
grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
{icc-119a:root}/ # /usr/sbin/lsuser -a capabilities oracle
oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

3 Assign Hard limit and Soft Limit to user grid and user oracle for maximum user processes

You may use smit or the chuser command with the options nproc_hard='-1' and nproc='-1'.

chuser nproc='-1' fsize_hard='-1' data_hard='-1' stack_hard='-1' rss_hard='-1' nofiles_hard='-1' threads_hard='-1' nproc_hard='-1' grid

chuser nproc='-1' fsize_hard='-1' data_hard='-1' stack_hard='-1' rss_hard='-1' nofiles_hard='-1' threads_hard='-1' nproc_hard='-1' oracle
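To double-check that every limit really ended up unlimited, the following sketch scans attribute=value pairs such as those printed by lsuser and fails if any value is not -1. The `limits_unlimited` helper is our own illustration, not an AIX command.

```shell
# Illustrative filter (our own): read "attr=value" tokens on stdin and
# succeed only when every value equals -1 (unlimited in AIX user attributes).
limits_unlimited() {
  tr ' ' '\n' | awk -F= 'NF == 2 { if ($2 != "-1") bad = 1 } END { exit bad }'
}

echo "grid nproc_hard=-1 fsize_hard=-1 stack_hard=-1" | limits_unlimited \
  && echo "all limits unlimited" || echo "WARNING: some limit is not -1"
```

Feed it the real output of lsuser -a nproc_hard fsize_hard data_hard stack_hard grid on each node.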


5 Prepare Disks

Before installing Oracle Clusterware, you must choose between a clustered filesystem (GPFS) and ASM. This demo was prepared with ASM.

5.1 Prepare Disks for ASM Disk Groups

If you plan to use ASM, you have to prepare the disks so that ASM sees them as candidate disks when the disk groups OCRVOT_DG and DATA_DG are created.

As one LUN can be assigned to a different hdisk name on each node, then we can either:

• assign grid/asm ownership to the /dev/rhdisk, or /dev/hdisk

• or provide a unique virtual device name for the same LUN on each node

For ease of administration, providing a virtual device name makes it simple to identify which disk is to be used and for what. As a best practice, we suggest using a table like the following example.

{icc-118a:root}/ # ls -l /dev/hdis*
brw-------    1 root     system       13,  1 Mar 12 15:08 /dev/hdisk0
brw-------    1 root     system       13,  2 Mar 16 08:49 /dev/hdisk1
brw-------    1 root     system       13,  3 Mar 12 15:08 /dev/hdisk2
brw-------    1 root     system       13,  4 Mar 12 15:08 /dev/hdisk3
brw-------    1 root     system       13,  5 Mar 12 15:08 /dev/hdisk4
brw-------    1 root     system       13,  6 Mar 12 15:08 /dev/hdisk5
brw-------    1 root     system       13,  7 Mar 12 15:08 /dev/hdisk6
brw-------    1 root     system       13,  8 Mar 12 15:08 /dev/hdisk7
brw-------    1 root     system       13,  9 Mar 12 15:08 /dev/hdisk8
brw-------    1 root     system       13, 10 Mar 12 15:08 /dev/hdisk9
{icc-118a:root}/ # for i in 2 3 4 5 6 7 8 9
> do
> bootinfo -s hdisk$i
> done
2048
2048
2048
2048
2048
102400
102400
102400
{icc-118a:root}/ #

{icc119a:root}/ # ls -l /dev/hdis*
brw-------    1 root     system       13,  2 Mar 13 16:44 /dev/hdisk0
brw-------    1 root     system       13, 11 Mar 22 05:08 /dev/hdisk1
brw-------    1 root     system       13,  3 Mar 13 16:44 /dev/hdisk2
brw-------    1 root     system       13,  5 Mar 13 16:44 /dev/hdisk3
brw-------    1 root     system       13,  6 Mar 13 16:44 /dev/hdisk4
brw-------    1 root     system       13,  4 Mar 13 16:44 /dev/hdisk5
brw-------    1 root     system       13,  7 Mar 13 16:44 /dev/hdisk6
brw-------    1 root     system       13,  9 Mar 13 16:44 /dev/hdisk7
brw-------    1 root     system       13, 10 Mar 13 16:44 /dev/hdisk8
brw-------    1 root     system       13,  8 Mar 13 16:44 /dev/hdisk9
{icc119a:root}/ # for i in 2 3 4 5 6 7 8 9
> do
> bootinfo -s hdisk$i
> done
2048
2048
2048
2048
2048
102400
102400
102400
{icc119a:root}/ #

{icc120a:root}/ # ls -l /dev/hdis*
brw-------    1 root     system       13,  2 Mar 13 14:50 /dev/hdisk0
brw-------    1 root     system       13,  3 Mar 22 06:15 /dev/hdisk1
brw-------    1 root     system       13,  4 Mar 13 14:50 /dev/hdisk2
brw-------    1 root     system       13,  5 Mar 13 14:50 /dev/hdisk3
brw-------    1 root     system       13,  6 Mar 13 14:50 /dev/hdisk4
brw-------    1 root     system       13,  7 Mar 13 14:50 /dev/hdisk5
brw-------    1 root     system       13,  8 Mar 13 14:50 /dev/hdisk6
brw-------    1 root     system       13,  9 Mar 13 14:50 /dev/hdisk7
brw-------    1 root     system       13, 10 Mar 13 14:50 /dev/hdisk8
brw-------    1 root     system       13, 11 Mar 13 14:50 /dev/hdisk9
{icc120a:root}/ # for i in 2 3 4 5 6 7 8 9
> do
> bootinfo -s hdisk$i
> done
2048
2048
2048
2048
2048
102400
102400
102400
{icc120a:root}/ #

1. With Oracle 11g, it is not necessary to have raw devices to store the OCR and voting disks.

2. With Oracle 11g, you need a disk group of roughly 10 GB to store the OCR and voting disks. In our configuration, we will aggregate five disks of 2 GB to create the disk group. You may use a single disk if its size is adequate.
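The sizing above is simple arithmetic; the sketch below just sums the five 2048 MB LUN sizes (as reported by bootinfo -s) to confirm they reach the ~10 GB target.

```shell
# Sum the five OCR/voting LUN sizes in MB (values from bootinfo -s above).
total=0
for sz in 2048 2048 2048 2048 2048; do
  total=$((total + sz))
done
echo "${total} MB"   # 10240 MB, i.e. 10 GB
```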


1 Create a table similar to the following example.

Disks                  Size    Device Name              Node icc-118          Node icc-119          Node icc-120
                                                        hdisk  Maj. Min.      hdisk  Maj. Min.      hdisk  Maj. Min.
ASM Disk for OCR/VOT     2048  /dev/ASM_OCRVOT_disk1    hdisk2  13    3       hdisk2  13    3       hdisk2  13    4
ASM Disk for OCR/VOT     2048  /dev/ASM_OCRVOT_disk2    hdisk3  13    4       hdisk3  13    5       hdisk3  13    5
ASM Disk for OCR/VOT     2048  /dev/ASM_OCRVOT_disk3    hdisk4  13    5       hdisk4  13    6       hdisk4  13    6
ASM Disk for OCR/VOT     2048  /dev/ASM_OCRVOT_disk4    hdisk5  13    6       hdisk5  13    4       hdisk5  13    7
ASM Disk for OCR/VOT     2048  /dev/ASM_OCRVOT_disk5    hdisk6  13    7       hdisk6  13    7       hdisk6  13    8
ASM Disk for Database  102400  /dev/ASM_DATA_disk1      hdisk7  13    8       hdisk7  13    9       hdisk7  13    9
ASM Disk for Database  102400  /dev/ASM_DATA_disk2      hdisk8  13    9       hdisk8  13   10       hdisk8  13   10
ASM Disk for Database  102400  /dev/ASM_DATA_disk3      hdisk9  13   10       hdisk9  13    8       hdisk9  13   11

You may use a script similar to the following example:

mknod /dev/ASM_OCRVOT_disk1 c 13 3
chown grid:dba /dev/ASM_OCRVOT_disk1
chmod 660 /dev/ASM_OCRVOT_disk1
dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk2 c 13 5
chown grid:dba /dev/ASM_OCRVOT_disk2
chmod 660 /dev/ASM_OCRVOT_disk2
dd if=/dev/zero of=/dev/rhdisk3 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk3 c 13 6
chown grid:dba /dev/ASM_OCRVOT_disk3
chmod 660 /dev/ASM_OCRVOT_disk3
dd if=/dev/zero of=/dev/rhdisk4 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk4 c 13 4
chown grid:dba /dev/ASM_OCRVOT_disk4
chmod 660 /dev/ASM_OCRVOT_disk4
dd if=/dev/zero of=/dev/rhdisk5 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk5 c 13 7
chown grid:dba /dev/ASM_OCRVOT_disk5
chmod 660 /dev/ASM_OCRVOT_disk5
dd if=/dev/zero of=/dev/rhdisk6 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk1 c 13 9
chown grid:dba /dev/ASM_DATA_disk1
chmod 660 /dev/ASM_DATA_disk1
dd if=/dev/zero of=/dev/rhdisk7 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk2 c 13 10
chown grid:dba /dev/ASM_DATA_disk2
chmod 660 /dev/ASM_DATA_disk2
dd if=/dev/zero of=/dev/rhdisk8 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk3 c 13 8
chown grid:dba /dev/ASM_DATA_disk3
chmod 660 /dev/ASM_DATA_disk3
dd if=/dev/zero of=/dev/rhdisk9 bs=8192 count=25000


We proceed as follows, taking LUN L8200000000000000 as an example.

2 On icc-118a as user root:

{icc-118a:root}/ # cat createasm3.sh
mknod /dev/ASM_OCRVOT_disk1 c 13 3
chown grid:dba /dev/ASM_OCRVOT_disk1
chmod 660 /dev/ASM_OCRVOT_disk1
dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk2 c 13 4
chown grid:dba /dev/ASM_OCRVOT_disk2
chmod 660 /dev/ASM_OCRVOT_disk2
dd if=/dev/zero of=/dev/rhdisk3 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk3 c 13 5
chown grid:dba /dev/ASM_OCRVOT_disk3
chmod 660 /dev/ASM_OCRVOT_disk3
dd if=/dev/zero of=/dev/rhdisk4 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk4 c 13 6
chown grid:dba /dev/ASM_OCRVOT_disk4
chmod 660 /dev/ASM_OCRVOT_disk4
dd if=/dev/zero of=/dev/rhdisk5 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk5 c 13 7
chown grid:dba /dev/ASM_OCRVOT_disk5
chmod 660 /dev/ASM_OCRVOT_disk5
dd if=/dev/zero of=/dev/rhdisk6 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk1 c 13 8
chown grid:dba /dev/ASM_DATA_disk1
chmod 660 /dev/ASM_DATA_disk1
dd if=/dev/zero of=/dev/rhdisk7 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk2 c 13 9
chown grid:dba /dev/ASM_DATA_disk2
chmod 660 /dev/ASM_DATA_disk2
dd if=/dev/zero of=/dev/rhdisk8 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk3 c 13 10
chown grid:dba /dev/ASM_DATA_disk3
chmod 660 /dev/ASM_DATA_disk3
dd if=/dev/zero of=/dev/rhdisk9 bs=8192 count=25000
{icc-118a:root}/ # ls -las /dev/ASM*
0 crw-rw----    1 grid     dba          13,  8 Mar 22 09:16 /dev/ASM_DATA_disk1
0 crw-rw----    1 grid     dba          13,  9 Mar 22 09:16 /dev/ASM_DATA_disk2
0 crw-rw----    1 grid     dba          13, 10 Mar 22 09:17 /dev/ASM_DATA_disk3
0 crw-rw----    1 grid     dba          13,  3 Mar 22 09:10 /dev/ASM_OCRVOT_disk1
0 crw-rw----    1 grid     dba          13,  4 Mar 22 09:16 /dev/ASM_OCRVOT_disk2
0 crw-rw----    1 grid     dba          13,  5 Mar 22 09:16 /dev/ASM_OCRVOT_disk3
0 crw-rw----    1 grid     dba          13,  6 Mar 22 09:16 /dev/ASM_OCRVOT_disk4
0 crw-rw----    1 grid     dba          13,  7 Mar 22 09:16 /dev/ASM_OCRVOT_disk5
{icc-118a:root}/ #

3 On icc-119a as user root :

{icc119a:root}/ # cat createasm3.sh
mknod /dev/ASM_OCRVOT_disk1 c 13 3
chown grid:dba /dev/ASM_OCRVOT_disk1
chmod 660 /dev/ASM_OCRVOT_disk1
dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk2 c 13 5
chown grid:dba /dev/ASM_OCRVOT_disk2
chmod 660 /dev/ASM_OCRVOT_disk2
dd if=/dev/zero of=/dev/rhdisk3 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk3 c 13 6
chown grid:dba /dev/ASM_OCRVOT_disk3
chmod 660 /dev/ASM_OCRVOT_disk3
dd if=/dev/zero of=/dev/rhdisk4 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk4 c 13 4
chown grid:dba /dev/ASM_OCRVOT_disk4
chmod 660 /dev/ASM_OCRVOT_disk4
dd if=/dev/zero of=/dev/rhdisk5 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk5 c 13 7
chown grid:dba /dev/ASM_OCRVOT_disk5
chmod 660 /dev/ASM_OCRVOT_disk5
dd if=/dev/zero of=/dev/rhdisk6 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk1 c 13 9
chown grid:dba /dev/ASM_DATA_disk1
chmod 660 /dev/ASM_DATA_disk1
dd if=/dev/zero of=/dev/rhdisk7 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk2 c 13 10
chown grid:dba /dev/ASM_DATA_disk2
chmod 660 /dev/ASM_DATA_disk2
dd if=/dev/zero of=/dev/rhdisk8 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk3 c 13 8
chown grid:dba /dev/ASM_DATA_disk3
chmod 660 /dev/ASM_DATA_disk3
dd if=/dev/zero of=/dev/rhdisk9 bs=8192 count=25000
{icc119a:root}/ # ls -las /dev/ASM*
0 crw-rw----    1 grid     dba          13,  9 Mar 22 09:21 /dev/ASM_DATA_disk1
0 crw-rw----    1 grid     dba          13, 10 Mar 22 09:21 /dev/ASM_DATA_disk2
0 crw-rw----    1 grid     dba          13,  8 Mar 22 09:21 /dev/ASM_DATA_disk3
0 crw-rw----    1 grid     dba          13,  3 Mar 22 09:20 /dev/ASM_OCRVOT_disk1
0 crw-rw----    1 grid     dba          13,  5 Mar 22 09:20 /dev/ASM_OCRVOT_disk2
0 crw-rw----    1 grid     dba          13,  6 Mar 22 09:20 /dev/ASM_OCRVOT_disk3
0 crw-rw----    1 grid     dba          13,  4 Mar 22 09:20 /dev/ASM_OCRVOT_disk4
0 crw-rw----    1 grid     dba          13,  7 Mar 22 09:20 /dev/ASM_OCRVOT_disk5
{icc119a:root}/ #

4 On icc-120a as user root :

{icc120a:root}/ # cat /script/createasm3.sh
mknod /dev/ASM_OCRVOT_disk1 c 13 4
chown grid:dba /dev/ASM_OCRVOT_disk1
chmod 660 /dev/ASM_OCRVOT_disk1
dd if=/dev/zero of=/dev/rhdisk2 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk2 c 13 5
chown grid:dba /dev/ASM_OCRVOT_disk2
chmod 660 /dev/ASM_OCRVOT_disk2
dd if=/dev/zero of=/dev/rhdisk3 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk3 c 13 6
chown grid:dba /dev/ASM_OCRVOT_disk3
chmod 660 /dev/ASM_OCRVOT_disk3
dd if=/dev/zero of=/dev/rhdisk4 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk4 c 13 7
chown grid:dba /dev/ASM_OCRVOT_disk4
chmod 660 /dev/ASM_OCRVOT_disk4
dd if=/dev/zero of=/dev/rhdisk5 bs=8192 count=25000
#
mknod /dev/ASM_OCRVOT_disk5 c 13 8
chown grid:dba /dev/ASM_OCRVOT_disk5
chmod 660 /dev/ASM_OCRVOT_disk5
dd if=/dev/zero of=/dev/rhdisk6 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk1 c 13 9
chown grid:dba /dev/ASM_DATA_disk1
chmod 660 /dev/ASM_DATA_disk1
dd if=/dev/zero of=/dev/rhdisk7 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk2 c 13 10
chown grid:dba /dev/ASM_DATA_disk2
chmod 660 /dev/ASM_DATA_disk2
dd if=/dev/zero of=/dev/rhdisk8 bs=8192 count=25000
#
mknod /dev/ASM_DATA_disk3 c 13 11
chown grid:dba /dev/ASM_DATA_disk3
chmod 660 /dev/ASM_DATA_disk3
dd if=/dev/zero of=/dev/rhdisk9 bs=8192 count=25000
{icc120a:root}/ # ls -las /dev/ASM*
0 crw-rw----    1 grid     dba          13,  9 Mar 22 09:23 /dev/ASM_DATA_disk1
0 crw-rw----    1 grid     dba          13, 10 Mar 22 09:23 /dev/ASM_DATA_disk2
0 crw-rw----    1 grid     dba          13, 11 Mar 22 09:23 /dev/ASM_DATA_disk3
0 crw-rw----    1 grid     dba          13,  4 Mar 22 09:22 /dev/ASM_OCRVOT_disk1
0 crw-rw----    1 grid     dba          13,  5 Mar 22 09:22 /dev/ASM_OCRVOT_disk2
0 crw-rw----    1 grid     dba          13,  6 Mar 22 09:22 /dev/ASM_OCRVOT_disk3
0 crw-rw----    1 grid     dba          13,  7 Mar 22 09:22 /dev/ASM_OCRVOT_disk4
0 crw-rw----    1 grid     dba          13,  8 Mar 22 09:23 /dev/ASM_OCRVOT_disk5
{icc120a:root}/ #

5 In steps 2, 3 and 4, the resulting /dev/ASM_* device names must be identical on all nodes, as the disks are shared; only the hdisk names and minor numbers may differ per node.
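To cross-check the device files against the mapping table, you can reduce the ls -l output to "major,minor name" pairs. The `asm_map` filter below is our own illustrative sketch, not an AIX tool.

```shell
# Illustrative filter (our own): turn `ls -l /dev/ASM_*` lines into
# "major,minor name" pairs, easy to compare with the per-node mapping table.
asm_map() {
  awk '{ gsub(",", "", $5); print $5 "," $6, $NF }'
}

ls -l /dev/ASM_* 2>/dev/null | asm_map
```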

Later, when the cluster is ready, you will be able to create other disk groups using the ASM CLI or the graphical assistant asmca.

With CLI

{icc-118a:root}/ # su - grid
$ asmcmd
ASMCMD> create diskgroup DATA1 external redundancy disk '/dev/ASM_DATA_disk1';

With asmca

{icc-118a:root}/ # su - grid
$ asmca

To easily identify the disks in ASM from SQL*Plus, query the view v$ASM_DISK.

SQL> select NAME, HEADER_STATUS, PATH, STATE, FREE_MB from v$ASM_DISK;


6 Using Cluster Verification Utility

Before installing the grid infrastructure, run the script runcluvfy.sh to check the configuration. The requirements are listed in the following document:

Oracle® Grid Infrastructure

Installation Guide

11g Release 2 (11.2) for IBM AIX Based Systems

ipqmaxlen = 512
rfc1323 = 1
sb_max = 1310720
tcp_recvspace = 65536
tcp_sendspace = 65536
udp_recvspace = 655360
udp_sendspace = 65536
minperm = 29946
minperm% = 3
maxperm = 898419
vpm_xvcpus = 0

You must execute the script runcluvfy.sh on each node, as the grid user. It takes only about five minutes.

1 On icc-118a, as grid user

{icc-118a:root}/ # su - grid
$ cd /cd/Grid_Infrastructure_11gR2/grid
$ ./runcluvfy.sh stage -pre crsinst -n icc-118a,icc-119a,icc-120a -fixup -verbose > runcluvfy.log
$ cat runcluvfy.log | grep failed
Result: Swap space check failed
icc-119 hard 128 16384 failed
icc-120 hard 128 16384 failed
icc-118 hard 128 16384 failed
Result: Hard limits check failed for "maximum user processes"
icc-119 soft 128 2047 failed
icc-120 soft 128 2047 failed
icc-118 soft 128 2047 failed
Result: Soft limits check failed for "maximum user processes"
icc-119 128 2048 failed
icc-120 128 2048 failed
icc-118 128 2048 failed
Result: Kernel parameter check failed for "maxuproc"
icc-119 32768 9000 failed (ignorable)
icc-120 32768 9000 failed (ignorable)
icc-118 32768 9000 failed (ignorable)
icc-119 65535 65500 failed (ignorable)
icc-120 65535 65500 failed (ignorable)
icc-118 65535 65500 failed (ignorable)
icc-119 32768 9000 failed (ignorable)
icc-120 32768 9000 failed (ignorable)
icc-118 32768 9000 failed (ignorable)
icc-119 65535 65500 failed (ignorable)
icc-120 65535 65500 failed (ignorable)
icc-118 65535 65500 failed (ignorable)
$


If you see “Pre-check for cluster services setup was unsuccessful on all the nodes.”, check which checks failed:

2 About line Result: Swap space check failed

This amount of swap space is not mandatory on AIX. You can ignore this failure.

3 About the lines Result: Soft limits check failed for "maximum user processes" (or similar limit checks)

You can change the user’s attributes with smit. If you change the values directly in /etc/security/limits, you will have to recreate the grid user.
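As a sketch, the same limits can also be raised from the command line on AIX; the attribute values below are examples, not values taken from this document:

```shell
# Raise the kernel-wide per-user process limit (maxuproc; example value):
chdev -l sys0 -a maxuproc=16384
lsattr -El sys0 -a maxuproc        # verify the new value

# Raise the grid user's file-size and data-segment limits to unlimited:
chuser fsize=-1 data=-1 grid
lsuser -a fsize data grid          # verify
```

maxuproc takes effect immediately, while the per-user limits apply at the next login of grid.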

4 About the lines Result: PRVF-4007 : User equivalence check failed for user "grid"

Check that the grid user has the same UID and GID on all nodes and that passwordless ssh works between all the nodes.
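A quick manual check, using the node names from this document:

```shell
# The uid/gid pair printed must be identical on every node,
# and each ssh must return without prompting for a password:
for node in icc-118a icc-119a icc-120a
do
  ssh $node id grid
done
```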


6.1 Oracle 11gR2 Grid Infrastructure Installation

6.2 Prepare the environment

Before installing the Clusterware (the grid infrastructure), you must prepare the environment. On all nodes (icc-118a, icc-119a, icc-120a), as root user, it is mandatory to execute the rootpre.sh script.

{icc-118a:root}/ # cd /cd/Grid_Infrastructure_11gR2/grid
{icc-118a:root}/cd/Grid_Infrastructure_11gR2/grid # ls
doc      response  rootpre.sh  runInstaller  sshsetup  welcome.html
install  rootpre   rpm         runcluvfy.sh  stage
{icc-118a:root}/cd/Grid_Infrastructure_11gR2/grid #

1 Launch the script rootpre.sh on icc-118a

{icc-118a:root}/cd/Grid_Infrastructure_11gR2/grid # ./rootpre.sh
./rootpre.sh output will be logged in /tmp/rootpre.out_10-06-28.15:20:09
Saving the original files in /etc/ora_save_10-06-28.15:20:09....
Copying new kernel extension to /etc....
Loading the kernel extension from /etc
Oracle Kernel Extension Loader for AIX
Copyright (c) 1998,1999 Oracle Corporation
Successfully loaded /etc/pw-syscall.64bit_kernel with kmid: 0x50949000
Successfully configured /etc/pw-syscall.64bit_kernel with kmid: 0x50949000
The kernel extension was successfuly loaded.
Checking if group services should be configured....
Nothing to configure.
{icc-118a:root}/cd/Grid_Infrastructure_11gR2/grid #

2 Launch the script rootpre.sh on icc-119a and on icc-120a

3 On all nodes, as root user, execute:

[root@node1 ~]# slibclean

4 On icc-118a, as root user, execute:

{icc-118a:root}/ # for node in icc-118 icc-119
> do
>   ssh $node mkdir /oracle/tmp
>   ssh $node chmod 777 /oracle/tmp
> done
{icc-118a:root}/ #

5 Connect as the grid user, update the .profile file for the grid user on all nodes, and reconnect as grid on icc-118a.

{icc-118a:root}/ # su - grid
$ pwd
/home/grid
$ vi .profile
--------------------------------------------------------------------------
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
TMP=/oracle/tmp; export TMP
TMPDIR=/oracle/tmp; export TMPDIR
TEMPDIR=/oracle/tmp; export TEMPDIR
TEMP=/oracle/tmp; export TEMP
set -o vi


if [ -t 0 ]; then
        stty intr ^C
fi
if [ -s "$MAIL" ]           # This is at Shell startup.  In normal
then echo "$MAILMSG"        # operation, the Shell checks
fi                          # periodically.
$ . ./.profile
$ set | grep tmp
TEMP=/oracle/tmp
TEMPDIR=/oracle/tmp
TMP=/oracle/tmp
TMPDIR=/oracle/tmp
$

6 Under a VNC client session, or another graphical interface, execute the following from ONLY one node (icc-118a), as root user:

{icc-118a:root}/ # xhost +
access control disabled, clients can connect from any host
{icc-118a:root}/ #


6.3 Install the clusterware environment

This operation takes 3 hours.

Wait until the OUI Installation screen appears.
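The original does not show the launch command (it was most likely a screenshot). As a sketch, the OUI is normally started from the staging directory as the grid user; the DISPLAY value below is an assumption matching the VNC session used earlier in this document:

```shell
su - grid
export DISPLAY=icc-118a:1              # example display for the VNC session
cd /cd/Grid_Infrastructure_11gR2/grid
./runInstaller
```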


1 At the Select Installation Option screen, select Install and Configure Grid Infrastructure for a Cluster, then click the Next button.

2 At the Select Installation Type screen, select Advanced Installation, then click the Next button.

Typical Install has the same goal as the Typical install for the Database: an easy, fast install. As much as possible is defaulted, with a limited number of interview screens.


3 At the Select Product Languages screen, select the languages in which the Oracle Grid Infrastructure product will run, then click the Next button.

In our example, we kept the default language, which is English.

4 At the Grid Plug and Play Information screen, modify Cluster Name, Scan Name, Scan port and uncheck the box related to the GNS domain.

Enter a name for your cluster that is unique throughout your entire enterprise network. Since we set up the SCAN entries in DNS, with the SCAN name resolving to up to three different IP addresses, do not check Configure GNS.

During installation, listeners are created on nodes for the SCAN IP addresses. Oracle Net Services routes application requests to the least loaded instance providing the service. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
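Before the installation, it is worth confirming that the SCAN name resolves as expected; the SCAN name below is the one that appears later in this document's cluvfy output:

```shell
# Each lookup should return up to three addresses for the SCAN name:
nslookup clusterha
```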


5 Then click the Next button and wait for the Cluster Node Information screen.

6 At the Cluster Node Information screen, add an entry for each node within the cluster by clicking on the Add... button.

The node names provided at this step must be resolvable.

The management of the virtual IP addresses can be done automatically as long as a working DHCP service is available to serve IP addresses on that network. In our example there is no DHCP server available, so IP address resolution must be done through the /etc/hosts file.

If the GNS Configuration box was checked at the previous step, the resolution of the VIPs is done automatically and you do not need to provide a virtual hostname for each node.

The ssh configuration is already done, so you neither need to click the SSH Connectivity button nor enter the grid password; runInstaller checks it automatically.

7 Enter the details for all the nodes in the cluster, then click the OK button.


8 Once the configuration is done, click on the Next button and wait for the screen Specify Network Interface Usage.

9 At the Specify Network Interface Usage screen, define which interface will be used as public and/or private network.

Check that the public and private networks are specified correctly.


en0 is the “Do Not Use” network

en1 is the private network (Oracle Cluster Interconnect)

en2 is the public network

10 Then click the Next button and wait for the Storage Option Information Screen.

With this new release (11gR2), it is now possible to store the OCR and voting disks inside ASM. You have two possibilities:

• ASM: the OCR and voting disks are placed in an ASM diskgroup.
• Shared file system: raw devices and block devices are not supported for the OCR and voting disks, so you have to use a cluster file system such as GPFS for this option.

In our case, we will use ASM to store the OCR and voting disks.

11 Select the redundancy (External) and change the Discovery path (/dev/ASM* in our example).

Do not forget to change the Discovery path (/dev/ASM* in our example).
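It can help to confirm beforehand that the candidate devices matching the discovery path exist on every node; the device pattern is the one used in this document, and the expected ownership is an assumption based on the grid user and asmadmin group created earlier:

```shell
# Devices must be visible on every node and readable/writable by grid:asmadmin:
ls -l /dev/ASM*
```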

12 Select the disks you want to use for OCR Files, then click the Next button.


13 Assign the different groups to the specific management options, then click the Next button.


14 At the Specify Installation Location screen, specify the Oracle base and the software location for the Grid Infrastructure. These locations must exist, with read and write access, on all nodes on which you plan to install the software.


The Oracle Grid home must be installed parallel to the Oracle base, not inside the Oracle base!

15 Provide the Oracle Inventory Directory and then click on the Next button.

16 Wait while the system is being verified thoroughly for prerequisites.


17 The result of the prerequisite checks can be seen in the next screenshot.


A new feature in this release is the fixup script: for everything that Oracle can change via a script, the installer provides one. Just run the script, and the failed checks that are fixable get fixed. Don’t worry about prerequisite checks before running the installer; just let the installer do the work for you. It will report everything you need to fix before installing. Push the Check Again button as many times as you wish, until everything is reported as passed. Below is an explanation of the errors we got:
- Swap size: we do not expand the swap size in our case.
- AIX APARs: all the listed APARs are already included in AIX 6.1 TL5, so nothing needs to be done.

18 Check the Ignore All box to ignore the remaining errors and click the Next button.


19 The next screen summarizes all the information provided in the previous steps.

Save the response file by clicking the Save Response File button. The file will be saved as $HOME/grid.rsp.


20 Then start the installation by clicking the Finish button.


21 Run the following two scripts (orainstRoot.sh and root.sh) to finish the installation of the Grid Infrastructure.

Warning: run each script on node 1 first; after successful completion, you can run it on the other nodes.

{icc-118a:root}/ # /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
{icc-118a:root}/tmp #

{icc-119a:root}/ # /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
{icc-119a:root}/ #

Then run root.sh on each node, one after the other (not at the same time), as root user:

FIRST on icc-118a, as root user:

{icc-118a:root}/ # cd /oracle/app/11.2.0/grid
{icc-118a:root}/ # ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
User grid has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'icc-118a'
CRS-2676: Start of 'ora.mdnsd' on 'icc-118a' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'icc-118a'
CRS-2676: Start of 'ora.gpnpd' on 'icc-118a' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'icc-118a'
CRS-2672: Attempting to start 'ora.gipcd' on 'icc-118a'
CRS-2676: Start of 'ora.gipcd' on 'icc-118a' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'icc-118a' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'icc-118a'
CRS-2672: Attempting to start 'ora.diskmon' on 'icc-118a'
CRS-2676: Start of 'ora.diskmon' on 'icc-118a' succeeded
CRS-2676: Start of 'ora.cssd' on 'icc-118a' succeeded

ASM created and started successfully.

Disk Group OCRVOT created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 424fd4165c9a4f3fbf1db8ab22770344.
Successfully replaced voting disk group with +OCRVOT.


CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name                Disk group
--  -----    -----------------                 ---------                ----------
 1. ONLINE   424fd4165c9a4f3fbf1db8ab22770344  (/dev/ASM_OCRVOT_disk5)  [OCRVOT]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'icc-118a'
CRS-2676: Start of 'ora.asm' on 'icc-118a' succeeded
CRS-2672: Attempting to start 'ora.OCRVOT.dg' on 'icc-118a'
CRS-2676: Start of 'ora.OCRVOT.dg' on 'icc-118a' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'icc-118a'
CRS-2676: Start of 'ora.registry.acfs' on 'icc-118a' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
{icc-118a:root}/ #

On icc-119a, as root user:

{icc-119a:root}/ # cd /oracle/app/11.2.0/grid
{icc-119a:root}/ # ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
User grid has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node icc-118a, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
{icc-119a:root}/ #

On icc-120a, as root user:

{icc-120a:root}/ # cd /oracle/app/11.2.0/grid
{icc-120a:root}/ # ./root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
User grid has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node icc-118a, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
{icc-120a:root}/ #

The root.sh is simply a parent script that calls the following scripts:

<GRID_HOME>/install/utl/rootmacro.sh            # validates home and user
<GRID_HOME>/install/utl/rootinstall.sh          # creates some local files
<GRID_HOME>/network/install/sqlnet/setowner.sh  # opens up /tmp permissions
<GRID_HOME>/rdbms/install/rootadd_rdbms.sh      # misc file/permission checks
<GRID_HOME>/rdbms/install/rootadd_filemap.sh    # misc file/permission checks
<GRID_HOME>/crs/install/rootcrs.pl              # MAIN CLUSTERWARE CONFIG SCRIPT

Most problems are likely to happen in the rootcrs.pl script, which is the main clusterware configuration script. This script logs useful trace data to <GRID_HOME>/cfgtoollogs/crsconfig/rootcrs_<nodename>.log.

You should also check the clusterware alert log under <GRID_HOME>/log/<nodename> first for any obvious problems or errors.

If root.sh fails on an initial install, it is OK to re-run it after the cause of the failure is corrected (permissions, paths, etc.). In this case, first run rootdelete.sh ($ORACLE_HOME/crs/utl/rootdelete.sh) to undo the local effects of root.sh before re-running it.
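Using the paths from this document, the recovery sequence can be sketched as follows (a sketch, assuming the cause of the failure has already been fixed):

```shell
# Undo the local effects of the failed root.sh run:
/oracle/app/11.2.0/grid/crs/utl/rootdelete.sh
# Then re-run the configuration script:
cd /oracle/app/11.2.0/grid
./root.sh
```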

22 Once root.sh has been executed on all nodes, click the OK button to continue the installation.


23 Click the Close button when you see the following message.


24 From icc-118a, as the grid user, verify the installation by running the cluvfy utility.

You can have the output displayed on your screen or redirect it to a file.

{node1:root}/oracle/grid/app/oraInventory/logs # su - grid
$ id
uid=1100(grid) gid=1000(oinstall) groups=1100(asmadmin),1200(dba),1201(oper),1300(asmdba),1301(asmoper)
$ cd $ORACLE_HOME/bin
$ pwd
/oracle/grid/app/11.2.0/grid/bin
$ ./cluvfy stage -post crsinst -n all > /tmp/cluvfy-cluster.log
$ cat /tmp/cluvfy-cluster.log
Performing post-checks for cluster services setup

Checking node reachability...
Check: Node reachability from node "demo1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  icc-119                               yes
  icc-118                               yes
Result: Node reachability check passed from node "demo1"

Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  icc-118                               passed
  icc-119                               passed
Result: User equivalence check passed for user "grid"


Checking time zone consistency...
Time zone consistency check passed.

Checking Cluster manager integrity...
Checking CSS daemon...
  Node Name                             Status
  ------------------------------------  ------------------------
  icc-118                               running
  icc-119                               running
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  icc-118       022                       0022                      passed
  icc-119       022                       0022                      passed
Result: Default user file creation mask check passed

Checking cluster integrity...
  Node Name
  ------------------------------------
  icc-118
  icc-119
Cluster integrity check passed

Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Checking OCR location "/gpfs1/storage/ocr1"...
Check for OCR location "/gpfs1/storage/ocr1" successful
Checking OCR location "/gpfs2/storage/ocr2"...
Check for OCR location "/gpfs2/storage/ocr2" successful
Checking OCR location "/gpfs3/storage/ocr3"...
Check for OCR location "/gpfs3/storage/ocr3" successful
WARNING:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed

Checking CRS integrity...
The Oracle clusterware is healthy on node "demo1"
The Oracle clusterware is healthy on node "demo2"
CRS integrity check passed

Checking node application existence...
Checking existence of VIP node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  icc-118       yes                       online                    passed
  icc-119       yes                       online                    passed
Result: Check passed.

Checking existence of ONS node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  icc-118       no                        online                    passed
  icc-119       no                        online                    passed


Result: Check passed.

Checking existence of GSD node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  icc-118       no                        does not exist            ignored
  icc-119       no                        does not exist            ignored
Result: Check ignored.

Checking existence of EONS node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  icc-118       no                        online                    passed
  icc-119       no                        online                    passed
Result: Check passed.

Checking existence of NETWORK node application
  Node Name     Required                  Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  icc-118       no                        online                    passed
  icc-119       no                        online                    passed
Result: Check passed.

Checking Single Client Access Name (SCAN)...
  SCAN VIP name     Node          Running?      ListenerName  Port      Running?
  ----------------  ------------  ------------  ------------  --------  --------
  clusterha         icc-119       true          LISTENER      1521      true

Checking name resolution setup for "clusterha"...
  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  clusterha     10.3.25.171               passed
  clusterha     10.3.25.172               passed
  clusterha     10.3.25.173               passed
Verification of SCAN VIP and Listener setup passed

Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  icc-118       does not exist            passed
  icc-119       does not exist            passed
Result: User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  icc-118                               passed
  icc-119                               passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  icc-118                               Observer
  icc-119                               Observer
CTSS is in Observer state. Switching over to clock synchronization checks using NTP


Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed

Checking daemon liveness...
Check: Liveness for "xntpd"
  Node Name                             Running?
  ------------------------------------  ------------------------
  icc-118                               yes
  icc-119                               yes
Result: Liveness check passed for "xntpd"

Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  icc-118                               yes
  icc-119                               yes
Result: NTP daemon slewing option check passed

Checking NTP daemon's boot time configuration, in file "/etc/rc.tcpip", for slewing option "-x"
Check: NTP daemon's boot time configuration
  Node Name                             Slewing Option Set?
  ------------------------------------  ------------------------
  icc-118                               yes
  icc-119                               yes
Result: NTP daemon's boot time configuration check for slewing option passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed

Oracle Cluster Time Synchronization Services check passed

Post-check for cluster services setup was successful.
{icc-118a:root}/tmp #


6.3.1 Check Oracle Clusterware resources

1 Verify the .profile file for the grid user

$ cat .profile
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
export PATH
if [ -s "$MAIL" ]           # This is at Shell startup.  In normal
then echo "$MAILMSG"        # operation, the Shell checks
fi                          # periodically.
DISPLAY=icc-118a:1; export DISPLAY
# Oracle Grid Infrastructure Settings
AIXTHREAD_SCOPE=S; export AIXTHREAD_SCOPE
TMP=/oracle/tmp; export TMP
TMPDIR=/oracle/tmp; export TMPDIR
TEMPDIR=/oracle/tmp; export TEMPDIR
TEMP=/oracle/tmp; export TEMP
ORACLE_HOSTNAME=icc-118a; export ORACLE_HOSTNAME
ORA_CRS_HOME=/oracle/app/11.2.0/grid; export ORA_CRS_HOME
ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE
ORACLE_HOME=$ORA_CRS_HOME; export ORACLE_HOME
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
LIBPATH=$LD_LIBRARY_PATH; export LIBPATH
set -o vi
if [ -t 0 ]; then
        stty intr ^C
fi
if [ -s "$MAIL" ]           # This is at Shell startup.  In normal
then echo "$MAILMSG"        # operation, the Shell checks
fi                          # periodically.
$

2 The crsctl command is a versatile tool to configure the cluster or display its configuration.

$ crsctl
Usage: crsctl <command> <object> [<options>]
    command: enable|disable|config|start|stop|relocate|replace|stat|add|delete|modify|getperm|setperm|check|set|get|unset|debug|lsmodules|query|pin|unpin
For complete usage, use:
    crsctl [-h | --help]
For detailed help on each command and object and its options use:
    crsctl <command> <object> -h    e.g. crsctl relocate resource -h
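A few commonly used 11gR2 crsctl invocations, shown as a sketch (the output will of course differ per cluster):

```shell
crsctl check crs            # check the local CRS, CSS and EVM daemons
crsctl check cluster -all   # check clusterware status on all nodes
crsctl query css votedisk   # list the voting disks
```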


3 On icc-118a, as the grid user, you may use the following commands to display the configuration.

To display the local Resources and Cluster Resources:

$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       icc-118a
               ONLINE  ONLINE       icc-119a
               ONLINE  ONLINE       icc-120a
ora.OCRVOT.dg
               ONLINE  ONLINE       icc-118a
               ONLINE  ONLINE       icc-119a
               ONLINE  ONLINE       icc-120a
ora.asm
               ONLINE  ONLINE       icc-118a                 Started
               ONLINE  ONLINE       icc-119a                 Started
               ONLINE  ONLINE       icc-120a                 Started
ora.gsd
               OFFLINE OFFLINE      icc-118a
               OFFLINE OFFLINE      icc-119a
               OFFLINE OFFLINE      icc-120a
ora.net1.network
               ONLINE  ONLINE       icc-118a
               ONLINE  ONLINE       icc-119a
               ONLINE  ONLINE       icc-120a
ora.ons
               ONLINE  ONLINE       icc-118a
               ONLINE  ONLINE       icc-119a
               ONLINE  ONLINE       icc-120a
ora.registry.acfs
               ONLINE  ONLINE       icc-118a
               ONLINE  ONLINE       icc-119a
               ONLINE  ONLINE       icc-120a
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       icc-118a
ora.cvu
      1        ONLINE  ONLINE       icc-118a
ora.icc-118a.vip
      1        ONLINE  ONLINE       icc-118a
ora.icc-119a.vip
      1        ONLINE  ONLINE       icc-119a
ora.icc-120a.vip
      1        ONLINE  ONLINE       icc-120a
ora.oc4j
      1        ONLINE  ONLINE       icc-118a
ora.scan1.vip
      1        ONLINE  ONLINE       icc-118a
$
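This tabular output lends itself to quick filtering in scripts. The sketch below counts the instances reported OFFLINE; a here-document with sample lines mimicking the listing above stands in for the live command, so it runs without a cluster.

```shell
# Count resource instances reported OFFLINE in crsctl-style table lines.
# The sample lines below are illustrative, not live cluster data.
offline_count=$(awk '$1 == "OFFLINE" { n++ } END { print n+0 }' <<'EOF'
               ONLINE  ONLINE       icc-118a
               OFFLINE OFFLINE      icc-118a
               OFFLINE OFFLINE      icc-119a
               OFFLINE OFFLINE      icc-120a
EOF
)
echo "OFFLINE instances: $offline_count"
```

Against a live cluster you would pipe `crsctl status resource -t` into the same awk filter instead of the here-document.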

To display the status of the cluster resources:

$ crsctl status resource
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE, ONLINE, ONLINE
STATE=ONLINE on icc-118a, ONLINE on icc-119a, ONLINE on icc-120a

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE

STATE=ONLINE on icc-118a

NAME=ora.OCRVOT.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE, ONLINE, ONLINE
STATE=ONLINE on icc-118a, ONLINE on icc-119a, ONLINE on icc-120a

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE, ONLINE, ONLINE
STATE=ONLINE on icc-118a, ONLINE on icc-119a, ONLINE on icc-120a

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on icc-118a

NAME=ora.gsd
TYPE=ora.gsd.type
TARGET=OFFLINE, OFFLINE, OFFLINE
STATE=OFFLINE, OFFLINE, OFFLINE

NAME=ora.icc-118a.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on icc-118a

NAME=ora.icc-119a.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on icc-119a

NAME=ora.icc-120a.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on icc-120a

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE, ONLINE, ONLINE
STATE=ONLINE on icc-118a, ONLINE on icc-119a, ONLINE on icc-120a

NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on icc-118a

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE, ONLINE, ONLINE
STATE=ONLINE on icc-118a, ONLINE on icc-119a, ONLINE on icc-120a

NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
TARGET=ONLINE, ONLINE, ONLINE
STATE=ONLINE on icc-118a, ONLINE on icc-119a, ONLINE on icc-120a

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on icc-118a
$

4 To display the state of the cluster:

{icc-118a:grid}/ $ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    icc-118a

ora....N1.lsnr ora....er.type ONLINE    ONLINE    icc-118a
ora.OCRVOT.dg  ora....up.type ONLINE    ONLINE    icc-118a
ora.asm        ora.asm.type   ONLINE    ONLINE    icc-118a
ora.cvu        ora.cvu.type   ONLINE    ONLINE    icc-118a
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....SM1.asm application    ONLINE    ONLINE    icc-118a
ora....8A.lsnr application    ONLINE    ONLINE    icc-118a
ora....18a.gsd application    OFFLINE   OFFLINE
ora....18a.ons application    ONLINE    ONLINE    icc-118a
ora....18a.vip ora....t1.type ONLINE    ONLINE    icc-118a
ora....SM2.asm application    ONLINE    ONLINE    icc-119a
ora....9A.lsnr application    ONLINE    ONLINE    icc-119a
ora....19a.gsd application    OFFLINE   OFFLINE
ora....19a.ons application    ONLINE    ONLINE    icc-119a
ora....19a.vip ora....t1.type ONLINE    ONLINE    icc-119a
ora....SM3.asm application    ONLINE    ONLINE    icc-120a
ora....0A.lsnr application    ONLINE    ONLINE    icc-120a
ora....20a.gsd application    OFFLINE   OFFLINE
ora....20a.ons application    ONLINE    ONLINE    icc-120a
ora....20a.vip ora....t1.type ONLINE    ONLINE    icc-120a
ora....network ora....rk.type ONLINE    ONLINE    icc-118a
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    icc-118a
ora.ons        ora.ons.type   ONLINE    ONLINE    icc-118a
ora....ry.acfs ora....fs.type ONLINE    ONLINE    icc-118a
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    icc-118a
$

5 To check the registered network interfaces and subnets:

{icc-118a:root}/ # su - grid
$ oifcfg getif
en1  192.168.2.0  global  cluster_interconnect
en2  1.1.1.0  global  public
$

{icc-119a:root}/ # su - grid
$ oifcfg getif
en1  192.168.2.0  global  cluster_interconnect
en2  1.1.1.0  global  public
$

{icc-120a:root}/ # su - grid
$ oifcfg getif
en1  192.168.2.0  global  cluster_interconnect
en2  1.1.1.0  global  public
$
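The `oifcfg getif` output is easy to parse in scripts, for example to pick out the private interconnect interface. In this sketch a here-document with the listing shown above replaces the live command, so it runs anywhere.

```shell
# Extract the interface flagged as cluster_interconnect from
# oifcfg-style output (sample text matching the listing above).
interconnect=$(awk '$4 == "cluster_interconnect" { print $1 }' <<'EOF'
en1  192.168.2.0  global  cluster_interconnect
en2  1.1.1.0      global  public
EOF
)
echo "Private interconnect interface: $interconnect"
```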

6 On each node, create the following script, named for example crsstat1, in $ORA_CRS_HOME/bin:

Include the following content in the file crsstat1.

#--------------------------- Begin Shell Script ----------------------------
#!/usr/bin/ksh
#
# Sample 10g CRS resource status query script
#
# Description:
#    - Returns formatted version of crs_stat -t, in tabular
#      format, with the complete rsc names and filtering keywords
#    - The argument, $RSC_KEY, is optional and if passed to the script, will

#      limit the output to HA resources whose names match $RSC_KEY.
# Requirements:
#    - $ORA_CRS_HOME should be set in your environment
RSC_KEY=$1
QSTAT=-u
AWK=/usr/bin/awk    # if not available use /usr/bin/awk
# Table header:
echo ""
$AWK \
  'BEGIN {printf "%-45s %-10s %-18s\n", "HA Resource", "Target", "State";
          printf "%-45s %-10s %-18s\n", "-----------", "------", "-----";}'
# Table body:
crs_stat $QSTAT | $AWK \
  'BEGIN { FS="="; state = 0; }
  $1~/NAME/ && $2~/'$RSC_KEY'/ {appname = $2; state=1};
  state == 0 {next;}
  $1~/TARGET/ && state == 1 {apptarget = $2; state=2;}
  $1~/STATE/ && state == 2 {appstate = $2; state=3;}
  state == 3 {printf "%-45s %-10s %-18s\n", appname, apptarget, appstate; state=0;}'
#--------------------------- End Shell Script ------------------------------

Then change the file attributes to make it executable. As the grid user, from any node, execute the script:

# chmod +x crsstat1
# ./crsstat1
$ ./crsstat1.sh
HA Resource                                   Target     State
-----------                                   ------     -----
ora.LISTENER.lsnr                             ONLINE     ONLINE on icc-118a
ora.LISTENER_SCAN1.lsnr                       ONLINE     ONLINE on icc-118a
ora.OCRVOT.dg                                 ONLINE     ONLINE on icc-118a
ora.asm                                       ONLINE     ONLINE on icc-118a
ora.cvu                                       ONLINE     ONLINE on icc-118a
ora.gsd                                       OFFLINE    OFFLINE
ora.icc-118a.ASM1.asm                         ONLINE     ONLINE on icc-118a
ora.icc-118a.LISTENER_ICC-118A.lsnr           ONLINE     ONLINE on icc-118a
ora.icc-118a.gsd                              OFFLINE    OFFLINE
ora.icc-118a.ons                              ONLINE     ONLINE on icc-118a
ora.icc-118a.vip                              ONLINE     ONLINE on icc-118a
ora.icc-119a.ASM2.asm                         ONLINE     ONLINE on icc-119a
ora.icc-119a.LISTENER_ICC-119A.lsnr           ONLINE     ONLINE on icc-119a
ora.icc-119a.gsd                              OFFLINE    OFFLINE
ora.icc-119a.ons                              ONLINE     ONLINE on icc-119a
ora.icc-119a.vip                              ONLINE     ONLINE on icc-119a
ora.icc-120a.ASM3.asm                         ONLINE     ONLINE on icc-120a
ora.icc-120a.LISTENER_ICC-120A.lsnr           ONLINE     ONLINE on icc-120a
ora.icc-120a.gsd                              OFFLINE    OFFLINE
ora.icc-120a.ons                              ONLINE     ONLINE on icc-120a
ora.icc-120a.vip                              ONLINE     ONLINE on icc-120a
ora.net1.network                              ONLINE     ONLINE on icc-118a
ora.oc4j                                      ONLINE     ONLINE on icc-118a
ora.ons                                       ONLINE     ONLINE on icc-118a
ora.registry.acfs                             ONLINE     ONLINE on icc-118a
ora.scan1.vip                                 ONLINE     ONLINE on icc-118a
$
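The heart of crsstat1 is the awk state machine that folds each NAME/TARGET/STATE triple emitted by `crs_stat -u` into one table row. The sketch below runs the same state machine against a canned triple (a here-document with a hypothetical resource name stands in for live `crs_stat` output), so you can see the reshaping without a cluster.

```shell
# Same awk state machine as in crsstat1, driven by sample crs_stat -u
# style output. ora.demo.vip is a hypothetical resource name.
RSC_KEY=""   # an empty key matches every resource, as in the script
row=$(awk 'BEGIN { FS="="; state = 0; }
  $1~/NAME/ && $2~/'"$RSC_KEY"'/ { appname = $2; state = 1 };
  state == 0 { next; }
  $1~/TARGET/ && state == 1 { apptarget = $2; state = 2; }
  $1~/STATE/  && state == 2 { appstate = $2; state = 3; }
  state == 3 { printf "%-45s %-10s %-18s\n", appname, apptarget, appstate;
               state = 0; }' <<'EOF'
NAME=ora.demo.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on node1
EOF
)
echo "$row"
```

The three rules fire in sequence as NAME, TARGET, and STATE lines arrive; only once all three have been seen is the formatted row printed.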

7 List all VIP resources:

{icc-118a:grid}/ $ ./crsstat1.sh | grep vip
ora.icc-118a.vip                              ONLINE     ONLINE on icc-118a

ora.icc-119a.vip                              ONLINE     ONLINE on icc-119a
ora.icc-120a.vip                              ONLINE     ONLINE on icc-120a
ora.scan1.vip                                 ONLINE     ONLINE on icc-118a
$

8 List all listener resources:

$ ./crsstat1.sh | grep lsnr
ora.LISTENER.lsnr                             ONLINE     ONLINE on icc-118a
ora.LISTENER_SCAN1.lsnr                       ONLINE     ONLINE on icc-118a
ora.icc-118a.LISTENER_ICC-118A.lsnr           ONLINE     ONLINE on icc-118a
ora.icc-119a.LISTENER_ICC-119A.lsnr           ONLINE     ONLINE on icc-119a
ora.icc-120a.LISTENER_ICC-120A.lsnr           ONLINE     ONLINE on icc-120a
$

9 Check the OCR disks:

$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2792
         Available space (kbytes) :     259328
         ID                       : 1119411277
         Device/File Name         :    +OCRVOT
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
$
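The three space figures reported by ocrcheck should be self-consistent (used plus available equals total). A one-line sanity check with the numbers from the output above:

```shell
# Sanity-check the ocrcheck figures: used + available = total (kbytes).
total=262120 used=2792 avail=259328
if [ $((used + avail)) -eq "$total" ]; then
    echo "OCR space figures are consistent"
else
    echo "OCR space figures do NOT add up" >&2
fi
```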

10 Check the voting disks:

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name                Disk group
--  -----    -----------------                ---------                ----------
 1. ONLINE   424fd4165c9a4f3fbf1db8ab22770344 (/dev/ASM_OCRVOT_disk5)  [OCRVOT]
Located 1 voting disk(s).
$

11 Command to stop the cluster:

# crsctl stop
Usage:
  crsctl stop resource {<resName> [...] | -w <filter> | -all} [-n <server>] [-k <cid>] [-d <did>] [-env "env1=val1,env2=val2,..."] [-f] [-i]
     Stop designated resources
  where
     resName [...]    One or more blank-separated resource names
     -w               Resource filter
     -all             All resources
     -n               Server name

     -k               Resource cardinality ID
     -d               Resource degree ID
     -env             Stop command environment
     -f               Force option
     -i               Fail if request cannot be processed immediately

  crsctl stop crs [-f]
     Stop OHAS on this server
  where
     -f               Force option

  crsctl stop cluster [[-all]|[-n <server> [...]]] [-f]
     Stop CRS stack
  where
     Default          Stop local server
     -all             Stop all servers
     -n               Stop named servers
     server [...]     One or more blank-separated server names
     -f               Force option
#

./crsctl stop cluster -all

12 Command to start the cluster:

# crsctl start
Usage:
  crsctl start resource {<resName> [...] | -w <filter> | -all} [-n <server>] [-k <cid>] [-d <did>] [-env "env1=val1,env2=val2,..."] [-f] [-i]
     Start designated resources
  where
     resName [...]    One or more blank-separated resource names
     -w               Resource filter
     -all             All resources
     -n               Server name
     -k               Resource cardinality ID
     -d               Resource degree ID
     -env             Start command environment
     -f               Force option
     -i               Fail if request cannot be processed immediately

  crsctl start crs [-excl|-nowait]
     Start OHAS on this server
  where
     -excl            Start Oracle Clusterware in exclusive mode
     -nowait          Do not wait for OHAS to start

  crsctl start cluster [[-all]|[-n <server> [...]]]
     Start CRS stack
  where
     Default          Start local server
     -all             Start all servers
     -n               Start named servers
     server [...]     One or more blank-separated server names
#

./crsctl start cluster -all
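The stop and start steps can be wrapped in a small helper. This is only a sketch: `restart_cluster` and the `CRSCTL` override are hypothetical names, and the dry run below substitutes `echo` for the real binary so nothing is actually stopped.

```shell
# Hypothetical helper: bounce the whole CRS stack from one node.
# CRSCTL can be overridden (e.g. with "echo" for a dry run); it
# defaults to the crsctl in the Grid Infrastructure home used above.
restart_cluster() {
    CRSCTL=${CRSCTL:-/oracle/app/11.2.0/grid/bin/crsctl}
    "$CRSCTL" stop cluster -all &&
    "$CRSCTL" start cluster -all
}

# Dry run: print the commands instead of executing them.
dry_run=$(CRSCTL=echo restart_cluster)
echo "$dry_run"
```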

If there is an error, you can inspect the crsd log file:

$ cd /oracle/app/11.2.0/grid/log/demo2/crsd
$ ls
crsd.l01  crsd.log  crsd.trc  crsdOUT.log
$ cat crsd.log

13 Command to stop and start the grid listener:

# su - grid
$ srvctl stop listener -l LISTENER
$ srvctl start listener -l LISTENER

14 To connect to the ASM instance:

$ sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on Sun Mar 25 04:37:21 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL> CONNECT SYS AS SYSASM
Enter password: root
Connected.
SQL> show parameter name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      +ASM
instance_name                        string      +ASM1
lock_name_space                      string
service_names                        string      +ASM
SQL>

The listeners in this configuration are summarized below:

Listener         Owned by   icc-118a     icc-119a     icc-120a     Port   Clusterware resource managed
---------------  ---------  -----------  -----------  -----------  -----  ----------------------------
LISTENER_SCAN1   grid       Yes          No           No           1522   Database instance TEST, ASM instances
LISTENER         grid       Yes (+ASM1)  Yes (+ASM2)  Yes (+ASM3)  1524   ASM instances
LISTENER_RDBMS   oracle     Yes          Yes          Yes          1523   None

{icc-118a:grid}/ $ lsnrctl status listener_scan1

LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production on 25-MAR-2012 04:42:21

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
Start Date                24-MAR-2012 16:28:50
Uptime                    0 days 12 hr. 13 min. 30 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      ON
Listener Parameter File   /oracle/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /oracle/app/11.2.0/grid/log/diag/tnslsnr/icc-118a/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.18)(PORT=1522)))
The listener supports no services
The command completed successfully

$
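Scripts often need the port a listener actually binds to; it can be scraped from the endpoint line of the lsnrctl output. A sketch, using the TCP endpoint shown above as sample text:

```shell
# Pull the TCP port out of an lsnrctl-style endpoint description
# (the string below is the SCAN endpoint from the listing above).
endpoint='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=1.1.1.18)(PORT=1522)))'
port=$(printf '%s\n' "$endpoint" | sed -n 's/.*(PORT=\([0-9][0-9]*\)).*/\1/p')
echo "SCAN listener port: $port"
```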

For the Cluster Synchronization Service (CSS), the master node can be found by searching ORACLE_HOME/log/<nodename>/cssd/ocssd.log, where ORACLE_HOME is the Oracle home of the Oracle Clusterware (in Oracle Database 11g Release 2, this is the Grid Infrastructure home). To find the master of an enqueue resource with Oracle RAC (as the oracle user), query the v$ges_resource view, which contains a master_node column.