11gR2 on Openfiler


DBA Tips Archive for Oracle

Building an Inexpensive Oracle RAC 11g R2 on Linux - (RHEL 5.5) by Jeff Hunter, Sr. Database Administrator

Contents

Introduction
Oracle RAC 11g Overview
Shared-Storage Overview
iSCSI Technology
Hardware and Costs
Install the Linux Operating System
Install Required Linux Packages for Oracle RAC
Install Openfiler
Network Configuration
Cluster Time Synchronization Service
Configure iSCSI Volumes using Openfiler
Configure iSCSI Volumes on Oracle RAC Nodes
Create Job Role Separation Operating System Privileges Groups, Users, and Directories
Logging In to a Remote System Using X Terminal
Configure the Linux Servers for Oracle
Configure RAC Nodes for Remote Access using SSH - (Optional)
Install and Configure ASMLib 2.0
Download Oracle RAC 11g release 2 Software
Pre-installation Tasks for Oracle Grid Infrastructure for a Cluster
Install Oracle Grid Infrastructure for a Cluster
Post-installation Tasks for Oracle Grid Infrastructure for a Cluster
Create ASM Disk Groups for Data and Fast Recovery Area
Install Oracle Database 11g with Oracle Real Application Clusters
Install Oracle Database 11g Examples (formerly Companion)
Create the Oracle Cluster Database
Post Database Creation Tasks - (Optional)
Create / Alter Tablespaces
Verify Oracle Grid Infrastructure and Database Configuration
Starting / Stopping the Cluster
Troubleshooting
Conclusion
Acknowledgements
About the Author

Introduction

Oracle RAC 11g release 2 allows DBAs to configure a clustered database solution with superior fault tolerance, load balancing, and scalability. However, DBAs who want to become more familiar with the features and benefits of database clustering will find that the cost of configuring even a small RAC cluster ranges from US$10,000 to US$20,000. That cost does not even include the heart of a production RAC configuration, the shared storage. In most cases, this would be a Storage Area Network (SAN), which generally starts at US$10,000.

Unfortunately, for many shops, the price of the hardware required for a typical RAC configuration exceeds most training budgets. For those who want to become familiar with Oracle RAC 11g without a major cash outlay, this guide provides a low-cost alternative: configuring an Oracle RAC 11g release 2 system using commercial off-the-shelf components and downloadable software at an estimated cost of US$2,800.

The system will consist of a two-node cluster, with both nodes running Linux (CentOS 5.5 for x86_64), Oracle RAC 11g release 2 for Linux x86_64, and ASMLib 2.0. All shared disk storage for Oracle RAC will be based on iSCSI using Openfiler release 2.3 x86_64 running on a third node (known in this article as the Network Storage Server).

This guide is provided for educational purposes only, so the setup is kept simple to demonstrate ideas and concepts. For example, the shared Oracle Clusterware files (OCR and voting files) and all physical database files in this article will be set up on only one physical disk, while in practice they should be stored on multiple physical drives configured for increased performance and redundancy (i.e. RAID). In addition, each Linux node will only be configured with two network interfaces: one for the public network (eth0) and one that will be used for both the Oracle RAC private interconnect "and" the network storage server for shared iSCSI access (eth1). For a production RAC implementation, the private interconnect should be at least Gigabit (or more) with redundant paths and "only" be used by Oracle to transfer Cluster Manager and Cache Fusion related data. A third dedicated network interface (eth2, for example) should be configured on another redundant Gigabit network for access to the network storage server (Openfiler).

Oracle Documentation

While this guide provides detailed instructions for successfully installing a complete Oracle RAC 11g system, it is by no means a substitute for the official Oracle documentation (see the list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options, installation, and administration with Oracle RAC 11g. Oracle's official documentation site is docs.oracle.com.

Grid Infrastructure Installation Guide - 11g Release 2 (11.2) for Linux

Clusterware Administration and Deployment Guide - 11g Release 2 (11.2)

Oracle Real Application Clusters Installation Guide - 11g Release 2 (11.2) for Linux and UNIX

Real Application Clusters Administration and Deployment Guide - 11g Release 2 (11.2)

Oracle Database 2 Day + Real Application Clusters Guide - 11g Release 2 (11.2)

Oracle Database Storage Administrator's Guide - 11g Release 2 (11.2)

Network Storage Server

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface.

Openfiler supports CIFS, NFS, HTTP/DAV and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. The operating system (rPath Linux) and the Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single volume group that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle grid infrastructure and the Oracle RAC database.

Oracle Grid Infrastructure 11g Release 2 (11.2)

With Oracle grid infrastructure 11g release 2 (11.2), the Automatic Storage Management (ASM) and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. You must install the grid infrastructure in order to use Oracle RAC 11g release 2. Configuration assistants that are responsible for configuring ASM and Oracle Clusterware start after the installer interview process. While the installation of the combined products is called Oracle grid infrastructure, Oracle Clusterware and Automatic Storage Management remain separate products.


After Oracle grid infrastructure is installed and configured on both nodes in the cluster, the next step will be to install the Oracle Real Application Clusters (Oracle RAC) software on both Oracle RAC nodes.

In this article, the Oracle grid infrastructure and Oracle RAC software will be installed on both nodes using the optional Job Role Separation configuration. One OS user will be created to own each Oracle software product: "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software owner. Throughout this article, the user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.
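To make the job role separation model concrete, the sketch below shows what the group and user creation performed later in this guide looks like. The UID and GID values are illustrative assumptions; the exact values, directories, and permissions appear in the "Create Job Role Separation Operating System Privileges Groups, Users, and Directories" section.

# Run as root on both Oracle RAC nodes (UIDs/GIDs shown are illustrative)
groupadd -g 1000 oinstall    # Oracle Inventory group - primary group for both owners
groupadd -g 1200 asmadmin    # OSASM group
groupadd -g 1201 asmdba      # OSDBA for ASM group
groupadd -g 1202 asmoper     # OSOPER for ASM group
groupadd -g 1300 dba         # OSDBA group
groupadd -g 1301 oper        # OSOPER group
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle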

Assigning IP Addresses

Prior to Oracle Clusterware 11g release 2, the only method available for assigning IP addresses to each of the Oracle RAC nodes was to have the network administrator manually assign static IP addresses in DNS, never to use DHCP. This would include the public IP address for the node, the RAC interconnect, the virtual IP address (VIP), and, new to 11g release 2, the Single Client Access Name (SCAN) virtual IP address(es).

Oracle Clusterware 11g release 2 now provides two methods for assigning IP addresses to all Oracle RAC nodes:

1. Assigning IP addresses dynamically using Grid Naming Service (GNS), which makes use of DHCP
2. The traditional method of manually assigning static IP addresses in Domain Name Service (DNS)

Assigning IP Addresses Dynamically using Grid Naming Service (GNS)

A new method for assigning IP addresses named Grid Naming Service (GNS) was introduced in Oracle Clusterware 11g release 2. GNS allows all private interconnect addresses, as well as most of the VIP addresses, to be dynamically assigned using DHCP. GNS and DHCP are key elements of Oracle's new Grid Plug and Play (GPnP) feature that, as Oracle states, eliminates per-node configuration data and the need for explicit add and delete node steps. GNS enables a dynamic grid infrastructure through self-management of the network requirements for the cluster.

While assigning IP addresses using GNS certainly has its benefits and offers more flexibility over manually defining static IP addresses, it does come at the cost of complexity and requires components not defined in this guide. For example, activating GNS in a cluster requires a DHCP server on the public network, which falls outside the scope of building an inexpensive Oracle RAC.

The example Oracle RAC configuration described in this guide will use the traditional method of manually assigning static IP addresses in DNS.

To learn more about the benefits and how to configure GNS, please see the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Assigning IP Addresses Manually using Static IP Address - (The DNS Method)

If you choose not to use GNS, manually defining static IP addresses is still available with Oracle Clusterware 11g release 2 and will be the method used in this article to assign all required Oracle Clusterware networking components (public IP address for the node, RAC interconnect, virtual IP address, and SCAN virtual IP).

It should be pointed out that prior to Oracle 11g release 2, DNS was not a strict requirement for successfully configuring Oracle RAC. It was technically possible (although not recommended for a production system) to define all IP addresses only in the hosts file on all nodes in the cluster (i.e. /etc/hosts). This actually worked to my advantage in my previous articles on building an inexpensive RAC because it was one less component to document and configure.

So, why is the use of DNS now a requirement when manually assigning static IP addresses? The answer is SCAN. Oracle Clusterware 11g release 2 requires the use of DNS in order to store the SCAN virtual IP address(es). In addition to the requirement of configuring the SCAN virtual IP address in DNS, we will also configure the public and virtual IP address for all Oracle RAC nodes in DNS for name resolution. If you do not have access to a DNS server, instructions will be included later in this guide on how to install a minimal DNS server on the Openfiler network storage server.

When using the DNS method for assigning IP addresses, Oracle recommends that all static IP addresses be manually configured in DNS before starting the Oracle grid infrastructure installation.

Single Client Access Name (SCAN) for the Cluster

If you have ever been tasked with extending an Oracle RAC cluster by adding a new node (or shrinking a RAC cluster by removing a node), then you know the pain of going through a list of all clients and updating their SQL*Net or JDBC configuration to reflect the new or deleted node. To address this problem, Oracle 11g release 2 introduced a new feature known as Single Client Access Name, or SCAN for short. SCAN provides a single host name for clients to access an Oracle Database running in a cluster. Clients using SCAN do not need to change their TNS configuration if you add or remove nodes in the cluster. The SCAN resource and its associated IP address(es) provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. You will be asked to provide the host name (also called the SCAN name in this document) and up to three IP addresses to be used for the SCAN resource during the interview phase of the Oracle grid infrastructure installation. For high availability and scalability, Oracle recommends that you configure the SCAN name for round-robin resolution to three IP addresses. At a minimum, the SCAN must resolve to at least one address.

The SCAN virtual IP name is similar to the names used for a node's virtual IP address, such as racnode1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and can be associated with multiple IP addresses, not just one address.

During installation of the Oracle grid infrastructure, a listener is created for each of the SCAN addresses. Clients that access the Oracle RAC database should use the SCAN name or SCAN address, not the VIP name or address. If an application uses a SCAN to connect to the cluster database, the network configuration files on the client computer do not need to be modified when nodes are added to or removed from the cluster. Note that SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.

The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster or by using the traditional method of assigning static IP addresses using Domain Name Service (DNS) resolution.

In this article, I will configure SCAN for round-robin resolution to three manually configured static IP addresses using the DNS method:

racnode-cluster-scan  IN  A  192.168.1.187
racnode-cluster-scan  IN  A  192.168.1.188
racnode-cluster-scan  IN  A  192.168.1.189
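Once these records are in place, a quick sanity check from any node should show the SCAN name resolving to all three addresses; with round-robin resolution working, the order of the addresses returned will rotate between queries:

# Run from any node once the DNS zone has been loaded;
# all three addresses (192.168.1.187 through 192.168.1.189) should be returned
nslookup racnode-cluster-scan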

Further details regarding the configuration of SCAN will be provided in the section "Verify SCAN Configuration" during the network configuration phase of this guide.

Automatic Storage Management and Oracle Clusterware Files

Automatic Storage Management (ASM) is now fully integrated with Oracle Clusterware in the Oracle grid infrastructure. Oracle ASM and Oracle Database 11g release 2 provide a more enhanced storage solution than previous releases. Part of this solution is the ability to store the Oracle Clusterware files, namely the Oracle Cluster Registry (OCR) and the Voting Files (VF, also known as the Voting Disks), on ASM. This feature enables ASM to provide a unified storage solution, storing all the data for the clusterware and the database without the need for third-party volume managers or cluster file systems.

Just like database files, Oracle Clusterware files are stored in an ASM disk group and therefore utilize the ASM disk group configuration with respect to redundancy. For example, a Normal Redundancy ASM disk group will hold a two-way-mirrored OCR. A failure of one disk in the disk group will not prevent access to the OCR. With a High Redundancy ASM disk group (three-way-mirrored), two independent disks can fail without impacting access to the OCR. With External Redundancy, no protection is provided by Oracle.

Oracle only allows one OCR per disk group in order to protect against physical disk failures. When configuring Oracle Clusterware files on a production system, Oracle recommends using either normal or high redundancy ASM disk groups. If disk mirroring is already occurring at either the OS or hardware level, you can use external redundancy.

The Voting Files are managed in a similar way to the OCR. They follow the ASM disk group configuration with respect to redundancy, but are not managed as normal ASM files in the disk group. Instead, each voting disk is placed on a specific disk in the disk group. The disk and the location of the Voting Files on the disks are stored internally within Oracle Clusterware.

The following example describes how the Oracle Clusterware files are stored in ASM after installing Oracle grid infrastructure using this guide. To view the OCR, use ASMCMD:

[grid@racnode1 ~]$ asmcmd
ASMCMD> ls -l +CRS/racnode-cluster/OCRFILE
Type     Redund  Striped  Time             Sys  Name
OCRFILE  UNPROT  COARSE   NOV 22 12:00:00  Y    REGISTRY.255.703024853

From the example above, you can see that after listing all of the ASM files in the +CRS/racnode-cluster/OCRFILE directory, it only shows the OCR (REGISTRY.255.703024853). The listing does not show the Voting File(s) because they are not managed as normal ASM files. To find the location of all Voting Files within Oracle Clusterware, use the crsctl query css votedisk command as follows:

[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                  File Name       Disk group
--  -----    -----------------                  ---------       ----------
 1. ONLINE   4cbbd0de4c694f50bfd3857ebd8ad8c4   (ORCL:CRSVOL1)  [CRS]
Located 1 voting disk(s).

If you decide against using ASM for the OCR and voting disk files, Oracle Clusterware still allows these files to be stored on a cluster file system like Oracle Cluster File System release 2 (OCFS2) or an NFS system. Please note that installing Oracle Clusterware files on raw or block devices is no longer supported, unless an existing system is being upgraded.

Previous versions of this guide used OCFS2 for storing the OCR and voting disk files. This guide will store the OCR and voting disk files on ASM in an ASM disk group named +CRS using external redundancy, which gives one OCR location and one voting disk location. The ASM disk group should be created on shared storage and be at least 2GB in size.

The Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed on ASM in an ASM disk group named +RACDB_DATA, while the Fast Recovery Area will be created in a separate ASM disk group named +FRA.
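In this guide, those disk groups are created through the installer and ASMCA, but for orientation, the equivalent SQL run against the ASM instance would look roughly like the sketch below. The ASMLib disk name ORCL:CRSVOL1 appears in the votedisk listing above; ORCL:DATAVOL1 and ORCL:FRAVOL1 are illustrative assumptions at this point and are defined later in this article.

SQL> CREATE DISKGROUP CRS EXTERNAL REDUNDANCY DISK 'ORCL:CRSVOL1';
SQL> CREATE DISKGROUP RACDB_DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATAVOL1';
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRAVOL1';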

The two Oracle RAC nodes and the network storage server will be configured as follows:

Oracle RAC / Openfiler Nodes

Node Name   Instance Name  Database Name            Processor                           RAM
racnode1    racdb1         racdb.idevelopment.info  1 x Dual Core Intel Xeon, 3.00 GHz  4GB
racnode2    racdb2         racdb.idevelopment.info  1 x Dual Core Intel Xeon, 3.00 GHz  4GB
openfiler1  -              -                        2 x Intel Xeon, 3.00 GHz            6GB


Network Configuration

Node Name   Public IP      Private IP     Virtual IP     SCAN Name
racnode1    192.168.1.151  192.168.2.151  192.168.1.251  racnode-cluster-scan
racnode2    192.168.1.152  192.168.2.152  192.168.1.252  racnode-cluster-scan
openfiler1  192.168.1.195  192.168.2.195  -              -


Oracle Software Components

Software Component   OS User  Primary Group  Supplementary Groups       Home Directory  Oracle Base / Oracle Home
Grid Infrastructure  grid     oinstall       asmadmin, asmdba, asmoper  /home/grid      /u01/app/grid, /u01/app/11.2.0/grid
Oracle RAC           oracle   oinstall       dba, oper, asmdba          /home/oracle    /u01/app/oracle, /u01/app/oracle/product/11.2...

Storage Components

Storage Component   File System  Volume Size  ASM Volume Group Name  ASM Redundancy
OCR/Voting Disk     ASM          2GB          +CRS                   External
Database Files      ASM          32GB         +RACDB_DATA            External
Fast Recovery Area  ASM          32GB         +FRA                   External

This article is only designed to work as documented with absolutely no substitutions. The only exception here is the choice of vendor hardware (i.e. machines, networking equipment, and internal / external hard drives). Ensure that the hardware you purchase from the vendor is supported on Red Hat Enterprise Linux 5 and Openfiler 2.3 (Final Release).

If you are looking for an example that takes advantage of Oracle RAC 10g release 2 with RHEL 5.3 using iSCSI, click here.

If you are looking for an example that takes advantage of Oracle RAC 11g release 1 with RHEL 5.1 using iSCSI, click here.

Oracle RAC 11g Overview

Before introducing the details for building a RAC cluster, it might be helpful to first clarify what a cluster is. A cluster is a group of two or more interconnected computers or servers that appear as if they are one server to end users and applications and that generally share the same set of physical disks. The key benefit of clustering is to provide a highly available framework where the failure of one node (for example, a database server running an instance of Oracle) does not bring down an entire application. In the case of failure with one of the servers, the other surviving server (or servers) can take over the workload from the failed server and the application continues to function normally as if nothing has happened.

The concept of clustering computers actually started several decades ago. The first successful cluster product, ARCnet, was developed by Datapoint in 1977. ARCnet enjoyed much success in academic research labs, but didn't really take off in the commercial market. Clustering did not gain commercial momentum until the 1980s, when Digital Equipment Corporation (DEC) released its VAX cluster product for the VAX/VMS operating system.

With the release of Oracle 6 for the Digital VAX cluster product, Oracle was the first commercial database to support clustering at the database level. It wasn't long, however, before Oracle realized the need for a more efficient and scalable distributed lock manager (DLM), as the one included with the VAX/VMS cluster product was not well suited for database applications. Oracle decided to design and write their own DLM for the VAX/VMS cluster product, one which provided the fine-grain block level locking required by the database. Oracle's own DLM was included in Oracle 6.2, which gave birth to Oracle Parallel Server (OPS), the first database to run in a parallel server configuration.

By Oracle 7, OPS was extended to include support for not only the VAX/VMS cluster product but also most flavors of UNIX. This framework required vendor-supplied clusterware which worked well, but made for a complex environment to set up and manage given the multiple layers involved. With Oracle 8, Oracle introduced a generic lock manager that was integrated into the Oracle kernel. In later releases of Oracle, this became known as the Integrated Distributed Lock Manager (IDLM) and relied on an additional layer known as the Operating System Dependent (OSD) layer. This new model paved the way for Oracle to not only have their own DLM, but to also create their own clusterware product in future releases.

Oracle Real Application Clusters (RAC), introduced with Oracle9i, is the successor to Oracle Parallel Server. Using the same IDLM, Oracle9i could still rely on external clusterware but was the first release to include their own clusterware product named Cluster Ready Services (CRS). With Oracle9i, CRS was only available for Windows and Linux. By Oracle 10g release 1, Oracle's clusterware product was available for all operating systems and was the required cluster technology for Oracle RAC. With the release of Oracle Database 10g release 2 (10.2), Cluster Ready Services was renamed to Oracle Clusterware. When using Oracle 10g or higher, Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates (except for TruCluster, in which case you need vendor clusterware). You can still use clusterware from other vendors if the clusterware is certified, but keep in mind that Oracle RAC still requires Oracle Clusterware as it is fully integrated with the database software. This guide uses Oracle Clusterware, which as of 11g release 2 (11.2) is a component of Oracle grid infrastructure.

Like OPS, Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out, and at the same time, since all instances access the same database, the failure of one node will not cause the loss of access to the database.

At the heart of Oracle RAC is a shared disk subsystem. Each instance in the cluster must be able to access all of the data, redo log files, control files and parameter file for all other instances in the cluster. The data disks must be globally available in order to allow all instances to access the database. Each instance has its own redo log files and UNDO tablespace that are locally read-writeable. The other instances in the cluster must be able to access them (read-only) in order to recover that instance in the event of a system failure. The redo log files for an instance are only writeable by that instance and will only be read from another instance during system failure. The UNDO, on the other hand, is read all the time during normal database operation (e.g. for CR fabrication).

A big difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one instance to another required the data to be written to disk first before the requesting instance could read that data (after acquiring the required locks). This process was called disk pinging. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

Not all database clustering solutions use shared storage. Some vendors use an approach known as a Federated Cluster, in which data is spread across several machines rather than shared by all. With Oracle RAC, however, multiple instances use the same set of disks for storing data. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security.

Pre-configured Oracle RAC solutions are available from vendors such as Dell, IBM and HP for production environments. This article, however, focuses on putting together your own Oracle RAC 11g environment for development and testing by using Linux servers and a low-cost shared disk solution: iSCSI.

For more background about Oracle RAC, visit the Oracle RAC Product Center on OTN.

Shared-Storage Overview

Today, fibre channel is one of the most popular solutions for shared storage. Fibre channel is a high-speed serial-transfer interface that is used to connect systems and storage devices in either point-to-point (FC-P2P), arbitrated loop (FC-AL), or switched topologies (FC-SW). Protocols supported by Fibre Channel include SCSI and IP. Fibre channel configurations can support as many as 127 nodes and have a throughput of up to 2.12 Gigabits per second in each direction, with 4.25 Gbps expected.

Fibre channel, however, is very expensive. Just the fibre channel switch alone can start at around US$1,000. This does not even include the fibre channel storage array and high-end drives, which can reach prices of about US$300 for a single 36GB drive. A typical fibre channel setup which includes fibre channel cards for the servers is roughly US$10,000, which does not include the cost of the servers that make up the Oracle database cluster.

A less expensive alternative to fibre channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget, at around US$2,000 to US$5,000 for a two-node cluster.

Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K. See the Certify page on Oracle Metalink for supported Network Attached Storage (NAS) devices that can be used with Oracle RAC. One of the key drawbacks that has limited the benefits of using NFS and NAS for database storage has been performance degradation and complex configuration requirements. Standard NFS client software (client systems that use the operating system provided NFS driver) is not optimized for Oracle database file I/O access patterns. With the introduction of Oracle 11g, a new feature known as Direct NFS Client integrates the NFS client functionality directly in the Oracle software. Through this integration, Oracle is able to optimize the I/O path between the Oracle software and the NFS server, resulting in significant performance gains. Direct NFS Client can simplify, and in many cases automate, the performance optimization of the NFS client configuration for database workloads. To learn more about Direct NFS Client, see the Oracle White Paper entitled "Oracle Database 11g Direct NFS Client".

The shared storage that will be used for this article is based on iSCSI technology using a network storage server installed with Openfiler. This solution offers a low-cost alternative to fibre channel for testing and educational purposes, but given the low-end hardware being used, it is not often used in a production environment.

iSCSI Technology

For many years, the only technology that existed for building a network based storage solution was a Fibre Channel Storage Area Network (FC SAN). Based on an earlier set of ANSI protocols called Fiber Distributed Data Interface (FDDI), Fibre Channel was developed to move SCSI commands over a storage network.

Several of the advantages of FC SAN include greater performance, increased disk utilization, improved availability, better scalability, and most important to us, support for server clustering! Still today, however, FC SANs suffer from three major disadvantages. The first is price. While the costs involved in building a FC SAN have come down in recent years, the cost of entry still remains prohibitive for small companies with limited IT budgets. The second is incompatible hardware components. Since its adoption, many product manufacturers have interpreted the Fibre Channel specifications differently from each other, which has resulted in scores of interconnect problems. When purchasing Fibre Channel components from a common manufacturer, this is usually not a problem. The third disadvantage is the fact that a Fibre Channel network is not Ethernet! It requires a separate network technology along with a separate skill set that needs to exist within the data center staff.

With the popularity of Gigabit Ethernet and the demand for lower cost, Fibre Channel has recently been given a run for its money by iSCSI-based storage systems. Today, iSCSI SANs remain the leading competitor to FC SANs.

Ratified on February 11, 2003 by the Internet Engineering Task Force (IETF), the Internet Small Computer System Interface, better known as iSCSI, is an Internet Protocol (IP)-based storage networking standard for establishing and managing connections between IP-based storage devices, hosts, and clients. iSCSI is a data transport protocol defined in the SCSI-3 specifications framework and is similar to Fibre Channel in that it is responsible for carrying block-level data over a storage network. Block-level communication means that data is transferred between the host and the client in chunks called blocks. Database servers depend on this type of communication (as opposed to the file level communication used by most NAS systems) in order to work properly. Like a FC SAN, an iSCSI SAN should be a separate physical network devoted entirely to storage; however, its components can be much the same as in a typical IP network (LAN).

While iSCSI has a promising future, many of its early critics were quick to point out some of its inherent shortcomings with regards to performance. The beauty of iSCSI is its ability to utilize an already familiar IP network as its transport mechanism. The TCP/IP protocol, however, is very complex and CPU intensive. With iSCSI, most of the processing of the data (both TCP and iSCSI) is handled in software and is much slower than Fibre Channel, which is handled completely in hardware. The overhead incurred in mapping every SCSI command onto an equivalent iSCSI transaction is excessive. For many, the solution is to do away with iSCSI software initiators and invest in specialized cards that can offload TCP/IP and iSCSI processing from a server's CPU. These specialized cards are sometimes referred to as an iSCSI Host Bus Adaptor (HBA) or a TCP Offload Engine (TOE) card. Also consider that 10-Gigabit Ethernet is a reality today!

So with all of this talk about iSCSI, does this mean the death of Fibre Channel anytime soon? Probably not. Fibre Channel has clearly demonstrated its capabilities over the years with its capacity for extremely high speeds, flexibility, and robust reliability. Customers who have strict requirements for high performance storage, large complex connectivity, and mission critical reliability will undoubtedly continue to choose Fibre Channel.

As with any new technology, iSCSI comes with its own set of acronyms and terminology. For the purpose of this article, it is only important to understand the difference between an iSCSI initiator and an iSCSI target.

iSCSI Initiator

Basically, an iSCSI initiator is a client device that connects and initiates requests to some service offered by a server (in this case an iSCSI target). The iSCSI initiator software will need to exist on each of the Oracle RAC nodes (racnode1 and racnode2).

An iSCSI initiator can be implemented using either software or hardware. Software iSCSI initiators are available for most major operating system platforms. For this article, we will be using the free Linux Open-iSCSI software driver found in the iscsi-initiator-utils RPM. The iSCSI software initiator is generally used with a standard network interface card (NIC), a Gigabit Ethernet card in most cases. A hardware initiator is an iSCSI HBA (or a TCP Offload Engine (TOE) card), which is basically just a specialized Ethernet card with a SCSI ASIC on-board to offload all the work (TCP and SCSI commands) from the system CPU. iSCSI HBAs are available from a number of vendors, including Adaptec, Alacritech, Intel, and QLogic.
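As a preview of the iSCSI setup covered later in this guide, installing and enabling the software initiator on RHEL/CentOS 5 looks roughly like the following sketch; the full walkthrough appears in the "Configure iSCSI Volumes on Oracle RAC Nodes" section.

# On both Oracle RAC nodes, as root
yum install iscsi-initiator-utils   # provides the Open-iSCSI driver and tools
service iscsid start                # start the iSCSI initiator daemon
chkconfig iscsid on                 # have it start automatically at boot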

iSCSI Target

An iSCSI target is the "server" component of an iSCSI network. This is typically the storage device that contains the information you want and answers requests from the initiator(s). For the purpose of this article, the node openfiler1 will be the iSCSI target.

Hardware and Costs

The hardware used to build our example Oracle RAC 11g environment consists of three Linux servers (two Oracle RAC nodes and one Network Storage Server) and components that can be purchased at many local computer stores or over the Internet.

Oracle RAC Node 1 - (racnode1)

Dell PowerEdge T100

Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
4GB, DDR2, 800MHz
160GB 7.2K RPM SATA 3Gbps Hard Drive
Integrated Graphics - (ATI ES1000)
Integrated Gigabit Ethernet - (Broadcom(R) NetXtreme II(TM) 5722)
16x DVD Drive
No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

US$500

1 x Ethernet LAN Card

Used for RAC interconnect to racnode2 and Openfiler networked storage.

Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select a NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT) US$90

Oracle RAC Node 2 - (racnode2)

Dell PowerEdge T100

Dual Core Intel(R) Xeon(R) E3110, 3.0 GHz, 6MB Cache, 1333MHz
4GB, DDR2, 800MHz
160GB 7.2K RPM SATA 3Gbps Hard Drive
Integrated Graphics - (ATI ES1000)
Integrated Gigabit Ethernet - (Broadcom(R) NetXtreme II(TM) 5722)
16x DVD Drive
No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

US$500


1 x Ethernet LAN Card

Used for RAC interconnect to racnode1 and Openfiler networked storage.

Each Linux server for Oracle RAC should contain two NIC adapters. The Dell PowerEdge T100 includes an embedded Broadcom(R) NetXtreme II(TM) 5722 Gigabit Ethernet NIC that will be used to connect to the public network. A second NIC adapter will be used for the private network (RAC interconnect and Openfiler networked storage). Select a NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Intel(R) PRO/1000 PT Server Adapter - (EXPI9400PT) US$90

Network Storage Server - (openfiler1)

Dell PowerEdge 1800

Dual 3.0GHz Xeon / 1MB Cache / 800FSB (SL7PE)
6GB of ECC Memory
500GB SATA Internal Hard Disk
73GB 15K SCSI Internal Hard Disk
Integrated Graphics
Single embedded Intel 10/100/1000 Gigabit NIC
16x DVD Drive
No Keyboard, Monitor, or Mouse - (Connected to KVM Switch)

Note: The rPath Linux operating system and Openfiler application will be installed on the 500GB internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured for the shared database storage. The Openfiler server will be configured to use this second hard disk for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware as well as the clustered database files.

Please be aware that any type of hard disk (internal or external) should work for the shared disk storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

Finally, although the Openfiler server used in this example configuration contains 6GB of memory, this is by no means a requirement. The Openfiler server could be configured with as little as 2GB for a small test / evaluation network storage server.

US$800

1 x Ethernet LAN Card

Used for networked storage on the private network.

The Network Storage Server (Openfiler server) should contain two NIC adapters. The Dell PowerEdge 1800 machine included an integrated 10/100/1000 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private network (Openfiler networked storage). Select a NIC adapter that is compatible with the maximum data transmission speed of the network switch to be used for the private network. For the purpose of this article, I used a Gigabit Ethernet switch (and a 1Gb Ethernet card) for the private network.

Intel(R) PRO/1000 MT Server Adapter - (PWLA8490MT) US$125

Miscellaneous Components

1 x Ethernet Switch

US$50


Used for the interconnect between racnode1-priv and racnode2-priv, which will be on the 192.168.2.0 network. This switch will also be used for network storage traffic for Openfiler. For the purpose of this article, I used a Gigabit Ethernet switch (and 1Gb Ethernet cards) for the private network.

Note: This article assumes you already have a switch or VLAN in place that will be used for the public network.

D-Link 8-port 10/100/1000 Desktop Switch - (DGS-2208)

6 x Network Cables

Category 6 patch cable - (Connect racnode1 to public network) US$10
Category 6 patch cable - (Connect racnode2 to public network) US$10
Category 6 patch cable - (Connect openfiler1 to public network) US$10
Category 6 patch cable - (Connect racnode1 to interconnect Ethernet switch) US$10
Category 6 patch cable - (Connect racnode2 to interconnect Ethernet switch) US$10
Category 6 patch cable - (Connect openfiler1 to interconnect Ethernet switch) US$10

Optional Components

KVM Switch

This guide requires access to the console of all machines in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server with its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes unfeasible. A more practical solution would be to configure a dedicated device which would include a single monitor, keyboard, and mouse that would have direct access to the console of each server. This solution is made possible using a Keyboard, Video, Mouse Switch, better known as a KVM Switch. A KVM switch is a hardware device that allows a user to control multiple computers from a single keyboard, video monitor and mouse. Avocent provides a high quality and economical 4-port switch which includes four 6' cables:

AutoView(R) Analog KVM Switch

For a detailed explanation and guide on the use of KVM switches, please see the article "KVM Switches For the Home and the Enterprise".

US$350

Total US$2,565

We are about to start the installation process. Now that we have talked about the hardware that will be used in this example, let's take a conceptual look at what the environment would look like after connecting all of the hardware components (see Figure 1 below):


Figure 1: Oracle RAC 11g release 2 Test Configuration

As we start to go into the details of the installation, note that most of the tasks within this document will need to be performed on both Oracle RAC nodes (racnode1 and racnode2). I will indicate at the beginning of each section whether or not the task(s) should be performed on both Oracle RAC nodes or on the network storage server (openfiler1).

Install the Linux Operating System

Perform the following installation on both Oracle RAC nodes in the cluster.

This section provides a summary of the screens used to install the Linux operating system. This guide is designed to work with CentOS release 5.5 for x86_64 or Red Hat Enterprise Linux 5.5 for x86_64 and follows Oracle's suggestion of performing a "default RPMs" installation type to ensure all expected Linux O/S packages are present for a successful Oracle RDBMS installation.

Although I have used Red Hat Fedora in the past, I wanted to switch to a Linux environment that would guarantee all of the functionality contained with Oracle. This is where CentOS comes in. The CentOS project takes the Red Hat Enterprise Linux 5 source RPMs and compiles them into a free clone of the Red Hat Enterprise Server 5 product. This provides a free and stable version of the Red Hat Enterprise Linux 5 (AS/ES) operating environment that I can use for Oracle testing and development. I have moved away from Fedora as I need a stable environment that is not only free, but as close to the actual Oracle supported operating system as possible. While CentOS is not the only project providing this functionality, I tend to stick with it as it is stable and reacts fast with regards to updates by Red Hat.

Download CentOS


Use the links below to download CentOS 5.5 for either x86 or x86_64 depending on your hardware architecture.

32-bit (x86) Installations

CentOS-5.5-i386-bin-1of7.iso (623 MB)
CentOS-5.5-i386-bin-2of7.iso (621 MB)
CentOS-5.5-i386-bin-3of7.iso (630 MB)
CentOS-5.5-i386-bin-4of7.iso (619 MB)
CentOS-5.5-i386-bin-5of7.iso (629 MB)
CentOS-5.5-i386-bin-6of7.iso (637 MB)
CentOS-5.5-i386-bin-7of7.iso (231 MB)

Note: If the Linux RAC nodes have a DVD drive installed, you may find it more convenient to make use of the single DVD image:

CentOS-5.5-i386-bin-DVD.iso (3.9 GB)

64-bit (x86_64) Installations

CentOS-5.5-x86_64-bin-1of8.iso (623 MB)
CentOS-5.5-x86_64-bin-2of8.iso (587 MB)
CentOS-5.5-x86_64-bin-3of8.iso (634 MB)
CentOS-5.5-x86_64-bin-4of8.iso (633 MB)
CentOS-5.5-x86_64-bin-5of8.iso (634 MB)
CentOS-5.5-x86_64-bin-6of8.iso (627 MB)
CentOS-5.5-x86_64-bin-7of8.iso (624 MB)
CentOS-5.5-x86_64-bin-8of8.iso (242 MB)

Note: If the Linux RAC nodes have a DVD drive installed, you may find it more convenient to make use of the two DVD images (requires BitTorrent):

CentOS-5.5-x86_64-bin-DVD.torrent (360 KB)

If you are downloading the above ISO files to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with this process and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just three of the many software packages that can be used:

InfraRecorder
UltraISO
Magic ISO Maker

Install CentOS

After downloading and burning the CentOS images (ISO files) to CD/DVD, insert CentOS Disk #1 into the first server (racnode1 in this example), power it on, and answer the installation screen prompts as noted below. After completing the Linux installation on the first node, perform the same Linux installation on the second node while substituting the node name racnode1 for racnode2 and using different IP addresses where appropriate.

Before installing the Linux operating system on both nodes, you should have the two NIC interfaces (cards) installed.

Boot Screen

The first screen is the CentOS boot screen. At the boot: prompt, hit [Enter] to start the installation process.


Media Test

When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to CentOS

At the welcome screen, click [Next] to continue.

Language / Keyboard Selection

The next two screens prompt you for the Language and Keyboard settings. Make the appropriate selection for your configuration and click [Next] to continue.

Detect Previous Installation

If the installer detects a previous version of RHEL / CentOS, it will ask if you would like to "Install CentOS" or "Upgrade an existing Installation". Always select to Install CentOS.

Disk Partitioning Setup

Select "Remove all partitions on selected drives and create default layout" and check the option to "Review and modifypartitioning layout". Click "[Next]" to continue.

You will then be prompted with a dialog window asking if you really want to remove all Linux partitions. Click [Yes] to acknowledge this warning.

Partitioning

The installer will then allow you to view (and modify if needed) the disk partitions it automatically selected. For most automatic layouts, the installer will choose 100MB for /boot, double the amount of RAM (systems with <= 2,048MB RAM) or an amount equal to RAM (systems with > 2,048MB RAM) for swap, and the rest going to the root (/) partition. Starting with RHEL 4, the installer creates the same disk configuration as just noted but creates it using the Logical Volume Manager (LVM). For example, it will partition the first hard drive (/dev/sda for my configuration) into two partitions: one for the /boot partition (/dev/sda1) and the remainder of the disk dedicated to an LVM volume group named VolGroup00 (/dev/sda2). The LVM Volume Group (VolGroup00) is then partitioned into two LVM partitions, one for the root file system (/) and another for swap.
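If you want to verify this layout after the installation completes, the standard LVM tools will show it. This is a sketch; the logical volume names LogVol00 and LogVol01 are typical RHEL 5 installer defaults and may differ on your system.

# As root, after the installation completes
fdisk -l /dev/sda   # shows /dev/sda1 (/boot) and /dev/sda2 (LVM physical volume)
pvs                 # shows /dev/sda2 assigned to volume group VolGroup00
lvs                 # shows the root and swap logical volumes (e.g. LogVol00 and LogVol01)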

The main concern during the partitioning phase is to ensure enough swap space is allocated as required by Oracle (which is a multiple of the available RAM). The following is Oracle's minimum requirement for swap space:

Available RAM                 Swap Space Required
Between 1,024MB and 2,048MB   1.5 times the size of RAM
Between 2,049MB and 8,192MB   Equal to the size of RAM
More than 8,192MB             0.75 times the size of RAM

For the purpose of this install, I will accept all automatically preferred sizes. (Including 5,952MB for swap since I have 4GBof RAM installed.)

If for any reason the automatic layout does not configure an adequate amount of swap space, you can easily change that from this screen. To increase the size of the swap partition, [Edit] the volume group VolGroup00. This will bring up the "Edit LVM Volume Group: VolGroup00" dialog. First, [Edit] and decrease the size of the root file system (/) by the amount you want to add to the swap partition. For example, to add another 512MB to swap, you would decrease the size of the root file system by 512MB (i.e. 36,032MB - 512MB = 35,520MB). Now add the space you decreased from the root file system (512MB) to the swap partition. When completed, click [OK] on the "Edit LVM Volume Group: VolGroup00" dialog.
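Either way, once the system is running you can confirm the amount of physical RAM and configured swap with:

grep MemTotal /proc/meminfo    # total physical RAM in KB
grep SwapTotal /proc/meminfo   # total configured swap in KB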

Once you are satisfied with the disk layout, click [Next] to continue.

Boot Loader Configuration

The installer will use the GRUB boot loader by default. To use the GRUB boot loader, accept all default values and click [Next] to continue.

Network Configuration

I made sure to install both NIC interfaces (cards) in each of the Linux machines before starting the operating system installation. The installer should have successfully detected each of the network devices. Since this guide will use the traditional method of assigning static IP addresses for each of the Oracle RAC nodes, there will be several changes that need to be made to the network configuration. The settings you make here will, of course, depend on your network configuration. The most important modification required for this guide is to not configure the Oracle RAC nodes with DHCP, since we will be assigning static IP addresses. Additionally, you will need to configure the server with a real host name.

First, make sure that each of the network devices is checked to "Active on boot". The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use IP addresses for eth0 and eth1 different from the ones I have documented in this guide, and that is OK. Make certain to put eth1 (the interconnect) on a different subnet than eth0 (the public network). A sketch of the resulting interface configuration files follows the settings below:

Oracle RAC Node Network Configuration

(racnode1)

eth0

Enable IPv4 support ON

Dynamic IP configuration (DHCP) - (select Manual configuration) OFF

IPv4 Address 192.168.1.151

Prefix (Netmask) 255.255.255.0

Enable IPv6 support OFF

eth1

Enable IPv4 support ON

Dynamic IP configuration (DHCP) - (select Manual configuration) OFF

IPv4 Address 192.168.2.151

Prefix (Netmask) 255.255.255.0

Enable IPv6 support OFF
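For reference, the settings above for racnode1 translate into interface configuration files similar to the following after the install. This is a sketch; the installer also records the HWADDR of each card, omitted here.

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.151
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.2.151
NETMASK=255.255.255.0
ONBOOT=yes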

Continue by manually setting your hostname. I used racnode1 for the first node and racnode2 for the second. Finish this dialog off by supplying your gateway and DNS servers.


Additional DNS configuration information for both of the Oracle RAC nodes will be discussed later in this guide.

Time Zone Selection

Select the appropriate time zone for your environment and click [Next] to continue.

Set Root Password

Select a root password and click [Next] to continue.

Package Installation Defaults

By default, CentOS installs most of the software required for a typical server. There are several other packages (RPMs), however, that are required to successfully install the Oracle software. The installer includes a "Customize software" selection that allows the addition of RPM groupings such as "Development Libraries" or "Legacy Library Support". The addition of such RPM groupings is not an issue. De-selecting any "default RPM" groupings or individual RPMs, however, can result in failed Oracle grid infrastructure and Oracle RAC installation attempts.

For the purpose of this article, select the radio button "Customize now" and click [Next] to continue.

This is where you pick the packages to install. Most of the packages required for the Oracle software are grouped into "Package Groups" (i.e. Application -> Editors). Since these nodes will be hosting the Oracle grid infrastructure and Oracle RAC software, verify that at least the following package groups are selected for install. For many of the Linux package groups, not all of the packages associated with that group get selected for installation. (Note the "Optional packages" button after selecting a package group.) So although the package group gets selected for install, some of the packages required by Oracle do not get installed. In fact, there are some packages that are required by Oracle that do not belong to any of the available package groups (i.e. libaio-devel). Not to worry. A complete list of required packages for Oracle grid infrastructure 11g release 2 and Oracle RAC 11g release 2 for Linux will be provided in the next section. These packages will need to be manually installed from the CentOS CDs after the operating system install (see the sketch following the package group list below). For now, install the following package groups:

Desktop Environments

GNOME Desktop Environment

Applications

Editors
Graphical Internet
Text-based Internet

Development

Development Libraries
Development Tools
Legacy Software Development

Servers

Server Configuration Tools

Base System

Administration Tools Base Java Legacy Software Support


    System Tools
    X Window System

In addition to the above packages, select any additional packages you wish to install for this node, keeping in mind to NOT de-select any of the "default" RPM packages. After selecting the packages to install, click [Next] to continue.

About to Install

This screen is basically a confirmation screen. Click [Next] to start the installation. If you are installing CentOS using CDs, you will be asked to switch CDs during the installation process depending on which packages you selected.

Congratulations

And that's it. You have successfully installed Linux on the first node (racnode1). The installer will eject the CD/DVD from the CD-ROM drive. Take out the CD/DVD and click [Reboot] to reboot the system.

Post Installation Wizard Welcome Screen

When the system boots into CentOS Linux for the first time, it will prompt you with another welcome screen for the "Post Installation Wizard". The post installation wizard allows you to make final O/S configuration settings. On the "Welcome screen", click [Forward] to continue.

Firewall

On this screen, make sure to select the "Disabled" option and click [Forward] to continue.

You will be prompted with a warning dialog about not setting the firewall. When this occurs, click [Yes] to continue.

SELinux

On the SELinux screen, choose the "Disabled" option and click [Forward] to continue.

You will be prompted with a warning dialog warning that changing the SELinux setting will require rebooting the system so the entire file system can be relabeled. When this occurs, click [Yes] to acknowledge that a reboot of the system will occur after firstboot (Post Installation Wizard) is completed.

Kdump

Accept the default setting on the Kdump screen (disabled) and click [Forward] to continue.

Date and Time Settings

Adjust the date and time settings if necessary and click [Forward] to continue.

Create User

Create any additional (non-oracle) operating system user accounts if desired and click [Forward] to continue. For the purpose of this article, I will not be creating any additional operating system accounts. I will be creating the "grid" and "oracle" user accounts later in this guide.

If you chose not to define any additional operating system user accounts, click [Continue] to acknowledge the warning dialog.

Sound Card

This screen will only appear if the wizard detects a sound card. On the sound card screen click [Forward] to continue.


Additional CDs

On the "Additional CDs" screen click [Finish] to continue.

Reboot System

Given we changed the SELinux option to "Disabled", we are prompted to reboot the system. Click [OK] to reboot the system for normal use.

Login Screen

After rebooting the machine, you are presented with the login screen. Log in using the "root" user account and the password you provided during the installation.

Perform the same installation on the second node

After completing the Linux installation on the first node, repeat the above steps for the second node (racnode2). When configuring the machine name and networking, be sure to configure the proper values. For my installation, this is what I configured for racnode2.

First, make sure that each of the network devices is checked "Active on boot". The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for eth0 and eth1 than those I have documented in this guide and that is OK. Make certain to put eth1 (the interconnect) on a different subnet than eth0 (the public network):

Oracle RAC Node Network Configuration - (racnode2)

eth0
    Enable IPv4 support                                                  ON
    Dynamic IP configuration (DHCP) - (select Manual configuration)      OFF
    IPv4 Address                                                         192.168.1.152
    Prefix (Netmask)                                                     255.255.255.0
    Enable IPv6 support                                                  OFF

eth1
    Enable IPv4 support                                                  ON
    Dynamic IP configuration (DHCP) - (select Manual configuration)      OFF
    IPv4 Address                                                         192.168.2.152
    Prefix (Netmask)                                                     255.255.255.0
    Enable IPv6 support                                                  OFF

Continue by manually setting your hostname. I used racnode2 for the second node. Finish this dialog off by supplying your gateway and DNS servers.

Perform the same Linux installation on racnode2


Install Required Linux Packages for Oracle RAC

Install the following required Linux packages on both Oracle RAC nodes in the cluster.

After installing the Linux O/S, the next step is to verify and install all packages (RPMs) required by both Oracle Clusterware and Oracle RAC. The Oracle Universal Installer (OUI) performs checks on your machine during installation to verify that it meets the appropriate operating system package requirements. To ensure that these checks complete successfully, verify the software requirements documented in this section before starting the Oracle installs.

Although many of the required packages for Oracle were installed during the Linux installation, several will be missing either because they were considered optional within the package group or simply didn't exist in any package group!

The packages listed in this section (or later versions) are required for Oracle grid infrastructure 11g release 2 and Oracle RAC 11g release 2 running on the Red Hat Enterprise Linux 5 or CentOS 5 platform.

32-bit (x86) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

Each of the packages listed above can be found on CD #1, CD #2, CD #3, and CD #4 on the CentOS 5.5 for x86 CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the four CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.
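Before mounting any media, a quick way to see exactly which of the required packages are missing is to query them all in one rpm command and filter for the ones not yet installed. This is simply the 32-bit list from above fed to rpm -q:

# Query every required package; rpm prints "package ... is not installed"
# for each one that is missing.
rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel \
    glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp \
    libstdc++ libstdc++-devel make pdksh sysstat unixODBC unixODBC-devel | \
    grep "not installed"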

# From CentOS 5.5 (x86) - [CD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh kernel-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*


rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #2]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh libgomp-4.*
rpm -Uvh unixODBC-2.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #3]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

# From CentOS 5.5 (x86) - [CD #4]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh sysstat-7.*
cd /
eject

--------------------------------------------------------------------------------------

# From CentOS 5.5 (x86) - [DVD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh kernel-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
rpm -Uvh libgomp-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh sysstat-7.*
cd /
eject

64-bit (x86_64) Installations

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
pdksh-5.2.14
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

Each of the packages listed above can be found on CD #1, CD #3, CD #4, and CD #5 on the CentOS 5.5 for x86_64 CDs. While it is possible to query each individual package to determine which ones are missing and need to be installed, an easier method is to run the rpm -Uvh PackageName command from the four CDs as follows. For packages that already exist and are up to date, the RPM command will simply ignore the install and print a warning message to the console that the package is already installed.
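On x86_64, note that several of the packages above must be present in both their 64-bit and 32-bit versions. A query format that includes the architecture makes it easy to verify that both versions were installed; a quick sketch:

# List installed packages with their architecture (i386 vs x86_64)
# for the libraries that Oracle requires in both flavors.
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' | \
    grep -E 'glibc|libaio|libgcc|libstdc|unixODBC'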

# From CentOS 5.5 (x86_64) - [CD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #3]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #4]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh compat-libstdc++-33*
rpm -Uvh libaio-devel-0.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject

# From CentOS 5.5 (x86_64) - [CD #5]
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh sysstat-7.*
cd /
eject

--------------------------------------------------------------------------------------

# From CentOS 5.5 (x86_64) - [DVD #1]
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom/CentOS
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh libaio-devel-0.*
rpm -Uvh pdksh-5.*
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh sysstat-7.*
cd /
eject


Install Openfiler

Perform the following installation on the network storage server (openfiler1).

With Linux installed on both Oracle RAC nodes, the next step is to install the Openfiler software to the network storage server (openfiler1). Later in this guide, the network storage server will be configured as an iSCSI storage device for all Oracle Clusterware and Oracle RAC shared storage requirements.

Powered by rPath Linux, Openfiler is a free browser-based network storage management utility that delivers file-based Network Attached Storage (NAS) and block-based Storage Area Networking (SAN) in a single framework. The entire software stack interfaces with open source applications such as Apache, Samba, LVM2, ext3, Linux NFS and iSCSI Enterprise Target. Openfiler combines these ubiquitous technologies into a small, easy to manage solution fronted by a powerful web-based management interface.

Openfiler supports CIFS, NFS, HTTP/DAV, and FTP; however, we will only be making use of its iSCSI capabilities to implement an inexpensive SAN for the shared storage components required by Oracle RAC 11g. The rPath Linux operating system and Openfiler application will be installed on one internal SATA disk. A second internal 73GB 15K SCSI hard disk will be configured as a single volume group that will be used for all shared disk storage requirements. The Openfiler server will be configured to use this volume group for iSCSI based storage and will be used in our Oracle RAC 11g configuration to store the shared files required by Oracle Clusterware and the Oracle RAC database.

Please be aware that any type of hard disk (internal or external) should work for the shared database storage as long as it can be recognized by the network storage server (Openfiler) and has adequate space. For example, I could have made an extra partition on the 500GB internal SATA disk for the iSCSI target, but decided to make use of the faster SCSI disk for this example.

To learn more about Openfiler, please visit their website at http://www.openfiler.com/.

Download Openfiler

Use the links below to download Openfiler NAS/SAN Appliance, version 2.3 (Final Release) for either x86 or x86_64 depending on your hardware architecture. This guide uses x86_64. After downloading Openfiler, you will then need to burn the ISO image to CD.

32-bit (x86) Installations

openfiler-2.3-x86-disc1.iso (322 MB)

64-bit (x86_64) Installations

openfiler-2.3-x86_64-disc1.iso (336 MB)
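After downloading, it is worth verifying the integrity of the ISO image before burning it. Assuming a checksum is published alongside the download, compare it against the output of md5sum (or sha1sum):

# Compute the MD5 checksum of the downloaded image and compare it
# to the value published on the download site (an assumption that
# one is provided).
md5sum openfiler-2.3-x86_64-disc1.iso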

If you are downloading the above ISO file to a MS Windows machine, there are many options for burning these images (ISO files) to a CD. You may already be familiar with and have the proper software to burn images to CD. If you are not familiar with this process and do not have the required software to burn images to CD, here are just three of the many software packages that can be used:

InfraRecorder
UltraISO
Magic ISO Maker
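If you happen to be downloading to an existing Linux machine rather than MS Windows, the image can also be burned from the command line. A minimal sketch using cdrecord follows; the device name and speed are assumptions for your particular burner:

# Burn the ISO image to CD (device and speed are assumptions).
cdrecord -v speed=8 dev=/dev/cdrom openfiler-2.3-x86_64-disc1.iso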

Install Openfiler

This section provides a summary of the screens used to install the Openfiler software. For the purpose of this article, I opted to install Openfiler with all default options. The only manual change required was for configuring the local network settings.

Once the install has completed, the server will reboot to make sure all required components, services and drivers are started and recognized. After the reboot, any external hard drives (if connected) will be discovered by the Openfiler server.

For more detailed installation instructions, please visit http://www.openfiler.com/learn/. I would suggest, however, that the instructions I have provided below be used for this Oracle RAC 11g configuration.

Before installing the Openfiler software to the network storage server, you should have both NIC interfaces (cards) installed and any external hard drives connected and turned on (if you will be using external hard drives).

After downloading and burning the Openfiler ISO image file to CD, insert the CD into the network storage server (openfiler1 in this example), power it on, and answer the installation screen prompts as noted below.

Boot Screen

The first screen is the Openfiler boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test

When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to Openfiler NSA

At the welcome screen, click [Next] to continue.

Keyboard Configuration

The next screen prompts you for the Keyboard settings. Make the appropriate selection for your configuration.

Disk Partitioning Setup

The next screen asks whether to perform disk partitioning using "Automatic Partitioning" or "Manual Partitioning with Disk Druid". Although the official Openfiler documentation suggests to use Manual Partitioning, I opted to use "Automatic Partitioning" given the simplicity of my example configuration.

Select [Automatically partition] and click [Next] to continue.

Automatic Partitioning

If there were a previous installation of Linux on this machine, the next screen will ask if you want to "remove" or "keep" old partitions. Select the option to [Remove all partitions on this system]. For my example configuration, I selected ONLY the 500GB SATA internal hard drive [sda] for the operating system and Openfiler application installation. I de-selected the 73GB SCSI internal hard drive since this disk will be used exclusively later in this guide to create a single "Volume Group" (racdbvg) that will be used for all iSCSI based shared disk storage requirements for Oracle Clusterware and Oracle RAC.

I also kept the check-box [Review (and modify if needed) the partitions created] selected. Click [Next] to continue.

You will then be prompted with a dialog window asking if you really want to remove all partitions. Click [Yes] toacknowledge this warning.

Partitioning

The installer will then allow you to view (and modify if needed) the disk partitions it automatically chose for the hard disks selected in the previous screen. In almost all cases, the installer will choose 100MB for /boot, an adequate amount of swap, and the rest going to the root (/) partition for that disk (or disks). In this example, I am satisfied with the installer's recommended partitioning for /dev/sda.


The installer will also show any other internal hard disks it discovered. For my example configuration, the installer found the 73GB SCSI internal hard drive as /dev/sdb. For now, I will "Delete" any and all partitions on this drive (there was only one, /dev/sdb1). Later in this guide, I will create the required partition for this particular hard disk.

Network Configuration

I made sure to install both NIC interfaces (cards) in the network storage server before starting the Openfiler installation. The installer should have successfully detected each of the network devices.

First, make sure that each of the network devices is checked [Active on boot]. The installer may choose to not activate eth1 by default.

Second, [Edit] both eth0 and eth1 as follows. You may choose to use different IP addresses for both eth0 and eth1 and that is OK. You must, however, configure eth1 (the storage network) to be on the same subnet you configured for eth1 on racnode1 and racnode2:

eth0
    Configure using DHCP      OFF
    Activate on boot          ON
    IP Address                192.168.1.195
    Netmask                   255.255.255.0

eth1
    Configure using DHCP      OFF
    Activate on boot          ON
    IP Address                192.168.2.195
    Netmask                   255.255.255.0

Continue by setting your hostname manually. I used a hostname of "openfiler1". Finish this dialog off by supplying your gateway and DNS servers.

Time Zone Selection

The next screen allows you to configure your time zone information. Make the appropriate selection for your location.

Set Root Password

Select a root password and click [Next] to continue.

About to Install

This screen is basically a confirmation screen. Click [Next] to start the installation.

Congratulations

And that's it. You have successfully installed Openfiler on the network storage server. The installer will eject the CD from the CD-ROM drive. Take out the CD and click [Reboot] to reboot the system.

If everything was successful after the reboot, you should now be presented with a text login screen and the URL to use for administering the Openfiler server.


After installing Openfiler, verify you can log in to the machine using the root user account and the password you supplied during installation. Do not attempt to log in to the console or SSH using the built-in openfiler user account. Attempting to do so will result in the following error message:

openfiler1 login: openfiler
Password: password
This interface has not been implemented yet.

Only attempt to log in to the console or SSH using the root user account.

Network Configuration

Perform the following network configuration tasks on both Oracle RAC nodes in the cluster.

Although we configured several of the network settings during the Linux installation, it is important to not skip this section as it contains critical steps which include configuring DNS and verifying you have the networking hardware and Internet Protocol (IP) addresses required for an Oracle grid infrastructure for a cluster installation.

Network Hardware Requirements

The following is a list of hardware requirements for network configuration:

Each Oracle RAC node must have at least two network adapters or network interface cards (NICs) — one for the public network interface and one for the private network interface (the interconnect). To use multiple NICs for the public network or for the private network, Oracle recommends that you use NIC bonding. Use separate bonding for the public and private networks (i.e. bond0 for the public network and bond1 for the private network), because during installation each interface is defined as a public or private interface. NIC bonding is not covered in this article.

The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adapters should be the same on all nodes.

For example, with our two-node cluster, you cannot configure network adapters on racnode1 with eth0 as the public interface, but on racnode2 have eth1 as the public interface. Public interface names must be the same, so you must configure eth0 as public on both nodes. You should configure the private interfaces on the same network adapters as well. If eth1 is the private interface for racnode1, then eth1 must be the private interface for racnode2.

For the public network, each network adapter must support TCP/IP.

For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet).

UDP is the default interconnect protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the interconnect. Oracle recommends that you use a dedicated switch.

Oracle does not support token-rings or crossover cables for the interconnect.

For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. There should be no node that is not connected to every private network interface. You can test if an interconnect interface is reachable using ping.

During installation of Oracle grid infrastructure, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. You must identify each interface as a public interface, a private interface, or not used, and you must use the same private interfaces for both Oracle Clusterware and Oracle RAC.

You can bond separate interfaces to a common interface to provide redundancy, in case of a NIC failure, but Oracle recommends that you do not create separate interfaces for Oracle Clusterware and Oracle RAC. If you use more than one NIC for the private interconnect, then Oracle recommends that you use NIC bonding. Note that multiple private interfaces provide load balancing but not failover, unless bonded.

Starting with Oracle Clusterware 11g release 2, you no longer need to provide a private name or IP address for the interconnect. IP addresses on the subnet you identify as private are assigned as private IP addresses for cluster member nodes. You do not need to configure these addresses manually in a hosts directory. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), and to the subnet used for the private subnet.

In practice, and for the purpose of this guide, I will continue to include a private name and IP address on each node for the RAC interconnect. It provides self-documentation and a set of end-points on the private network I can use for troubleshooting purposes:

192.168.2.151 racnode1-priv
192.168.2.152 racnode2-priv

In a production environment that uses iSCSI for network storage, it is highly recommended to configure a redundant third network interface (eth2, for example) for that storage traffic using a TCP/IP Offload Engine (TOE) card. For the sake of brevity, this article will configure the iSCSI network storage traffic on the same network as the RAC private interconnect (eth1). Combining the iSCSI storage traffic and cache fusion traffic for Oracle RAC on the same network interface works great for an inexpensive test system (like the one described in this article) but should never be considered for production.

The basic idea of a TOE is to offload the processing of TCP/IP protocols from the host processor to the hardware on the adapter or in the system. A TOE is often embedded in a network interface card (NIC) or a host bus adapter (HBA) and used to reduce the amount of TCP/IP processing handled by the CPU and server I/O subsystem and improve overall performance.

Oracle RAC Network Configuration

For this guide, I opted not to use Grid Naming Service (GNS) for assigning IP addresses to each Oracle RAC node but instead will manually assign them in DNS and hosts files. I often refer to this traditional method of manually assigning IP addresses as the "DNS method" given the fact that all IP addresses should be resolved using DNS.

When using the DNS method for assigning IP addresses, Oracle recommends that all static IP addresses be manually configured in DNS before starting the Oracle grid infrastructure installation. This would include the public IP address for the node, the RAC interconnect, virtual IP address (VIP), and new to 11g release 2, the Single Client Access Name (SCAN) virtual IP.

Note that Oracle requires you to define the SCAN domain address (racnode-cluster-scan in this example) to resolve on your DNS to one of three possible IP addresses in order to successfully install Oracle grid infrastructure! Defining the SCAN domain address only in the hosts files for each Oracle RAC node, and not in DNS, will cause the "Oracle Cluster Verification Utility" to fail with an [INS-20802] error during the Oracle grid infrastructure install.

The following table displays the network configuration that will be used to build the example two-node Oracle RAC described in this guide. Note that every IP address will be registered in DNS and the hosts file for each Oracle RAC node with the exception of the SCAN virtual IP. The SCAN virtual IP will only be registered in DNS.


Example Two-Node Oracle RAC Network Configuration

Identity Name Type IP Address Resolved By

Node 1 Public racnode1 Public 192.168.1.151 DNS and hosts file

Node 1 Private racnode1-priv Private 192.168.2.151 DNS and hosts file

Node 1 VIP racnode1-vip Virtual 192.168.1.251 DNS and hosts file

Node 2 Public racnode2 Public 192.168.1.152 DNS and hosts file

Node 2 Private racnode2-priv Private 192.168.2.152 DNS and hosts file

Node 2 VIP racnode2-vip Virtual 192.168.1.252 DNS and hosts file

SCAN VIP 1 racnode-cluster-scan Virtual 192.168.1.187 DNS

SCAN VIP 2 racnode-cluster-scan Virtual 192.168.1.188 DNS

SCAN VIP 3 racnode-cluster-scan Virtual 192.168.1.189 DNS


DNS Configuration

The example Oracle RAC configuration described in this guide will use the traditional method of manually assigning static IP addresses and therefore requires a DNS server. If you do not have access to a DNS server, this section includes detailed instructions for installing a minimal DNS server on the Openfiler network storage server.

Use an Existing DNS

If you already have access to a DNS server, simply add the appropriate A and PTR records for Oracle RAC to your DNS and skip ahead to the next section "Update /etc/resolv.conf File". Note that in the example below, I am using the domain name idevelopment.info. Please feel free to substitute your own domain name if needed.

; Forward Lookup Zone
racnode1                IN A 192.168.1.151
racnode2                IN A 192.168.1.152
racnode1-priv           IN A 192.168.2.151
racnode2-priv           IN A 192.168.2.152
racnode1-vip            IN A 192.168.1.251
racnode2-vip            IN A 192.168.1.252
openfiler1              IN A 192.168.1.195
openfiler1-priv         IN A 192.168.2.195
racnode-cluster-scan    IN A 192.168.1.187
racnode-cluster-scan    IN A 192.168.1.188
racnode-cluster-scan    IN A 192.168.1.189

; Reverse Lookup Zone
151 IN PTR racnode1.idevelopment.info.
152 IN PTR racnode2.idevelopment.info.
251 IN PTR racnode1-vip.idevelopment.info.
252 IN PTR racnode2-vip.idevelopment.info.
187 IN PTR racnode-cluster-scan.idevelopment.info.
188 IN PTR racnode-cluster-scan.idevelopment.info.
189 IN PTR racnode-cluster-scan.idevelopment.info.

Install DNS on Openfiler

Installing DNS on the Openfiler network storage server is a trivial task. To install or update packages on Openfiler, use the command-line tool conary, developed by rPath.

To learn more about the different options and parameters that can be used with the conary utility, review the Conary QuickReference guide.

To install packages on Openfiler you need access to the Internet!

To install DNS on the Openfiler server, run the following command as the root user account:

[root@openfiler1 ~]# conary update bind:runtime
Including extra troves to resolve dependencies:
    bind:lib=9.4.3_P5-1.1-1 info-named:user=1-1-0.1
Applying update job 1 of 2:
    Install info-named(:user)=1-1-0.1
Applying update job 2 of 2:
    Update bind(:lib) (9.3.4_P1-0.5-1[ipv6,~!pie,ssl] -> 9.4.3_P5-1.1-1)
    Update bind-utils(:doc :runtime) (9.3.4_P1-0.5-1[ipv6,~!pie,ssl] -> 9.4.3_P5-1.1-1)
    Install bind:runtime=9.4.3_P5-1.1-1

Verify the files installed by the DNS bind package:

[root@openfiler1 ~]# conary q bind --lsl
lrwxrwxrwx 1 root  root       16 2009-07-29 17:03:02 UTC /usr/lib/libbind.so.4 -> libbind.so.4.
-rwxr-xr-x 1 root  root   294260 2010-03-11 00:48:52 UTC /usr/lib/libbind.so.4.1.2
lrwxrwxrwx 1 root  root       18 2009-07-29 17:03:00 UTC /usr/lib/libbind9.so.30 -> libbind9.so
-rwxr-xr-x 1 root  root    37404 2010-03-11 00:48:52 UTC /usr/lib/libbind9.so.30.1.1
lrwxrwxrwx 1 root  root       16 2010-03-11 00:14:00 UTC /usr/lib/libdns.so.38 -> libdns.so.38.
-rwxr-xr-x 1 root  root  1421820 2010-03-11 00:48:52 UTC /usr/lib/libdns.so.38.0.0
lrwxrwxrwx 1 root  root       16 2009-07-29 17:02:58 UTC /usr/lib/libisc.so.36 -> libisc.so.36.
-rwxr-xr-x 1 root  root   308260 2010-03-11 00:48:52 UTC /usr/lib/libisc.so.36.0.2
lrwxrwxrwx 1 root  root       18 2007-03-09 17:26:37 UTC /usr/lib/libisccc.so.30 -> libisccc.so
-rwxr-xr-x 1 root  root    28112 2010-03-11 00:48:51 UTC /usr/lib/libisccc.so.30.0.1
lrwxrwxrwx 1 root  root       19 2009-07-29 17:03:00 UTC /usr/lib/libisccfg.so.30 -> libisccfg.
-rwxr-xr-x 1 root  root    71428 2010-03-11 00:48:52 UTC /usr/lib/libisccfg.so.30.0.5
lrwxrwxrwx 1 root  root       18 2009-07-29 17:03:01 UTC /usr/lib/liblwres.so.30 -> liblwres.so
-rwxr-xr-x 1 root  root    64360 2010-03-11 00:48:51 UTC /usr/lib/liblwres.so.30.0.6
-rwxr-xr-x 1 root  root     2643 2008-02-22 21:44:05 UTC /etc/init.d/named
-rw-r--r-- 1 root  root      163 2004-07-07 19:20:10 UTC /etc/logrotate.d/named
-rw-r----- 1 root  root     1435 2004-06-18 04:39:39 UTC /etc/rndc.conf
-rw-r----- 1 root  named      65 2005-09-24 20:40:23 UTC /etc/rndc.key
-rw-r--r-- 1 root  root     1561 2006-07-20 18:40:14 UTC /etc/sysconfig/named
drwxr-xr-x 1 root  named       0 2007-12-16 01:01:35 UTC /srv/named
drwxr-xr-x 1 named named       0 2007-12-16 01:01:35 UTC /srv/named/data
drwxr-xr-x 1 named named       0 2007-12-16 01:01:35 UTC /srv/named/slaves
-rwxr-xr-x 1 root  root     2927 2010-03-11 00:14:02 UTC /usr/bin/isc-config.sh
-rwxr-xr-x 1 root  root     3168 2010-03-11 00:48:51 UTC /usr/sbin/dns-keygen
-rwxr-xr-x 1 root  root    21416 2010-03-11 00:48:51 UTC /usr/sbin/dnssec-keygen
-rwxr-xr-x 1 root  root    53412 2010-03-11 00:48:51 UTC /usr/sbin/dnssec-signzone
-rwxr-xr-x 1 root  root   379912 2010-03-12 14:07:50 UTC /usr/sbin/lwresd
-rwxr-xr-x 1 root  root   379912 2010-03-12 14:07:50 UTC /usr/sbin/named
-rwxr-xr-x 1 root  root     7378 2006-10-11 02:33:29 UTC /usr/sbin/named-bootconf
-rwxr-xr-x 1 root  root    20496 2010-03-11 00:48:51 UTC /usr/sbin/named-checkconf
-rwxr-xr-x 1 root  root    19088 2010-03-11 00:48:51 UTC /usr/sbin/named-checkzone
lrwxrwxrwx 1 root  root       15 2007-03-09 17:26:40 UTC /usr/sbin/named-compilezone -> named-c
-rwxr-xr-x 1 root  root    24032 2010-03-11 00:48:51 UTC /usr/sbin/rndc
-rwxr-xr-x 1 root  root    11708 2010-03-11 00:48:51 UTC /usr/sbin/rndc-confgen
drwxr-xr-x 1 named named       0 2007-12-16 01:01:35 UTC /var/run/named

Configure DNS

Configuration of the DNS server involves creating and modifying the following files:

/etc/named.conf — (DNS configuration file)
/srv/named/data/idevelopment.info.zone — (Forward zone definition file)
/srv/named/data/1.168.192.in-addr.arpa.zone — (Reverse zone definition file)

/etc/named.conf

The first step will be to create the DNS configuration file "/etc/named.conf". The /etc/named.conf configuration file used in this example will be kept fairly simple and only contain the necessary customizations required to run a minimal DNS.

For the purpose of this guide, I will be using the domain name idevelopment.info and the IP range "192.168.1.*" for the public network. Please feel free to substitute your own domain name if so desired. If you do decide to use a different domain name, make certain to modify it in all of the files that are part of the network configuration described in this section.


The DNS configuration file described below is configured to resolve the names of the servers described in this guide. This includes the two Oracle RAC nodes, the Openfiler network storage server (which is now also a DNS server!), and several other miscellaneous nodes. In order to make sure that servers on external networks, like those on the Internet, are resolved properly, I needed to add DNS Forwarding by defining the forwarders directive. This directive tells the DNS, anything it can't resolve should be passed to the DNS(s) listed. For the purpose of this example, I am using my D-Link router which is configured as my gateway to the Internet. I could just as well have used the DNS entries provided by my ISP.

The next directive defined in the options section is directory. This directive specifies where named will look for zone definition files. For example, if you skip forward in the DNS configuration file to the "idevelopment.info" forward lookup zone, you will notice its zone definition file is "idevelopment.info.zone". The fully qualified name for this file is derived by concatenating the directory directive and the "file" specified for that zone. For example, the fully qualified name for the forward lookup zone definition file described below is "/srv/named/data/idevelopment.info.zone". The same rules apply for the reverse lookup zone which in this example would be "/srv/named/data/1.168.192.in-addr.arpa.zone".

Create the file /etc/named.conf with at least the following content:

# +-------------------------------------------------------------------+
# | /etc/named.conf                                                   |
# |                                                                   |
# | DNS configuration file for Oracle RAC 11g release 2 example       |
# +-------------------------------------------------------------------+

options {

    // FORWARDERS: Forward any name this DNS can't resolve to my router.
    forwarders { 192.168.1.1; };

    // DIRECTORY: Directory where named will look for zone files.
    directory "/srv/named/data";

};

# ----------------------------------
# Forward Zone
# ----------------------------------

zone "idevelopment.info" IN {
    type master;
    file "idevelopment.info.zone";
    allow-update { none; };
};

# ----------------------------------
# Reverse Zone
# ----------------------------------

zone "1.168.192.in-addr.arpa" IN {
    type master;
    file "1.168.192.in-addr.arpa.zone";
    allow-update { none; };
};

/srv/named/data/idevelopment.info.zone

In the DNS configuration file above, we defined the forward and reverse zone definition files. These files will be located in the "/srv/named/data" directory.

Create and edit the file associated with your forward lookup zone (which in my case is "/srv/named/data/idevelopment.info.zone") to look like the one described below. Take note of the three entries used to configure the SCAN name for round-robin resolution to three IP addresses.


; +-------------------------------------------------------------------+
; | /srv/named/data/idevelopment.info.zone                            |
; |                                                                   |
; | Forward zone definition file for idevelopment.info                |
; +-------------------------------------------------------------------+

$ORIGIN idevelopment.info.

$TTL 86400 ; time-to-live - (1 day)

@ IN SOA openfiler1.idevelopment.info. jhunter.idevelopment.info. (
    201011021  ; serial number - (yyyymmdd+s)
    7200       ; refresh - (2 hours)
    300        ; retry - (5 minutes)
    604800     ; expire - (1 week)
    60         ; minimum - (1 minute)
)
  IN NS openfiler1.idevelopment.info.
localhost IN A 127.0.0.1

; Oracle RAC Nodes
racnode1        IN A 192.168.1.151
racnode2        IN A 192.168.1.152
racnode1-priv   IN A 192.168.2.151
racnode2-priv   IN A 192.168.2.152
racnode1-vip    IN A 192.168.1.251
racnode2-vip    IN A 192.168.1.252

; Network Storage Server
openfiler1      IN A 192.168.1.195
openfiler1-priv IN A 192.168.2.195

; Single Client Access Name (SCAN) virtual IP
racnode-cluster-scan IN A 192.168.1.187
racnode-cluster-scan IN A 192.168.1.188
racnode-cluster-scan IN A 192.168.1.189

; Miscellaneous Nodes
router          IN A 192.168.1.1
packmule        IN A 192.168.1.105
domo            IN A 192.168.1.121
switch1         IN A 192.168.1.122
oemprod         IN A 192.168.1.125
accesspoint     IN A 192.168.1.245

/srv/named/data/1.168.192.in-addr.arpa.zone

Next, we need to create the "/srv/named/data/1.168.192.in-addr.arpa.zone" zone definition file for public network reverse lookups:

; +-------------------------------------------------------------------+
; | /srv/named/data/1.168.192.in-addr.arpa.zone                       |
; |                                                                   |
; | Reverse zone definition file for idevelopment.info                |
; +-------------------------------------------------------------------+

$ORIGIN 1.168.192.in-addr.arpa.

$TTL 86400 ; time-to-live - (1 day)

@ IN SOA openfiler1.idevelopment.info. jhunter.idevelopment.info. (
    201011021  ; serial number - (yyyymmdd+s)
    7200       ; refresh - (2 hours)
    300        ; retry - (5 minutes)
    604800     ; expire - (1 week)
    60         ; minimum - (1 minute)
)
  IN NS openfiler1.idevelopment.info.

; Oracle RAC Nodes
151 IN PTR racnode1.idevelopment.info.
152 IN PTR racnode2.idevelopment.info.
251 IN PTR racnode1-vip.idevelopment.info.
252 IN PTR racnode2-vip.idevelopment.info.

; Network Storage Server
195 IN PTR openfiler1.idevelopment.info.

; Single Client Access Name (SCAN) virtual IP
187 IN PTR racnode-cluster-scan.idevelopment.info.
188 IN PTR racnode-cluster-scan.idevelopment.info.
189 IN PTR racnode-cluster-scan.idevelopment.info.

; Miscellaneous Nodes
1   IN PTR router.idevelopment.info.
105 IN PTR packmule.idevelopment.info.
121 IN PTR domo.idevelopment.info.
122 IN PTR switch1.idevelopment.info.
125 IN PTR oemprod.idevelopment.info.
245 IN PTR accesspoint.idevelopment.info.

Start the DNS Service

When the DNS configuration file and zone definition files are in place, start the DNS server by starting the "named" service:

[root@openfiler1 ~]# service named start
Starting named: [ OK ]

If named finds any problems with the DNS configuration file or zone definition files, the service will fail to start and errors will be displayed on the screen. To troubleshoot problems with starting the named service, check the /var/log/messages file.
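The bind package installed earlier also ships the named-checkconf and named-checkzone utilities (both appear in the file listing above). You can use them to validate the configuration file and zone definition files before starting or restarting the service; the output shown below is roughly what a clean run should look like:

[root@openfiler1 ~]# named-checkconf /etc/named.conf

[root@openfiler1 ~]# named-checkzone idevelopment.info /srv/named/data/idevelopment.info.zone
zone idevelopment.info/IN: loaded serial 201011021
OK

[root@openfiler1 ~]# named-checkzone 1.168.192.in-addr.arpa /srv/named/data/1.168.192.in-addr.arpa.zone
zone 1.168.192.in-addr.arpa/IN: loaded serial 201011021
OK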

If named starts successfully, the entries in the /var/log/messages file should resemble the following:

...
Nov 2 21:35:49 openfiler1 named[7995]: starting BIND 9.4.3-P5 -u named
Nov 2 21:35:49 openfiler1 named[7995]: adjusted limit on open files from 1024 to 1048576
Nov 2 21:35:49 openfiler1 named[7995]: found 1 CPU, using 1 worker thread
Nov 2 21:35:49 openfiler1 named[7995]: using up to 4096 sockets
Nov 2 21:35:49 openfiler1 named[7995]: loading configuration from '/etc/named.conf'
Nov 2 21:35:49 openfiler1 named[7995]: using default UDP/IPv4 port range: [1024, 65535]
Nov 2 21:35:49 openfiler1 named[7995]: using default UDP/IPv6 port range: [1024, 65535]
Nov 2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface lo, 127.0.0.1#53
Nov 2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface eth0, 192.168.1.195#53
Nov 2 21:35:49 openfiler1 named[7995]: listening on IPv4 interface eth1, 192.168.2.195#53
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 0.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 127.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 254.169.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 2.0.192.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 255.255.255.255.IN-ADDR.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: D.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 8.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: 9.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: A.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: automatic empty zone: B.E.F.IP6.ARPA
Nov 2 21:35:49 openfiler1 named[7995]: command channel listening on 127.0.0.1#953
Nov 2 21:35:49 openfiler1 named[7995]: command channel listening on ::1#953
Nov 2 21:35:49 openfiler1 named[7995]: no source of entropy found
Nov 2 21:35:49 openfiler1 named[7995]: zone 1.168.192.in-addr.arpa/IN: loaded serial 201011021
Nov 2 21:35:49 openfiler1 named[7995]: zone idevelopment.info/IN: loaded serial 201011021
Nov 2 21:35:49 openfiler1 named: named startup succeeded
Nov 2 21:35:49 openfiler1 named[7995]: running...

Configure DNS to Start Automatically

Now that the named service is running, issue the following commands to make sure this service starts automatically at boot time:

[root@openfiler1 ~]# chkconfig named on

[root@openfiler1 ~]# chkconfig named --list
named 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Update "/etc/resolv.conf" File

With DNS now setup and running, the next step is to configure each server to use it for name resolution. This is accomplished by editing the "/etc/resolv.conf" file on each server including the two Oracle RAC nodes and the Openfiler network storage server.

Make certain the /etc/resolv.conf file contains the following entries where the IP address of the name server and domain match those of your DNS server and the domain you have configured:

nameserver 192.168.1.195
search idevelopment.info

The second line allows you to resolve a name on this network without having to specify the fully qualified host name.

Verify that the /etc/resolv.conf file was successfully updated on all servers in our mini-network:

[root@openfiler1 ~]# cat /etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

[root@racnode1 ~]# cat /etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

[root@racnode2 ~]# cat /etc/resolv.conf
nameserver 192.168.1.195
search idevelopment.info

After modifying the /etc/resolv.conf file on every server in the cluster, verify that DNS is functioning correctly by testing forward and reverse lookups using the nslookup command-line utility. Perform tests similar to the following from each node to all other nodes in your cluster:

[root@racnode1 ~]# nslookup racnode2.idevelopment.info
Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode2.idevelopment.info
Address: 192.168.1.152

[root@racnode1 ~]# nslookup racnode2
Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode2.idevelopment.info
Address: 192.168.1.152

[root@racnode1 ~]# nslookup 192.168.1.152
Server: 192.168.1.195
Address: 192.168.1.195#53

152.1.168.192.in-addr.arpa name = racnode2.idevelopment.info.

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189

[root@racnode1 ~]# nslookup 192.168.1.187
Server: 192.168.1.195
Address: 192.168.1.195#53

187.1.168.192.in-addr.arpa name = racnode-cluster-scan.idevelopment.info.

Configuring Public and Private Network

In our two node example, we need to configure the network on both Oracle RAC nodes for access to the public network as well as their private interconnect.

The easiest way to configure network settings in RHEL / CentOS is with the program "Network Configuration". Network Configuration is a GUI application that can be started from the command-line as the root user account as follows:

[root@racnode1 ~]# /usr/bin/system-config-network &

Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file, and verify the DNS configuration. All of these tasks can be completed using the Network Configuration GUI.

It should be noted that the /etc/hosts entries are the same for both Oracle RAC nodes and that I removed any entry that has to do with IPv6. For example:

# ::1 localhost6.localdomain6 localhost6
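If you prefer to make this change from the command line, a one-liner like the following comments out the IPv6 loopback entry (a sketch, assuming the stock CentOS /etc/hosts layout where the entry begins with ::1):

# Comment out the IPv6 loopback entry in place.
[root@racnode1 ~]# sed -i 's/^::1/# ::1/' /etc/hosts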

Our example Oracle RAC configuration will use the following network settings:


Oracle RAC Node 1 - (racnode1)

Device IP Address Subnet Gateway Purpose

eth0 192.168.1.151 255.255.255.0 192.168.1.1 Connects racnode1 to the public network

eth1 192.168.2.151 255.255.255.0 Connects racnode1 (interconnect) to racnode2 (racnode2-priv)

/etc/resolv.conf

nameserver 192.168.1.195
search idevelopment.info

/etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.151 racnode1.idevelopment.info racnode1
192.168.1.152 racnode2.idevelopment.info racnode2

# Private Interconnect - (eth1)
192.168.2.151 racnode1-priv.idevelopment.info racnode1-priv
192.168.2.152 racnode2-priv.idevelopment.info racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.1.251 racnode1-vip.idevelopment.info racnode1-vip
192.168.1.252 racnode2-vip.idevelopment.info racnode2-vip

# Private Storage Network for Openfiler - (eth1)
192.168.1.195 openfiler1.idevelopment.info openfiler1
192.168.2.195 openfiler1-priv.idevelopment.info openfiler1-priv


Oracle RAC Node 2 - (racnode2)

Device IP Address Subnet Gateway Purpose

eth0 192.168.1.152 255.255.255.0 192.168.1.1 Connects racnode2 to the public network

eth1 192.168.2.152 255.255.255.0 Connects racnode2 (interconnect) to racnode1 (racnode1-priv)

/etc/resolv.conf

nameserver 192.168.1.195
search idevelopment.info

/etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.151 racnode1.idevelopment.info racnode1
192.168.1.152 racnode2.idevelopment.info racnode2

# Private Interconnect - (eth1)
192.168.2.151 racnode1-priv.idevelopment.info racnode1-priv
192.168.2.152 racnode2-priv.idevelopment.info racnode2-priv

# Public Virtual IP (VIP) addresses - (eth0:1)
192.168.1.251 racnode1-vip.idevelopment.info racnode1-vip
192.168.1.252 racnode2-vip.idevelopment.info racnode2-vip

# Private Storage Network for Openfiler - (eth1)
192.168.1.195 openfiler1.idevelopment.info openfiler1
192.168.2.195 openfiler1-priv.idevelopment.info openfiler1-priv


Openfiler Network Storage Server - (openfiler1)

Device IP Address Subnet Gateway Purpose

eth0 192.168.1.195 255.255.255.0 192.168.1.1 Connects openfiler1 to the public network

eth1 192.168.2.195 255.255.255.0 Connects openfiler1 to the private network

/etc/resolv.conf

nameserver 192.168.1.195
search idevelopment.info

/etc/hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.1.195 openfiler1.idevelopment.info openfiler1

In the screen shots below, only Oracle RAC Node 1 (racnode1) is shown. Be sure to make all the proper network settings to both Oracle RAC nodes.

Figure 2: Network Configuration Screen, Node 1 (racnode1)


Figure 3: Ethernet Device Screen, eth0 (racnode1)


Figure 4: Ethernet Device Screen, eth1 (racnode1)


Figure 5: Network Configuration Screen, DNS (racnode1)


Figure 6: Network Configuration Screen, /etc/hosts (racnode1)

Once the network is configured, you can use the ifconfig command to verify everything is working. The following example is from racnode1:

[root@racnode1 ~]# /sbin/ifconfig -a

eth0      Link encap:Ethernet  HWaddr 00:26:9E:02:D3:AC
          inet addr:192.168.1.151  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::226:9eff:fe02:d3ac/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:236549 errors:0 dropped:0 overruns:0 frame:0
          TX packets:264953 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28686645 (27.3 MiB)  TX bytes:159319080 (151.9 MiB)
          Interrupt:177 Memory:dfef0000-dff00000

eth1      Link encap:Ethernet  HWaddr 00:0E:0C:64:D1:E5
          inet addr:192.168.2.151  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20e:cff:fe64:d1e5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:120 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24544 (23.9 KiB)  TX bytes:8634 (8.4 KiB)
          Base address:0xddc0 Memory:fe9c0000-fe9e0000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3191 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3191 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4296868 (4.0 MiB)  TX bytes:4296868 (4.0 MiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

Verify Network Configuration

As the root user account, verify the network configuration by using the ping command to test the connection from each node in the cluster to all the other nodes. For example, as the root user account, run the following commands on each node:

# ping -c 3 racnode1.idevelopment.info
# ping -c 3 racnode2.idevelopment.info
# ping -c 3 racnode1-priv.idevelopment.info
# ping -c 3 racnode2-priv.idevelopment.info
# ping -c 3 openfiler1.idevelopment.info
# ping -c 3 openfiler1-priv.idevelopment.info

# ping -c 3 racnode1
# ping -c 3 racnode2
# ping -c 3 racnode1-priv
# ping -c 3 racnode2-priv
# ping -c 3 openfiler1
# ping -c 3 openfiler1-priv
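The same tests can be scripted in a simple loop; a sketch assuming the idevelopment.info domain used throughout this guide:

# Ping every host by its fully qualified and short name.
for host in racnode1 racnode2 racnode1-priv racnode2-priv openfiler1 openfiler1-priv
do
    ping -c 3 ${host}.idevelopment.info
    ping -c 3 ${host}
done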

You should not get a response from the nodes using the ping command for the virtual IPs (racnode1-vip, racnode2-vip) or the SCAN IP addresses (racnode-cluster-scan) until after Oracle Clusterware is installed and running. If the ping commands for the public addresses fail, resolve the issue before you proceed.

Verify SCAN Configuration

In this article, I will configure SCAN for round-robin resolution to three manually configured static IP addresses in DNS:

racnode-cluster-scan IN A 192.168.1.187
racnode-cluster-scan IN A 192.168.1.188
racnode-cluster-scan IN A 192.168.1.189

Oracle Corporation strongly recommends configuring three IP addresses considering load balancing and high availability requirements, regardless of the number of servers in the cluster. These virtual IP addresses must all be on the same subnet as the public network in the cluster. The SCAN name must be 15 characters or less in length, not including the domain, and must be resolvable without the domain suffix. For example, "racnode-cluster-scan" must be resolvable as opposed to only "racnode-cluster-scan.idevelopment.info". The virtual IP addresses for SCAN (and the virtual IP address for the node) should not be manually assigned to a network interface on the cluster since Oracle Clusterware is responsible for enabling them after the Oracle grid infrastructure installation. In other words, the SCAN addresses and virtual IP addresses (VIPs) should not respond to ping commands before installation.

Verify the SCAN configuration in DNS using the nslookup command-line utility. Since our DNS is set up to provide round-robin access to the IP addresses resolved by the SCAN entry, run the nslookup command several times to make certain that the round-robin algorithm is functioning properly. The result should be that each time the nslookup is run, it will return the set of three IP addresses in a different order. For example:

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server: 192.168.1.195
Address: 192.168.1.195#53

Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.188
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.189
Name: racnode-cluster-scan.idevelopment.info
Address: 192.168.1.187

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node name (racnode1 or racnode2) is not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

127.0.0.1 racnode1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:

ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation
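
A quick way to double-check the loopback entry on both nodes (a sanity check of my own, not a required step) is:

# The output should list only localhost names -- racnode1/racnode2 must not appear.
grep "^127.0.0.1" /etc/hosts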


Check and turn off UDP ICMP rejections

During the Linux installation process, I indicated not to configure the firewall option. By default, the option to configure a firewall is selected by the installer. This has burned me several times, so I like to double-check that the firewall option is not configured and to ensure UDP ICMP filtering is turned off.

If UDP ICMP is blocked or rejected by the firewall, the Oracle Clusterware software will crash after several minutes of running. When the Oracle Clusterware process fails, you will have something similar to the following in the <machine_name>_evmocr.log file:

08/29/2005 22:17:19
oac_init:2: Could not connect to server, clsc retcode = 9
08/29/2005 22:17:19
a_init:12!: Client init unsuccessful : [32]
ibctx:1:ERROR: INVALID FORMAT
proprinit:problem reading the bootblock or superbloc 22

When experiencing this type of error, the solution is to remove the UDP ICMP (iptables) rejection rule - or to simply have the firewall option turned off. The Oracle Clusterware software will then start to operate normally and not crash. The following commands should be executed as the root user account on both Oracle RAC nodes:

1. Check to ensure that the firewall option is turned off. If the firewall option is stopped (like it is in my example below) you do not have to proceed with the following steps.

[root@racnode1 ~]# /etc/rc.d/init.d/iptables status
Firewall is stopped.

[root@racnode2 ~]# /etc/rc.d/init.d/iptables status
Firewall is stopped.


2. If the firewall option is operating, you will need to first manually disable UDP ICMP rejections:

[root@racnode1 ~]# /etc/rc.d/init.d/iptables stop
Flushing firewall rules:                   [  OK  ]
Setting chains to policy ACCEPT: filter    [  OK  ]
Unloading iptables modules:                [  OK  ]


3. Then, turn UDP ICMP rejections off for all subsequent server reboots (which should always be turned off):

[root@racnode1 ~]# chkconfig iptables off
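
As a final check (my own habit, not a required step), confirm the firewall service is now disabled for all runlevels:

# Every runlevel should report "off" after running chkconfig iptables off.
chkconfig --list iptables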


Cluster Time Synchronization Service

Perform the following Cluster Time Synchronization Service configuration on both Oracle RAC nodes in the cluster.

Oracle Clusterware 11g release 2 and later requires time synchronization across all nodes within a cluster where Oracle RAC is deployed. Oracle provides two options for time synchronization: an operating system configured network time protocol (NTP) or the new Oracle Cluster Time Synchronization Service (CTSS). Oracle Cluster Time Synchronization Service (ctssd) is designed for organizations whose Oracle RAC databases are unable to access NTP services.

Configuring NTP is outside the scope of this article; this guide will therefore rely on the Oracle Cluster Time Synchronization Service as the network time protocol.


Configure Cluster Time Synchronization Service - (CTSS)

If you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then de-configure and de-install the Network Time Protocol (NTP) service.

To deactivate the NTP service, you must stop the existing ntpd service, disable it from the initialization sequences, and remove the ntp.conf file. To complete these steps on Red Hat Enterprise Linux or CentOS, run the following commands as the root user account on both Oracle RAC nodes:

[root@racnode1 ~]# /sbin/service ntpd stop
[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original

Also remove the following file:

[root@racnode1 ~]# rm /var/run/ntpd.pid

This file maintains the pid for the NTP daemon.

When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is automatically installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.

To confirm that ctssd is active after installation, enter the following command as the Grid installation owner (grid):

[grid@racnode1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Configure Network Time Protocol - (only if not using CTSS as documented above)

Please note that this guide will use Cluster Time Synchronization Service for time synchronization (described above) across both Oracle RAC nodes in the cluster. This section is provided for documentation purposes only and can be used by organizations already set up to use NTP within their domain.

If you are using NTP and you prefer to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP initialization file to set the -x flag, which prevents time from being adjusted backward. Restart the network time protocol daemon after you complete this task.

To do this on Oracle Enterprise Linux, Red Hat Linux, and Asianux systems, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:

# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""


Then, restart the NTP service.

# /sbin/service ntpd restart

On SUSE systems, modify the configuration file /etc/sysconfig/ntp with the following settings:

NTPD_OPTIONS="-x -u ntp"

Restart the daemon using the following command:

# service ntp restart
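
Whichever platform you are on, a quick check (my suggestion; adjust the service name to ntpd or ntp for your distribution) confirms the daemon restarted with the -x flag and can see its time sources:

# The ntpd command line should include -x; ntpq -p lists the configured peers.
ps -ef | grep [n]tpd
ntpq -p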

Configure iSCSI Volumes using Openfiler

Perform the following configuration tasks on the network storage server (openfiler1).

Openfiler administration is performed using the Openfiler Storage Control Center — a browser based tool over an https connection on port 446. For example:

https://openfiler1.idevelopment.info:446/

From the Openfiler Storage Control Center home page, log in as an administrator. The default administration login credentials for Openfiler are:

Username: openfiler

Password: password

The first page the administrator sees is the [Status] / [System Overview] screen.

To use Openfiler as an iSCSI storage server, we have to perform six major tasks — set up iSCSI services, configure network access, identify and partition the physical storage, create a new volume group, create all logical volumes, and finally, create new iSCSI targets for each of the logical volumes.

Services

To control services, we use the Openfiler Storage Control Center and navigate to [Services] / [Manage Services]:


Figure 7: Enable iSCSI Openfiler Service

To enable the iSCSI service, click on the 'Enable' link under the 'iSCSI target server' service name. After that, the 'iSCSI target server' status should change to 'Enabled'.

The ietd program implements the user level part of iSCSI Enterprise Target software for building an iSCSI storage system on Linux. With the iSCSI target enabled, we should be able to SSH into the Openfiler server and see the iscsi-target service running:

[root@openfiler1 ~]# service iscsi-target status
ietd (pid 14243) is running...

Network Access Configuration

The next step is to configure network access in Openfiler to identify both Oracle RAC nodes (racnode1 and racnode2) that will need to access the iSCSI volumes through the storage (private) network. Note that iSCSI logical volumes will be created later on in this section. Also note that this step does not actually grant the appropriate permissions to the iSCSI volumes required by both Oracle RAC nodes. That will be accomplished later in this section by updating the ACL for each new logical volume.


As in the previous section, configuring network access is accomplished using the Openfiler Storage Control Center by navigating to [System] / [Network Setup]. The "Network Access Configuration" section (at the bottom of the page) allows an administrator to set up networks and/or hosts that will be allowed to access resources exported by the Openfiler appliance. For the purpose of this article, we will want to add both Oracle RAC nodes individually rather than allowing the entire 192.168.2.0 network to have access to Openfiler resources.

When entering each of the Oracle RAC nodes, note that the 'Name' field is just a logical name used for reference only. As a convention when entering nodes, I simply use the node name defined for that IP address. Next, when entering the actual node in the 'Network/Host' field, always use its IP address even though its host name may already be defined in your /etc/hosts file or DNS. Lastly, when entering actual hosts in our Class C network, use a subnet mask of 255.255.255.255.

It is important to remember that you will be entering the IP address of the private network (eth1) for each of the RAC nodes in the cluster.

The following image shows the results of adding both Oracle RAC nodes:

Figure 8: Configure Openfiler Network Access for Oracle RAC Nodes

Physical Storage

In this section, we will be creating the three iSCSI volumes to be used as shared storage by both of the Oracle RAC nodes in the cluster. This involves multiple steps that will be performed on the internal 73GB 15K SCSI hard disk connected to the Openfiler server.

Storage devices like internal IDE/SATA/SCSI/SAS disks, storage arrays, external USB drives, external FireWire drives, or ANY other storage can be connected to the Openfiler server and served to the clients. Once these devices are discovered at the OS level, Openfiler Storage Control Center can be used to set up and manage all of that storage.

In our case, we have a 73GB internal SCSI hard drive for our shared storage needs. On the Openfiler server this drive is seen as /dev/sdb (MAXTOR ATLAS15K2_73SCA). To see this and to start the process of creating our iSCSI volumes, navigate to [Volumes] / [Block Devices] from the Openfiler Storage Control Center:


Figure 9: Openfiler Physical Storage - Block Device Management

Partitioning the Physical Disk

The first step we will perform is to create a single primary partition on the /dev/sdb internal hard disk. By clicking on the /dev/sdb link, we are presented with the options to 'Edit' or 'Create' a partition. Since we will be creating a single primary partition that spans the entire disk, most of the options can be left to their default setting where the only modification would be to change the 'Partition Type' from 'Extended partition' to 'Physical volume'. Here are the values I specified to create the primary partition on /dev/sdb:

Physical Disk Primary Partition

Mode:               Primary
Partition Type:     Physical volume
Starting Cylinder:  1
Ending Cylinder:    8924

The size now shows 68.36 GB. To accept that we click on the [Create] button. This results in a new partition (/dev/sdb1) on our internal hard disk:


Figure 10: Partition the Physical Volume

Volume Group Management

The next step is to create a Volume Group. We will be creating a single volume group named racdbvg that contains the newly created primary partition.

From the Openfiler Storage Control Center, navigate to [Volumes] / [Volume Groups]. There we would see any existing volume groups, or none as in our case. Using the Volume Group Management screen, enter the name of the new volume group (racdbvg), click on the check-box in front of /dev/sdb1 to select that partition, and finally click on the [Add volume group] button. After that we are presented with the list that now shows our newly created volume group named "racdbvg":


Figure 11: New Volume Group Created

Logical Volumes

We can now create the three logical volumes in the newly created volume group (racdbvg).

From the Openfiler Storage Control Center, navigate to [Volumes] / [Add Volume]. There we will see the newly created volume group (racdbvg) along with its block storage statistics. Also available at the bottom of this screen is the option to create a new volume in the selected volume group - (Create a volume in "racdbvg"). Use this screen to create the following three iSCSI logical volumes. After creating each logical volume, the application will point you to the "Manage Volumes" screen. You will then need to click back to the "Add Volume" tab to create the next logical volume until all three iSCSI volumes are created:

iSCSI / Logical Volumes

Volume Name    Volume Description           Required Space (MB)   Filesystem Type
racdb-crs1     racdb - ASM CRS Volume 1     2,208                 iSCSI
racdb-data1    racdb - ASM Data Volume 1    33,888                iSCSI
racdb-fra1     racdb - ASM FRA Volume 1     33,888                iSCSI

In effect we have created three iSCSI disks that can now be presented to iSCSI clients (racnode1 and racnode2) on the network. The "Manage Volumes" screen should look as follows:

Figure 12: New Logical (iSCSI) Volumes

iSCSI Targets

At this point, we have three iSCSI logical volumes. Before an iSCSI client can have access to them, however, an iSCSI target will need to be created for each of these three volumes. Each iSCSI logical volume will be mapped to a specific iSCSI target and the appropriate network access permissions to that target will be granted to both Oracle RAC nodes. For the purpose of this article, there will be a one-to-one mapping between an iSCSI logical volume and an iSCSI target.

There are three steps involved in creating and configuring an iSCSI target — create a unique Target IQN (basically, the universal name for the new iSCSI target), map one of the iSCSI logical volumes created in the previous section to the newly created iSCSI target, and finally, grant both of the Oracle RAC nodes access to the new iSCSI target. Please note that this process will need to be performed for each of the three iSCSI logical volumes created in the previous section.


For the purpose of this article, the following table lists the new iSCSI target names (the Target IQN) and which iSCSI logical volume it will be mapped to:

iSCSI Target / Logical Volume Mappings

Target IQN                              iSCSI Volume Name   Volume Description
iqn.2006-01.com.openfiler:racdb.crs1    racdb-crs1          racdb - ASM CRS Volume 1
iqn.2006-01.com.openfiler:racdb.data1   racdb-data1         racdb - ASM Data Volume 1
iqn.2006-01.com.openfiler:racdb.fra1    racdb-fra1          racdb - ASM FRA Volume 1

We are now ready to create the three new iSCSI targets — one for each of the iSCSI logical volumes. The example below illustrates the three steps required to create a new iSCSI target by creating the Oracle Clusterware / racdb-crs1 target (iqn.2006-01.com.openfiler:racdb.crs1). This three step process will need to be repeated for each of the three new iSCSI targets listed in the table above.

Create New Target IQN

From the Openfiler Storage Control Center, navigate to [Volumes] / [iSCSI Targets]. Verify the grey sub-tab "Target Configuration" is selected. This page allows you to create a new iSCSI target. A default value is automatically generated for the name of the new iSCSI target (better known as the "Target IQN"). An example Target IQN is "iqn.2006-01.com.openfiler:tsn.ae4683b67fd3":

Figure 13: Create New iSCSI Target : Default Target IQN

I prefer to replace the last segment of the default Target IQN with something more meaningful. For the first iSCSI target (racdb-crs1), I will modify the default Target IQN by replacing the string "tsn.ae4683b67fd3" with "racdb.crs1" as shown in Figure 14 below:


Figure 14: Create New iSCSI Target : Replace Default Target IQN

Once you are satisfied with the new Target IQN, click the [Add] button. This will create a new iSCSI target and then bring up a page that allows you to modify a number of settings for the new iSCSI target. For the purpose of this article, none of the settings for the new iSCSI target need to be changed.

LUN Mapping

After creating the new iSCSI target, the next step is to map the appropriate iSCSI logical volume to it. Under the "Target Configuration" sub-tab, verify the correct iSCSI target is selected in the section "Select iSCSI Target". If not, use the pull-down menu to select the correct iSCSI target and click the [Change] button.

Next, click on the grey sub-tab named "LUN Mapping" (next to "Target Configuration" sub-tab). Locate the appropriateiSCSI logical volume (/dev/racdbvg/racdb-crs1 in this first example) and click the [Map] button. You do not need to

change any settings on this page. Your screen should look similar to Figure 15 after clicking the "Map" button for volume/dev/racdbvg/racdb-crs1:


Figure 15: Create New iSCSI Target : Map LUN

Network ACL

Before an iSCSI client can have access to the newly created iSCSI target, it needs to be granted the appropriate permissions. A while back, we configured network access in Openfiler for two hosts (the Oracle RAC nodes). These are the two nodes that will need to access the new iSCSI targets through the storage (private) network. We now need to grant both of the Oracle RAC nodes access to the new iSCSI target.

Click on the grey sub-tab named "Network ACL" (next to "LUN Mapping" sub-tab). For the current iSCSI target, change the"Access" for both hosts from 'Deny' to 'Allow' and click the [Update] button:


Figure 16: Create New iSCSI Target : Update Network ACL

Go back to the Create New Target IQN section and perform these same three tasks for the remaining two iSCSI logical volumes while substituting the values found in the "iSCSI Target / Logical Volume Mappings" table (namely, the value in the 'Target IQN' column).

Configure iSCSI Volumes on Oracle RAC Nodes

Configure the iSCSI initiator on both Oracle RAC nodes in the cluster. Creating partitions, however, should only be executed on one of the nodes in the RAC cluster.

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In our case, the clients are two Linux servers, racnode1 and racnode2, running Red Hat Enterprise Linux 5.5 or CentOS 5.5.

In this section we will be configuring the iSCSI software initiator on both of the Oracle RAC nodes. RHEL / CentOS 5.5 includes the Open-iSCSI iSCSI software initiator which can be found in the iscsi-initiator-utils RPM. This is a change from previous versions of RHEL / CentOS (4.x) which included the Linux iscsi-sfnet software driver developed as part of the Linux-iSCSI Project. All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator will be configured to automatically log in to the network storage server (openfiler1) and discover the iSCSI volumes created in the previous section. We will then go through the steps of creating persistent local SCSI device names (i.e. /dev/iscsi/crs1) for each of the iSCSI target names discovered using udev. Having a consistent local SCSI device name and which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Before we can do any of this, however, we must first install the iSCSI initiator software.

This guide makes use of ASMLib 2.0 which is a support library for the Automatic Storage Management (ASM) feature of the Oracle Database. ASMLib will be used to label all iSCSI volumes used in this guide. By default, ASMLib already provides persistent paths and permissions for storage devices used with ASM. This feature eliminates the need for updating udev or devlabel files with storage device paths and permissions. For the purpose of this article and in practice, I still opt to create persistent local SCSI device names for each of the iSCSI target names discovered using udev. This provides a means of self-documentation which helps to quickly identify the name and location of each volume.

Installing the iSCSI (initiator) service

With Red Hat Enterprise Linux 5.5 or CentOS 5.5, the Open-iSCSI iSCSI software initiator does not get installed by default. The software is included in the iscsi-initiator-utils package which can be found on CD/DVD #1. To determine if this package is installed (which in most cases, it will not be), perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initia

If the iscsi-initiator-utils package is not installed, load CD/DVD #1 into each of the Oracle RAC nodes and perform the following:

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/CentOS
[root@racnode1 ~]# rpm -Uvh iscsi-initiator-utils-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Verify the iscsi-initiator-utils package is now installed on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initia
iscsi-initiator-utils-6.2.0.871-0.16.el5 (x86_64)

[root@racnode2 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initia
iscsi-initiator-utils-6.2.0.871-0.16.el5 (x86_64)

Configure the iSCSI (initiator) service

After verifying that the iscsi-initiator-utils package is installed, start the iscsid service on both Oracle RAC nodes and enable it to automatically start when the system boots. We will also configure the iscsi service to start automatically, which logs in to the iSCSI targets needed at system startup.

[root@racnode1 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]

[root@racnode1 ~]# chkconfig iscsid on
[root@racnode1 ~]# chkconfig iscsi on

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server. This should be performed on both Oracle RAC nodes to verify the configuration is functioning properly:

[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.fra1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.data1


Manually Log In to iSCSI Targets

At this point, the iSCSI initiator service has been started and each of the Oracle RAC nodes were able to discover the available targets from the Openfiler network storage server. The next step is to manually log in to each of the available iSCSI targets which can be done using the iscsiadm command-line interface. This needs to be run on both Oracle RAC nodes. Note that I had to specify the IP address and not the host name of the network storage server (openfiler1-priv) — I believe this is required given the discovery (above) shows the targets using the IP address.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 -l
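
To confirm that all three logins succeeded, you can list the active iSCSI sessions (exact output varies slightly by Open-iSCSI version):

# Each of the three targets should appear once on every Oracle RAC node.
iscsiadm -m session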

Configure Automatic Log In

The next step is to ensure the client will automatically log in to each of the targets listed above when the machine is booted (or the iSCSI initiator service is started/restarted). As with the manual log in process described above, perform the following on both Oracle RAC nodes:

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.data1 -p 192.168.2.195 --op update -n node.startup -v automatic
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.fra1 -p 192.168.2.195 --op update -n node.startup -v automatic
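
To verify the setting took effect, dump the node record for a target and filter on the startup attribute (a quick check using standard Open-iSCSI commands):

# node.startup should now report "automatic" for each of the three targets.
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.195 | grep node.startup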

Create Persistent Local SCSI Device Names

In this section, we will go through the steps to create persistent local SCSI device names for each of the iSCSI target names. This will be done using udev. Having a consistent local SCSI device name and which iSCSI target it maps to, helps to differentiate between the three volumes when configuring ASM. Although this is not a strict requirement since we will be using ASMLib 2.0 for all volumes, it provides a means of self-documentation to quickly identify the name and location of each iSCSI volume.

By default, when either of the Oracle RAC nodes boot and the iSCSI initiator service is started, it will automatically log in to each of the iSCSI targets configured in a random fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:racdb.crs1 may get mapped to /dev/sdb. I can actually determine the current mappings for all targets by looking at the /dev/disk/by-path directory:

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sdc

Using the output from the above listing, we can establish the following current mappings:

Current iSCSI Target Name to Local SCSI Device Name Mappings

iSCSI Target Name                       Local SCSI Device Name
iqn.2006-01.com.openfiler:racdb.crs1    /dev/sdb
iqn.2006-01.com.openfiler:racdb.data1   /dev/sdd
iqn.2006-01.com.openfiler:racdb.fra1    /dev/sdc


This mapping, however, may change every time the Oracle RAC node is rebooted. For example, after a reboot it may be determined that the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1 gets mapped to the local SCSI device /dev/sdc. It is therefore impractical to rely on using the local SCSI device name given there is no way to predict the iSCSI target mappings after a reboot.

What we need is a consistent device name we can reference (i.e. /dev/iscsi/crs1) that will always point to the appropriate iSCSI target through reboots. This is where the Dynamic Device Management tool named udev comes in.

udev provides a dynamic device directory using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules that match may provide additional device information or specify a device node name and multiple symlink names and instruct udev to run additional programs (a SHELL script for example) as part of the device event handling process.

The first step is to create a new rules file. The file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line of name=value pairs used to receive events we are interested in. It will also define a call-out SHELL script (/etc/udev/scripts/iscsidev.sh) to handle the event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on both Oracle RAC nodes:

# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"
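
In this rule, %b expands to the bus ID handed to the call-out script and %c to whatever that script prints on stdout (the short target name, as we will see below). If you want to dry-run the rule against a device without triggering a real event, the udevtest utility shipped with udev on RHEL / CentOS 5 can help; a hypothetical example assuming the iSCSI disk is currently seen as sdb:

# Simulate udev event handling for /block/sdb and print the rules
# that match and the symlinks that would be created (no changes are made).
udevtest /block/sdb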

We now need to create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on both Oracle RAC nodes where udev scripts can be stored:

[root@racnode1 ~]# mkdir -p /etc/udev/scripts

[root@racnode2 ~]# mkdir -p /etc/udev/scripts

Next, create the UNIX shell script /etc/udev/scripts/iscsidev.sh on both Oracle RAC nodes:

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

After creating the UNIX SHELL script, make it executable:


[root@racnode1 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh

[root@racnode2 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh

Now that udev is configured, restart the iSCSI service on both Oracle RAC nodes:

[root@racnode1 ~]# service iscsi stop
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode1 ~]# service iscsi start
iscsid dead but pid file exists
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

[root@racnode2 ~]# service iscsi stop
Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
Stopping iSCSI daemon:                                     [  OK  ]

[root@racnode2 ~]# service iscsi start
iscsid dead but pid file exists
Starting iSCSI daemon:                                     [  OK  ]
                                                           [  OK  ]
Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.data1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.195,3260]: successful
Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.fra1, portal: 192.168.2.195,3260]: successful
                                                           [  OK  ]

Let's see if our hard work paid off:

[root@racnode1 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:32 part -> ../../sdc

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:32 part -> ../../sdd

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:32 part -> ../../sde

[root@racnode2 ~]# ls -l /dev/iscsi/*
/dev/iscsi/crs1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:36 part -> ../../sdd

/dev/iscsi/data1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:36 part -> ../../sdc

/dev/iscsi/fra1:
total 0
lrwxrwxrwx 1 root root 9 Nov  6 17:36 part -> ../../sde

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device names that can be used to reference the iSCSI targets. For example, we can safely assume that the device name /dev/iscsi/crs1/part will always reference the iSCSI target iqn.2006-01.com.openfiler:racdb.crs1. We now have a consistent iSCSI target name to local device name mapping which is described in the following table:

iSCSI Target Name to Local Device Name Mappings

iSCSI Target Name                       Local Device Name
iqn.2006-01.com.openfiler:racdb.crs1    /dev/iscsi/crs1/part
iqn.2006-01.com.openfiler:racdb.data1   /dev/iscsi/data1/part
iqn.2006-01.com.openfiler:racdb.fra1    /dev/iscsi/fra1/part

Create Partitions on iSCSI Volumes

We now need to create a single primary partition on each of the iSCSI volumes that spans the entire size of the volume. As mentioned earlier in this article, I will be using Automatic Storage Management (ASM) to store the shared files required for Oracle Clusterware, the physical database files (data/index files, online redo log files, and control files), and the Fast Recovery Area (FRA) for the clustered database.

The Oracle Clusterware shared files (OCR and voting disk) will be stored in an ASM disk group named +CRS which will be configured for external redundancy. The physical database files for the clustered database will be stored in an ASM disk group named +RACDB_DATA which will also be configured for external redundancy. Finally, the Fast Recovery Area (RMAN backups and archived redo log files) will be stored in a third ASM disk group named +FRA which too will be configured for external redundancy.

The following table lists the three ASM disk groups that will be created and which iSCSI volume they will contain:

Oracle Shared Drive Configuration

File Types                  ASM Diskgroup Name   iSCSI Target (short) Name   ASM Redundancy   Size   ASMLib Volume Name
OCR and Voting Disk         +CRS                 crs1                        External         2GB    ORCL:CRSVOL1
Oracle Database Files       +RACDB_DATA          data1                       External         32GB   ORCL:DATAVOL1
Oracle Fast Recovery Area   +FRA                 fra1                        External         32GB   ORCL:FRAVOL1


As shown in the table above, we will need to create a single Linux primary partition on each of the three iSCSI volumes. The fdisk command is used in Linux for creating (and removing) partitions. For each of the three iSCSI volumes, you can use the default values when creating the primary partition as the default action is to use the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF disklabel).

In this example, I will be running the fdisk command from racnode1 to create a single primary partition on each iSCSI target using the local device names created by udev in the previous section:

/dev/iscsi/crs1/part

/dev/iscsi/data1/part

/dev/iscsi/fra1/part

Creating the single partition on each of the iSCSI volumes must only be run from one of the nodes in the Oracle RAC cluster! (i.e. racnode1)

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/crs1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1012, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1012, default 1012): 1012

Command (m for help): p

Disk /dev/iscsi/crs1/part: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/crs1/part1                1        1012     2258753   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/data1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/data1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                 Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/data1/part1                1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# ---------------------------------------

[root@racnode1 ~]# fdisk /dev/iscsi/fra1/part
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-33888, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-33888, default 33888): 33888

Command (m for help): p

Disk /dev/iscsi/fra1/part: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

                Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/fra1/part1                1       33888    34701296   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Verify New Partitions

After creating all required partitions from racnode1, you should now inform the kernel of the partition changes using the following command as the root user account from all remaining nodes in the Oracle RAC cluster (racnode2). Note that the mapping of iSCSI target names discovered from Openfiler and the local SCSI device name will be different on both Oracle RAC nodes. This is not a concern and will not cause any problems since we will not be using the local SCSI device names but rather the local device names created by udev in the previous section.

From racnode2, run the following commands:

[root@racnode2 ~]# partprobe

[root@racnode2 ~]# fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       19452   156143767+  8e  Linux LVM

Disk /dev/sdb: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       33888    34701296   83  Linux

Disk /dev/sdc: 35.5 GB, 35534143488 bytes
64 heads, 32 sectors/track, 33888 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       33888    34701296   83  Linux

Disk /dev/sdd: 2315 MB, 2315255808 bytes
72 heads, 62 sectors/track, 1012 cylinders
Units = cylinders of 4464 * 512 = 2285568 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1        1012     2258753   83  Linux

As a final step you should run the following command on both Oracle RAC nodes to verify that udev created the new symbolic links for each new partition:

[root@racnode1 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sde1

[root@racnode2 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdd
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdd1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0 -> ../../sdc
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.data1-lun-0-part1 -> ../../sdc1
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0 -> ../../sde
ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:racdb.fra1-lun-0-part1 -> ../../sde1

The listing above shows that udev did indeed create new device names for each of the new partitions. We will be using these new device names when configuring the volumes for ASMLib later in this guide:

/dev/iscsi/crs1/part1

/dev/iscsi/data1/part1

/dev/iscsi/fra1/part1

Create Job Role Separation Operating System Privileges Groups, Users, and Directories

Perform the following user, group, and directory configuration and shell limit tasks for the grid and oracle users on both Oracle RAC nodes in the cluster.

This section provides the instructions on how to create the operating system users and groups to install all Oracle software using a Job Role Separation configuration. The commands in this section should be performed on both Oracle RAC nodes as root to create these groups, users, and directories. Note that the group and user IDs must be identical on both Oracle RAC nodes in the cluster. Check to make sure that the group and user IDs you want to use are available on each cluster member node, and confirm that the primary group for each grid infrastructure for a cluster installation owner has the same name and group ID which for the purpose of this guide is oinstall (GID 1000).
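
One simple way to compare IDs across the two nodes (my own sanity check, not part of the formal steps) is to list the relevant entries from /etc/group and /etc/passwd on each node once the accounts below have been created, and verify the output matches:

# Run on both racnode1 and racnode2 -- the GIDs and UIDs reported must be identical.
grep -E "^(oinstall|asmadmin|asmdba|asmoper|dba|oper):" /etc/group
grep -E "^(grid|oracle):" /etc/passwd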

A Job Role Separation privileges configuration of Oracle is a configuration with operating system groups and users that divide administrative access privileges to the Oracle grid infrastructure installation from other administrative privileges users and groups associated with other Oracle installations (e.g. the Oracle database software). Administrative privileges access is granted by membership in separate operating system groups, and installation privileges are granted by using different installation owners for each Oracle installation.

One OS user will be created to own each Oracle software product — "grid" for the Oracle grid infrastructure owner and "oracle" for the Oracle RAC software. Throughout this article, a user created to own the Oracle grid infrastructure binaries is called the grid user. This user will own both the Oracle Clusterware and Oracle Automatic Storage Management binaries. The user created to own the Oracle database binaries (Oracle RAC) will be called the oracle user. Both Oracle software owners must have the Oracle Inventory group (oinstall) as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The Oracle RAC software owner must also have the OSDBA group and the optional OSOPER group as secondary groups.

This type of configuration is optional but highly recommended by Oracle for organizations that need to restrict user access to Oracle software by responsibility areas for different administrator users. For example, a small organization could simply allocate operating system user privileges so that you can use one administrative user and one group for operating system authentication for all system privileges on the storage and database tiers. With this type of configuration, you can designate the oracle user to be the sole installation owner for all Oracle software (Grid infrastructure and the Oracle database software), and designate oinstall to be the single group whose members are granted all system privileges for Oracle Clusterware, Automatic Storage Management, and all Oracle Databases on the servers, and all privileges as installation owners. Other organizations, however, have specialized system roles who will be responsible for installing the Oracle software such as system administrators, network administrators, or storage administrators. These different administrative users can configure a system in preparation for an Oracle grid infrastructure for a cluster installation, and complete all configuration tasks that require operating system root privileges. When grid infrastructure installation and configuration is completed successfully, a system administrator should only need to provide configuration information and to grant access to the database administrator to run scripts as root during an Oracle RAC installation.

The following O/S groups will be created to support job role separation:

Description                                 OS Group Name   OS Users Assigned to this Group   Oracle Privilege
Oracle Inventory and Software Owner         oinstall        grid, oracle
Oracle Automatic Storage Management Group   asmadmin        grid                              SYSASM
ASM Database Administrator Group            asmdba          grid, oracle                      SYSDBA for ASM
ASM Operator Group                          asmoper         grid                              SYSOPER for ASM
Database Administrator                      dba             oracle                            SYSDBA
Database Operator                           oper            oracle                            SYSOPER

Oracle Inventory Group (typically oinstall)

Members of the OINSTALL group are considered the "owners" of the Oracle software and are granted privileges to write to the Oracle central inventory (oraInventory). When you install Oracle software on a Linux system for the first time, OUI creates the /etc/oraInst.loc file. This file identifies the name of the Oracle Inventory group (by default, oinstall), and the path of the Oracle Central Inventory directory.

By default, if an oraInventory group does not exist, then the installer lists the primary group of the installation owner for the grid infrastructure for a cluster as the oraInventory group. Ensure that this group is available as a primary group for all planned Oracle software installation owners. For the purpose of this guide, the grid and oracle installation owners must be configured with oinstall as their primary group.
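
For reference, after the first install completes, /etc/oraInst.loc will typically contain entries like the following (the inventory path shown is an assumption based on the OFA layout used in this guide):

# /etc/oraInst.loc -- created by OUI during the first installation
inventory_loc=/u01/app/oraInventory
inst_group=oinstall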

The Oracle Automatic Storage Management Group (typically asmadmin)

This is a required group. Create this group as a separate group if you want to have separate administration privilege groups for Oracle ASM and Oracle Database administrators. In Oracle documentation, the operating system group whose members are granted privileges is called the OSASM group, and in code examples, where there is a group specifically created to grant this privilege, it is referred to as asmadmin.

Members of the OSASM group can use SQL to connect to an Oracle ASM instance as SYSASM using operating system authentication. The SYSASM privilege that was introduced in Oracle ASM 11g release 1 (11.1) is now fully separated from the SYSDBA privilege in Oracle ASM 11g release 2 (11.2). SYSASM privileges no longer provide access privileges on an RDBMS instance. Providing system privileges for the storage tier using the SYSASM privilege instead of the SYSDBA privilege provides a clearer division of responsibility between ASM administration and database administration, and helps to prevent different databases using the same storage from accidentally overwriting each other's files. The SYSASM privileges permit mounting and dismounting disk groups, and other storage administration tasks.

The ASM Database Administrator group (OSDBA for ASM, typically asmdba)

Members of the ASM Database Administrator group (OSDBA for ASM) are granted a subset of the SYSASM privileges: read and write access to files managed by Oracle ASM. The grid infrastructure installation owner (grid) and all Oracle Database software owners (oracle) must be a member of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.

Members of the ASM Operator Group (OSOPER for ASM, typically asmoper)

This is an optional group. Create this group if you want a separate group of operating system users to have a limited set of Oracle ASM instance administrative privileges (the SYSOPER for ASM privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM privilege.

If you want to use the ASM Operator group to create an ASM administrator group with fewer privileges than the default asmadmin group, then you must choose the Advanced installation type to install the Grid infrastructure software. In this case, OUI prompts you to specify the name of this group. In this guide, this group is asmoper.

If you want to have an OSOPER for ASM group, then the grid infrastructure for a cluster software owner (grid) must be a member of this group.

Database Administrator (OSDBA, typically dba)

Members of the OSDBA group can use SQL to connect to an Oracle instance as SYSDBA using operating system authentication. Members of this group can perform critical database administration tasks, such as creating the database and instance startup and shutdown. The default name for this group is dba. The SYSDBA system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself.

The SYSDBA system privilege should not be confused with the database role DBA. The DBA role does not include the SYSDBA or SYSOPER system privileges.

Database Operator (OSOPER, typically oper)

Members of the OSOPER group can use SQL to connect to an Oracle instance as SYSOPER using operating system authentication. Members of this optional group have a limited set of database administrative privileges such as managing and running backups. The default name for this group is oper. The SYSOPER system privilege allows access to a database instance even when the database is not open. Control of this privilege is totally outside of the database itself. To use this group, choose the Advanced installation type to install the Oracle database software.

Create Groups and User for Grid Infrastructure

Let's start this section by creating the recommended OS groups and user for Grid Infrastructure on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1000 oinstall
[root@racnode1 ~]# groupadd -g 1200 asmadmin
[root@racnode1 ~]# groupadd -g 1201 asmdba
[root@racnode1 ~]# groupadd -g 1202 asmoper
[root@racnode1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid

[root@racnode1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)


-------------------------------------------------

[root@racnode2 ~]# groupadd -g 1000 oinstall
[root@racnode2 ~]# groupadd -g 1200 asmadmin
[root@racnode2 ~]# groupadd -g 1201 asmdba
[root@racnode2 ~]# groupadd -g 1202 asmoper
[root@racnode2 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash grid

[root@racnode2 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

Set the password for the grid account on both Oracle RAC nodes:

[root@racnode1 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd grid
Changing password for user grid.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Create Login Script for the grid User Account

Log in to both Oracle RAC nodes as the grid user account and create the following login script (.bash_profile):

When setting the Oracle environment variables for each Oracle RAC node in the login script, make certain to assign each RAC node a unique Oracle SID for ASM:

racnode1: ORACLE_SID=+ASM1
racnode2: ORACLE_SID=+ASM2

[root@racnode1 ~]# su - grid

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User:      grid
# Application:  Oracle Grid Infrastructure
# Version:      Oracle 11g release 2
# ---------------------------------------------------

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"


# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID)
# for the Automatic Storage Management (ASM) instance
# running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. +ASM1, +ASM2,...)
# ---------------------------------------------------
ORACLE_SID=+ASM1; export ORACLE_SID

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# installations. The Oracle base directory for the
# grid installation owner is the location where
# diagnostic and administrative logs, and other logs
# associated with Oracle ASM and Oracle Clusterware
# are stored.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE

# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Grid Infrastructure software. For grid
# infrastructure for a cluster installations, the Grid
# home must not be placed under one of the Oracle base
# directories, or under Oracle home directories of
# Oracle Database installation owners, or in the home
# directory of an installation owner. During
# installation, ownership of the path to the Grid
# home is changed to root. This change causes
# permission errors for other installations.
# ---------------------------------------------------
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
# SQLPATH=/u01/app/oracle/dba_scripts/common/sql; export SQLPATH

# ---------------------------------------------------# ORACLE_TERM# ---------------------------------------------------# Defines a terminal definition. If not set, it# defaults to the value of your TERM environment# variable. Used by all character mode products. # ---------------------------------------------------ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------# NLS_DATE_FORMAT# ---------------------------------------------------# Specifies the default date format to use with the# TO_CHAR and TO_DATE functions. The default value of# this parameter is determined by NLS_TERRITORY. The# value of this parameter can be any valid date# format mask, and the value must be surrounded by # double quotation marks. For example:## NLS_DATE_FORMAT = "MM/DD/YYYY"## ---------------------------------------------------NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------# TNS_ADMIN# ---------------------------------------------------# Specifies the directory containing the Oracle Net# Services configuration files like listener.ora, # tnsnames.ora, and sqlnet.ora.# ---------------------------------------------------TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------# ORA_NLS11# ---------------------------------------------------# Specifies the directory where the language,# territory, character set, and linguistic definition# files are stored.# ---------------------------------------------------ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------# PATH# ---------------------------------------------------# Used by the shell to locate executable programs;# must include the $ORACLE_HOME/bin directory.# ---------------------------------------------------PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/binPATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/binPATH=${PATH}:/u01/app/oracle/dba_scripts/common/binexport PATH

# ---------------------------------------------------# LD_LIBRARY_PATH# ---------------------------------------------------# Specifies the list of directories that the shared# library loader searches to locate shared object# libraries at runtime.# ---------------------------------------------------LD_LIBRARY_PATH=$ORACLE_HOME/libLD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/libLD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib

DBA Tips Archive for Oracle file:///D:/rac11gr2/CLUSTER_12.shtml

71 of 136 4/18/2011 10:17 PM

Page 72: 11gr2on openfiler

export LD_LIBRARY_PATH

# ---------------------------------------------------# CLASSPATH# ---------------------------------------------------# Specifies the directory or list of directories that# contain compiled Java classes.# ---------------------------------------------------CLASSPATH=$ORACLE_HOME/JRECLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlibCLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlibCLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlibexport CLASSPATH

# ---------------------------------------------------# THREADS_FLAG# ---------------------------------------------------# All the tools in the JDK use green threads as a# default. To specify that native threads should be# used, set the THREADS_FLAG environment variable to# "native". You can revert to the use of green# threads by setting THREADS_FLAG to the value# "green".# ---------------------------------------------------THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------# TEMP, TMP, and TMPDIR# ---------------------------------------------------# Specify the default directories for temporary# files; if set, tools that create temporary files# create them in one of these directories.# ---------------------------------------------------export TEMP=/tmpexport TMPDIR=/tmp

# ---------------------------------------------------# UMASK# ---------------------------------------------------# Set the default file mode creation mask# (umask) to 022 to ensure that the user performing# the Oracle software installation creates files# with 644 permissions.# ---------------------------------------------------umask 022
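
After creating the profile, a quick sanity check (not part of the original procedure, but the expected output follows directly from the values set above) is to open a fresh login shell and confirm the key settings took effect:

[root@racnode1 ~]# su - grid -c 'echo $ORACLE_SID $ORACLE_HOME; umask'
+ASM1 /u01/app/11.2.0/grid
0022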

Create Groups and User for Oracle Database Software

Next, create the recommended OS groups and user for the Oracle database software on both Oracle RAC nodes:

[root@racnode1 ~]# groupadd -g 1300 dba
[root@racnode1 ~]# groupadd -g 1301 oper
[root@racnode1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

[root@racnode1 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

-------------------------------------------------

[root@racnode2 ~]# groupadd -g 1300 dba
[root@racnode2 ~]# groupadd -g 1301 oper
[root@racnode2 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

DBA Tips Archive for Oracle file:///D:/rac11gr2/CLUSTER_12.shtml

72 of 136 4/18/2011 10:17 PM

Page 73: 11gr2on openfiler

[root@racnode2 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

Set the password for the oracle account:

[root@racnode1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

[root@racnode2 ~]# passwd oracle
Changing password for user oracle.
New UNIX password: xxxxxxxxxxx
Retype new UNIX password: xxxxxxxxxxx
passwd: all authentication tokens updated successfully.

Create Login Script for the oracle User Account

Log in to both Oracle RAC nodes as the oracle user account and create the following login script (.bash_profile):

When setting the Oracle environment variables for each Oracle RAC node in the login script, make certain to assign each RAC node a unique Oracle SID:

racnode1: ORACLE_SID=racdb1
racnode2: ORACLE_SID=racdb2

[root@racnode1 ~]# su - oracle

# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User:      oracle
# Application:  Oracle Database Software Owner
# Version:      Oracle 11g release 2
# ---------------------------------------------------

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
      . ~/.bashrc
fi

alias ls="ls -FA"

# ---------------------------------------------------
# ORACLE_SID
# ---------------------------------------------------
# Specifies the Oracle system identifier (SID) for
# the Oracle instance running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. racdb1, racdb2,...)
# ---------------------------------------------------
ORACLE_SID=racdb1; export ORACLE_SID

# ---------------------------------------------------
# ORACLE_UNQNAME
# ---------------------------------------------------
# In previous releases of Oracle Database, you were
# required to set environment variables for
# ORACLE_HOME and ORACLE_SID to start, stop, and
# check the status of Enterprise Manager. With
# Oracle Database 11g release 2 (11.2) and later, you
# need to set the environment variables ORACLE_HOME
# and ORACLE_UNQNAME to use Enterprise Manager.
# Set ORACLE_UNQNAME equal to the database unique
# name.
# ---------------------------------------------------
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME

# ---------------------------------------------------
# JAVA_HOME
# ---------------------------------------------------
# Specifies the directory of the Java SDK and Runtime
# Environment.
# ---------------------------------------------------
JAVA_HOME=/usr/local/java; export JAVA_HOME

# ---------------------------------------------------
# ORACLE_BASE
# ---------------------------------------------------
# Specifies the base of the Oracle directory structure
# for Optimal Flexible Architecture (OFA) compliant
# database software installations.
# ---------------------------------------------------
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

# ---------------------------------------------------
# ORACLE_HOME
# ---------------------------------------------------
# Specifies the directory containing the Oracle
# Database software.
# ---------------------------------------------------
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME

# ---------------------------------------------------
# ORACLE_PATH
# ---------------------------------------------------
# Specifies the search path for files used by Oracle
# applications such as SQL*Plus. If the full path to
# the file is not specified, or if the file is not
# in the current directory, the Oracle application
# uses ORACLE_PATH to locate the file.
# This variable is used by SQL*Plus, Forms and Menu.
# ---------------------------------------------------
ORACLE_PATH=/u01/app/oracle/dba_scripts/common/sql:$ORACLE_HOME/rdbms/admin; export ORACLE_PATH

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
# SQLPATH=/u01/app/oracle/dba_scripts/common/sql; export SQLPATH

# ---------------------------------------------------
# ORACLE_TERM
# ---------------------------------------------------
# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM

# ---------------------------------------------------
# NLS_DATE_FORMAT
# ---------------------------------------------------
# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
#     NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

# ---------------------------------------------------
# TNS_ADMIN
# ---------------------------------------------------
# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

# ---------------------------------------------------
# ORA_NLS11
# ---------------------------------------------------
# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
# Used by the shell to locate executable programs;
# must include the $ORACLE_HOME/bin directory.
# ---------------------------------------------------
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/oracle/dba_scripts/common/bin
export PATH

# ---------------------------------------------------
# LD_LIBRARY_PATH
# ---------------------------------------------------
# Specifies the list of directories that the shared
# library loader searches to locate shared object
# libraries at runtime.
# ---------------------------------------------------
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

# ---------------------------------------------------
# CLASSPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# contain compiled Java classes.
# ---------------------------------------------------
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

# ---------------------------------------------------
# THREADS_FLAG
# ---------------------------------------------------
# All the tools in the JDK use green threads as a
# default. To specify that native threads should be
# used, set the THREADS_FLAG environment variable to
# "native". You can revert to the use of green
# threads by setting THREADS_FLAG to the value
# "green".
# ---------------------------------------------------
THREADS_FLAG=native; export THREADS_FLAG

# ---------------------------------------------------
# TEMP, TMP, and TMPDIR
# ---------------------------------------------------
# Specify the default directories for temporary
# files; if set, tools that create temporary files
# create them in one of these directories.
# ---------------------------------------------------
export TEMP=/tmp
export TMPDIR=/tmp

# ---------------------------------------------------
# UMASK
# ---------------------------------------------------
# Set the default file mode creation mask
# (umask) to 022 to ensure that the user performing
# the Oracle software installation creates files
# with 644 permissions.
# ---------------------------------------------------
umask 022
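
As with the grid account, a quick check of the oracle profile (not part of the original steps) is to start a fresh login shell and list the key variables; output similar to the following is expected given the values set above:

[root@racnode1 ~]# su - oracle -c 'env | grep -E "^ORACLE_(SID|UNQNAME|BASE|HOME)="'
ORACLE_SID=racdb1
ORACLE_UNQNAME=racdb
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1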

Verify That the User nobody Exists

Before installing the software, complete the following procedure to verify that the user nobody exists on both Oracle RAC nodes:

1. To determine if the user exists, enter the following command:

[root@racnode1 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)

[root@racnode2 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)

If this command displays information about the nobody user, then you do not have to create that user.

2. If the user nobody does not exist, then enter the following command to create it:

[root@racnode1 ~]# /usr/sbin/useradd nobody

[root@racnode2 ~]# /usr/sbin/useradd nobody


Create the Oracle Base Directory Path


The final step is to configure an Oracle base path compliant with an Optimal Flexible Architecture (OFA) structure and correct permissions. This will need to be performed on both Oracle RAC nodes in the cluster as root.

This guide assumes that the /u01 directory is being created in the root file system. Please note that this is being done for the sake of brevity and is not recommended as a general practice. Normally, the /u01 directory would be provisioned as a separate file system with either hardware or software mirroring configured.

[root@racnode1 ~]# mkdir -p /u01/app/grid
[root@racnode1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode1 ~]# chown -R grid:oinstall /u01
[root@racnode1 ~]# mkdir -p /u01/app/oracle
[root@racnode1 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode1 ~]# chmod -R 775 /u01

-------------------------------------------------------------

[root@racnode2 ~]# mkdir -p /u01/app/grid
[root@racnode2 ~]# mkdir -p /u01/app/11.2.0/grid
[root@racnode2 ~]# chown -R grid:oinstall /u01
[root@racnode2 ~]# mkdir -p /u01/app/oracle
[root@racnode2 ~]# chown oracle:oinstall /u01/app/oracle
[root@racnode2 ~]# chmod -R 775 /u01
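
A quick way to confirm the ownership and permissions came out as intended (a check I am adding here, not part of the original steps) is to list the directories just created; every entry should show mode 775 with owner grid:oinstall, except /u01/app/oracle which should be owned by oracle:oinstall:

[root@racnode1 ~]# ls -ld /u01 /u01/app /u01/app/grid /u01/app/11.2.0/grid /u01/app/oracle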

At the end of this section, you should have the following on both Oracle RAC nodes:

An Oracle central inventory group, or oraInventory group (oinstall), whose members that have the central inventory group as their primary group are granted permissions to write to the oraInventory directory.

A separate OSASM group (asmadmin), whose members are granted the SYSASM privilege to administer Oracle Clusterware and Oracle ASM.

A separate OSDBA for ASM group (asmdba), whose members include grid and oracle, and who are granted access to Oracle ASM.

A separate OSOPER for ASM group (asmoper), whose members include grid, and who are granted limited Oracle ASM administrator privileges, including the permissions to start and stop the Oracle ASM instance.

An Oracle grid installation for a cluster owner (grid), with the oraInventory group as its primary group, and with the OSASM (asmadmin), OSDBA for ASM (asmdba) and OSOPER for ASM (asmoper) groups as secondary groups.

A separate OSDBA group (dba), whose members are granted the SYSDBA privilege to administer the Oracle Database.

A separate OSOPER group (oper), whose members include oracle, and who are granted limited Oracle database administrator privileges.

An Oracle Database software owner (oracle), with the oraInventory group as its primary group, and with the OSDBA (dba), OSOPER (oper), and the OSDBA for ASM (asmdba) groups as its secondary groups.

An OFA-compliant mount point /u01 owned by grid:oinstall before installation.

An Oracle base for the grid /u01/app/grid owned by grid:oinstall with 775 permissions, and changed during the installation process to 755 permissions. The grid installation owner Oracle base directory is the location where Oracle ASM diagnostic and administrative log files are placed.

A Grid home /u01/app/11.2.0/grid owned by grid:oinstall with 775 (drwxrwxr-x) permissions. These permissions are required for installation, and are changed during the installation process to root:oinstall with 755 permissions (drwxr-xr-x).

During installation, OUI creates the Oracle Inventory directory in the path /u01/app/oraInventory. This path remains owned by grid:oinstall, to enable other Oracle software owners to write to the central inventory.

An Oracle base /u01/app/oracle owned by oracle:oinstall with 775 permissions.

Set Resource Limits for the Oracle Software Installation Users

To improve the performance of the software on Linux systems, you must increase the following resource limits for the Oracle software owner users (grid, oracle):

Shell Limit                                               Item in limits.conf   Hard Limit
------------------------------------------------------    -------------------   ----------
Maximum number of open file descriptors                   nofile                65536
Maximum number of processes available to a single user    nproc                 16384
Maximum size of the stack segment of the process          stack                 10240

To make these changes, run the following as root:

1. On each Oracle RAC node, add the following lines to the /etc/security/limits.conf file (the following example shows the software account owners oracle and grid):

[root@racnode1 ~]# cat >> /etc/security/limits.conf <<EOF
grid     soft    nproc    2047
grid     hard    nproc    16384
grid     soft    nofile   1024
grid     hard    nofile   65536
oracle   soft    nproc    2047
oracle   hard    nproc    16384
oracle   soft    nofile   1024
oracle   hard    nofile   65536
EOF

[root@racnode2 ~]# cat >> /etc/security/limits.conf <<EOF
grid     soft    nproc    2047
grid     hard    nproc    16384
grid     soft    nofile   1024
grid     hard    nofile   65536
oracle   soft    nproc    2047
oracle   hard    nproc    16384
oracle   soft    nofile   1024
oracle   hard    nofile   65536
EOF

2. On each Oracle RAC node, add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

[root@racnode1 ~]# cat >> /etc/pam.d/login <<EOF
session    required     pam_limits.so
EOF

[root@racnode2 ~]# cat >> /etc/pam.d/login <<EOF
session    required     pam_limits.so
EOF

3. Depending on your shell environment, make the following changes to the default shell startup file in order to change ulimit settings for all Oracle installation owners (note that these examples show the users oracle and grid):

For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file by running the following:


[root@racnode1 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

[root@racnode2 ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file by running the following:

[root@racnode1 ~]# cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF

[root@racnode2 ~]# cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
EOF
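
After making these changes, a quick hedged check (not part of the original procedure) is to open a fresh login shell for one of the owners and display the resulting limits; given the /etc/profile additions above, a Bash login for grid or oracle should report the raised values:

[root@racnode1 ~]# su - grid -c 'ulimit -n; ulimit -u'
65536
16384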

Logging In to a Remote System Using X Terminal

This guide requires access to the console of all machines (Oracle RAC nodes and Openfiler) in order to install the operating system and perform several of the configuration tasks. When managing a very small number of servers, it might make sense to connect each server with its own monitor, keyboard, and mouse in order to access its console. However, as the number of servers to manage increases, this solution becomes unfeasible. A more practical solution would be to configure a dedicated device which would include a single monitor, keyboard, and mouse that would have direct access to the console of each machine. This solution is made possible using a Keyboard, Video, Mouse switch, better known as a KVM switch.

After installing the Linux operating system, there are several applications which are needed to install and configure Oracle RAC that use a Graphical User Interface (GUI) and require the use of an X11 display server. The most notable of these GUI applications (better known as X applications) is the Oracle Universal Installer (OUI), although others like the Virtual IP Configuration Assistant (VIPCA) also require the use of an X11 display server.

Given the fact that I created this article on a system that makes use of a KVM switch, I am able to toggle to each node and rely on the native X11 display server for Linux in order to display X applications.


If you are not logged directly on to the graphical console of a node but rather are using a remote client like SSH, PuTTY, or Telnet to connect to the node, any X application will require an X11 display server installed on the client. For example, if you are making a remote terminal connection to racnode1 from a Windows workstation, you would need to install an X11 display server on that Windows client (Xming, for example). If you intend to install the Oracle grid infrastructure and Oracle RAC software from a Windows workstation or other system with an X11 display server installed, then perform the following actions:

1. Start the X11 display server software on the client workstation.

2. Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.

3. From the client workstation, log in to the server where you want to install the software as the Oracle grid infrastructure for a cluster software owner (grid) or the Oracle RAC software owner (oracle).

4. As the software owner (grid, oracle), set the DISPLAY environment variable:

[root@racnode1 ~]# su - grid

[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[grid@racnode1 ~]$ export DISPLAY

[grid@racnode1 ~]$ # TEST X CONFIGURATION BY RUNNING xterm
[grid@racnode1 ~]$ xterm &

Figure 17: Test X11 Display Server on Windows; Run xterm from Node 1 (racnode1)


Configure the Linux Servers for Oracle

Perform the following configuration procedures on both Oracle RAC nodes in the cluster.

This section provides information about setting all OS kernel parameters required for Oracle. The kernel parameters discussed in this section will need to be set on both Oracle RAC nodes in the cluster every time the machine is booted. Instructions for setting all OS kernel parameters required by Oracle in a startup script (/etc/sysctl.conf) will be discussed later in this section.


Overview

This section focuses on configuring both Oracle RAC Linux servers, getting each one prepared for the Oracle 11g release 2 grid infrastructure and Oracle RAC 11g release 2 installations on the Red Hat Enterprise Linux 5 or CentOS 5 platform. This includes verifying enough memory and swap space, setting shared memory and semaphores, setting the maximum number of file handles, setting the IP local port range, and finally, how to activate all kernel parameters for the system.

There are several different ways to set these parameters. For the purpose of this article, I will be making all changes permanent through reboots by placing all values in the /etc/sysctl.conf file.

Memory and Swap Space Considerations

The minimum required RAM on RHEL/CentOS is 1.5 GB for grid infrastructure for a cluster, or 2.5 GB for grid infrastructure for a cluster and Oracle RAC. In this guide, each Oracle RAC node will be hosting Oracle grid infrastructure and Oracle RAC and will therefore require at least 2.5 GB. Each of the Oracle RAC nodes used in this example is equipped with 4 GB of physical RAM.

The minimum required swap space is 1.5 GB. Oracle recommends that you set swap space to 1.5 times the amount of RAM for systems with 2 GB of RAM or less. For systems with 2 GB to 16 GB of RAM, use swap space equal to RAM. For systems with more than 16 GB of RAM, use 16 GB of swap space.

To check the amount of memory you have, type:

[root@racnode1 ~]# cat /proc/meminfo | grep MemTotal
MemTotal:      4038512 kB

[root@racnode2 ~]# cat /proc/meminfo | grep MemTotal
MemTotal:      4038512 kB

To check the amount of swap you have allocated, type:

[root@racnode1 ~]# cat /proc/meminfo | grep SwapTotal
SwapTotal:     6094840 kB

[root@racnode2 ~]# cat /proc/meminfo | grep SwapTotal
SwapTotal:     6094840 kB

If you have less than 4 GB of memory (between your RAM and SWAP), you can add temporary swap space by creating a temporary swap file. This way you do not have to use a raw device or, even more drastic, rebuild your system.

As root, make a file that will act as additional swap space, let's say about 500MB:

[root@racnode1 ~]# dd if=/dev/zero of=tempswap bs=1k count=500000

Next, change the file permissions:

[root@racnode1 ~]# chmod 600 tempswap

Finally, format the "partition" as swap and add it to the swap space:


[root@racnode1 ~]# mke2fs tempswap
[root@racnode1 ~]# mkswap tempswap
[root@racnode1 ~]# swapon tempswap
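
To confirm the temporary swap space is now active (a quick check, not in the original text), list the swap devices or re-query /proc/meminfo; the total should have grown by roughly 500 MB:

[root@racnode1 ~]# swapon -s
[root@racnode1 ~]# cat /proc/meminfo | grep SwapTotal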

Configure Kernel Parameters

The kernel parameters presented in this section are recommended values only as documented by Oracle. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system.

On both Oracle RAC nodes, verify that the kernel parameters described in this section are set to values greater than or equal to the recommended values. Also note that when setting the four semaphore values, all four values need to be entered on one line.

Oracle Database 11g release 2 on RHEL/CentOS 5 requires the kernel parameter settings shown below. The values given are minimums, so if your system uses a larger value, do not change it.

kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576

RHEL/CentOS 5 already comes configured with default values defined for the following kernel parameters. The default values for these two kernel parameters are adequate for Oracle Database 11g release 2 and therefore do not need to be modified:

kernel.shmall
kernel.shmmax

Use the default values if they are the same or larger than the required values.
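One quick way to confirm this on your own systems (a hedged check, not in the original text) is to query the two parameters directly and compare them against the required minimums; on the fresh RHEL/CentOS 5 installs used in this guide, the defaults are already well above them:

[root@racnode1 ~]# sysctl kernel.shmmax kernel.shmall
kernel.shmmax = 68719476736
kernel.shmall = 4294967296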

This article assumes a fresh new install of RHEL/CentOS 5 and as such, many of the required kernel parameters are already set (see above). This being the case, you can simply copy / paste the following to both Oracle RAC nodes while logged in as root:

[root@racnode1 ~]# cat >> /etc/sysctl.conf <<EOF

# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096

# Sets the following semaphore values:
# SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144

# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304

# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144

# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576

# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF

[root@racnode2 ~]# cat >> /etc/sysctl.conf <<EOF

# Controls the maximum number of shared memory segments system wide
kernel.shmmni = 4096

# Sets the following semaphore values:
# SEMMSL_value  SEMMNS_value  SEMOPM_value  SEMMNI_value
kernel.sem = 250 32000 100 128

# Sets the maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744

# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500

# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default=262144

# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max=4194304

# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default=262144

# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max=1048576

# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr=1048576
EOF

Activate All Kernel Parameters for the System


The above command persisted the required kernel parameters through reboots by inserting them in the /etc/sysctl.conf startup file. Linux allows modification of these kernel parameters to the current system while it is up and running, so there's no need to reboot the system after making kernel parameter changes. To activate the new kernel parameter values for the currently running system, run the following as root on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

[root@racnode2 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576

Verify the new kernel parameter values by running the following on both Oracle RAC nodes in the cluster:

[root@racnode1 ~]# /sbin/sysctl -a | grep shm
vm.hugetlb_shm_group = 0
kernel.shmmni = 4096
kernel.shmall = 4294967296
kernel.shmmax = 68719476736

[root@racnode1 ~]# /sbin/sysctl -a | grep sem
kernel.sem = 250 32000 100 128

[root@racnode1 ~]# /sbin/sysctl -a | grep file-max
fs.file-max = 6815744


[root@racnode1 ~]# /sbin/sysctl -a | grep ip_local_port_range
net.ipv4.ip_local_port_range = 9000 65500

[root@racnode1 ~]# /sbin/sysctl -a | grep 'core\.[rw]mem'
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576

Configure RAC Nodes for Remote Access using SSH - (Optional)

Perform the following optional procedures on both Oracle RAC nodes to manually configure passwordless SSH connectivity between the two cluster member nodes as the "grid" and "oracle" user.

One of the best parts about this section of the document is that it is completely optional! That's not to say configuring Secure Shell (SSH) connectivity between the Oracle RAC nodes is not necessary. To the contrary, the Oracle Universal Installer (OUI) uses the secure shell tools ssh and scp during installation to run remote commands on and copy files to the other cluster nodes. During the Oracle software installations, SSH must be configured so that these commands do not prompt for a password. The ability to run SSH commands without being prompted for a password is sometimes referred to as user equivalence.

The reason this section of the document is optional is that the OUI interface in 11g release 2 includes a new feature that can automatically configure SSH during the install phase of the Oracle software for the user account running the installation. The automatic configuration performed by OUI creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure provided by the OUI whenever possible.

In addition to installing the Oracle software, SSH is used after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features that perform configuration operations from local to remote nodes.

Configuring SSH with a passphrase is no longer supported for Oracle Clusterware 11g release 2 and later releases. Passwordless SSH is required for Oracle 11g release 2 and higher.

Since this guide uses grid as the Oracle grid infrastructure software owner and oracle as the owner of the Oracle RAC software, passwordless SSH must be configured for both user accounts.

When SSH is not available, the installer attempts to use the rsh and rcp commands instead of ssh and scp. These services, however, are disabled by default on most Linux systems. The use of RSH will not be discussed in this guide.

Verify SSH Software is Installed

The supported version of SSH for Linux distributions is OpenSSH. OpenSSH should be included in the Linux distribution minimal installation. To confirm that SSH packages are installed, run the following command on both Oracle RAC nodes:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep ssh
openssh-askpass-4.3p2-41.el5 (x86_64)
openssh-clients-4.3p2-41.el5 (x86_64)
openssh-server-4.3p2-41.el5 (x86_64)
openssh-4.3p2-41.el5 (x86_64)

If you do not see a list of SSH packages, then install those packages for your Linux distribution. For example, load CD #1 into each of the Oracle RAC nodes and perform the following to install the OpenSSH packages:

[root@racnode1 ~]# mount -r /dev/cdrom /media/cdrom
[root@racnode1 ~]# cd /media/cdrom/Server
[root@racnode1 ~]# rpm -Uvh openssh-*
[root@racnode1 ~]# cd /
[root@racnode1 ~]# eject

Why Configure SSH User Equivalence Using the Manual Method Option?

So, if the OUI already includes a feature that automates the SSH configuration between the Oracle RAC nodes, then why provide a section on how to manually configure passwordless SSH connectivity? In fact, for the purpose of this article, I decided to forgo manually configuring SSH connectivity in favor of Oracle's automatic methods included in the installer.

One reason to include this section on manually configuring SSH is to make mention of the fact that you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run. Further documentation on preventing installation errors caused by stty commands can be found later in this section.

Another reason you may decide to manually configure SSH for user equivalence is to have the ability to run the Cluster Verification Utility (CVU) prior to installing the Oracle software. The CVU (runcluvfy.sh) is a valuable tool located in the Oracle Clusterware root directory that not only verifies all prerequisites have been met before software installation, it also has the ability to generate shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. The CVU does, however, have a prerequisite of its own: SSH user equivalency must be configured correctly for the user account running the installation. If you intend to configure SSH connectivity using the OUI, know that the CVU utility will fail before having the opportunity to perform any of its critical checks:

[grid@racnode1 ~]$ /media/cdrom/grid/runcluvfy.sh stage -pre crsinst -fixup -n racnode1,racnode2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "racnode1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  racnode1                              yes
  racnode2                              yes
Result: Node reachability check passed from node "racnode1"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racnode2                              failed
  racnode1                              failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Please note that it is not required to run the CVU utility before installing the Oracle software. Starting with Oracle 11g release 2, the installer detects when minimum requirements for installation are not completed and performs the same tasks done by the CVU to generate fixup scripts to resolve incomplete system configuration requirements.

Configure SSH Connectivity Manually on All Cluster Nodes

To reiterate, it is not required to manually configure SSH connectivity before running the OUI. The OUI in 11g release 2 provides an interface during the install for the user account running the installation to automatically create passwordless SSH connectivity between all cluster member nodes. This is the approach recommended by Oracle and the method used in this article. The tasks below to manually configure SSH connectivity between all cluster member nodes are included for documentation purposes only. Keep in mind that this guide uses grid as the Oracle grid infrastructure software owner and oracle as the owner of the Oracle RAC software. If you decide to manually configure SSH connectivity, it should be performed for both user accounts.

The goal in this section is to set up user equivalence for the grid and oracle OS user accounts. User equivalence enables the grid and oracle user accounts to access all other nodes in the cluster (running commands and copying files) without the need for a password. Oracle added support in 10g release 1 for using the SSH tool suite for setting up user equivalence. Before Oracle Database 10g, user equivalence had to be configured using remote shell (RSH).

In the example that follows, the Oracle software owner grid will be configured for passwordless SSH.

Checking Existing SSH Configuration on the System

To determine if SSH is installed and running, enter the following command:

[grid@racnode1 ~]$ pgrep sshd
2535
19852

If SSH is running, then the response to this command is a list of process ID number(s). Run this check on both Oracle RAC nodes in the cluster to verify the SSH daemons are installed and running.

You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.

Automatic passwordless SSH configuration using the OUI creates RSA encryption keys on all nodes of the cluster.

Configuring Passwordless SSH on Cluster Nodes

To configure passwordless SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (grid, oracle), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.

You must configure passwordless SSH separately for each Oracle software installation owner that you intend to use for installation (grid, oracle).

To configure passwordless SSH, complete the following on both Oracle RAC nodes.

Create SSH Directory and SSH Keys

Complete the following steps on each Oracle RAC node.


1. Log in to both Oracle RAC nodes as the software owner (in this example, the grid user):

[root@racnode1 ~]# su - grid

2. To ensure that you are logged in as grid and to verify that the user ID matches the expected user ID you have assigned to the grid user, enter the commands id and id grid. Verify that the Oracle user group and user and the user terminal window process you are using have group and user IDs that are identical. For example:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),

[grid@racnode1 ~]$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),

3. If necessary, create the .ssh directory in the grid user's home directory, and set permissions on it to ensure that only the grid user has read and write permissions:

[grid@racnode1 ~]$ mkdir ~/.ssh
[grid@racnode1 ~]$ chmod 700 ~/.ssh

SSH configuration will fail if the permissions are not set to 700.

4. Enter the following command to generate a DSA key pair (public and private key) for the SSH protocol. At the prompts, accept the default key file location and no passphrase (simply press [Enter] three times!):

[grid@racnode1 ~]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): [Enter]
Enter passphrase (empty for no passphrase): [Enter]
Enter same passphrase again: [Enter]
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
57:21:d7:d5:54:29:4c:12:40:23:36:e9:6e:2f:e6:40 grid@racnode1

SSH with passphrase is not supported for Oracle Clusterware 11g release 2 and later releases. Passwordless SSH is required for Oracle 11g release 2 and higher.

This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file.

Never distribute the private key to anyone not authorized to perform Oracle software installations.


5. Repeat steps 1 through 4 for all remaining nodes that you intend to make a member of the cluster using the DSA key (racnode2).

Add All Keys to a Common authorized_keys File


Now that both Oracle RAC nodes contain a public and private key for DSA, you will need to create an authorized key file (authorized_keys) on one of the nodes. An authorized key file is nothing more than a single file that contains a copy of everyone's (every node's) DSA public key. Once the authorized key file contains all of the public keys for each node, it is then distributed to all of the nodes in the cluster.

The grid user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.

Complete the following steps on one of the nodes in the cluster to create and then distribute the authorized key file. For the purpose of this example, I am using the primary node in the cluster, racnode1:

1. From racnode1, determine if the authorized key file ~/.ssh/authorized_keys already exists in the .ssh directory of the owner's home directory. In most cases this will not exist since this article assumes you are working with a new install. If the file doesn't exist, create it now:

[grid@racnode1 ~]$ touch ~/.ssh/authorized_keys
[grid@racnode1 ~]$ ls -l ~/.ssh
total 8
-rw-r--r-- 1 grid oinstall   0 Nov  7 17:25 authorized_keys
-rw------- 1 grid oinstall 672 Nov  7 16:56 id_dsa
-rw-r--r-- 1 grid oinstall 603 Nov  7 16:56 id_dsa.pub

In the .ssh directory, you should see the id_dsa.pub public key that was created and the blank file authorized_keys.

2. From racnode1, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the public key (~/.ssh/id_dsa.pub) from both Oracle RAC nodes in the cluster to the authorized key file just created (~/.ssh/authorized_keys). Again, this will be done from racnode1. You will be prompted for the grid OS user account password for both Oracle RAC nodes accessed.

The following example is being run from racnode1 and assumes a two-node cluster, with nodes racnode1 and racnode2:

[grid@racnode1 ~]$ ssh racnode1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
grid@racnode1's password: xxxxx

[grid@racnode1 ~]$ ssh racnode2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
grid@racnode2's password: xxxxx

The first time you use SSH to connect to a node from a particular system, you will see a message similar to the following:

The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
Are you sure you want to continue connecting (yes/no)? yes

Enter yes at the prompt to continue. The public hostname will then be added to the known_hosts file in the ~/.ssh directory and you will not see this message again when you connect from this system to the same node.

3. At this point, we have the DSA public key from every node in the cluster contained in the authorized key file (~/.ssh/authorized_keys) on racnode1:

[grid@racnode1 ~]$ ls -l ~/.ssh
total 16
-rw-r--r-- 1 grid oinstall 1206 Nov  7 17:31 authorized_keys
-rw------- 1 grid oinstall  672 Nov  7 16:56 id_dsa
-rw-r--r-- 1 grid oinstall  603 Nov  7 16:56 id_dsa.pub
-rw-r--r-- 1 grid oinstall  808 Nov  7 17:31 known_hosts

We now need to copy the authorized key file to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is racnode2. Use the scp command to copy the authorized key file to all remaining nodes in the cluster:

[grid@racnode1 ~]$ scp ~/.ssh/authorized_keys racnode2:.ssh/authorized_keys
grid@racnode2's password: xxxxx
authorized_keys                             100% 1206     1.2KB/s   00:00

4. Change the permission of the authorized key file for both Oracle RAC nodes in the cluster by logging into the node and running the following:

[grid@racnode1 ~]$ chmod 600 ~/.ssh/authorized_keys

[grid@racnode2 ~]$ chmod 600 ~/.ssh/authorized_keys

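
Steps 2 through 4 above can also be collapsed into a short loop run from racnode1. This is only a sketch of the same procedure; it assumes the two node names used in this guide and will still prompt for the grid password on each connection:

# Gather each node's DSA public key into one authorized_keys file,
# push the file to the other node, then lock down its permissions.
for node in racnode1 racnode2; do
    ssh $node cat ~/.ssh/id_dsa.pub
done >> ~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys racnode2:.ssh/authorized_keys
for node in racnode1 racnode2; do
    ssh $node chmod 600 ~/.ssh/authorized_keys
done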

Enable SSH User Equivalency on Cluster Nodes

After you have copied the authorized_keys file that contains all public keys to each node in the cluster, complete the steps in this section to ensure passwordless SSH connectivity between all cluster member nodes is configured correctly. In this example, the Oracle grid infrastructure software owner, grid, will be used.

When running the test SSH commands in this section, if you see any other messages or text, apart from the date and host name, then the Oracle installation will fail. If any of the nodes prompt for a password or pass phrase, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys and that you have created an Oracle software owner with identical group membership and IDs. Make any changes required to ensure that only the date and host name are displayed when you enter these commands. You should ensure that any part of a login script that generates any output, or asks any questions, is modified so it acts only when the shell is an interactive shell.

1. On the system where you want to run OUI from (racnode1), log in as the grid user.

[root@racnode1 ~]# su - grid


2. If SSH is configured correctly, you will be able to use the ssh and scp commands without being prompted for a password or pass phrase from the terminal session:

[grid@racnode1 ~]$ ssh racnode1 "date;hostname"
Sun Nov 7 18:06:17 EST 2010
racnode1

[grid@racnode1 ~]$ ssh racnode2 "date;hostname"
Sun Nov 7 18:07:55 EST 2010
racnode2



3. Perform the same actions above from the remaining nodes in the Oracle RAC cluster (racnode2) to ensure they too can access all other nodes without being prompted for a password or pass phrase and get added to the known_hosts file:

[root@racnode2 ~]# su - grid

[grid@racnode2 ~]$ ssh racnode1 "date;hostname"
The authenticity of host 'racnode1 (192.168.1.151)' can't be established.
RSA key fingerprint is 66:65:a6:99:5f:cb:6e:60:6a:06:18:b7:fc:c2:cc:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.1.151' (RSA) to the list of known hosts.
Sun Nov 7 18:08:46 EST 2010
racnode1

[grid@racnode2 ~]$ ssh racnode1 "date;hostname"
Sun Nov 7 18:08:53 EST 2010
racnode1

--------------------------------------------------------------------------

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"
The authenticity of host 'racnode2 (192.168.1.152)' can't be established.
RSA key fingerprint is 30:cd:90:ad:18:00:24:c5:42:49:21:b0:1d:59:2d:7b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.1.152' (RSA) to the list of known hosts.
Sun Nov 7 18:11:51 EST 2010
racnode2

[grid@racnode2 ~]$ ssh racnode2 "date;hostname"
Sun Nov 7 18:11:54 EST 2010
racnode2


4. The Oracle Universal Installer is a GUI interface and requires the use of an X server. From the terminal session enabled for user equivalence (the node you will be performing the Oracle installations from), set the environment variable DISPLAY to a valid X Windows display:

Bourne, Korn, and Bash shells:

[grid@racnode1 ~]$ DISPLAY=<Any X-Windows Host>:0
[grid@racnode1 ~]$ export DISPLAY

C shell:

[grid@racnode1 ~]$ setenv DISPLAY <Any X-Windows Host>:0

After setting the DISPLAY variable to a valid X Windows display, you should perform another test of the current terminal session to ensure that X11 forwarding is not enabled:

[grid@racnode1 ~]$ ssh racnode1 hostname
racnode1

[grid@racnode1 ~]$ ssh racnode2 hostname
racnode2



If you are using a remote client to connect to the node performing the installation, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding.", this means that your authorized keys file is configured correctly but your SSH configuration has X11 forwarding enabled. For example:

[grid@racnode1 ~]$ export DISPLAY=melody:0
[grid@racnode1 ~]$ ssh racnode2 hostname
Warning: No xauth data; using fake authentication data for X11 forwarding.
racnode2

Note that having X11 forwarding enabled will cause the Oracle installation to fail. To correct this problem, create a user-level SSH client configuration file for the grid and oracle OS user accounts that disables X11 forwarding:

1. Using a text editor, edit or create the file ~/.ssh/config.

2. Make sure that the ForwardX11 attribute is set to no. For example, insert the following into the ~/.ssh/config file:

Host *
    ForwardX11 no
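
As a quick check that the setting behaves as expected (my addition, not part of the original text), you can also disable X11 forwarding for a single connection with a command-line override; the xauth warning should no longer appear:

[grid@racnode1 ~]$ ssh -o ForwardX11=no racnode2 hostname
racnode2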

Preventing Installation Errors Caused by stty Commands

During an Oracle grid infrastructure or Oracle RAC software installation, OUI uses SSH to run commands and copy files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause makefile and other installation errors if they contain stty commands.

To avoid this problem, you must modify these files in each Oracle installation owner user home directory to suppress all output on STDERR, as in the following examples:

Bourne, Bash, or Korn shell:

if [ -t 0 ]; then
    stty intr ^C
fi

C shell:

test -t 0
if ($status == 0) then
    stty intr ^C
endif

If there are hidden files that contain stty commands that are loaded by the remote shell, then OUI indicates an error and stops the installation.

Install and Configure ASMLib 2.0


The installation and configuration procedures in this section should be performed on both of the Oracle RAC nodes in the cluster. Creating the ASM disks, however, will only need to be performed on a single node within the cluster (racnode1).

In this section, we will install and configure ASMLib 2.0, which is an optional support library for the Automatic Storage Management (ASM) feature of the Oracle Database. In this article, ASM will be used as the shared file system and volume manager for Oracle Clusterware files (OCR and voting disk), Oracle Database files (data, online redo logs, control files, archived redo logs), and the Fast Recovery Area.

Automatic Storage Management simplifies database administration by eliminating the need for the DBA to directly manage potentially thousands of Oracle database files, requiring only the management of groups of disks allocated to the Oracle Database. ASM is built into the Oracle kernel and can be used for both single and clustered instances of Oracle. All of the files and directories to be used for Oracle will be contained in a disk group (or, for the purpose of this article, three disk groups). ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns. ASMLib gives an Oracle Database using ASM more efficient and capable access to the disk groups it is using.

Keep in mind that ASMLib is only a support library for the ASM software. The ASM software will be installed as part of Oracle grid infrastructure later in this guide.

Starting with Oracle grid infrastructure 11g release 2 (11.2), the Automatic Storage Management and Oracle Clusterware software is packaged together in a single binary distribution and installed into a single home directory, which is referred to as the Grid Infrastructure home. The Oracle grid infrastructure software will be owned by the user grid.

So, is ASMLib required for ASM? Not at all. In fact, there are two different methods to configure ASM on Linux:

ASM with ASMLib I/O:

This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. RAW character devices are not required with this method as ASMLib works with block devices.

ASM with Standard Linux I/O:

This method does not make use of ASMLib. Oracle database files are created on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create RAW devices for all disk partitions used by ASM.

In this article, I will be using the "ASM with ASMLib I/O" method. Oracle states in Metalink Note 275315.1 that "ASMLib was provided to enable ASM I/O to Linux disks without the limitations of the standard UNIX I/O API". I plan on performing several tests in the future to identify the performance gains in using ASMLib. Those performance metrics and testing details are out of scope of this article and therefore will not be discussed.

If you would like to learn more about Oracle ASMLib 2.0, visit http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html.

Download ASMLib 2.0 Packages

We start this section by downloading the latest ASMLib 2.0 libraries and the kernel driver from OTN.

Oracle ASMLib Downloads for Red Hat Enterprise Linux Server 5

At the time of this writing, the latest release of the ASMLib kernel driver is 2.0.5-1. We need to download the appropriate version of the ASMLib driver for the Linux kernel, which in my case is kernel 2.6.18-194.el5 running on the x86_64 architecture:

[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
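If you are unsure which driver RPM to pick, the kernel release and architecture reported by uname map directly into the package name. A sketch; the 2.0.5-1 portion reflects the driver release current at the time of this writing:

[root@racnode1 ~]# uname -rm
2.6.18-194.el5 x86_64

# expected driver package: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm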

32-bit (x86) Installations

DBA Tips Archive for Oracle file:///D:/rac11gr2/CLUSTER_12.shtml

93 of 136 4/18/2011 10:17 PM

Page 94: 11gr2on openfiler

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm

Next, download the ASMLib tools:

oracleasm-support-2.1.3-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm

64-bit (x86_64) Installations

oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm

Next, download the ASMLib tools:

oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm

Install ASMLib 2.0 Packages

The installation of ASMLib 2.0 needs to be performed on both nodes in the Oracle RAC cluster as the root user account:

[root@racnode1 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-support-2.1.3-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-194.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

[root@racnode2 ~]# rpm -Uvh oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
> oracleasmlib-2.0.4-1.el5.x86_64.rpm \
> oracleasm-support-2.1.3-1.el5.x86_64.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.18-194.el########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

After installing the ASMLib packages, verify from both Oracle RAC nodes that the software is installed:

[root@racnode1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep oracleasm
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (x86_64)
oracleasmlib-2.0.4-1.el5 (x86_64)
oracleasm-support-2.1.3-1.el5 (x86_64)

[root@racnode2 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep oracleasm
oracleasm-2.6.18-194.el5-2.0.5-1.el5 (x86_64)
oracleasmlib-2.0.4-1.el5 (x86_64)
oracleasm-support-2.1.3-1.el5 (x86_64)

Configure ASMLib

Now that you have installed the ASMLib packages for Linux, you need to configure and load the ASM kernel module. This task needs to be run on both Oracle RAC nodes as the root user account.


The oracleasm command by default is in the path /usr/sbin. The /etc/init.d path, which was used in previous releases, is not deprecated, but the oracleasm binary in that path is now used typically for internal commands. If you enter the command oracleasm configure without the -i flag, then you are shown the current configuration. For example:

[root@racnode1 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""

1. Enter the following command to run the oracleasm initialization script with the configure option:

[root@racnode1 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

The script completes the following tasks:

Creates the /etc/sysconfig/oracleasm configuration file
Creates the /dev/oracleasm mount point
Mounts the ASMLib driver file system

The ASMLib driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.


2. Enter the following command to load the oracleasm kernel module:

[root@racnode1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm


3. Repeat this procedure on all nodes in the cluster (racnode2) where you want to install Oracle RAC.

Create ASM Disks for Oracle

Creating the ASM disks only needs to be performed from one node in the RAC cluster as the root user account. I will be running these commands on racnode1. On the other Oracle RAC node(s), you will need to perform a scandisk to recognize the new volumes. When that is complete, you should then run the oracleasm listdisks command on both Oracle RAC nodes to verify that all ASM disks were created and available.


In the section "Create Partitions on iSCSI Volumes", we configured (partitioned) three iSCSI volumes to be used by ASM. ASM will be used for storing Oracle Clusterware files, Oracle database files like online redo logs, database files, control files, archived redo log files, and the Fast Recovery Area. Use the local device names that were created by udev when configuring the three ASM volumes.

To create the ASM disks using the iSCSI target names to local device name mappings, type the following:

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk CRSVOL1 /dev/iscsi/crs1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk DATAVOL1 /dev/iscsi/data1/part1
Writing disk header: done
Instantiating disk: done

[root@racnode1 ~]# /usr/sbin/oracleasm createdisk FRAVOL1 /dev/iscsi/fra1/part1
Writing disk header: done
Instantiating disk: done

To make the volumes available on the other nodes in the cluster (racnode2), enter the following command as root on each node:

[root@racnode2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATAVOL1"
Instantiating disk "CRSVOL1"
Instantiating disk "FRAVOL1"

We can now test that the ASM disks were successfully created by using the following command on both nodes in the RAC cluster as the root user account. This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks:

[root@racnode1 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1

[root@racnode2 ~]# /usr/sbin/oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
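Optionally, an individual disk label can be verified with the oracleasm querydisk command, which reports whether the named label is a valid ASM disk. A sketch, run as root on either node:

[root@racnode1 ~]# /usr/sbin/oracleasm querydisk DATAVOL1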

Download Oracle RAC 11g release 2 Software

The following download procedures only need to be performed on one node in the cluster (racnode1).

The next step is to download and extract the required Oracle software packages from the Oracle Technology Network (OTN):

If you do not currently have an account with Oracle OTN, you will need to create one. This is a FREE account!

Oracle offers a development and testing license free of charge. No support, however, is provided and the license does not permit production use. A full description of the license agreement is available on OTN.

32-bit (x86) Installations

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linuxsoft-085393.html

64-bit (x86_64) Installations

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html

You will be downloading and extracting the required software from Oracle to only one of the Linux nodes in the cluster, namely racnode1. You will perform all Oracle software installs from this machine. The Oracle installer will copy the required software packages to all other nodes in the RAC configuration using remote access (scp).

Log in to the node that you will be performing all of the Oracle installations from (racnode1) as the appropriate software owner. For example, log in and download the Oracle grid infrastructure software to the directory /home/grid/software/oracle as the grid user. Next, log in and download the Oracle Database and Oracle Examples (optional) software to the /home/oracle/software/oracle directory as the oracle user.

Download and Extract the Oracle Software

Download the following software packages:

Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux

Oracle Database 11g Release 2 (11.2.0.1.0) for Linux

Oracle Database 11g Release 2 Examples (optional)

All downloads are available from the same page.

Extract the Oracle grid infrastructure software as the grid user:

[grid@racnode1 ~]$ mkdir -p /home/grid/software/oracle
[grid@racnode1 ~]$ mv linux.x64_11gR2_grid.zip /home/grid/software/oracle
[grid@racnode1 ~]$ cd /home/grid/software/oracle
[grid@racnode1 oracle]$ unzip linux.x64_11gR2_grid.zip

Extract the Oracle Database and Oracle Examples software as the oracle user:

[oracle@racnode1 ~]$ mkdir -p /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_1of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_database_2of2.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ mv linux.x64_11gR2_examples.zip /home/oracle/software/oracle
[oracle@racnode1 ~]$ cd /home/oracle/software/oracle
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_1of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_database_2of2.zip
[oracle@racnode1 oracle]$ unzip linux.x64_11gR2_examples.zip

Pre-installation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following checks on both Oracle RAC nodes in the cluster.

This section contains any remaining pre-installation tasks for Oracle grid infrastructure that have not already been discussed. Please note that manually running the Cluster Verification Utility (CVU) before running the Oracle installer is not required. The CVU is run automatically at the end of the Oracle grid infrastructure installation as part of the Configuration Assistants process.

Install the cvuqdisk Package for Linux

Install the operating system package cvuqdisk on both Oracle RAC nodes. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks, and you will receive the error message "Package cvuqdisk not installed" when the Cluster Verification Utility is run (either manually or at the end of the Oracle grid infrastructure installation). Use the cvuqdisk RPM for your hardware architecture (for example, x86_64 or i386).

The cvuqdisk RPM can be found on the Oracle grid infrastructure installation media in the rpm directory. For the purpose of this article, the Oracle grid infrastructure media was extracted to the /home/grid/software/oracle/grid directory on racnode1 as the grid user.

To install the cvuqdisk RPM, complete the following procedures:

1. Locate the cvuqdisk RPM package, which is in the directory rpm on the installation media on racnode1:

[racnode1]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm


2. Copy the cvuqdisk package from racnode1 to racnode2 as the grid user account (see the scp sketch below):

[racnode2]: /home/grid/software/oracle/grid/rpm/cvuqdisk-1.0.7-1.rpm
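For example, the copy could be performed with scp as the grid user (a sketch; it assumes the target directory already exists on racnode2, so create it first with mkdir -p if it does not):

[grid@racnode1 rpm]$ scp cvuqdisk-1.0.7-1.rpm racnode2:/home/grid/software/oracle/grid/rpm/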


3. Log in as root on both Oracle RAC nodes:

[grid@racnode1 rpm]$ su

[grid@racnode2 rpm]$ su


4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, which for this article is oinstall:

[root@racnode1 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP

[root@racnode2 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP


5. In the directory where you have saved the cvuqdisk RPM, use the following command to install the cvuqdisk package on both Oracle RAC nodes:

[root@racnode1 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1

[root@racnode2 rpm]# rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1


6. Verify the cvuqdisk utility was successfully installed:


[root@racnode1 rpm]# ls -l /usr/sbin/cvuqdisk
-rwsr-xr-x 1 root oinstall 9832 May 28 2009 /usr/sbin/cvuqdisk

[root@racnode2 rpm]# ls -l /usr/sbin/cvuqdisk
-rwsr-xr-x 1 root oinstall 9832 May 28 2009 /usr/sbin/cvuqdisk

Verify Oracle Clusterware Requirements with CVU - (optional)

As stated earlier in this section, running the Cluster Verification Utility before running the Oracle installer is not required. Starting with Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met and creates shell scripts called fixup scripts to finish incomplete system configuration steps. If OUI detects an incomplete task, it then generates fixup scripts (runfixup.sh). You can run the fixup script after you click the [Fix and Check Again] button during the Oracle grid infrastructure installation.

You also can have CVU generate fixup scripts before installation.

If you decide that you want to run the CVU, please keep in mind that it should be run as the grid user from the node you will be performing the Oracle installation from (racnode1). In addition, SSH connectivity with user equivalence must be configured for the grid user. If you intend to configure SSH connectivity using the OUI, the CVU utility will fail before having the opportunity to perform any of its critical checks and generate the fixup scripts:

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  racnode2                              failed
  racnode1                              failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes

Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Once all prerequisites for running the CVU utility have been met, you can now manually check your cluster configuration before installation and generate a fixup script to make operating system changes before starting the installation.

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose

Review the CVU report.

The only failure that should be found given the configuration described in this guide is:

Check: Membership of user "grid" in group "dba"
  Node Name         User Exists   Group Exists   User in Group   Comment
  ----------------  ------------  ------------   ------------    ----------------
  racnode2          yes           yes            no              failed
  racnode1          yes           yes            no              failed
Result: Membership check for user "grid" in group "dba" failed

The check fails because this guide creates role-allocated groups and users by using a Job Role Separation configuration which is not accurately recognized by the CVU. Creating a Job Role Separation configuration was described in the section Create Job Role Separation Operating System Privileges Groups, Users, and Directories. The CVU fails to recognize this type of configuration and assumes the grid user should always be part of the dba group. This failed check can be safely ignored. All other checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.

Verify Hardware and Operating System Setup with CVU

The next CVU check to run will verify the hardware and operating system setup. Again, run the following as the grid user account from racnode1 with user equivalence configured:

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runcluvfy.sh stage -post hwos -n racnode1,racnode2 -verbose

Review the CVU report. All checks performed by CVU should be reported as "passed" before continuing with the Oracle grid infrastructure installation.

Install Oracle Grid Infrastructure for a Cluster

Perform the following installation procedures from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle grid infrastructure software (Oracle Clusterware and Automatic Storage Management) will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer.

You are now ready to install the "grid" part of the environment: Oracle Clusterware and Automatic Storage Management. Complete the following steps to install Oracle grid infrastructure on your cluster.

At any time during installation, if you have a question about what you are being asked to do, click the Help button on the OUI page.

Typical and Advanced Installation

Starting with 11g release 2, Oracle now provides two options for installing the Oracle grid infrastructure software:

Typical Installation

The typical installation option is a simplified installation with a minimal number of manual configuration choices. This new option provides streamlined cluster installations, especially for those customers who are new to clustering. Typical installation defaults as many options as possible to those recommended as best practices.

Advanced Installation

The advanced installation option is an advanced procedure that requires a higher degree of system knowledge. It enables you to select particular configuration choices including additional storage and network choices, use of operating system group authentication for role-based administrative privileges, integration with IPMI, and more granularity in specifying Automatic Storage Management roles.

Given the fact that this guide makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles, we will be using the "Advanced Installation" option.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer, log in to racnode1 as the owner of the Oracle grid infrastructure software, which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.


Install Oracle Grid Infrastructure

Perform the following tasks as the grid user to install Oracle grid infrastructure:

[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[grid@racnode1 ~]$ export DISPLAY

[grid@racnode1 ~]$ cd /home/grid/software/oracle/grid
[grid@racnode1 grid]$ ./runInstaller

Screen Name / Response

Select Installation Option

Select "Install and Configure Grid Infrastructure for a Cluster"

Select Installation Type

Select "Advanced Installation"

Select Product Languages

Make the appropriate selection(s) for your environment.

Grid Plug and Play Information

Instructions on how to configure Grid Naming Service (GNS) are beyond the scope of this article. Un-check the option to "Configure GNS".

Cluster Name: racnode-cluster
SCAN Name: racnode-cluster-scan
SCAN Port: 1521

After clicking [Next], the OUI will attempt to validate the SCAN information:

Cluster Node Information

Use this screen to add the node racnode2 to the cluster and to configure SSH connectivity.

Click the [Add] button to add "racnode2.idevelopment.info" and its virtual IP address "racnode2-vip.idevelopment.info" according to the table below:

Public Node Name                Virtual Host Name
racnode1.idevelopment.info      racnode1-vip.idevelopment.info
racnode2.idevelopment.info      racnode2-vip.idevelopment.info

Next, click the [SSH Connectivity] button. Enter the "OS Password" for the grid user and click the [Setup] button. This will start the "SSH Connectivity" configuration process:


After the SSH configuration process successfully completes, acknowledge the dialog box.

Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Specify Network Interface Usage

Identify the network interface to be used for the "Public" and "Private" network. Make any changes necessary to match the values in the table below:

Interface Name   Subnet        Interface Type
eth0             192.168.1.0   Public
eth1             192.168.2.0   Private

Storage Option Information

Select "Automatic Storage Management (ASM)".

Create ASM Disk Group

Create an ASM Disk Group that will be used to store the Oracle Clusterware files according to the values in the table below:

Disk Group Name   Redundancy   Disk Path
CRS               External     ORCL:CRSVOL1

Specify ASM Password

For the purpose of this article, I chose to "Use same passwords for these accounts".

Failure Isolation Support

Configuring Intelligent Platform Management Interface (IPMI) is beyond the scope of this article. Select "Do not use Intelligent Platform Management Interface (IPMI)".

Privileged Operating System Groups

This article makes use of role-based administrative privileges and high granularity in specifying Automatic Storage Management roles using a Job Role Separation configuration.

Make any changes necessary to match the values in the table below:

OSDBA for ASM   OSOPER for ASM   OSASM
asmdba          asmoper          asmadmin

Specify Installation Location

Set the "Oracle Base" ($ORACLE_BASE) and "Software Location" ($ORACLE_HOME) for the Oracle grid infrastructure installation:

   Oracle Base: /u01/app/grid
   Software Location: /u01/app/11.2.0/grid

Create Inventory

Since this is the first install on the host, you will need to create the Oracle Inventory. Use the default values provided by the OUI:

   Inventory Directory: /u01/app/oraInventory
   oraInventory Group Name: oinstall


Prerequisite Checks

The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Clusterware and Automatic Storage Management software.

Starting with Oracle Clusterware 11g release 2 (11.2), if any check fails, the installer (OUI) will create shell script programs called fixup scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary

Click [Finish] to start the installation.

Setup

The installer performs the Oracle grid infrastructure setup process on both Oracle RAC nodes.

Execute Configuration scripts

After the installation completes, you will be prompted to run the /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh scripts.

Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the root user account.

Run the orainstRoot.sh script on both nodes in the RAC cluster:

[root@racnode1 ~]# /u01/app/oraInventory/orainstRoot.sh

[root@racnode2 ~]# /u01/app/oraInventory/orainstRoot.sh

Within the same new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from), stay logged in as the root user account. Run the root.sh script on both nodes in the RAC cluster one at a time starting with the node you are performing the install from:

[root@racnode1 ~]# /u01/app/11.2.0/grid/root.sh

[root@racnode2 ~]# /u01/app/11.2.0/grid/root.sh

The root.sh script can take several minutes to run. When running root.sh on the last node, you will receive output similar to the following which signifies a successful install:

...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Go back to OUI and acknowledge the "Execute Configuration scripts" dialog window.

Configure Oracle Grid Infrastructure for a Cluster

The installer will run configuration assistants for Oracle Net Services (NETCA), Automatic Storage Management (ASMCA), and Oracle Private Interconnect (VIPCA). The final step performed by OUI is to run the Cluster Verification Utility (CVU).

Finish

At the end of the installation, click the [Close] button to exit the OUI.

After installation is complete, do not manually remove or run cron jobs that remove /tmp/.oracle or /var/tmp/.oracle or its files while Oracle Clusterware is up. If you remove these files, then Oracle Clusterware could encounter intermittent hangs and you will encounter the error:

CRS-0184: Cannot communicate with the CRS daemon

Post-installation Tasks for Oracle Grid Infrastructure for a Cluster

Perform the following post-installation procedures on both Oracle RAC nodes in the cluster.

Verify Oracle Clusterware Installation

After the installation of Oracle grid infrastructure, you should run through several tests to verify the install was successful. Run the following commands on both nodes in the RAC cluster as the grid user.

Check CRS Status

[grid@racnode1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check Clusterware Resources

[grid@racnode1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....de1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de1.ons application    0/3    0/0    ONLINE    ONLINE    racnode1
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....de2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    racnode2
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1


The crs_stat command is deprecated in Oracle Clusterware 11g release 2 (11.2).

Check Cluster Nodes

[grid@racnode1 ~]$ olsnodes -n
racnode1        1
racnode2        2

Check Oracle TNS Listener Process on Both Nodes

[grid@racnode1 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN2
LISTENER_SCAN3
LISTENER

[grid@racnode2 ~]$ ps -ef | grep lsnr | grep -v 'grep' | grep -v 'ocfs' | awk '{print $9}'
LISTENER_SCAN1
LISTENER

Confirming Oracle ASM Function for Oracle Clusterware Files

If you installed the OCR and voting disk files on Oracle ASM, then use the following command syntax as the Grid Infrastructure installation owner to confirm that your Oracle ASM installation is running:

[grid@racnode1 ~]$ srvctl status asm -a
ASM is running on racnode1,racnode2
ASM is enabled.

Check Oracle Cluster Registry (OCR)

[grid@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2332
         Available space (kbytes) :     259788
         ID                       : 1559468462
         Device/File Name         :       +CRS
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

Check Voting Disk

[grid@racnode1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name      Disk group
--  -----    -----------------                ---------      ---------
 1. ONLINE   05592be032644f19bf2b50a929efe843 (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).

To manage Oracle ASM or Oracle Net 11g release 2 (11.2) or later installations, use the srvctl binary in the Oracle grid infrastructure home for a cluster (Grid home). After we install Oracle Real Application Clusters (the Oracle database software), you cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net, which reside in the Oracle grid infrastructure home.

Check SCAN Resolution

After installing Oracle grid infrastructure, verify the SCAN virtual IP. As shown in the output below, the SCAN address is resolved to three different IP addresses:

[grid@racnode1 ~]$ dig racnode-cluster-scan.idevelopment.info

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> racnode-cluster-scan.idevelopment.info
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37366
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;racnode-cluster-scan.idevelopment.info. IN A

;; ANSWER SECTION:
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.187
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.188
racnode-cluster-scan.idevelopment.info. 86400 IN A 192.168.1.189

;; AUTHORITY SECTION:
idevelopment.info.      86400   IN      NS      openfiler1.idevelopment.info.

;; ADDITIONAL SECTION:
openfiler1.idevelopment.info. 86400 IN A 192.168.1.195

;; Query time: 0 msec
;; SERVER: 192.168.1.195#53(192.168.1.195)
;; WHEN: Mon Nov 8 16:54:02 2010
;; MSG SIZE  rcvd: 145

Voting Disk Management

In prior releases, it was highly recommended to back up the voting disk using the dd command after installing the Oracle Clusterware software. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using dd is not supported and may result in the loss of the voting disk.


Backing up the voting disks in Oracle Clusterware 11g release 2 is no longer required. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added.

To learn more about managing the voting disks, Oracle Cluster Registry (OCR), and Oracle Local Registry (OLR), please refer to the Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2).
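Because the voting disk data is now maintained within OCR, reviewing the automatic OCR backups gives you an equivalent safety net. A sketch, run as root using the Grid home path from this article:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup

This lists the automatic (and any manual) OCR backups along with the node and directory where each copy resides.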

Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the root.sh file copy.

Back up the root.sh file on both Oracle RAC nodes as root:

[root@racnode1 ~]# cd /u01/app/11.2.0/grid
[root@racnode1 grid]# cp root.sh root.sh.racnode1.AFTER_INSTALL_NOV-08-2010

[root@racnode2 ~]# cd /u01/app/11.2.0/grid
[root@racnode2 grid]# cp root.sh root.sh.racnode2.AFTER_INSTALL_NOV-08-2010

Install Cluster Health Management Software - (Optional)

To address troubleshooting issues, Oracle recommends that you install Instantaneous Problem Detection OS Tool (IPD/OS) if you are using Linux kernel 2.6.9 or higher. This article was written using RHEL/CentOS 5.5 which uses the 2.6.18 kernel:

[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

If you are using a Linux kernel earlier than 2.6.9, then you would use OS Watcher and RACDDT, which are available through the My Oracle Support website (formerly Metalink).

The IPD/OS tool is designed to detect and analyze operating system and cluster resource-related degradation and failures. The tool can provide better explanations for many issues that occur in clusters where Oracle Clusterware, Oracle ASM and Oracle RAC are running, such as node evictions. It tracks the operating system resource consumption at each node, process, and device level continuously. It collects and analyzes cluster-wide data. In real time mode, when thresholds are reached, an alert is shown to the operator. For root cause analysis, historical data can be replayed to understand what was happening at the time of failure.

Instructions for installing and configuring the IPD/OS tool are beyond the scope of this article and will not be discussed. You can download the IPD/OS tool along with a detailed installation and configuration guide at the following URL:

http://www.oracle.com/technology/products/database/clustering/ipd_download_homepage.html

Create ASM Disk Groups for Data and Fast Recovery Area

Run the ASM Configuration Assistant (asmca) as the grid user from only one node in the cluster (racnode1) to create the additional ASM disk groups which will be used to create the clustered database.

During the installation of Oracle grid infrastructure, we configured one ASM disk group named +CRS which was used to store the Oracle clusterware files (OCR and voting disk).

In this section, we will create two additional ASM disk groups using the ASM Configuration Assistant (asmca). These new ASM disk groups will be used later in this guide when creating the clustered database.


The first ASM disk group will be named +RACDB_DATA and will be used to store all Oracle physical database files (data, online redo logs, control files, archived redo logs). A second ASM disk group will be created for the Fast Recovery Area named +FRA.

Verify Terminal Shell Environment

Before starting the ASM Configuration Assistant, log in to racnode1 as the owner of the Oracle grid infrastructure software, which for this article is grid. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create Additional ASM Disk Groups using ASMCA

Perform the following tasks as the grid user to create two additional ASM disk groups:

[grid@racnode1 ~]$ asmca &

Screen Name / Response

Disk Groups

From the "Disk Groups" tab, click the [Create] button.

Create Disk Group

The "Create Disk Group" dialog should show two of the ASMLib volumes we created earlier in this guide. If the ASMLib volumes we created earlier in this article do not show up in the "Select Member Disks" window as eligible (ORCL:DATAVOL1 and ORCL:FRAVOL1), then click on the [Change Disk Discovery Path] button and input "ORCL:*".

When creating the "Data" ASM disk group, use "RACDB_DATA" for the "Disk Group Name". In the "Redundancy" section, choose "External (None)". Finally, check the ASMLib volume "ORCL:DATAVOL1" in the "Select Member Disks" section.

After verifying all values in this dialog are correct, click the [OK] button.

Disk Groups

After creating the first ASM disk group, you will be returned to the initial dialog. Click the [Create] button again to create the second ASM disk group.

Create Disk Group

The "Create Disk Group" dialog should now show the final remaining ASMLib volume.

When creating the "Fast Recovery Area" disk group, use "FRA" for the "Disk Group Name". In the "Redundancy" section, choose "External (None)". Finally, check the ASMLib volume "ORCL:FRAVOL1" in the "Select Member Disks" section.

After verifying all values in this dialog are correct, click the [OK] button.

Disk Groups

Exit the ASM Configuration Assistant by clicking the [Exit] button.

Install Oracle Database 11g with Oracle Real Application Clusters

Perform the Oracle Database software installation from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle Database software will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Now that the grid infrastructure software is functional, you can install the Oracle Database software on the one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to all the other nodes in the cluster during the installation process.


For the purpose of this guide, we will forgo the "Create Database" option when installing the Oracle Database software. The clustered database will be created later in this guide using the Database Configuration Assistant (DBCA) after all installs have been completed.

Verify Terminal Shell Environment

Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Software

Perform the following tasks as the oracle user to install the Oracle Database software:

[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ DISPLAY=<your local workstation>:0.0
[oracle@racnode1 ~]$ export DISPLAY

[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/database
[oracle@racnode1 database]$ ./runInstaller

Screen Name / Response

Configure Security Updates

For the purpose of this article, un-check the security updates check-box and click the [Next] button to continue. Acknowledge the warning dialog indicating you have not provided an email address by clicking the [Yes] button.

Installation Option

Select "Install database software only".

Grid Options

Select the "Real Application Clusters database installation" radio button (default) and verify that both Oracle RAC nodes are checked in the "Node Name" window.

Next, click the [SSH Connectivity] button. Enter the "OS Password" for the oracle user and click the [Setup] button. This will start the "SSH Connectivity" configuration process:

After the SSH configuration process successfully completes, acknowledge the dialog box.

Finish off this screen by clicking the [Test] button to verify passwordless SSH connectivity.

Product Languages

Make the appropriate selection(s) for your environment.

Database Edition

Select "Enterprise Edition".


Installation Location

Specify the Oracle base and Software location (Oracle_home) as follows:

   Oracle Base: /u01/app/oracle
   Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Operating System Groups

Select the OS groups to be used for the SYSDBA and SYSOPER privileges:

   Database Administrator (OSDBA) Group: dba
   Database Operator (OSOPER) Group: oper

Prerequisite Checks

The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database software.

Starting with 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs called fixup scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary

Click [Finish] to start the installation.

Install Product

The installer performs the Oracle Database software installation process on both Oracle RAC nodes.

Execute Configuration scripts

After the installation completes, you will be prompted to run the /u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on both Oracle RAC nodes.

Open a new console window on both Oracle RAC nodes in the cluster (starting with the node you are performing the install from) as the root user account.

Run the root.sh script on all nodes in the RAC cluster:

[root@racnode1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

[root@racnode2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Go back to OUI and acknowledge the "Execute Configuration scripts" dialogwindow.

Finish

At the end of the installation, click the [Close] button to exit the OUI.

Install Oracle Database 11g Examples (formerly Companion)

Perform the Oracle Database 11g Examples software installation from only one of the Oracle RAC nodes in the cluster (racnode1). The Oracle Database Examples software will be installed to both of the Oracle RAC nodes in the cluster by the Oracle Universal Installer using SSH.

Now that the Oracle Database 11g software is installed, you have the option to install the Oracle Database 11g Examples.

Like the Oracle Database software install, the Examples software is only installed from one node in your cluster (racnode1) as the oracle user. OUI copies the binary files from this node to all the other nodes in the cluster during the installation process.

Verify Terminal Shell Environment


Before starting the Oracle Universal Installer (OUI), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Install Oracle Database 11g Release 2 Examples

Perform the following tasks as the oracle user to install the Oracle Database Examples:

[oracle@racnode1 ~]$ cd /home/oracle/software/oracle/examples
[oracle@racnode1 examples]$ ./runInstaller

Screen Name / Response

Installation Location

Specify the Oracle base and Software location (Oracle_home) as follows:

   Oracle Base: /u01/app/oracle
   Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

Prerequisite Checks

The installer will run through a series of checks to determine if both Oracle RAC nodes meet the minimum requirements for installing and configuring the Oracle Database Examples software.

Starting with 11g release 2 (11.2), if any checks fail, the installer (OUI) will create shell script programs called fixup scripts to resolve many incomplete system configuration requirements. If OUI detects an incomplete task that is marked "fixable", then you can easily fix the issue by generating the fixup script by clicking the [Fix & Check Again] button.

The fixup script is generated during installation. You will be prompted to run the script as root in a separate terminal session. When you run the script, it raises kernel values to required minimums, if necessary, and completes other operating system configuration tasks.

If all prerequisite checks pass (as was the case for my install), the OUI continues to the Summary screen.

Summary

Click [Finish] to start the installation.

Install Product

The installer performs the Oracle Database Examples software installation process on both Oracle RAC nodes.

Finish

At the end of the installation, click the [Close] button to exit the OUI.

Create the Oracle Cluster Database

The database creation process should only be performed from one of the Oracle RAC nodes in the cluster (racnode1).

Use the Oracle Database Configuration Assistant (DBCA) to create the clustered database.

Before executing the DBCA, make certain that the $ORACLE_HOME and $PATH are set appropriately for the $ORACLE_BASE/product/11.2.0/dbhome_1 environment. Setting environment variables in the login script for the oracle user account was covered in the section "Create Login Script for the oracle User Account".
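A quick way to confirm the environment before launching DBCA is to check that the oracle user's shell resolves both the variable and the dbca binary from the correct home. A sketch, assuming the software location used earlier in this guide:

[oracle@racnode1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1

[oracle@racnode1 ~]$ which dbca
/u01/app/oracle/product/11.2.0/dbhome_1/bin/dbca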

You should also verify that all services we have installed up to this point (Oracle TNS listener, Oracle Clusterware processes, etc.) are running on both Oracle RAC nodes before attempting to start the clustered database creation process:

[oracle@racnode1 ~]$ su - grid -c "crs_stat -t -v"
Password: *********
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora....DATA.dg ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....de1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de1.ons application    0/3    0/0    ONLINE    ONLINE    racnode1
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....de2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    racnode2
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1

[oracle@racnode2 ~]$ su - grid -c "crs_stat -t -v"
Password: *********
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    racnode1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    racnode1
ora....DATA.dg ora....up.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    racnode1
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    racnode1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    racnode1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode1
ora....de1.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de1.ons application    0/3    0/0    ONLINE    ONLINE    racnode1
ora....de1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    0/5    0/0    ONLINE    ONLINE    racnode2
ora....de2.gsd application    0/5    0/0    OFFLINE   OFFLINE
ora....de2.ons application    0/3    0/0    ONLINE    ONLINE    racnode2
ora....de2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    racnode1

Verify Terminal Shell Environment

Before starting the Database Configuration Assistant (DBCA), log in to racnode1 as the owner of the Oracle Database software, which for this article is oracle. Next, if you are using a remote client to connect to the Oracle RAC node performing the installation (SSH or Telnet to racnode1 from a workstation configured with an X Server), verify your X11 display server settings which were described in the section Logging In to a Remote System Using X Terminal.

Create the Clustered Database

To start the database creation process, run the following as the oracle user:

[oracle@racnode1 ~]$ dbca &

Screen Name / Response

Welcome Screen

Select Oracle Real Application Clusters database.

Operations

Select Create a Database.

Database Templates

Select Custom Database.

Database Identification

Cluster database configuration.

   Configuration Type: Admin-Managed

Database naming.

   Global Database Name: racdb.idevelopment.info
   SID Prefix: racdb

Note: I used idevelopment.info for the database domain. You may use any database domain. Keep in mind that this domain does not have to be a valid DNS domain.

Node Selection.

   Click the [Select All] button to select all servers: racnode1 and racnode2.

Management Options

Leave the default options here, which is to Configure Enterprise Manager / Configure Database Control for local management.

Database Credentials

I selected to Use the Same Administrative Password for All Accounts. Enter the password (twice) and make sure the password does not start with a digit.

Database File Locations

Specify storage type and locations for database files.

   Storage Type: Automatic Storage Management (ASM)
   Storage Locations: Use Oracle-Managed Files
   Database Area: +RACDB_DATA

Specify ASMSNMP Password

Specify the ASMSNMP password for the ASM instance.

Recovery Configuration

Check the option for Specify Fast Recovery Area. For the Fast Recovery Area, click the [Browse] button and select the disk group name +FRA. My disk group has a size of about 33GB. When defining the Fast Recovery Area size, use the entire volume minus 10% for overhead (33GB less 10% is roughly 30GB). I used a Fast Recovery Area Size of 30 GB (30413 MB).

Database Content

I left all of the Database Components (and destination tablespaces) set to their default value although it is perfectly OK to select the Sample Schemas. This option is available since we installed the Oracle Database 11g Examples.


Initialization Parameters

Change any parameters for your environment. I left them all at their default settings.

Database Storage

Change any parameters for your environment. I left them all at their default settings.

Creation Options

Keep the default option Create Database selected. I also always select to Generate Database Creation Scripts. Click Finish to start the database creation process. After acknowledging the database creation report and script generation dialog, the database creation will start.

Click OK on the "Summary" screen.

End of Database Creation

At the end of the database creation, exit from the DBCA.

When the DBCA has completed, you will have a fully functional Oracle RAC 11g release 2 cluster running!

Verify Clustered Database is Open

[oracle@racnode1 ~]$ su - grid -c "crsctl status resource -w \"TYPE co 'ora'\" -t"
Password: *********
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.FRA.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.RACDB_DATA.dg
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.asm
               ONLINE  ONLINE       racnode1                 Started
               ONLINE  ONLINE       racnode2                 Started
ora.eons
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.gsd
               OFFLINE OFFLINE      racnode1
               OFFLINE OFFLINE      racnode2
ora.net1.network
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
ora.ons
               ONLINE  ONLINE       racnode1
               ONLINE  ONLINE       racnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode1
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  ONLINE       racnode1                 Open
      2        ONLINE  ONLINE       racnode2                 Open
ora.racnode1.vip
      1        ONLINE  ONLINE       racnode1
ora.racnode2.vip
      1        ONLINE  ONLINE       racnode2
ora.scan1.vip
      1        ONLINE  ONLINE       racnode2
ora.scan2.vip
      1        ONLINE  ONLINE       racnode1
ora.scan3.vip
      1        ONLINE  ONLINE       racnode1

Oracle Enterprise Manager

If you configured Oracle Enterprise Manager (Database Control), it can be used to view the database configuration and current status of the database.

The URL for this example is: https://racnode1.idevelopment.info:1158/em

[oracle@racnode1 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://racnode1.idevelopment.info:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log


Figure 18: Oracle Enterprise Manager - (Database Console)

Post Database Creation Tasks - (Optional)

This section offers several optional tasks that can be performed on your new Oracle 11g environment in order to enhance availability as well as database management.

Re-compile Invalid Objects

Run the utlrp.sql script to recompile all invalid PL/SQL packages now instead of when the packages are accessed for the first time. This step is optional but recommended.

[oracle@racnode1 ~]$ sqlplus / as sysdba

SQL> @?/rdbms/admin/utlrp.sql
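After utlrp.sql completes, it is easy to confirm that nothing was left invalid. The following query is not part of the original procedure, just a quick sanity check run as a DBA user; no rows returned means all objects compiled successfully:

SQL> SELECT owner, object_type, COUNT(*)
  2  FROM dba_objects
  3  WHERE status = 'INVALID'
  4  GROUP BY owner, object_type;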

Enabling Archive Logs in a RAC Environment

Whether a single instance or clustered database, Oracle tracks and logs all changes to database blocks in online redolog files. In an Oracle RAC environment, each instance will have its own set of online redolog files known as a thread. Each Oracle instance will use its group of online redologs in a circular manner. Once an online redolog fills, Oracle moves to the next one. If the database is in "Archive Log Mode", Oracle will make a copy of the online redo log before it gets reused. A thread must contain at least two online redologs (or online redolog groups). The same holds true for a single instance configuration. The single instance must contain at least two online redologs (or online redolog groups).

The size of an online redolog file is completely independent of another instance's redolog size. Although in most configurations the size is the same, it may be different depending on the workload and backup / recovery considerations for each node. It is also worth mentioning that each instance has exclusive write access to its own online redolog files. In a correctly configured RAC environment, however, each instance can read another instance's current online redolog file to perform instance recovery if that instance was terminated abnormally. It is therefore a requirement that online redo logs be located on a shared storage device (just like the database files).
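If you want to see how the redo threads map to instances in your own cluster, a simple query against v$log shows each group, its thread, and its size. This is only an illustration; the group numbers, sizes, and member counts will differ depending on your environment:

SQL> SELECT thread#, group#, bytes/1024/1024 AS mb, members, status
  2  FROM v$log
  3  ORDER BY thread#, group#;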

As already mentioned, Oracle writes to its online redolog files in a circular manner. When the current online redolog fills, Oracle will switch to the next one. To facilitate media recovery, Oracle allows the DBA to put the database into "Archive Log Mode" which makes a copy of the online redolog after it fills (and before it gets reused). This is a process known as archiving.

The Database Configuration Assistant (DBCA) allows users to configure a new database to be in archive log mode, however most DBA's opt to bypass this option during initial database creation. In cases like this where the database is in no archive log mode, it is a simple task to put the database into archive log mode. Note however that this will require a short database outage. From one of the nodes in the Oracle RAC configuration, use the following tasks to put a RAC enabled database into archive log mode. For the purpose of this article, I will use the node racnode1 which runs the racdb1 instance:

1. Log in to one of the nodes (i.e. racnode1) as oracle and disable the cluster instance parameter by setting cluster_database to FALSE from the current instance:

[oracle@racnode1 ~]$ sqlplus / as sysdba

SQL> alter system set cluster_database=false scope=spfile sid='racdb1';

System altered.


2. Shut down all instances accessing the clustered database as the oracle user:

[oracle@racnode1 ~]$ srvctl stop database -d racdb


3. Using the local instance, MOUNT the database:

[oracle@racnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Nov 21 19:26:47 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1653518336 bytes
Fixed Size                  2213896 bytes
Variable Size            1073743864 bytes
Database Buffers          570425344 bytes
Redo Buffers                7135232 bytes


4. Enable archiving:


SQL> alter database archivelog;

Database altered.

5. Re-enable support for clustering by modifying the instance parameter cluster_database to TRUE from the current instance:

SQL> alter system set cluster_database=true scope=spfile sid='racdb1';

System altered.


6. Shut down the local instance:

SQL> shutdown immediate

ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.


7. Bring all instances back up as the oracle account using srvctl:

[oracle@racnode1 ~]$ srvctl start database -d racdb


8. Log in to the local instance and verify Archive Log Mode is enabled:

[oracle@racnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 8 20:07:48 2010

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     68
Next log sequence to archive   69
Current log sequence           69


After enabling Archive Log Mode, each instance in the RAC configuration can automatically archive redologs!
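Once the database has been running in archive log mode for a while, you can verify that both instances are actually generating archived redo by checking the highest archived sequence for each thread. This query is an optional extra (not part of the steps above) and can be run as a DBA user from any instance:

SQL> SELECT thread#, MAX(sequence#) AS last_archived_seq
  2  FROM gv$archived_log
  3  GROUP BY thread#
  4  ORDER BY thread#;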

Download and Install Custom Oracle Database Scripts

DBA's rely on Oracle's data dictionary views and dynamic performance views in order to support and better manage their databases. Although these views provide a simple and easy mechanism to query critical information regarding the database, it helps to have a collection of accurate and readily available SQL scripts to query these views.

In this section you will download and install a collection of Oracle DBA scripts that can be used to manage many aspects of your database including space management, performance, backups, security, and session management. The DBA Scripts Archive for Oracle can be downloaded using the following link: http://www.idevelopment.info/data/Oracle/DBA_scripts/dba_scripts_archive_Oracle.zip. As the oracle user account, download the dba_scripts_archive_Oracle.zip archive to the $ORACLE_BASE directory of each node in the cluster. For the purpose of this example, the dba_scripts_archive_Oracle.zip archive will be copied to /u01/app/oracle. Next, unzip the archive file to the $ORACLE_BASE directory.

For example, perform the following on both nodes in the Oracle RAC cluster as the oracle user account:

[oracle@racnode1 ~]$ mv dba_scripts_archive_Oracle.zip /u01/app/oracle
[oracle@racnode1 ~]$ cd /u01/app/oracle
[oracle@racnode1 ~]$ unzip dba_scripts_archive_Oracle.zip

The final step is to verify (or set) the appropriate environment variable for the current UNIX shell to ensure the Oracle SQL scripts can be run from within SQL*Plus while in any directory. For UNIX, verify the following environment variable is set and included in your login shell script:

ORACLE_PATH=$ORACLE_BASE/dba_scripts/common/sql:.:$ORACLE_HOME/rdbms/admin
export ORACLE_PATH

The ORACLE_PATH environment variable should already be set in the .bash_profile login script that was created in the section Create Login Script for the oracle User Account.

Now that the DBA Scripts Archive for Oracle has been unzipped and the UNIX environment variable ($ORACLE_PATH) has been set to the appropriate directory, you should now be able to run any of the SQL scripts in the $ORACLE_BASE/dba_scripts/common/sql directory while logged into SQL*Plus. For example, to query tablespace information while logged into the Oracle database as a DBA user:

SQL> @dba_tablespaces

Status  Tablespace Name  TS Type    Ext. Mgt.  Seg. Mgt.  Tablespace Size  Used (in bytes)  Pct. Used
------- ---------------- ---------- ---------- --------- ---------------- ---------------- ---------
ONLINE  SYSAUX           PERMANENT  LOCAL      AUTO           629,145,600      511,967,232        81
ONLINE  UNDOTBS1         UNDO       LOCAL      MANUAL       1,059,061,760      948,043,776        90
ONLINE  USERS            PERMANENT  LOCAL      AUTO             5,242,880        1,048,576        20
ONLINE  SYSTEM           PERMANENT  LOCAL      MANUAL         734,003,200      703,135,744        96
ONLINE  EXAMPLE          PERMANENT  LOCAL      AUTO           157,286,400       85,131,264        54
ONLINE  UNDOTBS2         UNDO       LOCAL      MANUAL         209,715,200       20,840,448        10
ONLINE  TEMP             TEMPORARY  LOCAL      MANUAL          75,497,472       66,060,288        88
                                                          ---------------- ---------------- ---------
avg                                                                                                63
sum                                                          2,869,952,512    2,336,227,328

7 rows selected.

To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script:

SQL> @help.sql

========================================
Automatic Shared Memory Management
========================================
asmm_components.sql

========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
asm_drop_files.sql
asm_files.sql
asm_files2.sql
asm_templates.sql

< --- SNIP --- >

perf_top_sql_by_buffer_gets.sql
perf_top_sql_by_disk_reads.sql

========================================
Workspace Manager
========================================
wm_create_workspace.sql
wm_disable_versioning.sql
wm_enable_versioning.sql
wm_freeze_workspace.sql
wm_get_workspace.sql
wm_goto_workspace.sql
wm_merge_workspace.sql
wm_refresh_workspace.sql
wm_remove_workspace.sql
wm_unfreeze_workspace.sql
wm_workspaces.sql

Create / Alter Tablespaces

When creating the clustered database, we left all tablespaces set to their default size. If you are using a large drive for the shared storage, you may want to make a sizable testing database.

Below are several optional SQL commands for modifying and creating all tablespaces for the test database. Please keep in mind that the database file names (OMF files) used in this example may differ from what the Oracle Database Configuration Assistant (DBCA) creates for your environment. When working through this section, substitute the data file names that were created in your environment where appropriate. The following query can be used to determine the file names for your environment:

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  union
  4  select tablespace_name, file_name
  5  from dba_temp_files;

TABLESPACE_NAME FILE_NAME
--------------- --------------------------------------------------
EXAMPLE         +RACDB_DATA/racdb/datafile/example.263.703530435
SYSAUX          +RACDB_DATA/racdb/datafile/sysaux.260.703530411
SYSTEM          +RACDB_DATA/racdb/datafile/system.259.703530397
TEMP            +RACDB_DATA/racdb/tempfile/temp.262.703530429
UNDOTBS1        +RACDB_DATA/racdb/datafile/undotbs1.261.703530423
UNDOTBS2        +RACDB_DATA/racdb/datafile/undotbs2.264.703530441
USERS           +RACDB_DATA/racdb/datafile/users.265.703530447

7 rows selected.

[oracle@racnode1 ~]$ sqlplus "/ as sysdba"

SQL> create user scott identified by tiger default tablespace users;

User created.

SQL> grant dba, resource, connect to scott;

Grant succeeded.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/users.265.703530447' resize 1024m;

Database altered.

SQL> alter tablespace users add datafile '+RACDB_DATA' size 1024m autoextend off;

Tablespace altered.

SQL> create tablespace indx datafile '+RACDB_DATA' size 1024m
  2  autoextend on next 100m maxsize unlimited
  3  extent management local autoallocate
  4  segment space management auto;

Tablespace created.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/system.259.703530397' resize 1024m;

Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/sysaux.260.703530411' resize 1024m;

Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs1.261.703530423' resize 1024m;

Database altered.

SQL> alter database datafile '+RACDB_DATA/racdb/datafile/undotbs2.264.703530441' resize 1024m;

Database altered.

SQL> alter database tempfile '+RACDB_DATA/racdb/tempfile/temp.262.703530429' resize 1024m;

Database altered.

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status  Tablespace Name  TS Type    Ext. Mgt.  Seg. Mgt.  Tablespace Size  Used (in bytes)  Pct. Used
------- ---------------- ---------- ---------- --------- ---------------- ---------------- ---------
ONLINE  SYSAUX           PERMANENT  LOCAL      AUTO         1,073,741,824      512,098,304        48
ONLINE  UNDOTBS1         UNDO       LOCAL      MANUAL       1,073,741,824      948,043,776        88
ONLINE  USERS            PERMANENT  LOCAL      AUTO         2,147,483,648        2,097,152         0
ONLINE  SYSTEM           PERMANENT  LOCAL      MANUAL       1,073,741,824      703,201,280        65
ONLINE  EXAMPLE          PERMANENT  LOCAL      AUTO           157,286,400       85,131,264        54
ONLINE  INDX             PERMANENT  LOCAL      AUTO         1,073,741,824        1,048,576         0
ONLINE  UNDOTBS2         UNDO       LOCAL      MANUAL       1,073,741,824       20,840,448         2
ONLINE  TEMP             TEMPORARY  LOCAL      MANUAL       1,073,741,824       66,060,288         6
                                                          ---------------- ---------------- ---------
avg                                                                                                33
sum                                                          8,747,220,992    2,338,521,088

8 rows selected.

Verify Oracle Grid Infrastructure and Database Configuration

The following Oracle Clusterware and Oracle RAC verification checks can be performed on any of the Oracle RAC nodes in the cluster. For the purpose of this article, I will only be performing checks from racnode1 as the oracle OS user.

Most of the checks described in this section use the Server Control Utility (SRVCTL) and can be run as either the oracle or grid OS user. There are five node-level tasks defined for SRVCTL:

Adding and deleting node-level applications
Setting and un-setting the environment for node-level applications
Administering node applications
Administering ASM instances
Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes)

Oracle also provides the Oracle Clusterware Control (CRSCTL) utility. CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle Clusterware APIs for Oracle Clusterware objects.

Oracle Clusterware 11g release 2 (11.2) introduces cluster-aware commands with which you can perform check, start, and stop operations on the cluster. You can run these commands from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.

You can use CRSCTL commands to perform several operations on Oracle Clusterware, such as:

Starting and stopping Oracle Clusterware resources
Enabling and disabling Oracle Clusterware daemons
Checking the health of the cluster
Managing resources that represent third-party applications
Integrating Intelligent Platform Management Interface (IPMI) with Oracle Clusterware to provide failure isolation support and to ensure cluster integrity
Debugging Oracle Clusterware components

For the purpose of this article (and this section), we will only make use of the "Checking the health of the cluster" operation, which uses the Clusterized (Cluster Aware) Command:

crsctl check cluster

Many subprograms and commands were deprecated in Oracle Clusterware 11g release 2 (11.2):

crs_stat

crs_register

crs_unregister

crs_start

crs_stop

crs_getperm

crs_profile

crs_relocate

crs_setperm

crsctl check crsd

crsctl check cssd

crsctl check evmd

crsctl debug log


crsctl set css votedisk

crsctl start resources

crsctl stop resources
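For example, the resource listing formerly produced by the deprecated crs_stat command is now obtained with the cluster-aware resource status command used earlier in this article:

[grid@racnode1 ~]$ crsctl status resource -t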

Check the Health of the Cluster - (Clusterized Command)

Run as the grid user.

[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

All Oracle Instances - (Database Status)

[oracle@racnode1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2

Single Oracle Instance - (Status of Specific Instance)

[oracle@racnode1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node racnode1

Node Applications - (Status)

[oracle@racnode1 ~]$ srvctl status nodeapps
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2
Network is enabled
Network is running on node: racnode1
Network is running on node: racnode2
GSD is disabled
GSD is not running on node: racnode1
GSD is not running on node: racnode2
ONS is enabled
ONS daemon is running on node: racnode1
ONS daemon is running on node: racnode2
eONS is enabled
eONS daemon is running on node: racnode1
eONS daemon is running on node: racnode2

Node Applications - (Configuration)

[oracle@racnode1 ~]$ srvctl config nodeapps
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 24057, multicast IP address 234.194.43.168, listening port 2016

List all Configured Databases

[oracle@racnode1 ~]$ srvctl config database
racdb

Database - (Configuration)

[oracle@racnode1 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: idevelopment.info
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services:
Database is enabled
Database is administrator managed

ASM - (Status)

[oracle@racnode1 ~]$ srvctl status asm
ASM is running on racnode1,racnode2

ASM - (Configuration)

$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

TNS listener - (Status)

[oracle@racnode1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racnode1,racnode2


TNS listener - (Configuration)

[oracle@racnode1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

SCAN - (Status)

[oracle@racnode1 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node racnode2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node racnode1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node racnode1

SCAN - (Configuration)

[oracle@racnode1 ~]$ srvctl config scan
SCAN name: racnode-cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.188
SCAN VIP name: scan2, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.189
SCAN VIP name: scan3, IP: /racnode-cluster-scan.idevelopment.info/192.168.1.187

VIP - (Status of Specific Node)

[oracle@racnode1 ~]$ srvctl status vip -n racnode1
VIP racnode1-vip is enabled
VIP racnode1-vip is running on node: racnode1

[oracle@racnode1 ~]$ srvctl status vip -n racnode2
VIP racnode2-vip is enabled
VIP racnode2-vip is running on node: racnode2

VIP - (Configuration of Specific Node)

[oracle@racnode1 ~]$ srvctl config vip -n racnode1
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0

[oracle@racnode1 ~]$ srvctl config vip -n racnode2
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0

Configuration for Node Applications - (VIP, GSD, ONS, Listener)


[oracle@racnode1 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:racnode1
VIP exists.: /racnode1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:racnode2
VIP exists.: /racnode2-vip/192.168.1.252/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) racnode2,racnode1
End points: TCP:1521

Verifying Clock Synchronization across the Cluster Nodes

[oracle@racnode1 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  racnode1                              passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  racnode1                              Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  racnode1      0.0                       passed

Time offset is within the specified limits on the following set of nodes: "[racnode1]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.

All running instances in the cluster - (SQL)

SELECT
    inst_id
  , instance_number inst_no
  , instance_name inst_name
  , parallel
  , status
  , database_status db_status
  , active_state state
  , host_name host
FROM gv$instance
ORDER BY inst_id;

 INST_ID  INST_NO INST_NAME  PAR STATUS  DB_STATUS    STATE     HOST
-------- -------- ---------- --- ------- ------------ --------- --------
       1        1 racdb1     YES OPEN    ACTIVE       NORMAL    racnode1
       2        2 racdb2     YES OPEN    ACTIVE       NORMAL    racnode2

All database files and the ASM disk group they reside in - (SQL)

select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+FRA/racdb/controlfile/current.256.703530389
+FRA/racdb/onlinelog/group_1.257.703530391
+FRA/racdb/onlinelog/group_2.258.703530393
+FRA/racdb/onlinelog/group_3.259.703533497
+FRA/racdb/onlinelog/group_4.260.703533499
+RACDB_DATA/racdb/controlfile/current.256.703530389
+RACDB_DATA/racdb/datafile/example.263.703530435
+RACDB_DATA/racdb/datafile/indx.270.703542993
+RACDB_DATA/racdb/datafile/sysaux.260.703530411
+RACDB_DATA/racdb/datafile/system.259.703530397
+RACDB_DATA/racdb/datafile/undotbs1.261.703530423
+RACDB_DATA/racdb/datafile/undotbs2.264.703530441
+RACDB_DATA/racdb/datafile/users.265.703530447
+RACDB_DATA/racdb/datafile/users.269.703542943
+RACDB_DATA/racdb/onlinelog/group_1.257.703530391
+RACDB_DATA/racdb/onlinelog/group_2.258.703530393
+RACDB_DATA/racdb/onlinelog/group_3.266.703533497
+RACDB_DATA/racdb/onlinelog/group_4.267.703533499
+RACDB_DATA/racdb/tempfile/temp.262.703530429

19 rows selected.
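While connected to the database, it can also be useful to check how much space remains in each ASM disk group. The following is a simple illustrative query against v$asm_diskgroup (the sizes reported will obviously depend on your volumes):

SELECT name, total_mb, free_mb,
       ROUND((1 - free_mb/total_mb) * 100) AS pct_used
FROM v$asm_diskgroup;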

ASM Disk Volumes - (SQL)

SELECT path
FROM   v$asm_disk;

PATH
----------------------------------
ORCL:CRSVOL1
ORCL:DATAVOL1
ORCL:FRAVOL1

Starting / Stopping the Cluster

At this point, everything has been installed and configured for Oracle RAC 11g release 2. Oracle grid infrastructure was installed by the grid user while the Oracle RAC software was installed by oracle. We also have a fully functional clustered database running named racdb.

After all of that hard work, you may ask, "OK, so how do I start and stop services?". If you have followed the instructions in this guide, all services, including Oracle Clusterware, ASM, network, SCAN, VIP, the Oracle Database, and so on, should start automatically on each reboot of the Linux nodes.
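If you want to confirm that the Oracle High Availability Services stack is in fact configured for autostart, crsctl can report it. Run the following as root from the Grid Infrastructure home; on this configuration I would expect it to report that autostart is enabled:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl config crs
CRS-4622: Oracle High Availability Services autostart is enabled.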

There are times, however, when you might want to take down the Oracle services on a node for maintenance purposes and restart the Oracle Clusterware stack at a later time. Or you may find that Enterprise Manager is not running and need to start it. This section provides the commands necessary to stop and start the Oracle Clusterware stack on a local server (racnode1).

The following stop/start actions need to be performed as root.

Stopping the Oracle Clusterware Stack on the Local Server

Use the "crsctl stop cluster" command on racnode1 to stop the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'racnode1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.racnode1.vip' on 'racnode1'
CRS-2677: Stop of 'ora.racnode1.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.racnode1.vip' on 'racnode2'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'racnode1'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'racnode1'
CRS-2677: Stop of 'ora.scan3.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'racnode2'
CRS-2677: Stop of 'ora.scan2.vip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'racnode2'
CRS-2676: Start of 'ora.racnode1.vip' on 'racnode2' succeeded          <-- Notice racnode1 VIP moved to racnode2
CRS-2676: Start of 'ora.scan3.vip' on 'racnode2' succeeded             <-- Notice SCAN3 VIP moved to racnode2
CRS-2676: Start of 'ora.scan2.vip' on 'racnode2' succeeded             <-- Notice SCAN2 VIP moved to racnode2
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'racnode2'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'racnode2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'racnode2' succeeded   <-- Notice LISTENER_SCAN3 moved to racnode2
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'racnode2' succeeded   <-- Notice LISTENER_SCAN2 moved to racnode2
CRS-2677: Stop of 'ora.CRS.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'racnode1'
CRS-2673: Attempting to stop 'ora.RACDB_DATA.dg' on 'racnode1'
CRS-2677: Stop of 'ora.FRA.dg' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.RACDB_DATA.dg' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.eons' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ons' on 'racnode1'
CRS-2677: Stop of 'ora.ons' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'racnode1'
CRS-2677: Stop of 'ora.net1.network' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.eons' on 'racnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode1' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'racnode1'
CRS-2677: Stop of 'ora.diskmon' on 'racnode1' succeeded

If any resources that Oracle Clusterware manages are still running after you run the "crsctl stop cluster" command, then the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.
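For example, to unconditionally stop the stack on the local node, simply add the -f option to the same command:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f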

Also note that you can stop the Oracle Clusterware stack on all servers in the cluster by specifying -all. The following will bring down the Oracle Clusterware stack on both racnode1 and racnode2:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

Starting the Oracle Clusterware Stack on the Local Server

Use the "crsctl start cluster" command on racnode1 to start the Oracle Clusterware stack:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

You can choose to start the Oracle Clusterware stack on all servers in the cluster by specifying -all:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more named servers in the cluster by listing the servers separated by a space:

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n racnode1 racnode2

Start/Stop All Instances with SRVCTL

Finally, you can start/stop all instances and associated services using the following:

[oracle@racnode1 ~]$ srvctl stop database -d racdb

[oracle@racnode1 ~]$ srvctl start database -d racdb
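SRVCTL can also operate on a single instance at a time, which is useful for rolling maintenance. For example, to stop and start only the racdb1 instance used in this configuration:

[oracle@racnode1 ~]$ srvctl stop instance -d racdb -i racdb1

[oracle@racnode1 ~]$ srvctl start instance -d racdb -i racdb1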

Troubleshooting

This section contains a short list of common errors (and solutions) that can be encountered during the Oracle RAC installation described in this article.

Configuring SCAN without DNS

Defining the SCAN in only the hosts file (/etc/hosts) and not in either Grid Naming Service (GNS) or DNS is an invalid configuration and will cause the Cluster Verification Utility to fail during the Oracle grid infrastructure installation:

Figure 19: Oracle Grid Infrastructure / CVU Error - (Configuring SCAN without DNS)


INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "racnode-cluster-scan"...
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 216.24.138.153) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 192.168.1.187) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racnode-cluster-scan"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error reported by the CVU, it is OK to ignore this check and continue by clicking the [Next] button in OUI and move forward with the Oracle grid infrastructure installation. This is documented in Doc ID: 887471.1 on the My Oracle Support web site.

If on the other hand you want the CVU to complete successfully while still only defining the SCAN in the hosts file, simply modify the nslookup utility as root on both Oracle RAC nodes as follows.

Although Oracle strongly discourages this practice and highly recommends the use of GNS or DNS resolution, some readers may not have access to a DNS. The instructions below include a workaround (OK, a total hack) to the nslookup binary that allows the Cluster Verification Utility to finish successfully during the Oracle grid infrastructure install. Please note that the workaround documented in this section is only for the sake of brevity and should not be considered for a production implementation.

First, rename the original nslookup binary to nslookup.original on both Oracle RAC nodes:

[root@racnode1 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original

[root@racnode2 ~]# mv /usr/bin/nslookup /usr/bin/nslookup.original

Next, create a new shell script on both Oracle RAC nodes named /usr/bin/nslookup as shown below, replacing 24.154.1.34 with your primary DNS, racnode-cluster-scan with your SCAN host name, and 192.168.1.187 with your SCAN IP address:

#!/bin/bash

HOSTNAME=${1}

if [[ $HOSTNAME = "racnode-cluster-scan" ]]; then
    echo "Server:         24.154.1.34"
    echo "Address:        24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name:   racnode-cluster-scan"
    echo "Address: 192.168.1.187"
else
    /usr/bin/nslookup.original $HOSTNAME
fi

Finally, make the new nslookup shell script executable:

[root@racnode1 ~]# chmod 755 /usr/bin/nslookup

[root@racnode2 ~]# chmod 755 /usr/bin/nslookup

Remember to perform these actions on both Oracle RAC nodes.


The new nslookup shell script simply echoes back your SCAN IP address whenever the CVU calls nslookup with your SCAN host name; otherwise, it calls the original nslookup binary.
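Given how the wrapper script is written, you can sanity-check it from the shell before re-running the CVU. Using the example values above, calling it with the SCAN host name should simply echo the canned response (any other host name falls through to the real binary):

[root@racnode1 ~]# nslookup racnode-cluster-scan
Server:         24.154.1.34
Address:        24.154.1.34#53
Non-authoritative answer:
Name:   racnode-cluster-scan
Address: 192.168.1.187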

The CVU will now pass during the Oracle grid infrastructure installation when it attempts to verify your SCAN:

[grid@racnode1 ~]$ cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...
  SCAN VIP name         Node          Running?      ListenerName  Port          Running?
  --------------------  ------------  ------------  ------------  ------------  ------------
  racnode-cluster-scan  racnode1      true          LISTENER      1521          true

Checking name resolution setup for "racnode-cluster-scan"...
  SCAN Name             IP Address                Status                    Comment
  --------------------  ------------------------  ------------------------  ----------
  racnode-cluster-scan  192.168.1.187             passed

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

===============================================================================

[grid@racnode2 ~]$ cluvfy comp scan -verbose

Verifying scan

Checking Single Client Access Name (SCAN)...
  SCAN VIP name         Node          Running?      ListenerName  Port          Running?
  --------------------  ------------  ------------  ------------  ------------  ------------
  racnode-cluster-scan  racnode1      true          LISTENER      1521          true

Checking name resolution setup for "racnode-cluster-scan"...
  SCAN Name             IP Address                Status                    Comment
  --------------------  ------------------------  ------------------------  ----------
  racnode-cluster-scan  192.168.1.187             passed

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

Confirm the RAC Node Name is Not Listed in Loopback Address

Ensure that the node name (racnode1 or racnode2) is not included for the loopback address in the /etc/hosts file. If the machine name is listed in the loopback address entry as below:

127.0.0.1 racnode1 localhost.localdomain localhost

it will need to be removed as shown below:

127.0.0.1 localhost.localdomain localhost

If the RAC node name is listed for the loopback address, you will receive the following error during the RAC installation:


ORA-00603: ORACLE server session terminated by fatal error

or

ORA-29702: error occurred in Cluster Group Service operation
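A quick way to audit this on each node is to grep the loopback entry directly. On a correctly configured node, only the localhost names should appear (illustrative output for this configuration):

[root@racnode1 ~]# grep "^127.0.0.1" /etc/hosts
127.0.0.1       localhost.localdomain localhost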

Openfiler - Logical Volumes Not Active on Boot

One issue that I have run into several times occurs when using a USB drive connected to the Openfiler server. When the Openfiler server is rebooted, the system is able to recognize the USB drive; however, it is not able to load the logical volumes and writes the following message to /var/log/messages - (also available through dmesg):

iSCSI Enterprise Target Software - version 0.4.14
iotype_init(91) register fileio
iotype_init(91) register blockio
iotype_init(91) register nullio
open_path(120) Can't open /dev/rac1/crs -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm1 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm2 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm3 -2
fileio_attach(268) -2
open_path(120) Can't open /dev/rac1/asm4 -2
fileio_attach(268) -2

Please note that I am not suggesting that this only occurs with USB drives connected to the Openfiler server. It may occur with other types of drives, however I have only seen it with USB drives!

If you do receive this error, you should first check the status of all logical volumes using the lvscan command from the Openfiler server:

# lvscan
  inactive          '/dev/rac1/crs' [2.00 GB] inherit
  inactive          '/dev/rac1/asm1' [115.94 GB] inherit
  inactive          '/dev/rac1/asm2' [115.94 GB] inherit
  inactive          '/dev/rac1/asm3' [115.94 GB] inherit
  inactive          '/dev/rac1/asm4' [115.94 GB] inherit

Notice that the status for each of the logical volumes is set to inactive - (the status for each logical volume on a working system would be set to ACTIVE).

I currently know of two methods to get Openfiler to automatically load the logical volumes on reboot, both of which are described below.

Method 1

One of the first steps is to shut down both of the Oracle RAC nodes in the cluster - (racnode1 and racnode2). Then, from the Openfiler server, manually set each of the logical volumes to ACTIVE - (note that this must be repeated after every reboot):

# lvchange -a y /dev/rac1/crs
# lvchange -a y /dev/rac1/asm1
# lvchange -a y /dev/rac1/asm2
# lvchange -a y /dev/rac1/asm3
# lvchange -a y /dev/rac1/asm4

Another method to set the status to active for all logical volumes is to use the Volume Group change command as follows:

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "rac1" using metadata type lvm2

# vgchange -ay
  5 logical volume(s) in volume group "rac1" now active

After setting each of the logical volumes to active, use the lvscan command again to verify the status:

# lvscan
  ACTIVE            '/dev/rac1/crs' [2.00 GB] inherit
  ACTIVE            '/dev/rac1/asm1' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm2' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm3' [115.94 GB] inherit
  ACTIVE            '/dev/rac1/asm4' [115.94 GB] inherit

As a final test, reboot the Openfiler server to ensure each of the logical volumes will be set to ACTIVE after the boot process. After you have verified that each of the logical volumes will be active on boot, check that the iSCSI target service is running:

# service iscsi-target status
ietd (pid 2668) is running...
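If the iSCSI target service were not configured to start on boot, it could be enabled with chkconfig. This is a sketch only; the service name iscsi-target matches the status command above, but verify the run levels on your own Openfiler installation:

# chkconfig iscsi-target on
# chkconfig --list iscsi-target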

Finally, restart each of the Oracle RAC nodes in the cluster - (racnode1 and racnode2).

Method 2

This method was kindly provided by Martin Jones. His workaround includes amending the /etc/rc.sysinit script to basically wait for the USB disk (/dev/sda in my example) to be detected. After making the changes to the /etc/rc.sysinit script (described below), verify the external drives are powered on and then reboot the Openfiler server.

The following is a small portion of the /etc/rc.sysinit script on the Openfiler server with the changes proposed by Martin - (delimited by the MJONES comment markers):

..............................................................

# LVM2 initialization, take 2
    if [ -c /dev/mapper/control ]; then
        if [ -x /sbin/multipath.static ]; then
            modprobe dm-multipath >/dev/null 2>&1
            /sbin/multipath.static -v 0
            if [ -x /sbin/kpartx ]; then
                /sbin/dmsetup ls --target multipath --exec "/sbin/kpartx -a"
            fi
        fi

        if [ -x /sbin/dmraid ]; then
            modprobe dm-mirror > /dev/null 2>&1
            /sbin/dmraid -i -a y
        fi

#-----
#----- MJONES - Customisation Start
#-----

        # Check if /dev/sda is ready
        while [ ! -e /dev/sda ]
        do
            echo "Device /dev/sda for first USB Drive is not yet ready."
            echo "Waiting..."
            sleep 5
        done
        echo "INFO - Device /dev/sda for first USB Drive is ready."

#-----
#----- MJONES - Customisation END
#-----

        if [ -x /sbin/lvm.static ]; then
            if /sbin/lvm.static vgscan > /dev/null 2>&1 ; then
                action $"Setting up Logical Volume Management:" /sbin/lvm.static vgscan --mknodes --ignorelockingfailure && /sbin/lvm.static vgchange -a y --ignorelockingfailure
            fi
        fi
    fi

# Clean up SELinux labels
if [ -n "$SELINUX" ]; then
    for file in /etc/mtab /etc/ld.so.cache ; do
        [ -r $file ] && restorecon $file >/dev/null 2>&1
    done
fi
..............................................................

Finally, restart each of the Oracle RAC nodes in the cluster - (racnode1 and racnode2).

Conclusion

Oracle RAC 11g release 2 allows the DBA to configure a clustered database solution with superior fault tolerance and load balancing. However, DBA's who want to become more familiar with the features and benefits of database clustering will find the costs of configuring even a small RAC cluster to be in the range of US$15,000 to US$20,000.

This article has hopefully given you an economical solution to setting up and configuring an inexpensive Oracle 11g release 2 RAC cluster using Red Hat Enterprise Linux (or CentOS) and iSCSI technology. The RAC solution presented in this article can be put together for around US$2,700 and will provide the DBA with a fully functional Oracle 11g release 2 RAC cluster.

While the hardware used for this guide is stable enough for educational purposes, it should never be considered for aproduction environment.

Acknowledgements

An article of this magnitude and complexity is generally not the work of one person alone. Although I was able to author and successfully demonstrate the validity of the components that make up this configuration, there are several other individuals that deserve credit in making this article a success.

First, I would like to thank Bane Radulovic from the Server BDE Team at Oracle. Bane not only introduced me to Openfiler, but shared with me his experience and knowledge of the product and how to best utilize it for Oracle RAC. His research and hard work made the task of configuring Openfiler seamless. Bane was also involved with hardware recommendations and testing.

A special thanks to K Gopalakrishnan for his assistance in delivering the Oracle RAC 11g Overview section of this article. In this section, much of the content regarding the history of Oracle RAC can be found in his very popular book Oracle Database 10g Real Application Clusters Handbook. This book comes highly recommended for both DBA's and Developers wanting to successfully implement Oracle RAC and fully understand how many of the advanced services like Cache Fusion and Global Resource Directory operate.

Lastly, I would like to express my appreciation to the following vendors for generously supplying the hardware for this article: Seagate, Avocent Corporation, and Intel.

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX, Linux, and Windows server environment. Jeff's other interests include mathematical encryption theory, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 17 years and maintains his own web site at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science.

Copyright (c) 1998-2011 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info is the copyright of Jeffrey M. Hunter and is protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at [email protected].

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on Saturday, 26-Feb-2011 12:19:26 EST

